Open Access
Social Influence, volume 20, issue 1

Values tensions and values tradeoffs in the development of healthcare artificial intelligence technology: a conceptual model of decisions to create trustworthy technology

Anna Tovmasyan 1, 2
Netta Weinstein 1, 2
Brent Mittelstadt 2
Publication type: Journal Article
Publication date: 2025-03-17
Journal: Social Influence
Scimago: Q2
WoS: Q3
SJR: 0.553
CiteScore: 1.5
Impact factor: 1.8
ISSN: 1553-4510, 1553-4529
Kenny M.E., Medvide M.B., Gordon P.
2024-07-19 · citations by CoLab: 1
Building on prior research documenting associations between youth purpose and academic, psychological and physical well-being, this study examined the contributions of workplace learning (WPL) to youth purpose and internal motivation among 281 youth of diverse racial and ethnic identities and economic status enrolled in two high school networks offering innovatively designed WPL. Sequential regression analyses revealed that the quality of WPL, defined by mentor support for training, learning opportunities, and youth autonomy, contributed positively to youth purpose and internal motivation, beyond the negative effects of perceived social and economic barriers. Findings are discussed through the perspectives of psychology of working, decent education, self-determination theory and career construction and suggest that WPL is a promising intervention for overcoming inequities in fostering youth talent and purpose, including personal goals, meaning and intentions for social contribution.
van der Wal R.C., Litzellachner L.F., Karremans J.C., Buiter N., Breukel J., Maio G.R.
2023-03-21 · citations by CoLab: 5
There are substantive theoretical questions about whether personal values affect romantic relationship functioning. The current research tested the association between personal values and romantic relationship quality while considering potential mediating mechanisms related to pro-relational attitudes, communal strength, intrinsic relationship motivation, and entitlement. Across five studies using different measures of value priorities, we found that the endorsement of self-transcendence values (i.e., benevolence, universalism) was related to higher romantic relationship quality. The findings provided support for the mediating roles of pro-relational attitudes, communal strength, and intrinsic relationship motivation. Finally, a dyadic analysis in our fifth study showed that self-transcendence values mostly influence a person’s own relationship quality but not that of their partner. These findings provide the first evidence that personal values are important variables in romantic relationship functioning while helping to map the mechanisms through which this role occurs.
Mittelstadt B.
2022-10-20 · citations by CoLab: 7
Artificial Intelligence (AI) systems are frequently thought of as opaque, meaning their performance or logic is thought to be inaccessible or incomprehensible to human observers. Models can consist of millions of features connected in a complex web of dependent behaviours. Conveying this internal state and its dependencies in a humanly comprehensible way is extremely challenging. Explaining the functionality and behaviour of AI systems in a meaningful and useful way to people designing, operating, regulating, or affected by their outputs is a complex technical, philosophical, and ethical project. Despite this complexity, principles citing ‘transparency’ or ‘interpretability’ are commonly found in ethical and regulatory frameworks addressing technology. This chapter provides an overview of these concepts and of methods designed to explain how AI works. After reviewing key concepts and terminology, two sets of methods are examined: (1) interpretability methods designed to explain and approximate AI functionality and behaviour; and (2) transparency frameworks meant to help assess and provide information about the development, governance, and potential impact of training datasets, models, and specific applications. These methods are analysed in the context of prior work on explanations in the philosophy of science. The chapter closes by introducing a framework of criteria to evaluate the quality and utility of methods in explainable AI (XAI) and to clarify the open challenges facing the field.
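One family of interpretability methods the chapter surveys approximates a black-box model with a simpler, human-readable one. The sketch below is a minimal illustration of that generic idea (a global surrogate tree fitted to a random forest's predictions, using scikit-learn); it is an assumed example for this listing, not a method the chapter itself prescribes.

```python
# Minimal sketch of a global surrogate model, one generic interpretability
# technique; illustrative only, not the chapter's own method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# "Black box": a model whose internal logic is hard to convey to humans.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained to mimic the black box's *predictions*,
# trading fidelity for human-readable structure.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box on this data.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

A low-fidelity surrogate signals that the printed rules should not be trusted as an explanation, which is one of the quality criteria such evaluation frameworks are meant to capture.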
Solanki P., Grundy J., Hussain W.
2022-07-19 · citations by CoLab: 56
Artificial intelligence (AI) offers much promise for improving healthcare. However, it runs the looming risk of causing individual and societal harms; for instance, exacerbating inequalities amongst minority groups, or enabling compromises in the confidentiality of patients’ sensitive data. As such, there is an expanding, unmet need for ensuring AI for healthcare is developed in concordance with human values and ethics. Augmenting “principle-based” guidance that highlights adherence to ethical ideals (without necessarily offering translation into actionable practices), we offer a solution-based framework for operationalising ethics in AI for healthcare. Our framework is built from a scoping review of existing ethical AI guidelines, frameworks, and technical solutions that address human values such as self-direction in healthcare. Our view spans the entire length of the AI lifecycle: data management, model development, deployment and monitoring. Our focus in this paper is to collate actionable solutions (whether technical or non-technical in nature) that enable and empower developers, in their daily practice, to ensure ethical practice in the broader picture. Our framework is intended to be adopted by AI developers, with recommendations that are accessible and driven by the existing literature. We endorse the recognised need for ‘ethical AI checklists’ co-designed with health AI practitioners, which could further operationalise the technical solutions we have collated. Since the risks to health and wellbeing are so large, we believe a proactive approach is necessary to ensure human values and ethics are appropriately respected in AI for healthcare.
Ghassemi M., Mohamed S.
npj Digital Medicine (Scimago Q1, WoS Q1, Open Access)
2022-04-22 · citations by CoLab: 14
Health care is a human process that generates data from human lives, as well as the care they receive. Machine learning has worked in health to bring new technology into this sociotechnical environment, using data to support a vision of healthier living for everyone. Interdisciplinary fields of research like machine learning for health bring different values and judgements together, requiring that those value choices be deliberate and measured. More than just abstract ideas, our values are the basis upon which we choose our research topics, set up research collaborations, execute our research methodologies, make assessments of scientific and technical correctness, proceed to product development, and finally operationalize deployments and describe policy. For machine learning to achieve its aims of supporting healthier living while minimizing harm, we believe that a deeper introspection of our field’s values and contentions is overdue. In this perspective, we highlight notable areas in need of attention within the field. We believe deliberate and informed introspection will lead our community to renewed opportunities for understanding disease, new partnerships with clinicians and patients, and allow us to better support people and communities to live healthier, dignified lives.
Liao T., Tang S., Shim Y.
2022-02-05 · citations by CoLab: 12
This study applies the theory of planned behavior (TPB) and self-determination theory (SDT) to predict the sports participation and exercise intentions of college students in Central China by considering the mediating roles of attitudes, subjective norms, and perceived behavioral control. Structural equation modeling (SEM) was used to analyze self-reported data from 294 college students (144 males and 150 females). The relationships between the research variables were tested with a mediation model and 5,000-sample bootstrapping using AMOS version 24. The results show that the direct effects of attitudes and perceived behavioral control on motor intention and motor participation are significant in the model. The satisfaction of the three psychological needs had a positive indirect effect on motor participation through attitudes; competence and autonomy had positive indirect effects on motor participation mediated through subjective norms; however, only competence had a positive indirect effect on motor participation mediated through perceived behavioral control. In conclusion, this research demonstrates the importance of meeting these three basic psychological needs when designing intervention measures to promote college students’ sports participation.
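The authors ran their mediation tests in AMOS; for readers unfamiliar with the generic procedure, the numpy sketch below illustrates a percentile-bootstrap test of an indirect (a × b) effect on synthetic data. The variable names and data are assumptions for illustration only, not the study's analysis.

```python
# Hedged sketch of the generic percentile-bootstrap test for an indirect
# (mediated) effect; synthetic data, not the authors' AMOS analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 294  # sample size borrowed from the abstract; the data here are synthetic

# Hypothetical paths: X (need satisfaction) -> M (attitude) -> Y (participation)
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)
Y = 0.4 * M + 0.1 * X + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                  # a-path: M regressed on X
    design = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    b = coef[2]                                 # b-path: Y on M, controlling X
    return a * b

# 5,000 bootstrap resamples, matching the abstract's bootstrap count.
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(X[idx], M[idx], Y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 => mediation
```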
Richardson J.P., Smith C., Curtis S., Watson S., Zhu X., Barry B., Sharp R.R.
npj Digital Medicine (Scimago Q1, WoS Q1, Open Access)
2021-09-21 · citations by CoLab: 175
While there is significant enthusiasm in the medical community about the use of artificial intelligence (AI) technologies in healthcare, few research studies have sought to assess patient perspectives on these technologies. We conducted 15 focus groups examining patient views of diverse applications of AI in healthcare. Our results indicate that patients have multiple concerns, including concerns related to the safety of AI, threats to patient choice, potential increases in healthcare costs, data-source bias, and data security. We also found that patient acceptance of AI is contingent on mitigating these possible harms. Our results highlight an array of patient concerns that may limit enthusiasm for applications of AI in healthcare. Proactively addressing these concerns is critical for the flourishing of ethical innovation and ensuring the long-term success of AI applications in healthcare.
Hanel P.H., Foad C., Maio G.R.
2021-08-31 · citations by CoLab: 9
Attitudes are people’s likes and dislikes toward anything and anyone that can be evaluated. This can be something as concrete as a mosquito that is tormenting you during the night or as abstract and broad as capitalism or communism. In contrast, human values have been defined as abstract ideals and guiding principles in one’s life and are considered as abstract as well as trans-situational. Thus, while both attitudes and values are important constructs in psychology that are necessarily related, there are also a range of differences between the two. Attitudes are specific judgments toward an object, while values are abstract and trans-situational; attitudes can be positive and negative, while values are mainly positive; and attitudes are less relevant for one’s self-concept than values. A range of studies have investigated how values and attitudes toward specific topics are associated. The rationale for most studies is that people’s values guide whether they like certain people, an object, or an idea. For example, the more people value universalism (e.g., equality, broad-mindedness), the more they support equal rights for groups that are typically disadvantaged. However, these associations can also be complex. If people do not consider an attitude to be a relevant expression of a value, it is less likely that the value predicts this attitude. Further, it can also matter for people’s attitudes whether their values match those of the people in their country, are similar to other social groups (e.g., immigrants), and whether they think their own group’s values are similar or dissimilar to the values of other groups. In sum, the literature shows that the links between values and attitudes are both entrenched and malleable and that these interrelations have many important consequences for understanding social-political divisions and well-being.
Cadario R., Longoni C., Morewedge C.K.
Nature Human Behaviour (Scimago Q1, WoS Q1)
2021-06-28 · citations by CoLab: 158
Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven by both the subjective difficulty of understanding algorithms (the perception that they are a ‘black box’) and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1–3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013). Cadario et al. identify potential reasons underlying the resistance to use medical artificial intelligence and test interventions to overcome this resistance.
Wachter S., Mittelstadt B., Russell C.
2021-03-04 · citations by CoLab: 45
Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality. In this paper we make three contributions. First, we assess the compatibility of fairness metrics used in machine learning against the aims and purpose of EU non-discrimination law. We show that the fundamental aim of the law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to ‘level the playing field’ and achieve substantive rather than merely formal equality. Based on this, we then propose a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of non-discrimination law. Specifically, we distinguish between ‘bias preserving’ and ‘bias transforming’ fairness metrics. Our classification system is intended to bridge the gap between non-discrimination law and decisions around how to measure fairness in machine learning and AI in practice. Finally, we show that the legal need for justification in cases of indirect discrimination can impose additional obligations on developers, deployers, and users that choose to use bias preserving fairness metrics when making decisions about individuals because they can give rise to prima facie discrimination. To achieve substantive equality in practice, and thus meet the aims of the law, we instead recommend using bias transforming metrics. To conclude, we provide concrete recommendations including a user-friendly checklist for choosing the most appropriate fairness metric for uses of machine learning and AI under EU non-discrimination law.
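To make the paper's distinction concrete: metrics that condition on observed (possibly biased) labels carry historical bias forward, while label-independent metrics can push outcomes toward parity. The toy numpy sketch below computes one example of each; demographic parity and the true-positive-rate gap are standard illustrations assumed here, and the paper's full classification scheme is richer than this contrast.

```python
# Illustrative contrast between a label-independent fairness metric
# (demographic parity) and a label-conditioned one (equalized-odds-style
# TPR gap). Toy data, not the authors' code or classification.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                               # protected attribute
y_true = rng.binomial(1, np.where(group == 1, 0.3, 0.5))    # historically biased labels
y_pred = rng.binomial(1, np.where(group == 1, 0.35, 0.5))   # model predictions

def demographic_parity_gap(pred, g):
    # Ignores labels entirely: compares positive prediction rates per group.
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

def tpr_gap(pred, true, g):
    # Conditions on y_true, so bias baked into the labels is carried forward.
    tpr = lambda gg: pred[(g == gg) & (true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"TPR gap (equalized-odds style): {tpr_gap(y_pred, y_true, group):.3f}")
```

Here a model can show a small TPR gap while leaving a large demographic parity gap intact, which is the practical sense in which label-conditioned metrics can preserve pre-existing inequality.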
Walmsley J.
AI and Society (Scimago Q1, WoS Q2)
2020-09-08 · citations by CoLab: 51
Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency (concerning the relationship between an AI system, its developers, users and the media) may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.
Asan O., Bayrak A.E., Choudhury A.
2020-06-19 · citations by CoLab: 404
Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable—though imperfect—clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians’ use and adoption of AI. Trust is a psychological mechanism to deal with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust on AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.
Shinners L., Aggar C., Grace S., Smith S.
Health Informatics Journal (Scimago Q2, WoS Q3, Open Access)
2019-09-30 · citations by CoLab: 74
Background: The integration of artificial intelligence (AI) into our digital healthcare system is seen as a significant strategy to contain Australia’s rising healthcare costs, support clinical decision making, manage chronic disease burden and support our ageing population. With the increasing roll-out of ‘digital hospitals’, electronic medical records, new data capture and analysis technologies, as well as a digitally enabled health consumer, the Australian healthcare workforce is required to become digitally literate to manage the significant changes in the healthcare landscape. To ensure that new innovations such as AI are inclusive of clinicians, an understanding of how the technology will impact the healthcare professions is imperative.
Method: In order to explore the complex phenomenon of healthcare professionals’ understanding and experiences of AI use in the delivery of healthcare, an integrative review inclusive of quantitative and qualitative studies was undertaken in June 2018.
Results: One study met all inclusion criteria. This was an observational study which used a questionnaire to measure healthcare professionals’ intrinsic motivation in adoption behaviour when using an artificially intelligent medical diagnosis support system (AIMDSS).
Discussion: The study found that healthcare professionals were less likely to use AI in the delivery of healthcare if they did not trust the technology or understand how it was used to improve patient outcomes or the delivery of care specific to the healthcare setting. The perception that AI would replace them in the healthcare setting was not evident. This may be because AI is not yet at the forefront of technology use in the healthcare setting. More research is needed to examine the experiences and perceptions of healthcare professionals using AI in the delivery of healthcare.
