Mitigating the impact of biased artificial intelligence in emergency decision-making
Author(s)
Adam, Hammaad; Balagopalan, Aparna; Alsentzer, Emily; Christia, Fotini; Ghassemi, Marzyeh
Publisher with Creative Commons License
Creative Commons Attribution
Terms of use
Abstract
Background
Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm discriminatory algorithms can cause in high-stakes settings such as medicine.
Methods
In this study, we experimentally evaluated the impact of biased AI recommendations on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or descriptive flags.
Results
Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making.
Conclusions
Our work demonstrates the practical danger of using biased models in health contexts and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions.
Date issued
2022-11-21
Department
Massachusetts Institute of Technology. Institute for Data, Systems, and Society; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Harvard University--MIT Division of Health Sciences and Technology; Massachusetts Institute of Technology. Institute for Medical Engineering & Science; Massachusetts Institute of Technology. Sociotechnical Systems Research Center; Massachusetts Institute of Technology. Department of Political Science
Journal
Communications Medicine
Publisher
Springer Science and Business Media LLC
Citation
Adam, Hammaad, Balagopalan, Aparna, Alsentzer, Emily, Christia, Fotini and Ghassemi, Marzyeh. 2022. "Mitigating the impact of biased artificial intelligence in emergency decision-making." Communications Medicine, 2 (1).
Version: Final published version