
dc.contributor.author	Adam, Hammaad
dc.contributor.author	Balagopalan, Aparna
dc.contributor.author	Alsentzer, Emily
dc.contributor.author	Christia, Fotini
dc.contributor.author	Ghassemi, Marzyeh
dc.date.accessioned	2023-02-07T13:12:16Z
dc.date.available	2023-02-07T13:12:16Z
dc.date.issued	2022-11-21
dc.identifier.uri	https://hdl.handle.net/1721.1/147922
dc.description.abstract	Background: Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm discriminatory algorithms can cause in high-stakes settings such as medicine. Methods: In this study, we experimentally evaluated the impact biased AI recommendations have on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or descriptive flags. Results: Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making. Conclusions: Our work demonstrates the practical danger of using biased models in health contexts, and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions.	en_US
dc.language.iso	en
dc.publisher	Springer Science and Business Media LLC	en_US
dc.relation.isversionof	10.1038/s43856-022-00214-4	en_US
dc.rights	Creative Commons Attribution 4.0 International license	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Nature	en_US
dc.title	Mitigating the impact of biased artificial intelligence in emergency decision-making	en_US
dc.type	Article	en_US
dc.identifier.citation	Adam, Hammaad, Balagopalan, Aparna, Alsentzer, Emily, Christia, Fotini and Ghassemi, Marzyeh. 2022. "Mitigating the impact of biased artificial intelligence in emergency decision-making." Communications Medicine, 2 (1).
dc.contributor.department	Massachusetts Institute of Technology. Institute for Data, Systems, and Society
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department	Harvard University--MIT Division of Health Sciences and Technology
dc.contributor.department	Massachusetts Institute of Technology. Institute for Medical Engineering & Science
dc.contributor.department	Massachusetts Institute of Technology. Sociotechnical Systems Research Center
dc.contributor.department	Massachusetts Institute of Technology. Department of Political Science
dc.relation.journal	Communications Medicine	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2023-02-07T13:07:25Z
dspace.orderedauthors	Adam, H; Balagopalan, A; Alsentzer, E; Christia, F; Ghassemi, M	en_US
dspace.date.submission	2023-02-07T13:07:27Z
mit.journal.volume	2	en_US
mit.journal.issue	1	en_US
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US
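
The dc.relation.isversionof field above holds the article's DOI. As a convenience, the short Python sketch below shows one way to resolve that DOI into a formatted citation using doi.org content negotiation (a standard Crossref feature); the script itself is illustrative and not part of this record.

    # Illustrative sketch: resolve the DOI from dc.relation.isversionof into a
    # formatted citation via doi.org content negotiation (supported for
    # Crossref-registered DOIs such as this one).
    import urllib.request

    DOI = "10.1038/s43856-022-00214-4"  # value of dc.relation.isversionof above

    req = urllib.request.Request(
        f"https://doi.org/{DOI}",
        headers={"Accept": "text/x-bibliography; style=apa"},  # ask for a rendered citation
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))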

