
Dataset Information


Mitigating the impact of biased artificial intelligence in emergency decision-making.


ABSTRACT:

Background

Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm discriminatory algorithms can cause in high-stakes settings such as medicine.

Methods

In this study, we experimentally evaluated the impact biased AI recommendations have on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or descriptive flags.

Results

Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making.

Conclusions

Our work demonstrates the practical danger of using biased models in health contexts, and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions.

SUBMITTER: Adam H 

PROVIDER: S-EPMC9681767 | biostudies-literature | 2022 Nov

REPOSITORIES: biostudies-literature


Publications

Mitigating the impact of biased artificial intelligence in emergency decision-making.

Hammaad Adam, Aparna Balagopalan, Emily Alsentzer, Fotini Christia, Marzyeh Ghassemi

Communications Medicine, 21 Nov 2022



Similar Datasets

| S-EPMC7286802 | biostudies-literature
| S-EPMC6710912 | biostudies-literature
| S-EPMC10251321 | biostudies-literature
| S-EPMC9399841 | biostudies-literature
| S-EPMC6332801 | biostudies-literature
| S-EPMC10041097 | biostudies-literature
| S-EPMC10438833 | biostudies-literature
| S-EPMC9860095 | biostudies-literature
| S-EPMC7531984 | biostudies-literature
| S-EPMC11324138 | biostudies-literature