
Dataset Information


Leveraging explainable artificial intelligence to optimize clinical decision support.


ABSTRACT:

Objective

To develop and evaluate a data-driven process to generate suggestions for improving alert criteria using explainable artificial intelligence (XAI) approaches.

Methods

We extracted data on alerts generated from January 1, 2019 to December 31, 2020, at Vanderbilt University Medical Center. We developed machine learning models to predict user responses to alerts, and applied XAI techniques to generate both global and local explanations. We evaluated the generated suggestions by comparing them with the alerts' historical change logs and through stakeholder interviews. Suggestions that either matched (or partially matched) changes already made to an alert, or that were judged clinically correct, were classified as helpful.

Results

The final dataset included 2 991 823 firings with 2689 features. Among the 5 machine learning models, the LightGBM model achieved the highest area under the ROC curve: 0.919 [0.918, 0.920]. We identified 96 helpful suggestions, and a total of 278 807 firings (9.3%) could have been eliminated. Some of the suggestions also revealed workflow and education issues.

Conclusion

We developed a data-driven process to generate suggestions for improving alert criteria using XAI techniques. Our approach could identify improvements regarding clinical decision support (CDS) that might be overlooked or delayed in manual reviews. It also unveils a secondary purpose for the XAI: to improve quality by discovering scenarios where CDS alerts are not accepted due to workflow, education, or staffing issues.

SUBMITTER: Liu S 

PROVIDER: S-EPMC10990514 | biostudies-literature | 2024 Apr

REPOSITORIES: biostudies-literature


Publications

Leveraging explainable artificial intelligence to optimize clinical decision support.

Siru Liu, Allison B McCoy, Josh F Peterson, Thomas A Lasko, Dean F Sittig, Scott D Nelson, Jennifer Andrews, Lorraine Patterson, Cheryl M Cobb, David Mulherin, Colleen T Morton, Adam Wright

Journal of the American Medical Informatics Association (JAMIA), 2024 Apr, Issue 4


Similar Datasets

| S-EPMC6710912 | biostudies-literature
| S-EPMC11443205 | biostudies-literature
| S-EPMC8328289 | biostudies-literature
| S-EPMC11563427 | biostudies-literature
| S-EPMC8209524 | biostudies-literature
| S-EPMC9931364 | biostudies-literature
| S-EPMC8294941 | biostudies-literature
| S-EPMC7308977 | biostudies-literature
| S-EPMC6000484 | biostudies-literature
| S-EPMC10501571 | biostudies-literature