<HashMap><database>biostudies-literature</database><scores/><additional><submitter>Aaron S</submitter><funding>National Library of Medicine</funding><funding>NLM NIH HHS</funding><funding>National Institutes of Health</funding><pagination>37-43</pagination><full_dataset_link>https://www.ebi.ac.uk/biostudies/studies/S-EPMC6308015</full_dataset_link><repository>biostudies-literature</repository><omics_type>Unknown</omics_type><volume>26(1)</volume><pubmed_abstract>&lt;h4>Background&lt;/h4>Rule-based clinical decision support alerts are known to malfunction, but tools for discovering malfunctions are limited.&lt;h4>Objective&lt;/h4>Investigate whether user override comments can be used to discover malfunctions.&lt;h4>Methods&lt;/h4>We manually classified all rules in our database with at least 10 override comments into 3 categories based on a sample of override comments: "broken," "not broken, but could be improved," and "not broken." We used 3 methods (frequency of comments, cranky word list heuristic, and a Naïve Bayes classifier trained on a sample of comments) to automatically rank rules based on features of their override comments. We evaluated each ranking using the manual classification as truth.&lt;h4>Results&lt;/h4>Of the rules investigated, 62 were broken, 13 could be improved, and the remaining 45 were not broken. Frequency of comments performed worse than a random ranking, with precision at 20 of 8 and AUC = 0.487. The cranky comments heuristic performed better with precision at 20 of 16 and AUC = 0.723. The Naïve Bayes classifier had precision at 20 of 17 and AUC = 0.738.&lt;h4>Discussion&lt;/h4>Override comments uncovered malfunctions in 26% of all rules active in our system. This is a lower bound on total malfunctions and much higher than expected.
Even for low-resource organizations, reviewing comments identified by the cranky word list heuristic may be an effective and feasible way of finding broken alerts.&lt;h4>Conclusion&lt;/h4>Override comments are a rich data source for finding alerts that are broken or could be improved. If possible, we recommend monitoring all override comments on a regular basis.</pubmed_abstract><journal>Journal of the American Medical Informatics Association : JAMIA</journal><pubmed_title>Cranky comments: detecting clinical decision support malfunctions through free-text override reasons.</pubmed_title><pmcid>PMC6308015</pmcid><funding_grant_id>R01LM011966</funding_grant_id><funding_grant_id>R01 LM011966</funding_grant_id><pubmed_authors>Aaron S</pubmed_authors><pubmed_authors>Wright A</pubmed_authors><pubmed_authors>McEvoy DS</pubmed_authors><pubmed_authors>Ray S</pubmed_authors><pubmed_authors>Hickman TT</pubmed_authors></additional><is_claimable>false</is_claimable><name>Cranky comments: detecting clinical decision support malfunctions through free-text override reasons.</name><description>&lt;h4>Background&lt;/h4>Rule-based clinical decision support alerts are known to malfunction, but tools for discovering malfunctions are limited.&lt;h4>Objective&lt;/h4>Investigate whether user override comments can be used to discover malfunctions.&lt;h4>Methods&lt;/h4>We manually classified all rules in our database with at least 10 override comments into 3 categories based on a sample of override comments: "broken," "not broken, but could be improved," and "not broken." We used 3 methods (frequency of comments, cranky word list heuristic, and a Naïve Bayes classifier trained on a sample of comments) to automatically rank rules based on features of their override comments. We evaluated each ranking using the manual classification as truth.&lt;h4>Results&lt;/h4>Of the rules investigated, 62 were broken, 13 could be improved, and the remaining 45 were not broken.
Frequency of comments performed worse than a random ranking, with precision at 20 of 8 and AUC = 0.487. The cranky comments heuristic performed better with precision at 20 of 16 and AUC = 0.723. The Naïve Bayes classifier had precision at 20 of 17 and AUC = 0.738.&lt;h4>Discussion&lt;/h4>Override comments uncovered malfunctions in 26% of all rules active in our system. This is a lower bound on total malfunctions and much higher than expected. Even for low-resource organizations, reviewing comments identified by the cranky word list heuristic may be an effective and feasible way of finding broken alerts.&lt;h4>Conclusion&lt;/h4>Override comments are a rich data source for finding alerts that are broken or could be improved. If possible, we recommend monitoring all override comments on a regular basis.</description><dates><release>2019-01-01T00:00:00Z</release><publication>2019 Jan</publication><modification>2024-12-04T04:23:40.242Z</modification><creation>2019-03-26T22:29:43Z</creation></dates><accession>S-EPMC6308015</accession><cross_references><pubmed>30590557</pubmed><doi>10.1093/jamia/ocy139</doi></cross_references></HashMap>