Project description: Importance: Surgeons make complex, high-stakes decisions under time constraints and uncertainty, with significant effects on patient outcomes. This review describes the weaknesses of traditional clinical decision-support systems and proposes that artificial intelligence be used to augment surgical decision-making. Observations: Surgical decision-making is dominated by hypothetico-deductive reasoning, individual judgment, and heuristics. These factors can lead to bias, error, and preventable harm. Traditional predictive analytics and clinical decision-support systems are intended to augment surgical decision-making, but their clinical utility is compromised by time-consuming manual data management and suboptimal accuracy. These challenges can be overcome by automated artificial intelligence models fed by livestreaming electronic health record data combined with mobile device outputs. This approach would require data standardization, advances in model interpretability, careful implementation and monitoring, attention to ethical challenges involving algorithm bias and accountability for errors, and preservation of bedside assessment and human intuition in the decision-making process. Conclusions and Relevance: Integration of artificial intelligence with surgical decision-making has the potential to transform care by augmenting the decision to operate, the informed consent process, identification and mitigation of modifiable risk factors, decisions regarding postoperative management, and shared decisions regarding resource use.
Project description: Acute kidney injury (AKI) is a common complication of hospitalization that substantially worsens patients' short-term and long-term outcomes. Current guidelines define and stage AKI using serum creatinine level and urine output rate. However, because these are neither sensitive nor specific markers of AKI, clinicians find it difficult to predict the occurrence of AKI and to prescribe timely treatment. Advances in computing technology have led to the recent use of machine learning and artificial intelligence in AKI prediction; recent research has reported that machine-learning models built on electronic health record (EHR) data can achieve an AUROC above 0.80, and in some studies as high as 0.93. Our review begins with the background and history of the definition of AKI, then appraises the evolution of AKI risk factors and prediction models. Finally, we summarize the current evidence regarding the application of e-alert systems and machine-learning models in AKI prediction.
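To make the modelling approach concrete, the following is a minimal sketch of the kind of EHR-based AKI risk model these studies describe: a gradient-boosted classifier trained on tabular features and evaluated by AUROC. The feature set, the synthetic data, and the scikit-learn model choice are illustrative assumptions, not details taken from any cited study.

```python
# Minimal sketch of an EHR-based AKI risk model evaluated by AUROC.
# Features, synthetic outcome, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical EHR-derived features: age, baseline creatinine, urine output,
# mean arterial pressure, and nephrotoxic drug exposure (binary).
X = np.column_stack([
    rng.normal(65, 15, n),     # age (years)
    rng.normal(1.0, 0.3, n),   # baseline serum creatinine (mg/dL)
    rng.normal(0.8, 0.3, n),   # urine output (mL/kg/h)
    rng.normal(75, 10, n),     # mean arterial pressure (mmHg)
    rng.integers(0, 2, n),     # nephrotoxic drug exposure
])

# Synthetic outcome loosely tied to creatinine and urine output, for illustration only.
logit = 2.5 * (X[:, 1] - 1.0) - 2.0 * (X[:, 2] - 0.8) + 0.8 * X[:, 4] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```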
Project description: Background: Artificial intelligence (AI) has shown promising results in various fields of medicine. It has the potential to facilitate shared decision making (SDM). However, there is no comprehensive mapping of how AI may be used for SDM. Objective: We aimed to identify and evaluate published studies that have tested or implemented AI to facilitate SDM. Methods: We performed a scoping review informed by the methodological framework proposed by Levac et al, modifications to the original Arksey and O'Malley framework of a scoping review, and the Joanna Briggs Institute scoping review framework. We reported our results based on the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) reporting guideline. At the identification stage, an information specialist performed a comprehensive search of 6 electronic databases from their inception to May 2021. The inclusion criteria were: all populations; any AI intervention used to facilitate SDM (interventions in which AI was not used at the decision-making point of SDM were excluded); any outcome related to patients, health care providers, or health care systems; studies in any health care setting; studies published in the English language; and all study types. Two reviewers independently performed the study selection process and extracted data. Any disagreements were resolved by a third reviewer. A descriptive analysis was performed. Results: The search process yielded 1445 records. After removing duplicates, 894 documents were screened, and 6 peer-reviewed publications met our inclusion criteria. Of these, 2 were conducted in North America, 2 in Europe, 1 in Australia, and 1 in Asia. Most articles were published after 2017. Three articles focused on primary care and 3 on secondary care. All studies used machine learning methods. Three articles included health care providers in the validation stage of the AI intervention, and 1 article included both health care providers and patients in clinical validation, but none included health care providers or patients in the design and development of the AI intervention. All used AI to support SDM by providing clinical recommendations or predictions. Conclusions: Evidence of the use of AI in SDM is in its infancy. We found AI supporting SDM in similar ways across the included articles. We observed a lack of emphasis on patients' values and preferences, as well as poor reporting of AI interventions, resulting in a lack of clarity about several aspects of the interventions. Little effort was made to address the explainability of AI interventions or to include end users in their design and development. Further efforts are required to strengthen and standardize the use of AI in different steps of SDM and to evaluate its impact on various decisions, populations, and settings.
Project description: Artificial intelligence (AI) based on convolutional neural networks (CNNs) has great potential to enhance medical workflows and improve health care quality. Of particular interest is the practical implementation of such AI-based software as a cloud-based tool for telemedicine, the practice of providing medical care from a distance through electronic interfaces. Methods: In this study, we used a dataset of 35,900 labeled optical coherence tomography (OCT) images obtained from age-related macular degeneration (AMD) patients to train three types of CNNs to perform AMD diagnosis. Results: Here, we present an AI- and cloud-based telemedicine interaction tool for the diagnosis and proposed treatment of AMD. Through a deep learning process based on the analysis of preprocessed OCT imaging data, our AI-based system achieved the same image discrimination rate as retinal specialists in our hospital. The AI platform's detection accuracy was generally higher than 90% and was significantly superior (p < 0.001) to that of medical students (69.4% and 68.9%) and equal (p = 0.99) to that of retinal specialists (92.73% and 91.90%). Furthermore, it provided appropriate treatment recommendations comparable to those of retinal specialists. Conclusions: We therefore developed a cloud computing website based on this AI platform, available at https://www.ym.edu.tw/~AI-OCT/. Patients can upload their OCT images to the website to verify whether they have AMD and require treatment. An AI-based cloud service of this kind represents a practical solution for medical imaging diagnostics and telemedicine.
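As a rough illustration of the kind of CNN classifier trained in such work, the following PyTorch sketch defines a small convolutional network for single-channel, OCT-like images and runs one training step on random tensors. The architecture, image size, and two-class setup are assumptions for illustration; the study's actual networks and preprocessing are not reproduced here.

```python
# Minimal sketch of a CNN classifier for grayscale OCT-like images.
# Architecture and input size are illustrative assumptions, not the study's models.
import torch
import torch.nn as nn

class SmallOCTNet(nn.Module):
    def __init__(self, num_classes: int = 2):  # e.g., AMD vs. normal (assumed labels)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One toy training step on random tensors standing in for preprocessed OCT scans.
model = SmallOCTNet()
images = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
print("toy loss:", loss.item())
```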
Project description: Background: The many improvements in cancer therapies have led to an increased number of survivors, which comes with a greater risk of subsequent cardiovascular disease. Identifying effective management strategies that can mitigate this risk of cardiovascular complications is vital. Therefore, developing computer-driven, personalized clinical decision aid interventions that can provide early detection of patients at risk, stratify that risk, and recommend specific cardio-oncology management guidelines and expert consensus recommendations is critically important. Objectives: To assess the feasibility, acceptability, and utility of an artificial intelligence (AI)-powered clinical decision aid tool in shared decision making between the cancer survivor and the cardiologist regarding prevention of cardiovascular disease. Design: This is a single-center, double-arm, open-label, randomized interventional feasibility study. Our cardio-oncology cohort of >4000 individuals from our Clinical Research Data Warehouse will be queried to identify at least 200 adult cancer survivors who meet the eligibility criteria. Study participants will be randomized into either the Clinical Decision Aid Group (patients use the clinical decision aid in addition to current practice) or the Control Group (current practice). The primary endpoint of this study is to assess, for each patient encounter, whether the cardiovascular medications and imaging pursued were consistent with current medical society recommendations. Additionally, perceptions of using the clinical decision tool will be evaluated based on patient and physician feedback through surveys and focus groups. This trial will determine whether a clinical decision aid tool improves the alignment of cancer survivors' medication use and imaging surveillance with current medical guidelines. Trial registration: ClinicalTrials.gov identifier: NCT05377320.
Project description: Background: Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm that discriminatory algorithms can cause in high-stakes settings such as medicine. Methods: In this study, we experimentally evaluated the impact of biased AI recommendations on emergency decisions, in which participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or as descriptive flags. Results: Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making. Conclusions: Our work demonstrates the practical danger of using biased models in health contexts and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions.
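One way to quantify the kind of effect described above is to compare the rate of police-call decisions across AI-advice conditions and vignette groups. The pandas sketch below shows that comparison; the column names and toy rows are assumptions, not the study's actual data or analysis.

```python
# Illustrative sketch (not the study's actual analysis): compare the proportion
# of police-call decisions by AI-advice condition and by vignette group.
# Column names and the toy rows are assumptions.
import pandas as pd

df = pd.DataFrame({
    "advice": ["none", "biased_prescriptive", "biased_prescriptive", "descriptive_flag"],
    "vignette_group": ["African-American", "African-American", "Control", "African-American"],
    "called_police": [0, 1, 0, 0],
})

# Proportion of police calls per advice condition and vignette group.
rates = (
    df.groupby(["advice", "vignette_group"])["called_police"]
      .mean()
      .rename("police_call_rate")
)
print(rates)
```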
Project description: How will superhuman artificial intelligence (AI) affect human decision-making? And what will be the mechanisms behind this effect? We address these questions in a domain where AI already exceeds human performance, analyzing more than 5.8 million move decisions made by professional Go players over the past 71 years (1950 to 2021). To address the first question, we use a superhuman AI program to estimate the quality of human decisions across time, generating 58 billion counterfactual game patterns and comparing the win rates of actual human decisions with those of counterfactual AI decisions. We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players' strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI. Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.
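The counterfactual comparison can be made concrete with a small sketch: for each position, take the engine's estimated win rate after the human's actual move and after the engine's preferred move, and average the difference. The engine_win_rate callable and the toy data below are placeholders, not the study's actual engine or its exact measure of decision quality.

```python
# Sketch of a win-rate-based decision-quality measure. The engine is a stand-in
# assumption for a superhuman Go program; toy win rates are hard-coded below.
from typing import Callable, Sequence

def decision_quality(
    positions: Sequence[str],
    human_moves: Sequence[str],
    best_moves: Sequence[str],
    engine_win_rate: Callable[[str, str], float],
) -> float:
    """Mean win-rate loss of human moves relative to the engine's preferred moves."""
    losses = [
        engine_win_rate(pos, best) - engine_win_rate(pos, played)
        for pos, played, best in zip(positions, human_moves, best_moves)
    ]
    return sum(losses) / len(losses)

# Toy usage with a mock engine: win rates for one position and two candidate moves.
mock_table = {("p1", "D4"): 0.52, ("p1", "Q16"): 0.55}
quality_gap = decision_quality(
    positions=["p1"], human_moves=["D4"], best_moves=["Q16"],
    engine_win_rate=lambda pos, move: mock_table[(pos, move)],
)
print("mean win-rate loss:", quality_gap)  # approximately 0.03
```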
Project description: Thinking about God promotes greater acceptance of artificial intelligence (AI)-based recommendations. Eight preregistered experiments (n = 2,462) reveal that when God is salient, people are more willing to consider AI-based recommendations than when God is not salient. Studies 1 and 2a to 2d demonstrate, across a wide variety of contexts from choosing entertainment and food to mutual funds and dental procedures, that God salience reduces reliance on human recommenders and heightens willingness to consider AI recommendations. Studies 3 and 4 demonstrate that the reduced reliance on humans is driven by a heightened feeling of smallness when God is salient, followed by a recognition of human fallibility. Study 5 addresses the similarity in mysteriousness between God and AI as an alternative, but unsupported, explanation. Finally, study 6 (n = 53,563) corroborates the experimental results with data from 21 countries on the usage of robo-advisors in financial decision-making.