Project description:Rationale: Deep learning (DL) has demonstrated remarkable performance in diagnostic imaging across various diseases and modalities and therefore has high potential for use as a clinical tool. However, these algorithms are rarely deployed in clinical practice, because their underlying black-box mechanism leaves them lacking transparency and trust. For successful deployment, explainable artificial intelligence (XAI) could be introduced to close the gap between medical professionals and DL algorithms. In this literature review, XAI methods available for magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET) imaging are discussed and future suggestions are made. Methods: PubMed, Embase.com, and Clarivate Analytics/Web of Science Core Collection were screened. Articles were considered eligible for inclusion if XAI was used (and well described) to explain the behavior of a DL model used in MR, CT, or PET imaging. Results: A total of 75 articles were included, of which 54 described post hoc XAI methods, 17 described ad hoc XAI methods, and 4 described both. Major variations in performance are seen between the methods. Overall, post hoc XAI lacks the ability to provide class-discriminative and target-specific explanations. Ad hoc XAI appears to address this because of its intrinsic ability to explain. However, quality control of XAI methods is rarely applied, which makes systematic comparison between the methods difficult. Conclusion: There is currently no clear consensus on how XAI should be deployed to close the gap between medical professionals and DL algorithms for clinical implementation. We advocate systematic technical and clinical quality assessment of XAI methods. Also, to ensure end-to-end unbiased and safe integration of XAI into the clinical workflow, (anatomical) data minimization and quality control methods should be included.
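To make the post hoc/ad hoc distinction concrete, the sketch below computes a typical post hoc saliency method, Grad-CAM, by hand with PyTorch forward/backward hooks. The resnet18 backbone and the random input are placeholders, not any model from the reviewed articles.

```python
# Minimal post hoc Grad-CAM sketch (PyTorch); the CNN and input are placeholders.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # stand-in for a trained MR/CT/PET classifier
model.eval()

activations, gradients = {}, {}
layer = model.layer4  # last convolutional block

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # dummy image
scores = model(x)
target_class = scores.argmax(dim=1).item()
scores[0, target_class].backward()

# Grad-CAM: weight each feature map by its average gradient, then ReLU and upsample.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224); in practice overlaid on the input image
```

Because the heatmap is derived after training from gradients alone, it illustrates why such post hoc explanations can struggle to be class-discriminative, as noted above.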
Project description:Introduction: Research in consumer neuroscience has identified computational methods, particularly artificial intelligence (AI) and machine learning, as a significant frontier for advancement. Previously, we used functional magnetic resonance imaging (fMRI) and artificial neural networks (ANNs) to model brain processes related to brand preferences in a paradigm free of motor actions. In the current study, we revisit these data, introducing recent advances in explainable artificial intelligence (xAI) to gain insights into this domain. By integrating fMRI data analysis, machine learning, and xAI, our study aims to identify functional brain networks that support brand perception and, ultimately, brain networks that disentangle preferred from indifferent brands, focusing on the early processing stages. Methods: We applied independent component analysis (ICA) to overcome the high dimensionality expected of fMRI data, which raises hurdles for AI applications, and extracted pertinent features from the returned ICs. An ANN is then trained on these data, followed by pruning and retraining. We then apply explanation techniques based on path-weights and Shapley values to make the network more transparent, explainable, and interpretable, and to obtain insights into the underlying brain processes. Results: The fully connected ANN model obtained an accuracy of 54.6%, which dropped to 50.4% after pruning. However, retraining allowed it to surpass the fully connected network, achieving an accuracy of 55.9%. The path-weight and Shapley-based analyses indicate that, in brand perception, the expected initial involvement of the primary visual system occurs first. Other brain areas, such as the cuneal and lateral occipital cortices, participate in early processing and discriminate between preferred and indifferent brands. Discussion: The most important finding is that a split between the processing of preferred and indifferent brands may occur during early processing stages, still within the visual system. However, we found no evidence of a "decision pipeline" that would determine whether a brand is preferred or indifferent. The results instead suggest a "tagging"-like process running in parallel flows in the extrastriate cortex. Analysis of the model's hidden layer shows that network training dynamics aggregate specific processes within the hidden nodes: some nodes contribute to both global brand appraisal and specific brand-category classification, shedding light on the neural substrates of decision-making in response to brand stimuli.
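A minimal sketch of the ICA, train, prune, and retrain sequence described above, using scikit-learn's FastICA and PyTorch's pruning utilities on synthetic data; the voxel count, network size, and pruning fraction are illustrative assumptions, not the study's exact configuration.

```python
# Sketch of the ICA -> ANN -> prune -> retrain pipeline on synthetic data.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))   # 200 trials x 500 voxels (synthetic "fMRI" features)
y = rng.integers(0, 2, 200)           # preferred (1) vs indifferent (0) brand labels

# Reduce dimensionality with ICA and use the component activations as features.
ica = FastICA(n_components=20, random_state=0)
X_ic = ica.fit_transform(X)

X_t = torch.tensor(X_ic, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.long)

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train(epochs):
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_t), y_t)
        loss.backward()
        opt.step()

train(100)  # fully connected baseline

# Prune 30% of the smallest-magnitude weights in each linear layer, then retrain.
for layer in [model[0], model[2]]:
    prune.l1_unstructured(layer, name="weight", amount=0.3)

train(100)  # retraining with the pruning masks in place
acc = (model(X_t).argmax(1) == y_t).float().mean()
print(f"accuracy after prune + retrain: {acc:.3f}")
```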
Project description:The impacts and threats posed by wildfires are increasing dramatically due to climate change. In recent years, the wildfire community has attempted to estimate wildfire occurrence with machine learning models. However, to fully exploit the potential of these models, it is of paramount importance to make their predictions interpretable and intelligible. This study is a first attempt to provide an eXplainable artificial intelligence (XAI) framework for estimating wildfire occurrence, using a Random Forest model with Shapley values for interpretation. Our model accurately detected regions with a high presence of wildfires (area under the curve 81.3%) and identified the drivers of occurrence, such as the Fire Weather Index and the Normalized Difference Vegetation Index. Furthermore, our analysis suggests the presence of anomalous hotspots. In contexts where the human and natural spheres constantly intermingle and interact, the XAI framework, suitably integrated into decision support systems, could help forest managers prevent and mitigate future wildfire disasters and develop strategies for effective fire management, response, recovery, and resilience.
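A minimal sketch of the Random Forest plus Shapley-value workflow on synthetic data; the feature names echo the drivers mentioned above, but the values and the label rule are invented for illustration.

```python
# Illustrative Random Forest + Shapley-value workflow on synthetic wildfire-style data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "fire_weather_index": rng.uniform(0, 50, 1000),
    "ndvi": rng.uniform(-1, 1, 1000),
    "slope": rng.uniform(0, 45, 1000),
    "distance_to_roads": rng.uniform(0, 10, 1000),
})
# Synthetic label loosely driven by FWI and NDVI so the explanation has structure.
y = ((X["fire_weather_index"] > 30) & (X["ndvi"] > 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

# Shapley values quantify each driver's contribution to predicted fire occurrence.
shap_values = shap.TreeExplainer(rf).shap_values(X_te)
# Binary classifiers return either a list per class or one 3D array, depending on shap version.
sv_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = np.abs(sv_pos).mean(axis=0)
print(dict(zip(X.columns, importance.round(4))))
```

In practice the model would be trained on the gridded environmental covariates described above rather than random draws.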
Project description:Machine Learning (ML) and Deep Learning (DL) models show potential to surpass traditional methods, including generalised linear models, for healthcare predictions, particularly with large, complex datasets. However, low interpretability hinders practical implementation. To address this, Explainable Artificial Intelligence (XAI) methods have been proposed, but a comprehensive evaluation of their effectiveness is currently limited. The aim of this scoping review is to critically appraise the application of XAI methods in ML/DL models using Electronic Health Record (EHR) data. In accordance with PRISMA scoping review guidelines, the study searched PUBMED and OVID/MEDLINE (including EMBASE) for publications on tabular EHR data that employed ML/DL models with XAI. Of 3220 identified publications, 76 were included. The selected publications, published between February 2017 and June 2023, demonstrated an exponential increase over time. Extreme Gradient Boosting and Random Forest models were the most frequently used ML/DL methods, with 51 and 50 publications, respectively. Among XAI methods, SHapley Additive exPlanations (SHAP) was predominant, appearing in 63 of 76 publications, followed by partial dependence plots (PDPs) in 11 publications and Local Interpretable Model-agnostic Explanations (LIME) in 8 publications. Despite the growing adoption of XAI methods, their applications varied widely and lacked critical evaluation. This review identifies the increasing use of XAI in tabular EHR research and highlights deficiencies in the reporting of methods and a lack of critical appraisal of validity and robustness. The study emphasises the need for further evaluation of XAI methods and underscores the importance of cautious implementation and interpretation in healthcare settings.
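As a concrete reference for one of the less frequently reported methods above, the sketch below computes partial dependence plots with scikit-learn on synthetic, EHR-like tabular data; the features, outcome, and gradient-boosting model are hypothetical placeholders.

```python
# Minimal partial dependence plot (PDP) example on synthetic tabular "EHR-like" data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "age": rng.integers(18, 90, 2000),
    "systolic_bp": rng.normal(130, 20, 2000),
    "creatinine": rng.normal(1.0, 0.3, 2000),
})
# Synthetic outcome so the dependence curves are non-trivial.
y = ((X["age"] > 65) | (X["creatinine"] > 1.3)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# PDPs show how the predicted risk changes, on average, as one feature varies.
PartialDependenceDisplay.from_estimator(model, X, features=["age", "creatinine"])
plt.tight_layout()
plt.show()
```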
Project description:The Comprehensive In-vitro Proarrhythmia Assay (CiPA) initiative aims to refine the assessment of drug-induced torsades de pointes (TdP) risk, utilizing computational models to predict cardiac drug toxicity. Despite advancements in machine learning applications for this purpose, the specific contribution of in-silico biomarkers to toxicity risk levels has yet to be thoroughly elucidated. This study addresses this gap by implementing explainable artificial intelligence (XAI) to illuminate the impact of individual biomarkers in drug toxicity prediction. We employed the Markov chain Monte Carlo method to generate a detailed dataset for 28 drugs, from which twelve in-silico biomarkers of 12 drugs were computed to train various machine learning models, including Artificial Neural Networks (ANN), Support Vector Machines (SVM), Random Forests (RF), XGBoost, K-Nearest Neighbors (KNN), and Radial Basis Function (RBF) networks. Our study's innovation is leveraging XAI, mainly through the SHAP (SHapley Additive exPlanations) method, to dissect and quantify the contributions of biomarkers across these models. Model performance was then evaluated on a test set of 16 drugs. We found that the ANN model coupled with the eleven most influential in-silico biomarkers, namely dVm/dt_repol, dVm/dt_max, APD90, APD50, APDtri, CaD90, CaD50, CaTri, CaDiastole, qInward, and qNet, showed the highest classification performance among all classifiers, with Area Under the Curve (AUC) scores of 0.92 for predicting high-risk, 0.83 for intermediate-risk, and 0.98 for low-risk drugs. We also found that the optimal in-silico biomarkers selected based on SHAP analysis may differ between classification models, and that biomarker selection did not always improve performance; evaluating various classifiers therefore remains essential to obtain the desired classification performance. Our proposed method could provide a systematic way to identify the best classifier with the optimal in-silico biomarkers for predicting the TdP risk of drugs, thereby advancing the field of cardiac safety evaluation.
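A hedged sketch of SHAP-based biomarker ranking followed by refitting on the top-ranked features; the biomarker matrix is synthetic and only the column names echo the study, with a Random Forest standing in for the classifiers listed above.

```python
# SHAP-based biomarker ranking, then refitting on the top-ranked features (synthetic data).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
biomarkers = ["dVm_dt_repol", "dVm_dt_max", "APD90", "APD50", "APDtri", "CaD90",
              "CaD50", "CaTri", "CaDiastole", "qInward", "qNet", "extra_marker"]
X = pd.DataFrame(rng.standard_normal((300, len(biomarkers))), columns=biomarkers)
y = rng.integers(0, 3, 300)   # 0 = low, 1 = intermediate, 2 = high TdP risk (synthetic)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank biomarkers by mean |SHAP value| aggregated over samples and risk classes.
sv = shap.TreeExplainer(clf).shap_values(X)
sv = np.stack(sv, axis=-1) if isinstance(sv, list) else sv   # handle both shap return formats
ranking = pd.Series(np.abs(sv).mean(axis=(0, 2)), index=biomarkers).sort_values(ascending=False)
print(ranking)

# Refit using only the k most influential biomarkers and compare performance.
top_k = ranking.index[:11]
clf_top = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[top_k], y)
```

As the abstract notes, this kind of ranking is model-dependent, so the selection step would be repeated per classifier and validated on held-out drugs.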
Project description:The field of radiology imaging has experienced a remarkable increase in the use of deep learning (DL) algorithms to support diagnostic and treatment decisions. This rise has led to the development of Explainable AI (XAI) systems intended to improve the transparency and trustworthiness of complex DL methods. However, XAI systems face challenges in gaining acceptance within the healthcare sector, mainly due to technical hurdles in using these systems in practice and a lack of human-centered evaluation and validation. In this study, we focus on visual XAI systems applied to DL-enabled diagnostic systems in chest radiography. In particular, we conducted a user study to evaluate two prominent visual XAI techniques from the human perspective. To this end, we created two clinical scenarios for diagnosing pneumonia and COVID-19 using DL techniques applied to chest X-ray and CT scans; the achieved accuracy rates were 90% for pneumonia and 98% for COVID-19. We then employed two well-known XAI methods, Grad-CAM (Gradient-weighted Class Activation Mapping) and LIME (Local Interpretable Model-agnostic Explanations), to generate visual explanations elucidating the AI decision-making process. These visual explanations were evaluated by medical professionals in a user study in terms of clinical relevance, coherency, and user trust. In general, participants expressed a positive perception of the use of XAI systems in chest radiography, but there was a noticeable lack of awareness regarding their value and practical aspects. Regarding preferences, Grad-CAM outperformed LIME in terms of coherency and trust, although concerns were raised about its clinical usability. Our findings highlight key user-driven explainability requirements, emphasizing the importance of multi-modal explainability and the need to raise awareness of XAI systems among medical practitioners. Inclusive design was also identified as crucial to ensure better alignment of these systems with user needs.
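For reference, the sketch below shows how a LIME image explanation of the kind compared above is typically produced; the classifier is a trivial stand-in that scores mean intensity, not the chest X-ray or CT models described in the study.

```python
# Toy LIME image-explanation sketch; the "classifier" is a placeholder, not a trained CNN.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    """Placeholder predict function: returns [p(normal), p(pneumonia)] per image."""
    scores = images.mean(axis=(1, 2, 3))
    return np.stack([1 - scores, scores], axis=1)

image = np.random.rand(224, 224, 3)  # dummy chest image, values in [0, 1]

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, hide_color=0, num_samples=200
)

# Highlight the superpixels that push the prediction toward the top label.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)
print(overlay.shape)  # in practice this overlay is what clinicians review
```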
Project description:Background and objective: Machine learning (ML) models are increasingly being utilized in oncology research for use in the clinic. However, while more complicated models may provide improvements in predictive or prognostic power, a hurdle to their adoption is limited model interpretability, wherein the inner workings can be perceived as a "black box". Explainable artificial intelligence (XAI) frameworks, including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are novel, model-agnostic approaches that aim to provide insight into the inner workings of the "black box" by producing quantitative visualizations of how model predictions are calculated. In doing so, XAI can transform complicated ML models into easily understandable charts and interpretable sets of rules, which can give providers an intuitive understanding of the knowledge generated, thus facilitating the deployment of such models in routine clinical workflows. Methods: We performed a comprehensive, non-systematic review of the latest literature to define use cases of model-agnostic XAI frameworks in oncologic research. The examined database was PubMed/MEDLINE. The last search was run on May 1, 2022. Key content and findings: In this review, we identified several fields of oncology research where ML models and XAI were utilized to improve interpretability, including prognostication, diagnosis, radiomics, pathology, treatment selection, radiation treatment workflows, and epidemiology. Within these fields, XAI facilitates determination of feature importance in the overall model, visualization of relationships and/or interactions, evaluation of how individual predictions are produced, feature selection, identification of prognostic and/or predictive thresholds, and overall confidence in the models, among other benefits. These examples provide a basis for future work to expand on, which can facilitate adoption in the clinic when the complexity of such modeling would otherwise be prohibitive. Conclusions: Model-agnostic XAI frameworks offer an intuitive and effective means of describing oncology ML models, with applications including prognostication and determination of optimal treatment regimens. Using such frameworks presents an opportunity to improve understanding of ML models, which is a critical step toward their adoption in the clinic.
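As an illustration of the "how is an individual prediction produced" use case listed above, the sketch below generates a single-prediction SHAP waterfall plot for a hypothetical oncology-style tabular model; the features, outcome, and XGBoost classifier are assumptions for demonstration only.

```python
# Single-prediction SHAP explanation on synthetic oncology-style tabular data.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(3)
X = pd.DataFrame({
    "tumor_size_mm": rng.uniform(5, 60, 500),
    "age": rng.integers(30, 85, 500),
    "grade": rng.integers(1, 4, 500),
})
y = ((X["tumor_size_mm"] > 30) & (X["grade"] >= 2)).astype(int)  # synthetic outcome

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# The unified shap API returns an Explanation object; a waterfall plot then shows how each
# feature pushes one patient's prediction above or below the base rate.
explainer = shap.Explainer(model, X)
explanation = explainer(X)
shap.plots.waterfall(explanation[0])
```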
Project description:The design of masonry structures requires accurate estimation of the compressive strength (CS) of hollow concrete masonry prisms. The CS of masonry prisms is generally determined by destructive laboratory testing, which wastes time and resources. This study therefore provides machine learning-based predictive models for the CS of hollow concrete masonry blocks using several algorithms, including Multi Expression Programming (MEP), Random Forest Regression (RFR), and Extreme Gradient Boosting (XGB), among others. A dataset of 159 experimental results was collected from the published literature for this purpose. The dataset consisted of four input parameters, namely the strength of the masonry units (fb), the height-to-thickness ratio (h/t), the strength of the mortar (fm), and the ratio fm/fb, and one output parameter, the CS. Of all the algorithms employed in the current study, only MEP and Gene Expression Programming (GEP) expressed their output in the form of an empirical equation. The accuracy of the developed models was assessed using the root mean squared error (RMSE), an objective function (OF), and R2, among other metrics. XGB was the most accurate, with R2 = 0.99 and the lowest OF value of 0.0063, followed by AdaBoost, RFR, and the other algorithms. The developed XGB model was also used to conduct explainable artificial intelligence (XAI) analyses, including sensitivity and Shapley analyses, and the results showed that the strength of the masonry unit (fb) is the most significant variable for predicting CS. The ML-based predictive models presented in this study can therefore be used in practice to determine the CS of hollow concrete masonry prisms without expensive and time-consuming laboratory testing.
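A hedged sketch of an XGB-plus-Shapley workflow with the four inputs named above; the 159 experimental records are not reproduced here, so random placeholder values and a synthetic target are used instead.

```python
# Illustrative XGBoost regression + SHAP feature-importance workflow (synthetic placeholder data).
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(5)
X = pd.DataFrame({
    "fb": rng.uniform(3, 30, 159),   # masonry unit strength, MPa
    "h_t": rng.uniform(2, 5, 159),   # height-to-thickness ratio
    "fm": rng.uniform(3, 20, 159),   # mortar strength, MPa
})
X["fm_fb"] = X["fm"] / X["fb"]
y = 0.6 * X["fb"] + 0.2 * X["fm"] + rng.normal(0, 0.5, 159)   # synthetic CS, MPa

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)

# SHAP values indicate which input drives the CS prediction the most.
sv = shap.TreeExplainer(model).shap_values(X_te)
print(pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values(ascending=False))
```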
Project description:Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other frontline sectors. Various artificial-intelligence-based models were developed to manage medical resources effectively and identify patients at high risk. However, many of these AI models were limited in their practical applicability in high-risk settings because of their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black-box" behavior of machine learning models and offer definitive and interpretable evidence. XAI provides interpretable analysis in a human-compliant way, thus boosting confidence in the successful deployment of AI systems in the wild. Methods: This study explores the use of model-agnostic XAI methods, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients in a COVID severity prediction task. Various machine learning models, such as a Decision Tree classifier, an XGBoost classifier, and a Neural Network classifier, are used for the prediction task. Results and discussion: The proposed XAI tools are found to augment the high performance of the AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact within a healthcare context. The study suggests that SHAP and LIME analyses are promising methods for incorporating explainability into model development and can lead to better and more trustworthy ML models in the future.
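A minimal LIME-tabular sketch explaining one patient's predicted severity; the symptom features, labels, and gradient-boosting classifier below are synthetic stand-ins, not the study's dataset or models.

```python
# LIME explanation for a single patient's predicted COVID severity (synthetic stand-in data).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(11)
feature_names = ["fever", "cough", "breathlessness", "age", "spo2"]
X = np.column_stack([
    rng.integers(0, 2, 1000),     # fever (0/1)
    rng.integers(0, 2, 1000),     # cough (0/1)
    rng.integers(0, 2, 1000),     # breathlessness (0/1)
    rng.integers(18, 90, 1000),   # age
    rng.uniform(80, 100, 1000),   # SpO2 (%)
]).astype(float)
y = ((X[:, 4] < 92) | ((X[:, 2] == 1) & (X[:, 3] > 60))).astype(int)  # synthetic "severe" label

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["mild", "severe"], mode="classification"
)
# Explain one patient's predicted severity as a weighted list of symptom contributions.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
print(exp.as_list())
```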