Project description:Ultra-precision machining requires system modelling that both satisfies explainability and conforms to data fidelity. Existing modelling approaches, whether based on data-driven methods in present-day artificial intelligence (AI) or on first-principles knowledge, fall short of these qualities in highly demanding industrial applications. Therefore, this paper develops an explainable and generalizable 'grey-box' AI informatics method for real-world dynamic system modelling. Such a grey-box model serves as a multiscale 'world model' by integrating the first principles of the system in a white-box architecture with data-fitting black boxes for the varying hyperparameters of the white box. The physical principles serve as an explainable global meta-structure of the real-world system driven by physical knowledge, while the black boxes enhance local fitting accuracy driven by training data. The grey-box model thus encapsulates implicit variables and relationships that a standalone white-box or black-box model fails to capture. A case study on an industrial cleanroom high-precision temperature-regulation system verifies that the grey-box method outperforms existing modelling methods and is suitable for varying operating conditions.
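A minimal grey-box sketch (not the paper's implementation; the thermal model, signals, and names below are illustrative assumptions): a first-order heat balance acts as the white-box meta-structure, while a data-driven regressor supplies its operating-condition-dependent gain.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: operating condition u -> effective thermal gain k(u).
u_train = rng.uniform(0.0, 1.0, size=(200, 1))             # e.g. cooling-valve opening
k_train = 2.0 + 1.5 * np.sin(3.0 * u_train[:, 0])          # unknown local behaviour to be fitted

black_box = GradientBoostingRegressor().fit(u_train, k_train)   # local, data-driven fit

def grey_box_step(T, T_amb, u, dt=1.0, tau=60.0):
    """White-box first-order heat balance whose gain is predicted by the black box."""
    k = black_box.predict(np.array([[u]]))[0]              # condition-dependent hyperparameter
    dT = (-(T - T_amb) + k * u) / tau                      # first-principles structure
    return T + dt * dT

T = 22.0
for _ in range(10):                                        # simulate ten time steps
    T = grey_box_step(T, T_amb=20.0, u=0.4)
print(round(T, 3))

The physics fixes the global structure of the model; only the locally varying coefficient is learned from data.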
Project description:This paper proposes a model called X-LSTM-EO, which integrates explainable artificial intelligence (XAI), long short-term memory (LSTM), and the equilibrium optimizer (EO) to reliably forecast solar power generation. The LSTM component forecasts power generation rates based on environmental conditions, while the EO component optimizes the LSTM model's hyper-parameters during training. The XAI technique Local Interpretable Model-agnostic Explanations (LIME) is adapted to identify the critical factors that influence the accuracy of the power generation forecasting model in smart solar systems. The effectiveness of the proposed X-LSTM-EO model is evaluated using five metrics: R-squared (R2), root mean square error (RMSE), coefficient of variation (COV), mean absolute error (MAE), and efficiency coefficient (EC). The proposed model achieves values of 0.99, 0.46, 0.35, 0.229, and 0.95 for R2, RMSE, COV, MAE, and EC, respectively. These results improve on the conventional LSTM baseline, with improvement rates of 148%, 21%, 27%, 20%, and 134% for R2, RMSE, COV, MAE, and EC, respectively. The performance of the LSTM is also compared with other machine learning algorithms such as decision tree (DT), linear regression (LR), and gradient boosting, and the LSTM model outperformed DT and LR in this comparison. Additionally, the PSO optimizer was employed in place of the EO optimizer to validate the outcomes, which further demonstrated the efficacy of the EO optimizer. The experimental results and simulations demonstrate that the proposed model can accurately estimate PV power generation in response to abrupt changes in power generation patterns. Moreover, the proposed model might assist in optimizing the operations of photovoltaic power units. The proposed model is implemented using TensorFlow and Keras within the Google Colab environment.
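A minimal sketch of the LSTM forecasting core under assumed shapes (24 hourly steps of 4 environmental features predicting one power value); the EO hyper-parameter search and the LIME step are omitted, and the synthetic data are placeholders rather than the paper's dataset.

import numpy as np
import tensorflow as tf
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 24, 4)).astype("float32")        # 24 time steps x 4 weather features
y = rng.normal(size=(500, 1)).astype("float32")            # next-step PV power (placeholder)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 4)),
    tf.keras.layers.LSTM(32),                              # units would be chosen by the EO search
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.fit(X[:400], y[:400], epochs=2, batch_size=32, verbose=0)

pred = model.predict(X[400:], verbose=0)
print("R2", r2_score(y[400:], pred),
      "RMSE", float(np.sqrt(mean_squared_error(y[400:], pred))),
      "MAE", mean_absolute_error(y[400:], pred))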
Project description:In the modern era of digitalization, integrating blockchain and machine learning (ML) technologies is essential for improving healthcare management applications and secure prediction analysis of health data. This research aims to develop a novel methodology for securely storing patient medical data and analyzing it for PCOS prediction. The main goals are to leverage Hyperledger Fabric for immutable, private data storage and to integrate Explainable Artificial Intelligence (XAI) techniques to enhance transparency in decision-making. The innovation of this study is the unique integration of blockchain technology with ML and XAI, addressing critical issues of data security and model interpretability in healthcare. With the Caliper tool, the Hyperledger Fabric blockchain's performance is evaluated and enhanced. The suggested Explainable AI-based blockchain system for Polycystic Ovary Syndrome detection (EAIBS-PCOS) demonstrates outstanding performance, recording 98% accuracy, 100% precision, 98.04% recall, and a resultant F1-score of 99.01%. These quantitative measures confirm that the proposed methodology delivers dependable and intelligible predictions for PCOS diagnosis, making a valuable addition to the literature and offering a solid solution for healthcare applications in the near future.
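A sketch of the prediction and evaluation side only, using synthetic stand-in records; the Hyperledger Fabric storage layer, Caliper benchmarking, and the XAI explanations are outside this snippet, and the random-forest choice is an assumption rather than the EAIBS-PCOS architecture.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Placeholder PCOS records; real inputs would be the de-identified patient features
# retrieved from the blockchain ledger.
X, y = make_classification(n_samples=600, n_features=12, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

clf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy", accuracy_score(y_te, pred),
      "precision", precision_score(y_te, pred),
      "recall", recall_score(y_te, pred),
      "F1", f1_score(y_te, pred))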
Project description:Artificial intelligence (AI) provides considerable opportunities to assist human work. However, one crucial challenge of human-AI collaboration is that many AI algorithms operate in a black-box manner, where the way the AI makes predictions remains opaque. This makes it difficult for humans to validate a prediction made by AI against their own domain knowledge. For this reason, we hypothesize that augmenting humans with explainable AI improves task performance in human-AI collaboration. To test this hypothesis, we implement explainable AI in the form of visual heatmaps in inspection tasks conducted by domain experts. Visual heatmaps have the advantage that they are easy to understand and help to localize relevant parts of an image. We then compare participants supported by either (a) black-box AI or (b) explainable AI, where the latter helps them follow AI predictions when the AI is accurate and overrule the AI when its predictions are wrong. We conducted two preregistered experiments with representative, real-world visual inspection tasks from manufacturing and medicine. The first experiment was conducted with factory workers from an electronics factory, who assessed whether electronic products have defects. The second experiment was conducted with radiologists, who assessed chest X-ray images to identify lung lesions. The results of our experiments with domain experts performing real-world tasks show that task performance improves when participants are supported by explainable AI with heatmaps instead of black-box AI. We find that explainable AI as a decision aid improved task performance by 7.7 percentage points (95% confidence interval [CI]: 3.3% to 12.0%) in the manufacturing experiment and by 4.7 percentage points (95% CI: 1.1% to 8.3%) in the medical experiment compared to black-box AI. These gains represent a significant improvement in task performance.
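An illustrative Grad-CAM-style heatmap, one common way to produce the visual explanations described here (the abstract does not name the exact method); the tiny CNN and random image are placeholders for the study's inspection models and data.

import numpy as np
import tensorflow as tf

# Toy classifier standing in for the inspection model.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu", name="last_conv")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

image = np.random.rand(1, 64, 64, 3).astype("float32")     # stand-in inspection image
grad_model = tf.keras.Model(model.input, [model.get_layer("last_conv").output, model.output])

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(image)
    top_class = tf.argmax(preds[0])
    class_score = tf.gather(preds, top_class, axis=1)      # score of the predicted class
grads = tape.gradient(class_score, conv_out)               # d score / d feature maps
weights = tf.reduce_mean(grads, axis=(1, 2))               # channel importance
heatmap = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
heatmap = heatmap / (tf.reduce_max(heatmap) + 1e-8)        # normalise to [0, 1]
print(heatmap.shape)                                       # (1, 62, 62) relevance map to overlay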
Project description:Although presepsin, a crucial biomarker for the diagnosis and management of sepsis, has gained prominence in contemporary medical research, its relationship with routine laboratory parameters, including demographic data and hospital blood test data, remains underexplored. This study integrates machine learning with explainable artificial intelligence (XAI) to provide insights into the relationship between presepsin and these parameters. Advanced machine learning classifiers provide a multifaceted view of the data and play an important role in highlighting the interrelationships between presepsin and other parameters. XAI enhances the analysis by ensuring transparency in the model's decisions, especially in selecting key parameters that significantly enhance classification accuracy. Utilizing XAI, this study successfully identified critical parameters that increased the predictive accuracy for sepsis patients, achieving a ROC AUC of 0.97 and an accuracy of 0.94. This improvement is likely attributable to the comprehensive use of XAI in refining parameter selection, leading to these predictive metrics. The presence of missing data in the datasets is another concern; this study addresses it by employing Extreme Gradient Boosting (XGBoost) to manage missing data, effectively mitigating potential biases while preserving both the accuracy and relevance of the results. Examining data from a higher-dimensional perspective with machine learning extends beyond traditional observation and analysis. The findings of this study hold the potential to enhance patient diagnosis and treatment, underscoring the value of merging traditional research methods with advanced analytical tools.
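A hedged sketch of the modelling step with synthetic data: XGBoost tolerates missing values natively, and SHAP (assumed here as the XAI tool; the study's exact tooling may differ) ranks the laboratory parameters by contribution.

import numpy as np
import shap
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(800, 10))                             # stand-in lab/demographic parameters
X[rng.random(X.shape) < 0.1] = np.nan                      # simulate missing hospital values
y = (np.nan_to_num(X[:, 0]) + np.nan_to_num(X[:, 1]) > 0).astype(int)   # placeholder sepsis label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)
model = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
print("ROC AUC", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

shap_values = shap.TreeExplainer(model).shap_values(X_te)  # per-parameter contributions
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("most influential parameter indices:", ranking[:3])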
Project description:The accuracy and flexibility of artificial intelligence (AI) systems often come at the cost of a decreased ability to offer an intuitive explanation of their predictions. This hinders trust and discourages adoption of AI in healthcare, a problem exacerbated by concerns over liability and risks to patients' health in case of misdiagnosis. Providing an explanation for a model's prediction is possible due to recent advances in the field of interpretable machine learning. We considered a data set of hospital admissions linked to records of antibiotic prescriptions and susceptibilities of bacterial isolates. An appropriately trained gradient boosted decision tree algorithm, supplemented by a Shapley explanation model, predicts the likely antimicrobial drug resistance, with the odds of resistance informed by characteristics of the patient, admission data, and historical drug treatments and culture test results. Applying this AI-based system, we found that it substantially reduces the risk of mismatched treatment compared with the observed prescriptions. The Shapley values provide an intuitive association between observations/data and outcomes; the associations identified are broadly consistent with expectations based on prior knowledge from health specialists. The results, and the ability to attribute confidence and explanations, support the wider adoption of AI in healthcare.
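A minimal sketch of the idea, not the study's trained model: a gradient boosted classifier predicts resistance from stand-in patient and admission features, and Shapley values attribute one individual prediction to those features (the feature names are hypothetical).

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=11)   # synthetic admissions
feature_names = ["age", "prior_antibiotic", "prior_resistance",
                 "admission_type", "ward", "recent_culture"]               # hypothetical names

clf = GradientBoostingClassifier(random_state=11).fit(X, y)
contrib = shap.TreeExplainer(clf).shap_values(X[:1])[0]    # contributions for one admission

for name, value in sorted(zip(feature_names, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:18s} {value:+.3f}")                      # positive pushes towards 'resistant'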
Project description:Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the "learn to optimize" (L2O) methodology wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g., satisfaction of constraints). We also propose the use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided in the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets. Code and additional documentation can be found at https://xai-l2o.research.typal.academy .
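A toy numpy sketch of the L2O pattern for dictionary-based signal recovery: each inference runs an ISTA-style iteration (here with hand-set rather than learned step sizes) and a simple residual certificate flags untrustworthy outputs; the authors' actual models and certificates are in the linked repository.

import numpy as np

rng = np.random.default_rng(5)
D = rng.normal(size=(20, 50)) / np.sqrt(20)                # dictionary
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -0.5, 0.8]                     # sparse ground truth
b = D @ x_true + 0.01 * rng.normal(size=20)                # noisy measurement

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step, lam = 0.1, 0.02                                      # in L2O these would be learned per layer
x = np.zeros(50)
for _ in range(300):                                       # unrolled ISTA iterations
    x = soft_threshold(x - step * D.T @ (D @ x - b), step * lam)

residual = np.linalg.norm(D @ x - b)                       # interpretable certificate
print("recovered support:", np.flatnonzero(np.abs(x) > 0.05))
print("certificate residual %.3f -> trustworthy: %s" % (residual, residual < 0.2))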
Project description:Fecal metabolites effectively discriminate inflammatory bowel disease (IBD) and show differential associations with diet. Metabolomics and AI-based models, including explainable AI (XAI), play crucial roles in understanding IBD. Using datasets from the UK Biobank and the Human Microbiome Project Phase II IBD Multi'omics Database (HMP2 IBDMDB), this study uses multiple machine learning (ML) classifiers and Shapley additive explanations (SHAP)-based XAI to prioritize plasma and fecal metabolites and analyze their diet correlations. Key findings include the identification of discriminative metabolites like glycoprotein acetyl and albumin in plasma, as well as nicotinic acid metabolites and urobilin in feces. Fecal metabolites provided a more robust disease predictor model (AUC [95%]: 0.93 [0.87-0.99]) compared to plasma metabolites (AUC [95%]: 0.74 [0.69-0.79]), with stronger and more group-differential diet-metabolite associations in feces. The study validates known metabolite associations and highlights the impact of IBD on the interplay between gut microbial metabolites and diet.
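A sketch of how a metabolite panel's discriminative power might be scored, with synthetic features standing in for the UK Biobank / HMP2 metabolite tables: a classifier's AUC is reported with a bootstrap 95% CI, matching the format quoted above.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=8, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
prob = RandomForestClassifier(random_state=2).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

rng = np.random.default_rng(2)
aucs = []
for _ in range(1000):                                      # bootstrap over test samples
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) == 2:                     # both classes must be present
        aucs.append(roc_auc_score(y_te[idx], prob[idx]))
print("AUC %.2f [%.2f-%.2f]" % (roc_auc_score(y_te, prob),
                                np.percentile(aucs, 2.5), np.percentile(aucs, 97.5)))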
Project description:Explainable artificial intelligence (XAI) is of paramount importance to various domains, including healthcare, fitness, skill assessment, and personal assistants, to understand and explain the decision-making process of the artificial intelligence (AI) model. Smart homes embedded with smart devices and sensors enable many context-aware applications to recognize physical activities. This study presents XAI-HAR, a novel XAI-empowered human activity recognition (HAR) approach based on key features identified from data collected by sensors located at different places in a smart home. XAI-HAR identifies a set of new features (i.e., the total number of sensors used in a specific activity) as physical key features selection (PKFS), based on weighting criteria. Next, it applies statistical key features selection (SKFS) (i.e., mean and standard deviation) to handle outliers and high class variance. The proposed XAI-HAR is evaluated using machine learning models, namely random forest (RF), K-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), and naive Bayes (NB), and deep learning models such as deep neural network (DNN), convolutional neural network (CNN), and CNN-based long short-term memory (CNN-LSTM). Experiments demonstrate the superior performance of XAI-HAR using the RF classifier over all other machine learning and deep learning models. For explainability, XAI-HAR uses Local Interpretable Model-agnostic Explanations (LIME) with an RF classifier. XAI-HAR achieves an F-score of 0.96 for health and dementia classification, and 0.95 and 0.97 for activity recognition of dementia and healthy individuals, respectively.
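A hedged sketch of the PKFS/SKFS idea with made-up binary sensor windows: a count of active sensors plus per-window mean and standard deviation feed a random forest, and LIME explains a single prediction; the feature set, labels, and data are illustrative, not the XAI-HAR pipeline.

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
windows = rng.random((300, 20)) < 0.3                      # 300 activity windows x 20 binary sensors
labels = (windows[:, :5].sum(axis=1) > 1).astype(int)      # placeholder activity labels

# PKFS-style count feature plus SKFS-style mean/std per window.
features = np.column_stack([windows.sum(axis=1), windows.mean(axis=1), windows.std(axis=1)])
names = ["active_sensor_count", "mean_activation", "std_activation"]

clf = RandomForestClassifier(random_state=4).fit(features, labels)
explainer = LimeTabularExplainer(features, feature_names=names,
                                 class_names=["other", "target_activity"], mode="classification")
exp = explainer.explain_instance(features[0], clf.predict_proba, num_features=3)
print(exp.as_list())                                       # per-feature contribution for one window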
Project description:In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of and adherence to its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and a trade-off between performance and explainability may still be required, even as XAI continues to grow as a field.