Project description:Explainable Artificial Intelligence (XAI) is a branch of AI that focuses on developing systems that provide understandable and clear explanations for their decisions. In the context of cancer diagnosis on medical imaging, an XAI technology uses advanced image-analysis methods such as deep learning (DL) to analyze medical images and make a diagnosis, while also providing a clear explanation of how it arrived at that diagnosis. This includes highlighting the specific areas of the image that the system recognized as indicative of cancer, as well as providing information on the underlying AI algorithm and its decision-making process. The objective of XAI is to give patients and doctors a better understanding of the system's decision-making process and to increase transparency and trust in the diagnostic method. Therefore, this study develops an Adaptive Aquila Optimizer with Explainable Artificial Intelligence Enabled Cancer Diagnosis (AAOXAI-CD) technique on medical imaging. The proposed AAOXAI-CD technique aims to accomplish effectual colorectal and osteosarcoma cancer classification. To achieve this, the AAOXAI-CD technique initially employs the Faster SqueezeNet model for feature vector generation. In addition, the hyperparameters of the Faster SqueezeNet model are tuned using the AAO algorithm. For cancer classification, a majority weighted voting ensemble model with three DL classifiers, namely recurrent neural network (RNN), gated recurrent unit (GRU), and bidirectional long short-term memory (BiLSTM), is applied. Furthermore, the AAOXAI-CD technique incorporates the XAI approach LIME for better understanding and explainability of the black-box method for accurate cancer detection. The AAOXAI-CD methodology was evaluated on medical cancer imaging databases, and the outcomes demonstrated its superior performance over other current approaches.
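The weighted voting step described above can be sketched as follows. This is a minimal illustration only, not the AAOXAI-CD implementation: the class-probability values and per-classifier weights are hypothetical, and the three probability arrays stand in for the outputs of the RNN, GRU, and BiLSTM classifiers.

```python
import numpy as np

def weighted_majority_vote(probas, weights):
    """Combine per-classifier class-probability matrices with a weighted vote.

    probas  : list of (n_samples, n_classes) arrays, one per classifier
    weights : one weight per classifier (e.g. a validation score)
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                   # normalize weights to sum to 1
    stacked = np.stack(probas)                          # (n_classifiers, n_samples, n_classes)
    combined = np.tensordot(weights, stacked, axes=1)   # weighted average of probabilities
    return combined.argmax(axis=1)                      # predicted class index per sample

# Toy example: three classifiers, two samples, two classes (values are made up)
p_rnn    = np.array([[0.60, 0.40], [0.30, 0.70]])
p_gru    = np.array([[0.55, 0.45], [0.40, 0.60]])
p_bilstm = np.array([[0.20, 0.80], [0.35, 0.65]])

preds = weighted_majority_vote([p_rnn, p_gru, p_bilstm], weights=[0.90, 0.85, 0.95])
print(preds)  # -> [1 1]
```

Weighting by a per-classifier quality score (rather than a plain majority count) lets a stronger classifier such as the BiLSTM outvote the other two when they disagree only weakly.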
Project description:AI has, to varying degrees, affected all aspects of molecular imaging, from image acquisition to diagnosis. During the last decade, the advent of deep learning in particular has transformed medical image analysis. Although the majority of recent advances have resulted from neural-network models applied to image segmentation, a broad range of techniques has shown promise for image reconstruction, image synthesis, differential-diagnosis generation, and treatment guidance. Applications of AI for drug design indicate the way forward for using AI to facilitate molecular-probe design, which is still in its early stages. Deep-learning models have demonstrated increased efficiency and image quality for PET reconstruction from sinogram data. Generative adversarial networks (GANs), which are paired neural networks that are jointly trained to generate and classify images, have found applications in modality transformation, artifact reduction, and synthetic-PET-image generation. Some AI applications, based either partly or completely on neural-network approaches, have demonstrated superior differential-diagnosis generation relative to radiologists. However, AI models have a history of brittleness, and physicians and patients may not trust AI applications that cannot explain their reasoning. To date, the majority of molecular-imaging applications of AI have been confined to research projects, and are only beginning to find their way into routine clinical workflows via commercialization and, in some cases, integration into scanner hardware. Evaluation of actual clinical products will yield more realistic assessments of AI's utility in molecular imaging.
Project description:BackgroundArtificial intelligence (AI) is seen as one of the major disrupting forces in the future healthcare system. However, the assessment of the value of these new technologies is still unclear, and no agreed international health technology assessment-based guideline exists. This study provides an overview of the available literature on the value assessment of AI in the field of medical imaging.MethodsWe performed a systematic scoping review of studies published between January 2016 and September 2020 using 10 databases (Medline, Scopus, ProQuest, Google Scholar, and six related databases of grey literature). Information about the context (country, clinical area, and type of study) and the mentioned domains with specific outcomes and items was extracted. An existing domain classification from a European assessment framework was used as a point of departure, extracted data were grouped into domains, and a content analysis covering predetermined themes was performed.ResultsSeventy-nine studies were included out of 5890 identified articles. An additional seven studies were identified by searching reference lists, and the analysis was performed on the 86 included studies. Eleven domains were identified: (1) health problem and current use of technology, (2) technology aspects, (3) safety assessment, (4) clinical effectiveness, (5) economics, (6) ethical analysis, (7) organisational aspects, (8) patients and social aspects, (9) legal aspects, (10) development of AI algorithm, performance metrics and validation, and (11) other aspects. The frequency with which a domain was mentioned varied from 20 to 78% within the included papers. Only 15/86 studies were actual assessments of AI technologies. The majority of data were statements from reviews or papers voicing future needs or challenges of AI research, i.e. 
not actual outcomes of evaluations.ConclusionsThis review of the value assessment of AI in medical imaging yielded 86 studies covering 11 identified domains. The domain classification based on the European assessment framework proved useful, and the current analysis added one new domain. The included studies covered a broad range of essential domains relevant to assessing AI technologies, highlighting the importance of domains related to legal and ethical aspects.
Project description:BackgroundOpinions seem somewhat divided when considering the effect of artificial intelligence (AI) on medical imaging. The aim of this study was to characterise viewpoints presented online relating to the impact of AI on the field of radiology and to assess who is engaging in this discourse.MethodsTwo search methods were used to identify online information relating to AI and radiology. Firstly, 34 terms were searched using Google and the first two pages of results for each term were evaluated. Secondly, a Really Simple Syndication (RSS) feed evaluated incidental information over 3 weeks. Webpages were evaluated and categorised as having a positive, negative, balanced, or neutral viewpoint based on study criteria.ResultsOf the 680 webpages identified using the Google search engine, 248 were deemed relevant and accessible. 43.2% had a positive viewpoint, 38.3% a balanced viewpoint, 15.3% a neutral viewpoint, and 3.2% a negative viewpoint. Peer-reviewed journals represented the most common webpage source (48%), followed by media (29%), commercial sources (12%), and educational sources (8%). Commercial webpages had the highest proportion of positive viewpoints (66%). Radiologists were identified as the most common author group (38.9%). The RSS feed identified 177 posts that were relevant and accessible; 86% of these posts were of media origin, and 64% expressed positive viewpoints.ConclusionThe overall opinion of the impact of AI on radiology presented online is a positive one, with consistency across a range of sources and author groups. Radiologists were significant contributors to this online discussion, and the results may impact future recruitment.
Project description:BackgroundMedical history contributes approximately 80% to a diagnosis, although physical examinations and laboratory investigations increase a physician's confidence in the medical diagnosis. The concept of artificial intelligence (AI) was first proposed more than 70 years ago. Recently, its role in various fields of medicine has grown remarkably. However, no studies have evaluated the importance of patient history in AI-assisted medical diagnosis.ObjectiveThis study explored the contribution of patient history to AI-assisted medical diagnoses and assessed the accuracy of ChatGPT in reaching a clinical diagnosis based on the medical history provided.MethodsUsing clinical vignettes of 30 cases identified in The BMJ, we evaluated the accuracy of diagnoses generated by ChatGPT. We compared the diagnoses made by ChatGPT based solely on medical history with the correct diagnoses. We also compared the diagnoses made by ChatGPT after incorporating additional physical examination findings and laboratory data alongside history with the correct diagnoses.ResultsChatGPT accurately diagnosed 76.6% (23/30) of the cases with only the medical history, consistent with previous research targeting physicians. We also found that this rate was 93.3% (28/30) when additional information was included.ConclusionsAlthough adding additional information improves diagnostic accuracy, patient history remains a significant factor in AI-assisted medical diagnosis. Thus, when using AI in medical diagnosis, it is crucial to include pertinent and correct patient histories for an accurate diagnosis. Our findings emphasize the continued significance of patient history in clinical diagnoses in this age and highlight the need for its integration into AI-assisted medical diagnosis systems.
Project description:BackgroundIn medical imaging courses, due to the complexity of anatomical relationships and the limited number of practical course hours and instructors, improving the teaching quality of practical skills and self-directed learning ability has always been a challenge in higher medical education. Artificial intelligence-assisted diagnostic (AISD) software based on the volume data reconstruction (VDR) technique is gradually entering radiology. It converts two-dimensional images into three-dimensional images, and AI can assist in image diagnosis. However, the application of artificial intelligence in medical education is still in its early stages. The purpose of this study is to explore the application value of AISD software based on the VDR technique in medical imaging practical teaching, and to provide a basis for improving medical imaging practical teaching.MethodsA total of 41 students majoring in clinical medicine, enrolled in 2017, were recruited as the experiment group. AISD software based on VDR was used in practical teaching of medical imaging to display 3D images and mark lesions with AISD. Then annotations were provided and diagnostic suggestions were given. In addition, 43 students majoring in clinical medicine, enrolled in 2016, were chosen as the control group, who were taught with the conventional film and multimedia teaching methods. The exam results and evaluation scales were compared statistically between groups.ResultsThe total skill scores of the experiment group were significantly higher than those of the control group (84.51 ± 3.81 vs. 80.67 ± 5.43). The scores for computed tomography (CT) diagnosis (49.93 ± 3.59 vs. 46.60 ± 4.89) and magnetic resonance (MR) diagnosis (17.41 ± 1.00 vs. 16.93 ± 1.14) were also significantly higher in the experiment group. 
The scores of academic self-efficacy (82.17 ± 4.67) and self-directed learning ability (235.56 ± 13.50) of the experiment group were significantly higher than those of the control group (78.93 ± 6.29 and 226.35 ± 13.90, respectively).ConclusionsApplying AISD software based on VDR to medical imaging practical teaching can enable students to obtain AI-annotated lesion information and 3D images in a timely manner, which may help improve their image-reading skills and enhance their academic self-efficacy and self-directed learning abilities.
Project description:The recent advent of large language models (LLMs), such as ChatGPT, has drawn attention to generative artificial intelligence (AI) in a number of fields. Generative AI can produce different types of data including text, images, and voice, depending on the training methods and datasets used. Additionally, recent advancements in multimodal techniques, which can simultaneously process multiple data types like text and images, have expanded the potential of using multimodal generative AI in the medical environment where various types of clinical and imaging information are used together. This review summarizes the concepts and types of LLMs, image generative AI, and multimodal AI, and it examines the status and future possibilities of generative AI in the field of radiology.
Project description:Explainable artificial intelligence (XAI) has experienced a vast increase in recognition over the last few years. While the technical developments are manifold, less focus has been placed on the clinical applicability and usability of systems. Moreover, not much attention has been given to XAI systems that can handle multimodal and longitudinal data, which we postulate are important features in many clinical workflows. In this study, we review, from a clinical perspective, the current state of XAI for multimodal and longitudinal datasets and highlight the challenges thereof. Additionally, we propose the XAI orchestrator, an instance that aims to help clinicians with the synopsis of multimodal and longitudinal data, the resulting AI predictions, and the corresponding explainability output. We propose several desirable properties of the XAI orchestrator, such as being adaptive, hierarchical, interactive, and uncertainty-aware.
Project description:Although a plethora of research articles on AI methods for COVID-19 medical imaging has been published, their clinical value remains unclear. We conducted the largest systematic review of the literature addressing the utility of AI in imaging for COVID-19 patient care. Through keyword searches on PubMed and preprint servers throughout 2020, we identified 463 manuscripts and performed a systematic meta-analysis to assess their technical merit and clinical relevance. Our analysis reveals a significant disparity between the clinical and AI communities in the focus on both imaging modalities (AI experts neglected CT and ultrasound, favoring X-ray) and performed tasks (71.9% of AI papers centered on diagnosis). The vast majority of manuscripts were found to be deficient regarding potential use in clinical practice, but 2.7% (n = 12) of the publications were assigned a high maturity level and are summarized in greater detail. We provide an itemized discussion of the challenges in developing clinically relevant AI solutions, with recommendations and remedies.
Project description:BackgroundArtificial intelligence (AI) applications are increasingly used in all fields, especially in medicine. However, for the successful incorporation of AI-driven tools into medicine, healthcare professionals should be equipped with the necessary knowledge. Accordingly, we aimed to assess AI readiness among medical students in Jordan.MethodsA cross-sectional survey was conducted among medical students across six Jordanian universities. The prevalidated Medical Artificial Intelligence Readiness Scale for Medical Students questionnaire was used. The questionnaire was distributed through social media groups of students. SPSS v.27 was used for analysis.ResultsA total of 858 responses were collected. The mean AI readiness score was 64.2%. Students scored highest in the ability domain, with a mean of 22.57. We found that academic performance (grade point average) was positively associated with overall AI readiness (P = .023), and that prior exposure to AI through formal education or experience significantly enhances readiness (P = .009). In contrast, AI readiness levels did not vary significantly across different medical schools in Jordan. Notably, most students (84%) did not receive formal education about AI from their schools.ConclusionIncorporating AI education into medical curricula is crucial to close knowledge gaps and ensure that students are prepared for the use of AI in their future careers. Our findings highlight the importance of preparing students to engage with AI technologies and equipping them with the necessary knowledge about its various aspects.