Project description:The COVID-19 pandemic has changed the way we practice medicine. The landscapes of cancer patient care and obstetric care have been distorted: delaying cancer diagnosis or maternal-fetal monitoring increased the number of preventable deaths and pregnancy complications. One solution is using Artificial Intelligence to help medical personnel establish diagnoses faster and more accurately. Deep learning is the state-of-the-art solution for image classification. Researchers typically design fixed deep neural network architectures by hand and afterwards verify their performance. The goal of this paper is to propose a method for learning deep network architectures automatically. As the number of network architectures increases exponentially with the number of convolutional layers, we propose a differential evolution algorithm to traverse the search space. First, we encode the network structure as a candidate solution in the form of a fixed-length integer array; the differential evolution method is then initialized with a set of random individuals, followed by mutation, recombination, and selection. At each generation, the individuals with the poorest loss values are eliminated and replaced with more competitive individuals. The model has been tested on three cancer datasets containing MRI scans and histopathological images and two datasets of maternal-fetal screening ultrasound images. The proposed method has been compared and statistically benchmarked against four state-of-the-art deep learning networks: VGG16, ResNet50, Inception V3, and DenseNet169. The experimental results show that the model is competitive with these state-of-the-art models, obtaining accuracies between 78.73% and 99.50% depending on the dataset.
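The differential evolution loop described above (fixed-length integer encoding, then mutation, recombination, and selection each generation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fitness function here is a stand-in (in the paper it would be the validation loss of the network decoded from the integer array), and the population size, generation count, and F/CR parameters are assumptions.

```python
import random

def mutate(pop, i, f, low, high):
    """DE/rand/1 mutation: combine three distinct other individuals, clip to bounds."""
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    return [min(high, max(low, round(a[k] + f * (b[k] - c[k])))) for k in range(len(a))]

def crossover(target, donor, cr):
    """Binomial recombination, guaranteeing at least one gene from the donor."""
    j = random.randrange(len(target))
    return [donor[k] if (random.random() < cr or k == j) else target[k]
            for k in range(len(target))]

def evolve(fitness, length, pop_size=10, gens=20, f=0.8, cr=0.9, low=0, high=7):
    """Minimize `fitness` over fixed-length integer arrays via differential evolution."""
    pop = [[random.randint(low, high) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            trial = crossover(pop[i], mutate(pop, i, f, low, high), cr)
            if fitness(trial) <= fitness(pop[i]):  # keep the lower-loss candidate
                pop[i] = trial
    return min(pop, key=fitness)

# Stand-in loss; in the paper this would be the decoded network's validation loss.
best = evolve(lambda x: sum((g - 3) ** 2 for g in x), length=5)
```

Poor individuals are eliminated implicitly: a target survives only if no trial vector beats it.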
Project description:Background: In medical imaging courses, the complexity of anatomical relationships and the limited number of practical course hours and instructors make improving the teaching quality of practical skills and self-directed learning ability a persistent challenge for higher medical education. Artificial intelligence-assisted diagnostic (AISD) software based on the volume data reconstruction (VDR) technique is gradually entering radiology. It converts two-dimensional images into three-dimensional images, and AI can assist in image diagnosis. However, the application of artificial intelligence in medical education is still in its early stages. The purpose of this study is to explore the value of AISD software based on the VDR technique in practical medical imaging teaching, and to provide a basis for improving such teaching. Methods: A total of 41 students majoring in clinical medicine, enrolled in 2017, formed the experiment group. AISD software based on VDR was used in practical teaching of medical imaging to display 3D images and mark lesions; annotations were then provided and diagnostic suggestions were given. Another 43 students majoring in clinical medicine, enrolled in 2016, were chosen as the control group and were taught with conventional film and multimedia teaching methods. Exam results and evaluation scales were compared statistically between groups. Results: The total skill scores of the experiment group were significantly higher than those of the control group (84.51 ± 3.81 vs. 80.67 ± 5.43). The scores for computed tomography (CT) diagnosis (49.93 ± 3.59 vs. 46.60 ± 4.89) and magnetic resonance (MR) diagnosis (17.41 ± 1.00 vs. 16.93 ± 1.14) of the experiment group were both significantly higher.
The scores for academic self-efficacy (82.17 ± 4.67) and self-directed learning ability (235.56 ± 13.50) of the experiment group were also significantly higher than those of the control group (78.93 ± 6.29 and 226.35 ± 13.90, respectively). Conclusions: Applying AISD software based on VDR to practical medical imaging teaching enables students to promptly obtain AI-annotated lesion information and 3D images, which may help improve their image reading skills and enhance their academic self-efficacy and self-directed learning abilities.
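The group comparisons above reduce to two-sample comparisons of means from summary statistics. The abstract does not state which statistical test was used; purely as an illustration, Welch's t-statistic (which does not assume equal variances) can be computed from the reported means, standard deviations, and group sizes (41 vs. 43):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic and approximate degrees of freedom from summary stats."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2          # per-group variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Total skill scores reported above: 84.51 +/- 3.81 (n=41) vs. 80.67 +/- 5.43 (n=43).
t, df = welch_t(84.51, 3.81, 41, 80.67, 5.43, 43)
```

A t around 3.8 on roughly 75 degrees of freedom is consistent with the abstract's "significantly higher" finding.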
Project description:Objective: This study aims to investigate the long-term effects of the COVID-19 pandemic on medical imaging case volumes. Methods: This retrospective study analyzed data from the Integrated Radiology Information System-Picture Archive and Communication System (RIS-PACS), including monthly medical imaging case volumes at a public hospital from January 2019 to December 2022. The study collected data on medical imaging examinations, comparing the pre-COVID-19 period, which acted as a control group, with the periods following COVID-19, which were designated as cohort groups. Results: The total number of medical imaging procedures performed (n = 597,645) differed significantly (F = 6.69, P < 0.001) between 2019 and 2022. Specifically, the bone mineral density/computed radiography (BMD/CR) modality experienced a significant decrease (P = 0.01) in procedures performed in 2020 and 2021 compared to 2019. Conversely, the nuclear medicine/computed tomography (NM/CT) and computed tomography (CT) modalities demonstrated a significant increase in procedures performed in 2021 (P = 0.04 and P < 0.0001, respectively) and in 2022 (P = 0.0095 and P < 0.0001, respectively) compared to the pre-pandemic year. The digital X-ray (DX) modality showed the highest share (67.63%) of all procedures performed between 2019 and 2022. Meanwhile, the magnetic resonance imaging (MR) and ultrasound (US) modalities experienced a slight drop in procedures in 2020 (4.47% for MR and 1.00% for US), which subsequently recovered by 22.15% and 19.74% in 2021, and by 24.36% and 17.40% in 2022, respectively, compared to 2019. Conclusion: The COVID-19 pandemic initially led to a drop in the number of medical imaging procedures performed in 2020, with the most noticeable drop occurring during the early waves of the pandemic.
However, volumes showed a gradual recovery in the subsequent years, 2021 and 2022, as healthcare systems adapted and pandemic-related restrictions were modified.
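The year-over-year changes reported above are percentage changes relative to the 2019 baseline. A minimal sketch, using hypothetical MR volumes chosen only to reproduce the reported percentages (the real figures come from the RIS-PACS export):

```python
def pct_change(baseline, value):
    """Percentage change of a yearly case volume relative to the 2019 baseline."""
    return 100.0 * (value - baseline) / baseline

# Hypothetical yearly MR volumes; real counts would be aggregated from RIS-PACS.
mr = {2019: 10000, 2020: 9553, 2021: 12215, 2022: 12436}
changes = {year: round(pct_change(mr[2019], v), 2) for year, v in mr.items() if year != 2019}
# A negative value is the 2020 drop; positive values are the 2021-2022 recovery.
```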
Project description:The development of a medical image analysis (MIA) algorithm is a complex process that includes the sub-steps of model training, data visualization, human-computer interaction, and graphical user interface (GUI) construction. To accelerate the development process, algorithm developers need a software tool that assists with all of these sub-steps so they can focus on implementing the core functionality. In particular, for the development of deep learning (DL) algorithms, a software tool supporting training data annotation and GUI construction is highly desirable. In this work, we constructed AnatomySketch, an extensible open-source software platform with a friendly GUI and a flexible plugin interface for integrating user-developed algorithm modules. Through the plugin interface, algorithm developers can quickly create a GUI-based software prototype for clinical validation. AnatomySketch supports image annotation using a stylus and multi-touch screen. It also provides efficient tools to facilitate collaboration between human experts and artificial intelligence (AI) algorithms. We demonstrate four exemplar applications: customized MRI image diagnosis, interactive lung lobe segmentation, human-AI collaborative spine disc segmentation, and Annotation-by-iterative-Deep-Learning (AID) for DL model training. Using AnatomySketch, the gap between laboratory prototyping and clinical testing is bridged and the development of MIA algorithms is accelerated. The software is available at https://github.com/DlutMedimgGroup/AnatomySketch-Software.
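AnatomySketch documents its actual plugin API in the repository linked above; purely as an illustration of the registry pattern such a plugin interface typically relies on, here is a hypothetical Python sketch in which algorithm modules self-register under a name and are invoked on an image (all names here are invented for the example):

```python
class PluginRegistry:
    """Minimal registry: algorithm modules self-register and are invoked by name."""

    def __init__(self):
        self._plugins = {}

    def register(self, name):
        def decorator(func):
            self._plugins[name] = func
            return func
        return decorator

    def run(self, name, image):
        if name not in self._plugins:
            raise KeyError(f"no plugin named {name!r}")
        return self._plugins[name](image)

registry = PluginRegistry()

@registry.register("threshold_segment")
def threshold_segment(image, cutoff=128):
    # Stand-in "segmentation": binarize a 2D list of pixel intensities.
    return [[1 if px >= cutoff else 0 for px in row] for row in image]

mask = registry.run("threshold_segment", [[0, 200], [130, 50]])
```

The host application only needs the registry; new algorithm modules plug in without modifying it, which is what lets a developer wrap a prototype in a GUI quickly.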
Project description:Objectives: To investigate acoustic noise reduction, image quality, and white matter lesion detection rates of cranial magnetic resonance imaging (MRI) scans acquired with and without sequence-based acoustic noise reduction software. Material and methods: Thirty-one patients (18 men and 13 women; mean age 58.3 ± 14.5 years) underwent cranial MRI. A fluid-attenuated inversion recovery (FLAIR) sequence was acquired with and without acoustic noise reduction using the Quiet Suite (QS) software (Siemens Healthcare). During data acquisition, peak sound pressure levels were measured with a sound level meter (Testo, Typ 815). In addition, two observers assessed subjective image quality for both sequences using a five-point scale (1 = very good, 5 = inadequate). Signal-to-noise ratio (SNR) was measured for both sequences in the white matter, gray matter, and cerebrospinal fluid. Furthermore, lesion detection rates for white matter pathologies were evaluated by two observers for both sequences. Acoustic noise, image quality including SNR, and white matter lesion detection rates were compared using the Mann-Whitney U test. Results: Peak sound pressure levels were slightly but significantly reduced using QS (P ≤ 0.017). Effective sound pressure, measured in Pascal, was decreased by 19.7%. There was no significant difference in subjective image quality between FLAIR sequences acquired without/with QS (observer 1: 2.03/2.07, P = 0.730; observer 2: 1.98/2.10, P = 0.362). In addition, SNR was significantly increased in white matter (P ≤ 0.001) and gray matter (P = 0.006) using QS. Lesion detection rates did not decline with QS (observer 1: P = 0.944; observer 2: P = 0.952). Conclusions: Sequence-based noise reduction software such as QS can significantly reduce peak sound pressure levels without loss of subjective image quality, while increasing SNR at constant lesion detection rates.
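SNR in a region of interest is commonly computed as the mean ROI signal divided by the standard deviation of background noise; the abstract does not specify the exact formula used, so the following is an illustrative sketch under that common definition, with hypothetical intensity values:

```python
import statistics

def snr(roi_values, noise_values):
    """SNR as mean ROI signal over the standard deviation of background noise
    (one common definition; the study's exact formula is not stated)."""
    return statistics.mean(roi_values) / statistics.stdev(noise_values)

white_matter = [410, 405, 398, 402, 395]   # hypothetical FLAIR intensities in a WM ROI
background = [10, 12, 9, 11, 8]            # hypothetical background/air samples
value = snr(white_matter, background)
```

Comparing such per-region SNR values between the standard and QS-acquired FLAIR sequences is what the Mann-Whitney U test in the study operates on.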
Project description:In this work we investigate whether the innate visual recognition and learning capabilities of untrained humans can be used to conduct reliable microscopic analysis of biomedical samples toward diagnosis. For this purpose, we designed entertaining digital games interfaced with artificial learning and processing back-ends to demonstrate that, for binary medical diagnostic decisions (e.g., infected vs. uninfected), crowd-sourced games make it possible to approach the accuracy of medical experts. Specifically, using non-expert gamers we report diagnosis of malaria-infected red blood cells with an accuracy that is within 1.25% of the diagnostic decisions made by a trained medical professional.
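A simple way to aggregate such crowd-sourced binary labels is a per-image majority vote (the paper's actual learning back-end may fuse responses more elaborately, e.g. by weighting gamers by skill); a minimal sketch with hypothetical gamer responses:

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate binary crowd labels ('infected'/'uninfected') for one cell image."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical gamer responses for three red blood cell images.
crowd = {
    "cell_1": ["infected", "infected", "uninfected", "infected"],
    "cell_2": ["uninfected"] * 5,
    "cell_3": ["infected", "uninfected", "infected"],
}
consensus = {cell: majority_vote(votes) for cell, votes in crowd.items()}
```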
Project description:With increasing utilization of medical imaging in clinical practice and the growing dimensions of data volumes generated by various medical imaging modalities, the distribution, storage, and management of digital medical image data sets requires data compression. Over the past few decades, several image compression standards have been proposed by international standardization organizations. This paper discusses the current status of these image compression standards in medical imaging applications together with some of the legal and regulatory issues surrounding the use of compression in medical settings.
Project description:Development of artificial intelligence (AI) for medical imaging demands curation and cleaning of large-scale clinical datasets comprising hundreds of thousands of images. Some modalities, such as mammography, are highly standardized. In contrast, breast ultrasound (BUS) imaging can contain many irregularities not indicated by scan metadata, such as enhanced scan modes, sonographer annotations, or additional views. We present BUSClean, an open-source software solution for automatically processing clinical BUS datasets. The algorithm performs BUS scan filtering (flagging of invalid and non-B-mode scans), cleaning (dual-view scan detection, scan area cropping, and caliper detection), and knowledge extraction (BI-RADS Labeling and Measurement fields) from sonographer annotations. Its modular design enables users to adapt it to new settings. Experiments on an internal testing dataset of 430 clinical BUS images achieve >95% sensitivity and >98% specificity in detecting every type of text annotation, and >98% sensitivity and specificity in detecting scans with blood flow highlighting, alternative scan modes, or invalid scans. A case study on a completely external, public dataset of BUS scans found that BUSClean identified text annotations and scans with blood flow highlighting with 88.6% and 90.9% sensitivity and 98.3% and 99.9% specificity, respectively. Adapting the lesion caliper detection method to account for a type of caliper specific to the case study demonstrates the intended use of BUSClean on new data distributions, improving lesion caliper detection from 43.3% sensitivity and 93.3% specificity out-of-the-box to 92.1% and 92.3%, respectively. Source code, example notebooks, and sample data are available at https://github.com/hawaii-ai/bus-cleaning.
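The sensitivity and specificity figures reported above follow the standard definitions over binary detection flags; a minimal self-contained sketch with hypothetical annotation-detection results (not data from the study):

```python
def sensitivity_specificity(predictions, labels):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) over binary flags."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    tn = sum((not p) and (not l) for p, l in zip(predictions, labels))
    fn = sum((not p) and l for p, l in zip(predictions, labels))
    fp = sum(p and (not l) for p, l in zip(predictions, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical flags on ten scans (True = text annotation detected / present).
pred = [True, True, False, True, False, False, True, False, False, False]
true = [True, True, True, True, False, False, False, False, False, False]
sens, spec = sensitivity_specificity(pred, true)
```

Evaluating each detector (text annotations, blood flow highlighting, calipers) this way against a labeled test set is how the per-feature figures in the abstract are obtained.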
Project description:BackgroundThe long case is a traditional method of clinical assessment which has fallen out of favour in certain contexts, primarily due to psychometric concerns. This study explored the long case's educational impact, an aspect which has been neglected in previous research.MethodsThree focus groups of medical students (20 in total) and semi-structured interviews of six examiners were conducted. Cook and Lineberry's framework for exploring educational impact was used as a sensitising tool during thematic analysis of the data.ResultsParticipants described the long case and its scoring as having influence on student learning. Engaging in the activity of a long case had an essential role in fostering students' clinical skills and served as a powerful driving force for them to spend time with patients. The long case was seen as authentic, and the only assessment to promote a holistic approach to patients. Students had concerns about inter-case variability, but there was general consensus that the long case was valuable, with allocation of marks being an important motivator for students.ConclusionsThis study offers a unique focus on the traditional long case's educational consequences; the extent of its positive impact would support its place within a program of assessment.