Project description: Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve (AUC) = 0.81]. Further, the performance of ensemble classifiers based on both types was significantly better than that of either classifier type alone (AUC = 0.86 versus 0.81, p = 0.022). We conclude that transfer learning can improve current CADx methods while also providing standalone classifiers without large datasets, facilitating machine-learning methods in radiomics and precision medicine.
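A minimal sketch of the evaluation pipeline this describes: a fixed, transfer-learned CNN supplies per-image features, an SVM classifies them, and five-fold cross validation is grouped by lesion so images of the same lesion never straddle folds. The arrays are random stand-ins for the paper's non-public data, and the RBF kernel is an assumption.

```python
# Sketch: SVM on CNN-extracted features with lesion-grouped 5-fold CV.
# `features`, `labels`, and `lesion_ids` are random stand-ins for the
# paper's (non-public) data; the RBF kernel is an assumption.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = rng.normal(size=(607, 4096))       # one CNN feature vector per image
labels = rng.integers(0, 2, size=607)         # 0 = benign, 1 = malignant
lesion_ids = rng.integers(0, 219, size=607)   # images of the same lesion share an id

aucs = []
for tr, te in GroupKFold(n_splits=5).split(features, labels, groups=lesion_ids):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(features[tr], labels[tr])
    # decision_function scores are sufficient for ROC analysis
    aucs.append(roc_auc_score(labels[te], clf.decision_function(features[te])))
print(f"mean AUC over folds: {np.mean(aucs):.3f}")
```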
Project description: Celiac disease (CD) is a gluten-sensitive immune-mediated enteropathy. This proof-of-concept study used a convolutional neural network (CNN) to classify hematoxylin and eosin (H&E) histological images of CD, normal small intestine control, and non-specified duodenal inflammation (7294, 11,642, and 5966 images, respectively). The trained network classified CD with high performance (accuracy 99.7%, precision 99.6%, recall 99.3%, F1-score 99.5%, and specificity 99.8%). Interestingly, when the same network (already trained on the three classes) analyzed duodenal adenocarcinoma (3723 images), it classified the new images as duodenal inflammation in 63.65%, small intestine control in 34.73%, and CD in 1.61% of cases; when the network was retrained on the four histological subtypes, performance was above 99% for CD and 97% for adenocarcinoma. Finally, 13,043 images of Crohn's disease were added to the model to include other inflammatory bowel diseases; a comparison between different CNN architectures was performed, and the gradient-weighted class activation mapping (Grad-CAM) technique was used to understand why the deep learning network made its classification decisions. In conclusion, the CNN-based deep neural system classified 5 diagnoses with high performance. Narrow artificial intelligence (AI) is designed to perform tasks that typically require human intelligence, but it operates within limited constraints and is task-specific.
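Grad-CAM, as cited here (Selvaraju et al.), weights a convolutional layer's activation maps by the spatially averaged gradients of the target class score and keeps the positive part. A hedged sketch against a generic torchvision backbone, since the study's histology network is not public; ResNet-18, `layer4` as the target layer, and the random input are all placeholders.

```python
# Hedged Grad-CAM sketch on a stand-in torchvision model; the study's
# histology network and images are not public here.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="DEFAULT").eval()

acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)               # stand-in for one H&E image tensor
logits = model(x)
logits[0, logits[0].argmax()].backward()      # gradient of the predicted class score

w = grads["v"].mean(dim=(2, 3), keepdim=True)            # pool gradients per channel
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))   # weighted activation map
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8) # heatmap in [0, 1]
```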
Project description: The Machine Recognition of Crystallization Outcomes (MARCO) initiative has assembled roughly half a million annotated images of macromolecular crystallization experiments from various sources and setups. Here, state-of-the-art machine learning algorithms are trained and tested on different parts of this data set. We find that more than 94% of the test images can be correctly labeled, irrespective of their experimental origin. Because crystal recognition is key to high-density screening and the systematic analysis of crystallization experiments, this approach opens the door to both industrial and fundamental research applications.
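The headline claim, accuracy that holds "irrespective of experimental origin", amounts to breaking test accuracy down by source. A small sketch with hypothetical predictions; the four labels follow MARCO's published outcome categories (crystals, precipitate, clear, other), and everything else is a random stand-in.

```python
# Minimal sketch of the reported evaluation: overall test accuracy plus a
# per-origin breakdown; predictions and sources are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 4, size=10_000)      # crystals / precipitate / clear / other
y_pred = np.where(rng.random(10_000) < 0.94, y_true, (y_true + 1) % 4)
source = rng.integers(0, 5, size=10_000)      # experimental origin of each image

print(f"overall accuracy: {(y_true == y_pred).mean():.3f}")
for s in np.unique(source):                   # 'irrespective of origin' check
    m = source == s
    print(f"source {s}: accuracy {(y_true[m] == y_pred[m]).mean():.3f}")
```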
Project description: Convolutional Neural Networks (CNNs) are image analysis techniques that have been applied to image classification in various fields. In this study, we applied a CNN to classify scanning electron microscopy (SEM) images of pharmaceutical raw material powders to determine whether a CNN can evaluate particle morphology. We tested 10 pharmaceutical excipients with widely different particle morphologies. SEM images for each excipient were acquired and divided into training, validation, and test sets. Classification models were constructed by applying transfer learning to pretrained CNN models such as VGG16 and ResNet50. The results of a 5-fold cross-validation showed that the classification accuracy of the CNN model was sufficiently high with either pretrained model and that the type of excipient could be classified with high accuracy. The results suggest that the CNN model can detect differences in particle morphology, such as particle size, shape, and surface condition. By applying Grad-CAM to the constructed CNN model, we succeeded in finding particularly important regions in the particle images of the excipients. These findings indicate that CNNs can be applied to the identification and characterization of raw material powders in pharmaceutical development.
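Transfer learning as described here typically means keeping the ImageNet-pretrained convolutional features and training a new classification head for the 10 excipient classes. A sketch using torchvision's ResNet50; freezing the backbone, the layer names, and the optimizer settings are assumptions, not the authors' code.

```python
# Hedged transfer-learning sketch: pretrained backbone, new 10-class head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # ten pharmaceutical excipients

model = models.resnet50(weights="DEFAULT")
for p in model.parameters():                  # freeze the ImageNet features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head (trainable)

# Only the new head is optimized; SEM images would be resized/normalized to
# the pretrained model's expected input before the forward pass.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```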
Project description: In the absence of accurate medical records, it is critical to correctly classify implant fixture systems using periapical radiographs to provide accurate diagnoses and treatments to patients or to respond to complications. The purpose of this study was to evaluate whether deep neural networks can identify four different types of implants on intraoral radiographs. In this study, images of 801 patients who underwent periapical radiography between 2005 and 2019 at Yonsei University Dental Hospital were used. Images containing the following four types of implants were selected: Brånemark Mk TiUnite, Dentium Implantium, Straumann Bone Level, and Straumann Tissue Level. SqueezeNet, GoogLeNet, ResNet-18, MobileNet-v2, and ResNet-50 were tested to determine the optimal pre-trained network architecture. The accuracy, precision, recall, and F1 score were calculated for each network from a confusion matrix. All five models showed a test accuracy exceeding 90%. SqueezeNet and MobileNet-v2, small networks with fewer than four million parameters, showed accuracies of approximately 96% and 97%, respectively. The results of this study confirm that convolutional neural networks can classify the four implant fixtures with high accuracy, even with a relatively small network and a small number of images. This may help avoid the unnecessary treatments and medical expenses that result from not knowing the exact implant type.
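Deriving accuracy, precision, recall, and F1 from a confusion matrix is standard; a sketch for the four implant classes, with random stand-in predictions in place of any one network's actual test output.

```python
# Sketch: accuracy, precision, recall, and F1 from a confusion matrix for the
# four implant systems; `y_true`/`y_pred` are random stand-ins.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

rng = np.random.default_rng(2)
y_true = rng.integers(0, 4, size=400)         # four implant fixture classes
y_pred = np.where(rng.random(400) < 0.96, y_true, (y_true + 1) % 4)

cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()            # correct predictions lie on the diagonal
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"accuracy {accuracy:.3f}  precision {prec:.3f}  recall {rec:.3f}  F1 {f1:.3f}")
```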
Project description: The quantification and identification of cellular phenotypes from high-content microscopy images has proven very useful for understanding biological activity in response to different drug treatments. The traditional approach has been to use classical image analysis to quantify changes in cell morphology, which requires several nontrivial and independent analysis steps. Recently, convolutional neural networks have emerged as a compelling alternative, offering good predictive performance and the possibility of replacing traditional workflows with a single network architecture. In this study, we applied the pretrained deep convolutional neural networks ResNet50, InceptionV3, and InceptionResnetV2 to predict cell mechanisms of action in response to chemical perturbations for two cell profiling datasets from the Broad Bioimage Benchmark Collection. These networks were pretrained on ImageNet, enabling much quicker model training. We obtained higher predictive accuracy than previously reported, between 95% and 97%. The ability to quickly and accurately distinguish between different cell morphologies from a scarce amount of labeled data illustrates the combined benefit of transfer learning and deep convolutional neural networks for interrogating cell-based images.
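A hedged sketch of lining up ImageNet-pretrained backbones as fixed feature extractors for a mechanism-of-action classifier: torchvision supplies ResNet50 and InceptionV3 (InceptionResnetV2 is not in torchvision; timm's `inception_resnet_v2` is a common substitute), the random batches stand in for the Broad images, and replacing `fc` with an identity is a torchvision idiom, not the authors' pipeline.

```python
# Hedged sketch: pretrained backbones as fixed feature extractors for a
# mechanism-of-action classifier; inputs are random stand-ins.
import torch
import torch.nn as nn
from torchvision import models

extractors = {
    "ResNet50": (models.resnet50(weights="DEFAULT"), 224),
    "InceptionV3": (models.inception_v3(weights="DEFAULT"), 299),
    # InceptionResnetV2 is not in torchvision; see timm's `inception_resnet_v2`.
}

for name, (model, size) in extractors.items():
    model.fc = nn.Identity()                  # drop the ImageNet head
    model.eval()
    with torch.no_grad():
        feats = model(torch.randn(8, 3, size, size))
    print(name, tuple(feats.shape))           # pooled features for a small MoA classifier
```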
Project description: Neurodegenerative disorders such as Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) significantly impact brain function and cognition. Advanced neuroimaging techniques, particularly Magnetic Resonance Imaging (MRI), play a crucial role in diagnosing these conditions by detecting structural abnormalities. This study leverages the ADNI and OASIS datasets, renowned for their extensive MRI data, to develop effective models for detecting AD and MCI. Three sets of tests were conducted to evaluate models trained on the ADNI and OASIS datasets: multi-class classification (AD vs. Cognitively Normal (CN) vs. MCI) and two binary classifications (AD vs. CN, and MCI vs. CN). Key preprocessing techniques such as Gaussian filtering, contrast enhancement, and resizing were applied to both datasets. Additionally, skull stripping using U-Net was employed to remove the skull before feature extraction. Several prominent deep learning architectures, including DenseNet-201, EfficientNet-B0, ResNet-50, ResNet-101, and ResNet-152, were investigated to identify subtle patterns associated with AD and MCI. Transfer learning from models pre-trained on large datasets was employed to enhance performance on AD and MCI detection. ResNet-101 exhibited superior performance compared to the other models, achieving 98.21% accuracy on the ADNI dataset and 97.45% accuracy on the OASIS dataset in the multi-class task encompassing AD, CN, and MCI. It also performed well in the binary task distinguishing AD from CN. ResNet-152 excelled particularly in the binary classification between MCI and CN on the OASIS dataset. These findings underscore the utility of deep learning models in accurately identifying and distinguishing neurodegenerative diseases, showcasing their potential for enhancing clinical diagnosis and treatment monitoring.
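The named preprocessing steps map naturally onto OpenCV calls. A sketch for a single slice, where the kernel size, CLAHE as the specific contrast-enhancement method, and the 224x224 target are illustrative assumptions; the U-Net skull-stripping step would precede these and is not reproduced here.

```python
# Sketch of the named preprocessing steps on one slice; parameters are
# illustrative assumptions, not the authors' settings.
import cv2
import numpy as np

slice_img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in MRI slice

smoothed = cv2.GaussianBlur(slice_img, (5, 5), sigmaX=0)       # Gaussian filtering
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(smoothed)                               # contrast enhancement
resized = cv2.resize(enhanced, (224, 224), interpolation=cv2.INTER_AREA)
```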
Project description: Purpose: To assess the capability of deep convolutional neural networks to classify anatomical location and projection from a series of 48 standard views of racehorse limbs. Materials and methods: Radiographs (N = 9504) of horse limbs from image sets made for veterinary inspections by 10 independent veterinary clinics were used to train, validate, and test (116, 40 and 42 radiographs, respectively) six deep learning architectures available as part of the open source machine learning framework PyTorch. The batch size was further investigated for the deep learning architectures with the best top-1 accuracy. Results: The top-1 accuracy of the six deep learning architectures ranged from 0.737 to 0.841. The top-1 accuracy of the best deep learning architecture (ResNet-34) ranged from 0.809 to 0.878, depending on batch size. ResNet-34 (batch size = 8) achieved the highest top-1 accuracy (0.878), and the majority (91.8%) of misclassifications were due to laterality errors. Class activation maps indicated that joint morphology, not side markers or other non-anatomical image regions, drove the model decisions. Conclusions: Deep convolutional neural networks can classify equine pre-import radiographs into the 48 standard views, including moderate discrimination of laterality, independent of side marker presence.
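A hedged sketch of the batch-size comparison: retrain the best architecture at several batch sizes and compare top-1 accuracy. torchvision's ResNet-34 (randomly initialized here) and a tiny tensor dataset stand in for the authors' trained model and the 48-view radiograph data.

```python
# Hedged sketch of the batch-size sweep; the data and training recipe are
# stand-ins, not the authors' code.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

data = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 48, (64,)))

for batch_size in (4, 8, 16, 32):
    model = models.resnet34(num_classes=48)   # 48 standard views
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    model.train()
    for x, y in DataLoader(data, batch_size=batch_size, shuffle=True):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    # ...evaluate top-1 accuracy on a held-out split here and compare...
```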
Project description: Wheat blast is a threat to global wheat production, and limited blast-resistant cultivars are available. Current estimation of wheat spike blast severity relies on human assessment, which can have limitations. Reliable visual disease estimations paired with Red Green Blue (RGB) images of wheat spike blast can be used to train deep convolutional neural networks (CNNs) for disease severity (DS) classification. Inter-rater agreement analysis was used to measure the reliability of the raters who collected and classified the data obtained under controlled conditions. We then trained CNN models to classify wheat spike blast severity. Inter-rater agreement analysis showed high accuracy and low bias before model training. Results showed that the trained CNN models provide a promising approach to classifying images into the three wheat blast severity categories, with the models trained on non-matured and matured spike images showing the highest precision, recall, and F1 scores. This high classification accuracy could serve as a basis for facilitating wheat spike blast phenotyping in the future.
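Inter-rater agreement over ordered severity classes is commonly quantified with a weighted Cohen's kappa; a sketch with hypothetical ratings, since the paper's exact agreement statistic and data are not given here.

```python
# Sketch of an inter-rater agreement check before training; weighted Cohen's
# kappa is one standard choice for ordered severity classes (an assumption),
# and the ratings are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
rater_a = rng.integers(0, 3, size=200)        # three severity categories
rater_b = np.where(rng.random(200) < 0.9, rater_a, (rater_a + 1) % 3)

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa: {kappa:.3f}")         # high agreement supports using the labels
```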
Project description: In computer-aided analysis of cardiac MRI data, the left ventricle (LV) and myocardium are segmented to quantify LV ejection fraction and LV mass. Segmentation follows the identification of the short axis slice coverage, where automatic classification of the slice range of interest is preferable. Standard cardiac image post-processing guidelines indicate the importance of correctly identifying the short axis slice range for accurate quantification. We investigated the feasibility of applying transfer learning of deep convolutional neural networks (CNNs) to automatically classify the short axis slice range, as transfer learning is well suited to medical image data, where labeled data are scarce and expensive to obtain. The short axis slice images were classified as out-of-apical, apical-to-basal, or out-of-basal on the basis of the slice location in the LV. We developed a custom user interface to conveniently label image slices into one of the three categories for the generation of training data, and we evaluated the performance of transfer learning in nine popular deep CNNs. Evaluation with unseen test data indicated that, among the CNNs, the fine-tuned VGG16 produced the highest values in all evaluation categories considered and appeared to be the most appropriate choice for the cardiac slice range classification.
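"Fine-tuned VGG16" here plausibly means adapting the whole pretrained network to the three slice-range classes rather than freezing it. A sketch with torchvision layer names: the three-class head swap follows the abstract, while the discriminative learning rates are an assumption about the fine-tuning recipe.

```python
# Hedged sketch of a fine-tuned VGG16 for the three slice-range classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="DEFAULT")
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)
# classes: out-of-apical, apical-to-basal, out-of-basal

optimizer = torch.optim.SGD(
    [
        {"params": model.features.parameters(), "lr": 1e-4},    # pretrained conv layers
        {"params": model.classifier.parameters(), "lr": 1e-3},  # adapted head
    ],
    lr=1e-3,
    momentum=0.9,
)
```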