Project description:Breast cancer is among the cancers with the highest morbidity worldwide and has become a major public health issue. Early diagnosis can increase the chance of successful treatment and survival, but it is a challenging and time-consuming task that relies on the experience of pathologists. Automatic diagnosis of breast cancer from histopathological images therefore plays a significant role for patients and their prognosis. Traditional feature extraction methods, however, can only extract low-level image features and require prior knowledge to select useful features, introducing substantial human influence. Deep learning techniques can extract high-level abstract features from images automatically. We therefore introduce deep learning to analyze histopathological images of breast cancer via supervised and unsupervised deep convolutional neural networks. First, we adapted the Inception_V3 and Inception_ResNet_V2 architectures to binary and multi-class classification of breast cancer histopathological images using transfer learning. Then, to counter the class imbalance across subclasses, we balanced the subclasses, with Ductal Carcinoma as the baseline, by flipping images vertically and horizontally and rotating them counterclockwise by 90 and 180 degrees. Our experimental results for supervised histopathological image classification of breast cancer, and the comparison with results from other studies, demonstrate that Inception_V3- and Inception_ResNet_V2-based classification is superior to existing methods, and that the Inception_ResNet_V2 network is currently the best-performing deep learning architecture for diagnosing breast cancer from histopathological images. We therefore used Inception_ResNet_V2 to extract features from breast cancer histopathological images for unsupervised analysis. We also constructed a new autoencoder network that transforms the features extracted by Inception_ResNet_V2 into a low-dimensional space for clustering analysis of the images. The experimental results demonstrate that our proposed autoencoder network yields better clustering results than features extracted by the Inception_ResNet_V2 network alone. All of our experimental results demonstrate that Inception_ResNet_V2-based deep transfer learning provides a new means of analyzing histopathological images of breast cancer.
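A minimal sketch (not the authors' code) of the unsupervised pipeline described above: Inception_ResNet_V2 features, a small autoencoder, and k-means clustering. The feature dimensionality, bottleneck size, cluster count, and training settings are illustrative assumptions.

```python
# Sketch: Inception_ResNet_V2 feature extraction -> autoencoder -> k-means.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2, inception_resnet_v2
from sklearn.cluster import KMeans

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3); returns (n, 1536) features."""
    base = InceptionResNetV2(include_top=False, weights="imagenet", pooling="avg")
    x = inception_resnet_v2.preprocess_input(images.copy())
    return base.predict(x, batch_size=16)

def build_autoencoder(input_dim=1536, latent_dim=64):
    # Dense autoencoder compressing the CNN features to a low-dimensional space.
    inp = layers.Input(shape=(input_dim,))
    z = layers.Dense(256, activation="relu")(inp)
    z = layers.Dense(latent_dim, activation="relu", name="latent")(z)
    out = layers.Dense(256, activation="relu")(z)
    out = layers.Dense(input_dim, activation="linear")(out)
    auto = Model(inp, out)
    auto.compile(optimizer="adam", loss="mse")
    return auto, Model(inp, z)

def cluster_images(images, n_clusters=2):
    feats = extract_features(images)
    auto, encoder = build_autoencoder(input_dim=feats.shape[1])
    auto.fit(feats, feats, epochs=50, batch_size=32, verbose=0)
    latent = encoder.predict(feats)           # low-dimensional representation
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(latent)
```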
Project description:Identifying forest types is vital for evaluating the ecological, economic, and social benefits provided by forests, and for protecting, managing, and sustaining them. Although traditionally based on expert observation, forest type identification increasingly relies on technologies such as artificial intelligence (AI). Advanced methods such as deep learning can make forest species recognition faster and easier. In this study, the deep network models ResNet18, GoogLeNet, VGG19, Inceptionv3, MobileNetv2, DenseNet201, InceptionResNetv2, EfficientNet, and ShuffleNet, all pre-trained on the ImageNet dataset, were adapted to a new dataset using transfer learning. These models have different architectures, allowing a wide range of performance evaluation. Model performance was evaluated with accuracy, recall, precision, F1-score, specificity, and the Matthews correlation coefficient. ShuffleNet was proposed as a lightweight network model that achieves high performance with low computational power and resource requirements; with customisation, it reached an accuracy close to that of the other models while remaining efficient. This study reveals that deep network models are an effective tool for forest species recognition and makes an important contribution to the conservation and management of forests.
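An illustrative sketch of the evaluation described above: computing accuracy, recall, precision, F1-score, specificity, and the Matthews correlation coefficient from model predictions. The labels and predictions below are placeholders, not study data.

```python
# Multi-class evaluation metrics, including per-class specificity averaged macro-style.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, confusion_matrix)

def evaluate(y_true, y_pred, average="macro"):
    cm = confusion_matrix(y_true, y_pred)
    # Per-class specificity: TN / (TN + FP), averaged across classes.
    specificities = []
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp
        fn = cm[k, :].sum() - tp
        tn = cm.sum() - tp - fp - fn
        specificities.append(tn / (tn + fp) if (tn + fp) else 0.0)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred, average=average),
        "precision": precision_score(y_true, y_pred, average=average),
        "f1": f1_score(y_true, y_pred, average=average),
        "specificity": float(np.mean(specificities)),
        "mcc": matthews_corrcoef(y_true, y_pred),
    }

# Example with dummy labels for a three-class forest-type problem.
print(evaluate([0, 1, 2, 2, 1, 0], [0, 1, 2, 1, 1, 0]))
```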
Project description:Background: Myopia is the leading cause of visual impairment and affects millions of children worldwide. Timely, annual manual optometric screening of the entire at-risk population improves outcomes, but screening is challenging due to the limited availability and training of assessors and the economic burden imposed by the screenings. Recently, deep learning and computer vision have shown powerful potential for disease screening; however, these techniques have not been applied to large-scale myopia screening using ocular appearance images. Methods: We trained a deep learning system (DLS) for myopia detection using 2,350 ocular appearance images (7,050 processed pictures) of children aged 6 to 18. Myopia was defined as a spherical equivalent refraction (SER) [the algebraic sum in diopters (D), sphere + 1/2 cylinder] ≤ -0.5 diopters. Saliency maps and gradient-weighted class activation maps (grad-CAM) were used to highlight the regions recognized by VGG-Face. In a prospective clinical trial, 100 ocular appearance images were used to assess the performance of the DLS. Results: The area under the curve (AUC), sensitivity, and specificity of the DLS were 0.9270 (95% CI, 0.8580-0.9610), 81.13% (95% CI, 76.86-85.39%), and 86.42% (95% CI, 82.30-90.54%), respectively. Based on the saliency maps and grad-CAMs, the DLS mainly focused on the eyes, especially the temporal sclera, rather than the background or other parts of the face. In the prospective clinical trial, the DLS achieved better diagnostic performance than the ophthalmologists in terms of sensitivity [DLS: 84.00% (95% CI, 73.50-94.50%) versus ophthalmologists: 64.00% (95% CI, 48.00-72.00%)] and specificity [DLS: 74.00% (95% CI, 61.40-86.60%) versus ophthalmologists: 53.33% (95% CI, 30.00-66.00%)]. We also computed AUCs for subgroups stratified by sex and age; the DLS achieved comparable AUCs for children of different sexes and ages. Conclusions: This study is, to our knowledge, the first to apply deep learning to myopia screening using ocular appearance images, and it achieved high screening accuracy, enabling remote monitoring of refractive status in children with myopia. The application of our DLS will directly benefit public health and relieve the substantial burden imposed by myopia-associated visual impairment or blindness.
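A minimal Grad-CAM sketch along the lines of the saliency analysis above, using VGG16 from Keras as a stand-in for VGG-Face (which is not bundled with Keras). The chosen convolutional layer name, dummy input, and lack of preprocessing are assumptions for illustration only.

```python
# Grad-CAM: weight the last convolutional feature maps by pooled gradients of the
# class score, then ReLU and normalize to obtain a coarse localization heatmap.
import tensorflow as tf
from tensorflow.keras.applications import VGG16

def grad_cam(model, image, conv_layer_name, class_index=None):
    """image: (1, H, W, 3) preprocessed batch; returns a heatmap in [0, 1]."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output,
                                 model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)         # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

model = VGG16(weights="imagenet")
dummy = tf.random.uniform((1, 224, 224, 3))        # placeholder input image
heatmap = grad_cam(model, dummy, conv_layer_name="block5_conv3")
print(heatmap.shape)
```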
Project description:Ductal carcinoma in situ (DCIS) is a non-invasive breast cancer that can progress into invasive ductal carcinoma (IDC). Studies suggest DCIS is often overtreated, since a considerable proportion of DCIS lesions may never progress into IDC. Lower-grade lesions have a lower progression speed and risk, possibly allowing treatment de-escalation. However, studies show significant inter-observer variation in DCIS grading. Automated image analysis may provide an objective solution to the high subjectivity of DCIS grading by pathologists. In this study, we developed and evaluated a deep learning-based DCIS grading system. The system was developed using the consensus DCIS grade of three expert observers on a dataset of 1186 DCIS lesions from 59 patients. Inter-observer agreement, measured by quadratic weighted Cohen's kappa, was used to evaluate the system and compare its performance to that of the expert observers. We present an analysis of the lesion-level and patient-level inter-observer agreement on an independent test set of 1001 lesions from 50 patients. At the lesion level, the deep learning system (dl) achieved, on average, slightly higher agreement with the three observers (o1, o2 and o3) (κo1,dl = 0.81, κo2,dl = 0.53 and κo3,dl = 0.40) than the observers achieved amongst each other (κo1,o2 = 0.58, κo1,o3 = 0.50 and κo2,o3 = 0.42). At the patient level, the deep learning system achieved agreement with the observers (κo1,dl = 0.77, κo2,dl = 0.75 and κo3,dl = 0.70) similar to that of the observers amongst each other (κo1,o2 = 0.77, κo1,o3 = 0.75 and κo2,o3 = 0.72). The deep learning system reflected the grading spectrum of DCIS better than two of the observers. In conclusion, we developed a deep learning-based DCIS grading system that achieved performance similar to expert observers. To the best of our knowledge, this is the first automated system for grading DCIS that could assist pathologists by providing robust and reproducible second opinions on DCIS grade.
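A short sketch of the agreement metric used above: quadratic weighted Cohen's kappa between two graders (or between the system and a grader). The grade vectors below are dummy values, not study data.

```python
# Quadratic weighted Cohen's kappa penalizes disagreements by the squared
# distance between ordinal grades, which suits DCIS grades 1-3.
from sklearn.metrics import cohen_kappa_score

observer_1 = [1, 2, 3, 2, 1, 3, 2]   # DCIS grades, illustrative only
dl_system  = [1, 2, 3, 3, 1, 2, 2]

kappa = cohen_kappa_score(observer_1, dl_system, weights="quadratic")
print(f"quadratic weighted kappa: {kappa:.2f}")
```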
Project description:Breast cancer tumor grade is strongly associated with patient survival. In current clinical practice, pathologists assign tumor grade after visual analysis of tissue specimens. However, several studies show significant inter-observer variation in breast cancer grading. Computer-based breast cancer grading methods have been proposed, but they only work on specifically selected tissue areas and/or require labor-intensive annotations to be applied to new datasets. In this study, we trained and evaluated a deep learning-based breast cancer grading model that works on whole-slide histopathology images. The model was developed using whole-slide images from 706 young (< 40 years) invasive breast cancer patients with corresponding tumor grade (low/intermediate vs. high) and its constituent components: nuclear grade, tubule formation, and mitotic rate. The performance of the model was evaluated using Cohen's kappa on an independent test set of 686 patients, with annotations by expert pathologists as ground truth. The predicted low/intermediate (n = 327) and high (n = 359) grade groups were used to perform survival analysis. The deep learning system distinguished low/intermediate versus high tumor grade with a Cohen's kappa of 0.59 (80% accuracy) compared to expert pathologists. In subsequent survival analysis, the two groups predicted by the system were found to have significantly different overall survival (OS) and disease/recurrence-free survival (DRFS/RFS) (p < 0.05). Univariate Cox hazard regression analysis showed statistically significant hazard ratios (p < 0.05). After adjusting for clinicopathologic features and stratifying by molecular subtype, the hazard ratios showed a trend but lost statistical significance for all endpoints. In conclusion, we developed a deep learning-based model for automated grading of breast cancer on whole-slide images. The model distinguishes between low/intermediate and high grade tumors and reveals a trend in the survival of the two predicted groups.
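An illustrative sketch of the survival analysis step described above, using the lifelines package: a log-rank test and a univariate Cox regression comparing the predicted low/intermediate and high grade groups. The dataframe is synthetic; column names, follow-up times, and events are assumptions.

```python
# Log-rank test and univariate Cox proportional hazards model on synthetic data.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "time":       [60, 42, 75, 12, 90, 30, 55, 20, 66, 18],  # follow-up (months)
    "event":      [0, 1, 0, 1, 1, 1, 0, 0, 0, 1],            # 1 = event observed
    "high_grade": [0, 1, 0, 1, 0, 1, 1, 0, 0, 1],            # model-predicted group
})

low, high = df[df.high_grade == 0], df[df.high_grade == 1]
lr = logrank_test(low.time, high.time,
                  event_observed_A=low.event, event_observed_B=high.event)
print("log-rank p-value:", lr.p_value)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")   # univariate: grade only
cph.print_summary()
```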
Project description:Accurate detection of invasive breast cancer (IC) can provide decision support to pathologists as well as improve downstream computational analyses, where detection of IC is a first step. Tissue containing IC is characterized by the presence of specific morphological features, which can be learned by convolutional neural networks (CNNs). Here, we compare the use of a single CNN model versus an ensemble of several base models with the same CNN architecture, and we evaluate prediction performance as well as variability across the ensemble-based model predictions. Two in-house datasets comprising 587 whole-slide images (WSIs) are used to train an ensemble of ten InceptionV3 models whose consensus is used to determine the presence of IC. A novel visualisation strategy was developed to communicate ensemble agreement spatially. Performance was evaluated on an internal test set with 118 WSIs and on an additional external dataset (TCGA breast cancer) with 157 WSIs. We observed that the ensemble-based strategy outperformed the single-CNN alternative with respect to tile-level accuracy in 89% of all WSIs in the test set. The overall accuracy was 0.92 (Dice coefficient, 0.90) for the ensemble model and 0.85 (Dice coefficient, 0.83) for the single CNN in the internal test set. On TCGA, the ensemble outperformed the single CNN in 96.8% of the WSIs, with an accuracy of 0.87 (Dice coefficient 0.89), whereas the single model provided an accuracy of 0.75 (Dice coefficient 0.78). The results suggest that an ensemble-based modeling strategy for invasive breast cancer detection consistently outperforms the conventional single-model alternative. Furthermore, visualisation of the ensemble agreement and confusion areas provides a direct visual interpretation of the results. High-performing cancer detection can provide decision support in the routine pathology setting as well as facilitate downstream computational analyses.
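A hedged sketch of ensemble consensus on tile-level predictions as described above: the per-tile probabilities of several identically structured CNNs are averaged and thresholded, and per-tile agreement is the fraction of members voting with the consensus. The number of members, threshold, and Dice helper are illustrative assumptions.

```python
# Ensemble consensus and Dice coefficient on tile-level predictions.
import numpy as np

def ensemble_consensus(prob_maps, threshold=0.5):
    """prob_maps: (n_models, n_tiles) array of invasive-cancer probabilities."""
    mean_prob = prob_maps.mean(axis=0)
    consensus = (mean_prob >= threshold).astype(int)
    votes = (prob_maps >= threshold).astype(int)
    agreement = (votes == consensus).mean(axis=0)   # per-tile member agreement
    return consensus, agreement

def dice(pred, target):
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

probs = np.random.rand(10, 1000)           # 10 ensemble members, 1000 tiles (dummy)
labels = np.random.randint(0, 2, 1000)     # dummy tile-level ground truth
consensus, agreement = ensemble_consensus(probs)
print("Dice:", dice(consensus, labels), "mean agreement:", agreement.mean())
```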
Project description:The COVID-19 epidemic has affected everyday life, welfare, and the wealth of nations. Inefficiency, a lack of medical diagnostics, and inadequately trained healthcare professionals are among the most significant barriers to arresting the spread of this disease. Blockchain offers enormous promise for providing consistent, reliable, real-time, and smart health facilities offsite. Patients infected with COVID-19 often present with a lung infection, which can be detected and analyzed using CT scan images; however, manual assessment is time-consuming and liable to error. Thus, the assessment of chest CT scans should be automated. The proposed method uses deep transfer learning techniques to analyze CT scan images automatically. Transfer learning allows network parameters tuned on huge databases to be reused, so pretrained networks can be applied effectively to small datasets. We propose a model built on VGG19, a convolutional neural network, to classify individuals infected with coronavirus using CT radiograph images. We used a globally accessible CT scan database that included 2500 CT images with COVID-19 infection and 2500 CT images without COVID-19 infection. An extensive experiment was conducted using three deep learning methods: VGG19, Xception, and a baseline CNN. The experimental findings indicate that the proposed model considerably outperforms the Xception and baseline CNN models. The results demonstrate that the proposed model achieves an accuracy of up to 95% and an area under the receiver operating characteristic curve of up to 95%.
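A minimal transfer-learning sketch along the lines described above: VGG19 pretrained on ImageNet with a frozen backbone and a new binary head for COVID versus non-COVID CT images. The input size, head layers, and hyperparameters are assumptions, not the study's configuration.

```python
# VGG19 backbone (ImageNet weights, frozen) with a new binary classification head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

def build_covid_classifier(input_shape=(224, 224, 3)):
    base = VGG19(include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False                       # freeze the convolutional backbone
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # COVID vs non-COVID probability
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

model = build_covid_classifier()
model.summary()
```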
Project description:The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit staining quality comparable to their immunohistochemically stained counterparts in terms of nuclear detail, membrane clearness, and absence of staining artifacts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflows.
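A highly simplified conditional-GAN sketch (pix2pix-style) of the image-to-image translation idea described above: a generator maps autofluorescence inputs to bright-field-like outputs, and a discriminator judges (input, output) pairs. The tiny networks, loss weights, image sizes, and training step are illustrative assumptions and do not reproduce the authors' architecture.

```python
# Minimal conditional GAN: adversarial loss plus L1 reconstruction loss.
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_generator():
    inp = layers.Input((256, 256, 1))                   # autofluorescence channel
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 1, activation="sigmoid")(x)  # bright-field-like RGB
    return Model(inp, out)

def tiny_discriminator():
    src = layers.Input((256, 256, 1))
    tgt = layers.Input((256, 256, 3))
    x = layers.Concatenate()([src, tgt])                # condition on the input image
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same")(x)        # PatchGAN-style logits
    return Model([src, tgt], out)

gen, disc = tiny_generator(), tiny_discriminator()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt, d_opt = tf.keras.optimizers.Adam(2e-4), tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(autofluor, brightfield, l1_weight=100.0):
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = gen(autofluor, training=True)
        real_logits = disc([autofluor, brightfield], training=True)
        fake_logits = disc([autofluor, fake], training=True)
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits) + \
                 l1_weight * tf.reduce_mean(tf.abs(brightfield - fake))
    g_opt.apply_gradients(zip(gt.gradient(g_loss, gen.trainable_variables),
                              gen.trainable_variables))
    d_opt.apply_gradients(zip(dt.gradient(d_loss, disc.trainable_variables),
                              disc.trainable_variables))
    return g_loss, d_loss

# Usage on random placeholder tensors.
g_loss, d_loss = train_step(tf.random.uniform((2, 256, 256, 1)),
                            tf.random.uniform((2, 256, 256, 3)))
```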
Project description:Background: Breast cancer is one of the most common cancers and the leading cause of cancer death among women worldwide. Genetic predisposition to breast cancer may be associated with mutations in particular genes such as BRCA1/2. Patients who carry a germline pathogenic mutation in the BRCA1/2 genes have a significantly increased risk of developing breast cancer and might benefit from targeted therapy. However, genetic testing is time-consuming and costly. This study aims to predict the risk of gBRCA mutation using the whole-slide pathology features of breast cancer H&E stains together with the patients' gBRCA mutation status. Methods: In this study, we trained a deep convolutional neural network (CNN) based on ResNet on whole-slide images (WSIs) to predict gBRCA mutation in breast cancer. Since the dimensions of a WSI are too large for slide-based training, we divided each WSI into smaller tiles at the original resolution. The tile-based classifications were then combined, by adding the positive classification results, to generate a combined slide-based accuracy. Models were trained on tumor locations annotated and gBRCA mutation status labeled by a designated breast cancer pathologist. Four models were trained on tiles cropped at 5×, 10×, 20×, and 40× magnification, on the assumption that low and high magnifications may provide different levels of information for classification. Results: The trained models were validated on an external dataset containing 17 mutant and 47 wild-type cases. In the external validation dataset, the AUCs (95% CI) of the DL models using 40×, 20×, 10×, and 5× magnification tiles among all cases were 0.766 (0.763-0.769), 0.763 (0.758-0.769), 0.750 (0.738-0.761), and 0.551 (0.526-0.575), respectively, while the corresponding slide-level AUCs among all cases were 0.774 (0.642-0.905), 0.804 (0.676-0.931), 0.828 (0.691-0.966), and 0.635 (0.471-0.798), respectively. The study also identified the influence of histological grade on the accuracy of the prediction. Conclusion: In this paper, the combination of pathology and molecular omics was used to establish a gBRCA mutation risk prediction model, revealing the correlation between whole-slide histopathological images and gBRCA mutation risk. The results indicate that prediction accuracy is likely to improve as the training data expand. The findings demonstrate that deep CNNs can be used to assist pathologists in the detection of gene mutations in breast cancer.
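A hedged sketch of the tile-based workflow described above: cut a whole-slide image (here a plain NumPy array standing in for a WSI) into fixed-size tiles, score each tile with a classifier, and aggregate tile scores into a slide-level prediction. The tile size, mean-based aggregation rule, and the stub classifier are assumptions for illustration.

```python
# Tile a slide, score each tile, and aggregate to a slide-level score.
import numpy as np

def tile_slide(slide, tile_size=512):
    """slide: (H, W, 3) array; yields non-overlapping tiles of size tile_size."""
    h, w, _ = slide.shape
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            yield slide[y:y + tile_size, x:x + tile_size]

def slide_level_score(slide, tile_classifier, tile_size=512):
    tile_scores = [tile_classifier(t) for t in tile_slide(slide, tile_size)]
    # Simple aggregation: mean of tile probabilities (the study combines positive
    # tile classifications; several aggregation rules are possible).
    return float(np.mean(tile_scores)) if tile_scores else 0.0

# Dummy classifier and synthetic "slide" for illustration only.
fake_model = lambda tile: float(tile.mean() > 127)
slide = np.random.randint(0, 256, (2048, 2048, 3), dtype=np.uint8)
print("slide-level gBRCA score:", slide_level_score(slide, fake_model))
```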
Project description:Mastering the molecular mechanisms of breast cancer (BC) can provide an in-depth understanding of BC pathology. This study reviewed existing technologies for diagnosing BC, such as mammography, ultrasound, magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), and summarized the disadvantages of existing cancer diagnostics. The purpose of this article is to use gene expression profiles from The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO) to classify BC samples and normal samples. The method proposed in this article overcomes some of the shortcomings of traditional diagnostic methods: it can diagnose BC more rapidly, with high sensitivity, and without radiation. This study first selected the genes most relevant to cancer through weighted gene co-expression network analysis (WGCNA) and differential expression analysis (DEA). It then used a protein-protein interaction (PPI) network to screen 23 hub genes. Finally, it used a support vector machine (SVM), decision tree (DT), Bayesian network (BN), artificial neural network (ANN), and the convolutional neural networks CNN-LeNet and CNN-AlexNet to process the expression levels of the 23 hub genes. For gene expression profiles, the ANN model has the best performance in classifying cancer samples: the average accuracy over ten runs is 97.36% (±0.34%), the F1 value is 0.8535 (±0.0260), the sensitivity is 98.32% (±0.32%), the specificity is 89.59% (±3.53%), and the AUC is 0.99. In summary, this method effectively classifies cancer samples and normal samples and provides reasonable new ideas for the early diagnosis of cancer in the future.
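A minimal sketch of the final classification step described above: training an ANN (multilayer perceptron) and an SVM on the expression values of the selected hub genes and estimating performance with cross-validation. The random data matrix, 200-sample size, and hyperparameters are placeholders, not TCGA/GEO values.

```python
# Cross-validated ANN and SVM classifiers on a 23-hub-gene expression matrix.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 23))        # 200 samples x 23 hub genes (synthetic)
y = rng.integers(0, 2, size=200)      # 1 = tumor, 0 = normal (synthetic)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: AUC {auc.mean():.2f} +/- {auc.std():.2f}, "
          f"accuracy {acc.mean():.2f} +/- {acc.std():.2f}")
```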