Project description: The COVID-19 epidemic has affected everyday life, welfare, and the wealth of nations. Inefficiency, a lack of medical diagnostics, and inadequately trained healthcare professionals are among the most significant barriers to arresting the spread of the disease. Blockchain offers enormous promise for providing consistent, reliable, real-time, and smart health services offsite. Patients infected with COVID-19 often present with a lung infection on arrival, which can be detected and analyzed from CT scan images; manual assessment, however, is time-consuming and error-prone, so the evaluation of chest CT scans should be automated. The proposed method uses deep transfer learning to analyze CT scan images automatically. Transfer learning allows network parameters learned on huge databases to be reused, so pretrained networks can be applied effectively to small datasets. We propose a model built on VGG19, a convolutional neural network, to classify individuals infected with coronavirus from CT radiographs. We used a publicly accessible CT scan database containing 2,500 CT images with COVID-19 infection and 2,500 CT images without. An extensive experiment was conducted with three deep learning methods, namely VGG19, Xception, and a baseline CNN. The findings indicate that the proposed model considerably outperforms the Xception and CNN models, achieving an accuracy of up to 95% and an area under the receiver operating characteristic curve of up to 95%.
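A minimal sketch of the kind of VGG19 transfer-learning setup described above, assuming a binary COVID/non-COVID CT image folder layout; the directory paths, image size, head architecture, and training schedule are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: fine-tuning an ImageNet-pretrained VGG19 for binary COVID / non-COVID CT classification.
# Paths, image size, and hyperparameters are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_scans/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_scans/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Load the pretrained convolutional base without its classifier head and freeze it.
base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),            # simple rescaling in place of VGG-style preprocessing
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID vs. non-COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```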
Project description: The classification of bird species is of significant importance in ornithology, playing a key role in assessing and monitoring environmental dynamics, including habitat modification, migratory behavior, pollution levels, and disease occurrence. Traditional methods of bird classification, such as visual identification, are time-intensive and require a high level of expertise. Audio-based bird species classification is a promising approach for automating identification. This study establishes an audio-based classification system for 264 Eastern African bird species using modified deep transfer learning; in particular, the pretrained EfficientNet architecture was employed. The model is fine-tuned to learn the patterns relevant to this classification task from mel-spectrogram images. The fine-tuned EfficientNet is combined with recurrent neural networks (RNNs), namely the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), which capture the temporal dependencies in the audio signals and thereby improve classification accuracy. The dataset used in this work contains nearly 17,000 bird sound recordings across a diverse range of species. Experiments with several combinations of EfficientNet variants and RNNs show that EfficientNet-B7 with GRU surpasses the other models, with an accuracy of 84.03% and a macro-average precision of 0.8342.
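A minimal sketch of one way to combine an EfficientNet-B7 backbone with a GRU over mel-spectrogram images, as the description suggests; the input size, the choice of collapsing the frequency axis to form the RNN sequence, and the layer sizes are assumptions rather than the study's exact architecture.

```python
# Sketch: EfficientNet-B7 features fed to a GRU along the time axis of a mel spectrogram.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import EfficientNetB7

NUM_CLASSES = 264
inputs = layers.Input(shape=(224, 224, 3))          # mel spectrogram rendered as an RGB image

backbone = EfficientNetB7(weights="imagenet", include_top=False)
feat = backbone(inputs)                              # (batch, h, w, channels)

# Collapse the frequency (height) axis, keep the time (width) axis as a sequence for the GRU.
seq = layers.Lambda(lambda t: tf.reduce_mean(t, axis=1))(feat)   # (batch, w, channels)
seq = layers.GRU(256)(seq)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(seq)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```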
Project description: Background: Myopia is the leading cause of visual impairment and affects millions of children worldwide. Timely, annual manual optometric screening of the entire at-risk population improves outcomes, but screening is challenging due to the limited availability and training of assessors and the economic burden the screenings impose. Recently, deep learning and computer vision have shown powerful potential for disease screening; however, these techniques have not been applied to large-scale myopia screening using ocular appearance images. Methods: We trained a deep learning system (DLS) for myopia detection using 2,350 ocular appearance images (processed from 7,050 pictures) of children aged 6 to 18 years. Myopia was defined as a spherical equivalent refraction (SER) [the algebraic sum in diopters (D), sphere + 1/2 cylinder] of ≤ -0.5 D. Saliency maps and gradient-weighted class activation maps (Grad-CAM) were used to highlight the regions recognized by VGG-Face. In a prospective clinical trial, 100 ocular appearance images were used to assess the performance of the DLS. Results: The area under the curve (AUC), sensitivity, and specificity of the DLS were 0.9270 (95% CI, 0.8580-0.9610), 81.13% (95% CI, 76.86-85.39%), and 86.42% (95% CI, 82.30-90.54%), respectively. Based on the saliency maps and Grad-CAMs, the DLS focused mainly on the eyes, especially the temporal sclera, rather than on the background or other parts of the face. In the prospective clinical trial, the DLS achieved better diagnostic performance than the ophthalmologists in terms of sensitivity [DLS: 84.00% (95% CI, 73.50-94.50%) versus ophthalmologists: 64.00% (95% CI, 48.00-72.00%)] and specificity [DLS: 74.00% (95% CI, 61.40-86.60%) versus ophthalmologists: 53.33% (95% CI, 30.00-66.00%)]. We also computed AUCs for subgroups stratified by sex and age; the DLS achieved comparable AUCs for children of different sexes and ages. Conclusions: This study is the first to apply deep learning to myopia screening using ocular appearance images; it achieved high screening accuracy, enabling remote monitoring of refractive status in children with myopia. The application of our DLS will directly benefit public health and relieve the substantial burden imposed by myopia-associated visual impairment or blindness.
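A minimal Grad-CAM sketch of the kind of region-highlighting analysis mentioned above; a generic ImageNet VGG16 stands in for the VGG-Face backbone used in the study, and the layer name, input size, and random test image are assumptions for illustration only.

```python
# Sketch: Grad-CAM for inspecting which image regions a CNN attends to.
import tensorflow as tf
from tensorflow.keras.applications import VGG16

model = VGG16(weights="imagenet")
grad_model = tf.keras.Model(model.inputs,
                            [model.get_layer("block5_conv3").output, model.output])

def grad_cam(image, class_index):
    """image: (1, 224, 224, 3) float tensor; returns a normalised heatmap of shape (14, 14)."""
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)                # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

heatmap = grad_cam(tf.random.uniform((1, 224, 224, 3)), class_index=0)
print(heatmap.shape)   # (14, 14); upsample and overlay on the input image for visualisation
```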
Project description: The COVID-19 pandemic is causing a major outbreak in more than 150 countries around the world, with a severe impact on the health and lives of many people globally. One of the crucial steps in fighting COVID-19 is the ability to detect infected patients early enough and place them under special care. Detecting the disease from radiography and radiology images is perhaps one of the fastest ways to diagnose patients, and early studies reported specific abnormalities in the chest radiograms of patients infected with COVID-19. Inspired by these works, we study the application of deep learning models to detect COVID-19 patients from their chest radiography images. We first prepare a dataset of 5,000 chest X-rays from publicly available datasets; images exhibiting COVID-19 were identified by a board-certified radiologist. Transfer learning on a subset of 2,000 radiograms was used to train four popular convolutional neural networks, ResNet18, ResNet50, SqueezeNet, and DenseNet-121, to identify COVID-19 in the analyzed chest X-ray images. We evaluated these models on the remaining 3,000 images; most of the networks achieved a sensitivity of 98% (±3%) with a specificity of around 90%. Besides sensitivity and specificity, we also present the receiver operating characteristic (ROC) curve, precision-recall curve, average prediction, and confusion matrix of each model. We further generated heatmaps of lung regions potentially infected by COVID-19 and show that they contain most of the infected areas annotated by our board-certified radiologist. While the achieved performance is very encouraging, further analysis on a larger set of COVID-19 images is required for a more reliable estimate of the accuracy rates. The dataset, model implementations (in PyTorch), and evaluations are made publicly available to the research community at https://github.com/shervinmin/DeepCovid.git.
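A minimal PyTorch sketch of the transfer-learning setup described above, using ResNet18 as one of the four backbones; the dataset paths, transforms, and training schedule are illustrative assumptions and are not taken from the released repository.

```python
# Sketch: ImageNet-pretrained ResNet18 with its head replaced for two classes (COVID / non-COVID).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("chest_xrays/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)     # replace the 1000-class ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```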
Project description: Oral cancer is a prevalent malignancy affecting the oral cavity in the head and neck region. The study of oral malignant lesions is an essential step for clinicians to provide a better treatment plan at an early stage of oral cancer. Deep learning-based computer-aided diagnostic systems have achieved success in many applications and can provide an accurate and timely diagnosis of oral malignant lesions. In biomedical image classification, obtaining a large training dataset is a challenge; transfer learning handles this efficiently by reusing general features learned from a dataset of natural images and adapting them directly to a new image dataset. In this work, to achieve an effective deep learning-based computer-aided system, Oral Squamous Cell Carcinoma (OSCC) histopathology images are classified using two proposed approaches. In the first approach, transfer learning-assisted deep convolutional neural networks (DCNNs) are considered in order to identify the model best suited to differentiating between benign and malignant lesions. To handle the challenge of the small dataset and further increase training efficiency, the pretrained VGG16, VGG19, ResNet50, InceptionV3, and MobileNet models are fine-tuned by training half of the layers and leaving the others frozen. In the second approach, a baseline DCNN architecture with 10 convolutional layers, trained from scratch, is proposed. In addition, a comparative analysis of these models is carried out in terms of classification accuracy and other performance measures. The experimental results demonstrate that ResNet50 substantially outperforms the other fine-tuned DCNN models as well as the proposed baseline model, with an accuracy of 96.6% and precision and recall values of 97% and 96%, respectively.
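A minimal sketch of the fine-tuning strategy described above, freezing the first half of a pretrained backbone's layers and training the rest; ResNet50 is used as the example backbone, and the classification head and image size are assumptions.

```python
# Sketch: partial fine-tuning — freeze the first half of the layers, train the second half plus a new head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

cutoff = len(base.layers) // 2
for layer in base.layers[:cutoff]:
    layer.trainable = False        # first half stays frozen
for layer in base.layers[cutoff:]:
    layer.trainable = True         # second half is fine-tuned

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),   # benign vs. malignant OSCC
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```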
Project description: Pseudouridine (Ψ) is one of the most prevalent RNA modifications and has been confirmed to occur in rRNA, mRNA, tRNA, and nuclear/nucleolar RNA. Identifying Ψ sites therefore has vital significance in academic research, drug development, and gene therapies. Several laboratory techniques for Ψ identification have been introduced over the years; although they produce satisfactory results, they are costly, time-consuming, and require skilled expertise. As the volume and length of sequenced RNA keep growing, an efficient computational method for identifying pseudouridine sites is very important. In this paper, we propose a multi-channel convolutional neural network using binary encoding. We employ k-fold cross-validation and grid search to tune the hyperparameters. We evaluated the model's performance on independent datasets and found promising results, demonstrating that our method can be used to identify pseudouridine sites for associated purposes. We have also implemented an easily accessible web server at http://103.99.176.239/ipseumulticnn/.
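A minimal sketch of a binary-encoded, multi-channel 1-D CNN of the kind described above; the sequence window length (21 nt), kernel sizes, and filter counts are assumptions, not the authors' exact settings.

```python
# Sketch: binary (one-hot) encoding of RNA windows feeding parallel Conv1D channels.
import numpy as np
from tensorflow.keras import layers, Model

ALPHABET = "ACGU"
SEQ_LEN = 21

def one_hot(seq):
    """Binary-encode an RNA sequence into a (SEQ_LEN, 4) matrix."""
    arr = np.zeros((SEQ_LEN, len(ALPHABET)), dtype=np.float32)
    for i, base in enumerate(seq[:SEQ_LEN]):
        arr[i, ALPHABET.index(base)] = 1.0
    return arr

inputs = layers.Input(shape=(SEQ_LEN, 4))
channels = []
for k in (3, 5, 7):                        # parallel convolutional channels with different kernel sizes
    x = layers.Conv1D(32, kernel_size=k, padding="same", activation="relu")(inputs)
    x = layers.GlobalMaxPooling1D()(x)
    channels.append(x)

x = layers.Concatenate()(channels)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # Ψ site vs. non-Ψ site
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

print(one_hot("AUGGCUACGUUAGCAUGGCUA").shape)   # (21, 4)
```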
Project description: Deep learning is being employed for disease detection and classification based on medical images to support clinical decision making. It typically requires large amounts of labelled data; however, the sample size of such medical image datasets is generally small. This study proposes a novel training framework for building deep learning models for disease detection and classification with small datasets. Our approach is based on a hierarchical classification method in which the healthy/disease information from the first model is effectively utilized, via transfer learning, to build subsequent models that classify the disease into its sub-types. To improve accuracy, multiple input datasets were used and a stacking ensemble method was employed for the final classification. To demonstrate the method's performance, we used a labelled dataset extracted from volumetric ophthalmic optical coherence tomography data for 156 healthy and 798 glaucoma eyes, in which the glaucoma eyes were further labelled into four sub-types. The average weighted accuracy and Cohen's kappa for three randomized test datasets were 0.839 and 0.809, respectively. Our approach outperformed the flat classification method by 9.7% when using smaller training datasets. The results suggest that the framework can perform accurate classification with a small number of medical images.
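A minimal sketch of the hierarchical transfer idea: train a healthy-vs-glaucoma model first, then reuse its feature extractor to initialise a second model that classifies glaucoma into sub-types. The backbone, shapes, and the suggestion of a logistic-regression stacker are illustrative assumptions, not the study's actual networks.

```python
# Sketch: hierarchical classification with a shared, transferred feature extractor.
from tensorflow.keras import layers, models

def feature_extractor(input_shape=(224, 224, 1)):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
    ])

# Stage 1: healthy vs. glaucoma.
backbone = feature_extractor()
stage1 = models.Sequential([backbone, layers.Dense(1, activation="sigmoid")])
stage1.compile(optimizer="adam", loss="binary_crossentropy")
# stage1.fit(healthy_vs_glaucoma_ds, ...)

# Stage 2: reuse the stage-1 backbone (transfer) to classify the four glaucoma sub-types.
stage2 = models.Sequential([backbone, layers.Dense(4, activation="softmax")])
stage2.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# stage2.fit(glaucoma_subtype_ds, ...)

# Predictions from several such models trained on different input datasets can then be
# combined by a stacking ensemble (e.g. a logistic regression over their output probabilities).
```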
Project description: Weather recognition is crucial because of its significant impact on many aspects of daily life, such as weather prediction, environmental monitoring, tourism, and energy production. Several studies have already investigated image-based weather recognition; however, previous work has addressed only a few types of weather phenomena and with insufficient accuracy. In this paper, we propose a transfer learning CNN framework for classifying air temperature levels from images of human clothing. The framework incorporates several deep transfer learning approaches: DeepLabV3 Plus for semantic segmentation, and BigTransfer (BiT), Vision Transformer (ViT), ResNet101, VGG16, VGG19, and DenseNet121 for classification. We also collected a dataset, the Human Clothing Image Dataset (HCID), consisting of 10,000 images in two categories (High and Low air temperature). All models were evaluated using standard classification metrics, including the confusion matrix, loss, precision, F1-score, recall, accuracy, and AUC-ROC. Additionally, we applied Gradient-weighted Class Activation Mapping (Grad-CAM) to highlight the significant features and regions identified by the models during classification. The results show that DenseNet121 outperformed the other models with an accuracy of 98.13%. These promising experimental results highlight the potential of the proposed framework for detecting air temperature levels, aiding weather prediction and environmental monitoring.
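A minimal sketch of the classification stage only: a pretrained DenseNet121 fine-tuned to predict High vs. Low air temperature from clothing images. The DeepLabV3 Plus segmentation step is assumed to have been applied upstream, and the dataset path, image size, and frozen-backbone choice are illustrative assumptions.

```python
# Sketch: DenseNet121 transfer learning for the two HCID categories (High / Low air temperature).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

train_ds = tf.keras.utils.image_dataset_from_directory(
    "hcid/segmented/train", image_size=(224, 224), batch_size=32, label_mode="binary")

base = DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False          # optionally unfreeze the top blocks for further fine-tuning

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),   # High vs. Low air temperature
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.fit(train_ds, epochs=10)
```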
Project description: Pancreatic cancer is one of the most adverse diseases and is very difficult to treat, because the cancer cells formed in the pancreas intertwine with nearby blood vessels and connective tissue. Hence, the surgical procedure becomes complicated and does not always lead to a cure. Histopathological diagnosis is the usual approach for cancer diagnosis; however, the pancreas lies so deep inside the body that experts sometimes struggle to detect cancer in it. Computer-aided diagnosis can come to the aid of pathologists in this scenario by supporting their diagnostic decisions. In this research, we carried out a deep learning-based approach to analyze histopathology images. We collected whole-slide images of KPC mice to implement this work; the pancreatic abnormalities observed in KPC mice develop histological features similar to those in humans. We created random patches from the whole-slide images, and a convolutional autoencoder framework was used to embed these patches into an integrated latent space. Because our dataset has no annotations, we applied 'information maximization', a deep learning clustering technique, to cluster similar patches in an unsupervised manner. Moreover, Uniform Manifold Approximation and Projection (UMAP), a nonlinear dimensionality reduction technique, was utilized to visualize the embedded patches in a 2-dimensional space. Finally, we calculated a few internal cluster validation metrics to determine the optimal cluster set. Our work concentrates on patch-based anomaly detection in whole-slide histopathology images of KPC mice.
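A minimal sketch of the embed-cluster-visualise pipeline described above: a convolutional autoencoder embeds patches into a latent space, the embeddings are clustered without labels, and UMAP projects them to 2-D. KMeans stands in here for the 'information maximization' clustering used in the study, and the patch size, latent dimension, and placeholder data are assumptions.

```python
# Sketch: convolutional autoencoder embedding + unsupervised clustering + UMAP visualisation.
import numpy as np
from tensorflow.keras import layers, models
import umap                               # umap-learn package
from sklearn.cluster import KMeans

PATCH = 64
inp = layers.Input(shape=(PATCH, PATCH, 3))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
latent = layers.Dense(128, activation="relu")(layers.Flatten()(x))      # latent embedding
x = layers.Dense(16 * 16 * 64, activation="relu")(latent)
x = layers.Reshape((16, 16, 64))(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid")(x)

autoencoder = models.Model(inp, out)
encoder = models.Model(inp, latent)
autoencoder.compile(optimizer="adam", loss="mse")

patches = np.random.rand(256, PATCH, PATCH, 3).astype("float32")   # placeholder for WSI patches
autoencoder.fit(patches, patches, epochs=5, batch_size=32)

embeddings = encoder.predict(patches)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(embeddings)    # stand-in for information maximization
coords = umap.UMAP(n_components=2).fit_transform(embeddings)        # 2-D visualisation coordinates
```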
Project description: Background: Despite their high commercial fisheries value and ecological importance as prey for higher marine predators, very limited taxonomic work has been done on cephalopods in Malaysia. Because of the soft-bodied nature of cephalopods, identification of cephalopod species based on the hard parts of the beak can be more reliable and useful than conventional body morphology. Since the traditional method of species classification is time-consuming, this study aimed to develop an automated identification model that can identify cephalopod species from beak images. Methods: A total of 174 samples of seven cephalopod species were collected from the west coast of Peninsular Malaysia. Both upper and lower beaks were extracted from the samples, and left lateral views of the upper and lower beaks were imaged. Three types of traditional morphometric features were extracted, namely grey histogram of oriented gradients (HOG), colour HOG, and a morphological shape descriptor (MSD). In addition, deep features were extracted using three pretrained convolutional neural network (CNN) models, VGG19, InceptionV3, and ResNet50. Eight machine learning approaches were used in the classification step and compared for model performance. Results: The Artificial Neural Network (ANN) model achieved the best testing accuracy of 91.14%, using the deep features extracted by the VGG19 model from lower-beak images. The results indicate that deep features were more accurate than the traditional features in capturing morphometric differences between the beak images of cephalopod species. In addition, the use of lower beaks provided better results than the upper beaks, suggesting that the lower beaks carry more significant morphological differences between the studied species. Future work should include more cephalopod species and larger sample sizes to enhance the identification accuracy and comprehensiveness of the developed model.
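A minimal sketch of the best-performing pipeline described above: deep features extracted from beak images with a pretrained VGG19, then classified with a small artificial neural network; scikit-learn's MLPClassifier stands in for the ANN, and the image arrays, labels, and split are placeholders for illustration.

```python
# Sketch: VGG19 deep-feature extraction + ANN (MLP) classification of cephalopod beak images.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# VGG19 without its classifier head; global average pooling yields a 512-D feature vector per image.
extractor = VGG19(weights="imagenet", include_top=False, pooling="avg",
                  input_shape=(224, 224, 3))

images = np.random.rand(174, 224, 224, 3) * 255     # placeholder for lower-beak images
labels = np.random.randint(0, 7, size=174)          # placeholder labels for 7 species

features = extractor.predict(preprocess_input(images))
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2,
                                          stratify=labels, random_state=42)

ann = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=42)
ann.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```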