Project description:Optic-disc photography (ODP) has proven very useful for optic nerve evaluation in glaucoma. In real clinical practice, however, limited patient cooperation, small pupils, or media opacities can limit its performance. The purpose of this study was to propose a deep-learning approach that increases the resolution and improves the legibility of ODP through contrast, color, and brightness compensation. Each high-resolution original ODP was transformed into two counterparts: (1) a down-scaled 'low-resolution ODP', and (2) a 'compensated high-resolution ODP' produced by enhancing the visibility of the optic disc margin and surrounding retinal vessels with a customized image post-processing algorithm. The differences between these two counterparts were then learned directly by a super-resolution generative adversarial network (SR-GAN). Finally, by inputting the high-resolution ODPs into the SR-GAN, 4-times-up-scaled, color- and brightness-transformed 'enhanced ODPs' were obtained. General ophthalmologists were instructed (1) to assess each ODP's image quality and (2) to note any abnormal findings, at 1-month intervals. The image quality score for the enhanced ODPs was significantly higher than that for the original ODPs, and the overall optic disc hemorrhage (DH) detection accuracy was significantly higher with the enhanced ODPs. We expect that this novel deep-learning approach can be applied to various types of ophthalmic images.
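The pairing described above (a down-scaled input and a compensated high-resolution target for the SR-GAN) can be sketched as follows. The study's customized compensation algorithm is not specified here, so CLAHE on the LAB lightness channel is used purely as a hypothetical stand-in, and the file name is a placeholder.

```python
# Minimal sketch of building one SR-GAN training pair from a high-resolution ODP.
# The paper's "customized image post-processing algorithm" is not public; CLAHE on
# the LAB lightness channel is used here purely as a hypothetical stand-in.
import cv2

def make_training_pair(odp_path, scale=4):
    hr = cv2.imread(odp_path)                       # original high-resolution ODP
    h, w = hr.shape[:2]

    # (1) down-scaled 'low-resolution ODP' (network input)
    lr = cv2.resize(hr, (w // scale, h // scale), interpolation=cv2.INTER_AREA)

    # (2) 'compensated high-resolution ODP' (network target); CLAHE is an assumption
    lab = cv2.cvtColor(hr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    compensated = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    return lr, compensated

lr, target = make_training_pair("odp_0001.png")     # placeholder file name
```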
Project description:Visual field assessment is recognized as an important criterion for judging glaucomatous damage; however, it can show large test-retest variability. We developed a deep learning (DL) algorithm that quantitatively predicts the mean deviation (MD) of standard automated perimetry (SAP) from monoscopic optic disc photographs (ODPs). A total of 1200 image pairs (ODPs and SAP results) for 563 eyes of 327 participants were enrolled. A DL model was built by combining a pre-trained DL network and subsequently trained fully connected layers. The correlation coefficient and mean absolute error (MAE) between the predicted and measured MDs were calculated. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the ability to detect glaucomatous visual field (VF) loss. The data were split into training/validation (1000 images) and testing (200 images) sets to evaluate the performance of the algorithm. The predicted MD showed a strong correlation and good agreement with the actual MD (correlation coefficient = 0.755; R2 = 57.0%; MAE = 1.94 dB). The model also accurately predicted the presence of glaucomatous VF loss (AUC 0.953). The DL algorithm showed great feasibility for prediction of MD and detection of glaucomatous functional loss from ODPs.
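A minimal PyTorch sketch of the architecture pattern described above, a pre-trained backbone with newly trained fully connected layers regressing MD, follows; the specific backbone (ResNet-50 here), layer sizes, and input resolution are assumptions rather than the study's actual configuration.

```python
# Minimal sketch (PyTorch): pre-trained CNN backbone + new fully connected layers
# regressing SAP mean deviation (MD) in dB. Backbone choice and head sizes are assumed.
import torch
import torch.nn as nn
from torchvision import models

class MDRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2048, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 1),                 # predicted MD (dB)
        )

    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)

model = MDRegressor()
criterion = nn.L1Loss()                        # optimizes mean absolute error, the reported metric
dummy_images = torch.randn(2, 3, 224, 224)     # placeholder ODP batch
dummy_md = torch.tensor([-2.5, -10.1])         # placeholder measured MD values in dB
loss = criterion(model(dummy_images), dummy_md)
print(loss.item())
```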
Project description:Acute retinal necrosis (ARN) is a relatively rare but highly damaging and potentially sight-threatening type of uveitis caused by infection with the human herpesvirus. Without timely diagnosis and appropriate treatment, ARN can lead to severe vision loss. We aimed to develop a deep learning framework to distinguish ARN from other types of intermediate, posterior, and panuveitis using ultra-widefield color fundus photography (UWFCFP). We conducted a two-center retrospective discovery and validation study to develop and validate a deep learning model called DeepDrARN for automatic uveitis detection and differentiation of ARN from other uveitis types using 11,508 UWFCFPs from 1,112 participants. Model performance was evaluated with the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR), and sensitivity and specificity, and was compared with that of seven ophthalmologists. DeepDrARN achieved an AUROC of 0.996 (95% CI: 0.994-0.999) for uveitis screening in the internal validation cohort and demonstrated good generalizability with an AUROC of 0.973 (95% CI: 0.956-0.990) in the external validation cohort. DeepDrARN also demonstrated excellent predictive ability in distinguishing ARN from other types of uveitis, with AUROCs of 0.960 (95% CI: 0.943-0.977) and 0.971 (95% CI: 0.956-0.986) in the internal and external validation cohorts. DeepDrARN was also tested in the differentiation of ARN, non-ARN uveitis (NAU), and normal subjects, with sensitivities of 88.9% and 78.7% and specificities of 93.8% and 89.1% in the internal and external validation cohorts, respectively. The performance of DeepDrARN is comparable to that of ophthalmologists and even exceeds the average accuracy of the seven ophthalmologists, showing an improvement of 6.57% in uveitis screening and 11.14% in ARN identification. Our study demonstrates the feasibility of deep learning algorithms in enabling early detection, reducing treatment delays, and improving outcomes for ARN patients.
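A minimal sketch of the reported evaluation metrics (AUROC, AUPR, sensitivity, specificity) computed with scikit-learn; the label/score arrays and the 0.5 decision threshold are placeholders, not the study's data.

```python
# Minimal sketch of the metrics reported for DeepDrARN, computed with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # 1 = ARN, 0 = other uveitis (placeholders)
y_score = np.array([0.92, 0.10, 0.80, 0.65, 0.30, 0.05, 0.95, 0.40])

auroc = roc_auc_score(y_true, y_score)
aupr = average_precision_score(y_true, y_score)     # area under the precision-recall curve

y_pred = (y_score >= 0.5).astype(int)               # 0.5 threshold is an assumption
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.3f} AUPR={aupr:.3f} Se={sensitivity:.3f} Sp={specificity:.3f}")
```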
Project description:Background: Fundus autofluorescence (FAF) is a valuable imaging technique used to assess metabolic alterations in the retinal pigment epithelium (RPE) associated with various age-related and disease-related changes. The practical uses of FAF are ever-growing. This study aimed to evaluate the effectiveness of a generative deep learning (DL) model in translating color fundus (CF) images into synthetic FAF images and to explore its potential for enhancing screening of age-related macular degeneration (AMD). Methods: A generative adversarial network (GAN) model was trained on pairs of CF and FAF images to generate synthetic FAF images. The quality of the synthesized FAF images was assessed objectively by common generation metrics. Additionally, the clinical effectiveness of the generated FAF images in AMD classification was evaluated by measuring the area under the curve (AUC), using the LabelMe dataset. Results: A total of 8410 FAF images from 2586 patients were analyzed. The synthesized FAF images exhibited impressive objectively assessed quality, achieving a multi-scale structural similarity index (MS-SSIM) of 0.67. When evaluated on the LabelMe dataset, the combination of generated FAF images and CF images resulted in a noteworthy improvement in AMD classification accuracy, with the AUC increasing from 0.931 to 0.968. Conclusions: This study presents the first attempt to use a generative deep learning model to create realistic, high-quality FAF images from CF images. The incorporation of the translated FAF images on top of CF images improved the accuracy of AMD classification. Overall, this study presents a promising approach to enhance large-scale AMD screening.
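A minimal sketch of how the reported MS-SSIM generation metric can be computed for a synthetic FAF image against its real counterpart, using the pytorch_msssim package; the random tensors below are placeholders standing in for preprocessed image pairs.

```python
# Minimal sketch of scoring synthetic FAF images against real ones with MS-SSIM,
# the generation metric reported in the study; inputs are random placeholders.
import torch
from pytorch_msssim import ms_ssim

real_faf = torch.rand(1, 1, 256, 256)        # real FAF image, values in [0, 1]
synthetic_faf = torch.rand(1, 1, 256, 256)   # GAN-translated FAF from a CF image

score = ms_ssim(synthetic_faf, real_faf, data_range=1.0)
print(f"MS-SSIM = {score.item():.3f}")       # the study reports an average of 0.67
```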
Project description:Purpose: Optic disc (OD) and optic cup (OC) segmentation are fundamental for fundus image analysis. Manual annotation is time consuming, expensive, and highly subjective, whereas an automated system is invaluable to the medical community. The aim of this study was to develop a deep learning system to segment the OD and OC in fundus photographs and to evaluate how the algorithm compares against manual annotations. Methods: A total of 1200 fundus photographs with 120 glaucoma cases were collected. The OD and OC annotations were labeled by seven licensed ophthalmologists, and glaucoma diagnoses were based on comprehensive evaluations of the subjects' medical records. A deep learning system for OD and OC segmentation was developed. Its performance in segmentation, and in glaucoma discrimination based on the cup-to-disc ratio (CDR), was compared against the manual annotations. Results: The algorithm achieved an OD dice of 0.938 (95% confidence interval [CI] = 0.934-0.941), an OC dice of 0.801 (95% CI = 0.793-0.809), and a CDR mean absolute error (MAE) of 0.077 (95% CI = 0.073-0.082). For glaucoma discrimination based on CDR calculations, the algorithm obtained an area under the receiver operating characteristic curve (AUC) of 0.948 (95% CI = 0.920-0.973), with a sensitivity of 0.850 (95% CI = 0.794-0.923) and a specificity of 0.853 (95% CI = 0.798-0.918). Conclusions: We demonstrated the potential of the deep learning system to assist ophthalmologists in OD and OC segmentation and in discriminating glaucoma from nonglaucoma subjects based on CDR calculations. Translational relevance: We investigated OD and OC segmentation by the deep learning system compared against manual annotations.
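A minimal sketch of the two quantities the segmentation system is judged on, the Dice coefficient and the vertical cup-to-disc ratio derived from binary OD/OC masks; the square toy masks below are placeholders rather than real segmentations.

```python
# Minimal sketch: Dice coefficient of a predicted mask and vertical CDR from OD/OC masks.
import numpy as np

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def vertical_cdr(disc_mask, cup_mask):
    # vertical extent = number of rows containing at least one positive pixel
    disc_height = np.count_nonzero(disc_mask.any(axis=1))
    cup_height = np.count_nonzero(cup_mask.any(axis=1))
    return cup_height / disc_height

disc = np.zeros((100, 100), bool); disc[20:80, 20:80] = True   # toy disc mask
cup = np.zeros((100, 100), bool);  cup[35:65, 35:65] = True    # toy cup mask
print(dice(cup, cup), round(vertical_cdr(disc, cup), 2))       # 1.0, 0.5
```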
Project description:Objective: To report the development and performance of 2 distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer disease (AD). Patients and methods: Two independent datasets (UK Biobank and our tertiary academic institution) of good-quality retinal photographs derived from patients with AD and controls were used to build 2 deep learning models between April 1, 2021, and January 30, 2024. ADVAS is a U-Net-based architecture that uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers (BERT)-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs from UK Biobank. The models' performance in distinguishing AD from non-AD was determined using mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The generated attention heatmaps were analyzed for distinctive features. Results: The self-supervised ADRET model had superior accuracy when compared with ADVAS, in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional testing datasets (98.90% vs 94.17%; P=.04). No major differences were noted between the original and binary vessel segmentations, or between both-eyes and single-eye models. Attention heatmaps obtained from patients with AD highlighted regions surrounding small vascular branches as areas of highest relevance to the model's decision making. Conclusion: A BERT-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs alone can screen for symptomatic AD with high accuracy, better than U-Net-pretrained models. To be translated into clinical practice, this methodology requires further validation in larger and more diverse populations, along with integrated techniques to harmonize fundus photographs and attenuate imaging-associated noise.
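The transfer recipe implied by the ADRET description, a self-supervised pretrained encoder reused with a small task head, can be sketched as below; the ResNet-18 encoder, checkpoint path, and head size are placeholders, not the study's actual network.

```python
# Minimal sketch (PyTorch): freeze a self-supervised pretrained encoder and train a
# small head for AD vs. control. The encoder here is a generic stand-in.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet18(weights=None)       # stand-in for the self-supervised encoder
encoder.fc = nn.Identity()                    # expose 512-d features
# encoder.load_state_dict(torch.load("ssl_pretrained.pt"))  # hypothetical checkpoint

for p in encoder.parameters():                # freeze pretrained weights
    p.requires_grad = False

head = nn.Linear(512, 2)                      # AD vs. control
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)   # only the head is trained

x = torch.randn(4, 3, 224, 224)               # placeholder fundus batch
with torch.no_grad():
    feats = encoder(x)
logits = head(feats)
print(logits.shape)                           # torch.Size([4, 2])
```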
Project description:Retinal hemorrhage (RH) is a significant clinical finding with various etiologies, necessitating accurate classification for effective management. This study aims to externally validate deep learning (DL) models, specifically FastVit_SA12 and ResNet18, for distinguishing between traumatic and medical causes of RH using diverse fundus photography datasets. A comprehensive dataset was compiled, including private collections from South Korea and Virginia, alongside publicly available datasets such as RFMiD, BRSET, and DeepEyeNet. The models were evaluated on a total of 2661 images, achieving high performance metrics. FastVit_SA12 demonstrated an overall accuracy of 96.99%, with a precision of 0.9935 and recall of 0.9723 for medical cases, while ResNet18 achieved a 94.66% accuracy with a precision of 0.9893. A Grad-CAM analysis revealed that ResNet18 emphasized global vascular patterns, such as arcuate vessels, while FastVit_SA12 focused on clinically relevant areas, including the optic disk and hemorrhagic regions. Medical cases showed localized activations, whereas trauma-related images displayed diffuse patterns across the fundus. Both models exhibited strong sensitivity and specificity, indicating their potential utility in clinical settings for accurate RH diagnosis. This study underscores the importance of external validation in enhancing the reliability and applicability of AI models in ophthalmology, paving the way for improved patient care and outcomes.
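A minimal sketch of the kind of Grad-CAM analysis described above, applied to a ResNet18 retinal hemorrhage classifier via the widely used pytorch-grad-cam package; the two-class head, checkpoint path, and class index are placeholders.

```python
# Minimal sketch of a Grad-CAM visualization for a ResNet18 RH classifier using the
# jacobgil pytorch-grad-cam package; weights and class index are placeholders.
import torch
from torchvision import models
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)    # trauma vs. medical RH
# model.load_state_dict(torch.load("rh_resnet18.pt"))  # hypothetical trained weights
model.eval()

cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
fundus = torch.randn(1, 3, 224, 224)                   # preprocessed fundus photograph
heatmap = cam(input_tensor=fundus, targets=[ClassifierOutputTarget(1)])
print(heatmap.shape)                                   # (1, 224, 224) activation map
```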
Project description:Background/aims: To investigate the utility of a data-driven deep learning approach in patients with inherited retinal disorder (IRD) and to predict the causative genes based on fundus photography and fundus autofluorescence (FAF) imaging. Methods: Clinical and genetic data from 1302 subjects from 729 genetically confirmed families with IRD registered with the Japan Eye Genetics Consortium were reviewed. Three categories of genetic diagnosis were selected, based on the high prevalence of their causative genes: Stargardt disease (ABCA4), retinitis pigmentosa (EYS) and occult macular dystrophy (RP1L1). Fundus photographs and FAF images were cropped in a standardised manner with a macro algorithm. Images for training/testing were selected using a randomised, fourfold cross-validation method. The application programming interface was established to reach the target learning accuracy of concordance (>80%) between the genetic diagnosis and the machine diagnosis (ABCA4, EYS, RP1L1 and normal). Results: A total of 417 images from 156 Japanese subjects were examined, including 115 patients with genetically confirmed disease caused by the three prevalent causative genes and 41 normal subjects. The mean overall test accuracy for fundus photographs and FAF images was 88.2% and 81.3%, respectively. The mean overall sensitivity/specificity values for fundus photographs and FAF images were 88.3%/97.4% and 81.8%/95.5%, respectively. Conclusion: A novel application of deep neural networks in the prediction of the causative IRD genes from fundus photographs and FAF images, with a high prediction accuracy of over 80%, was highlighted. These achievements will extensively promote the quality of medical care by facilitating early diagnosis (especially by non-specialists), improving access to care, reducing the cost of referrals, and preventing unnecessary clinical and genetic testing.
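A minimal sketch of the randomised fourfold cross-validation split described above, stratified by the four diagnostic labels (ABCA4, EYS, RP1L1, normal), using scikit-learn; the file names and label list are placeholders.

```python
# Minimal sketch of a randomised, stratified four-fold split for training/testing.
from sklearn.model_selection import StratifiedKFold

images = [f"fundus_{i:03d}.png" for i in range(16)]          # placeholder file names
labels = ["ABCA4", "EYS", "RP1L1", "normal"] * 4             # placeholder diagnoses

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(images, labels)):
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)}")
```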
Project description:We aimed to determine the effect of optic disc tilt on deep learning-based optic disc classification. A total of 2507 fundus photographs were acquired from 2236 eyes of 1809 subjects (mean age of 46 years; 53% men). Among all photographs, 1010 (40.3%) had tilted optic discs. Image annotation was performed to label pathologic changes of the optic disc (normal, glaucomatous optic disc changes, disc swelling, and disc pallor). Deep learning-based classification models of optic-disc appearance were developed using the photographs of all subjects, and separately using those of subjects with and without tilted optic discs. Regardless of the deep learning algorithm, the classification models showed better overall performance when developed on data from subjects with non-tilted discs (AUC, 0.988 ± 0.002, 0.991 ± 0.003, and 0.986 ± 0.003 for VGG16, VGG19, and DenseNet121, respectively) than when developed on data with tilted discs (AUC, 0.924 ± 0.046, 0.928 ± 0.017, and 0.935 ± 0.008). In the classification of each pathologic change, the non-tilted disc models had better sensitivity and specificity than the tilted disc models. The optic disc appearance classification models developed on all-subject data demonstrated lower accuracy in eyes with tilted discs than in those with non-tilted discs. Our findings suggest the need to identify and adjust for the effect of optic disc tilt when developing optic disc classification algorithms.
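A minimal sketch of the kind of subgroup comparison reported above, computing the classification AUC separately for eyes with tilted and non-tilted discs; the arrays are toy placeholders and the binary labels simplify the study's four-class problem.

```python
# Minimal sketch of a tilted vs. non-tilted subgroup AUC comparison (toy data).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true  = np.array([1, 0, 1, 0, 1, 0, 1, 0])                 # placeholder labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.5, 0.8, 0.1]) # placeholder model scores
tilted  = np.array([True, True, False, False, True, False, False, True])

for name, mask in [("tilted", tilted), ("non-tilted", ~tilted)]:
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{name} discs: AUC = {auc:.3f}")
```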
Project description:This study aimed to investigate the risk factors for glaucoma conversion and progression in eyes with large optic disc cupping without retinal nerve fiber layer defect (RNFLD). Five hundred forty-two eyes of 271 subjects who had a vertical cup-to-disc ratio (CDR) ≥ 0.6 without RNFLD were enrolled. Characteristics of the optic disc configuration (including CDR, vertical cupping, ISNT rule, disc ovality, peripapillary atrophy [PPA]-to-disc area [DA] ratio, and lamina cribrosa pore visibility) and of the blood vessels (including central retinal vessel trunk [CRVT] nasalization, bayoneting of vessels, baring of circumlinear vessels, history of disc hemorrhage [DH], and vessel narrowing/sclerotic change) were evaluated. Over a median follow-up of 11.3 years, 26.6% of eyes (n = 144) developed RNFLD, within a median of 5.1 years. Baseline factors, including vertical CDR ≥ 0.7 (hazard ratio [HR] = 2.12), vertical cupping (HR = 1.93), ISNT rule violation (HR = 2.84), disc ovality ≥ 1.2 (HR = 1.61), PPA-to-DA ratio ≥ 0.4 (HR = 1.77), CRVT nasalization ≥ 60% (HR = 1.77), vessel narrowing/sclerotic change (HR = 2.13), DH history (HR = 5.60), and baseline intraocular pressure ≥ 14 mmHg (HR = 1.70), were significantly associated with glaucoma conversion (all Ps < 0.05). An HR-matched scoring system based on initial fundus photography predicted glaucoma conversion with a specificity of 90.4%. Careful examination of the optic nerve head and vascular structures can help to predict the risk of glaucoma conversion in eyes with large optic disc cupping.
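The hazard ratios above come from time-to-event modeling; a minimal sketch of a Cox proportional hazards fit for time to RNFLD development is shown below using the lifelines package, with randomly generated placeholder covariates rather than the study's data (the exp(coef) column of the summary corresponds to the hazard ratio).

```python
# Minimal sketch of a Cox proportional hazards fit for time to glaucoma conversion
# (RNFLD development); all covariate and outcome values are random placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "vertical_cdr_ge_07": rng.integers(0, 2, n),   # baseline vertical CDR >= 0.7 (0/1)
    "dh_history": rng.integers(0, 2, n),           # history of disc hemorrhage (0/1)
    "years_followed": rng.uniform(1, 12, n),       # follow-up duration in years
    "converted": rng.integers(0, 2, n),            # RNFLD developed during follow-up (0/1)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="converted")
cph.print_summary()    # exp(coef) column gives the hazard ratio per factor
```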