Development and validation of a novel prognostic model for predicting AMD progression using longitudinal fundus images.
ABSTRACT: Objective: To develop a prognostic tool that predicts the progression of age-related macular degeneration (AMD) using longitudinal colour fundus imaging. Methods and analysis: Previous prognostic models using deep learning with imaging data either require annotation during training or use only a single time point. We propose a novel deep learning method that predicts disease progression from longitudinal imaging data with uneven time intervals and requires no prior feature extraction. Given a patient's previous images, our method aims to predict whether the patient will progress to the next stage of the disease. The proposed method uses InceptionV3 to produce a feature vector for each image. To account for uneven intervals, a novel interval scaling is proposed. Finally, a recurrent neural network predicts disease progression. We demonstrate our method on a longitudinal dataset of colour fundus images from 4903 eyes with AMD, taken from the Age-Related Eye Disease Study, to predict progression to late AMD. Results: Our method attains a testing sensitivity of 0.878, a specificity of 0.887 and an area under the receiver operating characteristic curve of 0.950. Compared with previous methods, our model shows superior performance. Class activation maps display how the network reaches its final decision. Conclusion: The proposed method can be used to predict progression to advanced AMD at a future visit. Using multiple images from different time points improves predictive performance.
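The pipeline above (per-image InceptionV3 features, interval scaling, then a recurrent network) can be sketched in miniature. The paper's exact interval-scaling formula is not given here, so this minimal sketch assumes one plausible reading: normalise each inter-visit gap and append it to that visit's feature vector before the recurrent step. All names and constants are illustrative, not the authors' implementation.

```python
# Hedged sketch: handling uneven visit intervals before a recurrent model.
# Each visit's image feature vector (here, a plain list standing in for an
# InceptionV3 embedding) gets the scaled gap since the previous visit
# appended, so the downstream RNN can condition on elapsed time.

def attach_scaled_intervals(features, visit_times, max_gap_years=10.0):
    """features: list of per-visit feature vectors (lists of floats);
    visit_times: visit dates in years since baseline (same length)."""
    sequence = []
    prev_t = visit_times[0]
    for vec, t in zip(features, visit_times):
        gap = (t - prev_t) / max_gap_years   # scale the gap into [0, 1]
        sequence.append(list(vec) + [gap])   # interval becomes an extra input
        prev_t = t
    return sequence

# Two visits, two years apart; the second vector carries a scaled gap of 0.2.
seq = attach_scaled_intervals([[0.2, 0.7], [0.1, 0.9]], [0.0, 2.0])
```

In a full model, `sequence` would be fed to the recurrent network that emits the progression probability.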
Project description:Both genetic and environmental factors influence the etiology of age-related macular degeneration (AMD), a leading cause of blindness. AMD severity is primarily measured by fundus images, and recently developed machine learning methods can successfully predict AMD progression using image data. However, none of these methods have utilized both genetic and image data for predicting AMD progression. Here we jointly used genotypes and fundus images to predict an eye as having progressed to late AMD with a modified deep convolutional neural network (CNN). In total, we used 31,262 fundus images and 52 AMD-associated genetic variants from 1,351 subjects from the Age-Related Eye Disease Study (AREDS) with disease severity phenotypes and fundus images available at baseline and follow-up visits over a period of 12 years. Our results showed that fundus images coupled with genotypes could predict late AMD progression with an average area under the curve (AUC) of 0.85 (95% CI: 0.83-0.86). Using fundus images alone, the average AUC was 0.81 (95% CI: 0.80-0.83). We implemented our model in a cloud-based application for individual risk assessment.
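The genotype-plus-image idea can be illustrated with a minimal fusion step. The actual model is a modified deep CNN; this sketch assumes a simple late-fusion design in which an image embedding is concatenated with risk-allele dosages before a final classifier, and every name and dimension is hypothetical.

```python
# Hedged sketch: late fusion of image-derived features with genotype data.
# The CNN embedding of a fundus image and the 0/1/2 risk-allele counts for
# AMD-associated variants are joined into one input vector; a classifier
# head would then map this to a progression probability.

def fuse_features(image_features, genotype_dosages):
    """image_features: CNN embedding of a fundus image (list of floats);
    genotype_dosages: risk-allele counts (0, 1, or 2) per variant."""
    return list(image_features) + [float(g) for g in genotype_dosages]

# A 2-dim toy embedding fused with three toy variant dosages.
fused = fuse_features([0.3, 0.8], [0, 1, 2])
```

The design choice here (concatenation before the classifier) is one common way to combine modalities; the paper's modified CNN may fuse them differently.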
Project description:The aim of this study is to evaluate the diagnosis, staging, imaging and management preferences of practising optometrists, and the effect of advanced imaging, in age-related macular degeneration (AMD). Up to 20 case vignettes (computer-based case simulations) were completed online in a computer laboratory, in random order, by 81 practising optometrists in Australia. Each case presented findings from a randomly selected patient seen previously at the Centre for Eye Health for a macular assessment, in the following order: case history, preliminary tests and colour fundus photography. Participants were prompted to provide their diagnosis, management and imaging preference. One additional imaging result (either modified fundus photographs and infrared images, fundus autofluorescence, or optical coherence tomography [OCT]) was then provided and the questions repeated. Finally, all imaging results were provided and the questions repeated a third time. A total of 1,436 responses were analysed. The presence of macular pathology in AMD was accurately detected in 94 per cent of instances. The overall diagnostic accuracy of AMD was 61 per cent using colour fundus photography. This improved by one per cent using one additional imaging modality and a further four per cent using all imaging. Across all responses, a greater improvement in the diagnostic accuracy of AMD occurred following the presentation of OCT findings (versus other modalities). OCT was the most preferred imaging modality for AMD, while multimodal imaging was of greatest benefit in cases more often misdiagnosed using colour fundus photography alone. Overall, the cohort also displayed a tendency to underestimate disease severity. Despite reports that imaging technologies improve the stratification of AMD, our findings suggest that this effect may be small when applied among practising optometrists without additional or specific training.
Project description:By 2040, age-related macular degeneration (AMD) will affect ~288 million people worldwide. Identifying individuals at high risk of progression to late AMD, the sight-threatening stage, is critical for clinical actions, including medical interventions and timely monitoring. Although deep learning has shown promise in diagnosing/screening AMD using color fundus photographs, it remains difficult to predict individuals' risks of late AMD accurately. For both tasks, these initial deep learning attempts have remained largely unvalidated in independent cohorts. Here, we demonstrate how deep learning and survival analysis can predict the probability of progression to late AMD using 3298 participants (over 80,000 images) from the Age-Related Eye Disease Studies (AREDS and AREDS2), the largest longitudinal clinical trials in AMD. When validated against an independent test data set of 601 participants, our model achieved high prognostic accuracy (5-year C-statistic 86.4 (95% confidence interval 86.2-86.6)) that substantially exceeded that of retinal specialists using two existing clinical standards (81.3 (81.1-81.5) and 82.0 (81.8-82.3), respectively). Interestingly, our approach offers additional strengths over the existing clinical standards in AMD prognosis (e.g., risk ascertainment above 50%) and is likely to be highly generalizable, given the breadth of training data from 82 US retinal specialty clinics. Indeed, during external validation through training on AREDS and testing on AREDS2 as an independent cohort, our model retained substantially higher prognostic accuracy than existing clinical standards. These results highlight the potential of deep learning systems to enhance clinical decision-making in AMD patients.
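The 5-year C-statistic reported above measures concordance: across comparable pairs of participants, how often the model assigns the higher risk score to the one who progresses earlier. A minimal sketch, assuming every event is observed (real survival analysis, including this study's, must also handle censoring):

```python
# Hedged sketch: concordance index (C-statistic) without censoring.
# For every pair of subjects with different progression times, check
# whether the earlier progressor was assigned the higher risk; ties in
# risk count as half-concordant.

def concordance_index(times, risks):
    """times: time to progression per subject; risks: model risk scores."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue                      # not a comparable pair
            comparable += 1
            # identify which subject progressed first
            early, late = (i, j) if times[i] < times[j] else (j, i)
            if risks[early] > risks[late]:
                concordant += 1
            elif risks[early] == risks[late]:
                concordant += 0.5
    return concordant / comparable

# Perfect risk ordering: earlier progressors get higher scores.
c = concordance_index([2.0, 5.0, 8.0], [0.9, 0.4, 0.1])
```

A C-statistic of 0.5 corresponds to random ranking; the study's 86.4 is reported on a 0-100 scale.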
Project description:Transplantation of autologous human induced pluripotent stem cell-derived retinal pigment epithelial (hiPSC-RPE) sheets is a promising therapy for age-related macular degeneration (AMD). As melanin content is a representative feature of healthy RPE, we used polarization-sensitive optical coherence tomography (PS-OCT) to estimate the relative melanin content of RPE in diseased and non-diseased areas, and in human iPSC-RPE sheets in vitro and in vivo, by evaluating the randomness of polarization (entropy). Two elderly Japanese women, one with neovascular AMD who underwent transplantation of an autologous hiPSC-RPE cell sheet and another with binocular dry AMD, were selected for this study. The entropy value was minimal in cells containing no melanin, whereas that of human RPE and hiPSC-RPE sheets was high. En face entropy of the cultured hiPSC-RPE sheet was compared with its greyscale photograph, and its values were found to be inversely correlated with the extent of absent pigmentation in vitro. En face entropy maps were compared to colour fundus photographs, fundus autofluorescence images, and fluorescein angiography images from patients. Entropy values of intact and defective RPE and of iPSC-RPE transplant areas were determined in vivo using PS-OCT B-scan images. PS-OCT was found to be applicable to the estimation of relative melanin content of cultured and transplanted RPE in regenerative medicine.
Project description:Importance: Although deep learning (DL) can identify the intermediate or advanced stages of age-related macular degeneration (AMD) as a binary yes or no, stratified gradings using the more granular Age-Related Eye Disease Study (AREDS) 9-step detailed severity scale for AMD provide more precise estimation of 5-year progression to advanced stages. The complexity of the AREDS 9-step detailed scale, and its implementation solely by highly trained fundus photograph graders, potentially hampered its clinical use, warranting development and use of an alternative AREDS simple scale, which, although valuable, has less predictive ability. Objective: To describe DL techniques for the AREDS 9-step detailed severity scale for AMD that estimate 5-year risk probability with reasonable accuracy. Design, Setting, and Participants: This study used data collected from November 13, 1992, to November 30, 2005, from 4613 study participants of the AREDS data set to develop deep convolutional neural networks that were trained to provide detailed automated AMD grading on several AMD severity classification scales, using a multiclass classification setting. Two AMD severity classification problems were investigated, using criteria based on 4-step (AMD-1, AMD-2, AMD-3, and AMD-4, from classifications developed for AREDS eligibility criteria) and 9-step (from the AREDS detailed severity scale) AMD severity scales. The performance of these algorithms was compared with a contemporary human grader and against a criterion standard (fundus photograph reading center graders) used at the time of AREDS enrollment and follow-up. Three methods for estimating 5-year risk were developed, including one based on DL regression. Data were analyzed from December 1, 2017, through April 15, 2018. Main Outcomes and Measures: Weighted κ scores and mean unsigned errors for estimating 5-year risk probability of progression to advanced AMD.
Results: This study used 67,401 color fundus images from the 4613 study participants. The weighted κ scores were 0.77 for the 4-step and 0.74 for the 9-step AMD severity scales. The overall mean estimation error for the 5-year risk ranged from 3.5% to 5.3%. Conclusions and Relevance: These findings suggest that, for the 4-step classification, DL AMD grading performs comparably with humans, and that it achieves promising results both for detailed severity grading (9-step classification), which normally requires highly trained graders, and for estimating 5-year risk of progression to advanced AMD. Use of DL has the potential to assist physicians in longitudinal care with individualized, detailed risk assessment, as well as in clinical studies of disease progression during treatment and in public screening or monitoring worldwide.
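The weighted κ scores above measure agreement between the model and reference graders on an ordinal scale, penalising larger disagreements more heavily. A minimal sketch of quadratic-weighted κ (the quadratic weighting scheme is an assumption for illustration; the abstract does not specify which weighting was used):

```python
# Hedged sketch: quadratic-weighted kappa for ordinal grading agreement.
# Disagreements are penalised by squared distance between grades, then
# agreement is compared against chance expectation.

def quadratic_weighted_kappa(a, b, n_classes):
    """a, b: integer grades (0..n_classes-1) from two graders, same length."""
    # observed confusion matrix
    O = [[0.0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        O[x][y] += 1
    n = len(a)
    hist_a = [sum(row) for row in O]
    hist_b = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = ((i - j) ** 2) / ((n_classes - 1) ** 2)  # quadratic penalty
            num += w * O[i][j]                    # observed weighted disagreement
            den += w * hist_a[i] * hist_b[j] / n  # chance-expected disagreement
    return 1.0 - num / den

kappa = quadratic_weighted_kappa([0, 1, 2], [0, 1, 2], 3)
```

Perfect agreement yields κ = 1; chance-level agreement yields κ near 0.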
Project description:PURPOSE: We examined the association between abnormal fundus autofluorescence (FAF) features on images obtained by a modified fundus camera (mFC) and geographic atrophy (GA) progression in patients with age-related macular degeneration (AMD). METHODS: Serial FAF images of 131 eyes from 131 patients with GA were included in the study. All FAF images were obtained with an mFC (excitation, ~500-610 nm; emission, ~675-715 nm). The GA area was quantified at baseline and 1 year later using a customized segmentation program. The yearly GA enlargement rate was then calculated. Abnormal FAF patterns in the junctional zone of GA were classified as None or Minimal change, Focal, Patchy, Banded, or Diffuse, according to a previously published classification based on confocal scanning laser ophthalmoscopy (cSLO). The relationship between GA enlargement and abnormal FAF was evaluated. RESULTS: The mean rate of GA enlargement was fastest in eyes with the Diffuse pattern (1.74 mm² per year), followed by eyes with the Banded pattern (1.69 mm² per year). Binary logistic regression analysis revealed that eyes with the Banded or Diffuse pattern had a significantly higher risk of GA enlargement compared with eyes with the other patterns. CONCLUSIONS: FAF imaging with an mFC appears to be acceptable for evaluating GA in accordance with an established cSLO-based classification. Eyes with the Banded or Diffuse pattern of abnormal FAF at baseline are at high risk of GA progression. Identifying patients at high risk of GA progression using an mFC is a broadly available method that can provide additional information to help predict disease course.
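The yearly enlargement rate above is simple arithmetic: the change in segmented GA area between baseline and follow-up, divided by the elapsed time. A minimal sketch with illustrative numbers:

```python
# Hedged sketch: yearly GA enlargement rate from two segmented areas.
# Inputs are illustrative; real areas come from the segmentation program.

def yearly_enlargement_rate(area_baseline_mm2, area_followup_mm2, years):
    """Change in GA area divided by elapsed time, in mm^2 per year."""
    return (area_followup_mm2 - area_baseline_mm2) / years

# A GA lesion growing from 3.2 to 4.9 mm^2 over one year.
rate = yearly_enlargement_rate(3.2, 4.9, 1.0)
```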
Project description:PURPOSE: To validate the performance of a commercially available, CE-certified deep learning (DL) system, RetCAD v.1.3.0 (Thirona, Nijmegen, The Netherlands), for the joint automatic detection of diabetic retinopathy (DR) and age-related macular degeneration (AMD) in colour fundus (CF) images on a dataset with mixed presence of eye diseases. METHODS: Evaluation of joint detection of referable DR and AMD was performed on a DR-AMD dataset with 600 images acquired during routine clinical practice, containing referable and non-referable cases of both diseases. Each image was graded for DR and AMD by an experienced ophthalmologist to establish the reference standard (RS), and by four independent observers for comparison with human performance. Validation was further assessed on Messidor (1200 images) for individual identification of referable DR, and on the Age-Related Eye Disease Study (AREDS) dataset (133,821 images) for referable AMD, against the corresponding RS. RESULTS: In joint validation on the DR-AMD dataset, the system achieved an area under the ROC curve (AUC) of 95.1% for detection of referable DR (SE = 90.1%, SP = 90.6%). For referable AMD, the AUC was 94.9% (SE = 91.8%, SP = 87.5%). Average human performance for DR was SE = 61.5% and SP = 97.8%; for AMD, SE = 76.5% and SP = 96.1%. For detection of referable DR in Messidor, the AUC was 97.5% (SE = 92.0%, SP = 92.1%); for referable AMD in AREDS, the AUC was 92.7% (SE = 85.8%, SP = 86.0%). CONCLUSION: The validated system performs comparably to human experts at simultaneous detection of DR and AMD. This shows that DL systems can facilitate access to joint screening of eye diseases and serve as quick and reliable support for ophthalmological experts.
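The SE/SP figures above come from standard confusion-matrix arithmetic over referable vs. non-referable calls. A minimal sketch (the inputs are illustrative, not study data):

```python
# Hedged sketch: sensitivity and specificity from binary referral calls.
# 1 = referable, 0 = non-referable.

def sensitivity_specificity(y_true, y_pred):
    """y_true: reference-standard labels; y_pred: system/observer calls."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

# Four toy cases: one missed referable, one false referral.
se, sp = sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 1])
```

The AUC values in the abstract additionally sweep this trade-off across all operating thresholds of the system's continuous score.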
Project description:Purpose: To build and validate artificial intelligence (AI)-based models for AMD screening and for predicting progression to late dry and wet AMD within 1 and 2 years. Methods: The dataset of the Age-Related Eye Disease Study (AREDS) was used to train and validate our prediction model. External validation was performed on the Nutritional AMD Treatment-2 (NAT-2) study. First step: An ensemble of deep learning screening methods was trained and validated on 116,875 color fundus photos from 4139 participants in the AREDS study to classify them as no, early, intermediate, or advanced AMD, and further stratified them along the AREDS 12-level severity scale. Second step: The resulting AMD scores were combined with sociodemographic clinical data and other automatically extracted imaging data by a logistic model tree machine learning technique to predict risk of progression to late AMD within 1 or 2 years, with training and validation performed on 923 AREDS participants who progressed within 2 years, 901 who progressed within 1 year, and 2840 who did not progress within 2 years. For those found at risk of progression to late AMD, we further predicted the type (dry or wet) of late AMD progression. Results: For identification of early/none vs. intermediate/late (i.e., referral-level) AMD, we achieved 99.2% accuracy. The prediction model for 2-year incident late AMD (any) achieved 86.36% accuracy, with 66.88% for late dry and 67.15% for late wet AMD. For the NAT-2 dataset, the 2-year late AMD prediction accuracy was 84%. Conclusions: Validated color fundus photo-based models for AMD screening and risk prediction for late AMD are now ready for clinical testing and potential telemedical deployment. Translational Relevance: Noninvasive, highly accurate, and fast AI methods to screen for referral-level AMD and to predict late AMD progression offer significant potential improvements in our care of this prevalent blinding disease.
Project description:Utilize high-resolution imaging to examine retinal anatomy in patients with known genetic relative risk (RR) for developing age-related macular degeneration (AMD). Forty asymptomatic subjects were recruited (9 men, 31 women; age range, 51 to 69 years; mean age, 61.4 years). Comprehensive eye examination, fundus photography, and high-resolution retinal imaging using spectral domain optical coherence tomography and adaptive optics were performed on each patient. Genetic RR scores were developed using an age-independent algorithm. Adaptive optics scanning light ophthalmoscope images were acquired in the macula, extending to 10 degrees temporal and superior from fixation, and were used to calculate cone density in up to 35 locations for each subject. Relative risk was not significantly predictive of fundus grade (p = 0.98). Only patients with a high RR displayed drusen on Cirrus or Bioptigen OCT. Compared to an eye with a grade of 0, an eye with a fundus grade equal to or greater than 1 had a 12% decrease in cone density (p < 0.0001) and a 5% increase in spacing (p = 0.0014). No association between genetic RR and either cone density (p = 0.435) or spacing (p = 0.538) was found. Three distinct phenotypic variations of photoreceptor appearance were noted on adaptive optics scanning light ophthalmoscope imaging in patients with grade 1 to 3 fundi. These included variable reflectivity of photoreceptors, decreased waveguiding, and an altered photoreceptor mosaic overlying drusen. Our data demonstrate the potential of multimodal assessment in the understanding of early anatomical changes associated with AMD. Adaptive optics scanning light ophthalmoscope imaging reveals a decrease in photoreceptor density and increased spacing in patients with grade 1 to 3 fundi, as well as a spectrum of photoreceptor changes, ranging from variability in reflectivity to decreased density.
Future longitudinal studies are needed in genetically characterized subjects to assess the significance of these findings with respect to the development and progression of AMD.
Project description:To develop and evaluate a software tool for automated detection of focal hyperpigmentary changes (FHC) in eyes with intermediate age-related macular degeneration (AMD). Color fundus (CFP) and autofluorescence (AF) photographs of 33 eyes with FHC from 28 AMD patients (mean age 71 years) in the prospective longitudinal natural history MODIAMD study were included. Both fully automated and semiautomated registration of baseline images to corresponding follow-up images was evaluated. Following the manual circumscription of individual FHC (four different readings by two readers), a machine-learning algorithm was evaluated for automatic FHC detection. The overall pixel distance error for the semiautomated registration (CFP follow-up to CFP baseline: median 5.7; CFP to AF images from the same visit: median 6.5) was larger than for the automated image registration (4.5 and 5.7; P < 0.001 and P < 0.001). The total number of manually circumscribed objects varied between 637 and 1163, and the corresponding total size between 520,848 and 924,860 pixels. The learning algorithms showed a sensitivity of 96% at a specificity level of 98% using information from both CFP and AF images and defining small areas of FHC ("speckle appearance") as "neutral." FHC, a high-risk feature for progression of AMD to late stages, can be automatically assessed at different time points with sensitivity and specificity similar to manual outlining. Upon further development of the research prototype, this approach may be useful in both natural history and interventional large-scale studies for a more refined classification and risk assessment of eyes with intermediate AMD.