Special Issue  |  December 2020  |  Volume 9, Issue 2  |  Open Access
Exploring a Structural Basis for Delayed Rod-Mediated Dark Adaptation in Age-Related Macular Degeneration Via Deep Learning
Author Affiliations & Notes
  • Aaron Y. Lee
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Cecilia S. Lee
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Marian S. Blazes
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Julia P. Owen
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Yelena Bagdasarova
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Yue Wu
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Theodore Spaide
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Ryan T. Yanagihara
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Yuka Kihara
    Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
  • Mark E. Clark
    Department of Ophthalmology and Visual Sciences, School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
  • MiYoung Kwon
    Department of Psychology, Northeastern University, Boston, MA, USA
  • Cynthia Owsley
    Department of Ophthalmology and Visual Sciences, School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
  • Christine A. Curcio
    Department of Ophthalmology and Visual Sciences, School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
  • Correspondence: Aaron Y. Lee, University of Washington, 325 Ninth Ave, Box 359608, Seattle, WA 98104-2499, USA. e-mail: leeay@uw.edu 
Translational Vision Science & Technology December 2020, Vol.9, 62. doi:https://doi.org/10.1167/tvst.9.2.62
Abstract

Purpose: Delayed rod-mediated dark adaptation (RMDA) is a functional biomarker for incipient age-related macular degeneration (AMD). We used anatomically restricted spectral domain optical coherence tomography (SD-OCT) imaging data to localize de novo imaging features associated with delayed RMDA and to test hypotheses about its structural basis.

Methods: Rod intercept time (RIT) was measured in participants with and without AMD at 5 degrees from the fovea, and macular SD-OCT images were obtained. A deep learning model was trained with anatomically restricted information using a single representative B-scan through the fovea of each eye. Mean-occlusion masking was utilized to isolate the relevant imaging features.

Results: The model identified hyporeflective outer retinal bands on macular SD-OCT associated with delayed RMDA. The validation mean squared error (MSE) registered to the foveal B-scan localized the lowest error to 0.5 mm temporal to the fovea center, within an overall low-error region across the rod-free zone and adjoining parafovea. Mean absolute error (MAE) on the test set was 4.71 minutes (8.8% of the dynamic range).

Conclusions: We report a novel framework for imaging biomarker discovery using deep learning and demonstrate its ability to identify and localize a previously undescribed biomarker in retinal imaging. The hyporeflective outer retinal bands in central macula on SD-OCT demonstrate a structural basis for dysfunctional rod vision that correlates with published histopathologic findings.

Translational Relevance: This agnostic approach to anatomic biomarker discovery strengthens the rationale for RMDA as an outcome measure in early AMD clinical trials, and also expands the utility of deep learning beyond automated diagnosis to fundamental discovery.

Introduction
Age-related macular degeneration (AMD) causes significant visual impairment and progressive loss of central vision in older adults.1 Although therapies are available for exudative AMD, the more common nonexudative form of AMD still lacks an effective treatment.2,3 Early biomarkers for nonexudative AMD are needed to advance clinical trials of potential therapies and to help identify patients at risk for progressing to advanced disease. Imaging biomarkers are favored for speed and objectivity; ideally, imaging biomarkers are also indices of visual function. 
Delayed rod-mediated dark adaptation (RMDA), a slower recovery of retinal sensitivity following a bright flash of light, is a known functional biomarker for incipient early AMD. Older patients with normal macular health who show delayed RMDA, measured as longer rod intercept times (RITs), have an increased risk of developing AMD.4,5 In addition, delayed RMDA has been associated with common polymorphisms of two major AMD-associated genes, complement factor H and the age-related maculopathy susceptibility 2 (ARMS2) gene.6 The idea that rod-mediated vision has merit for documenting the progression of macular disease is well supported. First, older adults in normal health report difficulty with visual tasks performed at low luminance levels.7 Second, comprehensive maps of photoreceptor density in human macula demonstrate not only a high density of foveal cones but also numerous rods.8 Expressed in units of the widely used Early Treatment Diabetic Retinopathy Study (ETDRS) grid, the central subfield contains almost exclusively cone photoreceptors, the inner ring (0.5–1.5 mm from the foveal center) has a 4:1 rod:cone ratio, and the outer ring (1.5–3 mm) has a 10:1 ratio.9 Eyes of aged donors exhibit loss of rods, especially in the inner ring.10 Third, RMDA was proposed by Bird and Fitzke as a dynamic measure of retinoid resupply to rods across the choriocapillaris-Bruch's membrane-retinal pigment epithelium (RPE) interface,11 where AMD pathology is prominent, and was given a strong neurophysiologic underpinning by Lamb and Pugh.12 Fourth, documented cellular and molecular age changes in the retinoid resupply route include loss of macular choriocapillaris and lipidization of Bruch's membrane due to retention of lipoproteins of intraocular origin, while RPE cell numbers are maintained, suggesting a degeneration of vascular origin.13–15 In contrast, cone-mediated visual acuity in bright light can remain preserved well into the disease course, attributed to additional sustenance by foveal Müller glia.
Early signs of AMD, as revealed by color fundus photography, can also be detected on spectral domain optical coherence tomography (SD-OCT). Normal outer retinal structure shows bands of varying reflectivity on SD-OCT due to horizontally aligned, vertically compartmentalized photoreceptors and supporting RPE and glia.16,17 In early and intermediate AMD, hyper-reflective foci (clumps) in the retina are associated with both progression risk and delayed RMDA.18–20 Recent studies have clearly linked aberrant imaging findings to histopathologic changes.20,21 However, biomarkers for even earlier stages of disease may require a different strategy. The intricacy and small size of outer retinal cells challenge histologic quantification of anatomy due to disorganization during postmortem processing, including problems discerning the lengths of photoreceptor inner and outer segments. Thus, SD-OCT images of the human retina in vivo have the potential to answer such questions. In addition, human eyes are advantageous for studying AMD over laboratory animals lacking maculae. Even monkey eyes that do have maculae and develop drusen are less rod-dominant overall than humans and do not progress to AMD end-stages.22,23
Advances in artificial intelligence may provide novel methods for identifying anatomic features on SD-OCT that correlate with a known measure of retinal dysfunction. Deep learning algorithms in particular offer a unique approach to this challenge. Unlike automated diagnostic machine learning models,24–28 which are trained using hand-labeled data to detect known findings, supervised deep learning models can also be trained to identify image characteristics that correspond to a known measurement in a previously unrecognized way.29,30 Using functional, objective training targets with visualization techniques, a deep learning model could potentially identify novel imaging features and specific anatomic details on SD-OCT that correlate with a known functional biomarker, such as delayed RMDA. In this study, we sought to train deep learning models to predict the rate of RMDA using RIT and anatomically restricted SD-OCT imaging data, as well as to localize de novo imaging features associated with RMDA.
Methods
This study was approved by the Institutional Review Board of the University of Alabama at Birmingham, followed the tenets of the Declaration of Helsinki and was conducted in compliance with the Health Insurance Portability and Accountability Act. Informed consent was obtained from all subjects. The collection of data for the Alabama Study on Early Age-Related Macular Degeneration has been described previously.4,31 
SD-OCT volumes of all maculae were acquired with the Spectralis HRA + SD-OCT (Heidelberg Engineering, Heidelberg, Germany). Each volume comprised 73 horizontally oriented B-scans centered over the fovea in a 20 degree × 15 degree (5.7 × 4.2 mm) area. Automatic Real-Time averaging was 8 to 18, and quality was 20 to 47 dB. All SD-OCT images shown in this paper are unadjusted from the original manufacturer's intensities.
Dark adaptation was measured at 1 to 2 visits using the AdaptDx (MacuLogix, Harrisburg, PA) adaptometer in a 20-minute test protocol, which has been described4,32 and validated33 in previous studies. Briefly, patients' eyes were dilated to ≥ 6 mm and the non-test eye was occluded. The test eye was aligned to a red fixation light using an infrared camera system, and a focal photoflash (0.25 ms duration, 58,000 scotopic cd/m2·s intensity; equivalent to an approximately 85% bleach) centered at 5 degrees on the superior vertical meridian was applied for bleaching. Stimuli were then presented to this area every 2 to 3 seconds, beginning at 5.00 cd/m2 and decreasing in intensity in 0.3 log-unit steps; patients pressed a button each time they saw the stimulus light until they could no longer detect it. The stimulus light intensity then increased in small (0.1 log unit) increments, and the intensity at which the patient was able to detect the light once again was recorded. The RIT was defined as the duration after the photobleach required for sensitivity to recover to a stimulus intensity of 5 × 10−4 cd/m2, which lies within the second component of rod recovery.12 No subjects were excluded due to fixation loss or poor reliability. The RIT and the OCT images were captured on the same day or within 1 week.
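For illustration only, the RIT criterion above amounts to interpolating a measured recovery curve to the 5 × 10−4 cd/m2 threshold. The sketch below is a hypothetical reimplementation, not the AdaptDx device's internal algorithm; the function name, inputs, and log-linear interpolation scheme are assumptions.

```python
import numpy as np

CRITERION_CD_M2 = 5e-4  # stimulus intensity defining rod intercept time (RIT)

def rod_intercept_time(minutes, thresholds_cd_m2):
    """Approximate RIT: time after bleach at which the detection threshold first
    recovers to the 5e-4 cd/m^2 criterion, by linear interpolation of
    log10(threshold) between the bracketing measurements."""
    t = np.asarray(minutes, dtype=float)
    log_thr = np.log10(np.asarray(thresholds_cd_m2, dtype=float))
    log_crit = np.log10(CRITERION_CD_M2)
    below = np.nonzero(log_thr <= log_crit)[0]
    if below.size == 0:
        return np.nan  # sensitivity never recovered to criterion within the test
    i = below[0]
    if i == 0 or log_thr[i - 1] == log_thr[i]:
        return t[i]
    # interpolate between the last point above criterion and the first at/below it
    frac = (log_thr[i - 1] - log_crit) / (log_thr[i - 1] - log_thr[i])
    return t[i - 1] + frac * (t[i] - t[i - 1])
```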
The dataset was partitioned into three mutually exclusive sets at the patient level for training (60%), validation (20%), and test (20%). A conceptual framework for processing and localizing biomarkers was developed (Fig. 1). This framework consisted of two parts. First, the SD-OCT volumes were anatomically registered, and deep learning models were separately trained on narrow bands of the B-scan that passed through the foveal center of each eye, where each band was centered at an anatomic location (see Fig. 1B). Second, the vertical B-scan window that corresponded to the anatomic location with the highest performance was then extracted (see Fig. 1B) and systematically perturbed to find the areas leading to higher or lower predicted RIT using the test set (see Fig. 1C). Only B-scan vertical windows were used to train the models. The number of B-scans per volume did not differ between subjects.
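As a minimal sketch of the patient-level partitioning described above (the function name, the 60/20/20 fractions as arguments, and the random seed are illustrative assumptions), all eyes and volumes from one patient are kept within a single split:

```python
import numpy as np

def split_by_patient(patient_ids, fractions=(0.6, 0.2, 0.2), seed=0):
    """Partition records into train/val/test so that all eyes and volumes from a
    given patient fall into exactly one set (patient-level split)."""
    rng = np.random.default_rng(seed)
    unique_ids = rng.permutation(np.unique(patient_ids))
    n = len(unique_ids)
    n_train = int(round(fractions[0] * n))
    n_val = int(round(fractions[1] * n))
    train_ids = set(unique_ids[:n_train])
    val_ids = set(unique_ids[n_train:n_train + n_val])
    assignments = []
    for pid in patient_ids:
        if pid in train_ids:
            assignments.append("train")
        elif pid in val_ids:
            assignments.append("val")
        else:
            assignments.append("test")
    return assignments
```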
Figure 1.
 
Concept diagram of framework for biomarker discovery using deep learning. The overall framework is shown in (A). After aligning spectral domain optical coherence tomography (SD-OCT) images, separate datasets are created and different convolutional neural network (CNN) deep learning models are trained (B). The frozen models with the best performance (lowest validation loss) are systematically perturbed with mean occlusion in the test set; perturbations increasing and decreasing the predictions are shown in green and red, respectively (C).
In part one, one B-scan that was found to pass through the foveal center was extracted from each volume. At each anatomic location in this foveal B-scan, a 64 × 256 pixel window was created such that the anatomic location was in the middle of the window, and the vertical placement of the window was determined by the horizontal maximum intensity projection. A deep learning model was trained for each anatomic location using the same neural architecture (Supplementary Figure S1). The input to the model was the raw pixel intensities of the 64 × 256 pixel window divided by 255, without other normalization or transformation, and the output of the model was a single node with linear activation predicting the RIT from RMDA testing divided by 40 minutes, to scale the output of the models between 0 and 1. Mean squared error was used as the loss function. Weights for the convolutional layers were initialized randomly using the Xavier normal distribution.34 Nesterov Adam35 was used as an optimizer with an initial learning rate of 2 × 10−4. Batch size was set to 26 and the number of epochs was set to 600. For each anatomic position, nine repetitions of the training session were performed to account for different random initializations, because the main outcome of the study was to evaluate the model's capacity to learn to predict the RIT. Each training session used a fixed set of hyperparameters. The validation losses were collected, and the weights of the model with the lowest validation loss from each training session were saved for further analysis (see Fig. 1B).
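The following is a minimal sketch of a CNN regressor consistent with the training settings above (Xavier normal initialization, Nesterov Adam at 2 × 10−4, mean squared error loss, a single linear output node, batch size 26, 600 epochs). The convolutional layer stack shown here is an assumption for illustration; the authors' exact architecture is given in Supplementary Figure S1 and is not reproduced here.

```python
import tensorflow as tf

def build_rit_regressor(input_shape=(64, 256, 1)):
    """Illustrative CNN regressor: a raw window scaled to [0, 1] in,
    scaled RIT (RIT / 40 min) out via a single linear output node."""
    init = "glorot_normal"  # Xavier normal initialization
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu", kernel_initializer=init),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu", kernel_initializer=init),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu", kernel_initializer=init),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="linear", kernel_initializer=init),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Nadam(learning_rate=2e-4),  # Nesterov Adam
        loss="mse",  # mean squared error on the scaled RIT
    )
    return model

# Training as described: inputs are windows / 255.0, targets are RIT / 40.0.
# model = build_rit_regressor()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=26, epochs=600)
```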
For part two, test-set vertical B-scan images from the anatomic location with the lowest loss were used for visualization of relevant features. At each pixel position across the whole OCT B-scan, a 16 × 6 pixel window centered at that position was occluded using the mean pixel intensity of the window, and inference was performed. Occlusion using permuted pixel intensities, zero intensity, or different window sizes led to similar results. The difference in predicted RIT between the model output of the altered image and the unaltered image was measured for an occlusion window centered at each pixel position. A color map of the difference between the perturbed and unperturbed RIT predictions was then plotted to visualize both lengthening and shortening perturbations on the OCT B-scan (see Fig. 1C). These differences were then evaluated qualitatively using a random sampling method in the test set.
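A minimal sketch of the mean-occlusion perturbation described above is given below, under the assumption that `model` is a trained Keras regressor on windows scaled to [0, 1]; the helper name and the row-wise batching strategy are illustrative choices, not the authors' code.

```python
import numpy as np

def occlusion_sensitivity_map(model, image, window=(16, 6)):
    """Mean-occlusion perturbation map for a single B-scan window.
    For every pixel position, a region of size `window` centered there is replaced
    by its own mean intensity, inference is re-run, and the change in predicted
    (scaled) RIT relative to the unperturbed prediction is recorded.
    Positive values lengthen the predicted RIT; negative values shorten it."""
    h, w = image.shape
    wh, ww = window
    baseline = float(model.predict(image[None, ..., None], verbose=0)[0, 0])
    diff = np.zeros((h, w), dtype=np.float32)
    for r in range(h):
        row_batch = np.empty((w, h, w), dtype=np.float32)
        r0, r1 = max(0, r - wh // 2), min(h, r + wh // 2 + 1)
        for c in range(w):
            c0, c1 = max(0, c - ww // 2), min(w, c + ww // 2 + 1)
            perturbed = image.astype(np.float32).copy()
            perturbed[r0:r1, c0:c1] = perturbed[r0:r1, c0:c1].mean()
            row_batch[c] = perturbed
        preds = model.predict(row_batch[..., None], verbose=0)[:, 0]
        diff[r] = preds - baseline  # one row of the perturbation map
    return diff  # plot as a diverging color map over the B-scan
```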
All analyses were performed using Python (version 2.7.12) and R (version 3.3.2). Deep learning models were developed using Keras (version 2.2.0), Tensorflow (version 1.7.0), accelerated using NVIDIA CUDA (version 9.0.333), and trained on a server with dual Xeon 3.4 GHz processors, 256 GB of random access memory, and 8 x NVIDIA P100 GPUs. 
Results
Seven hundred fifteen patients were imaged using SD-OCT and tested for RMDA. RIT was measured in 737 eyes at 1 to 2 visits, resulting in 1218 OCT volumes of individual eyes paired with RIT measurements. The demographics of this population are shown in the Table.
Table.
 
Patient Characteristics
Convolutional neural networks were independently trained on each of the narrow SD-OCT bands at various anatomic locations. At each anatomic location, the model was trained nine times with newly randomized initial weights, and the weights corresponding to the lowest root mean squared error (RMSE) in the predicted RIT were chosen from each training session. The RMSE and mean absolute error (MAE) of the nine models at each anatomic position were averaged (Fig. 2A) and collected as a function of eccentricity from the fovea in mm (see Fig. 2B). The trained models achieved an overall MAE across all the bands in the test set of 4.71 minutes for predicting RIT (8.8% of the dynamic range and lower than the normal upper bound of 12.3 minutes).4 In the test set, the predicted RIT showed a moderately high correlation with the true RIT (Pearson correlation of 0.69).
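For reference, the reported test-set summaries (MAE in minutes, MAE as a percentage of the test's dynamic range, and the Pearson correlation) correspond to straightforward calculations of the following form; the function name and the requirement to pass the dynamic range explicitly are assumptions for illustration.

```python
import numpy as np

def summarize_rit_predictions(y_true_min, y_pred_min, dynamic_range_min):
    """Summary metrics for predicted vs. measured RIT, both in minutes.
    dynamic_range_min is the dynamic range of the RMDA test protocol,
    supplied by the caller."""
    y_true = np.asarray(y_true_min, dtype=float)
    y_pred = np.asarray(y_pred_min, dtype=float)
    mae = float(np.mean(np.abs(y_pred - y_true)))
    rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
    pearson_r = float(np.corrcoef(y_true, y_pred)[0, 1])
    return {
        "MAE_min": mae,
        "MAE_pct_dynamic_range": 100.0 * mae / dynamic_range_min,
        "RMSE_min": rmse,
        "Pearson_r": pearson_r,
    }
```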
Figure 2.
 
Performance of deep learning models by anatomic location. (A) Training curves for two different anatomic locations (blue and orange curves) by root mean squared error (RMSE) and mean absolute error (MAE); shaded regions show 95% confidence intervals over repeated training sessions. The anatomic positions are indicated by the two dotted lines of corresponding color in panel (B). (B) Lowest error on the foveal B-scan by eccentricity in millimeters and RMSE loss, with lower values indicating higher performance. The fovea is labeled with the white arrow.
Figure 2B plots RMSE across an entire foveal B-scan. This curve decreases smoothly from highest values at approximately 2 mm eccentricity to lowest values (i.e., most accurate RIT predictions) at 0.34 mm, within a central area 1 mm in diameter (0.5 mm radius, or eccentricity from the foveal center). This region corresponds to the central subfield of the ETDRS grid, commonly used in clinical and epidemiologic studies,36 and includes the all-cone fovea (350 µm diameter) with a rim of low rod density. In the macular retina immediately surrounding the fovea, cone density steadily declines and rod density steadily increases.8
Using images from the test set centered on 0.34 mm (1.2 degrees) nasal eccentricity, the relative impact of systematic mean-occlusion-based perturbations was assessed on the trained models. For each pixel position, an occlusion mask was placed using the mean value of the occlusion window, and inference was performed, where the predicted RIT was the nine-model ensemble average. The deep learning model's dependence on specific SD-OCT features was identified by systematically perturbing the input images and testing the model error. The resulting differences in the RIT predictions were compared against the baseline RIT prediction without perturbations, identifying the specific SD-OCT signatures that the models were most dependent upon for predicting RIT. Because RIT is a continuous value, both the direction and magnitude of the perturbed inferences were measured. The specific SD-OCT signatures of different areas that caused a longer, more pathologic RIT were identified.
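The nine-model ensemble average used for the occlusion inference above can be sketched as follows (the helper name is an assumption): each perturbed or unperturbed window is scored by every retained model, and the scaled-RIT predictions are averaged.

```python
import numpy as np

def ensemble_predict_scaled_rit(models, batch):
    """Average the scaled-RIT outputs of the nine independently initialized,
    retained models for a batch of (64, 256, 1) windows."""
    preds = np.stack([m.predict(batch, verbose=0)[:, 0] for m in models], axis=0)
    return preds.mean(axis=0)  # shape: (batch_size,)
```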
Visual observation of the test set results showed that the model was reliant on two hyporeflective regions that bounded the ellipsoid zone (EZ) of the inner segment (Fig. 3). The first region of interest represents the myoid portion of the photoreceptor inner segments (above the EZ), and the second represents a relatively hyporeflective area below the EZ and above the interdigitation zone (IZ), between the outer segment tips and the RPE apical processes.37 The areas associated with an increase of RIT were in the myoid zone, whereas areas associated with a decrease of RIT were localized to apical and basal aspects of the RPE cell body, possibly including the choriocapillaris basally. Examination of high-resolution extractions from these areas in the test set revealed that subtle reflectivity changes in these areas were correlated with the RIT (Fig. 4).
Figure 3.
 
Visualization of deep learning features from the test set. The original spectral domain optical coherence tomography (SD-OCT) scans (in mm) used by the deep learning model to predict rod intercept time (RIT) are shown in (A, D, G). Panels (B, E, H) show the magnitude of the difference between the perturbed and baseline predictions caused by occlusion of each possible pixel position, with red showing elongation and blue showing shortening of the RIT. The corresponding overlays are shown in (C, F, I) in relation to the ellipsoid zone (EZ).
Figure 4.
 
Correlation of hyporeflective bands with rod intercept time (RIT). Panel (A) shows a reference image of the external limiting membrane (ELM), the ellipsoid zone (EZ), the interdigitation zone (IZ), and the retinal pigment epithelium-Bruch's membrane (RPE-BrM) on spectral domain optical coherence tomography (SD-OCT). Three examples of low RIT (B), medium RIT (C), and high RIT (D), sampled randomly from the test set, are shown with high-resolution insets (red boxes) and the RIT in minutes. The IZ, which is apparent in the reference figure (also from this population), is not apparent in any of the randomly sampled figures. Further, the gap between the RPE-BrM and the EZ is more hyper-reflective in (C, D) than in (B). Blurring of hyporeflective bands superficial and deep to the EZ correlates with RIT.
Discussion
Advancements in artificial intelligence with the advent of deep learning have revolutionized biomedical image analysis. Although many applications of deep learning models have been shown to reach expert-level consensus for automated diagnosis and feature segmentation, current applications mainly recapitulate human understanding of diseases. In our study, we demonstrate a novel framework for localization and identification of biomarkers using deep learning by first restricting information to anatomically registered locations and subsequently applying visualization techniques. By training a deep learning model with 1218 SD-OCT volumes paired with RMDA measurements, we isolated the relevant imaging features corresponding to the RMDA in a label-agnostic way. Thus, we discovered hyporeflective outer retinal bands, in a specific topography, as a novel structural basis for a functional biomarker of incipient AMD. Our use of deep learning in this study is additionally novel relative to recent publications in ophthalmology in that it focuses on mechanistic questions rather than automated diagnosis.25,38–42 Interestingly, our findings point to specific laminar and topographic correlations within the retina for delayed RMDA, over and above person-level associations such as genetic predisposition6 and plasma metabolites.43–46
Human retina at the interface of aging and early AMD is especially suited for the demonstrated framework of novel biomarker discovery. The eye is the brain's camera, and in it the photoreceptors and supporting cells (glia, vascular endothelium, and RPE) are deployed with high geometric precision, similar to a charge-coupled device chip. Comprehensive histologic mapping studies show that humans have a cone-only fovea with the point of highest cone photoreceptor density in the foveal center.8 Rods are absent from this center. In young adults, rods outnumber cones 4:1 at 0.5 to 1.5 mm eccentricity9 and crest in an elliptical ring at 3 to 5 mm encircling the optic nerve head. In eyes of older adults like those included in our data, cones are stable in number and rods decline by 30% in the 0.5 to 1.5 mm ring, which includes precisely the location of our biomarker.
What are the anatomic correlates of the OCT signatures identified as associated with delayed RMDA? Each reflective band in the outer retina represents the precise horizontal alignment of vertically compartmentalized cells (i.e. photoreceptors, RPE, and Müller glia).47 One prominent landmark is the hyperreflective EZ, which in commercial SD-OCT represents the mitochondria-rich ellipsoid of photoreceptor inner segments. In Figure 3, the EZ is flanked above and below with hyporeflective bands that are predictive of RIT. The upper of these hyporeflective bands identified in this study represents the myoid portion of photoreceptor inner segments, and is identified as the “myoid zone of the photoreceptors” in the SD-OCT clinical lexicon.37 This part of the cell contains Golgi apparatus and ribosomes and is known to shorten in AMD.48 The lower of the two hyporeflective bands (visible in Fig. 3 but not in Fig. 4) is unnamed in the clinical lexicon, but is located below the EZ and above the IZ, between the outer segment tips and the RPE apical processes. The IZ is a hyper-reflective band that involves photoreceptor outer segments and the delicate apical processes of the RPE, which contain melanosomes.49 The difference between Figures 3 and 4 suggests that in many eyes with longer RIT, the region between the EZ and RPE-Bruch's membrane bands (containing both this unnamed hyporeflective region of interest and the IZ) becomes variably reflective, shorter, or both, thus blunting distinctions between the EZ and the apical RPE on OCT imaging. Whether this is due to altered apical processes or outer segments or both cannot be determined with the resolution of these SD-OCT images. Other OCT technologies with greater axial resolution50 may be useful in addressing this question. 
The regional specificity of the discovered biomarker near the fovea deserves comment in two ways. First, the SD-OCT B-scan used for training did not pass through the retinal location used for RMDA testing (5 degrees, or 1.44 mm, superior to the fovea on the vertical meridian). Second, the most accurate anatomic correlates for rod-mediated vision included the fovea, an area containing only cones, and extended to the adjoining rod-dominant parafovea. These seeming contradictions occur because rods at the RMDA testing location are impacted by pathologic changes in the central macular region identified by the model.
RMDA is a readout of age-related changes and pathology in the underlying choriocapillaris endothelium and Bruch's membrane, representing the retinoid resupply route from the circulation. A sequence of age-related changes in these support tissues leading to soft drusen and advanced AMD pathology has been elucidated by ultrastructural studies, histochemistry, lipid profiling, gene expression, cell biology, and in vivo clinical imaging. This sequence is most prominent under the fovea, with a spread within the central 3 mm diameter of the macula that includes precisely the area identified by the deep learning model. In fact, compared with other retinal locations, the presence and growth of drusen concentrated under the fovea have the greatest effect size in predicting 10-year risk of neovascularization or atrophy (relative risk at 10 years = 26.5 for baseline drusen in the central 1 mm; 8.6 for drusen at 0.5–1.5 mm eccentricity).51 An Oil Spill model of drusen formation has recently been proposed52,53 as a late-life sequela of plasma high-density lipoproteins (HDLs) delivering xanthophyll carotenoid pigments (lutein and zeaxanthin) to foveal cells, in particular the Müller glia that extend processes laterally within the inner and outer plexiform layers.54 Cones themselves are sustained by these Müller glia, which are in turn supported by retinal capillaries at the edge of the avascular zone. The rods are relatively vulnerable because they are more dependent on the choriocapillaris-Bruch's membrane-RPE complex than are the cones. This hypothesis, integrating drusen biology with retinal neuroscience52,55 to explain both rod vulnerability and cone resilience,9 incorporates multiple evidence lines from human biology, including sequence variants in AMD-associated HDL genes.46 Parts of this hypothesis remain to be confirmed, and it does not exclude mechanisms with pan-retinal or systemic underpinnings (e.g. inflammation). It does emphasize a local-ness of AMD dysfunction that is best explained by heretofore unrecognized aspects of outer retinal cell physiology.
Strengths of the current study include the use of a functional training target with deep learning models, as opposed to human expert-derived disease classifications, and a de novo, agnostic approach to image analysis. The use of RIT from RMDA testing allowed discovery of previously undescribed biomarkers rather than simply recapitulating human understanding of disease. The study patients were drawn from a carefully selected cohort spanning aging and early AMD disease severities, allowing for investigations into the pathophysiology of AMD. If there were no association between eccentricity and the capacity of the deep learning models to predict RIT, a flat line would have been observed in Figure 2B. Instead, a gradual increase in predictability was found centrally, and the choice of the foveal B-scan allowed for unbiased hypothesis testing outside of the RMDA stimulus region. It is important to note that our deep learning framework shows for the first time in vivo a retinal area of interest that precisely matches the topography of age-related rod loss discovered histologically over several decades ago.8 In addition, it also validates the idea that rods near the fovea, which are not widely appreciated, are sensitive indicators of their support system, critical in diagnosing and understanding the pathophysiology of early AMD.9 Testing existing theories of disease is crucial for advancing our knowledge of AMD and allowing future therapeutic options; the agnostic nature of deep learning is particularly suitable for this task.
Limitations of this study include the possibility that more than one area in a single B-scan through the fovea is important for predicting RIT. In the first step of the analysis, the areas were restricted to narrow windows of the foveal B-scan to allow the deep learning models access to high-resolution information. Although one way to circumvent this limitation is to train models with the full B-scan image, current limitations in computer hardware prevent using the full image at native resolution for training, and aggressive downsampling may limit biomarker identification. In addition, the RMDA stimulus area was set to an area outside of the foveal B-scan. As with many biomarkers, the discovered features may correlate with the RIT instead of being directly indicative of disease initiation in the choriocapillaris-Bruch's membrane complex. The deep learning model is limited by the resolution of the input images and therefore may miss subtle changes to outer segments or RPE apical processes in this early disease population. We chose mean occlusion as the visualization method in this study because many other available methods are designed for classification problems.56,57 Although occlusion methods are sensitive to the window size and occlusion value, we performed sensitivity analyses showing that the biomarker was robust to these choices. Similarly, the discovered features do not correlate with the location of subretinal drusenoid deposits, an extracellular deposit most abundant at eccentricities only partly captured by the SD-OCT volumes studied here and associated with markedly increased vision loss at more advanced disease stages than present in this study population.33,58 The blue lines in Figure 3 are weaker and less consistent than the red bands, and they may be more prominent in an image dataset derived from a different study design. The scanning parameters used for the data are another potential limitation, as models are sensitive to the distribution of the training input signal. Because the goal of this study was to identify a biomarker internally within our dataset, the parameters should not have affected our results; future studies may need to include a wider range of OCT scanning parameters. Finally, although our model did not achieve performance that would be clinically useful, the model predictions did show moderately high correlation in the test set and enabled uncovering new biomarkers.
Future work includes replication and longitudinal validation of this biomarker in external datasets and the application of this framework to other human diseases. In conclusion, we have demonstrated a new framework for biomarker discovery in human disease using deep learning, applied it to AMD using the RIT from RMDA testing, and discovered a novel biomarker de novo. This biomarker fits with current concepts of AMD pathophysiology by highlighting both the topography and a structural basis for a functional biomarker (RIT). Establishment of biomarkers for the most common form of AMD, for which limited therapy is currently available, will lead to more sensitive imaging-based clinical end points, an acceleration of clinical trials, and new therapeutic interventions. By confirming that RMDA is closely linked to processes in the choriocapillaris-Bruch's membrane-RPE complex that lead to advanced disease, this work supports its use as an outcome measure.
Acknowledgments
Supported by NIH/NEI K23EY029246, R01AG060942, R01AG04212, R01EY029595, R01EY03039, EyeSight Foundation of Alabama, the Dorsett Davis Discovery Fund, Alfreda J. Schueler Trust, and an unrestricted grant, CDA from Research to Prevent Blindness (RPB), and RPB/Lions Clubs International Foundation low vision research grant. The sponsors/funding organizations had no role in the design or conduct of this research. 
Disclosure: A.Y. Lee, US Food and Drug Administration (E), grants from Santen (F), Carl Zeiss Meditec (F), and Novartis (F), personal fees from Genentech (R), Topcon (R), and Verana Health (R), outside of the submitted work. This article does not reflect the opinions of the Food and Drug Administration; C.S. Lee, None; M.S. Blazes, None; J.P. Owen, None; Y. Bagdasarova, None; Y. Wu, None; T. Spaide, None; R.T. Yanagihara, None; Y. Kihara, None; M.E. Clark, None; M.Y. Kwon, None; C. Owsley, is an inventor on the device used to measure dark adaptation in this study; C.A. Curcio, is a stockholder in MacRegen Inc. 
References
Friedman DS, O'Colmain BJ, Muñoz B, et al. Prevalence of age-related macular degeneration in the United States. Arch Ophthalmol. 2004; 122(4): 564–572. [CrossRef] [PubMed]
Rosenfeld PJ, Brown DM, Heier JS, et al. Ranibizumab for neovascular age-related macular degeneration. N Engl J Med. 2006; 355(14): 1419–1431. [CrossRef] [PubMed]
Brown DM, Kaiser PK, Michels M, et al. Ranibizumab versus verteporfin for neovascular age-related macular degeneration. N Engl J Med. 2006; 355(14): 1432–1444. [CrossRef] [PubMed]
Owsley C, McGwin G Jr, Clark ME, et al. Delayed rod-mediated dark adaptation is a functional biomarker for incident early age-related macular degeneration. Ophthalmology. 2016; 123(2): 344–351. [CrossRef] [PubMed]
Chen KG, Alvarez JA, Yazdanie M, et al. Longitudinal study of dark adaptation as a functional outcome measure for age-related macular degeneration. Ophthalmology. 2019; 126(6): 856–865. [CrossRef] [PubMed]
Mullins RF, McGwin G Jr, Searcey K, et al. The ARMS2 A69S polymorphism is associated with delayed rod-mediated dark adaptation in eyes at risk for incident age-related macular degeneration. Ophthalmology. 2019; 126(4): 591–600. [CrossRef] [PubMed]
Kosnik W, Winslow L, Kline D, Rasinski K, Sekuler R. Visual changes in daily life throughout adulthood. J Gerontol. 1988; 43(3): P63–P70. [CrossRef] [PubMed]
Curcio CA, Sloan KR, Kalina RE, Hendrickson AE. Human photoreceptor topography. J Comp Neurol. 1990; 292(4): 497–523. [CrossRef] [PubMed]
Curcio CA, McGwin G Jr, Sadda SR, et al. Functionally validated imaging endpoints in the Alabama study on early age-related macular degeneration 2 (ALSTAR2): design and methods. BMC Ophthalmol. 2020; 20(1): 196. [CrossRef] [PubMed]
Curcio CA, Millican CL, Allen KA, Kalina RE. Aging of the human photoreceptor mosaic: evidence for selective vulnerability of rods in central retina. Invest Ophthalmol Vis Sci. 1993; 34(12): 3278–3296. [PubMed]
Steinmetz RL, Haimovici R, Jubb C, Fitzke FW, Bird AC. Symptomatic abnormalities of dark adaptation in patients with age-related Bruch's membrane change. Br J Ophthalmol. 1993; 77(9): 549–554. [CrossRef] [PubMed]
Lamb TD, Pugh EN. Dark adaptation and the retinoid cycle of vision. Prog Retin Eye Res. 2004; 23(3): 307–380. [CrossRef] [PubMed]
Curcio CA, Johnson M, Huang J-D, Rudolf M. Apolipoprotein B-containing lipoproteins in retinal aging and age-related macular degeneration. J Lipid Res. 2010; 51(3): 451–467. [CrossRef] [PubMed]
Ramrattan RS, van der Schaft TL, Mooy CM, de Bruijn WC, Mulder PG, de Jong PT. Morphometric analysis of Bruch's membrane, the choriocapillaris, and the choroid in aging. Invest Ophthalmol Vis Sci. 1994; 35(6): 2857–2864. [PubMed]
Mullins RF, Schoo DP, Sohn EH, et al. The membrane attack complex in aging human choriocapillaris: relationship to macular degeneration and choroidal thinning. Am J Pathol. 2014; 184(11): 3142–3153. [CrossRef] [PubMed]
Khanifar AA, Koreishi AF, Izatt JA, Toth CA. Drusen ultrastructure imaging with spectral domain optical coherence tomography in age-related macular degeneration. Ophthalmology. 2008; 115(11): 1883–1890.e1. [CrossRef] [PubMed]
Christenbury JG, Folgar FA, O'Connell RV, et al. Progression of intermediate age-related macular degeneration with proliferation and inner retinal migration of hyperreflective foci. Ophthalmology. 2013; 120(5): 1038–1045. [CrossRef] [PubMed]
Nassisi M, Lei J, Abdelfattah NS, et al. OCT risk factors for development of late age-related macular degeneration in the fellow eyes of patients enrolled in the HARBOR study. Ophthalmology. 2019; 126(12): 1667–1674. [CrossRef] [PubMed]
Waldstein SM, Vogl W-D, Bogunovic H, Sadeghipour A, Riedl S, Schmidt-Erfurth U. Characterization of drusen and hyperreflective foci as biomarkers for disease progression in age-related macular degeneration using artificial intelligence in optical coherence tomography. JAMA Ophthalmol. 2020; 138(7): 740–747. [CrossRef] [PubMed]
Echols BS, Clark ME, Swain TA, et al. Hyperreflective foci and specks are associated with delayed rod-mediated dark adaptation in non-neovascular age-related macular degeneration. Ophthalmol Retina. 2020; 4(11): 1059–1068. [CrossRef] [PubMed]
Curcio CA, Zanzottera EC, Ach T, Balaratnasingam C, Freund KB. Activated retinal pigment epithelium, an optical coherence tomography biomarker for progression in age-related macular degeneration. Invest Ophthalmol Vis Sci. 2017; 58(6): BIO211–BIO226. [PubMed]
Yiu G, Chung SH, Mollhoff IN, et al. Long-term evolution and remodeling of soft drusen in rhesus macaques. Invest Ophthalmol Vis Sci. 2020; 61(2): 32. [CrossRef] [PubMed]
Samy CN, Hirsch J. Comparison of human and monkey retinal photoreceptor sampling mosaics. Vis Neurosci. 1989; 3(3): 281–285. [CrossRef] [PubMed]
Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172(5): 1122–1131.e9. [CrossRef] [PubMed]
Lee CS, Baughman DM, Lee AY. Deep learning is effective for the classification of OCT images of normal versus age-related macular degeneration. Ophthalmol Retina. 2017; 1(4): 322–327. [CrossRef] [PubMed]
De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018; 24(9): 1342–1350. [CrossRef] [PubMed]
Ting DSW, Cheung CY-L, Lim G, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA. 2017; 318(22): 2211. [CrossRef] [PubMed]
Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018; 1: 39. [CrossRef] [PubMed]
Kihara Y, Heeren TFC, Lee CS, et al. Estimating retinal sensitivity using optical coherence tomography with deep-learning algorithms in macular telangiectasia type 2. JAMA Netw Open. 2019; 2(2): e188029. [CrossRef] [PubMed]
Lee CS, Tyring AJ, Wu Y, et al. Generating retinal flow maps from structural optical coherence tomography with artificial intelligence. Sci Rep. 2019; 9(1): 5694. [CrossRef] [PubMed]
Crosson JN, Swain TA, Clark ME, et al. Retinal pathologic features on OCT among eyes of older adults judged healthy by color fundus photography. Ophthalmol Retina. 2019; 3(8): 670–680. [CrossRef] [PubMed]
Jackson GR, Edwards JG. A short-duration dark adaptation protocol for assessment of age-related maculopathy. J Ocul Biol Dis Infor. 2008; 1(1): 7–11. [CrossRef] [PubMed]
Flamendorf J, Agrón E, Wong WT, et al. Impairments in dark adaptation are associated with age-related macular degeneration severity and reticular pseudodrusen. Ophthalmology. 2015; 122(10): 2053–2062. [CrossRef] [PubMed]
Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: Teh YW, Titterington M, eds. Journal of Machine Learning Research. 2010; 9: 249–256.
Dozat T. Incorporating Nesterov Momentum into Adam. Available at: http://cs229.stanford.edu/proj2015/054_report.pdf.
Early Treatment Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs–an extension of the modified Airlie House classification. ETDRS report number 10. Ophthalmology. 1991; 98(5 suppl): 786–806. [PubMed]
Staurenghi G, Sadda S, Chakravarthy U, Spaide RF, International Nomenclature for Optical Coherence Tomography (IN•OCT) Panel. Proposed lexicon for anatomic landmarks in normal posterior segment spectral-domain optical coherence tomography: the IN•OCT consensus. Ophthalmology. 2014; 121(8): 1572–1578. [CrossRef] [PubMed]
Peng Y, Dharssi S, Chen Q, et al. DeepSeeNet: a deep learning model for automated classification of patient-based age-related macular degeneration severity from color fundus photographs. Ophthalmology. 2019; 126(4): 565–575. [CrossRef] [PubMed]
Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017; 135(11): 1170–1176. [CrossRef] [PubMed]
Grassmann F, Mengelkamp J, Brandl C, et al. A deep learning algorithm for prediction of age-related eye disease study severity scale for age-related macular degeneration from color fundus photography. Ophthalmology. 2018; 125(9): 1410–1420. [CrossRef] [PubMed]
Burlina P, Joshi N, Pacheco KD, Freund DE, Kong J, Bressler NM. Utility of deep learning methods for referability classification of age-related macular degeneration. JAMA Ophthalmol. 2018; 136(11): 1305. [CrossRef] [PubMed]
Burlina PM, Joshi N, Pacheco KD, Freund DE, Kong J, Bressler NM. Use of deep learning for detailed severity characterization and estimation of 5-year risk among patients with age-related macular degeneration. JAMA Ophthalmol. 2018; 136(12): 1359. [CrossRef] [PubMed]
Rowan S, Jiang S, Korem T, et al. Involvement of a gut-retina axis in protection against dietary glycemia-induced age-related macular degeneration. Proc Natl Acad Sci USA. 2017; 114(22): E4472–E4481. [CrossRef] [PubMed]
Laíns I, Kelly RS, Miller JB, et al. Human plasma metabolomics study across all stages of age-related macular degeneration identifies potential lipid biomarkers. Ophthalmology. 2018; 125(2): 245–254. [CrossRef] [PubMed]
Brown CN, Green BD, Thompson RB, den Hollander AI, Lengyel I, EYE-RISK consortium. Metabolomics and age-related macular degeneration. Metabolites. 2018; 9(1): 4. [CrossRef]
Burgess S, Davey Smith G. Mendelian randomization implicates high-density lipoprotein cholesterol–associated mechanisms in etiology of age-related macular degeneration. Ophthalmology. 2017; 124(8): 1165–1174. [CrossRef] [PubMed]
Spaide RF, Curcio CA. Anatomical correlates to the bands seen in the outer retina by optical coherence tomography: literature review and model. Retina. 2011; 31(8): 1609. [CrossRef] [PubMed]
Litts KM, Messinger JD, Freund KB, Zhang Y, Curcio CA. Inner segment remodeling and mitochondrial translocation in cone photoreceptors in age-related macular degeneration with outer retinal tubulation. Invest Ophthalmol Vis Sci. 2015; 56(4): 2243–2253. [CrossRef] [PubMed]
Pollreisz A, Neschi M, Sloan KR, et al. Atlas of human retinal pigment epithelium organelles significant for clinical imaging. Invest Ophthalmol Vis Sci. 2020; 61(8): 13. [CrossRef] [PubMed]
Shirazi MF, Brunner E, Laslandes M, Pollreisz A, Hitzenberger CK, Pircher M. Visualizing human photoreceptor and retinal pigment epithelium cell mosaics in a single volume scan over an extended field of view with adaptive optics optical coherence tomography. Biomed Opt Express. 2020; 11(8): 4520. [CrossRef] [PubMed]
Wang JJ, Rochtchina E, Lee AJ, et al. Ten-year incidence and progression of age-related maculopathy: the blue Mountains Eye Study. Ophthalmology. 2007; 114(1): 92–98. [CrossRef] [PubMed]
Curcio CA. Antecedents of soft drusen, the specific deposits of age-related macular degeneration, in the biology of human macula. Invest Ophthalmol Vis Sci. 2018; 59(4): AMD182–AMD194. [CrossRef] [PubMed]
Curcio CA, Johnson M, Rudolf M, Huang J-D. The oil spill in ageing Bruch membrane. Br J Ophthalmol. 2011; 95(12): 1638–1645. [CrossRef] [PubMed]
Li B, George EW, Rognon GT, et al. Imaging lutein and zeaxanthin in the human retina with confocal resonance Raman microscopy. Proc Natl Acad Sci USA. 2020; 117(22): 12352–12358. [CrossRef] [PubMed]
Curcio CA. Soft drusen in age-related macular degeneration: biology and targeting via the oil spill strategies. Invest Ophthalmol Vis Sci. 2018; 59(4): AMD160–AMD181. [CrossRef] [PubMed]
Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. 2017 IEEE International Conference on Computer Vision (ICCV). Available at: https://arxiv.org/abs/1610.02391.
Simonyan K, Vedaldi A, Zisserman A. Deep inside convolutional networks: visualising image classification models and saliency maps. Available at: https://arxiv.org/abs/1312.6034.
Neely D, Zarubina AV, Clark ME, et al. Association between visual function and subretinal drusenoid deposits in normal and early age-related macular degeneration eyes. Retina. 2017; 37(7): 1329–1336. [CrossRef] [PubMed]