Open Access
Articles  |   July 2021
Deep Learning–Based Retinal Nerve Fiber Layer Thickness Measurement of Murine Eyes
Author Affiliations & Notes
  • Rui Ma
    Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
  • Yuan Liu
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
  • Yudong Tao
    Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
  • Karam A. Alawa
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
  • Mei-Ling Shyu
    Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
  • Richard K. Lee
    Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
  • Correspondence: Richard K. Lee, University of Miami Miller School of Medicine, 900 NW 17th Street Miami, FL 33136, USA. e-mail: rlee@med.miami.edu 
Translational Vision Science & Technology July 2021, Vol.10, 21. doi:https://doi.org/10.1167/tvst.10.8.21
Abstract

Purpose: To design a robust, automated method for measuring retinal nerve fiber layer (RNFL) thickness using spectral domain optical coherence tomography (SD-OCT).

Methods: We developed a deep learning–based image segmentation network for automated segmentation of the RNFL in SD-OCT B-scans of mouse eyes. In total, 5500 SD-OCT B-scans (5200 used as training data, with the remaining 300 used as testing data) were used to develop this segmentation network. Postprocessing operations were then applied to the segmentation results to fill discontinuities and remove speckles in the RNFL. Subsequently, a three-dimensional retinal thickness map was generated by z-stacking the 100 processed segmentation B-scans of each volume. Finally, the average absolute difference between the algorithm-predicted RNFL thickness and the ground truth from manual human segmentation was calculated.

Results: The proposed method achieves an average dice similarity coefficient of 0.929 on the SD-OCT segmentation task and an average absolute difference of 0.0009 mm on the thickness estimation task for the testing dataset. We also evaluated our segmentation algorithm on another biological dataset of SD-OCT volumes acquired after optic nerve crush injury. The predicted and manually measured retinal thickness values were shown to be comparable.

Conclusions: Experimental results demonstrate that our automated segmentation algorithm reliably predicts the RNFL thickness in SD-OCT volumes of mouse eyes compared to laborious and more subjective manual SD-OCT RNFL segmentation.

Translational Relevance: Automated segmentation using a deep learning–based algorithm for murine eye OCT effectively and rapidly produced nerve fiber layer thicknesses comparable to manual segmentation.

Introduction
Optical coherence tomography (OCT) is an imaging technology that uses light waves to generate cross-sectional images of living tissues. OCT provides high-resolution images using low-coherence interferometry, where low-coherence light is combined with a second beam (reference beam) to reduce background noise caused by scattered light. In addition to high image quality, OCT is also a noninvasive in vivo imaging technique capable of capturing micron-scale structural anatomy. 
For the human eye, OCT is currently the most commonly used imaging modality in clinical practice for diagnosing and managing ocular diseases. Using OCT, ophthalmologists can identify distinct layers of the retina, cornea, and optic nerve and measure their thickness, shape, and size with three-dimensional (3D) reconstructions of two-dimensional B-scan data stacked into a 3D cube volume. These measurements have become the gold standard for diagnosis of retinal diseases (including age-related macular degeneration and diabetic eye disease1–3), glaucoma (through the measurement of the retinal nerve fiber layer [RNFL] and ganglion cell complex4), corneal disease (including keratoconus,5 anterior segment neoplasms,6 and postsurgical corneal complications where the corneal view is disrupted7), and neuro-ophthalmic diseases.8 The retina is also part of the central nervous system, and changes in retinal structure such as the RNFL have been associated with central nervous system disorders such as stroke, multiple sclerosis, Parkinson's disease, and Alzheimer's disease.9 In optic neuropathies such as glaucoma, RNFL quantification can also be used to monitor disease progression and is an invaluable tool in clinical practice. 
Before this imaging technology, an ophthalmologist would diagnose retinal disease by indirect ophthalmoscopy, viewing the retina to determine whether visual structural cues suggesting retinal thickening, retinal edema, or subretinal fluid were present. Similarly, an ophthalmologist would look at the optic nerve head (ONH) and estimate the amount of neural retinal rim tissue to gauge differences in the RNFL tissue. With the advent of OCT, the ophthalmologist now has quantitative and qualitative measures of retinal and RNFL thickness at micron resolution, versus visual estimates, which varied when patients were observed by the same ophthalmologist at different examination times or by different ophthalmologists. 
Basic ophthalmic research for retina and glaucoma relies heavily not only on the function of the retina and the optic nerve but also on the structural anatomy of these ocular tissues, which are readily analyzed by OCT. However, because of the significantly smaller size of the murine eye (the most frequently used experimental model for human eye disease), its much larger crystalline lens, and its different optics compared to the human eye, the same OCT imaging analysis software used for human clinical studies cannot accurately and reliably analyze the lower signal strength images acquired from the murine eye. Thus, segmentation of the murine retina and RNFL is currently performed most accurately and reliably by manual human segmentation, which is laborious and extremely time consuming. 
A significant need exists for automated OCT segmentation and 3D image analysis of the retina and optic nerve. Using a deep learning approach with a newly developed automated segmentation analysis program, we demonstrate the accuracy and reliability of this algorithm for analyzing and quantitating the RNFL, creating a 3D topographic map of the spatial distribution of nerve fiber layer thickness in normal eyes and in induced ophthalmic pathologies, such as optic nerve crush (ONC)–induced optic neuropathy. Finally, automating OCT segmentation with improved retinal thickness measurements will significantly accelerate the rate of eye research for retinal diseases, glaucoma, and neuroprotection in ocular models of disease. 
Methods
Overview
We have developed a deep learning–based RNFL thickness estimation algorithm. This method uses a convolutional neural network–based image segmentation algorithm to segment the RNFL regions in OCT B-scans of mouse eyes. Morphological operations were then applied to fill any gaps and remove any speckles in the segmentation results. Finally, the RNFL thickness was measured from the processed segmentation results. 
Materials and Methods
Animals
Mice were purchased from Jackson Laboratory (Bar Harbor, ME, USA) and maintained according to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. All mice were kept on a 12-hour light/dark cycle and were fed standard rodent chow available as desired. All animal procedures were approved by the University of Miami Institutional Animal Care and Use Committee. 
SD-OCT Mouse Imaging
An SD-OCT system designed for small animal imaging (Bioptigen, Research Triangle Park, NC, USA) was used for in vivo imaging of the mouse retina. The axial resolution of the system is approximately 5 µm in retinal tissue. Mice were anesthetized with an intraperitoneal injection of a ketamine/xylazine mixture and then placed onto a positioning stage. The head was fixated onto a stabilizing bar, and a heating blanket was placed under the anesthetized mouse. Eyes were dilated with topical tropicamide, and the corneas were kept moist during imaging with regular application of artificial tears (Systane, Alcon, TX, USA). Raster scans centered on the optic disc consisted of 1000 × 100 (horizontal × vertical) depth scans covering a 1 × 1-mm area of the mouse retina. Alignment of the mouse took less than one minute, and image acquisition took approximately two seconds per eye. 
Optic Nerve Crush
ONC injury was performed as described previously (Park 2008). Thy1-ChR2-eYFP and C57BL/6J mice at age two months were anesthetized using a ketamine/xylazine cocktail injected intraperitoneally, and eyes were locally anesthetized using topical 0.5% proparacaine hydrochloride. A surgical opening was made in the conjunctiva immediately behind and above the eyeball, and the eye muscles were gently retracted to expose the optic nerve. Dumont no. 5 forceps were used to crush the nerve approximately 0.5 to 1 mm posterior to the optic disk without damaging the retinal vessels or blood supply. 
SD-OCT Manual Segmentation
SD-OCT imaging files were converted and segmented using MATLAB-based software (MathWorks, Inc., Natick, MA, USA) to generate retinal thickness topographic heat maps, as we have previously published.10 The RNFL measured here is composed of the retinal ganglion cell (RGC) layer and the inner plexiform layer. The top and bottom boundaries of the RNFL were manually annotated by human experts. From a total of 55 OCT volumes of non-crushed normal eyes, 52 volumes were randomly chosen as training data, and the remaining three were used as testing data to develop the RNFL segmentation network. Each volume contains 100 B-scans. 
To further evaluate our model, another 35 volumes of OCT images from five crushed eyes at different time points were also used. The thickness of the RNFL was measured at baseline and up to six weeks after ONC injury. These OCT volumes were manually segmented with the same methodology as those from non-crushed eyes. 
Retina Nerve Fiber Layer Segmentation
Given an OCT B-scan X, the goal of the RNFL segmentation task is to assign each pixel x  ∈  X a class label y, whose value can be 0 or 1, indicating that x is outside or inside the RNFL, respectively. In this study, we adopt a well-known image segmentation network, ResUNet,11 as the segmentation network. The detailed network architecture of ResUNet is shown in Figure 1. The network takes X as input and outputs a binary mask Y representing the RNFL. Similar to the original U-Net,12 ResUNet uses an encoder-decoder architecture with skip connections in between. The resolution of X is halved five times through max-pooling operations in the encoder module to capture contextual information at different resolution levels and then recovered to the original resolution through five up-sampling operations in the decoder module to enable precise localization. Moreover, the encoder module of ResUNet uses residual blocks, which are stacked convolutional layers with skip connections, to address the vanishing gradient problem.13 Skip connections between the encoder and decoder recover spatial information lost during down-sampling. 
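The resolution bookkeeping described above can be sketched in a few lines of plain Python. This is a toy illustration of the halving/doubling pattern, not the actual network, and the 512-pixel input size is a hypothetical example:

```python
def unet_resolutions(size, levels=5):
    """Track feature-map resolution through a U-Net-style encoder-decoder:
    each encoder level halves the resolution (2x2 max-pooling), and each
    decoder level doubles it back (up-sampling), so the skip connection
    at every level joins encoder and decoder maps of equal size."""
    encoder = [size]
    for _ in range(levels):
        size //= 2                # max-pooling halves the resolution
        encoder.append(size)
    decoder = []
    for _ in range(levels):
        size *= 2                 # up-sampling doubles it back
        decoder.append(size)
    return encoder, decoder

enc, dec = unet_resolutions(512)
```

For a 512-pixel input, the encoder produces maps of 512, 256, 128, 64, 32, and 16 pixels, and the decoder recovers 32 through 512; each matching pair of sizes is where a skip connection passes spatial detail from encoder to decoder.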
Figure 1.
 
Segmentation of nerve fiber layer in an OCT B-scan.
During training, horizontal flipping of the input images is used as a data augmentation technique to enhance the generalization performance of the segmentation network. The network is trained according to a pixel-wise cross entropy loss LCE, which is written as  
\begin{eqnarray*}{L_{CE}} = \mathop \sum \limits_{i = 1}^N \left( { - {y_i}\log \widehat {{y_i}} - \left( {1 - {y_i}} \right)\log \left( {1 - \widehat {{y_i}}} \right)} \right)\end{eqnarray*}
where yi and \(\widehat {{y_i}}\) are corresponding pixels of the ground truth and predicted binary masks, respectively, and N is the total number of pixels in each mask. A transfer learning technique is also applied by initializing the encoder with pretrained ResNet-5013 weights. The network was trained with a fixed learning rate of 0.001 using the stochastic gradient descent optimizer for 2000 epochs on an Nvidia Tesla P100 GPU. 
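As a concrete illustration, the loss above can be computed directly from pixel values. The sketch below uses plain Python with hypothetical toy inputs and clamps predictions away from 0 and 1 to avoid log(0):

```python
import math

def pixelwise_cross_entropy(y_true, y_pred, eps=1e-7):
    """Pixel-wise binary cross-entropy L_CE summed over all N pixels.

    y_true: ground-truth pixel labels (0 or 1); y_pred: predicted
    probabilities in (0, 1). eps guards against taking log(0)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical safety
        total += -y * math.log(p) - (1 - y) * math.log(1 - p)
    return total

# three pixels: two correctly confident, one less so
loss = pixelwise_cross_entropy([1, 0, 1], [0.9, 0.1, 0.8])
```

In training frameworks, the same quantity is computed over all pixels of a batch of masks; this scalar form shows exactly which terms each pixel contributes.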
After training, the segmentation network takes an OCT B-scan from the testing dataset as input and predicts a binary mask representing its RNFL region. 
Postprocessing
The predicted binary masks from the segmentation network are further processed before they can be used for thickness estimation because they may contain small gaps or speckles, as shown in the left column of Figure 2. Therefore, we apply morphological operations on these binary masks to fill the gaps and remove the speckles, as shown in the right column of Figure 2.
Figure 2.
 
Effects of postprocessing.
Postprocessing first detects the boundaries of all separate objects in the predicted binary mask using the “findContours” function in the OpenCV library.14 Provided the segmentation network has generated reasonable output, this step identifies the boundaries of both the RNFL and any speckles (false positives). Next, speckle removal is performed by keeping only the boundary enclosing the greatest number of pixels, which is the one representing the RNFL. Finally, gap filling is performed by setting the values of all pixels inside this boundary to 1. 
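The two postprocessing operations can be sketched without OpenCV. The pure-Python stand-in below substitutes a breadth-first-search connected-component pass for `findContours`, keeps the largest component (speckle removal), and fills each column between the component's top and bottom rows (gap filling); the tiny mask is a hypothetical example:

```python
from collections import deque

def postprocess(mask):
    """Keep the largest 4-connected component of a 0/1 mask, then fill
    each of its columns between the top and bottom boundary pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])   # BFS over one component
                seen[r][c] = True
                while q:
                    i, j = q.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            q.append((ni, nj))
                if len(comp) > len(best):       # largest component = RNFL
                    best = comp
    out = [[0] * w for _ in range(h)]
    cols = {}
    for i, j in best:                           # column-wise top/bottom extent
        lo, hi = cols.get(j, (i, i))
        cols[j] = (min(lo, i), max(hi, i))
    for j, (lo, hi) in cols.items():
        for i in range(lo, hi + 1):             # fill interior gaps
            out[i][j] = 1
    return out

# a band with one interior gap, plus a lone speckle in the corner
mask = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [1, 0, 1, 1, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 1]]
clean = postprocess(mask)
```

After postprocessing, the hole inside the band is filled and the isolated corner pixel is gone, mirroring the right column of Figure 2.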
Thickness Estimation
After the postprocessing step, a 3D retinal thickness map of every SD-OCT volume is generated by aggregating the RNFL thickness profiles from its 100 processed segmentation results, as shown in Figure 3. The number of white pixels in each column (A-scan) of a binary mask (B-scan) is counted and converted into physical thickness in millimeters (mm), yielding a list of 1000 thickness values per B-scan. Next, the thickness lists from the 100 B-scans in an SD-OCT volume are stacked together to form a matrix of size 100 × 1000. This matrix is resized to 1000 × 1000 through linear interpolation, matching the size of the ground truth thickness measurement. A retinal thickness map, with color representing the RNFL thickness at each location, is then plotted. B-scans through the ONH display a convex shape caused by a Bergmeister's papilla and other ONH structures, whose thickness can even exceed that of the whole RNFL. To observe RNFL thickness changes without this confound, we removed a circle with a radius of 150 µm centered on the ONH from each thickness heatmap. 
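The per-B-scan step (count white pixels in each A-scan column, convert to millimeters) can be sketched as follows. The axial pixel pitch used here is a hypothetical placeholder; in practice it comes from the scanner's calibrated depth sampling:

```python
def bscan_thickness_mm(mask, um_per_pixel):
    """Per-A-scan RNFL thickness for one processed B-scan mask: count the
    white (1) pixels in each column, then convert pixel counts to mm."""
    n_cols = len(mask[0])
    counts = [sum(row[c] for row in mask) for c in range(n_cols)]
    return [n * um_per_pixel / 1000.0 for n in counts]  # µm -> mm

# toy 3-row x 2-column mask; with a 10-µm pitch each column spans 2 pixels
thickness = bscan_thickness_mm([[1, 0], [1, 1], [0, 1]], um_per_pixel=10.0)
```

Applying this to each of the 100 B-scans and stacking the resulting lists gives the 100 × 1000 matrix described above, which is then interpolated up to 1000 × 1000.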
Figure 3.
 
Thickness map generated by aggregating 100 segmentation results from an OCT volume.
Results
Performance Metrics
A variety of metrics, including accuracy (ACC), sensitivity (SEN), specificity (SPE), and dice similarity coefficient15 (DSC), are used to evaluate the performance of the OCT image segmentation model. Specifically, ACC, SEN, and SPE are the percentages of correctly classified pixels in the entire resulting binary mask, RNFL, and background, respectively; DSC is another well-known metric for image segmentation tasks that measures the similarity between the resulting and ground truth binary masks. The equations of these metrics are written as:  
\begin{eqnarray*} ACC &\;=& \frac{TP+TN}{TP+TN+FP+FN}\\ SEN &\;=& \frac{TP}{TP+FN}\\ SPE &\;=& \frac{TN}{TN+FP}\\ DSC &\;=& \frac{2\,TP}{2\,TP+FP+FN} \end{eqnarray*}
where TP, TN, FP, and FN are the numbers of true-positive, true-negative, false-positive, and false-negative pixels in a resulting binary mask. Compared with the other metrics, DSC is a more discriminative measure for this task because it accounts for both false positives and false negatives relative to the small RNFL region and is not inflated by the large background. 
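These four metrics can be computed directly from the pixel-level confusion counts; the sketch below uses the standard Dice definition 2TP/(2TP + FP + FN) and hypothetical toy counts:

```python
def segmentation_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity, and Dice similarity coefficient
    from pixel-level true/false positive/negative counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # all pixels classified correctly
    sen = tp / (tp + fn)                    # recall over RNFL pixels
    spe = tn / (tn + fp)                    # recall over background pixels
    dsc = 2 * tp / (2 * tp + fp + fn)       # overlap with the ground truth
    return acc, sen, spe, dsc

# toy mask: 90 RNFL pixels found, 5 missed, 5 spurious, 900 background
acc, sen, spe, dsc = segmentation_metrics(tp=90, tn=900, fp=5, fn=5)
```

Note how a large background drives ACC and SPE toward 1 regardless of RNFL quality, whereas DSC stays sensitive to the FP and FN counts.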
For thickness estimation, the average RNFL thickness in an SD-OCT volume is calculated by averaging the thickness at every position with a positive value in the retinal thickness map. The absolute difference between the predicted average thickness of a volume and its ground truth average thickness is then calculated, and the average absolute difference is obtained by averaging this quantity across all volumes. 
Results on Testing Data of SD-OCT Imaging Dataset
Figure 4 presents multiple input OCT B-scans (left column) from the testing dataset, the predicted RNFL segmentation results (middle column), and the ground truth segmentation results obtained through manual annotation (right column). The predicted segmentation results are very close to the ground truth despite the speckles in the input OCT images, indicating that our segmentation model is robust to the intrinsic noise in OCT images. Moreover, the signal strength in the fourth OCT image is particularly low, with the central part of the RNFL fading out. Nevertheless, our segmentation model generates an almost perfectly matching result despite such defects, which often lead to artifacts. 
Figure 4.
 
Segmentation results from our OCT segmentation network.
Table 1 compares the metrics obtained from our OCT segmentation network, ResUNet with transfer learning, against other approaches: the original U-Net,12 ReLayNet,16 a U-Net variant designed specifically for the OCT segmentation task, Panoptic Feature Pyramid Network (FPN),17 a lightweight, top-performing network for general image segmentation, and our network without pretrained weights. All methods were evaluated on the 300 SD-OCT B-scans of the testing data of the OCT imaging dataset. For RNFL thickness estimation, our method achieves the lowest average absolute thickness difference (DIFF), only 0.0009 mm. For RNFL segmentation, although the average SEN of our model is not the highest, its overall performance is still the best: it achieves the highest average ACC, SPE, and DSC of 0.992, 0.998, and 0.929, respectively. To demonstrate that the performance increase of our method is significant, we conducted a series of two-sample t tests with the hypothesis that each segmentation metric (ACC, SEN, SPE, and DSC) of our method is higher than that of the other methods. Table 2 presents the p values of these tests. 
Table 1.
 
Comparison Between Our OCT Segmentation Network and the Other Approaches
Table 2.
 
P Values From Two-Sample t Tests
Results From an Optic Nerve Crush Dataset
The predicted average thickness values from our thickness estimation framework, compared against the manually measured average RNFL thickness of murine eyes at different time points after ONC injury, are shown in Figure 5. As expected, RNFL thickness generally decreased with the number of weeks after ONC, secondary to retinal ganglion cell death in the retina. However, the thickness does not decrease monotonically with time. For example, the thickness values at “One week” are slightly higher than at “Baseline” in both the “471” and “401” volumes. A previous study18 from our group showed that RNFL thinning lags behind RGC soma loss after ONC; therefore, significant RNFL thickness loss at one week after injury is typically not observed, and even experienced human experts cannot distinguish OCT data from these two time points. We observed the same trend when comparing manually and automatically segmented data, with an overall downward trajectory of RNFL thickness after ONC. 
Figure 5.
 
Comparison between the predicted and measured average thickness of five eyes over time.
The predicted thickness values at one week after ONC are higher than the measured thickness by approximately 0.004 mm. The non-crushed eye data used for training the prediction model were manually segmented by masked human expert A, whereas the crushed eye data were segmented by another masked human expert B, so a fixed observer bias exists between the two sets of annotations. However, this small difference has almost no effect on the general variation of layer thickness and does not affect the measurement of RNFL thickness change between time points. To demonstrate this point, Figure 6 presents a scatterplot of the predicted thickness values against the measured thickness values from 20 volumes. An R-squared value of 0.9846 is obtained by fitting a linear regression model to these data, indicating that the predicted values are strongly linearly correlated with the measured values; thus the predicted thickness values are reliable despite the small offset. 
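For a simple linear fit of one variable on another, R-squared equals the squared Pearson correlation, which can be computed directly from sums of squared deviations. The sketch below uses hypothetical predicted/measured pairs, not the study's data:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple least-squares line fit
    of y on x; for one predictor this equals the squared Pearson r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-deviation
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# hypothetical predicted-vs-measured thickness pairs
r2 = r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```

Values near 1, such as the 0.9846 reported above, indicate that a constant offset between annotators shifts the fitted line but barely affects the linear agreement.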
Figure 6.
 
Relation between predicted thickness and measured thickness.
Six 3D retinal thickness maps generated from animal eye “472 OD” over time are shown in Figure 7. The corresponding B-scan images were taken before ONC injury and at 1, 2, 3, 4, and 5 weeks after ONC. As retinal ganglion cells (RGCs) begin dying by apoptosis after ONC, their dendrites and axons also begin to degenerate, leading to thinning of the RNFL. We clearly observe this phenomenon in our thickness maps, where the later time points demonstrate more blue-coded thinning than earlier ones. Three-dimensional retinal thickness maps provide a more intuitive and rapid interpretation of RNFL thickness change. 
Figure 7.
 
Three-dimensional retinal thickness maps from six volumes of the same eye over time.
Discussion
OCT is a three-dimensional noninvasive in vivo imaging technique that is widely used in clinical and basic science ophthalmic research and human clinical care. The retina is organized into multiple layers, and abnormalities in the layers of the retina are associated with ophthalmic, neurodegenerative, and vascular disorders, such as glaucoma and age-related macular degeneration. Thus, segmentation of retinal layers, which can be used to generate quantitative data, is essential in OCT image analysis applications. Despite the power of SD-OCT to provide highly accurate and temporal in vivo imaging of murine retinal structures, OCT technology for quantitative analysis of murine models of neurodegenerative diseases has been underused because of the limitations associated with OCT image analysis processing. Although numerous methods have been proposed for the automated segmentation of retinal surfaces in human scans,19–23 no rapid, automated, and simple method has been available for murine retina analysis because of the significantly smaller size and different optics of the murine eye compared to the human eye. 
We propose an automated RNFL thickness calculation method for SD-OCT volumes of mouse eyes on the basis of deep learning. The proposed system demonstrates superior performance on the retinal thickness estimation task with a challenging biological dataset of SD-OCT volumes of normal and pathological mouse eyes. Our deep learning–based segmentation algorithm for quantifying the RNFL is very robust relative to the intrinsic noise and weak signal strength in the mouse eyes dataset. This new automated OCT deep learning–based segmentation algorithm will accelerate the pace of ophthalmic research in animal models of eye disease. 
Traditional OCT segmentation methods include deformable model-based methods,24,25 which formulate segmentation as a classification problem based on hand-crafted features of the target structures; graph-based methods,21,26 which transform segmentation into an optimization problem with constraints such as layer smoothness; and contour-modeling methods,27,28 which leverage prior shape information about the target structures. However, most of these methods require prior knowledge about the specific task or hand-crafted features from domain experts, and they may not generalize to all types of OCT image segmentation tasks, especially when the target datasets are not obtained from human eyes. 
In recent years, deep learning techniques29 have demonstrated superior performance for various computer vision tasks, such as video analysis30,31 and disease diagnosis.32,33 Various deep neural network–based OCT segmentation approaches have been proposed. For instance, Shah et al.34 created a convolutional neural network–based approach for segmentation of surfaces in volumes of OCT images with implicitly learned surface smoothness and surface separation models. Roy et al.16 proposed ReLayNet, an end-to-end fully convolutional framework for semantic segmentation of retinal OCT B-scans into seven retinal layers and fluid masses. Pekala et al.35 proposed another OCT segmentation method using a combination of fully convolutional networks based on DenseNet and Gaussian process regression. Islam et al.36 proposed using a variant of the feature pyramid network to obtain total-retinal thickness maps from 2D color fundus photographs. 
Whereas previous deep learning–based SD-OCT segmentation algorithms mainly focused on improving performance on the segmentation task, our study shows that deep learning is also capable of predicting the RNFL thickness in SD-OCT volumes of murine eyes. Moreover, unlike many previous OCT segmentation methods that require preprocessing operations on the input scans, such as noise reduction37 and resolution enhancement,38 our proposed algorithm can directly take raw SD-OCT scans as inputs, thereby reducing inference time and avoiding information loss. We had sufficient biological scan data and an experimental dataset to train our segmentation model, making our algorithm very robust to intrinsic noise and defects in the input images. In addition, the use of transfer learning and data augmentation techniques significantly improves the generalization performance of our algorithm. This new automated algorithm, which produces results comparable to ground truth data obtained by time-consuming and subjective manual segmentation, will accelerate and improve the quality of ophthalmic studies of the eye as a neuroscience model for central nervous system disease. 
Acknowledgments
Supported by NIH Center Core Grant P30EY014801 and a Research to Prevent Blindness Unrestricted Grant. R.K. Lee is supported by the Walter G. Ross Foundation. 
Disclosure: R. Ma, None; Y. Liu, None; Y. Tao, None; K.A. Alawa, None; M.-L. Shyu, None; R.K. Lee, None 
References
Miller JW. Comparison of age-related macular degeneration treatments trials 2: introducing comparative effectiveness research. Ophthalmology. 2020; 127: S133–S134. [CrossRef] [PubMed]
Jaffe GJ, Martin DF, Toth CA, et al. Macular morphology and visual acuity in the comparison of age-related macular degeneration treatments trials. Ophthalmology. 2013;120: 1860–1870. [CrossRef] [PubMed]
Lang G. Optical coherence tomography findings in diabetic retinopathy. Dev Ophthalmol. 2007; 39: 31–47. [PubMed]
Kanamori A, Nakamura M, Escano MFT, Seya R, Maeda H, Negi A. Evaluation of the glaucomatous damage on retinal nerve fiber layer thickness measured by optical coherence tomography. Am J Ophthalmol. 2003; 135: 513–520. [CrossRef] [PubMed]
Qin B, Chen S, Brass R, et al. Keratoconus diagnosis with optical coherence tomography-based pachymetric scoring system. J Cataract Refract Surg. 2013; 39(12): 1864–1871. [CrossRef] [PubMed]
Janssens K, Mertens M, Lauwers N, De Keizer RJW, Mathysen DGP, De Groot V. To study and determine the role of anterior segment optical coherence tomography and ultrasound biomicroscopy in corneal and conjunctival tumors [published online ahead of print December 6, 2016]. J Ophthalmol, https://doi.org/10.1155/2016/1048760.
Venkateswaran N, Galor A, Wang J, Karp CL. Optical coherence tomography for ocular surface and corneal diseases: a review. Eye Vis. 2018; 5(1): 13. [CrossRef]
Pasol J. Neuro-ophthalmic disease and optical coherence tomography: glaucoma look-alikes. Curr Opin Ophthalmol. 2011; 22: 124–132. [CrossRef] [PubMed]
London A, Benhar I, Schwartz M. The retina as a window to the brain - From eye research to CNS disorders. Nat Rev Neurol. 2013; 9(1): 44–53.
In vivo retinal thickness quantification in brn3b −/− mice using optical coherence tomography segmentation. Invest Ophthalmol Vis Sci (ARVO Journals). https://iovs.arvojournals.org/article.aspx?articleid=2370812. Accessed October 26, 2020.
Zhang Q, Cui Z, Niu X, Geng S, Qiao Y. Image segmentation with pyramid dilated convolution based on ResNet and U-Net. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics). 2017; 10635 LNCS: 364–372.
Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 9351. Springer Verlag; 2015: 234–241.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2016: 770–778.
Bradski G. The OpenCV Library. Dr Dobb's J Softw Tools. 2000; 25: 120–125.
Zou KH, Warfield SK, Bharatha A, et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad Radiol. 2004; 11: 178–189. [CrossRef] [PubMed]
Roy AG, Conjeti S, Karri SPK, et al. ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomed Opt Express. 2017; 8(8): 3627, doi:10.1364/boe.8.003627. [CrossRef] [PubMed]
Kirillov A, Girshick R, He K, Dollár P. Panoptic feature pyramid networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019: 6399–6408.
Munguba GC, Galeb S, Liu Y, et al. Nerve fiber layer thinning lags retinal ganglion cell density following crush axonopathy. Invest Ophthalmol Vis Sci. 2014; 55: 6505–6513. [CrossRef] [PubMed]
Yazdanpanah A, Hamarneh G, Smith BR, Sarunic MV. Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach. IEEE Trans Med Imaging. 2011; 30: 484–496. [CrossRef] [PubMed]
Lee K, Niemeijer M, Garvin MK, Kwon YH, Sonka M, Abramoff MD. Segmentation of the optic disc in 3-D OCT scans of the optic nerve head. IEEE Trans Med Imaging. 2010; 29: 159–168. [PubMed]
Chiu SJ, Li XT, Nicholas P, Toth CA, Izatt JA, Farsiu S. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Opt Express. 2010; 18(18): 19413. [CrossRef] [PubMed]
Garvin MK, Abràmoff MD, Abràmoff MD, et al. Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE Trans Med Imaging. 2009; 28: 1436–1447. [CrossRef] [PubMed]
Lang A, Carass A, Hauser M, et al. Retinal layer segmentation of macular OCT images using boundary classification. Biomed Opt Express. 2013; 4: 1133. [CrossRef] [PubMed]
Rossant F, Bloch I, Ghorbel I, Paques M. Parallel double snakes. Application to the segmentation of retinal layers in 2D-OCT for pathological subjects. Pattern Recog. 2015; 48: 3857–3870. [CrossRef]
Chan TF, Vese LA. Active contours without edges. IEEE Trans Image Process. 2001; 10: 266–277. [CrossRef] [PubMed]
Duan J, Tench C, Gottlob I, Proudlock F, Bai L. Automated segmentation of retinal layers from optical coherence tomography images using geodesic distance. Pattern Recognit. 2017; 72: 158–175. [CrossRef]
Xie W, Duan J, Shen L, Li Y, Yang M, Lin G. Open snake model based on global guidance field for embryo vessel location. IET Comput Vis. 2018; 12: 129–137. [CrossRef]
Cootes T, Baldock E, Graham J. An introduction to active shape models. Image Process Anal. 2000: 223–248.
Pouyanfar S, Sadiq S, Yan Y, et al. A survey on deep learning: Algorithms, techniques, and applications. ACM Comput Surv. 2018; 51(5): 1–36. [CrossRef]
Karpathy A, Toderici G, Shetty S, Leung T, Sukthankar R, Li FF. Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 1725–1732.
Tian H, Tao Y, Pouyanfar S, Chen SC, Shyu ML. Multimodal deep representation learning for video classification. World Wide Web. 2019; 22: 1325–1341. [CrossRef]
Tao Y, Shyu ML. SP-ASDNet: CNN-LSTM based ASD classification model using observer scanpaths. In: 2019 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). 2019: 641–646.
Vyas K, Ma R, Rezaei B, et al. Recognition of atypical behavior in autism diagnosis from video using pose estimation over time. In: IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). 2019: 1–6. IEEE.
Shah A, Zhou L, Abrámoff MD, Wu X. Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images. Biomed Opt Express. 2018; 9: 4509. [CrossRef] [PubMed]
Pekala M, Joshi N, Liu TYA, Bressler NM, DeBuc DC, Burlina P. Deep learning based retinal OCT segmentation. Comput Biol Med. 2019; 114: 103445. [CrossRef] [PubMed]
Islam MS, Wang JK, Deng W, Thurtell MJ, Kardon RH, Garvin MK. Deep-learning-based estimation of 3D optic-nerve-head shape from 2D color fundus photographs in cases of optic disc swelling. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 12069 LNCS. Springer Science and Business Media Deutschland GmbH; 2020: 136–145.
Quellec G, Lee K, Dolejsi M, Garvin MK, Abràmoff MD, Sonka M. Three-dimensional analysis of retinal layer texture: Identification of fluid-filled regions in SD-OCT of the macula. IEEE Trans Med Imaging. 2010; 29(6): 1321–1330. [CrossRef] [PubMed]
Bousi E, Zouvani I, Pitris C. Lateral resolution improvement of oversampled OCT images using Capon estimation of weighted subvolume contribution. Biomed Opt Express. 2017; 8(3): 1319. [CrossRef] [PubMed]
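The spatial overlap index cited above (Zou et al., 2004) is the Dice similarity coefficient, the standard metric for validating segmentation masks against manual ground truth. A minimal sketch of how it could be computed for binary RNFL masks (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Two empty masks overlap perfectly by convention.
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: two 8-pixel bands overlapping in 4 pixels.
pred = np.zeros((4, 4), dtype=np.uint8)
pred[:2, :] = 1          # predicted RNFL band (rows 0-1)
truth = np.zeros((4, 4), dtype=np.uint8)
truth[1:3, :] = 1        # manually labeled band (rows 1-2)

print(dice(pred, truth))  # 2*4 / (8+8) = 0.5
```

A Dice value of 1.0 indicates perfect agreement with the manual segmentation; 0.0 indicates no overlap.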
Figure 1.
 
Segmentation of nerve fiber layer in an OCT B-scan.
Figure 2.
 
Effects of postprocessing.
Figure 3.
 
Thickness map generated by aggregating 100 segmentation results from an OCT volume.
Figure 4.
 
Segmentation results from our OCT segmentation network.
Figure 5.
 
Comparison between the predicted and measured average RNFL thickness of five eyes over time.
Figure 6.
 
Relation between predicted thickness and measured thickness.
Figure 7.
 
Three-dimensional retinal thickness maps from six volumes over time.
Table 1.
 
Comparison Between Our OCT Segmentation Network and the Other Approaches
Table 2.
 
P Values From Two-Sample t Tests
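Table 2 reports p values from two-sample t tests comparing thickness measurements between groups. A minimal sketch of such a comparison using the pooled (equal-variance) t statistic; the thickness values below are synthetic, not the paper's data:

```python
import numpy as np

def two_sample_t(x, y):
    """Pooled two-sample t statistic (equal-variance form)."""
    nx, ny = len(x), len(y)
    # Pooled variance from the two sample variances (ddof=1).
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical per-eye average RNFL thickness (um) at two time points.
baseline = np.array([48.1, 47.5, 49.0, 46.8, 48.4])
followup = np.array([40.2, 41.1, 39.5, 42.0, 40.7])

t = two_sample_t(baseline, followup)
# With n1 = n2 = 5 (df = 8), |t| > 2.306 rejects equality of means
# at the two-sided alpha = 0.05 level.
print(abs(t) > 2.306)  # True
```

With SciPy available, `scipy.stats.ttest_ind(baseline, followup)` returns the same statistic together with its p value directly.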