January 2024
Volume 13, Issue 1
Open Access
Artificial Intelligence
Are Macula or Optic Nerve Head Structures Better at Diagnosing Glaucoma? An Answer Using Artificial Intelligence and Wide-Field Optical Coherence Tomography
Author Affiliations & Notes
  • Charis Y. N. Chiang
    Department of Biomedical Engineering, National University of Singapore, Singapore
    Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Fabian A. Braeu
    Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    Singapore-MIT Alliance for Research and Technology, Singapore
    Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  • Thanadet Chuangsuwanich
    Department of Biomedical Engineering, National University of Singapore, Singapore
    Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Royston K. Y. Tan
    Department of Biomedical Engineering, National University of Singapore, Singapore
    Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Jacqueline Chua
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore
    Duke-NUS Graduate Medical School, Singapore
  • Leopold Schmetterer
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore
    Duke-NUS Graduate Medical School, Singapore
    School of Chemical and Biological Engineering, Nanyang Technological University, Singapore
    Department of Clinical Pharmacology, Medical University of Vienna, Austria
    Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
    Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
  • Alexandre H. Thiery
    Department of Statistics and Data Sciences, National University of Singapore, Singapore
  • Martin L. Buist
    Department of Biomedical Engineering, National University of Singapore, Singapore
  • Michaël J. A. Girard
    Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    Duke-NUS Graduate Medical School, Singapore
    Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
  • Correspondence: Michaël J. A. Girard, Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute (SERI), The Academia, 20 College Road, Discovery Tower Level 6, Singapore 169856, Singapore. e-mail: mgirard@ophthalmic.engineering 
Translational Vision Science & Technology January 2024, Vol.13, 5. doi:https://doi.org/10.1167/tvst.13.1.5
Abstract

Purpose: To develop a deep-learning algorithm that automatically segments optic nerve head (ONH) and macula structures in three-dimensional (3D) wide-field optical coherence tomography (OCT) scans and to assess whether 3D ONH or macula structures (or a combination of both) provide the best diagnostic power for glaucoma.

Methods: A cross-sectional comparative study was performed using 319 OCT scans of glaucoma eyes and 298 scans of nonglaucoma eyes. Scans were compensated to improve deep-tissue visibility. We developed a deep-learning algorithm to automatically label major tissue structures, trained with 270 manually annotated B-scans. The performance was assessed using the Dice coefficient (DC). A glaucoma classification algorithm (3D-CNN) was then designed using 500 OCT volumes and corresponding automatically segmented labels. This algorithm was trained and tested on three datasets: cropped scans of macular tissues, those of ONH tissues, and wide-field scans. The classification performance for each dataset was reported using the area under the curve (AUC).

Results: Our segmentation algorithm achieved a DC of 0.94 ± 0.003. The classification algorithm diagnosed glaucoma best using wide-field scans, followed by ONH scans and then macula scans, with AUCs of 0.99 ± 0.01, 0.93 ± 0.06, and 0.91 ± 0.11, respectively.

Conclusions: This study showed that wide-field OCT may allow for significantly improved glaucoma diagnosis over typical OCTs of the ONH or macula.

Translational Relevance: This could lead to mainstream clinical adoption of 3D wide-field OCT scan technology.

Introduction
Primary open-angle glaucoma (POAG) is one of the world's leading causes of permanent blindness.1,2 POAG is a progressive and chronic form of glaucoma,3 characterized by distinct damage to the optic nerve.4,5 This damage includes the excavation, or cupping, of the ONH and progressive loss of retinal ganglion cells (RGC) and RGC axons in the ONH and macula, leading to vision loss.6,7 Early detection is crucial to stop disease progression and prevent irreversible damage.8 
Because of the nature of glaucomatous damage, especially in earlier stages of the disease, it is crucial to identify three-dimensional (3D) structural changes in both the macula and the ONH. Thus the use of optical coherence tomography (OCT) to obtain 3D images of either the macula or the ONH has proven valuable for diagnosing and monitoring POAG.9 Recently, a new form of wide-field 3D OCT has emerged, which could be advantageous because information from both the ONH and macula structures can be captured simultaneously within seconds, as opposed to separately scanning the ONH and macula. As such, wide-field scanning should be more effective and efficient for earlier detection of POAG. 
Diagnostic studies using OCT in glaucoma have focused on either macula or ONH volumes. For example, Asaoka et al.10 applied various artificial intelligence (AI) methods, including random forests and support vector machines, to diagnose POAG from 3D macula OCT volumes. They achieved an AUC of 0.937 with their six-layer 3D convolutional neural network (CNN) model. Similarly, George et al.11 used an eight-layer 3D-CNN model to diagnose POAG from large datasets of raw 3D OCT volumes centered on the ONH. They achieved an AUC of 0.973 for glaucoma detection. Non-AI methods have also been used for glaucoma diagnosis, including a study from Mori et al.12 that achieved an AUC of 0.922 using structural measurements derived from macula scans. With wide-field OCT devices, it is now possible to image both the 3D ONH and macula structures in a single scan. This access to a wider area allows us to explore and compare the utility of using each separate structure, and the combination of both, to diagnose glaucoma.13 
In this study we developed a deep-learning approach to automatically and simultaneously identify seven major neural and connective tissue structures of the ONH and macula region from 3D wide-field OCT scans. We then exploited this information using a subsequent AI classification algorithm to assess whether 3D macula or ONH structures (or the combination of both) would provide the best diagnostic power for glaucoma. 
Methods
Patient Recruitment
A total of 230 subjects (120 non-POAG and 110 POAG) were retrospectively included in this study at the Singapore Eye Research Institute (SERI, Singapore). From these subjects, 319 OCT scans were taken from POAG eyes and 298 from non-POAG eyes. All subjects gave written informed consent. The study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review board of SERI (SingHealth Centralised Institutional Review Board). POAG subjects were clinically diagnosed by glaucoma specialists if they met all of the following criteria: (1) presence of glaucomatous optic neuropathy, defined as a vertical cup-to-disc ratio >0.7, an intereye asymmetry >0.2, or notching attributed to glaucoma; (2) visual field losses in standard automated perimetry compatible with the structural defects; (3) a corresponding glaucoma hemifield test outside normal limits, with mean deviation better than −6 dB; (4) open angles on gonioscopy; and (5) no secondary causes of glaucomatous optic neuropathy.14 The POAG subjects were using the following eye drops to lower intraocular pressure: beta blockers (timolol), alpha-2 adrenergic agonists (brimonidine), prostaglandin analogs (latanoprost, bimatoprost, travoprost, tafluprost), and carbonic anhydrase inhibitors (brinzolamide). Individuals classified as non-POAG were those not diagnosed with any clinically relevant eye condition, including glaucoma, age-related macular degeneration, diabetic retinopathy, ocular vascular occlusive disorders, diabetes, and other causes of neuro-ophthalmic disease.14 
OCT Imaging
OCT imaging was performed on seated subjects in a dark room, and tropicamide 1% solution was used when pupil dilation was necessary. In total, 617 3D wide-field OCT scan volumes were acquired: 319 from subjects diagnosed with POAG and 298 from non-POAG subjects. A swept-source OCT machine (PlexElite 9000; Zeiss Meditec, Dublin, CA, USA), operating at 1060 nm, was used to take 12 mm × 12 mm wide-field 3D-OCT scans. All OCT volumes (horizontal raster scans) covered the entire ONH and macula, with 500 B-scans (slices) and 500 A-scans per B-scan. The number of pixels per A-scan was 1536. In terms of resolution, the distance between B-scans and the lateral resolution were both 24 µm, whereas the axial resolution was 1.95 µm. 
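The stated voxel spacings follow directly from the scan geometry; a quick arithmetic check using the protocol values above:

```python
# Scan geometry of the 12 mm x 12 mm wide-field protocol described above.
scan_width_um = 12_000        # 12 mm scan width
n_bscans = 500                # B-scans per volume
n_ascans = 500                # A-scans per B-scan
axial_px = 1536               # pixels per A-scan
axial_res_um = 1.95           # axial resolution

lateral_res = scan_width_um / n_ascans    # spacing between A-scans
bscan_spacing = scan_width_um / n_bscans  # spacing between B-scans
axial_depth_um = axial_px * axial_res_um  # imaged depth per A-scan

print(lateral_res, bscan_spacing, round(axial_depth_um))  # 24.0 24.0 2995
```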
Correction of Light Attenuation Using Adaptive Compensation
To remove the deleterious effects of light attenuation from OCTs, all B-scans were post-processed using adaptive compensation.15,16 This correction for light attenuation was performed on both the inputs to the segmentation and classification networks. For OCT images of the ONH, adaptive compensation has been shown to mitigate blood vessel shadows, improve tissue contrast, and increase the visibility of tissue layer boundaries, especially for deep tissues such as the lamina cribrosa (LC).17 We compensated the scans using a decompression exponent of four in the initial step of compensation to increase the visibility of subtle details, and we used a compression exponent of four in the final step to recompress the dynamic range of pixel intensities and limit noise over-amplification. A contrast exponent of two was also used to exponentiate the pixel intensities and thus improve overall image contrast. This compensation step was particularly critical to visualize deep ONH connective tissues before manual segmentation was performed. 
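The principle behind compensation can be sketched on a single A-scan; this is a toy energy-based illustration only, not the published adaptive algorithm (the decompression/recompression steps described above are omitted):

```python
import numpy as np

def compensate_ascan(ascan, contrast_exp=2):
    """Toy depth compensation of one OCT A-scan.

    Each pixel is divided by (twice) the signal energy remaining below
    it, which boosts deep, attenuated tissue. This sketches the
    principle only; the published adaptive algorithm also applies
    decompression and recompression exponents, omitted here.
    """
    energy = ascan.astype(float) ** contrast_exp
    # Cumulative energy from each depth down to the bottom of the A-scan.
    tail = np.cumsum(energy[::-1])[::-1]
    return energy / (2.0 * tail + 1e-12)

# Synthetic A-scan: bright superficial layers, dim deep layers.
ascan = np.array([0.9, 0.8, 0.1, 0.1, 0.3, 0.2])
comp = compensate_ascan(ascan)
# After compensation the deep reflector (index 4) is brighter relative
# to the superficial one (index 1) than it was in the raw A-scan.
assert comp[4] / comp[1] > ascan[4] / ascan[1]
```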
Manual Segmentation of Seven Tissue Layers in OCT Images
Before training the deep-learning algorithm to automatically label major neural and connective tissues in a wide-field OCT scan, we manually segmented 270 OCT images (B-scans). This included 135 B-scans from 48 OCT scans of non-POAG eyes and 135 B-scans from 69 OCT scans of POAG eyes, randomly subsampled from a larger dataset of wide-field OCT scans. Each compensated OCT image was manually segmented using Amira (version 6; FEI, Hillsboro, OR, USA) by assigning each pixel a label corresponding to one of the following tissue groups, as shown in Figure 1: (1) the retinal nerve fiber layer (RNFL) and the prelaminar tissue; (2) the ganglion cell layer (GCL) and the inner plexiform layer (IPL); (3) all other retinal layers; (4) the retinal pigment epithelium (RPE); (5) the choroid; (6) the peripapillary and posterior sclera; and (7) the LC. The background was assigned a value of zero. It should be noted that in most cases (especially POAG subjects), we could not achieve full-thickness segmentation of the peripapillary sclera and the LC because of limited visibility at high depth, even when compensation was used.18 Therefore only the OCT-visible portions of these tissues were segmented, as per the observed compensated signal. 
Figure 1.
 
To train our AI algorithm, manual segmentation was performed on wide-field OCT images from both POAG and non-POAG subjects. The following tissues (or tissue groups) were identified: (1) RNFL + prelamina, (2) GCL + IPL, (3) other retinal layers, (4) RPE, (5) choroid, (6) sclera, and (7) lamina cribrosa. Baseline (compensated) images are shown in the first row; tissue boundaries are shown in the second row.
Automated Segmentation of Tissue Layers Using Deep Learning
To automatically segment all major neural and connective tissues in a wide-field OCT scan, we used a Unet++ model implemented in PyTorch. Unet++ is a nested U-Net architecture designed to improve the accuracy and efficiency of medical image segmentation tasks.19 This is achieved by adding convolutional layers on the skip pathways, adding dense blocks between the encoder and decoder, and using deep supervision for model pruning.19 
The encoder extracted features from the input images that the decoder used to create segmentation masks, whereas the skip connections helped prevent the vanishing gradient problem and aided backpropagation. We used a ResNet-34 encoder with weights pretrained on ImageNet; ResNet-34 is a state-of-the-art network built on residual connections.20 We used the Jaccard index (mean over all tissues) for the loss function. To avoid overfitting during training, extensive data augmentation was performed using the Python library Albumentations. This included image transformations such as horizontal flipping, random rotation and translation, additive Gaussian noise, and random saturation changes. These transformations were found to significantly improve the performance of the segmentation model. 
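As an illustration of the loss described above, a soft (differentiable) Jaccard loss can be sketched in NumPy; the actual training used a PyTorch implementation, so the shapes and names here are illustrative:

```python
import numpy as np

def soft_jaccard_loss(pred, target, eps=1e-7):
    """Mean soft Jaccard (IoU) loss over tissue classes.

    pred:   (C, H, W) per-class probabilities
    target: (C, H, W) one-hot ground-truth labels
    """
    inter = (pred * target).sum(axis=(1, 2))
    union = pred.sum(axis=(1, 2)) + target.sum(axis=(1, 2)) - inter
    iou = (inter + eps) / (union + eps)
    return 1.0 - iou.mean()

# A perfect prediction gives (near-)zero loss.
t = np.zeros((2, 4, 4))
t[0, :2], t[1, 2:] = 1, 1
assert soft_jaccard_loss(t, t) < 1e-6
```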
Images from the segmentation dataset were split into training (192 B-scans), validation (48 B-scans), and test (30 B-scans) sets. Half of the images in each set were of POAG eyes, and the rest were of non-POAG eyes. All images were resized to 320 × 480 pixels. Data augmentation was applied to the training set only. The network was then trained on an Nvidia 1080Ti GPU until optimum performance was reached on the validation set, at about 3000 epochs (computational time: ∼48 hours). Network performance was evaluated with the Dice coefficient (DC), calculated by comparing the network-predicted labels with the corresponding manually segmented images from the test set. DCs are reported as mean ± standard deviation over a fivefold cross-validation process. 
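For reference, the Dice coefficient used for evaluation reduces, for a single binary mask, to:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2/(3+3) ≈ 0.667
```

In the study the DC was averaged over the seven tissue classes; this single-class form shows the underlying overlap measure.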
Wide-Field OCT Scans Divided Into ONH and Macula Regions
To determine whether the use of wide-field scans improved glaucoma diagnosis over typical scans, each of the wide-field scans was divided into two sections: one including only ONH tissues, and another only macular tissues, as shown in Figure 2. We partitioned the images in such a way that the ONH section was one-quarter of the total scan width, which ensured that the ONH was included and approximately centered; the remaining three-quarters of the scan represented the macula section. This partitioning was possible because the scans were manually checked by the technician during acquisition to ensure that both the ONH and macula section were present and well centered. After partitioning, the central slice of each volume was checked by CC to ensure that the macula and ONH were well separated without the cropping out of any landmarks. The scans were all down-sampled by the same factors, resulting in three similar datasets with the following sizes: (1) 500 whole wide-field volumes (167 slices with 125 × 192 pixels); (2) 500 ONH volumes (167 slices with 125 × 48 pixels); and (3) 500 macula volumes (167 slices with 125 × 144 pixels). We arrived at these downsampled sizes by using a trial-and-error approach, downsampling the 3D image volumes iteratively to find a resolution enabling efficient training without overwhelming the memory resources. The downsampling was performed using cubic spline interpolation. 
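The quarter/three-quarter partition described above can be sketched as follows; the `onh_on_left` flag is an assumption of this sketch, standing in for the per-eye laterality check performed manually in the study:

```python
import numpy as np

def split_widefield(volume, onh_on_left=True):
    """Split a wide-field volume (slices, H, W) into ONH and macula parts.

    The ONH section is one quarter of the scan width, as in the protocol
    above; the remaining three quarters form the macula section. Which
    side holds the ONH depends on eye laterality, so `onh_on_left` is a
    per-scan flag (an assumption of this sketch).
    """
    w = volume.shape[-1]
    q = w // 4
    if onh_on_left:
        return volume[..., :q], volume[..., q:]
    return volume[..., -q:], volume[..., :-q]

# Downsampled wide-field volume dimensions from the study.
vol = np.zeros((167, 125, 192))
onh, macula = split_widefield(vol)
print(onh.shape, macula.shape)  # (167, 125, 48) (167, 125, 144)
```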
Figure 2.
 
The wide-field scans (first column) were cropped into an ONH region (second column) and a macular region (third column) at a fixed point along the scan width. Each full slice in the volume has a width of 192 pixels; cropping each slice along its width yields an ONH scan 48 pixels wide and a macular scan 144 pixels wide. Example slices from scans of a right and a left eye are shown in the first and second rows, respectively.
Diagnostic Performance Comparison Using ONH, Macula or Wide-Field Scans
Binary classification was used to label a given 3D-OCT volume (macula, ONH, or whole wide-field) as either “non-glaucoma” or “glaucoma.” For these classifications, we opted to use a 3D-CNN. The classification problem required a dataset distinct from the one used to train our segmentation network to prevent bias between the two independent tasks. We used three datasets (macula, ONH, and whole wide-field volumes) for classification, each consisting of 250 POAG and 250 non-POAG 3D-OCT image volumes. The 3D image volumes were compensated, and tissue labels were then generated using the segmentation model trained earlier. Each of the three datasets was then split into a training-plus-validation set (200 POAG and 200 non-POAG scans) and a test set (50 POAG and 50 non-POAG scans). Scans taken from the same subject were not separated into different sets but were kept within the same group (training, validation, or testing). Horizontal flips, slight rotations, and translations were then applied to the training set to augment the data. 
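The subject-level grouping described above (all scans from one subject staying in the same split) can be sketched as follows; function and variable names are illustrative, not the study's code:

```python
import random

def subject_level_split(scan_subjects, test_frac=0.2, seed=0):
    """Split scan indices into train and test sets at the subject level.

    scan_subjects: list where scan_subjects[i] is the subject ID of scan i.
    All scans from one subject end up in the same split, so no subject
    leaks across train/test. Fractions are approximate because subjects
    contribute different numbers of scans (illustrative sketch).
    """
    subjects = sorted(set(scan_subjects))
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, round(test_frac * len(subjects)))
    test_subjects = set(subjects[:n_test])
    train = [i for i, s in enumerate(scan_subjects) if s not in test_subjects]
    test = [i for i, s in enumerate(scan_subjects) if s in test_subjects]
    return train, test

ids = ["s1", "s1", "s2", "s3", "s3", "s4", "s5"]
train, test = subject_level_split(ids)
# No subject appears on both sides of the split.
assert not {ids[i] for i in train} & {ids[i] for i in test}
```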
Subsequently, three 3D CNN classification models with identical architectures, implemented in TensorFlow Keras, were trained, one on each training dataset. Compensated 3D wide-field volume data and the corresponding segmentation masks were fed separately into two 3D convolutional blocks. To allow the model to learn information from both the scans and the segmentations, the outputs of the two convolutional blocks were concatenated to produce a single classification output: 0 (POAG) or 1 (non-POAG). 
Each network was then tested on its respective test dataset. Performance was measured by the area under the receiver operating characteristic curve (AUC; mean and standard deviation) and reported via a fivefold cross-validation process. The mean AUCs obtained using the macula section, the ONH section, or the whole wide-field scan were compared. An overview of the methodology, from acquired raw OCT scans to final POAG classification, is shown in Figure 3. 
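The AUC used to compare the three models can be computed directly from classifier scores via the Mann-Whitney statistic; a minimal sketch:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC of the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive scan receives
    a higher score than a randomly chosen negative one (ties count half).
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]          # 1 = POAG, 0 = non-POAG (toy labels)
s = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]  # toy classifier scores
print(roc_auc(s, y))  # 8/9 ≈ 0.889
```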
Figure 3.
 
(a) Raw slices (B-scans) of wide-field scans were (b) compensated, (c) manually segmented, (d) down-sampled and augmented, (e) and used to train a Unet++ segmentation model. (f) Next, wide-field scan volumes were compensated, (g) automatically segmented and downsampled, (h) and cropped into separate wide-field, macular, and ONH regions. (i) Each of these separate scan regions (both compensated volumes and automatic segmentations) was used to train a separate but similar 3D CNN, and the POAG diagnostic performance of these classifiers was compared.
Results
In total, scans from 110 patients with POAG and 120 normal control subjects were used. The demographic and ocular characteristics of the two groups are compared in the Table. 
Table.
 
Comparison of Demographics and Ocular Characteristics Between Primary Open-Angle Glaucoma Patients and Normal Control Subjects
Wide-Field Segmentation
Compensated, manually segmented, and AI-segmented images from the test dataset are shown in Figure 4. Our network was able to identify several major neural and connective tissues in a wide-field OCT scan with a DC of 0.94 ± 0.003 and a Jaccard loss of 0.11 ± 0.02. Discrepancies were noted in some of the results, namely: (1) an LC with inconsistent thickness throughout its width; (2) mismatched LC boundaries in automatically segmented images as compared with manual segmentations; and (3) noise segmented as tissue layers (large islands), which occurred in less than 5% of the test dataset. 
Figure 4.
 
Three compensated images (slices) with their corresponding manual segmentations (ground truth) and automatic segmentations by the algorithm. The algorithm was largely able to identify each of the tissue layer boundaries.
Figure 5.
 
Slices from segmented scan volumes that were classified as: (1) a true positive, correctly classified as POAG; (2) a true negative, correctly classified as non-POAG; (3) a false negative, wrongly classified as non-POAG, and (4) a false positive, wrongly classified as POAG.
Classification Performance Using Different Anatomical Regions
An AUC of 0.91 ± 0.11 was achieved when using the macula portion of the scan to classify POAG and non-POAG scans. With the ONH portion of the scan, the resultant AUC of 0.93 ± 0.06 was better for the same classification. Using the whole wide-field scans, covering both the macula and ONH, resulted in the best AUC of 0.99 ± 0.01. The numbers of true-positive, true-negative, false-positive, and false-negative results for each of the experiments are shown as means of the fivefold cross-validation test results in the confusion matrices in Supplementary Figure S1. The classification using wide-field scans had more true-positive results (n = 44) and true-negative results (n = 48) than the classifications using either the ONH or the macular scans. An example of the classification of four separate 3D-OCT volumes is shown in Figure 5, with one B-scan from each of the scan volumes displayed. 
Misclassification of Wide-Field OCT Scans: Qualitative Analysis
As seen in Figure 5, scans that were wrongly classified typically had segmented noise or segmentation errors, especially around the LC area. This trend was seen in each of the models and highlights the importance of accurate segmentation in preventing glaucoma misclassification. 
Discussion
In this study, we explored whether the new wide-field 3D OCT scan technology allowed for improved POAG diagnosis as opposed to 3D OCT scans of the ONH or macula region alone. This was achieved via two steps. First, the proposed deep-learning algorithm was able to accurately segment ONH and macula structures in the wide-field scans. Second, by exploiting the information from both the scan and segmentations, we were able to classify POAG and non-POAG scans via three similar 3D CNN models. We found that the classification performance of the wide-field OCT scans surpassed both traditional scan types, and that the diagnostic ability of the ONH OCT scans was superior to that of the macula OCT scans. 
This study showed that it was possible to segment 3D wide-field OCT scans accurately. The segmentation algorithm was able to simultaneously isolate seven classes of tissue layers, achieving very good segmentation performance with a DC of 0.94 ± 0.003. Although this may be the first study segmenting wide-field scans, the resulting performance is comparable to the top algorithms for segmenting OCT scans of the ONH, as reviewed by Marques et al.21 This includes the models of Devalla et al.22,23 from 2018 and 2022 and the 2018 model of Yu et al.,24 which achieved DCs of 0.91 ± 0.05, 0.93 ± 0.02, and 0.925 ± 0.03, respectively. Furthermore, our model segmented at least as many key tissue layers as the top-performing algorithms.21–25 Although there were some segmentation errors, especially in the LC layer, these were also commonly reported for the aforementioned models.21–23,25 
Using a 3D CNN, we found that the diagnostic performance of the ONH scans, which achieved an AUC of 0.93 ± 0.06, was better than that of the macula scans, with an AUC of 0.91 ± 0.11. This suggests that the ONH region in wide-field scans provides more information than the macula region for POAG and non-POAG classification. This finding was echoed by the study of Wollstein et al.,26 which showed that ONH OCT scans could better discriminate between glaucoma and nonglaucoma scans than macula OCT scans. This could be because ONH scans capture a complete representation of RGC axons, whereas macula scans capture only 50% of the RGCs in the scan region.27 Another factor could be that connective tissues, especially the LC, which undergoes remodeling with loss of curvature and posterior bowing in POAG, are visible only in ONH scans.28–30 Other studies that used 3D OCT scans of the ONH for deep learning-based glaucoma classification also generally achieved higher AUC scores than studies that used 3D OCT scans of the macula.31 These results indicate that ONH OCT scans, rather than macula OCT scans, should be preferred for glaucoma studies using AI or deep learning where wide-field scans are not available. 
The results of the 3D CNN classification further revealed that the diagnostic performance of the wide-field scans surpassed both the macula and ONH scans, achieving an AUC of 0.99 ± 0.01 and correspondingly more accurate classification of POAG. This suggests that the additional information provided by including both the ONH and macula, as reflected in wide-field scans, adds value to the POAG classification process. A study conducted by Thakoor et al.32 similarly compared the performance of two CNNs: one trained on circumpapillary disc B-scans and the other trained on RNFL thickness maps extracted from wide-field OCT scans. That study found that the latter CNN achieved slightly higher accuracy in glaucoma detection, 94.8% compared with 94.4% for the former. However, rather than the entire wide-field OCT scan, their model used only RNFL thickness maps, which may account for the larger improvement we observed when using full wide-field OCT scans. In addition, we compared our wide-field model with similar deep learning-based POAG diagnosis methods applied to macula or ONH scans from the literature. These include various 2D CNN methods applied to fundus images of the ONH, which achieved AUCs between 0.80 and 0.90; a 3D CNN applied to unsegmented 3D OCT scans of the ONH, which resulted in an AUC of 0.973; and a 3D CNN applied to unsegmented macula 3D OCT scans, which achieved an AUC of 0.937.10,11,33 It should be noted that a direct comparison of AUCs across different studies may not be entirely accurate because of the different datasets and experimental conditions used. However, the higher AUC our wide-field scans achieved, compared with our own macula and ONH datasets as well as others from the literature, does suggest that wide-field scans could allow for more accurate diagnosis of POAG. 
Some limitations in this study warrant further discussion. First, there could be errors in the manual segmentation dataset because parts of the sclera and LC were not fully visible in some of the scan volumes and were segmented by an educated guess. This could perhaps explain why the segmentation model did not perform as well in segmenting the LC tissue layer. Going forward, each manual segmentation could be performed by more than one individual, thus mitigating observer bias. 
Second, errors propagated from the automated segmentation to the classification model. Although these errors were rare, they likely contributed to misclassification. Possible mitigations include AI-based post-processing to identify and correct these errors, or ensembling several improved automated segmentation models and taking the mean, mode, or weighted mean of their outputs as the final class label for each pixel in the segmentation. 
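The "mode of the results" mitigation mentioned above amounts to a pixel-wise majority vote across segmentation models; a minimal sketch, ignoring tie-breaking subtleties:

```python
import numpy as np

def ensemble_labels(label_maps):
    """Pixel-wise majority vote across segmentations from several models.

    label_maps: array of shape (n_models, H, W) of integer class labels.
    Returns the most frequent label per pixel. Ties resolve to the
    lowest class index via argmax (a simplification of this sketch).
    """
    label_maps = np.asarray(label_maps)
    n_classes = label_maps.max() + 1
    # Count votes per class per pixel, then pick the winning class.
    votes = np.stack([(label_maps == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

maps = np.array([[[1, 2]], [[1, 0]], [[1, 2]]])  # 3 models, 1x2 image
print(ensemble_labels(maps))  # [[1 2]]
```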
Third, the OCT images were not corrected for optical distortions, which often affect the posterior segment of the eye (including the ONH) and could have affected the 3D analysis of the scans.34,35 These artificial deformations of the ONH tissue shapes could potentially be more exaggerated in the wide-field scans, hampering POAG classification, especially for early-stage POAG, where the tissue deformations caused by the disease are often smaller. Possible correction methods include the nonlinear distortion correction equation of Grytz et al.34 or the numeric correction approach derived from modeled linear scan beams of Kuo et al.35 
A fourth limitation of the study was the relatively small dataset sizes that may lead to generalization errors. These dataset sizes for both the segmentation and classification model were comparable to that of other similar problems involving 3D-OCT scans because of the difficulties involved in obtaining a large set of manually segmented scans. However, when comparing with deep-learning best practices, the datasets are rather small, and using a larger dataset may result in a more reliable and accurate assessment of the model's performance. 
Fifth, the adaptive compensation performed on the OCT scans may have improved deep tissue visibility, but at the same time it could have compromised the visibility of the anterior retinal layers. To ensure this was not the case, we computed the interlayer contrast—a measure of boundary visibility—across the RNFL and ganglion cell layer boundary and across the IPL and inner nuclear layer boundary.15 We found that the interlayer contrast of both boundaries improved with image compensation, suggesting that the compensation technique did improve the visibility of the anterior retinal layers. 
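The interlayer contrast referenced above measures boundary visibility from the mean intensities on either side of a layer boundary; a minimal sketch of the measure (the exact windowing used in the cited work may differ):

```python
import numpy as np

def interlayer_contrast(above, below):
    """Interlayer contrast across a boundary: |a - b| / (a + b).

    `above` and `below` are mean intensities just above and below the
    layer boundary; values near 1 indicate a highly visible boundary,
    values near 0 an invisible one.
    """
    above = np.asarray(above, float)
    below = np.asarray(below, float)
    return np.abs(above - below) / (above + below + 1e-12)

# A boundary between a bright and a dim layer has high contrast.
print(round(float(interlayer_contrast(0.8, 0.2)), 2))  # 0.6
```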
Finally, given the volumetric data and the 3D CNN, constraints on computational memory and processing time required down-sampling of the training volumes. This could have caused a loss of information, especially for thinner tissue layers such as the RNFL and RPE; ideally, the original resolution of the training data would be preserved. To assess the effect of downsampling, we analyzed a random sample of 100 POAG and 100 non-POAG OCT image volumes from the classification dataset. The AUC for discriminating POAG from non-POAG using the mean RNFL thickness of each volume was calculated over 1000 bootstrap iterations; the mean and standard deviation of the AUCs are presented in Supplementary Table S1. The AUCs for the original and downsampled volumes were similar, suggesting that downsampling had minimal influence on the discriminatory power of RNFL thickness. Although these results are encouraging, because of insufficient computational resources a comprehensive study fully exploring the effects of downsampling on 3D CNN performance using the entire volume was not feasible. 
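The bootstrap procedure above can be sketched as follows, with a rank-based (Mann-Whitney) AUC. The RNFL thickness distributions below are hypothetical stand-ins chosen only to make the sketch runnable; they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(labels, scores):
    """Rank-based (Mann-Whitney) AUC: probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Illustrative mean RNFL thicknesses (microns); POAG assumed thinner.
n = 100
rnfl_poag = rng.normal(70.0, 10.0, n)
rnfl_ctrl = rng.normal(95.0, 10.0, n)
labels = np.concatenate([np.ones(n, int), np.zeros(n, int)])
scores = -np.concatenate([rnfl_poag, rnfl_ctrl])  # thinner -> more POAG-like

# 1000 bootstrap resamples of the 200 eyes, recomputing the AUC each time.
aucs = []
for _ in range(1000):
    idx = rng.integers(0, 2 * n, size=2 * n)  # sample with replacement
    if labels[idx].min() == labels[idx].max():
        continue  # skip degenerate resamples containing a single class
    aucs.append(auc(labels[idx], scores[idx]))
mean_auc, sd_auc = float(np.mean(aucs)), float(np.std(aucs))
```

Running the same loop on the original and the downsampled volumes, then comparing the two AUC distributions, is the comparison reported in Supplementary Table S1.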
In conclusion, this study showed that wide-field OCT, as compared to typical OCT images containing just the ONH or the macula, may allow for improved POAG diagnosis. This may encourage mainstream clinical adoption of 3D wide-field OCT scans, because wide-field scans are now commercially available on most OCT machines. 
Acknowledgments
Supported by the National Glaucoma Research, a program of the BrightFocus Foundation (G2021010S [M.J.A.G.]); NMRC-LCG grant “Tackling & Reducing Glaucoma Blindness with Emerging Technologies (TARGET),” award ID: MOH-OFLCG21jun-0003 (M.J.A.G.); SingHealth Duke-NUS Academic Medicine Research Grant (SRDUKAMR21A6 [M.J.A.G.]); and the “Retinal Analytics through Machine learning aiding Physics (RAMP)” project that is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Intra-Create Thematic Grant “Intersection of Engineering and Health”–NRF2019-THE002-0006 awarded to the Singapore MIT Alliance for Research and Technology (SMART) Centre (M.J.A.G.). 
Disclosure: C.Y.N. Chiang, None; F.A. Braeu, None; T. Chuangsuwanich, None; R.K.Y. Tan, None; J. Chua, None; L. Schmetterer, None; A.H. Thiery, Abyss Processing Pte Ltd. (S); M.L. Buist, None; M.J.A. Girard, Abyss Processing Pte Ltd. (S) 
References
Giangiacomo A, Coleman AL. The epidemiology of glaucoma. In: Grehn F, Stamper R, eds. Glaucoma Essentials in Ophthalmology. Berlin: Springer, 2009: 13–21.
Racette L, Wilson MR, Zangwill LM, Weinreb RN, Sample PA. Primary open-angle glaucoma in blacks: a review. Surv Ophthalmol. 2003; 48: 295–313. [CrossRef] [PubMed]
Weinreb RN, Peng TK. Primary open-angle glaucoma. Lancet. 2004; 363(9422): 1711–1720. [CrossRef] [PubMed]
Foster PJ, Buhrmann R, Quigley HA, Johnson GJ. The definition and classification of glaucoma in prevalence surveys. Br J Ophthalmol. 2002; 86: 238–242. [CrossRef] [PubMed]
Almasieh M, Wilson AM, Morquette B, Cueva Vargas JL, Di Polo A. The molecular basis of retinal ganglion cell death in glaucoma. Prog Retin Eye Res. 2012; 31: 152–181. [CrossRef] [PubMed]
Blumberg D, Skaat A, Liebmann JM. Emerging risk factors for glaucoma onset and progression. Prog Brain Res. 2015; 221: 81–101.
Kuehn M, Fingert J, Kwon Y. Retinal ganglion cell death in glaucoma: mechanisms and neuroprotective strategies. Ophthalmol Clin N Am. 2005; 18: 383–395. [CrossRef]
Allingham RR, Moroi SE, Shields MB, Damji KF, eds. Shields Textbook of Glaucoma, 6th ed. Philadelphia: LWW; 2010.
Novita HD, Moestidjab. Optical coherence tomography (OCT) posterior segment. J Oftalmol Indones. 2008; 6(3): 169–177.
Asaoka R, Murata H, Hirasawa K, et al. Using deep learning and transfer learning to accurately diagnose early-onset glaucoma from macular optical coherence tomography images. Am J Ophthalmol. 2019; 198: 136–145. [CrossRef] [PubMed]
George Y, Antony B, Ishikawa H, Wollstein G, Schuman J, Garnavi R. 3D-CNN for glaucoma detection using optical coherence tomography. Ophthalmic Medical Image Analysis: 6th International Workshop, OMIA 2019, Held in Conjunction With MICCAI 2019. 2019: 52–59.
Mori S, Hangai M, Sakamoto A, Yoshimura N. Spectral-domain optical coherence tomography measurement of macular volume for diagnosing glaucoma. J Glaucoma. 2010; 19: 528–534. [CrossRef] [PubMed]
Chua J, Schwarzhans F, Wong D, et al. Multivariate normative comparison, a novel method for improved use of retinal nerve fiber layer thickness to detect early glaucoma. Ophthalmol Glaucoma. 2022; 5: 359–368. [CrossRef] [PubMed]
Lun K, Sim YC, Chong R, et al. Investigating the macular choriocapillaris in early primary open-angle glaucoma using swept-source optical coherence tomography angiography. Front Med. 2022; 9: 999167. [CrossRef]
Girard MJA, Strouthidis NG, Ethier CR, Mari JM. Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head. Invest Ophthalmol Vis Sci. 2011; 52: 7738–7748. [CrossRef] [PubMed]
Mari JM, Strouthidis NG, Park SC, Girard MJA. Enhancement of lamina cribrosa visibility in optical coherence tomography images using adaptive compensation. Invest Ophthalmol Vis Sci. 2013; 54: 2238–2247. [CrossRef] [PubMed]
Girard MJA, Panda S, Tun TA, et al. 3D structural analysis of the optic nerve head to robustly discriminate between optic disc drusen and papilledema. Invest Ophthalmol Vis Sci. 2022; 63: 435.
Girard MJA, Tun TA, Husain R, et al. Lamina cribrosa visibility using optical coherence tomography: comparison of devices and effects of image enhancement techniques. Invest Ophthalmol Vis Sci. 2015; 56: 865–874. [CrossRef] [PubMed]
Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. UNet++: redesigning skip connections to exploit multiscale features in image segmentation. ArXiv.
Wu Z, Shen C, van den Hengel A. Wider or deeper: revisiting the ResNet Model for visual recognition. Pattern Recognit. 2019; 90: 119–133. [CrossRef]
Marques R, De Jesus DA, Barbosa-Breda J, et al. Automatic segmentation of the optic nerve head region in optical coherence tomography: a methodological review. Comput Methods Programs Biomed. 2022; 220: 106801. [CrossRef] [PubMed]
Devalla SK, Pham TH, Panda SK, et al. Towards label-free 3D segmentation of optical coherence tomography images of the optic nerve head using deep learning. Biomed Opt Express. 2020; 11: 6356–6378. [CrossRef]
Devalla SK, Renukanand PK, Sreedhar BK, et al. DRUNET: a dilated-residual U-Net deep learning network to digitally stain optic nerve head tissues in optical coherence tomography images. Biomed Opt Express. 2018; 9: 3244–3265. [CrossRef] [PubMed]
Yu K, Shi F, Gao E, Zhu W, Chen H, Chen X. Shared-hole graph search with adaptive constraints for 3D optic nerve head optical coherence tomography image segmentation. Biomed Opt Express. 2018; 9(3): 962–983. [CrossRef] [PubMed]
Devalla SK, Chin KS, Mari J-M, et al. A deep learning approach to digitally stain optical coherence tomography images of the optic nerve head. Invest Ophthalmol Vis Sci. 2018; 59: 63–74. [CrossRef] [PubMed]
Wollstein G, Ishikawa H, Wang J, Beaton SA, Schuman JS. Comparison of three optical coherence tomography scanning areas for detection of glaucomatous damage. Am J Ophthalmol. 2005; 139: 39–43. [CrossRef] [PubMed]
Chua J, Tan B, Ke M, et al. Diagnostic ability of individual macular layers by spectral-domain OCT in different stages of glaucoma. Ophthalmol Glaucoma. 2020; 3: 314–326. [CrossRef] [PubMed]
Lee KM, Kim T-W, Weinreb RN, Lee EJ, Girard MJA, Mari JM. Anterior lamina cribrosa insertion in primary open-angle glaucoma patients and healthy subjects. Plos One. 2014; 9(12): e114935. [CrossRef] [PubMed]
Lee SH, Yu D-A, Kim T-W, Lee EJ, Girard MJA, Mari JM. Reduction of the lamina cribrosa curvature after trabeculectomy in glaucoma. Invest Ophthalmol Vis Sci. 2016; 57: 5006. [CrossRef] [PubMed]
Ha A, Kim TJ, Girard MJ, et al. Baseline lamina cribrosa curvature and subsequent visual field progression rate in primary open-angle glaucoma. Ophthalmology. 2018; 125: 1898–1906. [CrossRef] [PubMed]
Ran AR, Tham CC, Chan PP, et al. Deep learning in glaucoma with optical coherence tomography: a review. Eye. 2021; 35: 188–201. [CrossRef] [PubMed]
Thakoor KA, Li X, Tsamis E, et al. Strategies to improve convolutional neural network generalizability and reference standards for glaucoma detection from OCT scans. Transl Vis Sci Technol. 2021; 10(4): 16. [CrossRef] [PubMed]
Hagiwara Y, Koh JE, Tan JH, et al. Computer-aided diagnosis of glaucoma using fundus images: a review. Comput Methods Programs Biomed. 2018; 165: 1–12. [CrossRef] [PubMed]
Grytz R, El Hamdaoui M, Fuchs PA, et al. Nonlinear distortion correction for posterior eye segment optical coherence tomography with application to tree shrews. Biomed Opt Express. 2022; 13: 1070–1086. [CrossRef] [PubMed]
Kuo AN, Verkicharla PK, McNabb RP, et al. Posterior eye shape measurement with retinal OCT compared to MRI. Invest Ophthalmol Vis Sci. 2016; 57(9): OCT196–OCT203. [CrossRef] [PubMed]
Figure 1.
 
To train our AI algorithm, manual segmentation was performed on wide-field OCT images from both POAG and non-POAG subjects. The following tissues (or tissue groups) were identified: (1) RNFL + prelamina, (2) GCL + IPL, (3) other retinal layers, (4) RPE, (5) choroid, (6) sclera, and (7) lamina cribrosa. Baseline (compensated) images are shown in the first row; tissue boundaries are shown in the second row.
Figure 2.
 
The wide-field scans (first column) were cropped into an ONH region (second column) and a macular region (third column) at a fixed point along their width. Each full slice in the volume is 192 pixels wide; cropping each slice along its width yields an ONH scan 48 pixels wide and a macular scan 144 pixels wide. Example slices from scans of a right and a left eye are shown in the first and second rows, respectively.
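The fixed-point crop along the width axis can be sketched as follows; the axis layout and the convention for which side of the scan holds the ONH are assumptions for illustration, not details from the study.

```python
import numpy as np

def crop_regions(volume, onh_on_left=True):
    """Split slices of width 192 px into an ONH crop (48 px) and a
    macular crop (144 px) at a fixed point along the last (width) axis.
    Which side holds the ONH depends on eye laterality; the left/right
    convention here is an assumption."""
    if onh_on_left:
        return volume[..., :48], volume[..., 48:]
    return volume[..., -48:], volume[..., :-48]

vol = np.zeros((96, 4, 192))  # toy (depth, slices, width) volume
onh, macula = crop_regions(vol, onh_on_left=True)
```

Flipping `onh_on_left` for the fellow eye keeps the ONH crop on the optic-disc side in both laterality cases.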
Figure 3.
 
(a) Raw slices (B-scans) of wide-field scans were (b) compensated, (c) manually segmented, (d) down-sampled and augmented, and (e) used to train a UNet++ segmentation model. (f) Next, wide-field scan volumes were compensated, (g) automatically segmented and downsampled, and (h) cropped into separate wide-field, macular, and ONH regions. (i) Each of these scan regions (both compensated volumes and automatic segmentations) was used to train a separate but similar 3D CNN, and the POAG diagnostic performance of these classifiers was compared.
Figure 4.
 
Three compensated images (slices) with their corresponding manual segmentations (ground truth) and automatic segmentations by the algorithm. The algorithm was largely able to identify each of the tissue layer boundaries.
Figure 5.
 
Slices from segmented scan volumes that were classified as (1) a true positive, correctly classified as POAG; (2) a true negative, correctly classified as non-POAG; (3) a false negative, wrongly classified as non-POAG; and (4) a false positive, wrongly classified as POAG.
Table.
 
Comparison of Demographics and Ocular Characteristics Between Primary Open-Angle Glaucoma Patients and Normal Control Subjects