Medical Application of Geometric Deep Learning for the Diagnosis of Glaucoma
Author Affiliations & Notes
  • Alexandre H. Thiéry
    Department of Statistics and Data Science, National University of Singapore, Singapore
  • Fabian Braeu
    Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  • Tin A. Tun
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    Duke-NUS Graduate Medical School, Singapore
  • Tin Aung
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    Duke-NUS Graduate Medical School, Singapore
  • Michaël J. A. Girard
    Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
    Duke-NUS Graduate Medical School, Singapore
    Institute for Molecular and Clinical Ophthalmology, Basel, Switzerland
  • Correspondence: Michaël J.A. Girard, Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute (SERI), The Academia, 20 College Road, Discovery Tower Level 6, Singapore 169856, Singapore. e-mail: mgirard@ophthalmic.engineering 
  • Alexandre H. Thiéry, S-16, 06-113, Department of Statistics and Applied Probability, National University of Singapore, 6 Science Drive 2, Singapore 17546, Singapore. e-mail: a.h.thiery@nus.edu.sg 
Translational Vision Science & Technology February 2023, Vol.12, 23. doi:https://doi.org/10.1167/tvst.12.2.23
Abstract

Purpose: (1) To assess the performance of geometric deep learning in diagnosing glaucoma from a single optical coherence tomography (OCT) scan of the optic nerve head and (2) to compare its performance to that obtained with a three-dimensional (3D) convolutional neural network (CNN), and with a gold-standard parameter, namely, the retinal nerve fiber layer (RNFL) thickness.

Methods: Scans of the optic nerve head were acquired with OCT for 477 glaucoma and 2296 nonglaucoma subjects. All volumes were automatically segmented using deep learning to identify seven major neural and connective tissues. Each optic nerve head was then represented as a 3D point cloud with approximately 1000 points. Geometric deep learning (PointNet) was then used to provide a glaucoma diagnosis from a single 3D point cloud. The performance of our approach (reported using the area under the curve [AUC]) was compared with that obtained with a 3D CNN, and with the RNFL thickness.

Results: PointNet was able to provide a robust glaucoma diagnosis solely from a 3D point cloud (AUC = 0.95 ± 0.01). The performance of PointNet was superior to that obtained with a 3D CNN (AUC = 0.87 ± 0.02 [raw OCT images] and 0.91 ± 0.02 [segmented OCT images]) and to that obtained from RNFL thickness alone (AUC = 0.80 ± 0.03).

Conclusions: We provide a proof of principle for the application of geometric deep learning in glaucoma. Our technique requires significantly less input information, yet performs better than a 3D CNN and yields an AUC superior to that obtained from RNFL thickness alone.

Translational Relevance: Geometric deep learning may help us to improve and simplify diagnosis and prognosis applications in glaucoma.

Introduction
With the development and progression of glaucoma, the optic nerve head (ONH) and the macula typically exhibit complex neural- and connective-tissue structural changes including, but not limited to, thinning of the retinal nerve fiber layer (RNFL) and the macular ganglion cell complex layer1,2; changes in the lamina cribrosa (LC) shape, depth, and curvature3,4; and posterior bowing of the peripapillary sclera.5,6 Clinically, optical coherence tomography (OCT) is the mainstay of imaging to observe such changes7; however, signal interpretation (by humans or machines) for glaucoma diagnosis and prognosis remains a challenge. 
Recently, a growing number of artificial intelligence (AI) studies have proposed to use deep learning algorithms to provide a robust glaucoma diagnosis from a single OCT scan of the ONH or of the macula. Such applications could have excellent clinical value, for example, by decreasing the number of tests needed to confirm a glaucoma diagnosis. Some of these algorithms were directly applied to the raw OCT scans,8–10 whereas others first simplified the images to only a few classes (or colors) by highlighting relevant tissue structures.11 Most algorithms achieved good to excellent performance. However, such algorithms need to be able to handle a considerable amount of information (e.g., voxel intensities distributed on three-dimensional [3D] grids) that could be heavily corrupted by noise, image artifacts, and 3D image orientation issues, thus limiting their ease of use and deployability. 
To this end, a family of algorithms fitting under the category of geometric deep learning has been proposed to solve classification problems from structures represented as 3D point clouds (such as those in medical imaging),12 with excellent performance. Geometric deep learning13 is an emerging field of AI that proposes inductive biases and network architectures that can efficiently process data structures such as grids, graphs, and clouds of points while respecting their intrinsic symmetries and invariances.14 In this study, we have leveraged the recently proposed PointNet neural architecture.15 This deep neural network has been especially designed to process point clouds, that is, unordered sets of points. A PointNet takes a point cloud as input and provides a unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing. Although simple, it has been demonstrated empirically that PointNets exhibit a performance on par with, or even better than, the state of the art.15 In addition, because structural point clouds do not need to be dense or to fall on regular grids, the amount of information needed to make, for example, a diagnosis can be significantly decreased, which in turn reduces the black box element of AI. For our glaucoma diagnosis application, the ONH can simply be thought of as a complex 3D structure that can be represented by a cloud of points, as has been routinely performed in 3D histomorphometric and finite element studies.16–18 
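The key property of PointNet described above, namely, invariance to the ordering of the input points, comes from applying the same small network to every point and then reducing with a symmetric (max) pooling operation. The sketch below illustrates only this idea; the weights, layer sizes, and function names are illustrative placeholders, not the trained model from this study.

```python
# Minimal sketch of the PointNet idea: a shared per-point MLP followed by a
# symmetric (max) pooling, making the global feature invariant to point order.
# All weights are random placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, w1, w2):
    """Apply the same small MLP to every point independently."""
    h = np.maximum(points @ w1, 0.0)   # (S, hidden), ReLU
    return np.maximum(h @ w2, 0.0)     # (S, features), ReLU

def pointnet_features(points, w1, w2):
    """Per-point features reduced by max pooling: an order-invariant global feature."""
    return shared_mlp(points, w1, w2).max(axis=0)

# A toy cloud of S = 5 points with D = 11 features (3 coordinates + 8-class one-hot).
cloud = rng.normal(size=(5, 11))
w1 = rng.normal(size=(11, 16))
w2 = rng.normal(size=(16, 32))

feat = pointnet_features(cloud, w1, w2)
shuffled = cloud[rng.permutation(len(cloud))]
assert np.allclose(feat, pointnet_features(shuffled, w1, w2))  # order does not matter
```

Because the max is taken over points, shuffling the rows of the input leaves the global feature unchanged, which is exactly the invariance that makes an unordered point set a valid input.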
In this study, we aimed to apply a geometric deep learning solution (PointNet) to provide a robust glaucoma diagnosis from a single OCT scan of the ONH. Each OCT scan was first preprocessed and each ONH was represented as a 3D point cloud, thus limiting the amount of information to be processed by several orders of magnitude. Our approach was compared with a 3D convolutional neural network (CNN), and its performance compared with that from RNFL thickness alone (the current gold standard). 
Methods
Patient Recruitment
A total of 2773 subjects (477 with glaucoma and 2296 without glaucoma) were recruited at the Singapore National Eye Center (Singapore) (see Table for further demographic details). All subjects gave written informed consent. The study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review board of the respective hospital. Subjects with an intraocular pressure of less than 21 mm Hg, healthy optic discs, and normal visual field tests were considered as not having glaucoma, whereas subjects with glaucomatous optic neuropathy and/or neuroretinal rim narrowing with repeatable glaucomatous visual field defects were considered as having glaucoma. Subjects with corneal abnormalities that could degrade the quality of the scans were excluded from the study. 
Table.
 
Summary of Patient Information
OCT Imaging
Spectral-domain OCT imaging (Spectralis; Heidelberg Engineering, Heidelberg, Germany) was performed on seated subjects under dark room conditions after dilation with tropicamide 1% solution. Images were acquired from either both eyes or one eye of each subject. Each OCT volume consisted of 97 serial horizontal B-scans (approximately 30 µm distance between B-scans; 384 A-scans per B-scan; 20× signal averaging; axial resolution, approximately 3.87 µm) that covered a rectangular area of 15° × 10° centered on the ONH. The eye tracking and enhanced depth imaging modalities of the Spectralis were used during image acquisition. In total, we obtained 4770 scans (873 glaucoma and 3897 nonglaucoma scans). 
Automated Segmentation of OCT Images
Because PointNet requires each ONH to be described as a point cloud, it was first necessary for us to identify (or highlight) the major neural and connective tissues that are involved in glaucoma pathogenesis. To this end, we used the software REFLECTIVITY (Abyss Processing Pte Ltd, Singapore) to automatically segment the following tissue groups: (1) the RNFL and the prelamina, (2) the ganglion cell layer and the inner plexiform layer, (3) all other retinal layers, (4) the retinal pigment epithelium with Bruch's membrane, (5) the choroid, (6) the peripapillary sclera including the scleral flange, and (7) the LC. All segmented tissues can be observed in Figure 1. Note that REFLECTIVITY was developed from advances in AI-based ONH segmentation as described in our previous publications.19,20 It is also important to point out that REFLECTIVITY cannot identify the true posterior boundaries of the peripapillary sclera and of the LC, but instead provides the OCT-visible portions of those two tissues. The AI-based segmentation process assigned a label to each voxel of each 3D OCT scan to indicate the tissue class. 
Figure 1.
 
PointNet Workflow: Each optical coherence tomography scan of the optic nerve head is first segmented using deep learning to identify the following tissue structures: retinal nerve fiber layer + prelamina, inner plexiform layer + ganglion cell layer, all other retina layers, retinal pigment epithelium, choroid, peripapillary sclera, and lamina cribrosa. A three-dimensional point cloud is then generated strictly from the tissue boundaries. The 3D point cloud is ultimately passed through our PointNet network to produce a glaucoma diagnosis.
Point Cloud Generation
Once each voxel of each 3D OCT scan was assigned a label, the voxels situated at the boundaries between two different tissue groups were automatically identified. The following eight boundaries of interest were identified: anterior and posterior boundaries of the RNFL and the prelamina, and the posterior boundaries of the ganglion cell layer and the inner plexiform layer, other retina layers, retinal pigment epithelium, choroid, sclera, and LC. On average, a total of approximately 50,000 boundary voxels of interest were extracted from each 3D OCT scan. The 3D coordinates as well as the boundary class (expressed as a label varying between 1 and 8) of each boundary voxel of interest were recorded for further processing. The 3D coordinates were expressed within an [x, y, z] Cartesian coordinate system with origin situated at the center of the Bruch's membrane opening circle, and such that the Bruch's membrane opening circle lies in the horizontal [x = 0, y = 0] plane. 
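The boundary-identification step above can be illustrated with a minimal sketch. The segmentation itself comes from REFLECTIVITY, which is not reproduced here; the toy volume, the axis convention, and the choice of testing neighbours only along the axial (z) direction are assumptions made for illustration.

```python
# Sketch: extract boundary voxels from a labeled volume by flagging voxels
# whose tissue label differs from the next voxel along the axial (z) axis.
import numpy as np

def boundary_voxels(labels):
    """Return (N, 3) integer coordinates of voxels lying on a tissue boundary,
    detected as a label change between axially adjacent voxels."""
    diff = labels[:, :, :-1] != labels[:, :, 1:]
    x, y, z = np.nonzero(diff)
    return np.stack([x, y, z], axis=1)

# Toy 4x4x4 volume: two tissue classes stacked along z, so the boundary
# sits between z = 1 and z = 2 for every (x, y) column.
vol = np.zeros((4, 4, 4), dtype=int)
vol[:, :, 2:] = 1
pts = boundary_voxels(vol)
assert len(pts) == 16 and set(pts[:, 2]) == {1}
```

In the actual pipeline the resulting coordinates would additionally be re-expressed in the Bruch's membrane opening-centered coordinate system described above.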
PointNet for Glaucoma Diagnosis
In this work, we aimed to design a binary classifier to identify whether a given ONH would be classified as glaucoma or not glaucoma. For designing such a classifier, we opted to use a simple and robust solution that produces a probabilistic forecast based only on the geometric properties of the ONH. 
For classifying a 3D OCT scan as glaucoma or nonglaucoma, we used a PointNet that processed a point cloud of size S = 1000. For this purpose, out of the typically much larger set of points of interest extracted from each OCT scan, a subset of S = 1000 points was randomly selected. There are two main reasons for this approach: (1) we have not observed any improved predictive performance with a larger number of points (S > 1000), and (2) among all the points of interest extracted from each 3D OCT scan, there is typically a large amount of redundancy. Each point's location (i.e., its 3D coordinates) and its boundary class (represented as a one-hot-encoded vector of dimension N = 8) were concatenated into a vector of dimension D = 8 + 3 = 11. In summary, the PointNet was designed to process point clouds of size S = 1000 and dimension D = 11. 
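The subsampling and feature-concatenation step just described can be sketched as follows; the function and variable names are hypothetical, and the toy data stand in for the ~50,000 boundary voxels extracted per scan.

```python
# Sketch: subsample S = 1000 boundary points and build the (S, 11) input,
# concatenating 3D coordinates with a one-hot encoding of the boundary class.
import numpy as np

def make_input_cloud(coords, classes, s=1000, n_classes=8, rng=None):
    """Randomly pick s boundary points; return an (s, 3 + n_classes) array
    of [x, y, z, one-hot class] rows, as described in the Methods."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(coords), size=s, replace=False)
    one_hot = np.eye(n_classes)[classes[idx] - 1]   # classes are 1-based (1..8)
    return np.concatenate([coords[idx], one_hot], axis=1)

rng = np.random.default_rng(1)
coords = rng.normal(size=(50_000, 3))               # ~50,000 boundary voxels per scan
classes = rng.integers(1, 9, size=50_000)           # boundary labels in 1..8
cloud = make_input_cloud(coords, classes, rng=rng)
assert cloud.shape == (1000, 11)
```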
The dataset of OCT scans was split into training (70%), validation (15%), and test (15%) sets. The split was performed in such a way that scans from the same subject did not exist in different sets. Furthermore, the proportion of glaucoma scans was identical in all the splits. Our network was then trained on an Nvidia 1080Ti GPU until optimal performance was reached on the validation set, after approximately 100 epochs. For this purpose, the standard cross-entropy loss was minimized with the Adam optimizer.21 During the training process, each time an OCT scan was processed (i.e., once during each epoch), a newly generated subset of S = 1000 points was selected and fed into the PointNet architecture. For enhanced robustness and to further enrich the training set, subsequent to the random subsampling process we also used a data augmentation scheme that applied random rigid transformations to the subsampled point clouds. Each rigid transformation was applied only to the spatial coordinates (i.e., the first three coordinates). To evaluate the performance of our method, we reported the area under the receiver operating characteristic curve (ROC-AUC) with uncertainty estimates obtained from a five-fold cross-validation process. 
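The augmentation scheme, a random rigid transform applied only to the three spatial coordinates, might look like the following in outline. The rotation axis, angle range, and translation scale are illustrative assumptions; the study does not specify these parameters for the PointNet training.

```python
# Sketch: rigid-transform augmentation of an (S, 11) point cloud.
# Only the first three columns (x, y, z) are transformed; the one-hot
# class channels (columns 3:) are left untouched, as described in the text.
import numpy as np

def augment(cloud, rng):
    """Rotate the cloud about the z axis by a random angle and add a small
    random translation (both are assumed parameters, for illustration)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    out = cloud.copy()
    out[:, :3] = cloud[:, :3] @ rot.T + rng.normal(scale=0.05, size=3)
    return out

rng = np.random.default_rng(2)
cloud = rng.normal(size=(1000, 11))
aug = augment(cloud, rng)
assert np.array_equal(aug[:, 3:], cloud[:, 3:])   # class channels unchanged
assert not np.allclose(aug[:, :3], cloud[:, :3])  # coordinates moved
```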
Comparisons With a 3D CNN
To compare the performance of our PointNet with a more traditional approach, we trained a 3D CNN.8 For this purpose, each volume was linearly resampled to isotropic dimensions 128 × 128 × 128 voxels, and the resized OCT scans were fed into a 3D CNN. We decided to use the 3D CNN architecture that was described by Maetschke et al.,8 because it had been previously optimized to work with 3D OCT scans of the ONH. Briefly, the network was composed of five 3D convolutional layers with ReLU activation, batch normalization, filter banks of sizes 32–32–32–32–32, filters of sizes 7–5–3–3–3, and strides of 2–1–1–1–1. A global average pooling was used to compute the pre-softmax output of the neural network. The standard cross-entropy loss was minimized with the Adam optimizer. As for the training of the PointNet, the dataset of OCT scans was split into training (70%), validation (15%), and test (15%) sets. Data augmentation was important during the training process: random translations and rotations (±15 degrees), left–right flips, and additions of Gaussian noise to the voxel intensities. We used early stopping and selected the network with the highest validation ROC-AUC during training. We reported the classification ROC-AUC with uncertainty estimates obtained from a five-fold cross-validation process. To provide a fair comparison with PointNet, we decided to train (and test) the 3D CNN either with raw OCT images, or with stacks of segmented B-scans (i.e., label images with each pixel taking a value between 0 and 7 to represent the tissue class number). 
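The isotropic resampling step that precedes the 3D CNN can be sketched as follows. The study used linear resampling; this dependency-free illustration uses nearest-neighbour index lookup instead, and the axial sample count of the toy volume is an assumption (only the 97 B-scans × 384 A-scans raster is given in the Methods).

```python
# Sketch: resample a 3D OCT volume to an isotropic 128^3 grid by
# nearest-neighbour indexing (the study resampled linearly).
import numpy as np

def resample_nearest(vol, shape=(128, 128, 128)):
    """Map each target voxel to the nearest source voxel along each axis."""
    idx = [
        np.clip(np.arange(n) * vol.shape[d] // n, 0, vol.shape[d] - 1)
        for d, n in enumerate(shape)
    ]
    return vol[np.ix_(*idx)]

# Toy volume with the raster reported in the Methods: 97 B-scans of
# 384 A-scans each; 496 axial samples is an assumed, typical value.
vol = np.zeros((97, 384, 496), dtype=np.uint8)
out = resample_nearest(vol)
assert out.shape == (128, 128, 128)
```

For the segmented (label-image) inputs, nearest-neighbour resampling would in fact be the natural choice, because interpolating integer class labels linearly would create meaningless intermediate values.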
Comparisons With RNFL Thickness
We wanted to compare the performance of PointNet with that from a gold standard glaucoma parameter, namely, RNFL thickness.1 RNFL thickness is typically obtained from circular OCT scans. However, such scans were not available for all subjects of our population. Instead, we obtained RNFL thickness from the segmentation software REFLECTIVITY. Briefly, from the 3D segmentation of the OCT scans, the average RNFL thickness was calculated at a distance of 1.4 times the Bruch's membrane opening radius as the minimum distance between the anterior and posterior boundaries of the RNFL tissue. We then reported the diagnostic power as quantified by the AUC. Because the RNFL thickness is a scalar parameter, no classification algorithm was needed to compute the AUC. This strategy also did not require us to specify a single cutoff for RNFL thickness because the performance for all possible RNFL thresholds was evaluated. The AUC was computed for multiple instances (randomly selected scans; each instance represented by 80% of the total population) and was reported as mean ± standard deviation. 
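Because RNFL thickness is a scalar, its AUC can be computed directly as a Mann-Whitney-type statistic, that is, the probability that a randomly chosen glaucoma eye has a thinner RNFL than a randomly chosen healthy eye, with no classifier or threshold. A sketch with hypothetical toy data:

```python
# Sketch: ROC-AUC of a scalar marker computed directly (Mann-Whitney U),
# evaluating all possible thresholds at once without a classifier.
import numpy as np

def auc_from_scalar(values_glaucoma, values_healthy):
    """P(glaucoma eye thinner than healthy eye), with ties counted as 1/2."""
    g = np.asarray(values_glaucoma)[:, None]
    h = np.asarray(values_healthy)[None, :]
    return (g < h).mean() + 0.5 * (g == h).mean()

# Hypothetical toy thicknesses (µm): glaucoma eyes tend to have thinner RNFL.
rng = np.random.default_rng(3)
glaucoma = rng.normal(70, 10, size=200)
healthy = rng.normal(95, 10, size=500)
auc = auc_from_scalar(glaucoma, healthy)
assert 0.5 < auc <= 1.0
```

Repeating this computation over many random 80% subsamples of the population, as described above, yields the reported mean ± standard deviation of the AUC.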
Results
We evaluated the classification performance of (1) RNFL thickness as a gold standard glaucoma parameter, of (2) the 3D CNN approach, and of (3) our proposed PointNet method by performing a five-fold cross-validation study. The three methods were evaluated on the same splits of the data. The AUCs were found to be 0.80 ± 0.03 for RNFL thickness, 0.87 ± 0.02 (raw OCT images) and 0.91 ± 0.02 (segmented OCT images) for the 3D CNN approach, and 0.95 ± 0.01 for the PointNet method (Fig. 2). 
Figure 2.
 
The areas under the curve were found to be 0.80 ± 0.03 for the retinal nerve fiber layer thickness, 0.87 ± 0.02 (raw optical coherence tomography images) and 0.91 ± 0.02 (segmented optical coherence tomography images) for the 3D CNN approach, and 0.95 ± 0.01 for the PointNet method.
When plotting the AUC for glaucoma diagnosis (mean ± standard deviation) as a function of the number of points used to represent a 3D point cloud, convergence is achieved with S = 1000 points and no further increase in performance is obtained with additional points (Fig. 3). 
Figure 3.
 
Diagnosis performance (area under the curve: mean ± standard deviation) as a function of the number of points used to represent a three-dimensional point cloud.
Discussion
In this study, we proposed a relatively simple AI approach based on geometric deep learning to provide a robust glaucoma diagnosis from a single OCT scan of the ONH. Each ONH was first preprocessed as a 3D point cloud to represent all major neural and connective tissue boundaries. Overall, our proposed approach required significantly less information to perform glaucoma classification than other 3D AI approaches. It also performed better than a 3D CNN (originally developed for 3D OCT scans of the ONH), and by taking into account information about both ONH neural and connective tissue layers, it yielded an AUC higher than that obtained from RNFL thickness alone. 
In this study, we found that geometric deep learning (PointNet) was well-adapted to perform glaucoma classification with ONH tissues represented as point clouds. Here, a given ONH was represented with 1000 data points (as a point cloud), which is significantly less than the approximately 18,000,000 data points (or voxels) needed to represent an OCT scan. In other words, we decreased the size of our input by four orders of magnitude. Dealing with smaller inputs may ultimately allow us to reduce the black box effect of AI and to better identify the complex 3D structural and biomechanical signature of the glaucomatous ONH.17 
It is interesting to note that our PointNet performed better than a 3D CNN trained on either the raw or the segmented OCT images on the exact same dataset (AUC = 0.95 ± 0.01 for PointNet vs. AUC = 0.87 ± 0.02 [raw images] or AUC = 0.91 ± 0.02 [segmented images] for the 3D CNN). For the first comparison with raw scans, this finding may not be surprising because 3D OCT scans typically exhibit a considerable amount of noise and artifacts that would have been otherwise eliminated through a preprocessing segmentation step, as proposed herein. For the second comparison with segmented scans, our PointNet may be able to better focus on the most important ONH structural features (such as tissue boundaries and, implicitly, tissue thicknesses and curvatures), because such features were given as direct inputs to the network. However, more research would be required to identify the exact 3D landmarks of the ONH that are critical for a diagnosis of glaucoma. 
In this study, we found that PointNet was able to provide a higher diagnostic accuracy (AUC = 0.95 ± 0.01) as compared with that obtained from a gold standard glaucoma parameter, that is, RNFL thickness (AUC = 0.80 ± 0.03). This result may not be surprising because glaucomatous ONHs exhibit not only neural tissue changes, but also connective tissue changes, such as bending of the peripapillary sclera and changes in LC morphology and pore shape and pathway.22,23 PointNet has the advantage of capturing some of these features while minimizing the total amount of information required to establish a diagnosis. PointNet may also help us to identify the contribution of each individual neural or connective tissue for the diagnosis or prognosis of glaucoma. PointNet also provided a higher diagnostic accuracy than that obtained from the vertical cup-to-disc ratio alone (AUC = 0.91 ± 0.02), suggesting that PointNet can exploit more structural information than that coded in the vertical cup-to-disc ratio. 
Geometric deep learning may have wide applicability in the field of ophthalmology. It is relatively attractive for its ease of use, ease (and speed) of training, and considerably smaller and simpler input size. Although our first application targeted glaucoma, geometric deep learning could also be used for the diagnosis and prognosis of other optic neuropathies24 and for a wide range of corneal25 and retinal disorders.26 
In this study, several limitations warrant further discussion. First, our approach was only tested with one OCT device (Spectralis OCT) and in one Singapore population. Second, we did not consider all structural landmarks that could have improved the diagnosis of glaucoma, such as the 3D configuration of the central retinal vessels27 or the presence of peripapillary atrophy28; neither did we consider other optical properties.29 Third, our nonglaucoma population did not include other major optic neuropathies. The inclusion of such cases could be critical for clinical translation.30 Fourth, the segmentation of the posterior LC and peripapillary sclera boundaries was artificial and was solely based on the amount of visible signal in the OCT scans, which most likely does not coincide with the true anatomical landmarks.31 This factor may have influenced our diagnostic power. 
In conclusion, we provide herein a proof of principle for the application of geometric deep learning in the field of ophthalmology with a special emphasis on glaucoma diagnosis. We found that our technique required significantly less data as input to perform better than a 3D CNN and with an AUC superior to that obtained from RNFL thickness alone. Geometric deep learning may have wide applicability in the field of ophthalmology, and it should be explored for other pathologies. 
Acknowledgments
Funding from (1) the donors of the National Glaucoma Research, a program of the BrightFocus Foundation, for support of this research (G2021010S [MJAG]); (2) SingHealth Duke-NUS Academic Medicine Research Grant (SRDUKAMR21A6 [MJAG]); (3) Singapore MOE Tier 1 grant (R155-000-228-114) [AHT]; (4) the “Retinal Analytics through Machine learning aiding Physics (RAMP)” project supported by the National Research Foundation, Prime Minister's Office, Singapore under its Intra-Create Thematic Grant “Intersection Of Engineering And Health” – NRF2019-THE002-0006 [MJAG/AT]; (5) the NMRC-LCG grant ‘TAckling & Reducing Glaucoma Blindness with Emerging Technologies (TARGET)’, award ID: MOH-OFLCG21jun-0003 [MJAG]. 
Conflict of Interest: AHT and MJAG are the co-founders of the AI start-up company Abyss Processing Pte Ltd that provides 3D AI solutions for glaucoma diagnosis and prognosis. 
Disclosure: A.H. Thiéry, Abyss Processing Pte Ltd (S); F. Braeu, None; T.A. Tun, None; T. Aung, None; M.J.A. Girard, Abyss Processing Pte Ltd (S) 
References
Kanamori A, Nakamura M, Escano MF, Seya R, Maeda H, Negi A. Evaluation of the glaucomatous damage on retinal nerve fiber layer thickness measured by optical coherence tomography. Am J Ophthalmol. 2003; 135: 513–520. [CrossRef] [PubMed]
Mwanza JC, Durbin MK, Budenz DL, et al. Glaucoma diagnostic accuracy of ganglion cell-inner plexiform layer thickness: Comparison with nerve fiber layer and optic nerve head. Ophthalmology. 2012; 119: 1151–1158. [CrossRef] [PubMed]
Kim JA, Kim TW, Weinreb RN, Lee EJ, Girard MJA, Mari JM. Lamina cribrosa morphology predicts progressive retinal nerve fiber layer loss in eyes with suspected glaucoma. Sci Rep. 2018; 8: 738. [CrossRef] [PubMed]
Downs JC, Girkin CA. Lamina cribrosa in glaucoma. Curr Opin Ophthalmol. 2017; 28: 113–119. [CrossRef] [PubMed]
Wang X, Tun TA, Nongpiur ME, et al. Peripapillary sclera exhibits a v-shaped configuration that is more pronounced in glaucoma eyes. Br J Ophthalmol. 2022; 106: 491–496. [CrossRef] [PubMed]
Wang YX, Yang H, Luo H, et al. Peripapillary scleral bowing increases with age and is inversely associated with peripapillary choroidal thickness in healthy eyes. Am J Ophthalmol. 2020; 217: 91–103. [CrossRef] [PubMed]
Geevarghese A, Wollstein G, Ishikawa H, Schuman JS. Optical coherence tomography and glaucoma. Annu Rev Vis Sci. 2021; 7: 693–726. [CrossRef] [PubMed]
Maetschke S, Antony B, Ishikawa H, Wollstein G, Schuman J, Garnavi R. A feature agnostic approach for glaucoma detection in OCT volumes. PLoS One. 2019; 14: e0219126. [CrossRef] [PubMed]
Ran AR, Cheung CY, Wang X, et al. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: A retrospective training and validation deep-learning analysis. Lancet Digit Health. 2019; 1: e172–e182. [CrossRef] [PubMed]
Russakoff DB, Mannil SS, Oakley JD, et al. A 3D deep learning system for detecting referable glaucoma using full OCT macular cube scans. Transl Vis Sci Technol. 2020; 9: 12. [CrossRef] [PubMed]
Panda SK, Cheong H, Tun TA, et al. Describing the structural phenotype of the glaucomatous optic nerve head using artificial intelligence. Am J Ophthalmol. 2022; 236: 172–182. [CrossRef] [PubMed]
Gutierrez-Becker B, Wachinger C. Deep multi-structural shape analysis: Application to neuroanatomy. 2018:arXiv:1806.01069.
Bronstein MM, Bruna J, LeCun Y, Szlam A, Vandergheynst P. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine. 2017; 34: 18–42. [CrossRef]
Wu Z, Pan S, Chen F, Long G, Zhang C, Yu PS. A Comprehensive survey on graph neural networks. IEEE Trans Neural Netw Learn Syst. 2021; 32: 4–24. [CrossRef] [PubMed]
Qi CR, Su H, Mo K, Guibas LJ. PointNet: Deep learning on point sets for 3D classification and segmentation. 2016:arXiv:1612.00593.
Yang H, Reynaud J, Lockwood H, et al. 3D histomorphometric reconstruction and quantification of the optic nerve head connective tissues. Methods Mol Biol. 2018; 1695: 207–267. [CrossRef] [PubMed]
Sigal IA, Ethier CR. Biomechanics of the optic nerve head. Exp Eye Res. 2009; 88: 799–807. [CrossRef] [PubMed]
Jin Y, Wang X, Irnadiastputri SFR, et al. Effect of changing heart rate on the ocular pulse and dynamic biomechanical behavior of the optic nerve head. Invest Ophthalmol Vis Sci. 2020; 61: 27. [CrossRef] [PubMed]
Devalla SK, Renukanand PK, Sreedhar BK, et al. DRUNET: A dilated-residual U-Net deep learning network to segment optic nerve head tissues in optical coherence tomography images. Biomed Opt Express. 2018; 9: 3244–3265. [CrossRef] [PubMed]
Devalla SK, Pham TH, Panda SK, et al. Towards label-free 3D segmentation of optical coherence tomography images of the optic nerve head using deep learning. Biomed Opt Express. 2020; 11: 6356–6378. [CrossRef] [PubMed]
Kingma DP, Ba J. Adam: A method for stochastic optimization. 2014:arXiv:1412.6980.
Wang B, Lucy KA, Schuman JS, et al. Tortuous pore path through the glaucomatous lamina cribrosa. Sci Rep. 2018; 8: 7281. [CrossRef] [PubMed]
Shoji T, Kuroda H, Suzuki M, Ibuki H, Araie M, Yoneya S. Glaucomatous changes in lamina pores shape within the lamina cribrosa using wide bandwidth, femtosecond mode-locked laser OCT. PLoS One. 2017; 12: e0181675. [CrossRef] [PubMed]
Girard MJA, Panda SK, Aung Tun T, et al. 3D structural analysis of the optic nerve head to robustly discriminate between papilledema and optic disc drusen. 2021:arXiv:2112.09970.
Dos Santos VA, Schmetterer L, Stegmann H, et al. CorneaNet: Fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomed Opt Express. 2019; 10: 622–641. [CrossRef] [PubMed]
Schmidt-Erfurth U, Reiter GS, Riedl S, et al. AI-based monitoring of retinal fluid in disease activity and under therapy. Prog Retin Eye Res. 2022; 86: 100972. [CrossRef] [PubMed]
Panda SK, Cheong H, Tun TA, et al. The three-dimensional structural configuration of the central retinal vessel trunk and branches as a glaucoma biomarker. Am J Ophthalmol. 2022; 240: 205–216.
Wang YX, Panda-Jonas S, Jonas JB. Optic nerve head anatomy in myopia and glaucoma, including parapapillary zones alpha, beta, gamma and delta: Histology and clinical features. Prog Retin Eye Res. 2021; 83: 100933. [CrossRef] [PubMed]
Leung CKS, Lam AKN, Weinreb RN, et al. Diagnostic assessment of glaucoma and non-glaucomatous optic neuropathies via optical texture analysis of the retinal nerve fibre layer. Nat Biomed Eng. 2022; 6: 593–604. [CrossRef] [PubMed]
Al-Aswad LA, Ramachandran R, Schuman JS, et al. Artificial intelligence for glaucoma: Creating and implementing AI for disease detection and progression. Ophthalmol Glaucoma. 2022; 5: e16–e25. [CrossRef] [PubMed]
Girard MJ, Strouthidis NG, Ethier CR, Mari JM. Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head. Invest Ophthalmol Vis Sci. 2011; 52: 7738–7748. [CrossRef] [PubMed]