Automatic Anterior Chamber Angle Classification Using Deep Learning System and Anterior Segment Optical Coherence Tomography Images
Author Affiliations & Notes
  • Wanyue Li
    Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
  • Qian Chen
    Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Chunhui Jiang
    Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Guohua Shi
    Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
    CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
  • Guohua Deng
    Department of Ophthalmology, the Third People's Hospital of Changzhou, Changzhou, Jiangsu, China
  • Xinghuai Sun
    Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Correspondence: Chunhui Jiang, Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, Fudan University, 83 Fenyang Road, Shanghai 200031, People's Republic of China. e-mail: chhjiang70@163.com 
  • Guohua Shi, Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Kelin Road, Suzhou 215163, People's Republic of China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China. e-mail: ghshi_lab@126.com 
  • Footnotes
    *  WL and QC contributed equally to this study and share first authorship.
Translational Vision Science & Technology May 2021, Vol. 10, 19. doi: https://doi.org/10.1167/tvst.10.6.19
Abstract

Purpose: The purpose of this study was to develop a software package for the automatic classification of anterior chamber angle using anterior segment optical coherence tomography (AS-OCT).

Methods: AS-OCT images were collected from subjects with open, narrow, and closure anterior chamber angles, which were graded based on ultrasound biomicroscopy (UBM) results. The Inception version 3 network and the transfer learning technique were applied in the design of an algorithm for anterior chamber angle classification. The classification performance was evaluated by fivefold cross-validation and on an independent test dataset.

Results: The proposed algorithm reached a sensitivity of 0.999 and a specificity of 1.000 in the judgment of closed and nonclosed angles. In the three-class task (open angle, narrow angle, and angle-closure), the proposed method reached an overall sensitivity of 0.989 and specificity of 0.995. Additionally, the sensitivity and specificity reached 1.000 and 1.000 for angle-closure, 0.983 and 0.993 for narrow angle, and 0.985 and 0.991 for open angle.

Conclusions: The experimental results showed that the proposed method can classify the anterior chamber angle from AS-OCT images with high accuracy and could be of value in future practice.

Translational Relevance: The proposed deep learning-based method, which automates the classification of the anterior chamber angle, can facilitate the clinical assessment of glaucoma.

Introduction
Glaucoma is the second leading cause of blindness worldwide, as well as the foremost cause of irreversible blindness according to data from the World Health Organization.1 Primary angle closure glaucoma (PACG) is a major form of glaucoma in Asia.2 This type of glaucoma is characterized by a narrow anterior chamber angle (ACA).3 Because prophylactic treatment can effectively control the development of PACG, accurate detection of a narrow ACA is extremely important for both diagnosis and management.4 Gonioscopy is considered the gold standard technique for examining the ACA, but it is limited by interobserver variability, even among experienced glaucoma specialists.5 Because of its good penetration, ultrasound biomicroscopy (UBM) can provide a more detailed assessment of the entire anterior segment, so it has been used to evaluate the ACA for decades.6 However, UBM is a time-consuming examination that requires physical contact with the patient and offers limited resolution. Anterior segment optical coherence tomography (AS-OCT) is a promising alternative, as it can acquire high-resolution images of the anterior segment, including the ACA, in a noninvasive and noncontact manner.5
For ACA screening, a rather large number of subjects need to be examined within a short period of time, so the speed of the ACA assessment method is of great importance. There have been studies on ACA examination in patients with glaucoma, but few have focused specifically on population screening.7–13 Recently, artificial intelligence (AI) has been used in ACA classification due to its major advantages of being high-speed and fully automatic.7,14–16 Previously, we reported the automatic classification of the ACA using UBM and AI.16 Compared with UBM, OCT images have better contrast and a higher signal-to-noise ratio (SNR). For AS-OCT images, Fu et al.7 proposed a multilevel deep network for angle-closure detection and, in separate work,14 an angle-closure detection method based on the VGG-16 network and the transfer learning technique. The purpose of this study was to test the efficacy of a new deep learning system for the automatic classification of the ACA using AS-OCT images and to explore whether the proposed method performs better than previously reported methods.
Methods
Dataset
The images used in this study were taken with a commercially available SS-OCT system (CASIA SS-1000; Tomey Corporation, Nagoya, Japan; software version 6H.4) at the Eye, Ear, Nose, and Throat Hospital of Fudan University (Shanghai, China) between January 2017 and December 2019. A total of 3000 AS-OCT images were collected from 3000 eyes (1826 subjects). OCT scans were taken by an experienced observer under consistent light conditions (approximately 340 lux). Subjects were asked to remain in the primary gaze position toward an internal fixation light. The eyelids were kept open by a second examiner, who took care to avoid placing any pressure on the eye. The standard anterior segment protocol, composed of 128 radial scans (each 16 mm in length and 6 mm in depth) taken within approximately 2.4 seconds, was used. Horizontal scans (0-180 degrees) were selected for the final analysis. The UBM results of the nasal and temporal angles were used as the standard references. The angles were clinically classified as open, narrow, or angle-closure by two specialists who showed good intra- and inter-observer consistency (Supplementary Table S1). The angle was defined as open if the trabecular-iris angle (TIA) was 15 degrees or above, and as narrow if the TIA was less than 15 degrees. The angle was defined as angle-closure if there was contact between the peripheral iris and the scleral spur.13 Because the left and right ACA can be of different types in some cases, we split each original OCT image into left and right ACA images (Fig. 1). The final dataset contained 6000 images: 2000 images of open angle, 2000 images of narrow angle, and 2000 images of angle-closure (Supplementary Table S2). In this study, 80% of the dataset was used as the training set and 20% as the testing set, so there were 4800 images in the training set and 1200 images in the testing set. It should be noted that the training and test sets were split at the participant level, so no patient had images in both the training and testing sets (a minimal split sketch follows Figure 1).
Figure 1.
 
The original OCT image is split in half along the red dotted line into left and right ACA images.
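For reproducibility, the participant-level split described above can be expressed with a grouped splitter. The following is a minimal sketch, assuming a hypothetical index table with columns image_path, label, and subject_id; the paper does not publish its data-handling code.

```python
# Minimal sketch of a participant-level 80/20 split. The file name and
# column names (image_path, label, subject_id) are hypothetical.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("aca_index.csv")  # one row per half-image

# GroupShuffleSplit keeps all images of a subject on the same side of the
# split, so no patient contributes to both the training and testing sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(df, groups=df["subject_id"]))
train_df, test_df = df.iloc[train_idx], df.iloc[test_idx]

# Sanity check: no subject overlap between the two sets.
assert set(train_df["subject_id"]).isdisjoint(set(test_df["subject_id"]))
```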
Network Architecture
Inception version 3 was used as the ACA-classifying convolutional neural network (CNN),17 as illustrated in Figure 2. Inception version 3 is a deep CNN architecture based on GoogLeNet, developed by Google. The architecture uses Inception modules, which factorize 3 × 3 convolutions into two smaller convolutions, 1 × 3 and 3 × 1. The transfer learning technique was applied to train the proposed ACA classification model, which was pretrained on the ImageNet dataset. Generally, there are two common ways to fine-tune an image classification network. One way is to redesign the last few layers (highlighted by the red dotted box in Fig. 2); because these layers were originally designed for the 1000-category ImageNet classification task, they must be redesigned for the specific task at hand and then retrained to recognize the new classes. The other way is to freeze only the low-level layers and retrain all the remaining layers (highlighted by the blue dotted box in Fig. 2), starting from the weights pretrained on ImageNet. Because OCT images and the images in the ImageNet dataset are substantially different in nature, the second way is a good choice for retraining our ACA classification network.
Figure 2.
 
Architecture of Inception version 3. The part highlighted by the red dotted box represents the layers that are changed and retrained in way one; the part highlighted by the blue dotted box represents the layers that are retrained in way two.
The Inception version 3 model was retrained with the cross-entropy loss for 50 epochs using the Adam optimizer, with a batch size of 64 and an initial learning rate of 0.001. The network was implemented with the Keras framework and trained on two GeForce GTX 1080Ti GPUs.
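As an illustration of way two, the following Keras sketch loads ImageNet-pretrained Inception version 3, freezes only the low-level layers, replaces the 1000-class head with a 3-class ACA head, and compiles with the hyperparameters stated above. The freeze boundary (the first 100 layers) is an assumption, because the paper does not specify where the "low-level" layers end.

```python
# Hedged sketch of transfer learning "way two" (not the authors' exact code).
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models, optimizers

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3), pooling="avg")

FREEZE_UP_TO = 100  # assumed cutoff for the "low-level" layers
for layer in base.layers[:FREEZE_UP_TO]:
    layer.trainable = False   # keep generic low-level features frozen
for layer in base.layers[FREEZE_UP_TO:]:
    layer.trainable = True    # retrain mid- and high-level layers

# Replace the original 1000-class ImageNet head with a 3-class ACA head
# (open angle, narrow angle, angle-closure).
outputs = layers.Dense(3, activation="softmax")(base.output)
model = models.Model(base.input, outputs)

model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training call matching the stated settings (data pipeline omitted):
# model.fit(x_train, y_train, epochs=50, batch_size=64,
#           validation_data=(x_val, y_val))
```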
Statistics
Using the experts' labeled results as the reference, receiver operating characteristic (ROC) curves were created, and the area under the curve (AUC) was calculated as a metric of the learning-based method. Accuracy, sensitivity, and specificity were also used to evaluate the classification performance of the proposed method and were calculated as follows:
\begin{equation}accuracy = \frac{{TP + TN}}{{TP + FP + TN + FN}},\tag{1}\end{equation}
\begin{equation}sensitivity = \frac{{TP}}{{TP + FN}},\tag{2}\end{equation}
\begin{equation}specificity = \frac{{TN}}{{TN + FP}},\tag{3}\end{equation}
where TP, TN, FP, and FN represent the numbers of true positives, true negatives, false positives, and false negatives, respectively. For the three-class ACA classification, TP is defined as the number of cases correctly identified as a given class (open angle, narrow angle, or angle-closure), TN as the number of cases correctly identified as the other classes, FP as the number of cases incorrectly identified as the given class, and FN as the number of cases incorrectly identified as the other classes.
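As a concrete illustration, equations (1) to (3) can be computed per class in this one-vs-rest fashion from the confusion matrix. The sketch below uses placeholder labels; the class names and example data are assumptions, not the study's data.

```python
# One-vs-rest computation of equations (1)-(3) from a confusion matrix.
# y_true / y_pred are placeholders standing in for the UBM-based reference
# labels and the network's predictions.
from sklearn.metrics import confusion_matrix

classes = ["open", "narrow", "closure"]
y_true = ["open", "narrow", "closure", "open", "closure", "narrow"]
y_pred = ["open", "narrow", "closure", "open", "closure", "open"]

cm = confusion_matrix(y_true, y_pred, labels=classes)  # rows: true, cols: predicted

for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # Eq. (1)
    sensitivity = tp / (tp + fn)                # Eq. (2)
    specificity = tn / (tn + fp)                # Eq. (3)
    print(f"{name}: acc={accuracy:.3f}, sens={sensitivity:.3f}, spec={specificity:.3f}")
```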
Results
Judgment of Angle-Closure
For the judgment of closed and nonclosed angles, Inception version 3 and three other networks (VGG16, ResNet18, and ResNet50) were all trained on our dataset with the transfer learning technique in either way one or way two. The proposed method (Inception version 3 with the transfer learning technique in way two) obtained the best results: the AUC was 1.000, and the sensitivity and specificity were 0.999 and 1.000, respectively (Table 1, Fig. 3).
Table 1.
 
Comparison of Different Networks With Different Transfer Learning Ways on Closed and Nonclosed ACA Classification
Figure 3.
 
Receiver operating characteristic curves (ROC) of VGG16, ResNet18, ResNet50, and Inception version 3 on closed and nonclosed ACA classification. (a) The ROC of the four networks trained with way one; (b) The ROC of the four networks trained with way two.
Classification of ACA
Different networks with different transfer learning ways were also compared on the open angle, narrow angle, and angle-closure classifications. The average sensitivity and specificity of the proposed method (Inception version 3 with the transfer learning technique in way two) on the testing set reached 0.989 and 0.995, respectively (Table 2), and the AUC was 1.000 (Fig. 4b). The proposed model also reached a sensitivity and specificity of 1.000 and 1.000 for angle-closure, 0.983 and 0.993 for narrow angle, and 0.985 and 0.991 for open angle, respectively (Table 3).
Table 2.
 
Comparison of Different Networks With Different Transfer Learning Ways on Open, Narrow Angle, and Angle-Closure Classifications
Figure 4.
 
Receiver operating characteristic curves (ROC) of VGG16, ResNet18, ResNet50, and Inception version 3 on open, narrow-angle, and angle-closure classifications. (a) The ROC of the four networks trained with way one; (b) the ROC of the four networks trained with way two.
Table 3.
 
Deep Learning-Based ACA Classification Method Compared with Manual Classification
Discussion
Using the algorithm described here with AS-OCT images, the ACA could be automatically classified into three categories: open, narrow, and angle-closure. The algorithm achieved an overall sensitivity of 0.989, a specificity of 0.995, and an AUC of 1.000.
In a population-based UBM study by Henzan et al.,18 the average TIA of eyes with PACG or suspected PACG was 10.3 ± 3.9 degrees, and that of the healthy control group was 24.2 ± 9.3 degrees. In this and a former study, we used 15 degrees, that is, roughly one standard deviation (SD) above the average of PACG or suspected PACG subjects (10.3 + 3.9 = 14.2 degrees) and one SD below the average of healthy control subjects (24.2 - 9.3 = 14.9 degrees), as the boundary between narrow and open angle. Although the OCT system automatically provides ACA measurements, there is no well-accepted AS-OCT standard for a narrow angle in population-based studies; as a result, UBM results were used as the reference.
Automatic assessment of the ACA using AS-OCT has been reported before. Early on, the Zhongshan Angle Assessment Program19 provided a semi-automatic algorithm, but the method required manual marking of the scleral spur. More recently, others have reported fully automated analysis of the ACA using AS-OCT images and AI techniques. Because most methods focus on the detection of angle-closure,14,15,20 for a fair comparison, we first trained and tested the proposed method on the angle-closure judgment task. Our method achieved an AUC of 1.000 for closed and nonclosed angle classification. The algorithm was also able to classify the angle into open, narrow, and angle-closure with rather high accuracy. To gain insight into the decisions of the neural networks, gradient-weighted class activation mapping (Grad-CAM)21 was applied to generate visual explanations of the CNN-based network. The resulting saliency maps highlight the regions on which the neural network focuses; warmer colors represent greater attention. The saliency maps for the different angle types (Fig. 5) showed that the region of greatest attention was around the ACA, which coincides with the main area ophthalmologists use to make a diagnosis.
Figure 5.
 
Representative saliency maps highlight the regions that are most discriminative in the ACA classification.
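For reference, the Grad-CAM computation behind such saliency maps can be sketched as follows. This is a generic implementation of the method of Selvaraju et al.,21 not the authors' code, and the choice of "mixed10" (the last mixed block of Keras's Inception version 3) as the target layer is an assumption.

```python
# Hedged Grad-CAM sketch for a Keras Inception v3 classifier.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, class_index, conv_layer_name="mixed10"):
    """Return a [0, 1] heatmap over the chosen conv feature map."""
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])  # add batch dim
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)   # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pooled grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                       # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```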
The accuracy achieved in this study was better than the accuracies reported in other studies. There are two possible explanations. One is that the method proposed in this study was based on the transfer learning technique in way two. Transfer learning is a useful method for overcoming the scarcity of labeled medical images, because it can help train a network in a short time with high accuracy; it has therefore been widely used in a variety of fields.22–24,26 As described in the Network Architecture section, there are two common ways to fine-tune an image classification network. One is to redesign only the last few layers (way one: highlighted by the red dotted box in Fig. 2), and the other is to freeze only the low-level layers and retrain all the remaining layers (way two: highlighted by the blue dotted box in Fig. 2). In this work, we ran multiple ACA classification experiments with different networks trained in different transfer learning ways. The results (see Tables 1, 2; Figs. 3, 4) demonstrated the superiority of transfer learning way two over way one. The reason might be that natural images and OCT images share nearly the same low-level features, but their middle- and high-level features differ, so training only the parameters of the last layers might have made the network less sensitive to the features of the new image modality. The previous works7,14,15 trained their networks with transfer learning way one. The other possible explanation might be that the datasets in two previous papers7,14 were unbalanced, which could affect the performance of the network.25
The performance in this study was also better than that of our previous work,16 which classified the ACA in UBM images, although the UBM and AS-OCT images were trained with almost the same model architecture. There are two possible reasons: one is that the signal-to-noise ratio and image contrast of UBM images are lower than those of AS-OCT images, and the other is that the amount of data (without augmentation) for UBM classification (540 images) was much smaller than that for AS-OCT (6000 images). For a learning-based method, the amount and variety of data are important for the performance of the networks.
The results of this study suggest that, by using AI and AS-OCT images, closed, narrow, and open angle cases could be screened automatically with high accuracy. Classic methods of ACA assessment, such as gonioscopy and UBM, are time-consuming, require contact with the patient, and rely heavily on the examiner's experience. In contrast, AS-OCT is performed in a noncontact manner, and the automatic method developed here can process an image within 1 second. The combination of AS-OCT and automatic image processing is therefore well suited to the screening of a large population. Subjects at risk could be detected and benefit from immediate follow-up and proper prophylactic treatment.
This study had several limitations. Images acquired by only one AS-OCT system were included in this study, all the subjects were Chinese, and the number of subjects was limited. In addition, because the incident angle of the scan light can affect the result of OCT imaging,27 and only images acquired with one protocol were included, the algorithm developed here needs further modification before it can process ACA images acquired with other devices or protocols. Moreover, because the AS-OCT system provides automatic ACA measurements, the classification accuracy might be improved if these measurements were also incorporated into the algorithm; this will be tested in the future.
In this study, an automated method was proposed and found to achieve highly accurate ACA classification using AS-OCT images and AI techniques; the algorithm developed here could therefore be of value in future practice.
References
Thylefors B, Negrel AD, Pararajasegaram R, Dadzie KY. Global data on blindness. Bull World Health Organ. 1995; 73(1): 115–121. [PubMed]
Foster PJ, Johnson GJ. Glaucoma in China: how big is the problem? Br J Ophthalmol. 2001; 85(11): 1277–1282. [CrossRef] [PubMed]
Tham YC, Li X, Wong TY, et al. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014; 121(11): 2081–2090. [CrossRef] [PubMed]
He M, Friedman DS, Ge J, et al. Laser peripheral iridotomy in primary angle-closure suspects: biometric and gonioscopic outcomes. The Liwan Eye study. Ophthalmology. 2007; 114(3): 494–500.
Rigi M, Bell NP, Lee DA, et al. Agreement between gonioscopic examination and swept source fourier domain anterior segment optical coherence tomography imaging. J Ophthalmol. 2016; 2016(2016): 1727039. [PubMed]
Pavlin CJ, Sherar MD, Foster FS. Subsurface ultrasound microscopic imaging of the intact eye. Ophthalmology. 1990; 97(2): 244–250. [CrossRef] [PubMed]
Fu H, Xu Y, Lin S, et al. Angle-closure detection in anterior segment OCT based on multi-level deep network. IEEE Trans Cybern. 2020; 50(7): 3358–3366. [CrossRef] [PubMed]
Jing T, Marziliano P, Baskaran M, Wong HT, Aung T. Automatic anterior chamber angle assessment for HD-OCT images. IEEE Trans Biomed Eng. 2011; 58(11): 3242–3249. [CrossRef] [PubMed]
Xu Y, Liu J, Tan NM, Lee BH, et al. Anterior chamber angle classification using multiscale histograms of oriented gradients for glaucoma subtype identification. Annu Int Conf IEEE Eng Med Biol Soc. 2012; 2012: 3167–3170.
Williams D, Zheng Y, Bao F, Elsheikh A. Automatic segmentation of anterior segment optical coherence tomography images. J Biomed Opt. 2013; 18(5): 56003. [CrossRef] [PubMed]
Xu Y, Liu J, Cheng J, et al. Automated anterior chamber angle localization and glaucoma type classification in OCT images. Annu Int Conf IEEE Eng Med Biol Soc. 2013; 2013: 7380–7383.
Ni NS, Tian J, Pina M, Hong-Tym W. Anterior chamber angle shape analysis and classification of glaucoma in SS-OCT images. J Ophthalmol. 2014; 2014: 1–12. [CrossRef]
Issac Niwas S, et al. Cross-examination for angle-closure glaucoma feature detection. IEEE J Biomed Health Inform. 2016; 20(1): 343–354. [CrossRef]
Fu H, Baskaran M, Xu Y, et al. A deep learning system for automated angle-closure detection in anterior segment optical coherence tomography images. Am J Ophthalmol. 2019; 203: 37–45. [CrossRef] [PubMed]
Xu BY, Chiang M, Chaudhary S, Kulkarni S, Pardeshi AA, Varma R. Deep learning classifiers for automated detection of gonioscopic angle closure based on anterior segment OCT images. Am J Ophthalmol. 2019; 208: 273–280. [CrossRef] [PubMed]
Shi G, Jiang Z, Deng G, et al. Automatic classification of anterior chamber angle using ultrasound biomicroscopy and deep learning. Transl Vis Sci Technol. 2019; 8(4): 25. [CrossRef] [PubMed]
Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016: 2818–2826. Available at: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf.
Henzan IM, Tomidokoro A, Uejo C, et al. Comparison of ultrasound biomicroscopic configurations among primary angle closure, its suspects, and nonoccludable angles: the Kumejima Study. Am J Ophthalmol. 2011; 151: 1065–1073. [CrossRef] [PubMed]
Console JW, Sakata LM, Aung T, et al. Quantitative analysis of anterior segment optical coherence tomography images: the Zhongshan Angle Assessment Program. Br J Ophthalmol. 2008; 92: 1612–1616. [CrossRef] [PubMed]
Fu H, Xu Y, Lin S, Wing D, Wong K, Baskaran M. Angle-closure detection in anterior segment OCT based on multilevel deep network. IEEE Trans Cybern. 2020; 50(7): 3358–3366. [CrossRef] [PubMed]
Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. Proc IEEE Comput Soc Comput Vis Pattern Recognit. 2017; 2017: 22–29.
Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172(5): 1122–1124.e9. [CrossRef] [PubMed]
Karri SPK, Chakraborty D, Chatterjee J. Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration. Biomed Opt Express. 2017; 8(2): 579–592. [CrossRef] [PubMed]
Burlina P, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017; 135(11): 1170–1176. [CrossRef] [PubMed]
Feng Y, Zhou M, et al. Imbalanced classification: an objective-oriented review. Available at: https://arxiv.org/abs/2002.04592.
Wang J, Deng G, Li W, et al. Deep learning for quality assessment of retinal OCT images. Biomed Opt Express. 2019; 10(12): 6057–6072. [CrossRef] [PubMed]
Huang Y, Saarakkala S, Toyras J, et al. Effects of optical beam angle on quantitative optical coherence tomography (OCT) in normal and surface degenerated bovine articular cartilage. Phys Med Biol. 2011; 56: 491–509. [CrossRef] [PubMed]