Volume 8, Issue 4
Open Access
Articles | August 2019
Automatic Classification of Anterior Chamber Angle Using Ultrasound Biomicroscopy and Deep Learning
Author Affiliations & Notes
  • Guohua Shi
    Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu Province, China
  • Zhenying Jiang
    Department of Ophthalmology and Visual Science, Eye, Ear, Nose and Throat Hospital, Shanghai Medical College of Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University), Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Guohua Deng
    Department of Ophthalmology, the Third People's Hospital of Changzhou, Changzhou, Jiangsu Province, China
  • Guangxing Liu
    Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu Province, China
  • Yuan Zong
    Department of Ophthalmology and Visual Science, Eye, Ear, Nose and Throat Hospital, Shanghai Medical College of Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University), Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Chunhui Jiang
    Department of Ophthalmology and Visual Science, Eye, Ear, Nose and Throat Hospital, Shanghai Medical College of Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University), Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Qian Chen
    Department of Ophthalmology and Visual Science, Eye, Ear, Nose and Throat Hospital, Shanghai Medical College of Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University), Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Yi Lu
    Department of Ophthalmology and Visual Science, Eye, Ear, Nose and Throat Hospital, Shanghai Medical College of Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University), Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Xinhuai Sun
    Department of Ophthalmology and Visual Science, Eye, Ear, Nose and Throat Hospital, Shanghai Medical College of Fudan University, Shanghai, China
    NHC Key Laboratory of Myopia (Fudan University), Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
  • Correspondence: Chunhui Jiang, Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, 83 Fenyang Rd, Shanghai 200031, People's Republic of China. e-mail: [email protected] 
  • Qian Chen, Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, 83 Fenyang Rd, Shanghai 200031, People's Republic of China. e-mail: [email protected] 
  • Footnotes
    *  Guohua Shi and Zhenying Jiang contributed equally to this study and share first authorship.
Translational Vision Science & Technology August 2019, Vol.8, 25. doi:https://doi.org/10.1167/tvst.8.4.25
Abstract

Purpose: To develop a software package for automated classification of the anterior chamber angle of the eye using ultrasound biomicroscopy.

Methods: Ultrasound biomicroscopy images were collected, and the trabecular-iris angle was manually measured and classified into three categories: open angle, narrow angle, and angle closure. Inception v3 was used as the classifying convolutional neural network, and the algorithm was trained.

Results: In the test set, the neural network achieved a recall of 97% and a classification accuracy of 97.2%, and the overall area under the curve was 0.988. The sensitivity and specificity were 98.04% and 99.09% for the open angle, 96.30% and 98.13% for the narrow angle, and 98.21% and 99.05% for the angle closure categories, respectively.

Conclusions: Preliminary results show that automated classification of the anterior chamber angle achieved satisfactory sensitivity and specificity and could be helpful in clinical practice.

Translational Relevance: The present work suggests that the algorithm described here could be useful in categorizing the anterior chamber angle and in screening for subjects at high risk of angle closure.

Introduction
Primary angle closure glaucoma (PACG) is one of the leading causes of blindness in Asians.1 Although early prophylactic treatment can control the development of the disease well,2 the disease can be quite asymptomatic in its early stages and is frequently recognized only when already advanced. PACG eyes tend to have distinctive ocular biometric findings, such as a smaller central and peripheral anterior chamber depth, a narrow or closed anterior chamber angle (ACA), and a thicker, more anteriorly positioned lens, all of which can be assessed and measured objectively in clinical practice.3 Narrow or closed angles are also common features across the angle closure disease spectrum, and anterior chamber characteristics (especially ACA assessments) are crucial for early detection. However, such assessment requires trained graders and is labor intensive.4 Automated assessment, which could be cost-effective, is another option: it could screen out those with a narrow or closed angle and refer them to experienced specialists for further examination. Recently, many studies have reported encouraging results in the development of automated image assessment software.5–8 In this study, we propose a deep learning method for automated classification of the ACA using ultrasound biomicroscopy (UBM), a noninvasive imaging technique that allows high-resolution assessment of the anatomical features of the anterior segment.9
Methods
Datasets
Images taken by UBM (MD-300L, 50-MHz probe transducer; Meda Co., Ltd, Tianjin, China) from January 2017 to December 2017 at the Eye and Ear Nose and Throat Hospital of Fudan University (Shanghai, China) were used in this study. During image analysis, the scleral spur, defined as the innermost point of a line separating the ciliary muscle and the scleral fibers, was located first. The angle was then classified as closed if there was contact between the peripheral iris and the scleral spur; otherwise, it was subjected to trabecular-iris angle (TIA) measurement. The TIA was defined, as in Pavlin et al.10 and Marchini et al.,11 as the angle with its apex in the iris recess and its arms passing through a point on the trabecular meshwork 500 μm from the scleral spur and the point on the iris perpendicularly opposite. The angle was classified as open if the TIA was 15 degrees or above and as narrow if the TIA was less than 15 degrees.12 During grading, if the results of the two graders (Zhenying Jiang and Yuan Zong) agreed, that result was used as the final result; otherwise, the senior specialist Qian Chen made the final call.
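As an illustration of the TIA geometry and the three-way labeling rule only (not the authors' measurement software), a minimal sketch in Python; the function names and the (x, y) landmark inputs are hypothetical:

```python
import numpy as np

def trabecular_iris_angle(apex, tm_point, iris_point):
    """TIA in degrees: apex in the iris recess; one arm through a point on the
    trabecular meshwork 500 um from the scleral spur, the other through the
    point on the iris perpendicularly opposite."""
    a = np.asarray(tm_point, dtype=float) - np.asarray(apex, dtype=float)
    b = np.asarray(iris_point, dtype=float) - np.asarray(apex, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def classify_angle(tia_degrees, iris_spur_contact=False):
    """Three-way label used in this study."""
    if iris_spur_contact:
        return "angle closure"          # peripheral iris touches the scleral spur
    return "open angle" if tia_degrees >= 15.0 else "narrow angle"
```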
Then, the images were randomly divided into the training set and test set by generating random numbers ranging from 0 to 1, whereby a random number greater than one-third put the image in the training set, and a random number less than or equal to one-third in the test set. In the classification model, the algorithm was trained to classify the images into three categories, namely, open angle, narrow angle, and angle closure, by using the training set. Subsequently, the test set was used to test the algorithm. Because of the rotational invariance and size insensitivity of UBM images, we augmented the training images with image rotation and scaling. 
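A minimal sketch of this split-and-augment step, under stated assumptions (the exact rotation angles and scale factors were not reported, and the helper names are illustrative):

```python
import random
from PIL import Image

def split_dataset(images, seed=0):
    """Per-image uniform draw in [0, 1): >1/3 -> training set, else test set."""
    rng = random.Random(seed)
    train, test = [], []
    for img in images:
        (train if rng.random() > 1 / 3 else test).append(img)
    return train, test

def augment(img: Image.Image, angle_deg: float, scale: float) -> Image.Image:
    """Rotate and rescale one UBM image (rotation/scale augmentation)."""
    w, h = img.size
    rotated = img.rotate(angle_deg, resample=Image.BILINEAR)
    return rotated.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
```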
Network Architecture
Inception v3 was used as the classifying convolutional neural network (CNN); the architecture is illustrated in Figure 1. Inception v3 is a deep CNN architecture based on GoogLeNet and developed by Google.13 As shown in Figure 1, Inception v3 extends Inception v2 and achieves high efficiency in image recognition tasks by factorizing each 5 × 5 convolution into two smaller 3 × 3 convolutions to speed up computation. By expanding the filter banks in width, Inception v3 largely prevents overfitting. Moreover, Inception v3 further factorizes 7 × 7 convolutions and concatenates multiple different layers with batch normalization, yielding even higher efficiency and lower computational complexity.
Figure 1
Architecture of Inception v3. The Inception network stacks three convolution layers with filter sizes of 1 × 1, 3 × 3, and 5 × 5 together with one 3 × 3 pooling layer, which increases the width of the network and improves its adaptability to scale.
An Inception v3 CNN that had been pretrained on the 1000 object categories (1.28 million images) of the 2014 ImageNet Large Scale Visual Recognition Challenge was used, after the final classification layer was removed from the network. We retrained this layer on our dataset (Supplementary Table S1). To make the images compatible with the input dimensions of the Inception v3 network, each image was resized to 299 × 299 pixels.
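A minimal Keras sketch of this transfer-learning setup, assuming a TensorFlow backend (the paper does not name its framework): the ImageNet-pretrained Inception v3 with the original 1000-way classifier removed, topped by a new three-way softmax head.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# ImageNet-pretrained Inception v3; include_top=False drops the final
# 1000-way classification layer, and "avg" pooling yields a feature vector.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3), pooling="avg")

# New 3-way softmax head: open angle / narrow angle / angle closure.
model = models.Sequential([
    base,
    layers.Dense(3, activation="softmax",
                 kernel_regularizer=tf.keras.regularizers.l2(0.01)),
])
```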
Training for Angle Classification
In the angle classification model, the algorithm was trained to classify the images into three categories: open angle, narrow angle, and angle closure. All layers of the network were fine-tuned with the same initial learning rate of 0.01. The classification model was trained with the softmax cross-entropy loss for 50 epochs, using the Adam optimizer and a batch size of 32. To prevent the model from over-fitting, an L2 loss was used as a regularizer, with the regularization coefficient set to 0.01.
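Continuing the Keras sketch above, the reported hyperparameters (Adam, initial learning rate 0.01, softmax cross entropy, 50 epochs, batch size 32) might be wired up as follows; x_train and y_train are placeholders for the resized 299 × 299 images and their one-hot labels.

```python
# Adam optimizer, initial learning rate 0.01, softmax cross-entropy loss.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# 50 epochs with a batch size of 32; all layers are trainable (fine-tuned).
model.fit(x_train, y_train, epochs=50, batch_size=32)
```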
Statistics
Using the manual classifications as the reference standard, we used receiver operating characteristic (ROC) curves, with calculation of the area under the curve (AUC), as an index of the performance of our automated algorithm. Sensitivity and specificity were also used to evaluate the angle classification performance of the proposed model. For each category (open angle, narrow angle, or angle closure), we defined "true positive" (TP) as the number of cases correctly identified as that category, "true negative" (TN) as the number of cases correctly identified as another category, "false positive" (FP) as the number of cases incorrectly identified as that category, and "false negative" (FN) as the number of cases incorrectly identified as another category. The sensitivity and specificity can be expressed as follows:
\begin{equation}\tag{1}\mathrm{sensitivity} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}.\end{equation}

\begin{equation}\tag{2}\mathrm{specificity} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}}.\end{equation}
 
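For concreteness, a short sketch computing these one-vs-rest quantities for all three classes from a 3 × 3 confusion matrix (the function name is illustrative):

```python
import numpy as np

def per_class_sensitivity_specificity(cm):
    """cm[i, j] = number of cases with true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    sens, spec = [], []
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp      # class-k cases missed
        fp = cm[:, k].sum() - tp      # other classes called class k
        tn = total - tp - fn - fp     # everything else
        sens.append(tp / (tp + fn))   # Equation 1
        spec.append(tn / (tn + fp))   # Equation 2
    return sens, spec
```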
Results
A total of 540 UBM images from 540 eyes (290 subjects), collected from January 2017 to December 2017, were included in the final study. The dataset contained 180 images of each angle type: open angle, narrow angle, and angle closure. During manual grading, the intraobserver and interobserver repeatability were all above 0.97 (Table 1).
Table 1
Intraobserver and Interobserver Repeatability of Manual Grading
During the study, 379 images were assigned to the training set and 161 to the test set (Table 2). Using the manual classification as the standard, the model achieved a recall of 97% and a classification accuracy of 97.2% in the test set. The overall AUC was 0.988, demonstrating the good performance of the proposed model (Fig. 2).
Table 2
Automated ACA Classification Compared With Manual Classification
Figure 2
ROC curve of the classification model. The overall AUC was 0.988, which demonstrates good performance of the proposed model.
Our automated ACA classification model reached a sensitivity and specificity of 98.04% and 99.09% for open angle, 96.30% and 98.13% for narrow angle, and 98.21% and 99.05% for angle closure (Table 2). The normalized confusion matrix of the angle classification model (Fig. 3) shows that no closed-angle case was classified as open angle, or vice versa. The saliency map shows that the region the proposed model attended to most was centered on the ACA.
Figure 3
Normalized confusion matrix of the classification model. Element (x, y) of the confusion matrix represents the empirical probability of predicting class y given that the ground truth (manual classification) is class x.
Discussion
UBM, which was first developed by Pavlin et al.,10 combines high-frequency ultrasound with computer image processing and can acquire high-resolution images of the anterior segment that provide valuable information about the cornea, anterior chamber, chamber angle, iris, ciliary body, zonules, and lens. Since its introduction, it has become an important method that greatly assists clinicians in the diagnosis and management of angle closure and other subtypes of glaucoma.14 Currently, UBM is the most widely used method for studying the ACA regardless of optical media transparency.15
In the manual part of the analysis, we defined a TIA of less than 15 degrees as a narrow angle and a TIA of 15 degrees or above as an open angle. Although there are no well-accepted standards for classifying open and narrow angles on UBM, in a population-based UBM study from Japan by Henzan et al.,12 the average TIA was 10.3 ± 3.9 degrees in patients who had, or were suspected to have, PACG based on gonioscopic findings, whereas it was 24.2 ± 9.3 degrees in the healthy control group. Hence, in the present study we used 15 degrees, which is about one standard deviation (SD) above the average of the suspected PACG or PACG cases (10.3 + 3.9 = 14.2 degrees) and one SD below the average of the normal subjects (24.2 − 9.3 = 14.9 degrees), as the boundary between narrow and open angles.
Automated ACA assessment has been studied before. Automated software for goniophotographic angle assessment of RetCam images was proposed with encouraging results8; however, the authors found that pigmentation of the trabecular meshwork and a convex iris might lead to erroneous classification as a closed angle. Using anterior segment optical coherence tomography (OCT) images, Console et al.16 investigated a semiautomatic algorithm, the Zhongshan Angle Assessment Program, to measure various anterior segment parameters, but that approach still required the clinician to first input the location of the scleral spur. Using high-definition OCT, Tian et al.5 reported automatic detection of Schwalbe's line and an ACA assessment. However, critics pointed out that this system could not process images acquired by other anterior chamber OCT systems.7 Automated analysis based on UBM images has also been tried and found to be a useful approach, but in angle closure cases, which are quite common in PACG, the contact between the peripheral iris and the corneoscleral surface was falsely identified as the apex.17 Our results show that the algorithm developed here was able to automatically classify UBM images into three categories, namely, open angle, narrow angle, and angle closure, with high sensitivity and specificity (all above 96%), a marked improvement over a previous report.8 Also, no closed-angle case was classified as open angle, or vice versa.
Saliency map visualizations were presented to identify the areas of greatest importance to the model in ACA classification. The greatest benefit of a saliency map is that it gives insight into the decisions of neural networks, which are widely regarded as "black boxes." Gradient-weighted class activation mapping (Grad-CAM)18 was used as the neural network visualization approach; it can generate visual explanations from a CNN-based network without requiring architectural changes or retraining. The UBM image was fed into the trained Inception v3 network, and the feature maps of the final convolutional layer were output. The saliency map highlighting the regions of the image important for ACA classification was obtained by taking the weighted sum of all the feature maps, using their associated weights. As illustrated in Figure 4, the region of greatest concern to the proposed model is the center of the ACA, which is exactly the area ophthalmologists use to make a diagnosis.
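A compact sketch of that weighted-sum computation (Grad-CAM) for an eager TensorFlow 2/Keras model exposing its convolutional layers (for the stock Keras InceptionV3 built functionally, the final convolutional block is named "mixed10"); the helper is illustrative, not the authors' code.

```python
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name="mixed10", class_index=None):
    """Heat map = ReLU(sum of final conv feature maps weighted by pooled grads)."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])   # add batch axis
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))       # predicted class
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)               # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))[0]      # pooled grads per channel
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalize to [0, 1]
```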
Figure 4
Feature visualization for ACA images. (a) The input UBM image. (b) Saliency map of deep learning features.
These results suggest that, using UBM images and the algorithm developed in this study, narrow- and closed-angle cases, which are important in the diagnosis and management of angle closure diseases, can be detected automatically with high accuracy. Such cases could then be referred to experienced specialists for further examination and, if necessary, proper prophylactic treatment. Additionally, because the screening process could operate in a remote medical treatment center or on a personal computer, it could be helpful to subjects in rural areas, which lack experienced ophthalmologists but have a high incidence of PACG compared with urban environments.1
The results of this study show that an automated classification model can achieve high accuracy in classifying the ACA from UBM images and could be of value in future clinical practice.
Acknowledgments
This work was supported by the National Key R&D Program of China (2017YFC0108200), the National Scientific Instrument and Equipment Development Project (2016YFF0102000), the Frontier Science Research Project of the Chinese Academy of Sciences (QYZDB-SSW-JSC03), the Jiangsu Province Science Fund for Distinguished Young Scholars (BK20060010), the Jiangsu Province Key Research and Development Program (BE2018667), and the National Natural Science Foundation of China (61675226 and 61378090).
Disclosure: G. Shi, None; Z. Jiang, None; G. Deng, None; G. Liu, None; Y. Zong, None; C. Jiang, None; Q. Chen, None; Y. Lu, None; X. Sun, None 
References
1. Foster PJ, Johnson GJ. Glaucoma in China: how big is the problem? Br J Ophthalmol. 2001;85:1277–1282.
2. He M, Friedman DS, Ge J, et al. Laser peripheral iridotomy in primary angle closure suspects: biometric and gonioscopic outcomes: the Liwan Eye Study. Ophthalmology. 2007;114:494–500.
3. Mansoori T, Balakrishna N. Anterior segment morphology in primary angle closure glaucoma using ultrasound biomicroscopy. J Curr Glaucoma Pract. 2017;11:86–91.
4. Narayanaswamy A, Vijaya L, Shantha B, et al. Anterior chamber angle assessment using gonioscopy and ultrasound biomicroscopy. Jpn J Ophthalmol. 2004;48:44–49.
5. Tian J, Marziliano P, Baskaran M, et al. Automatic anterior chamber angle assessment for HD-OCT images. IEEE Trans Biomed Eng. 2011;58:3242–3249.
6. Xu Y, Liu J, Cheng J, et al. Automated anterior chamber angle localization and glaucoma type classification in OCT images. Conf Proc IEEE Eng Med Biol Soc. 2013;2013:7380–7383.
7. Fu H, Xu Y, Wong DW, et al. Automatic anterior chamber angle structure segmentation in AS-OCT image based on label transfer. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:1288–1291.
8. Baskaran M, Cheng J, Perera SA, et al. Automated analysis of angle closure from anterior chamber angle images. Invest Ophthalmol Vis Sci. 2014;55:7669–7673.
9. Ishikawa H, Schuman JS. Anterior segment imaging: ultrasound biomicroscopy. Ophthalmol Clin North Am. 2004;17:7–20.
10. Pavlin CJ, Harasiewicz K, Foster FS. Ultrasound biomicroscopy of anterior segment structures in normal and glaucomatous eyes. Am J Ophthalmol. 1992;113:381–389.
11. Marchini G, Pagliarusco A, Toscano A, et al. Ultrasound biomicroscopic and conventional ultrasonographic study of ocular dimensions in primary angle-closure glaucoma. Ophthalmology. 1998;105:2091–2098.
12. Henzan IM, Tomidokoro A, Uejo C, et al. Comparison of ultrasound biomicroscopic configurations among primary angle closure, its suspects, and nonoccludable angles: the Kumejima Study. Am J Ophthalmol. 2011;151:1065–1073.
13. Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception architecture for computer vision. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2016;2016:2818–2826.
14. Mannino G, Abdolrahimzadeh B, Calafiore S, et al. A review of the role of ultrasound biomicroscopy in glaucoma associated with rare diseases of the anterior segment. Clin Ophthalmol. 2016;10:1453–1459.
15. Maslin JS, Barkana Y, Dorairaj SK. Anterior segment imaging in glaucoma: an updated review. Indian J Ophthalmol. 2015;63:630–640.
16. Console JW, Sakata LM, Aung T, et al. Quantitative analysis of anterior segment optical coherence tomography images: the Zhongshan Angle Assessment Program. Br J Ophthalmol. 2008;92:1612–1616.
17. Leung CK, Yung WH, Yiu CK, et al. Novel approach for anterior chamber angle analysis: anterior chamber angle detection with edge measurement and identification algorithm (ACADEMIA). Arch Ophthalmol. 2006;124:1395–1401.
18. Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. Proc IEEE Int Conf Comput Vis. 2017;2017:618–626.
Supplement 1