September 2021 | Volume 10, Issue 11 | Open Access | Articles
A Deep Learning System for Automatic Assessment of Anterior Chamber Angle in Ultrasound Biomicroscopy Images
Author Affiliations & Notes
  • Wensai Wang
    Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
  • Lingxiao Wang
    Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
  • Xiaochun Wang
    Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
  • Sheng Zhou
    Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
  • Song Lin
    Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
  • Jun Yang
    Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
  • Correspondence: Jun Yang, Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, 236 Baidi Rd, Tianjin 300192, People's Republic of China. e-mail: yangj3210@hotmail.com 
  • Song Lin, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, 251 Fukang Rd, Tianjin 300384, People's Republic of China. e-mail: linsong123123@sina.com 
Translational Vision Science & Technology September 2021, Vol.10, 21. doi:https://doi.org/10.1167/tvst.10.11.21
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose: To develop and assess a deep learning system that automatically detects angle closure and quantitatively measures angle parameters in ultrasound biomicroscopy (UBM) images.

Methods: A total of 3788 UBM images (2146 open angle and 1642 angle closure) from 1483 patients were collected. We developed a convolutional neural network (CNN) based on the InceptionV3 network for automatic classification of angle closure and open angle. For nonclosed images, we developed a CNN based on the EfficientNetB3 network for the automatic localization of the scleral spur and the angle recess; then, the Unet network was used to segment the anterior chamber angle (ACA) tissue automatically. Based on the results of the latter two processes, we developed an algorithm to automatically measure the trabecular-iris angle (TIA500 and TIA750), angle-opening distance (AOD500 and AOD750), and angle recess area (ARA500 and ARA750) for quantitative evaluation of angle width.

Results: Using manual labeling as the reference standard, the ACA classification network's accuracy reached 98.18%, and the sensitivity and specificity for angle closure reached 98.74% and 97.44%, respectively. The deep learning system realized the automatic measurement of the angle parameters, and the mean of differences was generally small between automatic measurement and manual measurement. The coefficients of variation of TIA500, TIA750, AOD500, AOD750, ARA500, and ARA750 measured by the deep learning system were 5.77%, 4.67%, 10.76%, 7.71%, 16.77%, and 12.70%, respectively. The within-subject standard deviations of TIA500, TIA750, AOD500, AOD750, ARA500, and ARA750 were 5.77 degrees, 4.56 degrees, 155.92 µm, 147.51 µm, 0.10 mm2, and 0.12 mm2, respectively. The intraclass correlation coefficients of all the angle parameters were greater than 0.935.

Conclusions: The deep learning system can effectively and accurately evaluate the ACA automatically based on fully automated analysis of a UBM image.

Translational Relevance: The present work suggests that the deep learning system described here can automatically detect angle closure and quantitatively measure angle parameters in UBM images, enhancing the intelligent diagnosis and management of primary angle-closure glaucoma.

Introduction
Primary angle-closure glaucoma (PACG) is the most common cause of irreversible blindness in Asians.1,2 It is estimated that there will be about 34 million people with PACG in the world by 2040, an increase of 58.4% compared with 2013.3 This blinding disease with increasing incidence warrants the attention of the global ophthalmic research community. Angle closure can cause the aqueous humor outflow resistance to increase and lead to elevated intraocular pressure, which is a significant risk factor for optic nerve damage.4–6 The early detection of angle closure is an efficient measure to prevent permanent loss of vision, and determining the anterior chamber angle (ACA) is essential for detecting the angle closure and assessing the risk of closure. 
The diagnosis of PACG depends on the morphology of the ACA. Gonioscopy is the gold standard for evaluating ACA and detecting angle closure.7–10 However, gonioscopy is subjective and qualitative and depends on the examiner's clinical experience.8,11 Ultrasound biomicroscopy (UBM) allows observation of the peripheral ACA, iris, and ciliary body, so it can be used for the real-time, quantitative, and objective evaluation of ACA.12,13 Trabecular-iris angle (TIA), angle-opening distance (AOD), angle recess area (ARA), and other angle parameters measurable in UBM images have been proposed for quantitative assessment of ACA.12,14,15 The main limitations of UBM imaging are the need for specialized equipment and highly trained personnel, the immersion bath required for imaging, and the associated patient discomfort. Additionally, manual evaluation of UBM images requires experienced ophthalmologists and is time-intensive. Given the volume of medical images produced in clinical practice, it is challenging to mark specific anatomic structures and measure the angle parameters in each UBM image both accurately and efficiently. Therefore, deep learning–based automatic assessment of ACA in UBM images may be an effective alternative tool to help screen patients with narrow angle or angle closure and then refer them to experienced specialists for further examination. 
In recent years, several studies have reported the automatic detection of angle closure with high accuracy.16–20 However, studies of automated measurement of angle parameters such as TIA, AOD, and ARA in UBM images are quite scarce. The automatic localization of the scleral spur and the automatic segmentation of ACA tissue are the foundation of the automatic quantitative measurement of angle parameters. In the current study, we developed an artificial intelligence system composed of multilevel convolutional neural networks (CNNs) for automatic measurement of ACA dimensions. First, we developed a CNN for automatic detection of angle closure in UBM images; for nonclosed images, we developed a CNN based on the EfficientNetB3 network for the automatic localization of the scleral spur and the angle recess; then, the Unet network was used to segment the ACA tissue automatically. Based on the results of the latter two processes, we developed an automatic measurement algorithm for angle parameters. This study aims to achieve automatic detection of angle closure and quantitative measurement of angle parameters to enhance the diagnosis and management of PACG. 
Methods
UBM Data Set
UBM images were collected from patients who underwent UBM examinations at the Tianjin Medical University Eye Hospital from May 2014 to February 2021. The UBM equipment was an MD-300L produced by MEDA Co. Ltd. (Tianjin, China), and the ultrasonic probe frequency used was 50 MHz, with a scan depth of 5.5 mm and width of 8.25 mm. It requires patients to be in a reclined position so that a water bath can be placed on the ocular surface for immersion of the probe. Images were excluded due to ACA structural abnormalities caused by iridodialysis, motion artifacts, or incompleteness. A total of 3788 UBM images from 1483 patients were selected from the database consecutively, and each image contained only one ACA. All UBM images were de-identified to remove personal privacy information before being obtained by the researchers. This study was conducted following the World Medical Association Declaration of Helsinki principles and was approved by the Ethics Committee of Tianjin Medical University Eye Hospital (2019KY-24). Because this was a retrospective study using de-identified UBM images, the requirement for informed consent was waived. 
The labeling process of the UBM image was divided into two steps: (1) Ophthalmologists classified each UBM image into angle closure or open angle. If the trabecular meshwork touched the iris, it was defined as angle closure. Labeling an image as angle closure did not require identification of the scleral spur since the boundary between the cornea-scleral tissue and the iris was blurred in closed angles. Figure 1 shows representative images of open angle and angle closure. (2) For open-angle images, ophthalmologists used LabelMe (Massachusetts Institute of Technology, Cambridge, MA, USA) to mark the scleral spur coordinates, angle recess coordinates, and ACA tissue segmentation. Figure 2 shows the labeling process. 
Figure 1.
 
Open-angle and angle-closure images captured by UBM. (A) Open angle. (B) Angle closure. If the trabecular meshwork touched the iris, it was defined as angle closure.
Figure 2.
 
The labeling process of the UBM image. The UBM data set included 3788 images of 1483 patients. Each image was independently labeled by two experienced ophthalmologists (each with more than 8 years of clinical experience), and a third ophthalmologist (with more than 15 years of clinical experience) made the quality check on all marked data.
The training of deep learning systems requires robust reference standards. Two ophthalmologists (each with more than 8 years of clinical experience) classified all images as angle closure or open angle. If their results were the same, this result was accepted as the final result. Otherwise, a senior ophthalmologist with more than 15 years of clinical experience made the final decision. For open-angle images, two ophthalmologists independently marked the scleral spur and the angle recess. The average value of the marked coordinates was used as the reference standard, and the senior ophthalmologist checked and corrected it. Likewise, the two ophthalmologists marked the ACA tissue for the open-angle image, and the senior ophthalmologist checked and corrected it. 
Deep Learning Model Development
Classification of Open-Angle and Angle-Closure Images
As shown in Figure 3, the automatic classification model for angle-closure and open-angle UBM images was developed using the InceptionV3 (Google Inc, Mountain View, CA, USA) classifier with the last layer modified to a single output. Because the revised InceptionV3 model contains a large number of layer parameters, we used transfer learning and image augmentation to optimize the learning process. For transfer learning, we initialized the revised InceptionV3 model with parameters pretrained on ImageNet,21 and then we fine-tuned the model weights on our UBM data set. For image augmentation, we used random shifts of up to 0.2 of the image size, random rotations of up to 20 degrees, random zooms of up to 0.2, and random horizontal flips. 
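The shift-and-flip portion of this augmentation can be sketched in plain NumPy (a minimal illustration, not the actual training pipeline; rotation and zoom are omitted for brevity, and the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img, shift=0.2):
    """Randomly flip and shift an image, zero-padding the exposed
    border. Shifts are drawn uniformly up to `shift` of each
    dimension, matching the 0.2 scale described in the text."""
    h, w = img.shape[:2]
    # Random horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # Random shift of up to `shift` * size in each direction.
    dy = int(rng.integers(-int(shift * h), int(shift * h) + 1))
    dx = int(rng.integers(-int(shift * w), int(shift * w) + 1))
    out = np.zeros_like(img)
    dst_r = slice(max(dy, 0), h + min(dy, 0))
    dst_c = slice(max(dx, 0), w + min(dx, 0))
    src_r = slice(max(-dy, 0), h + min(-dy, 0))
    src_c = slice(max(-dx, 0), w + min(-dx, 0))
    out[dst_r, dst_c] = img[src_r, src_c]
    return out
```

In practice, a deep learning framework's built-in augmentation utilities would perform these transforms (plus rotation and zoom) on the fly during training.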
Figure 3.
 
Deep learning models development. UBM images accompanied by classification labels, localization labels, and segmentation labels were used to train the deep learning models.
Localization of the Scleral Spur and the Angle Recess
For open-angle images, the quantitative assessment of ACA is helpful for assessing closure risk. In UBM images, the scleral spur is a critical anatomic structure for quantitative assessment of the ACA. The localization of the scleral spur is also a challenge in the development of the automatic ACA analysis system. In this study, we designed a CNN, EfficientNetB3-Unet, based on the Unet network for the automatic localization of the scleral spur. The network consisted of an encoding module and a decoding module. To satisfy the need for semantic information extraction for scleral spur localization, we used EfficientNetB3 as the main structure to construct the encoding module, removed the final pooling layer and fully connected layer, and used a 3 × 3 convolutional layer for further semantic information extraction. As shown in Figure 3, we generated the corresponding two-dimensional Gaussian heatmap H(u, v) from the scleral spur coordinates (u0, v0) marked by the ophthalmologist. The formula for generating the Gaussian heatmap H(u, v) is as follows:  
\begin{equation}
H(u, v) = \exp\left\{-\frac{(u - u_0)^2 + (v - v_0)^2}{\delta^2}\right\}, \tag{1}
\end{equation}
where δ is a hyperparameter that controls the heatmap radius. We used UBM images and the corresponding Gaussian heatmap to train the EfficientNetB3-Unet network to obtain the scleral spur localization model. The output of the localization model was a heatmap containing the position information of the scleral spur. We used the maximum likelihood algorithm to extract the scleral spur coordinates from the heatmap. 
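The Gaussian heatmap target and the coordinate extraction can be sketched as follows (a minimal NumPy illustration; in the real system the network predicts the heatmap, which is then decoded, and the simple argmax here stands in for the maximum likelihood extraction described above):

```python
import numpy as np

def gaussian_heatmap(shape, center, delta=10.0):
    """Build the 2-D Gaussian target heatmap of Equation 1 for a
    landmark at `center` = (u0, v0); `delta` controls the radius."""
    h, w = shape
    v, u = np.mgrid[0:h, 0:w]          # v indexes rows, u indexes columns
    u0, v0 = center
    return np.exp(-((u - u0) ** 2 + (v - v0) ** 2) / delta ** 2)

def extract_coords(heatmap):
    """Recover the landmark as the location of the heatmap maximum."""
    v, u = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(u), int(v)

# Round trip: a heatmap built for (u0, v0) = (20, 30) decodes back
# to the same coordinates.
hm = gaussian_heatmap((64, 64), center=(20, 30))
```

The same heatmap construction and decoding apply unchanged to the angle recess landmark.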
Although some researchers use the scleral spur as the apex of the angle when using UBM images to measure TIA, most ophthalmologists and researchers still use the angle recess as the apex of the angle.14,22–25 We used the same methodology for localizing the angle recess as for localizing the scleral spur. 
ACA Boundary Identification
ACA tissue segmentation aims to obtain the medial border of the angle used for the quantitative measurement of angle parameters. Because the boundaries between the cornea, sclera, iris, and ciliary body are fuzzy, the cornea, sclera, iris, and ciliary body are often marked as one category in the UBM image when human resources are limited. This reduces the difficulty of labeling and automatic segmentation of the ACA tissue without affecting the measurement of angle parameters. Because the Unet network can achieve better segmentation performance on small data sets, it has many successful applications in medical image segmentation.26 Therefore, this study used the Unet network to automatically segment the ACA tissue, as shown in Figure 3. After ACA tissue segmentation, the image-processing method was used to extract the angle boundary automatically. 
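The post-segmentation boundary extraction step might look like the following sketch (our illustrative simplification: it scans each column of the binary tissue mask; the paper does not specify its image-processing method in this detail):

```python
import numpy as np

def angle_boundary(mask):
    """Illustrative boundary extraction from a binary tissue mask
    (1 = cornea/sclera/iris/ciliary body, 0 = background or angle):
    for each column, return the row index of the first tissue pixel
    from the top, or -1 for columns containing no tissue. A real
    pipeline would trace the full medial contour of the angle."""
    has_tissue = mask.any(axis=0)
    first = np.argmax(mask, axis=0)    # first row where mask == 1
    return np.where(has_tissue, first, -1)
```

Because the cornea, sclera, iris, and ciliary body are merged into one class, a single foreground/background transition per column is enough to recover the medial border used for the angle measurements.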
Quantification of Angle Parameters
The main parameters used to quantitatively evaluate the ACA in UBM images are TIA, AOD, and ARA. After obtaining the scleral spur and angle recess coordinates and the boundary of the ACA tissue in the UBM image through manual annotation or deep learning algorithms, we wrote a Python program to calculate angle parameters. Since the trabecular meshwork is located about 500 µm anterior to the scleral spur, it is essential to determine the ACA morphology near the trabecular meshwork (500 µm) and slightly anterior (750 µm) for the evaluation of the ACA.27–29 In UBM image analysis, the measurement of the angle parameters based on points 500 µm and 750 µm anterior to the scleral spur has been used extensively in clinical practice. The Python program calculates angle parameters based on circles with both radii. It determines the intersection point of the circle and the cornea inner surface, draws a line through the intersection that is perpendicular to the cornea inner surface, determines an intersection point of the perpendicular and anterior surface of the iris, and then automatically calculates the angle parameters according to their definitions. 
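Once the corneal point C, iris point I, and angle recess R are determined as described above, the parameter definitions reduce to simple geometry. The following sketch shows TIA and AOD (coordinates and helper names are illustrative; ARA, the area of the recess region, would additionally require integrating over the tissue boundary and is omitted):

```python
import numpy as np

def aod(c, i):
    """Angle-opening distance: Euclidean distance from the corneal
    point C to the iris point I (same units as the coordinates)."""
    d = np.asarray(c, dtype=float) - np.asarray(i, dtype=float)
    return float(np.hypot(d[0], d[1]))

def tia(r, c, i):
    """Trabecular-iris angle: the angle ∠CRI at the recess R, in
    degrees, computed from the vectors R->C and R->I."""
    rc = np.asarray(c, dtype=float) - np.asarray(r, dtype=float)
    ri = np.asarray(i, dtype=float) - np.asarray(r, dtype=float)
    cos = np.dot(rc, ri) / (np.linalg.norm(rc) * np.linalg.norm(ri))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical coordinates (µm): recess at the origin, C on the
# 500-µm circle along the cornea, I on the anterior iris surface.
R, C, I = (0.0, 0.0), (500.0, 0.0), (400.0, 300.0)
```

With these example points, `tia(R, C, I)` gives the opening angle at the recess and `aod(C, I)` the perpendicular chord length, per the definitions in the text.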
Statistical Analysis
Using manual labeling as the reference standard, the performance of the classification model was assessed by accuracy, sensitivity, and specificity; the performance of the localization model was assessed by calculating the Euclidean distance between the model-predicted coordinates and the labeled coordinates; the performance of the segmentation model was assessed by pixel accuracy (PA; indicates the proportion of correct segmentation pixels to the total number of pixels) and mean intersection over union (mIOU; indicates the intersection of predicted ACA tissue and manual annotation ACA tissue divided by their union). 
To assess the consistency of the measurements between the ophthalmologists and the deep learning system, we calculated the interobserver reproducibility (within-subject standard deviation), coefficient of variation (CV; within-subject standard deviation divided by the overall mean), intraclass correlation coefficient (ICC), and limits of agreement based on the angle parameters measured by the ophthalmologists and automatically measured by the deep learning system. In the assessment, P values less than or equal to 0.05 were considered significant. 
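The agreement statistics can be computed as in this sketch (the within-subject SD for two observations per subject follows the standard form sqrt(sum(d^2) / 2n); the ICC computation is omitted, and the function name is ours):

```python
import numpy as np

def agreement_stats(manual, auto):
    """Agreement between paired manual and automated measurements
    (one pair per eye): within-subject standard deviation,
    coefficient of variation, and Bland-Altman limits of agreement."""
    manual = np.asarray(manual, dtype=float)
    auto = np.asarray(auto, dtype=float)
    d = manual - auto
    n = d.size
    # Within-subject SD for two observations per subject.
    sw = np.sqrt(np.sum(d ** 2) / (2 * n))
    # CV: within-subject SD divided by the overall mean of all values.
    cv = sw / np.mean(np.concatenate([manual, auto]))
    # Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD.
    sd_d = d.std(ddof=1)
    loa = (d.mean() - 1.96 * sd_d, d.mean() + 1.96 * sd_d)
    return sw, cv, loa
```

Applied per parameter (TIA500, AOD500, etc.), these quantities correspond to the reproducibility, CV, and limits of agreement reported in the Results.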
Results
Deep Learning Model's Performance
In total, 185 images were excluded due to ACA structural abnormalities caused by iridodialysis (65 images), motion artifacts (18 images), or incompleteness (102 images). The final data set contained 3788 UBM images with 2146 open-angle and 1642 angle-closure images from 1483 patients. The training set, validation set, and testing set were split randomly at the patient level so that images from a single patient were included only in the testing set or only in the training/validation sets (training set/validation set/testing set = 6:2:2). This operation is essential to prevent data leakage. During the ACA classification task, 2267 images (1285 open-angle and 982 angle-closure images) were assigned to the training set, 760 images (434 open-angle and 326 angle-closure images) were assigned to the validation set, and 761 images (427 open-angle and 334 angle-closure images) were assigned to the testing set. Using the manual classification as the reference standard, we found that the classification accuracy reached 98.18%, and the sensitivity and specificity for angle closure reached 98.74% and 97.44%, respectively. 
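A patient-level 6:2:2 split can be sketched as follows (an illustrative helper, not the authors' code; it shuffles patients, not images, so no patient's images straddle two sets):

```python
import random

def patient_level_split(image_ids_by_patient, seed=0):
    """Split 6:2:2 at the patient level so that all images from one
    patient land in exactly one of train/val/test, preventing the
    data leakage a per-image split would allow."""
    patients = sorted(image_ids_by_patient)
    rng = random.Random(seed)
    rng.shuffle(patients)
    n = len(patients)
    cut1, cut2 = int(0.6 * n), int(0.8 * n)
    groups = {"train": patients[:cut1],
              "val": patients[cut1:cut2],
              "test": patients[cut2:]}
    return {name: [img for p in group for img in image_ids_by_patient[p]]
            for name, group in groups.items()}
```

Because the number of images per patient varies, the resulting image counts only approximate the 6:2:2 ratio, as in the counts reported above.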
During the scleral spur and angle recess localization task and ACA tissue segmentation task, 1285 open-angle images were assigned to the training set, 434 open-angle images were assigned to the validation set, and 427 open-angle images were assigned to the testing set. Using coordinates marked by the ophthalmologists as the reference standard, we found that the mean Euclidean distance of the scleral spur localization model was 65.19 ± 51.47 µm. The Euclidean distance distribution of the model was 5.62% within 10 µm, 50.82% within 50 µm, 80.80% within 100 µm, and 92.74% within 150 µm. Figure 4 shows representative images of various Euclidean distances between the scleral spur locations marked by ophthalmologists and predicted by the deep learning model. Similarly, we found that the mean Euclidean distance of the angle recess localization model was 43.32 ± 41.23 µm. The Euclidean distance distribution of the model was 9.13% within 10 µm, 74.00% within 50 µm, 94.38% within 100 µm, and 97.19% within 150 µm. There were no statistically significant differences in the localization error distributions of scleral spur and angle recess at different angle widths (Mann–Whitney U test, P > 0.05), and no association was found between angle width and localization error. Using the manual segmentation as the standard, we found that the PA and mIOU of the deep learning segmentation model reached 98.94% and 97.11%, respectively. 
Figure 4.
 
Representative images of various Euclidean distances (10, 50, 100, and 150 µm) between the scleral spur locations marked by ophthalmologists (green cross) and predicted by the deep learning model (red cross).
Quantitative Measurement of Angle Parameters
After obtaining the scleral spur and angle recess coordinates and the boundary of the ACA in the UBM image through manual annotation or deep learning algorithms, we used the automatic measurement algorithm to calculate angle parameters. Figure 5 and Figure 6 present the measurement results of the angle parameters at 500 µm anterior to the scleral spur with different angle widths. We printed the measurement results of TIA, AOD, and ARA on the UBM image. 
Figure 5.
 
The measurement results of the manual annotation and deep learning system. ∠CRI (represents the angle composed of point C, point R and point I) is TIA500, the length of CI (represents the distance from point C to point I) is AOD500, and the area of the purple area is ARA500. (A) The measurement results of the manual annotation. The red contour is the manual segmentation result. Points S and R represent the manually marked scleral spur and angle recess, respectively. The yellow circle is centered at point S with a radius of 500 µm. Point C represents the intersection point of the circle and the cornea inner surface, and point I is the intersection of a straight line perpendicular to the cornea inner surface and passing through point C and the anterior surface of the iris. (B) The measurement results of the deep learning system. The red contour is the deep learning segmentation result. Points S and R represent the scleral spur and angle recess predicted by the deep learning model.
Figure 6.
 
The measurement results of the manual annotation and deep learning system. ∠CRI is TIA500, the length of CI is AOD500, and the area of the purple area is ARA500. (A) The measurement results of the manual annotation. The red contour is the manual segmentation result. Points S and R represent the manually marked scleral spur and angle recess, respectively. The yellow circle is centered at point S with a radius of 500 µm. Point C represents the intersection point of the circle and the cornea inner surface, and point I is the intersection of a straight line perpendicular to the cornea inner surface and passing through point C and the anterior surface of the iris. (B) The measurement results of the deep learning system. The red contour is the deep learning segmentation result. Points S and R represent the scleral spur and angle recess predicted by the deep learning model.
The mean differences between the measurement results of the manual annotation and the deep learning system were generally small. The CVs of TIA500, TIA750, AOD500, AOD750, ARA500, and ARA750 measured by the deep learning system were 5.77%, 4.67%, 10.76%, 7.71%, 16.77%, and 12.70%, respectively. The reproducibility of TIA500, TIA750, AOD500, AOD750, ARA500, and ARA750 was 5.77 degrees, 4.56 degrees, 155.92 µm, 147.51 µm, 0.10 mm2, and 0.12 mm2, respectively. ICC values of all the angle parameters were greater than 0.935, indicating that measurement results of the deep learning system and the manual annotation had good consistency, and the consistency of TIA was better than that of AOD and ARA (Table 1). Figure 7 shows the Bland–Altman plots of TIA500 and TIA750. Most TIA measurements fell within ±1.96 SD and were clinically acceptable in both cases. 
Table 1.
 
Consistency Between the Manual and Automated Angle Parameters Measurement
Figure 7.
 
Bland–Altman plots of TIA500 and TIA750. (A) Difference in TIA500 between ophthalmologists and deep learning system against the average of the two. (B) Difference in TIA750 between ophthalmologists and deep learning system against the average of the two.
Impact of Scleral Spur Location on Angle Parameters
The accurate measurement of angle parameters heavily depends on the accurate localization of the scleral spur. The linear regression analysis between the scleral spur localization error and the absolute error in angle parameters showed a positive linear association. The error in the automatic localization of the scleral spur largely explained the variance in the automatic measurement of the angle parameters (Table 2). 
Table 2.
 
Linear Regression Between Change in Spur Placement and Change in Angle Parameters
Discussion
In this study, we developed and assessed a deep learning system composed of multilevel CNNs for automatic assessment of ACA. The results suggested that the artificial intelligence system can automatically classify UBM images into angle closure (iridotrabecular contact) and open angle with high accuracy (ACC = 98.18%). This deep learning system's automatic measurement of angle parameters such as TIA, AOD, and ARA is in good agreement with the manual measurement of open-angle images. We believe that this automatic ACA assessment system will facilitate the development of intelligent diagnostic systems for PACG and enhance the application of UBM imaging in clinical care and scientific research in PACG. 
Several studies have reported automated ACA assessment. UBM Pro 2000 (Paradigm Medical Industries, Salt Lake City, UT, USA) is a program for ACA quantization based on UBM images.30 However, this program requires the user to recognize the ACA and adjust the image contrast, which may increase the measurement differences between different observers. The Zhongshan Angle Assessment Program proposed a quantitative assessment method of ACA based on anterior segment optical coherence tomography (AS-OCT) images,31 but this method requires the operator to determine the location of the scleral spur. Lin et al.32 developed software for measuring angle parameters and iris parameters based on UBM images, but this method requires the operator to locate the scleral spur and other anatomic reference points. In these studies, users must manually identify specific anatomic structures as reference points for automatic ACA assessment. These semiautomated methods introduce user subjectivity. Manual identification of anatomic structures depends on the clinical experience of the user. Different operators may use different criteria to locate the reference points. Even under the same localization criteria, there will be some localization errors due to image resolution and contrast limitation. Image analysis based on computer vision and deep learning may be an effective solution for automatic quantitative assessment of ACA in these instances. We realized the automatic location of scleral spur and angle recess and the automatic segmentation of the ACA tissue based on a deep learning algorithm. The deep learning–based ACA automatic assessment system we proposed is fully automatic without any manual intervention. For the same input image, this deep learning system can always output the same angle parameters. 
Although the deep learning system eliminates interobserver and intraobserver errors in measuring a single UBM image, factors such as the experience of the UBM operator and the scanning position of the ultrasonic probe on the eyeball will still affect the reproducibility of the angle parameters measurement. Because the objective of this study was to design a deep learning system for automatic assessment of ACA in UBM images, clinical information about whether patients had prior surgery or laser treatment and whether they had secondary angle closure was not recorded. 
The ICC values between manual measurement and automatic measurement of TIA, AOD, and ARA were all greater than 0.935, and the ICC value of TIA was greater than 0.985. The CV values of AOD500, AOD750, and ARA750 were 10.76%, 7.71%, and 12.70%, respectively. The CV values of TIA500 and TIA750 were 5.77% and 4.67%, respectively, and the reproducibility of TIA500 and TIA750 was 5.77 degrees and 4.56 degrees, respectively. For comparison, the CV values of AOD500, AOD750, and ARA750 achieved by the Zhongshan Angle Assessment Program were 17.6%, 12.8%, and 14.9%, respectively.31 Lin et al.32 described automatic measurement of angle parameters using UBM images and reported that the ICC ranges of TIA, AOD, and ARA were 0.60 to 0.92, 0.52 to 0.89, and 0.64 to 0.92, respectively. Li et al.33 only realized the automatic prediction of TIA, with an ICC of 0.95, a CV of 6.8%, and a reproducibility of 6.1 degrees. Compared to the abovementioned systems, our deep learning system achieved better consistency with the manual measurement results. 
By analyzing the angle parameters, we found that the consistency of TIA measured by the deep learning system was better than that of AOD and ARA. Accurate measurement of angle parameters relies on precise localization of the scleral spur. Table 2 summarizes the relationship between the angle parameter measurement error and the scleral spur localization error. Compared with the measurement errors of TIA and AOD, the ARA measurement error had the strongest correlation with the scleral spur localization error (R2 = 0.446, P < 0.0001), which may be because the ARA measurement is greatly affected by the scleral spur location but insensitive to irregular iris anterior surfaces. The correlation between the TIA measurement error and the scleral spur localization error was the weakest because the TIA measurement is affected not only by the scleral spur position but also by the irregular iris anterior surface and the angle recess position. This relationship between angle parameter measurement error and scleral spur localization error largely explains why the consistency of TIA was better than that of AOD and ARA. The within-subject standard deviations of AOD500 and AOD750 were relatively large, possibly because of changes in the morphology of the iris within the angle region. 
Our proposed deep learning system can automatically detect angle closure and quantitatively measure angle width in UBM images. Studies have shown that some patients with PACG are unaware of their condition before disease onset, which can lead to severe visual damage.34 Therefore, UBM imaging combined with the deep learning system can help screen people at high risk for PACG and enable interventions that protect against vision loss. The deep learning system can also measure a patient's angle parameters dynamically and monitor how they change over time, and the automatically measured angle parameters before and after treatment can assist ophthalmologists in evaluating the treatment effect. Because people in underdeveloped areas have limited access to UBM experts with rich clinical experience, the system can also be combined with telemedicine to help decide who should be referred for further evaluation and treatment.
Our study also has some limitations. The first is the lack of an absolute ground truth in UBM image annotation. Because the labeling process is subjective, human errors may be introduced, and a deep learning model trained on the ophthalmologists' labels inevitably learns these errors as well. Some studies have pointed out that as the number of annotating experts increases, the objectivity of the annotations also increases and the errors become concentrated around zero.35 Therefore, adding labeling experts may be an effective way to mitigate the absence of an absolute ground truth. The second limitation is that all the UBM images in the data set came from a single UBM device. Different UBM devices may produce images of different sizes and resolutions, which would affect the assessment of the ACA, so the deep learning system proposed in this study cannot be directly applied to other UBM devices. A third limitation is that all UBM images in the data set came from Chinese patients, so our results may not apply to other ethnic groups. Although our data set contains many UBM images from real-world clinical settings, the generality of our findings should be treated with caution because they have not been validated on external data sets. Finally, a recent study has shown that poor-quality images often have a negative influence on image-based artificial intelligence systems.36 Future work should therefore add automatic recognition of poor-quality images to ensure that the performance of the deep learning system is not degraded by them.
In summary, our proposed automatic ACA assessment system based on a deep learning algorithm achieves reliable and repeatable angle-closure detection and automatic measurement of angle parameters. In the future, more studies are needed to evaluate the clinical performance of this system and to compare it with clinical assessments performed without artificial intelligence.
Acknowledgments
Supported by CAMS Initiative for Innovative Medicine (2017-12M-3-020) and Key Technologies R&D program of Tianjin (19YFZCSY00510). The funding organization had no role in the design or conduct of this research. 
Disclosure: W. Wang, None; L. Wang, None; X. Wang, None; S. Zhou, None; S. Lin, None; J. Yang, None 
References
Foster PJ, Johnson GJ. Glaucoma in China: how big is the problem? Br J Ophthalmol. 2001; 85: 1277–1282. [CrossRef] [PubMed]
Quigley HA, Broman AT. The number of people with glaucoma worldwide in 2010 and 2020. Br J Ophthalmol. 2006; 90: 262–267. [CrossRef] [PubMed]
Tham YC, Li X, Wong TY, Quigley HA, Aung T, Cheng CY. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014; 121: 2081–2090. [CrossRef] [PubMed]
Sun X, Dai Y, Chen Y, et al. Primary angle closure glaucoma: what we know and what we don't know. Prog Retin Eye Res. 2017; 57: 26–45. [CrossRef] [PubMed]
Nongpiur ME, Ku JY, Aung T. Angle closure glaucoma: a mechanistic review. Curr Opin Ophthalmol. 2011; 22: 96–101. [CrossRef] [PubMed]
Weinreb RN, Aung T, Medeiros FA. The pathophysiology and treatment of glaucoma: a review. JAMA. 2014; 311: 1901–1911. [CrossRef] [PubMed]
Riva I, Micheletti E, Oddone F, et al. Anterior chamber angle assessment techniques: a review. J Clin Med. 2020; 9: 3814. [CrossRef]
Barkana Y, Dorairaj SK, Gerber Y, Liebmann JM, Ritch R. Agreement between gonioscopy and ultrasound biomicroscopy in detecting iridotrabecular apposition. Arch Ophthalmol. 2007; 125: 1331–1335. [CrossRef] [PubMed]
Forbes M. Gonioscopy with corneal indentation: a method for distinguishing between appositional closure and synechial closure. Arch Ophthalmol. 1966; 76: 488–492. [CrossRef] [PubMed]
Shinoj VK, Xun JJH, Murukeshan VM, Baskaran M, Aung T. Progress in anterior chamber angle imaging for glaucoma risk prediction—A review on clinical equipment, practice and research. Med Eng Phys. 2016; 38: 1383–1391. [PubMed]
He M, Friedman DS, Ge J, et al. Laser peripheral iridotomy in primary angle-closure suspects: biometric and gonioscopic outcomes: the Liwan Eye Study. Ophthalmology. 2007; 114: 494–500. [CrossRef] [PubMed]
Pavlin CJ, Harasiewicz K, Sherar MD, Foster FS. Clinical use of ultrasound biomicroscopy. Ophthalmology. 1991; 98: 287–295. [CrossRef] [PubMed]
Friedman DS, He M. Anterior chamber angle assessment techniques. Surv Ophthalmol. 2008; 53: 250–273. [CrossRef] [PubMed]
Pavlin CJ, Harasiewicz K, Foster FS. Ultrasound biomicroscopy of anterior segment structures in normal and glaucomatous eyes. Am J Ophthalmol. 1992; 113: 381–389. [CrossRef] [PubMed]
Ishikawa H, Esaki K, Liebmann JM, Uji Y, Ritch R. Ultrasound biomicroscopy dark room provocative testing: a quantitative method for estimating anterior chamber angle width. Jpn J Ophthalmol. 1999; 43: 526–534. [CrossRef] [PubMed]
Shi G, Jiang Z, Deng G, et al. Automatic classification of anterior chamber angle using ultrasound biomicroscopy and deep learning. Transl Vis Sci Technol. 2019; 8: 25. [CrossRef] [PubMed]
Fu H, Baskaran M, Xu Y, et al. A deep learning system for automated angle-closure detection in anterior segment optical coherence tomography images. Am J Ophthalmol. 2019; 203: 37–45. [CrossRef] [PubMed]
Xu BY, Chiang M, Chaudhary S, Kulkarni S, Pardeshi AA, Varma R. Deep learning classifiers for automated detection of gonioscopic angle closure based on anterior segment OCT images. Am J Ophthalmol. 2019; 208: 273–280. [CrossRef] [PubMed]
Fu H, Xu Y, Lin S, et al. Angle-closure detection in anterior segment OCT based on multilevel deep network. IEEE Trans Cybern. 2020; 50: 3358–3366. [CrossRef] [PubMed]
Hao H, Zhao Y, Yan Q, et al. Angle-closure assessment in anterior segment OCT images via deep learning. Med Image Anal. 2021; 69: 101956. [CrossRef] [PubMed]
Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vision. 2015; 115: 211–252. [CrossRef]
Henzan IM, Tomidokoro A, Uejo C, et al. Comparison of ultrasound biomicroscopic configurations among primary angle closure, its suspects, and nonoccludable angles: the Kumejima Study. Am J Ophthalmol. 2011; 151: 1065–1073.e1. [CrossRef] [PubMed]
Chansangpetch S, Rojanapongpun P, Lin SC. Anterior segment imaging for angle closure. Am J Ophthalmol. 2018; 188: xvi–xxix. [CrossRef] [PubMed]
Lim DH, Lee MG, Chung ES, Chung TY. Clinical results of posterior chamber phakic intraocular lens implantation in eyes with low anterior chamber depth. Am J Ophthalmol. 2014; 158: 447–454.e1. [CrossRef] [PubMed]
Kunimatsu S, Tomidokoro A, Mishima K, et al. Prevalence of appositional angle closure determined by ultrasonic biomicroscopy in eyes with shallow anterior chambers. Ophthalmology. 2005; 112: 407–412. [CrossRef] [PubMed]
Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Lect Notes Comput Sci. 2015; 9351: 234–241. [CrossRef]
Ramani KK, Mani B, Ronnie G, Joseph R, Lingam V. Gender variation in ocular biometry and ultrasound biomicroscopy of primary angle closure suspects and normal eyes. J Glaucoma. 2007; 16: 122–128. [CrossRef] [PubMed]
Hirasawa H, Tomidokoro A, Kunimatsu S, et al. Ultrasound biomicroscopy in narrow peripheral anterior chamber eyes with or without peripheral anterior synechiae. J Glaucoma. 2009; 18: 552–556. [CrossRef] [PubMed]
Yao BQ, Wu LL, Zhang C, Wang X. Ultrasound biomicroscopic features associated with angle closure in fellow eyes of acute primary angle closure after laser iridotomy. Ophthalmology. 2009; 116: 444–448.e2. [CrossRef] [PubMed]
Ishikawa H, Liebmann JM, Ritch R. Quantitative assessment of the anterior segment using ultrasound biomicroscopy. Curr Opin Ophthalmol. 2000; 11: 133–139. [CrossRef] [PubMed]
Console JW, Sakata LM, Aung T, Friedman DS, He M. Quantitative analysis of anterior segment optical coherence tomography images: the Zhongshan Angle Assessment Program. Br J Ophthalmol. 2008; 92: 1612–1616. [CrossRef] [PubMed]
Lin Z, Mou da P, Liang YB, et al. Reproducibility of anterior chamber angle measurement using the Tongren ultrasound biomicroscopy analysis system. J Glaucoma. 2014; 23: 61–68. [CrossRef] [PubMed]
Li W, Chen Q, Jiang Z, et al. Automatic anterior chamber angle measurement for ultrasound biomicroscopy using deep learning. J Glaucoma. 2020; 29: 81–85. [CrossRef] [PubMed]
Sawaguchi S, Sakai H, Iwase A, et al. Prevalence of primary angle closure and primary angle-closure glaucoma in a southwestern rural population of Japan: the Kumejima Study. Ophthalmology. 2012; 119: 1134–1142. [CrossRef] [PubMed]
Shalev-Shwartz S, Ben-David S. Understanding Machine Learning: From Theory to Algorithms. Cambridge, UK: Cambridge University Press; 2014.
Li Z, Guo C, Nie D, et al. Deep learning from “passive feeding” to “selective eating” of real-world data. Npj Digit Med. 2020; 3(1): 143. [CrossRef] [PubMed]
Figure 1.
 
Open-angle and angle-closure images captured by UBM. (A) Open angle. (B) Angle closure. If the trabecular meshwork touched the iris, it was defined as angle closure.
Figure 2.
 
The labeling process of the UBM image. The UBM data set included 3788 images of 1483 patients. Each image was independently labeled by two experienced ophthalmologists (each with more than 8 years of clinical experience), and a third ophthalmologist (with more than 15 years of clinical experience) made the quality check on all marked data.
Figure 3.
 
Development of the deep learning models. UBM images accompanied by classification labels, localization labels, and segmentation labels were used to train the deep learning models.
Figure 4.
 
Representative images of various Euclidean distances (10, 50, 100, and 150 µm) between the scleral spur locations marked by ophthalmologists (green cross) and predicted by the deep learning model (red cross).
Figure 5.
 
The measurement results of the manual annotation and deep learning system. ∠CRI (represents the angle composed of point C, point R and point I) is TIA500, the length of CI (represents the distance from point C to point I) is AOD500, and the area of the purple area is ARA500. (A) The measurement results of the manual annotation. The red contour is the manual segmentation result. Points S and R represent the manually marked scleral spur and angle recess, respectively. The yellow circle is centered at point S with a radius of 500 µm. Point C represents the intersection point of the circle and the cornea inner surface, and point I is the intersection of a straight line perpendicular to the cornea inner surface and passing through point C and the anterior surface of the iris. (B) The measurement results of the deep learning system. The red contour is the deep learning segmentation result. Points S and R represent the scleral spur and angle recess predicted by the deep learning model.
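Given coordinates for points C, R, and I as defined in this caption, AOD reduces to a Euclidean distance and TIA to the angle at the recess R between the rays toward C and I. The sketch below is an illustration of that geometry, not the system's implementation; the 2D point coordinates (nominally in micrometers) are hypothetical.

```python
import math

def aod(c, i):
    """AOD: Euclidean distance from corneal point C to iris point I."""
    return math.dist(c, i)

def tia(c, r, i):
    """TIA: angle (degrees) at the angle recess R between rays R->C and R->I."""
    v1 = (c[0] - r[0], c[1] - r[1])
    v2 = (i[0] - r[0], i[1] - r[1])
    cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (
        math.hypot(*v1) * math.hypot(*v2)
    )
    # Clamp for floating-point safety before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
```

For instance, with R at the origin and C and I on perpendicular rays, `tia` returns 90 degrees; ARA would additionally require integrating the area enclosed by the cornea and iris contours.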
Figure 6.
 
The measurement results of the manual annotation and deep learning system. ∠CRI is TIA500, the length of CI is AOD500, and the area of the purple area is ARA500. (A) The measurement results of the manual annotation. The red contour is the manual segmentation result. Points S and R represent the manually marked scleral spur and angle recess, respectively. The yellow circle is centered at point S with a radius of 500 µm. Point C represents the intersection point of the circle and the cornea inner surface, and point I is the intersection of a straight line perpendicular to the cornea inner surface and passing through point C and the anterior surface of the iris. (B) The measurement results of the deep learning system. The red contour is the deep learning segmentation result. Points S and R represent the scleral spur and angle recess predicted by the deep learning model.
Figure 7.
 
Bland–Altman plots of TIA500 and TIA750. (A) Difference in TIA500 between ophthalmologists and deep learning system against the average of the two. (B) Difference in TIA750 between ophthalmologists and deep learning system against the average of the two.
Table 1.
 
Consistency Between the Manual and Automated Angle Parameters Measurement
Table 2.
 
Linear Regression Between Change in Spur Placement and Change in Angle Parameters