February 2022
Volume 11, Issue 2
Open Access
Application of Artificial Intelligence and Deep Learning for Choroid Segmentation in Myopia
Author Affiliations & Notes
  • Hung-Ju Chen
    Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
  • Yu-Len Huang
    Department of Computer Science, Tunghai University, Taichung, Taiwan
  • Siu-Lun Tse
    Department of Computer Science, Tunghai University, Taichung, Taiwan
  • Wei-Ping Hsia
    Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
  • Chung-Hao Hsiao
    Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
  • Yang Wang
    Department of Computer Science, Tunghai University, Taichung, Taiwan
  • Chia-Jen Chang
    Department of Optometry, Central Taiwan University of Science and Technology, Taichung City, Taiwan
    Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
  • Correspondence: Chia-Jen Chang, Department of Ophthalmology, Taichung Veterans General Hospital, No. 1650, Sec. 4, Taiwan Blvd., Xitun Dist., Taichung City 407, Taiwan (R.O.C.). e-mail: capmchangcj@gmail.com 
Translational Vision Science & Technology February 2022, Vol.11, 38. doi:https://doi.org/10.1167/tvst.11.2.38
Abstract

Purpose: To investigate the correlation between choroidal thickness and myopia progression using a deep learning method.

Methods: Two data sets, data set A and data set B, comprising 123 optical coherence tomography (OCT) volumes, were collected to establish the model and verify its clinical utility. The proposed mask region-based convolutional neural network (R-CNN) model, trained with the pretrained weights from the Common Objects in Context database as well as the manually labeled OCT images from data set A, was used to automatically segment the choroid. To verify its clinical utility, the mask R-CNN model was tested with data set B, and the choroidal thickness estimated by the model was also used to explore its relationship with myopia.

Results: Compared with the result of manual segmentation in data set B, the error of the automatic choroidal inner and outer boundary segmentation was 6.72 ± 2.12 and 13.75 ± 7.57 µm, respectively. The mean dice coefficient between the region segmented by automatic and manual methods was 93.87% ± 2.89%. The mean difference in choroidal thickness over the Early Treatment Diabetic Retinopathy Study zone between the two methods was 10.52 µm. Additionally, the choroidal thickness estimated using the proposed model was thinner in high-myopic eyes, and axial length was the most significant predictor.

Conclusions: The mask R-CNN model has excellent performance in choroidal segmentation and quantification. In addition, the choroid of high myopia is significantly thinner than that of nonhigh myopia.

Translational Relevance: This work lays the foundations for mask R-CNN models that could aid in the evaluation of more intricate changes occurring in chorioretinal diseases.

Introduction
High myopia is one of the leading causes of low vision worldwide. Its prevalence is 2.4% in the United States,1 4.2% in Taiwan,2 and 8.2% in Japan.3 Its progression is associated with progressive elongation of the eyeball, resulting in a variety of secondary fundus changes that may lead to visual impairment, including retinal detachment, myopic macular schisis, macular hole, choroidal neovascularization, and zonal areas of chorioretinal atrophy.4 In highly myopic eyes, the earliest changes begin in the choroid, and the thickness of the choroid may become one of the determinants in the pathogenesis of vision loss.5,6 
Optical coherence tomography (OCT) using enhanced depth imaging (EDI) has been used in many studies to measure choroidal thickness in normal populations and in eyes with ocular diseases, including age-related macular degeneration, polypoidal choroidal vasculopathy, central serous chorioretinopathy, and myopic maculopathy.7 Unlike the retina, choroidal structures are not found in distinct, ordered layers, and they lack contrasting reflective properties. The heterogeneous texture of tissues, artifact speckles, and various noises often cause difficulties in the extraction of accurate boundaries of the choroid layer in OCT images. Moreover, the evaluation of the choroid can be subjective in nature, as it relies on the clinician's familiarity with its characteristic patterns. Therefore, evaluation accuracy might be improved significantly if manual segmentation is performed by an experienced ophthalmologist. 
Manual segmentation of the choroid from OCT images is a time-consuming procedure for the clinician. Deep learning methods for choroid segmentation have been developed in recent years that have shown promising results. For example, Sui et al.8 proposed a deep convolutional neural network (CNN) in choroid segmentation by learning graph-edge weight, and the results outperformed conventional hand-crafted ones. Masood et al.9 combined morphologic operations and CNN to calculate choroidal thickness, which showed high precision and significantly reduced error rates. He et al.10 proposed an improved CNN model-based method that performs well on a small data set, and the results showed higher robustness and credibility. In addition, other neural networks, such as a U-shape convolutional network (U-Net), which is considered the most widely applicable architecture for medical image segmentation, has also been developed for OCT image segmentation.11,12 
Numerous deep learning methods have been developed for choroid segmentation in OCT images.9–11,13 However, few studies have applied the deep learning model in the clinical setting. Being a major vascular layer of the eye, the choroid plays an important role in ocular health. Accurate measurement of choroidal thickness is an essential step in monitoring disease onset and progression that lead to choroidal thinning. In this study, we propose a novel and practical deep learning model, mask region-based CNN (R-CNN), to segment the choroid boundary and measure the choroidal thickness. Furthermore, we aimed to use another clinical data set to verify its clinical utility in exploring choroidal changes in myopia progression. 
Methods
This study was approved by the Institutional Review Board of Taichung Veterans General Hospital (CE21201B). The need for informed consent from study participants was waived. The study protocol adhered to the tenets of the Declaration of Helsinki. All collected OCT images underwent deidentification before further processing. 
Data Acquisition
We retrospectively collected OCT images from patients in the Taichung Veterans General Hospital from January 2017 to December 2020. OCT images were obtained using spectral-domain OCT (Spectralis; Heidelberg Engineering, Heidelberg, Germany) with EDI methods, which provided a more detailed image of the choroid layer compared with conventional spectral-domain OCT. We measured the axial length using an ocular biometry system (IOL Master; Carl Zeiss Meditec, Oberkochen, Germany). Raw images were stored in a centralized workstation. We collected two data sets, data set A and data set B, at different times by different doctors. Data set A was collected by W.-P. Hsia, and data set B was collected by H.-J. Chen. Data set A was used to establish the model, and data set B was used to evaluate the performance of the model in clinical application. There were no overlapping cases between the two data sets. Eyes in data set B were further divided into a high-myopia group and a non–high-myopia group based on axial length; high myopia was defined as an axial length longer than 26 mm.14 The eligibility criteria for data set B were as follows: (1) age range from 40 to 65 years, (2) no ocular pathology other than age-related cataract, and (3) no diabetes mellitus. The exclusion criteria were as follows: (1) the presence of ocular diseases that could influence the normal contour of the retina and choroid layer (i.e., intraretinal fluid, subretinal fluid, large drusen, or tumor), (2) previous vitreoretinal surgery (i.e., for retinal detachment, epiretinal membrane, macular hole, or vitreous hemorrhage), (3) concomitant glaucoma, and (4) poor-quality OCT images. 
All eyes were assessed using the spectral-domain OCT with EDI mode consisting of a raster of 25 horizontal line scans over the Early Treatment Diabetic Retinopathy Study (ETDRS) area (6000 × 6000 µm centered on the fovea). Each acquired image measured 480 × 480 pixels (width × height) with a vertical scale of 4 µm per pixel and a horizontal scale of 11 µm per pixel, corresponding to an approximate physical area of 5.3 × 1.9 mm. 
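The pixel-to-micron conversion implied by these scan parameters can be sketched as follows; the constant and function names are illustrative, not from the authors' code:

```python
# Scales reported above for a 480 x 480 Spectralis EDI B-scan:
# 4 um per pixel vertically (axial) and 11 um per pixel horizontally.
VERTICAL_UM_PER_PX = 4.0
HORIZONTAL_UM_PER_PX = 11.0

def pixels_to_um(dx_px, dy_px):
    """Return (horizontal, vertical) distances in microns."""
    return dx_px * HORIZONTAL_UM_PER_PX, dy_px * VERTICAL_UM_PER_PX

# Full-frame physical extent: 480 px wide and 480 px deep comes to
# roughly 5.3 x 1.9 mm, matching the area quoted in the text.
width_um, depth_um = pixels_to_um(480, 480)
```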
Choroidal Thickness Definition and Ground Truth Labeling
We defined choroidal thickness as the axial distance between the choroid inner boundary (CiB) and the choroid outer boundary (CoB) (Fig. 1). The CiB is formed by Bruch's membrane, which is a thin layer derived in part from the retinal pigment epithelium and the choriocapillaris. The CoB is the low-contrast line extending along the choroid–sclera interface. Ground truth was specified by the ophthalmologist at Taichung Veterans General Hospital. The specialist manually segmented CiB and CoB on the OCT images. Thus, these manually segmented images were used as ground truth in the proposed method. Furthermore, the manually labeled OCT images from data set A were also used for model training described subsequently. 
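Given segmented boundaries, the thickness definition above reduces to a per-column axial distance scaled by the pixel pitch. A minimal sketch, assuming the CiB and CoB are represented as one row index per image column (the representation is an assumption for illustration):

```python
import numpy as np

# Axial scale of the B-scans described in the Methods: 4 um per pixel.
UM_PER_PX_AXIAL = 4.0

def choroidal_thickness_um(cib_rows, cob_rows):
    """Choroidal thickness per A-scan column, in microns.

    cib_rows, cob_rows: row index of the choroid inner/outer boundary
    for each column; CoB lies below CiB, so cob - cib is positive.
    """
    cib = np.asarray(cib_rows, dtype=float)
    cob = np.asarray(cob_rows, dtype=float)
    return (cob - cib) * UM_PER_PX_AXIAL

# Example: CoB 40 px below CiB at every column -> 160 um everywhere.
t = choroidal_thickness_um([100, 102], [140, 142])
```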
Figure 1.
 
Physiologic structure of the CiB and the CoB.
Deep Learning Algorithms and Transfer Learning
As shown in Figure 2, the segmentation of the choroid was performed using a mask R-CNN. Mask R-CNN is a small, flexible, generic object instance segmentation framework. It extends Faster R-CNN and produces a higher-quality segmentation result for each target.15 Mask R-CNN has a two-stage architecture. In the first stage, the backbone network extracts a feature map from the input OCT image, and this output is passed to the region proposal network to determine possible locations of the choroid. In the second stage, a fully convolutional network (FCN) extracts features from the fixed-size feature map and predicts a mask for the choroid. 
Figure 2.
 
The schematic architecture of mask R-CNN. RPN, region proposal network.
In our proposed model, we used ResNet50 and a feature pyramid network (FPN) as the backbone. ResNet50 is a residual learning framework used to extract features from images and to mitigate the degradation problem that arises as network depth increases.16 However, features lose location information during up-sampling. The FPN is another method for extending the backbone network: it combines the high-resolution information of low-level features with the high-semantic information of high-level features to detect objects at different scales.17 Hence, mask R-CNN used ResNet50 to generate the features and added an FPN to mitigate the loss of location information. The mask R-CNN architecture and other related details are described in Appendix 1. 
Training data are the primary and most important input that allow the machine to learn and make predictions. However, annotating the choroid boundary is not only tedious, costly, and time-consuming but also requires specialty-oriented knowledge and skills. Therefore, a large number of annotated OCT images are not easily accessible. To better grasp the image features and thereby improve segmentation accuracy, we performed transfer learning to transfer the weights from the Common Objects in Context database.18 Transfer learning is a commonly used technique when developing medical imaging CNN models on small data sets.19–21 Although medical image data sets differ from natural scene image data sets, the low-level features are universal to most image analysis tasks. During transfer learning, the convolutional layers of the pretrained model were frozen, so only our own customized fully connected layers needed to be trained, enabling the model to quickly grasp the important features. The transferred parameters can serve as a powerful set of features, reducing the need for a large data set as well as the training time and memory cost. In addition to transferring the weights from the Common Objects in Context database, we also used all the manually labeled OCT images from data set A for the model training. 
Postprocessing
The implementation of mask R-CNN generates three branches (classification, bounding box, and mask) for each object instance. In our study, we discarded the classification and bounding-box branches and used the remaining mask branch to obtain the choroidal inner and outer boundaries. During automatic segmentation, unreasonable dents on the boundaries were occasionally noted, as shown in Figure 3. To address this problem, we created a vector for each side of the boundary polygon and used the cross product of adjacent sides to test convexity. When a concave corner was detected, the boundary was redrafted by connecting that corner to the next corner along the x-axis, thereby filling the unreasonable dent. This postprocessing procedure proved practical for refining the sketched results of the proposed segmentation model. 
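The cross-product convexity test can be sketched in a few lines. This is a minimal illustration of the detection step only; the dent-filling (reconnecting corners along the x-axis) is omitted, and the function names are ours, not the authors':

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); its sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def concave_corner_indices(points):
    """Indices of polygon vertices whose turn direction disagrees with
    the dominant (majority) direction, i.e., the concave 'dents'."""
    n = len(points)
    signs = [cross(points[i - 1], points[i], points[(i + 1) % n])
             for i in range(n)]
    pos = sum(1 for s in signs if s > 0)
    neg = sum(1 for s in signs if s < 0)
    dominant = 1 if pos >= neg else -1
    return [i for i, s in enumerate(signs) if s * dominant < 0]

# A square is convex; adding the vertex (2, 2) dents its top edge.
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
dented = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
```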
Figure 3.
 
One case with an unreasonable dent in the boundary. (A) Manual segmentation result. (B) Automatic segmentation result with erroneous kink (arrow) in the boundary before the postprocessing procedure. (C) Automatic segmentation result after the postprocessing procedure.
Evaluation Metrics
To test the results, an error metric was used to compute the error of the choroidal boundary segmentation on the test data set. The error was defined as the mean absolute distance between the manual and automatic choroidal boundary segmentations. After calculating the error of the upper and lower borders, the dice similarity coefficient (DSC) was computed on the test data set to measure the similarity between the segmentation results of the proposed method and the ground truth. The coefficient was calculated as follows:  
\begin{eqnarray*}{\rm{DSC}} = \frac{{2\left| {{\rm{A}} \cap {\rm{B}}} \right|}}{{\left| {\rm{A}} \right| + \left| {\rm{B}} \right|}}\end{eqnarray*}
where A and B are the segmented choroidal region and the manually labeled choroidal region, respectively. 
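Both evaluation metrics are straightforward to compute from binary masks and boundary curves. A minimal numpy sketch, assuming boundaries are given as one row index per column and using the 4 µm/px axial scale from the Methods:

```python
import numpy as np

UM_PER_PX_AXIAL = 4.0  # axial pixel scale reported in the Methods

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def boundary_error_um(auto_rows, manual_rows):
    """Mean absolute distance between boundary curves, in microns."""
    diff = np.abs(np.asarray(auto_rows, float) - np.asarray(manual_rows, float))
    return diff.mean() * UM_PER_PX_AXIAL

# Toy masks: intersection 4 px, |A| = 4, |B| = 5 -> DSC = 8/9.
pred = [[0, 1, 1, 0],
        [0, 1, 1, 0]]
truth = [[0, 1, 1, 1],
         [0, 1, 1, 0]]
```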
Application of the Proposed Model
To evaluate the mask R-CNN model in the clinical setting, data set B, composed of 2325 B-scans from 93 volumes, was collected and analyzed by both the manual and automatic methods to test the segmentation results (Fig. 4). Manual segmentation was defined as the ground truth. In addition, to investigate the association between choroidal thickness and myopia progression, we used the results from both the manual and automatic measurements for further analysis. 
Figure 4.
 
Sketching result of one case. (A) Original image. (B) Manual segmentation result. (C) Automatic segmentation result.
Statistical Analysis
The data obtained were analyzed using SPSS 25 (SPSS, Inc., Chicago, IL, USA). We used the Kolmogorov–Smirnov test to determine whether each variable had a normal distribution. Descriptive statistics were used to compare various characteristics between groups. Categorical data were presented as numbers and continuous data as means ± standard deviations. We used Pearson's χ2 test for comparisons of qualitative variables. Student's t-test and the Mann–Whitney U test were used for comparisons of quantitative variables between the two groups. Pearson's correlation analysis was used to assess the relationship between the choroidal thickness as estimated by automatic and manual segmentation and to evaluate the relationship between axial length and choroidal thickness. In addition, multiple linear regression, using the "enter" method, was performed to examine the influence of demographic (age and sex) and biometric ocular factors (axial length) on the measurement of choroidal thickness.22 The Bland–Altman plot was used to assess the agreement between the automatic and manual methods.23 A P value of <0.05 was accepted as the threshold of statistical significance. 
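The Bland–Altman agreement analysis reduces to the mean of the paired differences plus and minus 1.96 sample standard deviations. A sketch with made-up numbers, not the study data:

```python
import numpy as np

def bland_altman_limits(auto, manual):
    """95% limits of agreement: mean difference +/- 1.96 * SD."""
    diff = np.asarray(auto, float) - np.asarray(manual, float)
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

# Toy thickness pairs (um): differences -5, -6, -7 -> mean -6, SD 1.
lo, hi = bland_altman_limits([170, 180, 165], [175, 186, 172])
```

A measurement pair is flagged as discordant when its difference falls outside [lo, hi], which is how the 3.2% figure in the Results would be obtained.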
Results
Performance of the Proposed Model
The 30 OCT volumes from data set A, composed of 750 B-scans, were used to establish the proposed model and test the error of its automatic sketching scheme. Threefold cross-validation was used to estimate the performance of the proposed model. Twenty-five OCT images were obtained for each volume. The 30 volumes, taken from 30 eyes, were equally divided into three subsets (10 eyes each). Each subset in turn served as the test set, with the remaining two subsets pooled together for training. To assess the model's performance, the automatically segmented choroidal boundaries from each test set were compared with the manually labeled boundaries, and the performance over all test sets was averaged. The average error of choroidal inner and outer boundary segmentation in data set A was 5.89 ± 2.25 µm and 10.96 ± 10.15 µm, respectively. 
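The eye-level threefold split described above can be sketched as follows; splitting by eye (rather than by B-scan) keeps all 25 scans of a volume in the same fold, so no eye appears in both training and test sets. Function names are illustrative:

```python
def threefold_splits(eye_ids):
    """Yield (train, test) eye-ID lists for threefold cross-validation.

    The eyes are divided into three equal subsets; each subset serves
    once as the test set while the other two are pooled for training.
    """
    assert len(eye_ids) % 3 == 0
    fold = len(eye_ids) // 3
    subsets = [eye_ids[i * fold:(i + 1) * fold] for i in range(3)]
    for k in range(3):
        test = subsets[k]
        train = [e for j in range(3) if j != k for e in subsets[j]]
        yield train, test

# 30 eyes -> three folds of 10 test eyes / 20 training eyes each.
splits = list(threefold_splits(list(range(30))))
```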
Proposed Model in Clinical Application
Data set B consisted of 93 volumes taken from 93 eyes, with a total of 2325 B-scans, and was used to prove the clinical utility of the proposed model. The mean patient age was 52.46 ± 4.87 years. Table 1 presents the baseline characteristics of data set B. There were 51 eyes classified into the non–high-myopia group and 42 eyes classified into the high-myopia group. We found no significant difference between the two subgroups in age or sex distribution (P = 0.608 and P = 0.472, respectively). 
Table 1.
 
Demographics and Baseline Characteristics of the Two Groups Divided by Axial Length
As presented in Table 2, the mean error of the choroidal inner and outer boundary segmentation in data set B was 6.72 ± 2.12 µm and 13.75 ± 7.57 µm, respectively. After calculating the boundary segmentation error, the DSC was computed to measure the similarity of the choroidal regions segmented by the proposed method and the ground truth. The mean DSC over the 93 volumes in data set B was 93.87% ± 2.89%. 
Table 2.
 
Performance of Proposed Model in Data Set B
The mean choroidal thickness over the ETDRS region was 173.06 ± 57.06 µm using the automatic method, compared with 184.00 ± 61.19 µm using the manual method, with a mean difference of 10.51 ± 8.35 µm. The difference in mean choroidal thickness measurements between the two methods is illustrated in the Bland–Altman plot in Figure 5, which shows that only 3.2% (3/93) of the measurement points fell outside the 95% limits of agreement (−26.88 to 5.86 µm). 
Figure 5.
 
(A) Scatterplot of choroidal thickness measurements between automatic and manual methods. (B) Bland–Altman plot of automated choroidal thickness measurements minus manual choroidal thickness measurements over the ETDRS region.
Determination of Factors Associated with Choroidal Thickness Measured by Proposed Model
Choroidal thickness measured by automatic segmentation using the mask R-CNN model was significantly thinner in the high-myopia group (mean, 134.84 ± 43.83 µm) than in the non–high-myopia group (mean, 204.83 ± 46.29 µm; P < 0.001). The average choroidal thickness estimated by automatic segmentation was then used to evaluate its association with sex, age, and axial length. Table 3 summarizes the results of the multiple linear regression analysis of the choroidal thickness. In the multivariate linear regression, the choroidal thickness estimated by automatic segmentation was associated with age (β = −0.151, P = 0.024) and axial length (β = −0.784, P < 0.001). In addition, correlation analysis between axial length and choroidal thickness estimated by automatic segmentation showed a strong inverse correlation (r = −0.765, P < 0.001). 
Table 3.
 
Multivariate Linear Regression Analysis to Determine the Factors Related to the Choroidal Thickness Estimated by the Proposed Model
Discussion
Accurate segmentation of the choroid is important for exploring choroid-related disease, and deep learning methods for fully automated segmentation offer a promising approach. In our study, we proposed a new automatic choroid segmentation method based on the mask R-CNN model and compared the results with manual segmentation, which was defined as the ground truth. The results showed that the mask R-CNN model predicted the choroidal boundary accurately and that the regions segmented by the automatic and manual methods were highly similar. Furthermore, the choroidal thickness estimated by automatic segmentation was thinner with greater myopia, older age, and longer axial length. 
In the computer age, one of the most important directions for medical research is to build large databases of clinical data collected from patients. Artificial intelligence (AI) refers to the automatic analysis of existing information, and deep learning is a principal method by which AI is put into practice. Many network architectures are available for medical image analysis, such as the CNN, FCN, and mask R-CNN. A CNN is a class of deep neural network most commonly applied to the analysis of visual imagery.24 It works by extracting features from images and recognizing objects through feature learning. As the number of layers of the neural network increases, the features that can be extracted become more complex, which can consume enormous time and computing resources. Thus, Long et al.25 proposed the FCN for semantic image segmentation. An FCN consists of convolutional layers and can classify each pixel of the image from abstract features at a faster processing speed. However, its segmentation is not instance level and is not sufficiently efficient. Recently, mask R-CNN was proposed to solve the instance segmentation problem and is now widely used in medical image analysis.26 
To highlight the potential advantages of the mask R-CNN model, we compared it with U-Net,27 another state-of-the-art deep learning method. The architecture and other relevant details of U-Net are described in Appendix 2. The comparison of boundary errors and dice coefficients in choroidal segmentation over the entire data set B (93 volumes with a total of 2325 B-scans) is given in Table 4. Our proposed method showed a smaller choroidal boundary segmentation error and a larger dice coefficient than the U-Net method, emphasizing the effectiveness of the mask R-CNN model. 
Table 4.
 
Comparison in Upper Border Error, Lower Border Error, and Dice Coefficient
To date, many deep learning methods have been developed for choroidal segmentation, although few studies have explored the clinical utility of these models.8–10 Choroidal thinning, a significant structural change preceding the development of myopia,5,6,28 has been shown to be related to decreased vision.29 With the aim of determining the value of deep learning in automatic segmentation of the choroid and understanding the association between choroidal thickness and myopia progression, we divided our study into two phases. In the first phase, we proposed a model based on a deep learning algorithm, mask R-CNN. In the second phase, we tested the performance of the mask R-CNN model and proved its clinical utility with another data set, data set B. 
In data set B, we found that the mean difference in the choroidal thickness measurements over the ETDRS region between the two methods was 10.34 µm. In our experience, this difference is small and likely to be clinically insignificant. A study by Rahman et al.30 has reported that interobserver variability in choroidal thickness measurements may result in differences of up to 32 µm. Moreover, this difference was also much smaller than diurnal variation in choroidal thickness. A study by Tan et al.31 reported that significant diurnal variation was noted in choroidal thickness among healthy adults, and the mean amplitude was 33.7 µm. Regarding the segmentation errors in the choroidal outer boundary, our proposed model performed slightly better in the high-myopia subgroup. The possible reason for the lower error may be that the choroid–sclera interface was more clearly and easily visible in the high-myopia subgroup.32 
In addition to testing the performance of our proposed model, we also highlighted the importance of choroidal thickness in myopia progression. A 2-year longitudinal observational study by Li et al.33 found that myopic participants with a thinner choroid tended to have a higher likelihood of progression of myopic maculopathy. The exact mechanism of why eyes with high myopia develop degenerative and atrophic changes remains unclear. The mechanical stretch of the retina and ischemia caused by increased axial length, which may decrease the density and diameter of the choriocapillaris, were the most probable reasons for the development of myopic maculopathy.34 Apart from axial length, age also correlated with choroidal thickness in myopic eyes. A possible pathogenesis that can explain the relationship between aging and choroidal thinning is that choroidal vessels are prone to be affected by systemic conditions, such as hypertension and hyperlipidemia, and are likely to undergo atherosclerotic and aging changes. These microvascular changes may result in a decrease in choroidal thickness.35,36 Moreover, in contrast to prior studies that measured only the perpendicular distance between Bruch's membrane and the choroidal outer boundary at a few points to represent the average choroidal thickness,37 we measured the average choroidal thickness over the ETDRS area. 
AI has been shown to be capable of helping clinicians to make an accurate assessment and decisions in many ways. It is worth noting that the mask R-CNN model we proposed showed great performance for delineating the association between choroidal thickness and myopia progression. However, the application of a deep learning model for exploring the relationship between choroidal thickness and numerous pathologic eye diseases, as well as changes in choroidal thickness after treatment with intravitreal anti–vascular endothelial growth factor injections, has not been explored. Moreover, some of the limitations of our study should be highlighted, as they should potentially be addressed in future research. First, the method we used to segment the choroid is the two-dimensional (2D) method, which might have caused the segmentation of adjacent 2D slices to be discontinuous. The three-dimensional (3D) segmentation method, which directly uses the full volumetric image represented by a sequence of 2D slices, might achieve better continuity across adjacent 2D slices.38 However, 3D segmentation using deep learning techniques requires significantly higher computation power and memory overhead than sequential 2D image analyses. In the future, suitable 3D segmentation methods based on deep learning techniques should be developed to automatically segment the choroid and further improve segmentation quality. Second, we segmented and quantified the choroidal thickness without calculating the choroidal vascularity index. The choroidal vascularity index has been discussed in numerous studies with regard to its potential applications in the evaluation and management of several disorders of the retina and the choroid.39,40 Third, we only included eyes with good-quality OCT images, which might have contributed to potential selection bias. 
In conclusion, AI has become an indispensable method for solving complex problems. In this study, we proposed the mask R-CNN model to evaluate the choroidal thickness in OCT images. The results showed that the model has excellent performance for segmentation and quantification of the choroid. In addition, the mask R-CNN model is feasible for use in the assessment of choroidal change in myopia. Future research is recommended to investigate whether the proposed deep learning model, mask R-CNN, can be used to elucidate the pathogenesis of additional chorioretinal diseases, to reflect disease activity, and to help the clinician make better treatment choices for disease control. 
Acknowledgments
Supported by the Ministry of Science and Technology, Taiwan, Republic of China, under Grant MOST 109-2221-E-029-024. 
Disclosure: H.-J. Chen, None; Y.-L. Huang, None; S.-L. Tse, None; W.-P. Hsia, None; C.-H. Hsiao, None; Y. Wang, None; C.-J. Chang, None 
References
Tarczy-Hornoch K, Ying-Lai M, Varma R; Los Angeles Latino Eye Study Group. Myopic refractive error in adult Latinos: the Los Angeles Latino Eye Study. Invest Ophthalmol Vis Sci. 2006; 47(5): 1845–1852. [CrossRef] [PubMed]
Chen SJ, Cheng CY, Li AF, et al. Prevalence and associated risk factors of myopic maculopathy in elderly Chinese: the Shihpai eye study. Invest Ophthalmol Vis Sci. 2012; 53(8): 4868–4873. [CrossRef] [PubMed]
Sawada A, Tomidokoro A, Araie M, Iwase A, Yamamoto T. Refractive errors in an elderly Japanese population: the Tajimi study. Ophthalmology. 2008; 115(2): 363–370.e3. [CrossRef] [PubMed]
Ruiz-Medrano J, Montero JA, Flores-Moreno I, Arias L, García-Layana A, Ruiz-Moreno JM. Myopic maculopathy: current status and proposal for a new classification and grading system (ATN). Prog Retin Eye Res. 2019; 69: 80–115. [CrossRef] [PubMed]
Fujiwara T, Imamura Y, Margolis R, Slakter JS, Spaide RF. Enhanced depth imaging optical coherence tomography of the choroid in highly myopic eyes. Am J Ophthalmol. 2009; 148(3): 445–450. [CrossRef] [PubMed]
Ho M, Liu DT, Chan VC, Lam DS. Choroidal thickness measurement in myopic eyes by enhanced depth optical coherence tomography. Ophthalmology. 2013; 120(9): 1909–1914. [CrossRef] [PubMed]
Wu L, Alpizar-Alvarez N. Choroidal imaging by spectral domain-optical coherence tomography. Taiwan J Ophthalmol. 2013; 3(1): 3–13. [CrossRef]
Sui X, Zheng Y, Wei B, et al. Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks. Neurocomputing. 2017; 237: 332–341. [CrossRef]
Masood S, Fang R, Li P, et al. Automatic choroid layer segmentation from optical coherence tomography images using deep learning. Sci Rep. 2019; 9(1): 3058. [CrossRef] [PubMed]
He F, Chun RKM, Qiu Z, et al. Choroid segmentation of retinal OCT images based on CNN classifier and l (2)-l (q)Fitter. Comput Math Methods Med. 2021; 2021: 8882801. [PubMed]
Xuena C, Xinjian C, Yuhui M, Weifang Z, Ying F, Fei S. Choroid segmentation in OCT images based on improved U-net. Paper presented at: Proc. Society of Photo-Optical Instrumentation Engineers (SPIE). San Diego, California, United States; February 16–21, 2019.
Ronneberger O, Fischer P, U-net Brox T.: convolutional networks for biomedical image segmentation. Paper presented at: International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany; October 5–9, 2015.
Kugelman J, Alonso-Caneiro D, Read SA, et al. Automatic choroidal segmentation in OCT images using supervised deep learning methods. Sci Rep. 2019; 9(1): 13298. [CrossRef] [PubMed]
Hoffer KJ. The Hoffer Q formula: a comparison of theoretic and regression formulas. J Cataract Refract Surg. 1993; 19(6): 700–712. [CrossRef] [PubMed]
He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. IEEE Trans Pattern Anal Mach Intell. 2020; 42(2): 386–397. [CrossRef] [PubMed]
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Paper presented at: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA; June 27–30, 2016.
Lin T-Y, Dollár P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. Paper presented at: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA; July 21–26, 2017.
Lin T-Y, Maire M, Belongie S, et al. Microsoft coco: common objects in context. Paper presented at: European Conference on Computer Vision. Zurich, Switzerland; September 6–12, 2014.
Alzubaidi L, Al-Amidie M, Al-Asadi A, et al. Novel transfer learning approach for medical imaging with limited labeled data. Cancers (Basel). 2021; 13(7): 1590. [CrossRef] [PubMed]
Almubarak H, Bazi Y, Alajlan N. Two-stage mask-rcnn approach for detecting and segmenting the optic nerve head, optic disc, and optic cup in fundus images. Applied Sciences. 2020; 10(11): 3833. [CrossRef]
Park K, Kim J, Lee J. Automatic optic nerve head localization and cup-to-disc ratio detection using state-of-the-art deep-learning architectures. Sci Rep. 2020; 10(1): 5025. [CrossRef] [PubMed]
Hinton P, McMurray I, Brownlow C. SPSS Explained. London, Routledge; 2014.
Haghayegh S, Kang HA, Khoshnevis S, Smolensky MH, Diller KR. A comprehensive guideline for Bland-Altman and intra class correlation calculations to properly compare two methods of measurement and interpret findings. Physiol Meas. 2020; 41(5): 055012. [CrossRef] [PubMed]
Fukushima K. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern. 1980; 36(4): 193–202. [CrossRef] [PubMed]
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. Paper presented at: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA; June 7–12, 2015.
He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. Paper presented at: 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy; October 22–29, 2017.
Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Paper presented at: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Cham, Switzerland; October 5–9, 2015.
Read SA, Collins MJ, Vincent SJ, Alonso-Caneiro D. Choroidal thickness in myopic and nonmyopic children assessed with enhanced depth imaging optical coherence tomography. Invest Ophthalmol Vis Sci. 2013; 54(12): 7578–7586. [CrossRef] [PubMed]
Nishida Y, Fujiwara T, Imamura Y, Lima LH, Kurosaka D, Spaide RF. Choroidal thickness and visual acuity in highly myopic eyes. Retina. 2012; 32(7): 1229–1236. [CrossRef] [PubMed]
Rahman W, Chen FK, Yeoh J, Patel P, Tufail A, Da Cruz L. Repeatability of manual subfoveal choroidal thickness measurements in healthy subjects using the technique of enhanced depth imaging optical coherence tomography. Invest Ophthalmol Vis Sci. 2011; 52(5): 2267–2271. [CrossRef] [PubMed]
Tan CS, Ouyang Y, Ruiz H, Sadda SR. Diurnal variation of choroidal thickness in normal, healthy subjects measured by spectral domain optical coherence tomography. Invest Ophthalmol Vis Sci. 2012; 53(1): 261–266. [CrossRef] [PubMed]
Hayashi M, Ito Y, Takahashi A, Kawano K, Terasaki H. Scleral thickness in highly myopic eyes measured by enhanced depth imaging optical coherence tomography. Eye (Lond). 2013; 27(3): 410–417. [CrossRef] [PubMed]
Li Z, Wang W, Liu R, et al. Choroidal thickness predicts progression of myopic maculopathy in high myopes: a 2-year longitudinal study. Br J Ophthalmol. 2021; 105(12): 1744–1750. [CrossRef] [PubMed]
Ohno-Matsui K, Lai TY, Lai CC, Cheung CM. Updates of pathologic myopia. Prog Retin Eye Res. 2016; 52: 156–187. [CrossRef] [PubMed]
Yildiz O. Vascular smooth muscle and endothelial functions in aging. Ann N Y Acad Sci. 2007; 1100: 353–360. [CrossRef] [PubMed]
Muller-Delp JM. Aging-induced adaptations of microvascular reactivity. Microcirculation. 2006; 13(4): 301–314. [CrossRef] [PubMed]
Flores-Moreno I, Lugo F, Duker JS, Ruiz-Moreno JM. The relationship between axial length and choroidal thickness in eyes with high myopia. Am J Ophthalmol. 2013; 155(2): 314–319.e311. [CrossRef] [PubMed]
Shivdeo A, Lokwani R, Kulkarni V, Kharat A, Pant A. Evaluation of 3D and 2D Deep Learning Techniques for Semantic Segmentation in CT Scans. 2021 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD), 2021:1–8.
Agrawal R, Gupta P, Tan K-A, Cheung CMG, Wong T-Y, Cheng C-Y. Choroidal vascularity index as a measure of vascular status of the choroid: measurements in healthy eyes from a population-based study. Sci Rep. 2016; 6(1): 21090. [CrossRef] [PubMed]
Iovino C, Pellegrini M, Bernabei F, et al. Choroidal vascularity index: an in-depth analysis of this novel optical coherence tomography parameter. J Clin Med. 2020; 9(2): 595. [CrossRef]
Figure 1. Physiologic structure of the CiB and the CoB.
Figure 2. The schematic architecture of mask R-CNN. RPN, region proposal network.
Figure 3. One case with an unreasonable dent in the boundary. (A) Manual segmentation result. (B) Automatic segmentation result with erroneous kink (arrow) in the boundary before the postprocessing procedure. (C) Automatic segmentation result after the postprocessing procedure.
Figure 4. Sketching result of one case. (A) Original image. (B) Manual segmentation result. (C) Automatic segmentation result.
Figure 5. (A) Scatterplot of choroidal thickness measurements between automatic and manual methods. (B) Bland–Altman plot of automated choroidal thickness measurements minus manual choroidal thickness measurements over the ETDRS region.
Table 1. Demographics and Baseline Characteristics of the Two Groups Divided by Axial Length
Table 2. Performance of the Proposed Model in Data Set B
Table 3. Multivariate Linear Regression Analysis to Determine the Factors Related to the Choroidal Thickness Estimated by the Proposed Model
Table 4. Comparison of Upper Border Error, Lower Border Error, and Dice Coefficient
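For reference, the Dice coefficient reported in Table 4 measures the overlap between predicted and ground-truth masks as 2|A∩B| / (|A| + |B|). A minimal sketch (the function name and toy masks are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2*|A & B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy masks: 2 overlapping pixels, 3 + 2 labeled pixels in total.
a = np.array([[1, 1, 0], [1, 0, 0]])
b = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_coefficient(a, b))  # 2*2 / (3+2) = 0.8
```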