Artificial Intelligence  |  March 2023  |  Volume 12, Issue 3  |  Open Access
A Deep Learning–Based Fully Automated Program for Choroidal Structure Analysis Within the Region of Interest in Myopic Children
Author Affiliations & Notes
  • Meng Xuan
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Wei Wang
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Danli Shi
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  • James Tong
    Monash e-Research Centre, Monash University, Melbourne, Victoria, Australia
    Monash Medical AI Group, Monash University, Melbourne, Victoria, Australia
  • Zhuoting Zhu
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  • Yu Jiang
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Zongyuan Ge
    Monash e-Research Centre, Monash University, Melbourne, Victoria, Australia
    Monash Medical AI Group, Monash University, Melbourne, Victoria, Australia
  • Jian Zhang
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Gabriella Bulloch
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
    Faculty of Science, Medicine and Health, University of Melbourne, Melbourne, Victoria, Australia
  • Guankai Peng
    Guangzhou Vision Tech Medical Technology Co., Ltd., Guangzhou, China
  • Wei Meng
    Guangzhou Vision Tech Medical Technology Co., Ltd., Guangzhou, China
  • Cong Li
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Ruilin Xiong
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Yixiong Yuan
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Mingguang He
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
    Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
  • Correspondence: Mingguang He, Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Level 7, 32 Gisborne Street, East Melbourne, VIC 3004, Australia. e-mail: mingguang.he@unimelb.edu.au 
  • Footnotes
    *  MX, WW, DS, and JT contributed equally to this work.
Translational Vision Science & Technology March 2023, Vol.12, 22. doi:https://doi.org/10.1167/tvst.12.3.22
Abstract

Purpose: To develop and validate a fully automated program for choroidal structure analysis within a 1500-µm-wide region of interest centered on the fovea (deep learning–based choroidal structure assessment program [DCAP]).

Methods: A total of 2162 fovea-centered radial swept-source optical coherence tomography (SS-OCT) B-scans from 162 myopic children with cycloplegic spherical equivalent refraction ranging from −1.00 to −5.00 diopters were collected to develop the DCAP. Medical Transformer network and Small Attention U-Net were used to automatically segment the choroid boundaries and the nulla (the deepest point within the fovea). Automatic denoising based on choroidal vessel luminance and binarization were applied to isolate choroidal luminal/stromal areas. To further compare the DCAP with the traditional handcrafted method, the luminal/stromal areas and choroidal vascularity index (CVI) values for 20 OCT images were measured by three graders and the DCAP separately. Intraclass correlation coefficients (ICCs) and limits of agreement were used for agreement analysis.

Results: The mean ± SD pixel-wise distances from the predicted choroidal inner, outer boundary, and nulla to the ground truth were 1.40 ± 1.23, 5.40 ± 2.24, and 1.92 ± 1.13 pixels, respectively. The mean times required for choroidal structure analysis were 1.00, 438.00 ± 75.88, 393.25 ± 78.77, and 410.10 ± 56.03 seconds per image for the DCAP and three graders, respectively. Agreement between the automatic and manual area measurements was excellent (ICCs > 0.900) but poor for the CVI (0.627; 95% confidence interval, 0.279–0.832). Additionally, the DCAP demonstrated better intersession repeatability.

Conclusions: The DCAP is far faster than manual methods. It also reduced intra-/intergrader and intersession variations.

Translational Relevance: The DCAP could aid in choroidal structure assessment.

Introduction
The choroid is the most highly vascularized tissue in the eye and is composed of vessels encased in stroma.1,2 Structural changes to the choroid are implicated in various ocular and systemic conditions,3–9 including high myopia, diabetic retinopathy, glaucoma, pachychoroid diseases, and Parkinson's disease. Therefore, quantitative choroidal structure assessment is vital for investigating the pathophysiology of these disorders. 
Swept-source optical coherence tomography (SS-OCT) enables deep tissue penetration for in vivo cross-sectional depiction of the choroidal structure at micrometer resolution.10 In addition, image binarization can convert grayscale images into binary images, with dark and light pixels representing the luminal and stromal areas, respectively.11–13 These developments led to the creation of novel OCT-based choroidal parameters, including the luminal area (LA); the stromal area (SA); the total choroidal area (TCA), which is the combination of LA and SA; and the ratio of LA to TCA (termed the choroidal vascular ratio by Sonoda et al.11 or the choroidal vascularity index [CVI] by Agrawal et al.13), which have been widely applied to choroidal structure analysis.7,14–17 Currently, choroidal parameters are obtained manually,11,13,15,18 which is time consuming and requires subjective grader input.14,15 With the widespread adoption of OCT, a program suitable for large-scale analyses is urgently needed. Some deep learning–based algorithms for segmenting the choroid automatically already exist,19–36 but they focus primarily on identifying the anterior and posterior boundaries of the choroid and on automated choroidal thickness measurement, rather than on quantifying the structure inside the choroid (specifically, separating the choroidal LA from the SA) or further calculating the CVI.19–29 In addition, although some studies have assessed choroidal structure automatically using the binarization technique or deep learning–based choroidal vessel segmentation, they analyzed the entire choroid in OCT B-scans without identifying a region of interest (ROI), which compromises the standardization and comparability of the measurement areas, as the measured region differs from case to case.30–37 To date, no algorithm with automated choroidal structure detection in defined areas has been established. 
Considering regional differences in choroidal structure,38 such an approach would facilitate comparison across different studies and the tracking of longitudinal choroidal structure changes. 
In this study, we developed a deep learning–based choroidal structure assessment program (DCAP) that will be published online (Choroid-AI.com). It combines choroidal segmentation with foveal center detection using deep learning to automatically determine an ROI of defined width. In addition, the program automatically denoises and binarizes OCT images, allowing the choroidal luminal and stromal areas to be quantified separately. The repeatability of this program and the time it requires to quantify choroidal structure were further compared with those of manual methods. 
Methods
OCT Database and Ground Truth Labeling
The dataset used consists of fovea-centered SS-OCT (DRI OCT Triton; Topcon, Tokyo, Japan) B-scans from a longitudinal study that has been described in detail in a previous study.39 In brief, the OCT images were obtained from both eyes of 162 children at baseline and at follow-up visits over a 12-month period. Demographic features and baseline axial length, refractive errors, and uncorrected visual acuity are shown in Supplementary Table S1. The OCT scanning pattern consisted of 12 circumferential radial lines centered on the fovea and separated by 15° (Fig. 1). Each scanning line measured 12 mm in length. OCT images were exported in the Topcon high-resolution .fda file format (each .fda file consists of 12 B-scans). For each .fda file, three slices with a quality score of >60 and without eye movement, residual motion artifacts, or blinking were selected for ground truth labeling. For .fda files from the same eye but captured at different follow-up visits, OCT images in different scanning directions were selected to avoid the inclusion of B-scans with a high degree of similarity. For example, if the OCT B-scans corresponding to scanning lines 1, 5, and 9 were selected for ground truth labeling at baseline, then another three scans in different scanning directions would be annotated for .fda files from subsequent follow-up visits. Finally, the choroidal boundaries and the nulla in 2162 B-scans (721 .fda files were used, with one slice excluded due to suboptimal image quality) were annotated by a trained and experienced grader familiar with choroidal structures, with the aid of OCT-Marker (https://github.com/neurodial/OCT-Marker), an open-source tool for creating labels on OCT images. The choroidal inner boundary was deemed the basal margin of the retinal pigment epithelium (RPE) layer, and the outer boundary was the choroidal–scleral interface (CSI).40 The foveal center (or nulla) was determined as the deepest point in the foveal depression (Fig. 1).41 These 2162 scans were assigned to training, validation, and test sets with a 75/15/10 split. 
Figure 1.
 
Images illustrating ground truth generation. (A) Infrared reflectance image with the orientation of the 12 radial scanning lines projected onto it. (B) Representative radial OCT image (horizontal scan). (C) Overlay of the choroidal inner boundary (yellow line), outer boundary (blue line), and the nulla (or fovea center; red point).
The original study was registered with ClinicalTrials.gov (Identifier: NCT04073238) and was performed according to the tenets of the Declaration of Helsinki. The protocol was approved by the institutional review board of Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China (Identifier: 2019KYPJ093). 
Development and Workflow of the Automated Program
A flow chart of the automated algorithm for assessing choroidal structure is displayed in Figure 2. The Medical Transformer network (MedT-Net) and Small Attention U-Net (SmaAt-UNet) were used as base models for choroidal boundary segmentation and fovea center detection (Fig. 3).42,43 Automated image denoising (image brightness adjustment) and the Niblack auto local threshold procedure were then applied to differentiate between LA and SA (Fig. 4). 
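To make the binarization step concrete, here is a minimal pure-NumPy sketch of Niblack local thresholding and the CVI computation. This is an illustration, not the study's implementation: the study applied the Niblack auto local threshold as implemented in ImageJ, and the `window` and `k` values below are illustrative defaults, not the study's settings.

```python
import numpy as np

def niblack_threshold(img: np.ndarray, window: int = 15, k: float = 0.2) -> np.ndarray:
    """Per-pixel Niblack threshold: T = local mean + k * local std over a window x window patch."""
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    return win.mean(axis=(-2, -1)) + k * win.std(axis=(-2, -1))

def binarize(img: np.ndarray, window: int = 15, k: float = 0.2) -> np.ndarray:
    """True = light (stromal) pixels, False = dark (luminal) pixels."""
    return img > niblack_threshold(img, window, k)

def cvi(binary: np.ndarray, choroid_mask: np.ndarray) -> float:
    """Choroidal vascularity index: luminal area / total choroidal area within the mask."""
    luminal = np.logical_and(~binary, choroid_mask).sum()
    return float(luminal / choroid_mask.sum())

# Illustrative run on a synthetic scan with a pretend choroid band.
rng = np.random.default_rng(0)
scan = rng.random((96, 96))
mask = np.zeros((96, 96), dtype=bool)
mask[30:70, :] = True
value = cvi(binarize(scan), mask)
```

Only pixels inside the choroid mask contribute to the CVI, which is why the segmentation step must precede the binarization step in the pipeline.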
Figure 2.
 
Flow chart of the automated program. CT, choroidal thickness.
Figure 3.
 
MedT-Net and SmaAt-UNet used in the DCAP for segmentation. (A) MedT-Net has two branches, one for global features and another for local ones. The global branch operates on the image as a whole, and the local branch is trained by splitting the whole image into several patches. (B) SmaAt-UNet was added to improve the accuracy of the upscaled fovea location from the MedT-Net model. (C) Segmentation mask generated by MedT-Net and SmaAt-UNet, consisting of the choroidal boundaries (inner and outer boundaries; yellow arrows) and the fovea center (red arrow).
Figure 4.
 
Representative denoised and binary images and segmentation masks generated automatically by the DCAP. A raw SS-OCT image (A) was denoised (B) and then converted to a binary image (C) automatically using the DCAP. The luminal area (dark pixels, red arrow) and the stromal area (light pixels, yellow arrow) are indicated. (D) The same image as in (A) but overlaid with annotations generated automatically by the DCAP. Point A indicates the fovea center, and point B indicates the point on the choroidal inner boundary nearest to the fovea center. Lines 1 to 4 are the tangent line of the choroidal inner boundary at point B (line 1), the perpendicular to line 1 at point B (line 2), and the margins on both sides (lines 3 and 4) of the ROI. The 1500-µm-wide ROI in the choroid (closed arrowhead) and the choroidal inner and outer boundaries (green arrows) are displayed.
The method of image denoising based on choroidal vessel luminance was adapted from Sonoda et al.,11,18 who randomly selected three choroidal vessels with lumens larger than 100 µm with the aid of ImageJ (National Institutes of Health, Bethesda, MD) and set the average intensity of these areas as the minimum brightness value to minimize noise in OCT scans.11 To automate and optimize this procedure,11 we proposed an automated protocol for measuring the luminance of the luminal areas in an OCT scan. First, the raw OCT image was binarized by Niblack auto local thresholding with inverted settings. Then, the predicted choroidal segmentation mask was overlaid to extract the choroid layer, generating a black image with white spots representing luminal areas. Erosion was applied twice to restrict the surviving white pixels almost entirely to the luminal areas. The average pixel intensity within these areas on the raw image was calculated and set as the minimum brightness. Image brightness was then adjusted according to this value before final binarization using the Niblack auto local threshold procedure (Fig. 4). 
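The denoising protocol above can be sketched as follows, assuming NumPy arrays scaled to [0, 1]. This is a hedged illustration, not the published implementation: the 3 × 3 erosion element, the stand-in `binarize` function, and the use of clipping to apply the minimum-brightness adjustment are assumptions.

```python
import numpy as np

def erode(mask: np.ndarray) -> np.ndarray:
    """Binary erosion with a 3x3 square: a pixel survives only if its whole neighborhood is True."""
    padded = np.pad(mask, 1, mode="constant", constant_values=False)
    win = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return win.all(axis=(-2, -1))

def luminal_reference_brightness(raw, choroid_mask, binarize):
    # Step 1: inverted binarization -- luminal (dark) pixels become True.
    lumen = ~binarize(raw) & choroid_mask
    # Step 2: erode twice so only core luminal pixels survive.
    core = erode(erode(lumen))
    # Step 3: average the raw intensity over the surviving pixels.
    return float(raw[core].mean())

def denoise(raw, min_brightness):
    # Apply the minimum-brightness adjustment (here modeled as clipping).
    return np.clip(raw, min_brightness, None)

# Illustrative synthetic B-scan: bright stroma with one dark vessel lumen.
rng = np.random.default_rng(1)
raw = rng.uniform(0.6, 1.0, (64, 64))
raw[20:40, 10:50] = rng.uniform(0.0, 0.2, (20, 40))
choroid = np.ones_like(raw, dtype=bool)  # pretend the whole frame is choroid
ref = luminal_reference_brightness(raw, choroid, lambda im: im > 0.4)
clean = denoise(raw, ref)
```

The double erosion matters: it discards boundary pixels where luminal and stromal intensities mix, so the reference brightness is estimated from vessel interiors only.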
Based on the predicted fovea center (or nulla), a 1500-µm-wide choroidal ROI centered on the nulla was identified (Fig. 4D) in two steps. The first was to determine the point (point B) on the predicted choroidal inner boundary nearest to the predicted fovea center (point A). Then, a 1500-µm-wide region was centered on point B and rotated to match the tangent of the choroidal inner boundary at that point (Fig. 4D). The anterior margin of the ROI was deemed the basal margin of the RPE layer, and the posterior margin was the CSI.40 The nasal and temporal margins of the ROI were two parallel lines centered on the nulla and perpendicular to the tangent of the choroidal inner boundary at point B; the distance between these two parallel lines was 1500 µm (Fig. 4D). Subsequently, choroidal parameters including the LA, SA, TCA, CVI, and average choroidal thickness within the ROI were measured. The average choroidal thickness was calculated from the TCA and the width of the ROI. 
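The two-step ROI placement can be sketched geometrically as follows, assuming the inner boundary is given as an (N, 2) array of (x, y) pixel coordinates; the pixel pitch (`um_per_px`) and the central-difference tangent estimate are illustrative assumptions, not the study's values.

```python
import numpy as np

def roi_margins(inner_boundary, fovea_center, width_um=1500.0, um_per_px=10.0):
    """Return point B and the two lateral ROI margin points along the boundary tangent."""
    inner_boundary = np.asarray(inner_boundary, float)
    fovea_center = np.asarray(fovea_center, float)
    # Step 1: point B = inner-boundary point nearest to the predicted fovea center (point A).
    d = np.linalg.norm(inner_boundary - fovea_center, axis=1)
    b_idx = int(np.argmin(d))
    point_b = inner_boundary[b_idx]
    # Step 2: estimate the tangent at B from neighboring boundary points (central difference).
    lo, hi = max(b_idx - 1, 0), min(b_idx + 1, len(inner_boundary) - 1)
    tangent = inner_boundary[hi] - inner_boundary[lo]
    tangent = tangent / np.linalg.norm(tangent)
    # Nasal/temporal margins sit +/- 750 um along the tangent; the margin lines themselves
    # run perpendicular to the tangent, so the ROI is rotated to match the boundary at B.
    half_width_px = (width_um / 2.0) / um_per_px
    return point_b, point_b + half_width_px * tangent, point_b - half_width_px * tangent

# Illustrative flat boundary at row 50 with the fovea just above it.
inner = np.stack([np.arange(100.0), np.full(100, 50.0)], axis=1)
b, m1, m2 = roi_margins(inner, [50.0, 40.0])
```

Rotating the ROI to the local tangent, rather than keeping it axis-aligned, is what keeps the measured width at 1500 µm even when the choroid is tilted in the B-scan.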
Evaluation Metrics of the Automatic Segmentation
The average unsigned surface detection error (AUSDE), thickness difference (TD), and Dice similarity coefficient (DSC) were calculated to evaluate the performance of automatic choroidal boundary segmentation.25 The AUSDEs of the choroidal inner and outer boundaries were calculated separately and represent the pixelwise mismatch between the predicted choroid boundary and the ground truth (smaller is better). The TD is the mean absolute difference in choroidal thickness relative to the manual reference (smaller is better), and the DSC quantifies the overlap between regions segmented by the automatic algorithm and by manual methods (larger is better). The Euclidean distance (namely, the pixelwise distance from the predicted fovea center to the ground truth) was employed to assess the performance of automated fovea center detection. These evaluation metrics were computed on the validation dataset. 
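These metrics can be sketched as follows, assuming one plausible representation: boundaries as per-column depth (row-index) arrays and segmented regions as boolean masks. The study's exact implementation is not specified, so this is an illustrative reading of the definitions above.

```python
import numpy as np

def ausde(pred_boundary, true_boundary):
    """Average unsigned surface detection error: mean |pred - truth| depth per A-scan column."""
    return float(np.abs(np.asarray(pred_boundary) - np.asarray(true_boundary)).mean())

def thickness_difference(pred_inner, pred_outer, true_inner, true_outer):
    """TD: mean absolute difference between predicted and reference choroidal thickness."""
    pred_t = np.asarray(pred_outer) - np.asarray(pred_inner)
    true_t = np.asarray(true_outer) - np.asarray(true_inner)
    return float(np.abs(pred_t - true_t).mean())

def dice(pred_mask, true_mask):
    """DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return float(2.0 * inter / (pred_mask.sum() + true_mask.sum()))

def fovea_error(pred, truth):
    """Euclidean (pixelwise) distance from the predicted fovea center to the ground truth."""
    return float(np.linalg.norm(np.asarray(pred) - np.asarray(truth)))

# Tiny illustrative inputs: 3 A-scan columns and a 10x10 mask.
e_inner = ausde([10.0, 10.0, 10.0], [11.0, 11.0, 11.0])
td = thickness_difference([10.0, 10.0, 10.0], [60.0, 61.0, 60.0],
                          [11.0, 11.0, 11.0], [60.0, 60.0, 60.0])
m = np.zeros((10, 10), dtype=bool)
m[2:8, :] = True
d = dice(m, m)
fe = fovea_error([3.0, 4.0], [0.0, 0.0])
```

Note that AUSDE penalizes each boundary independently, whereas TD can hide compensating errors (both boundaries shifted by the same amount leave the thickness unchanged), which is why the two are reported together.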
Manual Assessments of Choroidal Parameters
During session 1, 20 OCT images captured at baseline (Fig. 5, dataset 1) from 20 children were randomly selected from the original testing dataset. These children underwent OCT examinations again 1 month later (session 2), and another 20 images with the same scanning locations as in session 1 were collected (Fig. 5, dataset 2). All of these images had a quality score of >60 and were taken without eye movement, residual motion artifacts, or blinking. Raw OCT images were provided unlabeled to three experienced and well-trained graders who were tasked with measuring the LA, SA, TCA, and CVI on these images manually. They were masked to the participants’ information and analyzed the images independently. Manual assessment of choroidal structure was performed according to the study by Sonoda et al.11 Briefly, OCT images were processed using ImageJ 1.53c. A 1500-µm-wide area from the basal margin of the RPE layer to the CSI, centered on the fovea, was annotated manually with the “ROI Manager.”40 Three choroidal vessels with lumens > 100 µm were randomly selected using the “Oval Selection Tool,” and the brightness values of these areas were averaged and set as the minimum value. Image brightness was adjusted according to this value. The image was then converted to 8-bit, and the Niblack auto local threshold procedure was applied. The binarized image was converted back to an RGB image, and the LA, SA, TCA, and CVI were calculated. 
Figure 5.
 
Flow chart summarizing the study of intra- and intergrader repeatability, correspondence between automatic and manual measurements, and intersession repeatability of the automatic and manual measurements of choroidal parameters.
Intra- and Intergrader Repeatability of Manual Measurements
Twenty OCT images (Fig. 5, dataset 1) were used to determine the intergrader repeatability among three observers. The same images were reanalyzed by the same observers after 1 week to evaluate the intragrader repeatability (Fig. 5). 
Agreement Between the Automatic and Manual Measurements
Twenty images (Fig. 5, dataset 1) were imported into the automated program as a package to measure choroidal parameters automatically. Then, the LA, SA, TCA, and CVI generated by the automated program were compared with the values obtained manually by three graders (Fig. 5). 
Comparison of the Intersession Repeatability Between the Automatic and Manual Measurements
Another 20 OCT images (Fig. 5, dataset 2) were analyzed by the automated program and three observers separately. Then, the LA, SA, TCA, and CVI values of the images in dataset 2 were compared with the first measurements of the images in dataset 1 by the automated program and three observers separately to determine the intersession repeatability. 
Comparison of Time Spent on Assessing Choroidal Parameters by the Automated and Manual Methods
To record the time spent on each OCT image, graders were encouraged to take a full-page screenshot of the current browser window upon completing the choroidal structure analysis of each B-scan. The timestamp of each screenshot was saved and could be retrieved using the “Image Properties Context Menu,” and the time to completion for each image was calculated as the difference between two contiguous time points. To compare the efficiency of the automated and manual methods, the total time spent analyzing the images in dataset 1 (Fig. 5) was compared between the automated program and each of the three observers. 
Statistical Analysis
All statistical analyses were performed using SPSS Statistics 25.0 (IBM, Chicago, IL) and Prism 9.0 (GraphPad, San Diego, CA). Intraclass correlation coefficients (ICCs; two-way random effects, absolute agreement, single measurement) and 95% limits of agreement (LOAs) were used to assess the intragrader, intergrader, and intersession repeatability of choroidal parameters and the agreement between the automatic and manual measurements. Based on the 95% confidence intervals (CIs) of the ICC estimate, ICC values < 0.5, 0.5 to 0.75, 0.75 to 0.9, and > 0.9 indicated poor, moderate, good, and excellent repeatability, respectively.44 The 95% LOA was defined as the mean of the differences ± 1.96 SD of the differences; smaller LOAs indicate better agreement among repeated measurements. A two-sided P < 0.05 was considered statistically significant. 
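The 95% LOA defined above can be sketched for a pair of repeated measurements as follows; the data below are purely illustrative, not from the study.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% LOA: mean difference +/- 1.96 * sample SD of the differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)  # ddof=1: sample SD over the paired differences
    return bias - spread, bias + spread

# Illustrative paired measurements (e.g., session 1 vs. session 2 TCA, in mm^2).
lower, upper = limits_of_agreement([1.00, 1.10, 0.95, 1.05], [1.02, 1.08, 0.97, 1.01])
```

A systematic bias is flagged as statistically significant when zero falls outside the 95% CI of the mean difference, which is the criterion applied in the Results tables.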
Results
Evaluation Metrics of Automatic Segmentation
The AUSDEs, which demonstrate the pixel-wise mismatch between the predicted choroidal boundary and the ground truth (smaller is better), were 1.40 ± 1.23 and 5.40 ± 2.24 pixels for the inner and outer boundaries, respectively. The AUSDE of the inner boundary was smaller than that of the outer boundary, consistent with the fact that the basal margin of the RPE layer is much more distinguishable than the CSI. The mean DSC was 95.93% ± 2.14%, indicating a high proportion of overlap between the predicted choroid layer and the ground truth (larger is better). The TD was 6.30 ± 3.18 pixels, representing a small mean absolute difference in choroidal thickness relative to the manual reference (smaller is better) (Table 1). These metrics indicate robust segmentation of the choroid layer. Additionally, the mean Euclidean distance, representing the pixel-wise distance from the predicted fovea center to the ground truth (smaller is better), was 1.92 ± 1.13 pixels, indicating that automatic detection of the fovea center was also highly accurate. 
Table 1.
 
Comparison of Choroidal Segmentation Evaluation Metrics Between MedT-Net Proposed in the Current Study and the Traditional U-Net
Inter- and Intragrader Repeatability
The inter- and intragrader agreement for area measurements (LA, SA, and TCA) was very high, with ICCs > 0.9, indicating excellent repeatability. However, the ICCs for CVI calculation were generally lower than area measurements, with ICCs ranging from 0.816 to 0.833 and 0.854 to 0.919 for the inter- and intragrader agreement, respectively. Moreover, although the ICCs obtained for the CVI were over 0.800, the lower limits of the 95% CIs for inter- and intragrader agreement ranged from 0.407 to 0.631 and 0.674 to 0.807, respectively. Therefore, based on statistical inference, the level of intergrader repeatability was “poor” to “good,” and the level of intragrader repeatability was “moderate” to “excellent” (Table 2). All measured choroidal parameters demonstrated minimal mean differences between two graders and within the same graders. However, none resided outside the 95% CIs of the mean differences for the following: intragrader agreement of LA within grader 1, intergrader agreement of SA (grader 1 vs. grader 3; grader 2 vs. grader 3), intergrader agreement of TCA (grader 2 vs. grader 3), and intergrader agreement of CVI (grader 1 vs. grader 3), indicating that the systematic bias was statistically significant (Table 2). 
Table 2.
 
Intra- and Intergrader Agreement in Choroidal Parameters
Agreement of the Automatic and Manual Measurements
The agreement between the automatic and manual measurements of TCA was excellent, with ICCs > 0.900, indicating high correlation between the predicted choroidal ROIs and the ROIs delineated manually. The ICCs were also high (>0.800) for the luminal and stromal areas. However, the ICCs for the CVI were low, ranging between 0.532 and 0.696, indicating only moderate agreement between the automatic and manual CVI calculations (Table 3). Additionally, zero resided within the 95% CIs of the mean differences between the automatic and averaged manual measurements for all choroidal parameters, indicating no significant systematic bias. Choroidal parameters for SS-OCT images measured by the three graders and the automated program are shown in Supplementary Table S2. 
Table 3.
 
Agreement Between Automatic and Manual Measurements of Choroidal Parameters
Intersession Repeatability of the Automatic and Manual Measurements
Although the automated and manual measurements both had high intersession agreement for area measurements (ICCs > 0.800), the automated algorithm provided the lowest systematic bias and LOAs (Table 4). As for the intersession repeatability of CVI calculations, all three observers had moderate agreement, with ICCs ranging between 0.553 and 0.576, whereas the automated program had good repeatability, with an ICC of 0.872 (95% CI, 0.705–0.948). The automated program also yielded the lowest systematic bias (−0.046%; 95% CI, −0.453 to 0.360) and the smallest LOAs (−1.749%, 1.657%) for the intersession repeatability of CVI measurements. Moreover, the bias of the automated program for CVI calculations was approximately 1/4 to 1/3 of that of grader 1, who showed the lowest intersession systematic bias among the three graders, and the lower and upper limits (−1.749%, 1.657%) of the automated program were less than half of those of grader 1 (−3.881%, 3.566%) (Table 4). 
Table 4.
 
Intersession Repeatability of the Automatic and Manual Measurements of Choroidal Parameters
Time Spent on the Automatic and Manual Measurements
As depicted in Figure 6, the time spent by the automated program was far less than the time spent in the manual process. The automated program required approximately 1.00 second per image, whereas graders 1 to 3 needed 438.00 ± 75.88, 393.25 ± 78.77, and 410.10 ± 56.03 seconds per image, respectively. 
Figure 6.
 
Time spent on assessing choroidal parameters on 20 SS-OCT images by the DCAP and three graders separately. The times required by the three graders are presented as means ± SD within the columns. For the DCAP, the approximate time for analyzing one slice of the OCT B-scan = (total time spent analyzing 20 OCT B-scans)/20.
Discussion
This study presents a robust, fully automated, deep learning–based algorithm that assesses choroidal structure within a specified region of interest. The mean times required for choroidal structure analysis were 1.00 second per image for the DCAP versus 438.00 ± 75.88, 393.25 ± 78.77, and 410.10 ± 56.03 seconds per image for the three graders; the automated program was therefore over 400 times faster than the manual method. The automated program also reduced intragrader, intergrader, and intersession variations. These features allow for large-scale analyses, detection of minor changes in choroidal parameters, comparison across different studies, and tracking of longitudinal choroidal structure changes. 
The choroidal structure is altered pathologically in various ocular and systemic conditions,3–9,45,46 such as high myopia, diabetic retinopathy, glaucoma, age-related macular degeneration (AMD), pachychoroid diseases, and Parkinson's disease. Currently, choroidal thickness is the most commonly used choroidal biomarker.17,47 Choroidal thinning occurs in eyes with high myopia48 and AMD,49 whereas choroidal thickening has been observed in central serous chorioretinopathy50 and Vogt–Koyanagi–Harada disease.51 However, measuring choroidal thickness alone gives no information on which choroidal component is affected by disease, given that the choroid is composed of vessels embedded in stroma: a reduction in choroidal thickness may be due to shrinkage of the blood vessels, the stromal tissue, or both. The CVI, defined as the ratio of luminal area to total choroidal area,13 is a novel OCT-based parameter of choroidal vascularity that can identify the exact choroidal component underlying choroidal thickness changes. This marker has been validated as a reliable tool for choroidal structure analysis in normal and diseased states14,15 and has become a growing subject of interest in ocular research. In addition, given reports that the eyes of patients with diabetes mellitus and the normal fellow eyes of patients with AMD demonstrated decreased CVI with no corresponding change in choroidal thickness,45,52 subclinical disease may be present, and the CVI can provide insight into choroidal structural changes beyond what choroidal thickness reveals. 
Traditional choroidal structure analysis is generally performed manually; however, this approach is labor intensive and subject to inter- and intragrader variability arising from its purely subjective input.36,53 Although the manual method produced excellent inter- and intragrader agreement for choroidal area measurements (LA, SA, and TCA) in both the current and previous studies (Supplementary Table S3),11,13,18 CVI values varied substantially even when retested by the same graders in our study, a finding not reported in the previous literature.11,13,18 Likewise, although the manual method showed high intersession repeatability for area measurements, the intersession repeatability of CVI was only moderate, with wide limits of agreement around the systematic bias. In contrast, the automated program minimized intragrader, intergrader, and intersession variability. This finding is consistent with reports that agreement between automatic and manual measurements was excellent for the areas but poor for the CVI. 
The large variations in CVI between the automatic and manual methods may originate from variability in the four margins of the ROI or from variability in the background intensity sampled from three blood vessels during the manual process. As shown in Table 3, however, agreement between the automated and manual measurements of total choroidal area was excellent, with ICCs > 0.900, indicating high correspondence between the choroidal ROIs predicted by the DCAP and those delineated manually. A major problem with sampling background intensity from three blood vessels in the manual process was that illumination sometimes differed greatly among the lumens within the same OCT image. In the manual method, brightness is adjusted after random selection of three choroidal vessels and set to a minimum value to reduce noise before binarization11; however, this adjustment can interfere with the isolation and measurement of the luminal and stromal areas. Differentiation of the choroidal luminal area from the stromal area can therefore be distorted by image brightness adjustment (image denoising), which may be a major cause of the large variations in CVI measurements, particularly in OCT images with large variations in luminal brightness. 
To overcome the limitations of manual brightness adjustment, automated brightness adjustment for image denoising was integrated into the algorithm. The automatic image denoising in the DCAP ensured consistent repeatability and minimized bias in the denoising step, and it took the entire choroidal luminal area within an OCT B-scan into consideration when adjusting brightness. Because image denoising is a challenging and open problem, however, the solution is not unique; comparing the denoising technique used in the current study with those used elsewhere54,55 may identify a better method for choroidal structure analysis. 
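The binarization step that follows denoising can be illustrated with Niblack local thresholding, the method commonly used in the CVI literature for separating dark luminal pixels from bright stromal pixels (e.g., via ImageJ in the Sonoda protocol11). The DCAP's exact implementation is not given here, so this pure-NumPy version is only a sketch; the window size and k value are illustrative defaults:

```python
import numpy as np

def niblack_threshold(img, window=15, k=-0.2):
    """Local threshold T = mean + k * std over a sliding window (Niblack)."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window]
            out[i, j] = patch.mean() + k * patch.std()
    return out

def binarize_choroid(img, window=15, k=-0.2):
    """Pixels darker than the local threshold are taken as luminal area."""
    return img < niblack_threshold(img, window, k)
```

Because the threshold adapts to the local mean and standard deviation, lumens in dimly and brightly illuminated parts of the same B-scan are treated consistently, which is the property the automated denoising aims to preserve.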
The DCAP combines choroidal segmentation with detection of the foveal center (also referred to as the nulla, the deepest point of the fovea) using deep learning to automatically determine an ROI of defined width. The MedT-Net algorithm was applied to choroidal segmentation, anchoring results to the ground truth and providing good repeatability. Compared with U-Net,56 MedT-Net made fewer errors when segmenting the choroidal boundaries and achieved a higher Dice coefficient (Table 1). Although previous models for choroidal segmentation have reported satisfactory results,20,31,33,34,36 they were limited to segmentation alone, without consideration of the location and width of the examined choroidal area. Defining an ROI is essential for comparison across studies and for tracking longitudinal choroidal structural changes; without it, regional variation within the choroid may be mistaken for longitudinal change, leading to false-positive findings. Maloca et al.41 applied a hybrid machine learning algorithm to automated OCT image processing that combines retinal boundary segmentation with detection of the nulla, defined as the deepest point within the foveolar cavity.41 Their algorithm could also segment the choroid, although measurement of central retinal thickness was the main purpose of that study.41 The current study and the method proposed by Maloca et al.41,57 share the use of the nulla as a reference landmark; following that path, our group used the nulla to determine a choroidal ROI of fixed width for longitudinal tracking of microscopic changes. 
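The ROI construction can be approximated in code. The full DCAP procedure uses the tangent to the inner boundary at the point nearest the nulla (Figure 4); the sketch below simplifies this to vertical margins on a horizontally oriented scan, and the function name and pixel scale are assumptions:

```python
import numpy as np

def roi_columns(nulla_x: int, width_um: float, um_per_pixel: float, n_cols: int):
    """Columns of a B-scan inside a fixed-width ROI centered on the nulla.

    Simplified sketch: assumes the scan is horizontally oriented, so the
    ROI margins are vertical lines at +/- width/2 from the nulla column.
    """
    half = int(round(width_um / 2 / um_per_pixel))
    lo = max(0, nulla_x - half)                # clamp at the image edges
    hi = min(n_cols, nulla_x + half + 1)
    return np.arange(lo, hi)

# Example: 1500-um-wide ROI at ~10 um/pixel -> 75 pixels each side of the nulla
cols = roi_columns(nulla_x=512, width_um=1500, um_per_pixel=10.0, n_cols=1024)
print(cols[0], cols[-1])  # 437 587
```

Anchoring the ROI to the nulla in this way is what lets measurements from different sessions, and different studies, refer to the same patch of choroid.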
Although the significance of choroidal structure analysis is only beginning to be elucidated, choroidal changes have so far been linked to various ocular and systemic conditions.3,4,6,7,14,15 If OCT becomes more widely adopted as a screening method for ocular and systemic disease, clinicians will have the opportunity to meaningfully apply the DCAP. 
Key strengths of this study include its large sample size (2162 OCT images) and the combination of choroidal segmentation with foveal center detection to reliably identify a choroidal region of interest of defined width for subsequent choroidal structure analysis. In addition, the posterior choroidal boundary was explicitly defined to reduce variability in boundary annotation.40 Some limitations, however, should be acknowledged. First, the participants in this study were children with low to moderate myopia; validation of this program in elderly and diseased patients should be the aim of future research. Nevertheless, myopia is the most common ocular disorder worldwide and has increased in prevalence in recent years, especially in East and Southeast Asia.58–60 Numerous studies have demonstrated the essential role of the choroid in myopia progression7,48,61–73; therefore, we believe that the DCAP can be widely used in the field of myopia. Second, the algorithm was validated using OCT images acquired from a DRI OCT Triton machine, a commercially available and widely used OCT system; whether the algorithm generalizes to images from other OCT devices (e.g., Zeiss PLEX Elite, Heidelberg SPECTRALIS) remains unclear and requires further investigation. Third, although medium- to large-size vessels could be visualized, detection of small vessels was not possible owing to limited lateral resolution and lack of contrast. Future studies should combine binarization with OCT angiography to allow simultaneous evaluation of choriocapillaris density and vascular area.74 
In conclusion, the DCAP is faster than the manual approach for quantitative choroidal structure assessment. The combination of choroidal segmentation with foveal center detection facilitates comparison across studies and longitudinal tracking of choroidal structural changes. In addition, automatic image denoising and binarization provide consistent, repeatable differentiation between LA and SA, minimizing intragrader, intergrader, and intersession variability. The DCAP has the potential to provide efficient, accurate, and reliable measurement of the choroidal structure and could serve as a useful tool for early diagnosis and monitoring of disease progression. 
Acknowledgments
Supported by grants from the Fundamental Research Funds of the State Key Laboratory of Ophthalmology, National Natural Science Foundation of China (81420108008 and 81271037). 
Disclosure: M. Xuan, None; W. Wang, None; D. Shi, None; J. Tong, None; Z. Zhu, None; Y. Jiang, None; Z. Ge, None; J. Zhang, None; G. Bulloch, None; G. Peng, None; W. Meng, None; C. Li, None; R. Xiong, None; Y. Yuan, None; M. He, None 
References
Alm A, Bill A. Ocular and optic nerve blood flow at normal and increased intraocular pressures in monkeys (Macaca irus): A study with radioactively labelled microspheres including flow determinations in brain and some other tissues. Exp Eye Res. 1973; 15: 15–29. [CrossRef] [PubMed]
Nickla DL, Wallman J. The multifunctional choroid. Prog Retin Eye Res. 2010; 29: 144–168. [CrossRef] [PubMed]
Robbins CB, Thompson AC, Bhullar PK, et al. Characterization of retinal microvascular and choroidal structural changes in Parkinson disease. JAMA Ophthalmol. 2021; 139: 182–188. [CrossRef] [PubMed]
Lee M, Lee H, Kim HC, Chung H. Changes in stromal and luminal areas of the choroid in pachychoroid diseases: Insights into the pathophysiology of pachychoroid diseases. Invest Ophthalmol Vis Sci. 2018; 59: 4896–4908. [CrossRef] [PubMed]
Karslioglu MZ, Kesim C, Yucel O, et al. Choroidal vascularity index in pseudoexfoliative glaucoma. Int Ophthalmol. 2021; 41: 4197–4208. [CrossRef] [PubMed]
Gupta P, Thakku SG, Sabanayagam C, et al. Characterisation of choroidal morphological and vascular features in diabetes and diabetic retinopathy. Br J Ophthalmol. 2017; 101: 1038–1044. [CrossRef] [PubMed]
Gupta P, Thakku SG, Saw SM, et al. Characterization of choroidal morphologic and vascular features in young men with high myopia using spectral-domain optical coherence tomography. Am J Ophthalmol. 2017; 177: 27–33. [CrossRef] [PubMed]
Aksoy FE, Altan C, Kesim C, et al. Choroidal vascularity index as an indicator of vascular status of choroid, in eyes with nanophthalmos. Eye (Lond). 2020; 34: 2336–2340. [CrossRef] [PubMed]
Ugurlu E, Pekel G, Akbulut S, Cetin N, Durmus S, Altinisik G. Choroidal vascularity index and thickness in sarcoidosis. Medicine. 2022; 101: e28519. [CrossRef] [PubMed]
Laíns I, Wang JC, Cui Y, et al. Retinal applications of swept source optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA). Prog Retin Eye Res. 2021; 84: 100951. [CrossRef] [PubMed]
Sonoda S, Sakamoto T, Yamashita T, et al. Choroidal structure in normal eyes and after photodynamic therapy determined by binarization of optical coherence tomographic images. Invest Ophthalmol Vis Sci. 2014; 55: 3893–3899. [CrossRef] [PubMed]
Wei X, Sonoda S, Mishra C, et al. Comparison of choroidal vascularity markers on optical coherence tomography using two-image binarization techniques. Invest Ophthalmol Vis Sci. 2018; 59: 1206–1211. [CrossRef] [PubMed]
Agrawal R, Gupta P, Tan KA, Cheung CM, Wong TY, Cheng CY. Choroidal vascularity index as a measure of vascular status of the choroid: Measurements in healthy eyes from a population-based study. Sci Rep. 2016; 6: 21090. [CrossRef] [PubMed]
Betzler BK, Ding J, Wei X, et al. Choroidal vascularity index: A step towards software as a medical device. Br J Ophthalmol. 2022; 106: 149–155. [CrossRef] [PubMed]
Agrawal R, Ding J, Sen P, et al. Exploring choroidal angioarchitecture in health and disease using choroidal vascularity index. Prog Retin Eye Res. 2020; 77: 100829. [CrossRef] [PubMed]
Ng WY, Ting DS, Agrawal R, et al. Choroidal structural changes in myopic choroidal neovascularization after treatment with antivascular endothelial growth factor over 1 year. Invest Ophthalmol Vis Sci. 2016; 57: 4933–4939. [CrossRef] [PubMed]
Singh SR, Vupparaboina KK, Goud A, Dansingani KK, Chhablani J. Choroidal imaging biomarkers. Surv Ophthalmol. 2019; 64: 312–333. [CrossRef] [PubMed]
Sonoda S, Sakamoto T, Yamashita T, et al. Luminal and stromal areas of choroid determined by binarization method of optical coherence tomographic images. Am J Ophthalmol. 2015; 159: 1123–1131.e1. [CrossRef] [PubMed]
Tian J, Marziliano P, Baskaran M, Tun TA, Aung T. Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images. Biomed Opt Express. 2013; 4: 397–411. [CrossRef] [PubMed]
Chen HJ, Huang YL, Tse SL, et al. Application of artificial intelligence and deep learning for choroid segmentation in myopia. Transl Vis Sci Technol. 2022; 11: 38.
Masood S, Fang R, Li P, et al. Automatic choroid layer segmentation from optical coherence tomography images using deep learning. Sci Rep. 2019; 9: 3058. [CrossRef] [PubMed]
He F, Chun RKM, Qiu Z, et al. Choroid segmentation of retinal OCT images based on CNN classifier and l2-lq fitter. Comput Math Methods Med. 2021; 2021: 8882801. [PubMed]
Cheng X, Chen X, Ma Y, Zhu W, Fan Y, Shi F. Choroid segmentation in OCT images based on improved U-net. In: Angelini ED, Landman BA, eds. Medical Imaging 2019: Image Processing. Bellingham, WA: SPIE; 2019: 521–527.
Kugelman J, Alonso-Caneiro D, Read SA, et al. Automatic choroidal segmentation in OCT images using supervised deep learning methods. Sci Rep. 2019; 9: 13298. [CrossRef] [PubMed]
Zhang H, Yang J, Zhou K, Li F, et al. Automatic segmentation and visualization of choroid in OCT with knowledge infused deep learning. IEEE J Biomed Health Inform. 2020; 24: 3408–3420. [CrossRef] [PubMed]
Lin CY, Huang YL, Hsia WP, Wang Y, Chang CJ. Correlation of choroidal thickness with age in healthy subjects: Automatic detection and segmentation using a deep learning model. Int Ophthalmol. 2022; 42: 3061–3070. [CrossRef] [PubMed]
Xu X, Wang X, Lin J, et al. Automatic segmentation and measurement of choroid layer in high myopia for OCT imaging using deep learning. J Digit Imaging. 2022; 35: 1153–1163. [CrossRef] [PubMed]
Li M, Zhou J, Chen Q, et al. Choroid automatic segmentation and thickness quantification on swept-source optical coherence tomography images of highly myopic patients. Ann Transl Med. 2022; 10: 620. [CrossRef] [PubMed]
Sui X, Zheng Y, Wei B, et al. Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks. Neurocomputing. 2017; 237: 332–341. [CrossRef]
Li J, Zhu L, Zhu R, et al. Automated analysis of choroidal sublayer morphologic features in myopic children using EDI-OCT by deep learning. Transl Vis Sci Technol. 2021; 10: 12. [CrossRef] [PubMed]
Khaing TT, Okamoto T, Ye C, et al. ChoroidNET: A dense dilated U-Net model for choroid layer and vessel segmentation in optical coherence tomography images. IEEE Access. 2021; 9: 150951–150965. [CrossRef]
Zheng G, Jiang Y, Shi C, et al. Deep learning algorithms to segment and quantify the choroidal thickness and vasculature in swept-source optical coherence tomography images. J Innov Opt Health Sci. 2021; 14: 2140002. [CrossRef]
Liu X, Bi L, Xu Y, Feng D, Kim J, Xu X. Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images. Biomed Opt Express. 2019; 10: 1601–1612. [CrossRef] [PubMed]
Vupparaboina KK, Dansingani KK, Goud A, et al. Quantitative shadow compensated optical coherence tomography of choroidal vasculature. Sci Rep. 2018; 8: 6461. [CrossRef] [PubMed]
Khaing TT, Okamoto T, Ye C, et al. Automatic measurement of choroidal thickness and vasculature in optical coherence tomography images of eyes with retinitis pigmentosa. Artif Life Robot. 2022; 27: 70–79. [CrossRef]
Liu X, Jin K, Yang Z, et al. A curriculum learning-based fully automated system for quantification of the choroidal structure in highly myopic patients. Phys Med Biol. 2022; 67: 125015. [CrossRef]
Bartol-Puyal FA, Pablo Júlvez L. Deep-learning algorithms for choroidal thickness measurements in high myopia. Ann Transl Med. 2022; 10: 654. [CrossRef] [PubMed]
Kakiuchi N, Terasaki H, Sonoda S, et al. Regional differences of choroidal structure determined by wide-field optical coherence tomography. Invest Ophthalmol Vis Sci. 2019; 60: 2614–2622. [CrossRef] [PubMed]
Jiang Y, Zhu Z, Tan X, et al. Effect of repeated low-level red-light therapy for myopia control in children: A multicenter randomized controlled trial. Ophthalmology. 2022; 129: 509–519. [CrossRef] [PubMed]
Yiu G, Pecen P, Sarin N, et al. Characterization of the choroid-scleral junction and suprachoroidal layer in healthy individuals on enhanced-depth imaging optical coherence tomography. JAMA Ophthalmol. 2014; 132: 174–181. [CrossRef] [PubMed]
Maloca PM, Seeger C, Booler H, et al. Uncovering of intraspecies macular heterogeneity in cynomolgus monkeys using hybrid machine learning optical coherence tomography image segmentation. Sci Rep. 2021; 11: 20647. [CrossRef] [PubMed]
Valanarasu JMJ, Oza P, Hacihaliloglu I, Patel VM. Medical transformer: Gated axial-attention for medical image segmentation. In: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer; 2021: 36–46.
Trebing K, Stańczyk T, Mehrkanoon S. SmaAt-UNet: Precipitation nowcasting using a small attention-UNet architecture. Pattern Recognit Lett. 2021; 145: 178–186. [CrossRef]
Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016; 15: 155–163. [CrossRef] [PubMed]
Tan KA, Laude A, Yip V, Loo E, Wong EP, Agrawal R. Choroidal vascularity index - a novel optical coherence tomography parameter for disease monitoring in diabetes mellitus? Acta Ophthalmol. 2016; 94: e612–e616. [CrossRef] [PubMed]
Ting DSW, Yanagi Y, Agrawal R, et al. Choroidal remodeling in age-related macular degeneration and polypoidal choroidal vasculopathy: A 12-month prospective study. Sci Rep. 2017; 7: 7868. [CrossRef] [PubMed]
Laviers H, Zambarakji H. Enhanced depth imaging-OCT of the choroid: A review of the current literature. Graefes Arch Clin Exp Ophthalmol. 2014; 252: 1871–1883. [CrossRef] [PubMed]
Gupta P, Saw SM, Cheung CY, et al. Choroidal thickness and high myopia: A case-control study of young Chinese men in Singapore. Acta Ophthalmol. 2015; 93: e585–e592. [CrossRef] [PubMed]
Fan W, Abdelfattah NS, Uji A, et al. Subfoveal choroidal thickness predicts macular atrophy in age-related macular degeneration: results from the TREX-AMD trial. Graefes Arch Clin Exp Ophthalmol. 2018; 256: 511–518. [CrossRef] [PubMed]
Kuroda S, Ikuno Y, Yasuno Y, et al. Choroidal thickness in central serous chorioretinopathy. Retina. 2013; 33: 302–308. [CrossRef] [PubMed]
Tagawa Y, Namba K, Mizuuchi K, et al. Choroidal thickening prior to anterior recurrence in patients with Vogt-Koyanagi-Harada disease. Br J Ophthalmol. 2016; 100: 473–477. [CrossRef] [PubMed]
Koh LHL, Agrawal R, Khandelwal N, Sai Charan L, Chhablani J. Choroidal vascular changes in age-related macular degeneration. Acta Ophthalmol. 2017; 95: e597–e601. [CrossRef] [PubMed]
Yang J, Wang X, Wang Y, et al. CVIS: Automated OCT-scan-based software application for the measurements of choroidal vascularity index and choroidal thickness. Acta Ophthalmol. 2022; 100: e1553–e1560. [CrossRef] [PubMed]
Maloca P, Gyger C, Schoetzau A, Hasler PW. Ultra-short-term reproducibility of speckle-noise freed fluid and tissue compartmentalization of the choroid analyzed by standard OCT. Transl Vis Sci Technol. 2015; 4: 3. [CrossRef] [PubMed]
Fan L, Zhang F, Fan H, Zhang C. Brief review of image denoising techniques. Vis Comput Ind Biomed Art. 2019; 2: 7. [CrossRef] [PubMed]
Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Cham: Springer International Publishing; 2015: 234–241.
Maloca PM, Freichel C, Hänsli C, et al. Cynomolgus monkey's choroid reference database derived from hybrid deep learning optical coherence tomography segmentation. Sci Rep. 2022; 12: 13276. [CrossRef] [PubMed]
Dolgin E. The myopia boom. Nature. 2015; 519: 276–278. [CrossRef] [PubMed]
Morgan IG, Ohno-Matsui K, Saw SM. Myopia. Lancet. 2012; 379: 1739–1748. [CrossRef] [PubMed]
Holden BA, Fricke TR, Wilson DA, et al. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050. Ophthalmology. 2016; 123: 1036–1042. [CrossRef] [PubMed]
Fujiwara T, Imamura Y, Margolis R, Slakter JS, Spaide RF. Enhanced depth imaging optical coherence tomography of the choroid in highly myopic eyes. Am J Ophthalmol. 2009; 148: 445–450. [CrossRef] [PubMed]
Gupta P, Jing T, Marziliano P, et al. Distribution and determinants of choroidal thickness and volume using automated segmentation software in a population-based study. Am J Ophthalmol. 2015; 159: 293–301.e3. [CrossRef] [PubMed]
Hirata M, Tsujikawa A, Matsumoto A, et al. Macular choroidal thickness and volume in normal subjects measured by swept-source optical coherence tomography. Invest Ophthalmol Vis Sci. 2011; 52: 4971–4978. [CrossRef] [PubMed]
Ho M, Liu DT, Chan VC, Lam DS. Choroidal thickness measurement in myopic eyes by enhanced depth optical coherence tomography. Ophthalmology. 2013; 120: 1909–1914. [CrossRef] [PubMed]
Ikuno Y, Kawaguchi K, Nouchi T, Yasuno Y. Choroidal thickness in healthy Japanese subjects. Invest Ophthalmol Vis Sci. 2010; 51: 2173–2176. [CrossRef] [PubMed]
Li XQ, Larsen M, Munch IC. Subfoveal choroidal thickness in relation to sex and axial length in 93 Danish university students. Invest Ophthalmol Vis Sci. 2011; 52: 8438–8441. [CrossRef] [PubMed]
Liu B, Wang Y, Li T, et al. Correlation of subfoveal choroidal thickness with axial length, refractive error, and age in adult highly myopic eyes. BMC Ophthalmol. 2018; 18: 127. [CrossRef] [PubMed]
Wang S, Wang Y, Gao X, Qian N, Zhuo Y. Choroidal thickness and high myopia: A cross-sectional study and meta-analysis. BMC Ophthalmol. 2015; 15: 70. [CrossRef] [PubMed]
Wei WB, Xu L, Jonas JB, et al. Subfoveal choroidal thickness: The Beijing Eye Study. Ophthalmology. 2013; 120: 175–180. [CrossRef] [PubMed]
Tan CS, Cheong KX. Macular choroidal thicknesses in healthy adults–relationship with ocular and demographic factors. Invest Ophthalmol Vis Sci. 2014; 55: 6452–6458. [CrossRef] [PubMed]
Xie J, Ye L, Chen Q, et al. Choroidal thickness and its association with age, axial length, and refractive error in Chinese adults. Invest Ophthalmol Vis Sci. 2022; 63: 34. [CrossRef] [PubMed]
Read SA, Alonso-Caneiro D, Vincent SJ, Collins MJ. Longitudinal changes in choroidal thickness and eye growth in childhood. Invest Ophthalmol Vis Sci. 2015; 56: 3103–3112. [CrossRef] [PubMed]
Fontaine M, Gaucher D, Sauer A, Speeg-Schatz C. Choroidal thickness and ametropia in children: A longitudinal study. Eur J Ophthalmol. 2017; 27: 730–734. [CrossRef] [PubMed]
Wu H, Zhang G, Shen M, et al. Assessment of choroidal vascularity and choriocapillaris blood perfusion in anisomyopic adults by SS-OCT/OCTA. Invest Ophthalmol Vis Sci. 2021; 62: 8. [CrossRef] [PubMed]
Figure 1.
 
Images illustrating ground truth generation. (A) Infrared reflectance image with the orientation of the 12 radial scanning lines projected onto it. (B) Representative radial OCT image (horizontal scan). (C) Overlay of the choroidal inner boundary (yellow line), outer boundary (blue line), and the nulla (or fovea center; red point).
Figure 2.
 
Flow chart of the automated program. CT, choroidal thickness.
Figure 3.
 
MedT-Net and SmaAt-UNet used in the DCAP for segmentation. (A) MedT-Net has two branches, one for global features and another for local ones. The global branch operates on the image as a whole, and the local branch is trained by splitting the whole image into several patches. (B) SmaAt-UNet was added to improve the accuracy of the upscaled fovea location from the MedT-Net model. (C) Segmentation mask generated by MedT-Net and SmaAt-UNet, consisting of the choroidal boundaries (inner and outer boundaries; yellow arrows) and the fovea center (red arrow).
Figure 4.
 
Representative denoised and binary images and segmentation masks generated by the DCAP automatically. A raw SS-OCT image (A) was denoised (B) and then converted to a binary image (C) automatically using the DCAP. The luminal area (dark pixels, red arrow) and the stromal area (light pixels, yellow arrow) are indicated. (D) The same image as in (A) but overlaid with annotations generated by the DCAP automatically. Point A indicates the fovea center, and point B indicates the point on the choroidal inner boundary nearest to the fovea center. Lines 1 to 4 are the tangent line of the choroidal inner boundary at point B (line 1), the perpendicular line of line 1 at point B (line 2), and margins on both sides (lines 3 and 4) of the ROI. A 1500-µm-wide ROI in the choroid (closed arrowhead) and the choroidal inner and outer boundaries (green arrows) are displayed.
Figure 5.
 
Flow chart summarizing the study of intra- and intergrader repeatability, correspondence between automatic and manual measurements, and intersession repeatability of the automatic and manual measurements of choroidal parameters.
Figure 6.
 
Time spent on assessing choroidal parameters on 20 SS-OCT images by the DCAP and three graders separately. The times required by the three graders are presented as means ± SD within the columns. For the DCAP, the approximate time for analyzing one slice of the OCT B-scan = (total time spent analyzing 20 OCT B-scans)/20.
Table 1.
 
Comparison of Choroidal Segmentation Evaluation Metrics Between MedT-Net Proposed in the Current Study and the Traditional U-Net
Table 2.
 
Intra- and Intergrader Agreement in Choroidal Parameters
Table 3.
 
Agreement Between Automatic and Manual Measurements of Choroidal Parameters
Table 4.
 
Intersession Repeatability of the Automatic and Manual Measurements of Choroidal Parameters