September 2024
Volume 13, Issue 9
Open Access
Artificial Intelligence
Prediction of Axial Length From Macular Optical Coherence Tomography Using Deep Learning Model
Author Affiliations & Notes
  • Richul Oh
    Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea
    Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  • Myeongkyun Kang
    Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
  • Jeeyun Ahn
    Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
    Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
  • Eun Kyoung Lee
    Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea
    Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  • Kunho Bae
    Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea
    Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  • Un Chul Park
    Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea
    Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  • Kyu Hyung Park
    Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea
    Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  • Chang Ki Yoon
    Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea
    Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  • Correspondence: Chang Ki Yoon, Department of Ophthalmology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea. e-mail: syst18@gmail.com 
Translational Vision Science & Technology September 2024, Vol.13, 14. doi:https://doi.org/10.1167/tvst.13.9.14
Abstract

Purpose: The purpose of this study was to develop a deep learning model for predicting the axial length (AL) of eyes using optical coherence tomography (OCT) images.

Methods: We retrospectively included patients with AL measurements and OCT images taken within 3 months of the AL measurement. We utilized 5-fold cross-validation with the ResNet-152 architecture, using horizontal OCT images, vertical OCT images, or both as dual inputs. The mean absolute error (MAE), R-squared (R2), and the percentages of eyes within error ranges of ±1.0, ±2.0, and ±3.0 mm were calculated.

Results: A total of 9064 eyes of 5349 patients (18,128 images in total) were included. The average AL was 24.35 ± 2.03 mm (range = 20.53–37.07 mm). Utilizing horizontal and vertical OCT images as dual inputs, the deep learning model predicted AL with an MAE of 0.592 mm and an R2 of 0.847 in the internal test set (1824 eyes of 1070 patients). In the external test set (171 eyes of 123 patients), the model predicted AL with an MAE of 0.556 mm and an R2 of 0.663. Within error margins of ±1.0, ±2.0, and ±3.0 mm, the dual-input model showed accuracies of 83.50%, 98.14%, and 99.45%, respectively, in the internal test set, and 85.38%, 99.42%, and 100.00%, respectively, in the external test set.

Conclusions: A deep learning-based model accurately predicts AL from OCT images. The dual-input model showed the best performance, demonstrating the potential of macular OCT images in AL prediction.

Translational Relevance: The study provides new insights into the relationship between retinal and choroidal structures and AL elongation using artificial intelligence models.

Introduction
Long axial length (AL) is important in the pathogenesis of various diseases, such as myopic degeneration, glaucoma, and retinal detachment.1–3 With the increasing prevalence of myopia in recent decades, the incidence and severity of myopia-related diseases are also expected to rise.4 The primary underlying mechanism of these diseases is elongation of the eyeball, which influences the anatomic structure of the retina, choroid, sclera, and optic disc.5–7 
Previous studies have reported associations between AL and posterior structural parameters using various imaging modalities, including fundus photography, optical coherence tomography (OCT), and OCT angiography.7–12 Increased AL was associated with reduced macular volume and thickness.9 In high myopia, AL was also correlated with atrophic, tractional, and neovascular components on OCT.13 The incidence of posterior staphyloma increased with longer AL.14 Furthermore, recent advancements in deep learning models have allowed the prediction of AL from fundus photography or ultra-wide-field (UWF) fundus photography.15–17 However, the potential of OCT imaging for predicting AL and providing further structural information has yet to be explored. OCT provides high-resolution cross-sectional images that offer detailed visualization of the vitreoretinal interface, retina, and choroid. 
In this study, we aimed to develop a deep learning model for predicting AL of eyes using OCT images and to investigate the accuracy of the deep learning model. 
Methods
The study was approved by the Institutional Review Board (IRB) of the Seoul National University Hospital (SNUH; IRB No. H-2202-069-1299) and the Seoul Metropolitan Government Seoul National University Boramae Medical Center (IRB No. 10-2024-26). All procedures were conducted in compliance with the principles of the Declaration of Helsinki. The review board waived the need for written informed consent due to the retrospective design of the study and complete anonymization of patient information. 
Dataset
We retrospectively enrolled patients who visited the ophthalmology clinic at SNUH between September 2018 and March 2023. Patients with an AL measurement obtained with the IOLMaster 700 (Carl Zeiss Meditec, Jena, Germany) and OCT images taken with the Spectralis OCT (Heidelberg Engineering, Heidelberg, Germany) within 3 months prior to the AL measurement were included. Horizontal and vertical linear OCT scans of 8.8 mm centered on the fovea were acquired with the Spectralis HRA+OCT in high-resolution mode with an automatic real time (ART) setting of 100. The OCT images were displayed in 1:1 pixel mode. The exclusion criteria were as follows: (1) eyes with an unmeasurable AL, and (2) lack of horizontal or vertical macular OCT images. The OCT images were classified based on the presence of macular abnormalities, which included epiretinal membrane (ERM), age-related macular degeneration (AMD), central macular edema (CME), macular hole (MH), or other macular abnormalities. The dataset (SNUH dataset) was split into a development set and an internal test set at a ratio of 4:1. Using the development set, five-fold cross-validation was performed to decrease selection bias. All splits were made at the patient and eye level to ensure that images from the same eye did not appear in more than one subset. 
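The patient-level 4:1 split and five-fold cross-validation described above can be sketched as follows. This is a minimal illustration, assuming the dataset is indexed by a table with one row per eye; the file name and column names ("patient_id", etc.) are hypothetical, not the authors' actual schema.

```python
# Minimal sketch of a patient-level 4:1 split followed by grouped five-fold CV.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit, GroupKFold

df = pd.read_csv("snuh_dataset.csv")  # hypothetical index file, one row per eye

# Development vs. internal test set (4:1), grouped by patient so that no patient
# (and hence no eye) appears in both subsets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
dev_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
dev_df, test_df = df.iloc[dev_idx], df.iloc[test_idx]

# Five-fold cross-validation inside the development set, again grouped by patient.
cv = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(cv.split(dev_df, groups=dev_df["patient_id"])):
    train_df, val_df = dev_df.iloc[train_idx], dev_df.iloc[val_idx]
    print(f"fold {fold}: {len(train_df)} training eyes, {len(val_df)} validation eyes")
```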
Data Preprocessing and Augmentation
The data acquisition process was performed manually from the Heidelberg Eye Explorer by author Richul Oh as follows: (1) export of the OCT data in E2E format, and (2) conversion of the E2E files to PNG images using the Python library OCT-Converter version 0.5.8 (https://github.com/marksgraham/OCT-Converter). We used the horizontal and vertical sections of the OCT images instead of 3D volume scans. For each training batch, the following preprocessing was used. First, because the raw images were grayscale, they were converted to red, green, and blue (RGB) by replicating each pixel value across three channels using the Pillow library. The images were then normalized between 0 and 1 and resized to 224 × 224 pixels, which is the default input size of ResNet-152.18 These preprocessing steps were performed to facilitate fine-tuning of the model and visualization of the images. Further augmentations were applied as follows: horizontal flip (random rate = 50%), vertical flip (random rate = 50%), affine rotation (−45 degrees to +45 degrees), and brightness adjustment (50–150%, uniform distribution). These augmentations were applied to the images randomly at each epoch. For the validation and test datasets, only the preprocessing steps were applied; no augmentations were used. 
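The preprocessing and augmentation steps described above can be approximated with a torchvision transform pipeline such as the sketch below. The exact composition and ordering are assumptions, since the authors implemented the steps with the Pillow library directly.

```python
# Minimal sketch of the described preprocessing and augmentation pipeline.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate gray values into RGB
    transforms.Resize((224, 224)),                # default ResNet-152 input size
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomAffine(degrees=45),          # affine rotation, -45 to +45 degrees
    transforms.ColorJitter(brightness=0.5),       # brightness 50-150% of original
    transforms.ToTensor(),                        # scales pixel values to [0, 1]
])

# Validation/test images receive only the deterministic preprocessing steps.
eval_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```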
Deep-Learning Model Development
All development processes were performed using PyTorch (torch version 1.10.0 and torchvision version 0.11.0) and Python (version 3.6.9). A private server equipped with a CPU with 64 GB RAM and an NVIDIA Titan RTX (24 GB GPU; NVIDIA, Santa Clara, CA, USA) was used for development. ResNet-152 was used as the backbone network.17 We constructed two ResNet-152 networks in parallel to accept dual input images. Instead of the last fully connected layers of the models, the outputs of the average pooling layers were flattened to one dimension (output = 1 × 2048). The two 1-dimensional outputs from the two ResNet-152 models were concatenated (output = 1 × 4096). We then added a fully connected layer (output = 1 × 128), a ReLU layer, and another fully connected layer with one output (output = 1 × 1). The PyTorch architectures of the models are provided in the Supplementary Data. The model output represented the predicted AL. For the ResNet-152 backbones, we used weights pretrained on the ImageNet dataset.19 Stochastic gradient descent was used as the optimizer, with a learning rate of 1 × 10−3, momentum of 0.9, and weight decay of 5 × 10−4. The model was trained for up to 1000 epochs and validated with the validation set using the mean absolute error (MAE) loss, defined as the mean of the absolute difference between the predicted and the actual value. The learning rate was reduced to 10% of its value when the validation loss did not improve within 10 epochs. If the learning rate fell below 1 × 10−6, training was stopped to avoid overfitting, and the model with the lowest validation loss was selected as the final model. The schematic structure and development process of the deep learning model are illustrated in Figure 1. 
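The following is a minimal PyTorch sketch consistent with the dual-input architecture and training settings described above. The class and variable names, and the use of torchvision's pretrained ResNet-152 constructor, are assumptions about one way to implement it, not the authors' exact code (which is provided in the Supplementary Data).

```python
# Sketch of the dual-input ResNet-152 regression model and training setup.
import torch
import torch.nn as nn
from torchvision import models

class DualInputResNet152(nn.Module):
    def __init__(self, pretrained: bool = True):
        super().__init__()
        self.backbone_h = models.resnet152(pretrained=pretrained)  # horizontal scan
        self.backbone_v = models.resnet152(pretrained=pretrained)  # vertical scan
        # Drop the original classification heads; each backbone now returns the
        # flattened 1 x 2048 average-pooling features.
        self.backbone_h.fc = nn.Identity()
        self.backbone_v.fc = nn.Identity()
        self.regressor = nn.Sequential(
            nn.Linear(4096, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # predicted axial length (mm)
        )

    def forward(self, x_h, x_v):
        feat = torch.cat([self.backbone_h(x_h), self.backbone_v(x_v)], dim=1)  # 1 x 4096
        return self.regressor(feat)

model = DualInputResNet152()
criterion = nn.L1Loss()  # mean absolute error
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
# Reduce the learning rate to 10% when the validation loss plateaus for 10 epochs;
# training stops once the learning rate drops below 1e-6.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=10)
```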
Figure 1.
 
The schematic structure and development process of the deep learning model. (A) The internal test set was split from the dataset. Using the development dataset, five-fold cross validation was performed. Performance was measured using the internal and external test set. (B) The model architecture of dual-input models utilizing both the horizontal and vertical section of OCT images.
Gradient-Weighted Regression Activation Mapping
We modified gradient-weighted class activation mapping into gradient-weighted regression activation mapping (Grad-RAM) to illustrate the OCT region predominantly used by the convolutional neural network (CNN) model.18 With this technique, we were able to investigate the regions in the OCT that were important for the prediction of AL. 
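As a rough illustration of how the class-activation idea can be adapted to a regression output, the sketch below backpropagates the scalar AL prediction instead of a class score and weights the last convolutional feature maps by their spatially averaged gradients. The choice of target layer and the hook-based implementation are assumptions for illustration, not the authors' exact Grad-RAM code.

```python
# Minimal sketch of gradient-weighted regression activation mapping (Grad-RAM).
import torch
import torch.nn.functional as F

def grad_ram(model, x_h, x_v, target_layer):
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    pred = model(x_h, x_v)   # predicted axial length (one scalar per image)
    model.zero_grad()
    pred.sum().backward()    # gradients of the regression output; no class index needed

    fwd.remove()
    bwd.remove()

    act, grad = activations[0], gradients[0]              # shape: (N, C, H, W)
    weights = grad.mean(dim=(2, 3), keepdim=True)          # global-average-pooled gradients
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))  # weighted sum over channels
    cam = F.interpolate(cam, size=x_h.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)  # normalize to [0, 1]

# Example usage for the horizontal-branch heatmap (hypothetical layer choice):
# heatmap = grad_ram(model, x_h, x_v, model.backbone_h.layer4)
```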
Analysis of Results
To evaluate the performance of the model, we calculated the MAE and the coefficient of determination (R2), which was obtained using linear regression. Moreover, the percentages of eyes within error ranges of ±1.0, ±2.0, and ±3.0 mm were calculated. 
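A minimal sketch of these metrics is shown below, assuming y_true and y_pred are arrays of measured and predicted AL in millimeters and that R2 is taken from a linear regression of the predicted against the measured values.

```python
# Sketch of the evaluation metrics: MAE, regression-based R2, and the percentage
# of eyes within +/-1.0, 2.0, and 3.0 mm of the measured axial length.
import numpy as np
from scipy.stats import linregress

def evaluate(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_pred - y_true))
    r2 = linregress(y_true, y_pred).rvalue ** 2  # coefficient of determination of the fit
    metrics = {"MAE (mm)": mae, "R2": r2}
    for margin in (1.0, 2.0, 3.0):
        metrics[f"within {margin} mm (%)"] = 100.0 * np.mean(np.abs(y_pred - y_true) <= margin)
    return metrics
```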
Validation Using an External Test Set
Patients who visited the ophthalmology clinic at the Seoul Metropolitan Government Seoul National University Boramae Medical Center between March 2023 and December 2023 were included as the external test set. AL measurements were performed with the IOLMaster 500 (Carl Zeiss Meditec, Jena, Germany), and OCT images were obtained with the Spectralis OCT. The same evaluation process was applied to the external test set. 
Results
A total of 9064 eyes of 5349 patients were included in the SNUH dataset. The average AL was 24.35 ± 2.03 mm (range = 20.53–37.07 mm). All eyes had both horizontal and vertical macular OCT images, resulting in a total of 18,128 images. The development set consisted of 7240 eyes of 4279 patients, and 1824 eyes of 1070 patients were included in the internal test set. The internal test set comprised 1121 eyes without macular abnormality and 703 eyes with macular abnormality (332 ERMs, 195 AMDs, 88 CMEs, 47 MHs, and 41 other macular abnormalities). The external test set consisted of 171 eyes of 123 patients, with an average AL of 23.58 ± 1.21 mm (range = 20.87–28.6 mm); it comprised 132 eyes without macular abnormality and 39 eyes with macular abnormality (22 ERMs, 8 AMDs, 7 CMEs, and 2 MHs). The prediction results for the internal and external test sets are summarized in the Table. Using only horizontal OCT images, the model predicted AL with an MAE of 0.644 mm and an R2 of 0.816 in the internal test set. Using only vertical OCT images, the model predicted AL with an MAE of 0.628 mm and an R2 of 0.822 in the internal test set. Using both horizontal and vertical OCT images, the dual-input model predicted AL with an MAE of 0.592 mm and an R2 of 0.847 in the internal test set. In the external test set, the dual-input model predicted AL with an MAE of 0.556 mm and an R2 of 0.663. The dual-input model showed accuracies of 83.50%, 98.14%, and 99.45% within error margins of ±1.0, ±2.0, and ±3.0 mm, respectively, in the internal test set, and 85.38%, 99.42%, and 100.00%, respectively, in the external test set. Figure 2 shows the prediction results for the internal and external test sets. Representative guided Grad-RAM images are shown in Figure 3, indicating the regions of importance for the prediction of AL in the OCT images. 
Table.
 
Accuracy for Axial Length Prediction With Internal and External Test Set
Figure 2.
 
Performance of the deep learning model estimating the axial length. The red line represents the prediction with no error, the identity line. (A) The internal test set and (B) the external test set.
Figure 3.
 
Heatmap analysis of gradient-weighted regression activation mapping (Grad-RAM) for myopic eyes with axial length greater than 26.0 mm. (A–I) Nine optical coherence tomography (OCT) image pairs were randomly selected from eyes with good prediction outcomes (absolute prediction error less than 0.5 mm). For each panel, the left image is a horizontal section OCT image and the right image is a vertical section OCT image.
Discussion
Our deep learning model predicted AL from both horizontal and vertical OCT images with an MAE of 0.592 mm and an R2 of 0.847 in the internal test set, and an MAE of 0.556 mm and an R2 of 0.663 in the external test set. To the best of our knowledge, the present study is the first to utilize OCT images for the prediction of AL with deep learning models on real-world data. 
Many researchers previously developed deep learning models predicting AL using conventional fundus images.15,17 In our previous study, we advanced this field by developing highly accurate deep learning models with UWF fundus images.16 However, although UWF images offer the advantage of capturing the peripheral area, they have limitations in focusing on the macular region. Recognizing this limitation, the current study aims to address the role of the macular area in predicting AL by utilizing OCT images. 
We constructed the deep learning model with dual inputs to utilize both horizontal and vertical sections of the OCT images. A dome-shaped macula is often observed in myopic staphyloma,20 and Caillaux et al. described three types of dome-shaped macula patterns using vertical and horizontal OCT scans.21 They argued that both horizontal and vertical OCT scans are important for detecting a dome-shaped macula. Our prediction results are in line with their opinion. Compared with the horizontal-only and vertical-only models, the dual-input model using both horizontal and vertical OCT images showed a lower MAE and a greater R2 value, implying that both sections contribute to the prediction of AL. Considering that the vertical-only models predicted AL with a lower MAE and a greater R2 than the horizontal-only models, vertical scan images might play a more important role than horizontal scan images. Because vertical section images are more axially symmetric than horizontal section images, in which the optic disc is usually located at the edge of the scan, we hypothesized that macular changes associated with axial elongation might occur predominantly in the vertical direction of the macula, which could explain the better prediction from vertical sections. However, as quantification of the guided Grad-RAM results is not available, we cannot conclude whether horizontal or vertical OCT images provide more information. 
The performance of the present model was comparable with that of previous models using conventional fundus photographs and UWF images. The macula is the most important area of the retina, and its shape and contour are affected by AL elongation. Considering that posterior staphyloma is a hallmark of pathologic myopia and that the macular type accounts for 74% of eyes with posterior staphyloma,22 macular OCT images might have a superior role in predicting AL compared with other modalities. Previous studies utilizing fundus photographs reported MAEs of 0.56 to 0.90 mm and R2 values of 0.59 to 0.67.15,17 Deep learning models using UWF images showed an MAE of 0.744 mm and an R2 value of 0.815.16 The present study showed an MAE of 0.592 mm and an R2 value of 0.847 in the internal test set, which were better than those of the study using UWF images. Compared with Dong et al.'s study using fundus photographs, the results of the internal test set showed a worse MAE but a better R2 value. However, there were noticeable performance differences between the internal and external test sets in the present study. Our external test set still showed better MAE and R2 values than those in Dong et al.'s study, but the R2 advantage was smaller than that of the internal test set. Moreover, the R2 value of the external test set was lower than that of the UWF study. 
The performance using the external test set was better in terms of MAE but worse in terms of the R2 value. This performance difference can be explained by two aspects. First, the external dataset consisted of a greater proportion of eyes without macular abnormalities. Considering that the deep learning model performed better on normal eyes in the internal test set, the external test set showed a lower MAE. Second, the external dataset had a narrower range of AL. Including more eyes with a greater AL might lead to a better R2 value. 
Eyes without macular abnormalities showed better accuracy in AL prediction than eyes with macular abnormalities in both the internal and external test sets. Macular abnormalities, including MHs, ERMs, and AMD, result in structural alterations of the retina. However, considering the structural differences between eyes with and without macular abnormalities, eyes with macular abnormalities still showed outcomes comparable to those of previous studies. Choroidal structure and outer retinal curvature, as well as retinal structure, might play important roles in the prediction of AL. 
Multiple regions of the OCT images were related to long AL. As shown in the representative cases in Figure 3, the choroid layer, the peripapillary area, the macular area, and the posterior staphyloma showed great significance in the prediction of AL. Long AL is associated not only with decreased choroidal thickness23 but also with decreased choroidal vascularity and choriocapillaris perfusion.24 The observed significance of the choroid layer can be explained by these structural alterations. The curvature of the choroidal layer also appears to be associated with the prediction of long AL; therefore, further studies are needed to investigate how the choroidal layer affects eyeball elongation. According to Kim et al.'s report, OCT changes in disc contour and peripapillary structure were highly associated with axial elongation.25 Notably, in our representative examples, both horizontal and vertical Grad-RAM images exhibited remarkably strong signals in the posterior staphyloma region, where there was a pronounced and rapid curvature of the choroid and retina. This observation aligns with the characteristic features of highly myopic changes, further reinforcing the notion that eyes exhibiting such abrupt focal curvatures are likely highly myopic. These findings emphasize the importance of monitoring and understanding structural changes in the peripapillary area and choroidal curvature as they relate to axial elongation, particularly in highly myopic eyes. 
The limitations of the present study should be noted. First, our analysis was based solely on horizontal and vertical cross-sectional images, without incorporating macular volume scan images. Because some eyes lacked macular volume scans and the number of scan images differed across eyes, we did not use them. Future studies should consider incorporating macular volume scan images to improve prediction accuracy. Second, the images were obtained from a single OCT device, which raises concerns about the generalizability of our model to other devices. When applying our model to images from different devices, the prediction error would likely be greater than in the internal validation set. External validation using images from other devices is needed for general application of the model. Third, the heatmap does not guarantee a causal relationship between the anatomic structures and AL. However, the heatmap provided additional information regarding eyeball elongation. Fourth, there were noticeable performance differences between the internal and external test sets. The MAE of the external test set was lower than that of the internal test set, whereas the R2 value was considerably worse. Further evaluation with larger and multiple external test sets is needed to validate our findings. Last, all images were downsized to 224 × 224 pixels, the default input size for the ResNet-152 model. Due to limited computational resources, downsizing was necessary. Training with larger input image sizes and more complex models may improve performance. 
Despite the limitations, our study has several notable strengths. First, it is the first study to utilize OCT images in predicting AL through the implementation of deep learning models. Second, our developed model outperformed previously described models in terms of predictive accuracy. It demonstrated the lowest MAE and the highest R2 value, indicating its superior performance in predicting AL. Third, our study samples encompass not only normal retinal images but also a diverse range of pathological conditions. This inclusion of various diseases enhances the real-world relevance and applicability of our findings, as it accounts for the complexity and heterogeneity of retinal fundus lesions that may impact eyeball elongation. 
In conclusion, our research presents a novel deep learning-based model that effectively predicts AL using OCT images. This model demonstrates impressive performance and holds the potential to be utilized not only for accurate AL prediction based on OCT images but also for investigating how retinal and choroidal structures are related to AL elongation. These findings contribute to the advancement of both ophthalmological research and clinical practice. 
Acknowledgments
Disclosure: R. Oh, None; M. Kang, None; J. Ahn, None; E.K. Lee, None; K. Bae, None; U.C. Park, None; K.H. Park, None; C.K. Yoon, None 
References
1. Haarman AEG, Enthoven CA, Tideman JWL, Tedja MS, Verhoeven VJM, Klaver CCW. The complications of myopia: a review and meta-analysis. Invest Ophthalmol Vis Sci. 2020; 61(4): 49.
2. Hashimoto S, Yasuda M, Fujiwara K, et al. Association between axial length and myopic maculopathy: the Hisayama Study. Ophthalmol Retin. 2019; 3(10): 867–873.
3. Oku Y, Oku H, Park M, et al. Long axial length as risk factor for normal tension glaucoma. Graefes Arch Clin Exp Ophthalmol. 2009; 247: 781–787.
4. Holden BA, Wilson DA, Jong M, et al. Myopia: a growing global problem with sight-threatening complications. Community Eye Health. 2015; 28(90): 35.
5. Lee KM, Kim M, Oh S, Kim SH. Position of central retinal vascular trunk and preferential location of glaucomatous damage in myopic normal-tension glaucoma. Ophthalmol Glaucoma. 2018; 1(1): 32–43.
6. Moriyama M, Ohno-Matsui K, Hayashi K, et al. Topographic analyses of shape of eyes with pathologic myopia by high-resolution three-dimensional magnetic resonance imaging. Ophthalmology. 2011; 118(8): 1626–1637.
7. Ruiz-Medrano J, Montero JA, Flores-Moreno I, Arias L, García-Layana A, Ruiz-Moreno JM. Myopic maculopathy: current status and proposal for a new classification and grading system (ATN). Prog Retin Eye Res. 2019; 69: 80–115.
8. Jonas RA, Wang YX, Yang H, et al. Optic disc-fovea distance, axial length and parapapillary zones. The Beijing Eye Study 2011. PLoS One. 2015; 10(9): e0138701.
9. Luo HD, Gazzard G, Fong A, et al. Myopia, axial length, and OCT characteristics of the macula in Singaporean children. Invest Ophthalmol Vis Sci. 2006; 47(7): 2773–2781.
10. Savini G, Barboni P, Parisi V, Carbonelli M. The influence of axial length on retinal nerve fibre layer thickness and optic-disc size measurements by spectral-domain OCT. Br J Ophthalmol. 2012; 96(1): 57–61.
11. Takeyama A, Kita Y, Kita R, Tomita G. Influence of axial length on ganglion cell complex (GCC) thickness and on GCC thickness to retinal thickness ratios in young adults. Jpn J Ophthalmol. 2014; 58: 86–93.
12. Terasaki H, Yamashita T, Yoshihara N, et al. Location of tessellations in ocular fundus and their associations with optic disc tilt, optic disc area, and axial length in young healthy eyes. PLoS One. 2016; 11(6): e0156842.
13. Flores-Moreno I, Puertas M, Almazán-Alonso E, et al. Pathologic myopia and severe pathologic myopia: correlation with axial length. Graefes Arch Clin Exp Ophthalmol. 2022; 260(1): 133–140.
14. Igarashi-Yokoi T, Shinohara K, Fang Y, et al. Prognostic factors for axial length elongation and posterior staphyloma in adults with high myopia: a Japanese observational study. Am J Ophthalmol. 2021; 225: 76–85.
15. Dong L, Hu XY, Yan YN, et al. Deep learning-based estimation of axial length and subfoveal choroidal thickness from color fundus photographs. Front Cell Dev Biol. 2021; 9: 653692.
16. Oh R, Lee EK, Bae K, Park UC, Yu HG, Yoon CK. Deep learning-based prediction of axial length using ultra-widefield fundus photography. Korean J Ophthalmol. 2023; 37(2): 95.
17. Jeong Y, Lee B, Han JH, Oh J. Ocular axial length prediction based on visual interpretation of retinal fundus images via deep neural network. IEEE J Sel Top Quantum Electron. 2020; 27(4): 1–7.
18. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770–778.
19. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012; 25: 84–90.
20. Gaucher D, Erginay A, Lecleire-Collet A, et al. Dome-shaped macula in eyes with myopic posterior staphyloma. Am J Ophthalmol. 2008; 145(5): 909–914.
21. Caillaux V, Gaucher D, Gualino V, Massin P, Tadayoni R, Gaudric A. Morphologic characterization of dome-shaped macula in myopic eyes with serous macular detachment. Am J Ophthalmol. 2013; 156(5): 958–967.
22. Ohno-Matsui K. Proposed classification of posterior staphylomas based on analyses of eye shape by three-dimensional magnetic resonance imaging and wide-field fundus imaging. Ophthalmology. 2014; 121(9): 1798–1809.
23. Park UC, Lee EK, Kim BH, Oh BL. Decreased choroidal and scleral thicknesses in highly myopic eyes with posterior staphyloma. Sci Rep. 2021; 11(1): 7987.
24. Wu H, Xie Z, Wang P, et al. Differences in retinal and choroidal vasculature and perfusion related to axial length in pediatric anisomyopes. Invest Ophthalmol Vis Sci. 2021; 62(9): 40.
25. Kim M, Lee KM, Choung HK, Oh S, Kim SH. Change of peripapillary retinal nerve fiber layer and choroidal thickness during 4-year myopic progress: Boramae Myopia Cohort Study Report 4. Br J Ophthalmol. 2023; 107(8): 1165–1171.