March 2025
Volume 14, Issue 3
Open Access
Cornea & External Disease  |   March 2025
Artificial Intelligence–Driven Detection of LASIK Using Corneal Optical Coherence Tomography Maps
Author Affiliations & Notes
  • Jiachi Hong
    The Center for Ophthalmic Optics and Lasers, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Afshan A. Nanji
    The Center for Ophthalmic Optics and Lasers, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Richard D. Stutzman
    The Center for Ophthalmic Optics and Lasers, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Winston D. Chamberlain
    The Center for Ophthalmic Optics and Lasers, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Xubo Song
    Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
  • David Huang
    The Center for Ophthalmic Optics and Lasers, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Yan Li
    The Center for Ophthalmic Optics and Lasers, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Correspondence: Yan Li, Casey Eye Institute, Oregon Health & Science University, 515 SW Campus Drive, Portland, OR 97239, USA. e-mail: [email protected] 
Translational Vision Science & Technology March 2025, Vol.14, 17. doi:https://doi.org/10.1167/tvst.14.3.17
Abstract

Purpose: To train and validate a convolutional neural network (CNN) to detect the history of laser-assisted in situ keratomileusis (LASIK) surgeries using corneal optical coherence tomography (OCT) maps.

Methods: Five corneal OCT maps (pachymetry, epithelial thickness, posterior mean curvature, anterior axial power, and anterior stroma reflectance) were utilized as the input of a lightweight CNN model. OCT scans of healthy volunteers and patients who had undergone myopic or hyperopic LASIK were included. Repeated fivefold cross-validation was used to train and evaluate the proposed CNN. In addition, a separate group of post-LASIK participants, who were not included in the cross-validation, was used for out-of-sample testing to assess the CNN model performance.

Results: In the cross-validation, the proposed CNN model achieved an overall balanced accuracy of 90.2% ± 3.6%, with 93.5% ± 5.2% sensitivity and 97.8% ± 1.7% area under the receiver operating characteristic curve (AUC) in detecting myopic LASIK, and 90.2% ± 5.8% sensitivity and 98.2% ± 1.9% AUC in identifying hyperopic LASIK. In the out-of-sample test, all eyes were classified correctly.

Conclusions: The lightweight CNN model with corneal OCT maps provides a useful tool for detecting LASIK history.

Translational Relevance: Artificial intelligence–assisted OCT may offer better management for patients with LASIK history who need cataract surgeries.

Introduction
Previous refractive surgery history is crucial for patients seeking cataract surgery. Ophthalmologists need to consider the details of the prior vision correction surgery to select the suitable intraocular lens (IOL) for optimal visual outcomes and set realistic patient expectations. Traditional IOL calculation formulas often provide inaccurate results in post–laser-assisted in situ keratomileusis (LASIK) eyes due to corneal curvature and thickness changes.1,2 Differentiating between myopic and hyperopic LASIK history helps the surgeon choose the appropriate IOL formulas that account for surgery-induced corneal changes.3 This consideration becomes increasingly important as the demand for refractive and cataract surgeries continues to grow. However, patients typically undergo LASIK when they are young and later require cataract surgery as they age, creating a 20- to 30-year gap between the two procedures. During this time, LASIK information may be missing or incorrectly recorded in the patient's chart due to factors such as lost medical records, referrals to new clinics, or the patient's inability to recall whether they had LASIK surgery. 
Optical coherence tomography (OCT) is a noncontact imaging modality widely used in ophthalmology. OCT can provide high-resolution cross-sectional images of the cornea, making it possible to detect surgery-related corneal changes.4 Artificial intelligence (AI)–driven deep learning models have been developed to identify complex patterns and features in OCT images.5–10 However, AI models trained on OCT cross-sectional images (or three-dimensional volumes) are device specific and may pose more challenges for generalization. 
In this study, we developed and evaluated an AI model to detect LASIK history using corneal OCT maps. This approach may enhance surgical planning and improve patient outcomes for cataract surgery after LASIK. 
Methods
Participant Recruitment
Patients with a history of myopic or hyperopic LASIK who were seeking cataract surgery and healthy volunteers were recruited at the Casey Eye Institute (CEI), Oregon Health & Science University (OHSU) in Portland, Oregon. In addition, patients whose LASIK surgeries were performed at the CEI were identified via chart review and included in the study if their post-LASIK OCT scans were available. The study protocol was approved by the OHSU Institutional Review Board (IRB). The study adhered to the principles of the Declaration of Helsinki and the Health Insurance Portability and Accountability Act (HIPAA) of 1996. All prospectively enrolled participants were given written information about the nature, benefits, and risks of the study and signed an informed consent form. The IRB waived the HIPAA authorization requirement for participants identified through the retrospective chart review. 
OCT Corneal Maps
Corneal scans were acquired using a spectral-domain OCT system (Avanti; Visionix USA [formerly Optovue], Lombard, IL, USA). The OCT scan pattern consisted of eight evenly spaced radial lines covering a circular area 6 mm in diameter centered on the pupil, and this radial pattern was executed five times in a single scan session, producing a total of 40 images (8 radials × 5 repeats) per scan. Scans with inadequate signal quality were excluded from the analysis, and at least two satisfactory scans were acquired for each eye. 
Corneal thickness and topography maps were generated from the OCT images as described previously.11–16 Briefly, the pachymetry map was derived by computing the distance between the anterior and posterior corneal surfaces along a vector perpendicular to the anterior corneal surface.11,12 The epithelial thickness map was computed as the distance between the segmented boundaries of the anterior corneal surface and Bowman's layer.13,14 The anterior corneal axial power map was derived from the anterior corneal surface elevation map.16 The posterior mean curvature map was calculated from the posterior corneal surface elevation map, with curvature defined as the reciprocal of the radius of curvature at each point on the corneal surface.15–17 
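As a concrete illustration of the curvature computation, the following is a minimal NumPy sketch of a mean-curvature map derived from a surface elevation map z = f(x, y), using the standard differential-geometry formula; the grid spacing, function name, and sign convention are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def mean_curvature_map(elevation_mm: np.ndarray, dx_mm: float) -> np.ndarray:
    """Mean curvature H (1/mm) of a surface z = f(x, y) sampled on a uniform grid.

    `elevation_mm` is a corneal elevation map in millimeters and `dx_mm` is the
    pixel spacing. Illustrative sketch only; sign convention may differ.
    """
    fy, fx = np.gradient(elevation_mm, dx_mm)      # first derivatives (rows, cols)
    fyy, fyx = np.gradient(fy, dx_mm)              # second derivatives
    fxy, fxx = np.gradient(fx, dx_mm)
    num = (1 + fx**2) * fyy - 2 * fx * fy * fxy + (1 + fy**2) * fxx
    den = 2 * (1 + fx**2 + fy**2) ** 1.5
    return num / den
```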
OCT records the back-reflection of light from the cornea. Corneal reflectivity is highest when the OCT beam is perpendicular to the collagen lamellae and decreases rapidly with increasing off-perpendicular incidence angle.18 We developed a method to compensate for the variation of corneal reflectivity with incidence angle and percent depth.19 The corrected OCT signal was then converted to a reflectance scale calibrated against a standard diffuse reflectance target (Spectralon; Labsphere, North Sutton, NH, USA). The anterior stroma reflectance map was calculated by averaging the reflectance of the anterior one-third of the stroma. 
All corneal OCT maps were cropped to 5 × 5 mm in size and down-sampled to a size of 16 × 16 pixels. Average OCT thickness, topography, and stroma reflectance maps were calculated for normal, myopic LASIK, and hyperopic LASIK eyes. 
Convolutional Neural Networks
A lightweight convolutional neural network (CNN) architecture (Fig. 1) was designed to detect corneas that had undergone myopic or hyperopic LASIK treatment. The proposed CNN was configured to accept pachymetry, epithelial thickness, posterior mean curvature, anterior axial power, and anterior stroma reflectance maps as input features. Each map was treated as an individual channel, resulting in a 16 × 16 × 5 input tensor for each OCT scan. 
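The sketch below shows, under assumptions, how the five cropped maps could be resampled to 16 × 16 pixels and stacked into the five-channel input tensor. The dictionary keys, the bilinear resampling via scipy.ndimage.zoom, and the original map resolution are illustrative; they are not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def to_input_tensor(maps_5x5mm: dict, out_size: int = 16) -> np.ndarray:
    """Resample each cropped 5 x 5 mm map to out_size x out_size and stack as channels.

    `maps_5x5mm` is assumed to hold the five maps keyed by name; the key names
    and resampling method are placeholders for illustration.
    """
    order = ["pachymetry", "epithelium", "post_curvature", "axial_power", "reflectance"]
    channels = []
    for key in order:
        m = maps_5x5mm[key].astype(np.float32)
        channels.append(zoom(m, [out_size / s for s in m.shape], order=1))  # bilinear
    return np.stack(channels, axis=-1)  # shape: (16, 16, 5)
```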
Figure 1.
The architecture of the CNN for LASIK detection.
Grid searches were systematically conducted to optimize both the network architecture and hyperparameters. Convolution filters of 2 × 2 dimensions, with a stride of 1 and "same" padding, were uniformly applied across all convolutional layers. The number of filters was doubled in subsequent layers to compensate for the reduction in spatial dimensions caused by a max-pooling layer with a stride of 2. After the final max-pooling layer, a flatten layer was added, followed by four densely connected layers. The rectified linear unit (ReLU) activation function was used for all hidden layers. The output layer of the CNN featured three neurons with a softmax activation function designed to output probabilities for normal, myopic LASIK, and hyperopic LASIK cases. A focal loss function was employed to address class imbalance by applying a modulating factor to the standard cross-entropy loss, reducing the relative loss for well-classified cases and focusing more on difficult-to-classify instances. 
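A minimal Keras sketch of a network in this spirit is shown below. The exact filter counts and dense-layer widths are not fully specified above, so the values here are illustrative assumptions, and the focal loss follows the standard formulation with example gamma and alpha values rather than the tuned ones.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def focal_loss(gamma: float = 2.0, alpha: float = 0.25):
    """Categorical focal loss (standard formulation); gamma/alpha are illustrative."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        ce = -y_true * tf.math.log(y_pred)                 # per-class cross-entropy
        weight = alpha * tf.pow(1.0 - y_pred, gamma)       # down-weight easy cases
        return tf.reduce_sum(weight * ce, axis=-1)
    return loss

def build_model(n_filters: int = 16) -> tf.keras.Model:
    """Lightweight CNN for a 16 x 16 x 5 corneal-map input; layer sizes are assumptions."""
    inputs = layers.Input(shape=(16, 16, 5))
    x = inputs
    for i in range(3):                                     # three conv blocks
        x = layers.Conv2D(n_filters * 2**i, kernel_size=2, strides=1,
                          padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x) # halve spatial size
    x = layers.Flatten()(x)
    for units in (64, 32, 16):                             # dense hidden layers (illustrative widths)
        x = layers.Dense(units, activation="relu")(x)
    outputs = layers.Dense(3, activation="softmax")(x)     # normal / M-LASIK / H-LASIK
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=focal_loss(), metrics=["accuracy"])
    return model
```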
Evaluation and Statistical Analysis
Minimum pachymetry, maximum epithelial thickness, maximum posterior mean curvature, maximum anterior axial power, and average anterior stroma reflectance inside central 5-mm map zones were recorded. Descriptive statistics are expressed as mean ± standard deviation (SD). The data normality was tested using the Kolmogorov–Smirnov test. The mean values from the myopic and hyperopic LASIK groups were compared to the normal group using two-tailed t-tests (normality confirmed) or nonparametric Wilcoxon rank-sum tests (normality rejected). If both eyes of a participant were included in the study, a randomly selected eye was used for the comparison. For all statistical testing, the significance level was set at P < 0.05 and was adjusted with Bonferroni correction for multiple comparisons. Traditional statistical classification models, including logistic regression,20 random forest,21 gradient boosting,22 and support vector machine,23 were used to combine the OCT parameters listed above for detecting LASIK. 
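For reference, combining the five scalar OCT parameters with these traditional classifiers might look like the following scikit-learn sketch; the feature array, placeholder data, and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_eyes, 5) array of minimum pachymetry, maximum epithelial thickness,
# maximum posterior mean curvature, maximum anterior axial power, and average
# anterior stroma reflectance; y: 0 = normal, 1 = myopic LASIK, 2 = hyperopic LASIK.
# The toy data below are placeholders.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 5)), rng.integers(0, 3, size=300)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "support vector machine": make_pipeline(StandardScaler(), SVC()),
}
for name, clf in models.items():
    pred = cross_val_predict(clf, X, y, cv=5)
    print(f"{name}: balanced accuracy = {balanced_accuracy_score(y, pred):.3f}")
```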
To evaluate the performance of our proposed CNN model, we applied fivefold cross-validation to our data set. Within each fold, the data were split by patient into training (80%) and test (20%) sets. The splits were stratified so that all three groups (normal, myopic LASIK, and hyperopic LASIK) were represented in each split, and care was taken to ensure that scans from the same participant were not included in both the training and test data sets. When testing the trained model, the algorithm randomly selected one scan from each eye in the test data set to generate the classification output. Balanced accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver operating characteristic curve (AUC) were used to evaluate the artificial intelligence (AI) model performance. In addition, a confusion matrix was calculated to summarize the proportions of the CNN model's correct and incorrect classification outputs. The fivefold cross-validation was repeated five times, and the average performance metrics were calculated accordingly. 
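A minimal sketch of patient-level, stratified, repeated fivefold cross-validation is given below, using scikit-learn's StratifiedGroupKFold so that no participant contributes scans to both the training and test sets. It reuses the build_model sketch above, omits the per-eye random scan selection described in the text, and the placeholder arrays, epochs, and batch size are assumptions.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import StratifiedGroupKFold

# scans: (n_scans, 16, 16, 5); labels: class per scan; patient_ids: one id per scan.
# Placeholder arrays; in practice these come from the OCT map pipeline above.
n = 500
scans = np.zeros((n, 16, 16, 5), dtype=np.float32)
labels = np.random.randint(0, 3, size=n)
patient_ids = np.random.randint(0, 120, size=n)

scores = []
for repeat in range(5):                                    # repeated fivefold CV
    cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=repeat)
    for train_idx, test_idx in cv.split(scans, labels, groups=patient_ids):
        model = build_model()                              # lightweight CNN sketched earlier
        model.fit(scans[train_idx], np.eye(3)[labels[train_idx]],
                  epochs=50, batch_size=32, verbose=0)
        pred = model.predict(scans[test_idx], verbose=0).argmax(axis=1)
        scores.append(balanced_accuracy_score(labels[test_idx], pred))
print(f"balanced accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```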
An ablation study was conducted to evaluate the contribution of each input channel to the model performance. Each of the five input channels (pachymetry, epithelial thickness, posterior mean curvature, anterior axial power, and anterior stroma reflectance) was individually set to zero to assess the CNN model performance with the remaining four channels. 
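The channel-ablation idea can be sketched as follows. Whether channels were zeroed only at evaluation time or the model was retrained for each ablation is not stated above, so this sketch simply zeroes one channel at evaluation time; the channel names follow the earlier sketches.

```python
import numpy as np

CHANNELS = ["pachymetry", "epithelium", "post_curvature", "axial_power", "reflectance"]

def ablate_channel(scans: np.ndarray, channel: int) -> np.ndarray:
    """Return a copy of the (n, 16, 16, 5) scans with one input channel zeroed out."""
    ablated = scans.copy()
    ablated[..., channel] = 0.0
    return ablated

# Example usage with a trained model and held-out data from the CV loop above:
# for i, name in enumerate(CHANNELS):
#     pred = model.predict(ablate_channel(test_scans, i), verbose=0).argmax(axis=1)
#     print(name, balanced_accuracy_score(test_labels, pred))
```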
In addition, a group of post-LASIK participants who were not involved in the cross-validation were used to test the CNN model performance (out-of-sample test). Their laser treatment details were available through chart review. 
Results
Study Participants
In total, 212 participants were involved in the AI model training and testing, including 96 eyes of 48 healthy control participants (age 36.0 ± 8.0 years, range 23.0 to 54.0 years; 47.9% male, 52.1% female), 205 eyes of 119 myopic LASIK patients (age 66.1 ± 8.5 years, range 41.8 to 82.2 years; 50.4% male, 49.6% female), and 78 eyes of 45 hyperopic LASIK patients (age 68.5 ± 10.6 years, range 26.0 to 83.9 years; 46.7% male, 53.3% female). The out-of-sample test was conducted with an additional 48 eyes of 24 LASIK patients (age 33.5 ± 7.5 years, range 25.0 to 53.0 years; 58.3% male, 41.7% female). 
The healthy volunteers had an average refractive spherical equivalent of –3.94 ± 3.4 D (range –11.25 to +4.5 D) with a cylinder magnitude of 0.94 ± 0.78 D (range 0.00 to 4.5 D). Many of the LASIK patients in this study underwent surgery decades ago, and a substantial number were referred from other clinics, resulting in incomplete prior LASIK records. When available, the amount of refraction treated and the surgery date were recorded. The average LASIK correction had a spherical equivalent of –4.01 ± 2.0 D (range –8.38 to –0.42 D) in 31 myopic LASIK patients (61 eyes) and 2.66 ± 1.4 D (range 1.0 to 5.0 D) in 6 hyperopic LASIK patients (12 eyes). The time interval between OCT scanning and LASIK was 11.7 ± 4.9 years (range 3.7 to 20.0 years) in 34 LASIK patients. 
Representative OCT cross-sectional scans are shown in Supplementary Figure S1. The average corneal OCT maps are provided in Figure 2. Compared with normal corneas, the myopic LASIK corneas were thinner centrally, had a thicker central epithelium, and were flattened on the anterior corneal surface. In contrast, the hyperopic LASIK corneas had a thicker epithelium in the periphery and were steepened on the anterior corneal surface. Both myopic and hyperopic LASIK corneas showed higher anterior stroma reflectance. The descriptive statistics of minimum pachymetry, maximum epithelial thickness, maximum anterior axial power, and average anterior stroma reflectance revealed a similar pattern in myopic and hyperopic LASIK eyes (Table 1). The traditional statistical classification models provided the following total balanced accuracies in detecting LASIK: 72.2% ± 5.5% for logistic regression, 69.8% ± 8.3% for the random forest classifier, 72.3% ± 6.3% for the gradient boosting classifier, and 74.1% ± 5.5% for the support vector machine. 
Figure 2.
Averaged corneal OCT maps of normal, myopic LASIK (M-LASIK), and hyperopic LASIK (H-LASIK) eyes.
Table 1.
Descriptive Statistics of Corneal Optical Coherence Tomography Measurements
CNN Model Evaluation
The proposed CNN model demonstrated excellent performance in detecting myopic and hyperopic LASIK, with an overall balanced accuracy of 90.2% ± 3.6%. More specifically, the confusion matrix (Fig. 3) showed that the AI model correctly classified normal, myopic LASIK, and hyperopic LASIK corneas with accuracies of 86.7%, 93.5%, and 90.2%, respectively. The F1 scores (Table 2) of 94.4% (myopic LASIK) and 90.8% (hyperopic LASIK) verified excellent agreement between the clinical information (ground truth) and the CNN model outputs. The learning curves confirmed that the model generalized well to the validation set, showing no signs of overfitting or underfitting (see Supplementary Fig. S2). 
Figure 3.
A confusion matrix that summarized the CNN model's correct and incorrect classification outputs. The values were in proportion (0∼1) and were averaged across the repeated fivefold cross-validation. H-LASIK, hyperopic LASIK; M-LASIK, myopic LASIK.
Table 2.
Performance Matrix of the Convolutional Neural Network Model for LASIK Detection
The ablation test showed that the CNN model using all five corneal OCT map inputs achieved the highest overall accuracy (Fig. 4). Among the corneal topographic maps, the posterior mean curvature and anterior axial power maps provided the highest overall accuracy for detecting LASIK history (see Supplementary Fig. S3). 
Figure 4.
A bar plot showing the results of the ablation test. The CNN model using all five map inputs achieved the highest overall accuracy. * indicates a statistically significant decrease in the model performance.
The class activation maps (Fig. 5) illustrated that the central region of the corneal images was most influential in predicting myopic LASIK, while both the central and peripheral regions were influential in predicting hyperopic LASIK. 
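The method used to compute the class activation maps is not detailed here, so the following is a Grad-CAM-style sketch applied to the first convolutional layer of the Keras model sketched in the Methods; it is an assumption about the visualization approach, not the authors' exact procedure.

```python
import numpy as np
import tensorflow as tf

def grad_cam_first_layer(model: tf.keras.Model, scan: np.ndarray, class_idx: int) -> np.ndarray:
    """Grad-CAM-style activation map from the first convolutional layer.

    Weights the first conv layer's feature maps by the gradient of the target
    class score. `scan` is a single (16, 16, 5) input; output is normalized to [0, 1].
    """
    first_conv = next(l for l in model.layers if isinstance(l, tf.keras.layers.Conv2D))
    grad_model = tf.keras.Model(model.inputs, [first_conv.output, model.output])
    x = tf.convert_to_tensor(scan[np.newaxis], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x)
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)             # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))              # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)    # weighted sum over channels
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```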
Figure 5.
Class activation maps of the first convolution layer for myopic LASIK and hyperopic LASIK eyes. Higher class activation values indicate the regions of the map that were most influential in classification. I, inferior; N, nasal; S, superior; T, temporal.
The CNN model showed excellent performance in the out-of-sample test. All cases were classified correctly (Fig. 6). 
Figure 6.
The CNN model output of out-of-sample post-LASIK eyes is marked with black stars (cases classified as myopic LASIK [M-LASIK]) or red crosses (cases classified as hyperopic LASIK [H-LASIK]). The actual LASIK treatments were recorded with laser correction sphere (x-axis) and cylinder (y-axis) values (unit: diopters). The CNN model output 100% agreed with the clinical information.
Discussion
In this study, we investigated a lightweight CNN model designed to detect LASIK history using corneal OCT maps. The proposed AI model demonstrated excellent accuracy (overall balanced accuracy of 90.2% ± 3.6%) in identifying eyes that had previously undergone myopic or hyperopic LASIK based on OCT scans. 
Studies have shown that AI greatly improves the diagnosis and management of various corneal conditions by offering automated, accurate, and efficient analysis of OCT and Scheimpflug tomography images.2428 Eleiwa et al.29 employed anterior segment OCT images and transfer learning with a VGG19 model to distinguish normal corneas from early and late-stage Fuchs cases. Kamiya et al.6 utilized a ResNet-18 network to discriminate between keratoconus and normal corneas and classify keratoconus severity using corneal color-coded maps obtained from anterior segment OCT images. Zéboulon et al.27 introduced a deep learning pipeline based on a U-Net for detecting various stages of corneal edema. Elsawy et al.7,30 fine-tuned and compared pretrained AlexNet, VGG16, and VGG19 networks for diagnosing dry eye syndrome, keratoconus, and Fuchs and later presented a multiscale neural network combining parallel resolution encoders with the pretrained layers of VGG16 to distinguish keratoconus and Fuchs from healthy corneas using OCT images. These studies have employed advanced CNN models, such as VGG or ResNet, as the backbone for processing OCT images or topography maps. While these models provide a wide range of image features due to their large number of parameters, they also pose challenges related to overfitting, particularly when the training data are limited. 
OCT cross-sectional images are commonly used as input for AI-based diagnostic analysis in the literature.31–33 We take a different approach by utilizing maps that measure specific properties of the cornea (thickness, shape, and reflectance). These maps can be generated from various OCT devices and scan patterns by following standardized protocols, which makes our method generalizable to OCT machines produced by different manufacturers. 
Our study used a lightweight CNN architecture consisting of just three convolutional layers and three fully connected layers. This design was intentional, aiming to balance the model complexity with the computational efficiency. We compared our lightweight CNN with ResNet-18,34 EfficientNetB0,35 DenseNet121,36 and Vision Transformer37 (see Supplementary Table S1). Our model demonstrated the highest computational efficiency, achieving faster training and inference times, and superior total balanced accuracy. Our results show that deep learning networks do not have to be overly complex to achieve high diagnostic accuracy. This is especially important in medicine, where practical feasibility and real-world implementation are key considerations. 
The descriptive statistics of corneal OCT parameters (Table 1) indicated significant differences in corneal thickness, epithelial thickness, anterior axial power, and anterior stroma reflectance between the normal and LASIK groups. Traditional multivariable statistical classification models, including logistic regression, random forest classifier, gradient boosting classifier, and support vector machine, were used to combine the OCT parameters to detect LASIK. Among the traditional models, the support vector machine achieved the highest total balanced accuracy (74.1%). In comparison, our proposed CNN model outperformed all of them with a total balanced accuracy of 90.2%. 
We compared different down-sampling map sizes of 8 × 8, 16 × 16, and 32 × 32 pixels, with which the CNN model achieved total balanced accuracies of 89.5% ± 4.4%, 90.2% ± 3.6%, and 90.3% ± 3.4%, respectively. Using 8 × 8 pixel maps resulted in a 0.7% decrease in accuracy, while increasing the size to 32 × 32 pixels improved the accuracy by only 0.1%. A down-sampling size of 16 × 16 pixels was therefore selected as the optimal balance between retaining critical information and ensuring computational efficiency. 
A limitation of our study is that only central 5-mm × 5-mm corneal maps were used as inputs for the CNN model. LASIK typically reshapes an optical zone of about 6 mm in diameter, with a transition zone extending to 8.0 to 9.0 mm. In hyperopic LASIK cases, peripheral corneal changes are more prominent than central alterations. This may explain why some hyperopic LASIK cases were misclassified as normal (2.3%; case example in Supplementary Fig. S4) and some normal cases (6%) were mistaken for hyperopic LASIK by the CNN model (Fig. 3). In future investigations, we will use larger corneal maps (up to 10 mm in diameter) to see whether they improve model performance. Another limitation is that our LASIK participants did not include mixed-astigmatism corrections, which are relatively rare. 
Photorefractive keratectomy (PRK) induces changes in corneal shape and thickness similar to those of LASIK. We anticipate that, with training and validation using post-PRK data, our CNN model will be effective in detecting PRK history. We plan to explore this in a future study. 
In conclusion, our lightweight CNN model provides a useful tool for LASIK history detection. 
Acknowledgments
Supported by the National Institutes of Health (Bethesda, MD) (grants R01EY029023, R21EY034330, T32EY023211, and P30EY010572), a research grant and equipment support from Visionix USA (formerly Optovue), a research grant from the Medical Research Foundation of Oregon, the Malcolm M. Marquis, MD Endowed Fund for Innovation, and an unrestricted grant from Research to Prevent Blindness (New York, NY, USA) to Casey Eye Institute, Oregon Health & Science University. 
Disclosure: J. Hong, Visionix USA (formerly Optovue) (F); A.A. Nanji, None; R.D. Stutzman, None; W.D. Chamberlain, None; X. Song, None; D. Huang, Visionix USA (F, P, R), Genentech (P, R), Intalight (F), Canon (F), Cylite (F); Y. Li, Visionix USA (F, P) 
References
Chia TMT, Jung HC. Cataract surgery following sequential myopic and hyperopic LASIK. Case Rep Ophthalmol. 2018; 9: 264–268. [CrossRef] [PubMed]
Tang M, Wang L, Koch DD, Li Y, Huang D. Intraocular lens power calculation after previous myopic laser vision correction based on corneal power measured by Fourier-domain optical coherence tomography. J Cataract Refract Surg. 2012; 38: 589–594. [CrossRef] [PubMed]
Wang L, Tang M, Huang D, Weikert MP, Koch DD. Comparison of newer intraocular lens power calculation methods for eyes after corneal refractive surgery. Ophthalmology. 2015; 122: 2443–2449. [CrossRef] [PubMed]
Li Y, Netto MV, Shekhar R, Krueger RR, Huang D. A longitudinal study of LASIK flap and stromal thickness with high-speed optical coherence tomography. Ophthalmology. 2007; 114: 1124–1132. [CrossRef] [PubMed]
Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172: 1122–1131.e1129. [CrossRef] [PubMed]
Kamiya K, Ayatsuka Y, Kato Y, et al. Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: a diagnostic accuracy study. BMJ Open. 2019; 9: e031313. [CrossRef] [PubMed]
Elsawy A, Abdel-Mottaleb M. A novel network with parallel resolution encoders for the diagnosis of corneal diseases. IEEE Trans Biomed Eng. 2021; 68: 3671–3680. [CrossRef] [PubMed]
Tahvildari M, Singh RB, Saeed HN. Application of artificial intelligence in the diagnosis and management of corneal diseases. Semin Ophthalmol. 2021; 36: 641–648. [CrossRef] [PubMed]
Li Z, Wang L, Wu X, et al. Artificial intelligence in ophthalmology: the path to the real-world clinic. Cell Rep Med. 2023; 4: 101095. [CrossRef] [PubMed]
Assaf JF, Yazbeck H, Reinstein DZ, et al. Automated detection of keratorefractive laser surgeries on optical coherence tomography using deep learning. medRxiv. 2024, URL: https://doi.org/10.1101/2024.03.08.24304001, Accessed March 9, 2025.
Li Y, Meisler DM, Tang M, et al. Keratoconus diagnosis with optical coherence tomography pachymetry mapping. Ophthalmology. 2008; 115: 2159–2166. [CrossRef] [PubMed]
Li Y, Shekhar R, Huang D. Corneal pachymetry mapping with high-speed optical coherence tomography. Ophthalmology. 2006; 113: 792–799.e792. [CrossRef] [PubMed]
Li Y, Tan O, Brass R, Weiss JL, Huang D. Corneal epithelial thickness mapping by Fourier-domain optical coherence tomography in normal and keratoconic eyes. Ophthalmology. 2012; 119: 2425–2433. [CrossRef] [PubMed]
Li Y, Chamberlain W, Tan O, Brass R, Weiss JL, Huang D. Subclinical keratoconus detection by pattern analysis of corneal and epithelial thickness maps with optical coherence tomography. J Cataract Refract Surg. 2016; 42: 284–295. [CrossRef] [PubMed]
Tang M, Shekhar R, Huang D. Mean curvature mapping for detection of corneal shape abnormality. IEEE Trans Med Imaging. 2005; 24: 424–428. [CrossRef] [PubMed]
Pavlatos E, Huang D, Li Y. Eye motion correction algorithm for OCT-based corneal topography. Biomed Opt Express. 2020; 11: 7343–7356. [CrossRef] [PubMed]
Do Carmo MP . Differential Geometry of Curves and Surfaces: Revised and Updated Second Edition. New York: Courier Dover Publications; 2016.
Tan O, Liu L, You Q, et al. Focal loss analysis of nerve fiber layer reflectance for glaucoma diagnosis. Transl Vis Sci Technol. 2021; 10: 9. [CrossRef] [PubMed]
Assaf JF, Hong J, Li Y, Huang D. Characterization of directional reflectance in corneal tissue: a comprehensive optical coherence tomography analysis. Invest Ophthalmol Vis Sci. 2024; 65: PB0057.
Fisher RA. The use of multiple measurements in taxonomic problems. Ann Eugenics. 1936; 7: 179–188. [CrossRef]
Breiman L. Random forests. Machine Learning. 2001; 45: 5–32. [CrossRef]
Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001; 29(5): 1189–1232. [CrossRef]
Cortes C, Vapnik V. Support-vector networks. Machine Learning. 1995; 20: 273–297.
Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019; 103: 167–175. [CrossRef] [PubMed]
Xu Z, Xu J, Shi C, et al. Artificial intelligence for anterior segment diseases: a review of potential developments and clinical applications. Ophthalmol Ther. 2023; 12: 1439–1455. [CrossRef] [PubMed]
Zeboulon P, Debellemaniere G, Bouvet M, Gatinel D. Corneal topography raw data classification using a convolutional neural network. Am J Ophthalmol. 2020; 219: 33–39. [CrossRef] [PubMed]
Zéboulon P, Ghazal W, Bitton K, Gatinel D. Separate detection of stromal and epithelial corneal edema on optical coherence tomography using a deep learning pipeline and transfer learning. Photonics. 2021; 8: 483. [CrossRef]
Issarti I, Consejo A, Jimenez-Garcia M, Hershko S, Koppen C, Rozema JJ. Computer aided diagnosis for suspect keratoconus detection. Comput Biol Med. 2019; 109: 33–42. [CrossRef] [PubMed]
Eleiwa T, Elsawy A, Ozcan E, Abou Shousha M. Automated diagnosis and staging of Fuchs' endothelial cell corneal dystrophy using deep learning. Eye Vis. 2020; 7: 44. [CrossRef]
Elsawy A, Eleiwa T, Chase C, et al. Multidisease deep learning neural network for the diagnosis of corneal diseases. Am J Ophthalmol. 2021; 226: 252–261. [CrossRef] [PubMed]
Elsawy A, Eleiwa T, Chase C, et al. Multidisease deep learning neural network for the diagnosis of corneal diseases. Am J Ophthalmol. 2021; 226: 252–261. [CrossRef] [PubMed]
Elsawy A, Abdel-Mottaleb M. A novel network with parallel resolution encoders for the diagnosis of corneal diseases. IEEE Trans Biomed Eng. 2021; 68: 3671–3680. [CrossRef] [PubMed]
Chase C, Elsawy A, Eleiwa T, Ozcan E, Tolba M, Abou Shousha M. Comparison of autonomous AS-OCT deep learning algorithm and clinical dry eye tests in diagnosis of dry eye disease. Clin Ophthalmol. 2021; 15: 4281–4289. [CrossRef] [PubMed]
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE; 2016: 770–778.
Tan M, Le Q. Efficientnet: rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning. Long Beach, California: PMLR; 2019: 6105–6114.
Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE; 2017: 4700–4708.
Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: transformers for image recognition at scale. arXiv:2010.11929; 2020. URL: https://arxiv.org/abs/2010.11929. Accessed March 9, 2025.