Improvement of Multiple Generations of Intraocular Lens Calculation Formulae with a Novel Approach Using Artificial Intelligence
Author Affiliations & Notes
  • John Ladas
    Wilmer Eye Institute, Baltimore, MD, USA
    Maryland Eye Consultants and Surgeons, Silver Spring, MD, USA
  • Donna Ladas
    Maryland Eye Consultants and Surgeons, Silver Spring, MD, USA
  • Shawn R. Lin
    Stein Eye Institute, Los Angeles, CA, USA
    Massachusetts Eye and Ear Infirmary, Boston, MA, USA
  • Uday Devgan
    Stein Eye Institute, Los Angeles, CA, USA
  • Aazim A. Siddiqui
    Albert Einstein College of Medicine, New York, NY, USA
  • Albert S. Jun
    Wilmer Eye Institute, Baltimore, MD, USA
  • Correspondence: John Ladas, 2101 Medical Park Dr, Suite 101, Silver Spring, MD 20902, USA. e-mail: jladas@iolcalc.com 
Translational Vision Science & Technology March 2021, Vol.10, 7. doi:https://doi.org/10.1167/tvst.10.3.7
Abstract

Purpose: Cataract surgery is the most common eye surgery. Appropriate optimization of intraocular lens (IOL) calculation formulae can result in improved patient outcomes. The purpose of this article is to describe a methodology for optimizing existing IOL formulae and developing hybrid formulae based on artificial intelligence (AI).

Methods: Preoperative biometric and postoperative outcomes data were obtained from medical records at a single institution. A numeric computing environment was used to analyze these data and refine IOL formulae using supervised learning AI. The mean absolute error of each IOL formula, with and without AI enhancement, was determined, as well as the number of eyes within 0.5 diopter of the predicted refraction.

Results: AI algorithms improved the mean absolute error, as well as the number of eyes within 0.5 diopter of the predicted refraction, for each of the formulae tested (P < 0.05).

Conclusions: A novel methodology is described that uses AI to improve existing IOL formulae. This methodology has the potential to improve clinical outcomes for cataract surgery patients.

Translational Relevance: Artificial intelligence can be used to improve existing IOL formulae.

Introduction
Cataract surgery is the most common surgical procedure performed in the United States and worldwide each year. Calculating the most accurate power of the intraocular lens (IOL) is a critical factor in optimizing patient outcomes. Unlike most medical interventions, the outcome of cataract surgery is precise, mathematical, and typically known within a few weeks of the procedure. 
IOL calculation formulae have evolved over multiple generations. For instance, the original SRK was a first-generation formula that was based on regression data only.1 Second-generation formulae included factors that scaled the prediction based on axial length (AL).1,2 Third-generation formulae such as the Holladay 1, Hoffer Q, and SRK/T are theoretical mathematical formulae that are based on both vergence optics and a prediction of the effective lens position of the IOL.1,3 Further generations of formulae such as the Barrett Universal II and Haigis used the measured anterior chamber depth (ACD) as a predictor of the effective lens position.4,5 The original Ladas super formula (LSF) combined multiple formulae to enhance accuracy by using the most appropriate formula for a specific eye.6 
The use of artificial intelligence (AI) in IOL calculations has been mentioned in the literature, but little has been published about the exact methodology. The concept of using a neural network came from Clarke and Burmeister more than 20 years ago.7 There was no specific description of the algorithm, and the main disadvantage, as they pointed out, was the requirement of “substantial computing power and memory.” More recently, other calculation methods that use some form of AI include the Hill-RBF,8 the most recent version of the Kane formula,9 the Sramka,10 and the Pearl-DGS.11 
Machine learning is a subset of AI that uses statistical methods to learn from outcome data. Supervised learning can be categorized as either classification or regression algorithms. Whereas classification algorithms attempt to “label” data, regression supervised learning attempts to use outcome data to predict or adjust outcomes to a desired result. 
Methods of supervised nonlinear regression machine learning include support vector regression (SVR), extreme gradient boosting, and neural networks, among others. SVR creates a predictive model based on both input variables and outcome data; the model then predicts the underlying nonlinear relationship between the input variables and the outcome as a continuous variable rather than a binary classification.12 Extreme gradient boosting (XGBoost) is a gradient boosting method that assembles many weak prediction models, typically decision trees.13 XGBoost combines many decision trees to reveal the relationship between the input variables and the output. An artificial neural network (ANN) is built from layers of nodes; each node is a nonlinear filter, and each layer consists of parallel nodes. Inputs pass through this network of nodes, and the weights of the connections are learned by training the ANN model.14 The combination of these nodes ultimately reveals the nonlinear relationship between the inputs and the output to predict the outcome. 
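As a generic illustration of these three regressor families (not the authors' code), the short sketch below fits each of them to the same synthetic nonlinear data using scikit-learn and the xgboost package.

```python
# Generic illustration (not the authors' code) of the three supervised
# nonlinear regressors named above, each fit to the same synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 3))                        # three input variables
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

models = {
    "SVR": SVR(kernel="rbf"),                                # kernel-based regression
    "XGBoost": XGBRegressor(n_estimators=50, max_depth=3),   # gradient-boosted trees
    "ANN": MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000),  # feed-forward network
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "sample prediction:", float(model.predict(X[:1])[0]))
```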
The purpose of the present article is to describe a methodology that can be used to improve existing formulae and to create hybrid AI formulae. 
Methods
After appropriate institutional review board approval was obtained for a retrospective study, the patient records for eyes undergoing uncomplicated cataract surgery at a single institution (Massachusetts Eye and Ear Infirmary) and receiving a single type of IOL (AcrySof SN60WF, Alcon, Ft. Worth, TX) were obtained. Billing data were used to identify 9185 routine cataract surgeries (current procedural terminology code 66984) occurring at Massachusetts Eye and Ear Infirmary between April 2016 and December 2018. Data were collected from the electronic medical record (Epic Systems Corporation, Verona, WI). The eyes were further selected based on implantation of 1 type of IOL (SN60WF) and a postoperative best corrected visual acuity of 20/25 or better. Eyes were also excluded from the database search if the words “posterior capsular rupture,” “tear,” “hole,” or “rent” were noted in the operative report. The resulting dataset included the following parameters: AL, keratometry, ACD, lens thickness, sex, age, and postoperative manifest refraction. The postoperative “actual” result was obtained from the medical record and was measured at least 1 month after cataract surgery. Eyes were also excluded for insufficient data or the inability to calculate an outcome because an input parameter was beyond what a particular formula would allow (e.g., a keratometry of >55 diopters). A total of 1391 eyes remained for analysis. 
Using the IOL power implanted from the record and the User Group for Laser Interference Biometry suggested A-constant for this specific lens and each formula, a predicted refractive outcome was obtained for each eye. The formulae used were the SRK, the Holladay 1, and the LSF. 
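To make this step concrete, the sketch below shows how a predicted refraction could in principle be backed out of a regression formula; the first-generation SRK formula is used only because it can be stated in one line. The A-constant value and the IOL-to-spectacle-plane conversion factor are illustrative assumptions, not the authors' implementation, which used each formula's full prediction.

```python
# Rough illustrative sketch only (not the authors' implementation): the
# classic SRK regression formula for the emmetropic IOL power, and an
# approximate conversion of the difference between implanted and
# emmetropic power into a predicted spectacle-plane refraction. The
# A-constant and the ~0.7 D-per-diopter conversion factor are assumptions.
def srk_predicted_refraction(axial_length_mm, mean_k_d, implanted_power_d,
                             a_constant=118.7, iol_to_spectacle=0.7):
    # SRK: power for emmetropia = A - 2.5*AL - 0.9*K
    emmetropic_power = a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_d
    # Implanting more power than needed leaves the eye myopic (negative
    # predicted refraction), and vice versa.
    return -(implanted_power_d - emmetropic_power) * iol_to_spectacle


# Example: an average eye receiving a 21.0 D lens.
print(srk_predicted_refraction(23.5, 43.5, 21.0))
```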
Next, supervised learning algorithms were developed to predict the error between each formula's predicted outcome and the actual outcome. The supervised learning algorithms tested were the SVR, XGBoost, and ANN. The predicted error from each of these algorithms was used to adjust the specific formula for each eye individually to produce a new predicted outcome. For instance, if an individual eye had a predicted outcome of −0.5 diopters using the SRK formula and the actual outcome was −1.0 diopters, the prediction error was −0.5 − (−1.0) = +0.5 diopters for that particular eye. This was done for each eye and each formula. Next, the variables of AL, ACD, and the average keratometry were used with each machine learning model to predict that error for each eye. 
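A minimal sketch of this error-prediction step is shown below, assuming a hypothetical pandas DataFrame with one row per eye; the column and file names are placeholders rather than the study's actual dataset, and SVR stands in for any of the three algorithms.

```python
# Minimal sketch of the error-prediction step, assuming a pandas DataFrame
# with one (hypothetical) row per eye; column names are illustrative.
import pandas as pd
from sklearn.svm import SVR

df = pd.read_csv("eyes.csv")  # hypothetical file of biometry and outcomes

# Prediction error = formula prediction - actual postoperative refraction,
# e.g., -0.5 - (-1.0) = +0.5 D for the example in the text.
df["srk_error"] = df["srk_predicted_refraction"] - df["actual_refraction"]

X = df[["axial_length", "mean_keratometry", "acd"]].to_numpy()  # AL, K, ACD
y = df["srk_error"].to_numpy()

# Fit an error model (SVR shown; XGBoost and ANN are handled the same way),
# then subtract the predicted error to obtain the AI-adjusted prediction.
# The train/test separation is described in the next paragraph.
model = SVR(kernel="rbf").fit(X, y)
df["srk_ai_prediction"] = df["srk_predicted_refraction"] - model.predict(X)
```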
The dataset was randomly separated into 10 equal parts. Nine of the 10 sets were used to train the algorithm, which was then tested on the remaining tranche. The software used was Python 3.7 with the scikit-learn package, and the variables given to refine the formula were AL, keratometry, and ACD. This was done sequentially 10 times, with each tranche used once as the testing set. 
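The 10-fold scheme might look roughly like the following sketch, which assumes NumPy arrays X (AL, keratometry, ACD), y (per-eye prediction error), and formula_pred (the unadjusted formula prediction) for the same eyes; it is illustrative, not the authors' code.

```python
# Sketch of the 10-fold scheme described above: each tranche serves once as
# the test set while the other nine train the error model.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR

kf = KFold(n_splits=10, shuffle=True, random_state=0)
adjusted_pred = np.empty_like(formula_pred, dtype=float)

for train_idx, test_idx in kf.split(X):
    # Train on nine tranches, then predict the error on the held-out tranche.
    model = SVR(kernel="rbf", C=1, epsilon=0)
    model.fit(X[train_idx], y[train_idx])
    # Subtract the predicted error from the formula's prediction to obtain
    # the AI-enhanced prediction for the held-out eyes.
    adjusted_pred[test_idx] = formula_pred[test_idx] - model.predict(X[test_idx])
```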
The hyperparameters for each model were as follows. For the SVR: C = 1, epsilon = 0, and the kernel function was the radial basis function. For XGBoost: max depth = 3, number of estimators = 30, colsample_bytree = 1, and scale_pos_weight = 0.8. For the ANN, the hidden layers were set at (10, 10, 10); the ReLU activation function was used, with the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm as the solver. The models were tuned using a grid search within a specified subset of hyperparameters. If the result reached the edge of the initial range, the range was expanded and the grid search was applied again. Overfitting of the models was prevented by applying five-fold cross-validation within the training dataset. The Shapiro-Wilk test was applied to verify a normal distribution of the data. 
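The sketch below instantiates the three error models with the hyperparameters listed above and shows an illustrative grid search with five-fold cross-validation; the grid values are placeholders rather than the authors' exact search space.

```python
# The three error models with the hyperparameters reported above, plus an
# illustrative grid search with five-fold cross-validation.
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

svr = SVR(kernel="rbf", C=1, epsilon=0)
xgb = XGBRegressor(max_depth=3, n_estimators=30,
                   colsample_bytree=1, scale_pos_weight=0.8)
ann = MLPRegressor(hidden_layer_sizes=(10, 10, 10),
                   activation="relu", solver="lbfgs")

# Example grid search for the SVR; if the best value fell on the edge of
# the grid, the range would be widened and the search repeated.
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "epsilon": [0.0, 0.05, 0.1]},  # placeholder grid
    cv=5,                                   # five-fold CV within the training set
    scoring="neg_mean_absolute_error",
)
# search.fit(X_train, y_train)  # training tranche from the 10-fold split
```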
The mean absolute error (AE) ± standard deviation, as well as the percentage of eyes within 0.5 and 1.0 diopter of the predicted refraction, was calculated for each formula and for the corresponding supervised learning hybrid formula. 
Statistical analysis was performed in Excel. The Wilcoxon signed-rank test was used to compare the mean AE among the various methods. The Bonferroni correction was used to control for multiple comparisons. A P value of less than 0.05 was considered statistically significant. 
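A sketch of these outcome metrics and the paired testing is shown below; the array names and the number of comparisons used for the Bonferroni correction are illustrative assumptions.

```python
# Sketch of the outcome metrics and paired testing described above. Assumes
# NumPy arrays `actual`, `baseline_pred`, and `adjusted_pred` holding the
# postoperative refraction and the two predictions for the same eyes.
import numpy as np
from scipy.stats import wilcoxon

baseline_ae = np.abs(baseline_pred - actual)
adjusted_ae = np.abs(adjusted_pred - actual)

print(f"Mean AE baseline: {baseline_ae.mean():.3f} +/- {baseline_ae.std():.3f}")
print(f"Mean AE adjusted: {adjusted_ae.mean():.3f} +/- {adjusted_ae.std():.3f}")
print(f"Within 0.5 D: {np.mean(adjusted_ae <= 0.5):.1%}")
print(f"Within 1.0 D: {np.mean(adjusted_ae <= 1.0):.1%}")

# Paired comparison of absolute errors; with n comparisons, each P value
# must stay below 0.05 / n after the Bonferroni correction (n here is an
# illustrative count, e.g., 3 formulae x 3 algorithms).
n_comparisons = 9
stat, p = wilcoxon(baseline_ae, adjusted_ae)
print("Significant after Bonferroni:", p < 0.05 / n_comparisons)
```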
Results
Patient demographics are shown in the Table. Figure 1 shows the mean AE for each formula without AI enhancement. As expected, the mean AE was progressively better for each generation of formula (SRK = 0.499 ± 0.012; Holladay 1 = 0.392 ± 0.013; LSF = 0.355 ± 0.017; P < 0.05). Further, the percentage of eyes within 0.5 diopters of the predicted refraction improved with each generation (SRK = 62%, Holladay 1 = 72%, and LSF = 76%). 
Table. Patient Demographics
Figure 1. Mean AE for each baseline formula. *P < 0.05.
Figures 2A–C show the baseline mean AE for each formula compared with its own AI-enhanced version. Each type of supervised learning algorithm statistically improved the mean AE for each formula; the mean AE was significantly lower than that of the baseline formula for each algorithm tested (SRK baseline = 0.499 ± 0.012, SRK + SVR = 0.325 ± 0.023, SRK + XGB = 0.314 ± 0.022, SRK + ANN = 0.439 ± 0.026; Holladay 1 baseline = 0.392 ± 0.013, Holladay 1 + SVR = 0.307 ± 0.021, Holladay 1 + XGB = 0.309 ± 0.022, Holladay 1 + ANN = 0.326 ± 0.022; LSF baseline = 0.355 ± 0.017, LSF + SVR = 0.311 ± 0.021, LSF + XGB = 0.310 ± 0.024, LSF + ANN = 0.319 ± 0.024). There was no instance in which the improvement with this dataset was not statistically significant. This result was also true for the median AE. Figure 3 shows the percentage of eyes within 0.5 diopter of the predicted refraction for each formula, both with and without each supervised learning algorithm. The percentage of eyes within 0.5 diopter of the predicted refraction for the SRK increased from 61% to a maximum of 81% with the XGB algorithm (Fig. 3). The Holladay 1 increased from 72% to 82% with both the SVR and XGB algorithms. The LSF increased from 76% to 82% with the XGB algorithm. 
Figure 2. (A–C) Mean AE for (A) the SRK formula, (B) the Holladay 1 formula, and (C) the LSF, each with and without each supervised learning algorithm. *P < 0.05.
Figure 3. Eyes within 0.5 diopters of the predicted refraction with the baseline formula and each supervised learning algorithm.
Discussion
Our baseline data are consistent with outcomes recently reported in the literature for current theoretical formulae. In particular, Darcy et al.9 reported a mean AE for the Holladay 1 formula of 0.397 diopter, with 70% and 94% of eyes within 0.5 and 1.0 diopter, respectively. This outcome is similar to our results demonstrating a mean AE of 0.392 and 72% and 95% of eyes within 0.5 and 1.0 diopter of prediction with the standard Holladay 1 formula. The range of mean AEs in the study by Darcy et al.9 was 0.377 to 0.410 diopter across all of the theoretical formulae. The mean AE for the Kane formula was 0.377 and for the Barrett Universal II was 0.390. The greatest percentage of eyes within 0.5 diopter of the predicted refraction with any formula was 72%. The LSF was not included in that particular analysis. 
When we applied our unique supervised learning algorithm to this dataset, we were able to demonstrate a statistically significant decrease in the mean AE and median AE with each formula and each supervised learning method. Further, the number of eyes within 0.5 diopter of prediction increased for each formula when our supervised learning algorithm was applied. 
Additional variables have been shown to improve outcomes when accounted for individually. For instance, the Wang–Koch adjustment for AL has been applied to eyes greater than 25 mm.15 However, this important variable does not occur in a vacuum and is likely intimately related to other variables such as the ACD. Further, it is doubtful that any AL adjustment should start and stop at exactly 25 mm. Others have proposed incorporating additional variables to account for a multitude of factors. In fact, the Holladay 2 formula released in 1992 includes additional variables of lens thickness, white-to-white distance, preoperative refraction, and age.16 Other potential variables that have been suggested to have an effect on IOL prediction include the equatorial lens position, age, race, gender, aphakic refraction, relative ratio of various eye segments, C-factor, posterior corneal power, corneal thickness, specific lens design, and the exact power of the IOL.1,17–22 Again, these variables do not occur in a vacuum and may be interrelated. Deep learning can be used to weigh the effect of multiple variables on reaching a desired outcome. These advancements and adjustments are unlikely to be conceived as single variables or discrete formulae, and progress using such approaches likely would be inefficient compared to machine learning methods. 
The approach described in this article contrasts with forms of deep learning, such as the Hill radial basis function, that attempt to back-calculate an algorithm from a fixed dataset. As mentioned, there are instances in which the Hill-RBF computes values that are unreliable or out of bounds where there is a paucity of data. The methodology described herein is different in that there is a baseline formula that is then refined using machine learning. Further, a formula like the Hill-RBF would not be able to add additional variables, even if they were deemed important, unless it started with a new dataset. The methodology we describe could add other variables and be tested in an ongoing manner to determine whether they improve accuracy. Other investigators have stated that their formulae use elements of AI and machine learning in their algorithms, but the descriptions are extremely limited. Perhaps the best described is that by Sramka et al.,10 which used the clinical result and machine learning to modify the IOL power and predicted outcome. They were able to demonstrate an improvement in the prediction error. 
From a theoretical standpoint, it is also interesting, and perhaps predictable, that each algorithm we tested improved each formula to a similar threshold with this particular dataset and the variables that were used. Indeed, each algorithm was able to predict and adjust each formula's error individually for each eye in a way that could never be written as a single mathematical formula. 
In conclusion, this article describes a methodology to improve existing IOL calculation formulae using machine learning. Future work from this group will look to demonstrate this on additional formulae with the inclusion of additional variables. 
Acknowledgments
The authors thank Xiaonan Zhang for his help in programming and analyzing the machine learning algorithms. 
Disclosure: J. Ladas, Advanced Euclidean Solutions, LLC (F); D. Ladas, None; S.R. Lin, None; U. Devgan, Advanced Euclidean Solutions, LLC (F); A.A. Siddiqui, Advanced Euclidean Solutions, LLC (F); A.S. Jun, Advanced Euclidean Solutions, LLC (F) 
References
1. Olsen T. Calculation of intraocular lens power: a review. Acta Ophthalmol Scand. 2007; 85(5): 472–485.
2. Olsen T, Thom K, Corydon L. Theoretical versus SRK I and SRK II calculation of intraocular lens power. J Cataract Refract Surg. 1990; 16(2): 217–225.
3. Holladay JT, Prager TC, Chandler TY, Musgrove KH, Lewis JW, Ruiz RS. A three-part system for refining intraocular lens power calculations. J Cataract Refract Surg. 1988; 14(1): 17–24.
4. Barrett GD. An improved universal theoretical formula for intraocular lens power prediction. J Cataract Refract Surg. 1993; 19(6): 713–720.
5. Haigis W. Strahldurchrechnung in Gauß’scher Optik zur Beschreibung des Systems Brille-Kontaktlinse-Hornhaut-Augenlinse (IOL). In: Schott K, Jacobi KW, Freyler H, eds. Kongreß d. Deutschen Ges. f. Intraokularlinsen Implantation. Berlin, Germany: Springer; 1991: 233–246.
6. Ladas JG, Siddiqui AA, Devgan U, Jun AS. A 3-D “super surface” combining modern intraocular lens formulas to generate a “super formula” and maximize accuracy. JAMA Ophthalmol. 2015; 133(12): 1431–1436.
7. Clarke GP, Burmeister JB. Comparison of intraocular lens computations using a neural network versus the Holladay formula. J Cataract Refract Surg. 1997; 23(10): 1585–1589.
8. Haag-Streit AG. Hill-RBF Method. Released October 2017/V2.0. Koeniz, Switzerland: Haag-Streit AG. Available at: https://www.haag-streit.com/fileadmin/Haag-Streit_Diagnostics/biometry/EyeSuite_IOL/Brochures_Flyers/White_Paper_Hill-RBF_Method_20160819_2_0.pdf. Accessed December 16, 2019.
9. Darcy K, Gunn D, Tavassoli S, Sparrow J, Kane JX. Assessment of the accuracy of new and updated intraocular lens power calculation formulas in 10 930 eyes from the UK National Health Service. J Cataract Refract Surg. 2020; 46(1): 2–7.
10. Sramka M, Slovak M, Tuckova J, Stodulka P. Improving clinical refractive results of cataract surgery by machine learning. PeerJ. 2019; 7: e7202.
11. Pearl-DGS formula for IOL power calculation. Available at: https://gatinel.com/en/recherche-formation/biometrie-oculaire-calcul-dimplant/pearl-dgs-formula-for-iol-power-calculation/. Accessed July 8, 2020.
12. Drucker H, Burges C, Kaufman L, Smola A, Vapnik V. Support vector regression machines. Adv Neural Inf Process Syst. 1996; 9: 155–161.
13. Chen T, Guestrin C. XGBoost: a scalable tree boosting system. In: Krishnapuram B, Shah M, Smola A, Aggarwal C, Shen D, Rastogi R, eds. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; August 13–17, 2016; San Francisco, CA. New York, NY: ACM; 2016: 785–794.
14. Zell A. Simulation Neuronaler Netze [Simulation of Neural Networks]. 1st ed. Boston, MA: Addison-Wesley; 1994: 73.
15. Wang L, Holladay JT, Koch DD. Wang-Koch axial length adjustment for the Holladay 2 formula in long eyes. J Cataract Refract Surg. 2018; 44(10): 1291–1292.
16. Mahdavi S, Holladay J. IOLMaster 500 and integration of the Holladay 2 formula for intraocular lens calculations. Eur Ophthalm Rev. 2011; 5(2): 134–135.
17. Melles RB, Holladay JT, Chang WJ. Accuracy of intraocular lens calculation formulas. Ophthalmology. 2018; 125(2): 169–178.
18. Olsen T. Prediction of the effective postoperative (intraocular lens) anterior chamber depth. J Cataract Refract Surg. 2006; 32(3): 419–424.
19. Cooke DL, Cooke TL. Approximating sum-of-segments axial length from a traditional optical low-coherence reflectometry measurement. J Cataract Refract Surg. 2019; 45(3): 351–354.
20. Olsen T, Corydon L, Gimbel H. Intraocular lens power calculation with an improved anterior chamber depth prediction algorithm. J Cataract Refract Surg. 1995; 21(3): 313–319.
21. Yoo YS, Whang WJ, Hwang KY. Use of the crystalline lens equatorial plane (LEP) as a new parameter for predicting postoperative IOL position. Am J Ophthalmol. 2019; 198: 17–24.
22. Olsen T. The Olsen formula. In: Shammas HJ, ed. Intraocular Lens Power Calculations. Thorofare, NJ: Slack; 2004: 27–38.