A Deep Learning–Based Framework for Accurate Evaluation of Corneal Treatment Zone After Orthokeratology
Author Affiliations & Notes
  • Yong Tang
    School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
  • Zhao Chen
    Aier School of Ophthalmology, Central South University, Changsha, China
  • Weijia Wang
    School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
  • Longbo Wen
    Aier School of Ophthalmology, Central South University, Changsha, China
  • Linjing Zhou
    School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
  • Mao Wang
    Information Center, Aier Eye Hospital Group, Changsha, China
  • Fan Tang
    Information Center, Aier Eye Hospital Group, Changsha, China
  • He Tang
    School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
  • Weizhong Lan
    Aier School of Ophthalmology, Central South University, Changsha, China
    Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
  • Zhikuan Yang
    Aier School of Ophthalmology, Central South University, Changsha, China
    Hunan Province Optometry Engineering Technology Research Center, Changsha, China
  • Correspondence: He Tang, School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China. e-mail: tanghe@uestc.edu.cn 
  • Weizhong Lan, Aier School of Ophthalmology, Central South University, Changsha 410000, China. e-mail: lanweizhong@aierchina.com 
  • Footnotes
    *  YT and ZC contributed equally to the work and therefore should be treated as co-first authors.
Translational Vision Science & Technology December 2021, Vol. 10, 21. doi: 10.1167/tvst.10.14.21
Abstract

Purpose: Given its robust effectiveness in inhibiting myopia progression, orthokeratology has gained increasing popularity worldwide. However, identifying the boundary and the center of the reshaped corneal area (i.e., the treatment zone) remains the main challenge in evaluating the performance of orthokeratology. Here we present automated deep learning algorithms to address these challenges.

Methods: A total of 6328 corneal topographical maps, including 2996 axial subtractive maps and 3332 tangential subtractive maps, were collected from 2044 myopic patients who received orthokeratology. The boundary and the center of the treatment zones were annotated by experts as ground truths using axial subtractive maps and tangential subtractive maps, respectively. The algorithms based on neural network structures of fully convolutional networks (FCNs) and convolutional neural networks (CNNs) were developed to automatically identify the boundary and the center of the treatment zone, respectively.

Results: The FCN algorithm identified the treatment zone boundaries with an intersection over union (IoU) of 0.90 ± 0.06 (mean ± SD; range, 0.60–0.97). The CNN algorithm identified the treatment zone centers with an average deviation of 0.22 ± 0.22 mm (range, 0.01–1.66 mm).

Conclusions: These results show that a deep learning–based solution can provide an automatic and accurate tool for the two main challenges of evaluating orthokeratology.

Translational Relevance: Deep learning in orthokeratology can shorten evaluation time while maintaining accuracy in clinical practice, enabling clinicians to help more patients daily.

Introduction
The orthokeratology (OK) lens is a rigid reverse-geometry contact lens designed to temporarily correct myopia by reshaping the cornea.1 Accumulating evidence has shown the efficacy of the OK lens in inhibiting myopia progression,2 and the lens has gained increasing popularity worldwide.3 A standard OK lens has four curves, from the center to the periphery: a base curve, a reverse curve, an alignment curve, and a peripheral curve. Of these, the radius of the base curve is designed to be larger than that of the central cornea, so that treatment is achieved by flattening the cornea; the flattened area is designated the treatment zone. The location and size of the treatment zone are two critical parameters in evaluating the performance of OK lens treatment. In practice, they are determined by examining the subtractive map between the topographies measured before and after OK lens treatment using corneal topographers. Significant dislocation of the treatment zone (i.e., deviation of its center from the pupil center) or a too small treatment zone leads to poor visual acuity, visual disturbances such as glare and double vision, and a poor antimyopia effect.4–8 An imperfect treatment zone usually necessitates multiple adjustments of the lens's parameters or even ordering a new lens.
Despite its importance, there is no standard technique for assessing the center and the boundary of the treatment zone. In general clinical practice, clinicians have to make decisions based on their experience. This decision-making procedure is subjective and inevitably introduces variation among clinicians, and ambiguous cases are particularly challenging for less experienced clinicians. In recent years, several scholars have attempted to develop semiobjective techniques, based on third-party commercial software, to facilitate the decision-making procedure.9–11 However, these techniques are applicable only when the topography forms a complete, continuous "bull's-eye" pattern (i.e., all-round contact between the reverse curve and the cornea and a well-centered location), which is not always the case. Another significant drawback of these techniques is that they are time-consuming: clinicians must outline the contour of the treatment zone manually and then perform additional analysis using third-party software.
Thanks to the remarkable development of deep learning algorithms, recent years have seen significant advances in artificial intelligence (AI).12 As a branch of AI, deep learning has sparked tremendous research interest and led to applications in many fields, including health care and medical diagnosis.13,14 The adoption of deep learning in medical image analysis, for instance, has demonstrated competitive performance in a range of tasks, such as classification, detection, segmentation, registration, and localization.15–18 Deep learning has also shown exciting potential in ophthalmology based on optical coherence tomography and fundus photographs.16,19,20 In this study, we proposed and developed a deep learning approach to evaluate the treatment effect after orthokeratology. Specifically, we trained a deep neural network with an encoder–decoder structure to identify the boundary of the treatment zone and a convolutional neural network to determine the center of the treatment zone. Both algorithms achieved promising performance in validation data sets. With this approach, the critical evaluation process can be conducted fully automatically.
Methods
Image Data Set
The overall flowchart of this study is shown in Figure 1. First, 6328 anonymized corneal topography maps were collected from Changsha Aier Eye Hospital, Aier Eye Hospital Group. The image data set comprised axial subtractive maps (n = 2996) and tangential subtractive maps (n = 3332) from patients who received orthokeratology treatment between 2015 and 2018. The corneal topography of these patients was measured using the Pentacam HR tomographer (Oculus GmbH, Wetzlar, Germany), a noninvasive anterior segment tomographer based on rotating Scheimpflug technology. For each eye, corneal topographies were taken at two stages: the first before the OK lens was worn (t = 0 days) and the second approximately 3 months (t = 90 days) after the start of OK lens treatment, when the corneal parameters had stabilized. The subtractive map was then generated by subtracting the second map from the first and was extracted as an image file for later annotation. Subtractive maps show the practitioner the effect of orthokeratology on the corneal surface. Two types of subtractive maps were employed for analysis. Axial subtractive maps reflect the changes of optical power on the axial corneal surface21,22 and were used to determine the area of the treatment zone. Tangential subtractive maps offer a better representation of the changes in the cornea23,24 and were used to determine the centration of the treatment zone. All maps were extracted as images of 300 × 300 pixels, corresponding to 10.63 × 10.63 mm; the diameter of the corneal topography was 254 pixels, or 9 mm. The study was approved by the institute's ethical committee (AIER2018IRB27). Given the nature of the data and the study design, participants' informed consent was not required.
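For concreteness, this map-generation step can be sketched in a few lines of Python. This is an illustration under our own assumptions, not the study's pipeline: the two topographies are taken to be aligned 300 × 300 NumPy arrays, and the function and constant names are ours.

```python
import numpy as np

# Scale reported above: the image is 300 px = 10.63 mm,
# and the 9-mm topography diameter spans 254 px.
MM_PER_PIXEL = 9.0 / 254  # ~0.0354 mm per pixel

def subtractive_map(topo_t0: np.ndarray, topo_t90: np.ndarray) -> np.ndarray:
    """Subtractive map: the post-treatment (t = 90 d) map subtracted
    from the baseline (t = 0) map, as described in the text.

    Both inputs are assumed to be aligned 300 x 300 arrays of corneal power.
    """
    assert topo_t0.shape == topo_t90.shape == (300, 300)
    return topo_t0 - topo_t90
```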
Figure 1.
 
Overall flowchart of this study. A total of 6328 corneal maps from 2044 patients were collected, including 2996 axial subtractive maps and 3332 tangential subtractive maps. All maps were manually annotated by a group of experienced experts. The treatment zones and the treatment area centers were identified for axial subtractive maps and tangential subtractive maps, respectively. Afterward, deep learning models of FCN and CNN were developed and trained in the training data sets independently for the tasks of identifying treatment zones and centers, respectively. Last, the trained models were further evaluated in independent validation data sets.
Image Annotation
To provide the ground truths for training and validating the deep learning algorithms, we invited clinicians to manually annotate all images. First, the image files of axial subtractive maps were manually segmented by three experienced clinicians independently to indicate the boundary of the treatment zone. Then the results were reviewed by another expert, who is one of the authors, to ensure data quality. For a given axial subtractive map, one segmentation result was randomly chosen from the qualified results to serve as the ground truth. Similarly, three experienced clinicians were invited to identify the center of the treatment zone in the images of tangential subtractive maps independently. The ground truth of the center was obtained by averaging the positions of the three annotations to eliminate bias. The annotations were conducted using a tool developed in house. 
Deep Learning Methods
The automatic analysis of the OK treatment maps consisted of two tasks, namely, the identification of the treatment zone in axial subtractive maps and the identification of the centers in tangential subtractive maps. Since these two tasks were independent, we developed two different deep learning algorithms to achieve the objectives of these two tasks. Furthermore, the two algorithms were trained using two separate data sets of axial subtractive maps and tangential subtractive maps to identify the boundaries and the centers of the treatment zones, respectively. 
Identification of the Treatment Zone Boundary
The first task was to automatically identify the boundary of the treatment zone in an axial subtractive map. A semantic segmentation approach was used for pixel-level classification, in which every pixel of a given axial subtractive image was labeled as one of two classes: the treatment zone or the remaining nonflattened region. Deep learning models have achieved state-of-the-art performance in medical image semantic segmentation.17 In this study, we adopted a deep neural network based on the fully convolutional network (FCN) structure, comprising an encoder and a decoder.25–27 The encoder performed downsampling to convert the original image to a smaller size; it was similar to a standard convolutional neural network (CNN) with convolutional and max-pooling layers. Down through the encoder, multiple levels of feature representations were extracted to learn context information. In the decoder, the features were expanded back into a larger image by upsampling, with transposed convolutions used to recover localization information. By concatenating the corresponding layers of the encoder and decoder, information at both higher and lower resolutions was used to achieve precise segmentation at the pixel level. As a result, each pixel of the output image was labeled as treatment zone or nontreatment zone. Figure 2A illustrates a detailed schematic diagram of the encoder–decoder architecture.
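The following PyTorch sketch illustrates this kind of encoder–decoder segmenter. It uses a ResNet-101 trunk, as in Figure 2A, but the decoder widths are illustrative, the encoder–decoder skip connections are omitted for brevity, and it is written against a recent PyTorch/torchvision rather than the 1.1.0 used in the study, so it approximates rather than reproduces the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class FCNSegmenter(nn.Module):
    """Encoder-decoder FCN for 2-class segmentation: treatment zone vs. background."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        resnet = torchvision.models.resnet101(weights=None)
        # Encoder: the ResNet-101 trunk without its average-pool/classification head.
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])  # (B, 2048, H/32, W/32)
        # Decoder: transposed convolutions upsample features back toward input size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2048, 512, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(512, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.decoder(self.encoder(x))
        # Resize to the exact input resolution (300 is not a multiple of 32).
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

# Usage: mask = FCNSegmenter()(torch.randn(1, 3, 300, 300)).argmax(dim=1)
```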
Figure 2.
 
Schematics for the structures of deep learning algorithms. (A) Neural network of FCN with ResNet-101 blocks for treatment zone identification. The structure comprises encoder and decoder with multiple CNN layers and ResNet blocks. The encoder transforms the input image into an abstract representation, which the decoder maps back into a mask image. The mask indicates the pixel-wise segmentation of the treatment zone. (B) Neural network structure of CNN layers for treatment zone center identification. The CNN layers extract abstract patterns progressively from the input image. Max-pooling layers further compress the representation into lower dimensions; finally, an output layer generates the two coordinates of the center position in the image.
Identification of the Treatment Zone Center
In the second task, identifying the treatment zone center, the output for an input image is a tuple of the two coordinates of the center. Therefore, we needed a neural network structure capable of extracting features by dimensionality reduction. In this study, we designed a four-layer CNN to calculate the coordinates of the treatment zone center. In each convolutional layer, a convolution was performed using a kernel in a sliding-window fashion to scan the input, followed by a rectified linear unit activation to introduce nonlinearity and a max-pooling operation to shrink the dimensionality. The four convolutional layers were applied sequentially before the final two coordinates were output to indicate the treatment zone center. Figure 2B presents the detailed structure of the CNN model.
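A minimal PyTorch sketch of such a coordinate regressor is shown below; the channel widths and the pooling head are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CenterRegressor(nn.Module):
    """Four blocks of conv -> ReLU -> max-pool, then a regression head that
    outputs the (x, y) pixel coordinates of the treatment zone center."""

    def __init__(self):
        super().__init__()
        widths = [3, 16, 32, 64, 128]  # illustrative channel progression
        layers = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))  # (batch, 2): predicted (x, y)

# Training would minimize the gap to the expert-averaged center, e.g.:
# loss = nn.MSELoss()(CenterRegressor()(images), true_centers)
```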
We developed the deep learning models in Python (version 3.7.3) using the open-source deep learning library PyTorch (version 1.1.0). Additional Python libraries used in this study were NumPy (version 1.16.2), Pandas (version 0.24.2), and Matplotlib (version 3.0.3). The models were trained and evaluated on an Nvidia Tesla P40 graphics processing unit (driver version 418.67; Nvidia, Santa Clara, CA, USA) with the CUDA toolkit (version 10.1.105).
Algorithm Development
The boundary and the center of the treatment zone were annotated pixelwise by experts before being used to develop the algorithms. Both the axial and tangential subtractive map data sets were randomly split into a training set (90%) and a validation set (10%) for model training and evaluation, respectively. Deep learning networks of the FCN structure were then trained to identify the boundary of the treatment zone from the axial subtractive maps, while the CNN models were trained in parallel to identify the treatment zone centers from the tangential subtractive maps.
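A minimal sketch of the 90/10 split follows; the dummy tensors stand in for either map collection, and the fixed random seed is our own addition for reproducibility (again written against a recent PyTorch).

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in data: 100 dummy 300x300 RGB maps with 2-coordinate labels.
dataset = TensorDataset(torch.randn(100, 3, 300, 300), torch.zeros(100, 2))

n_train = int(0.9 * len(dataset))  # 90% training, 10% validation
train_set, val_set = random_split(
    dataset,
    [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(0),
)
```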
Evaluation of Treatment Zone Boundary Identification
To evaluate the performance of the deep learning models in identifying the treatment zone boundary, we adopted the evaluation metric of intersection over union, IoU = TP/(TP + FP + FN), where TP is true positives, FP is false positives, and FN is false negatives. The IoU was calculated from the area predicted by the proposed model and the ground truth annotated by the human expert for each validation sample. An IoU closer to 1 indicates better performance; therefore, the main objective in developing the segmentation algorithm was to maximize the IoU. We report the average IoU over all samples in the validation set as the performance of a given model. Besides IoU, in line with the segmentation literature, we also report the Dice similarity coefficient, DSC = 2TP/(2TP + FP + FN), as a secondary metric describing the overlap between the ground truth and the prediction.18,28,29 To visualize the results of treatment zone identification, we drew the boundaries of the ground truth area and the predicted area on the same axial subtractive map.
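For binary masks, both metrics reduce to a few lines of NumPy (a sketch; the function name is ours).

```python
from typing import Tuple
import numpy as np

def iou_and_dsc(pred: np.ndarray, truth: np.ndarray) -> Tuple[float, float]:
    """IoU = TP/(TP + FP + FN) and DSC = 2TP/(2TP + FP + FN) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)   # pixels labeled treatment zone in both
    fp = np.sum(pred & ~truth)  # predicted but not annotated
    fn = np.sum(~pred & truth)  # annotated but missed
    iou = tp / (tp + fp + fn)   # assumes at least one positive pixel exists
    dsc = 2 * tp / (2 * tp + fp + fn)
    return float(iou), float(dsc)
```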
Evaluation of Treatment Zone Center Identification
Predicting the treatment zone center amounted to finding its two coordinates in the tangential subtractive map. Given the ground truth position annotated by the experts and the position predicted by the proposed CNN model, the Euclidean distance between the two centers was a natural measure of the deviation. The deviation was measured in pixels in the image and then converted into a physical distance in millimeters. The optimization target of the model was to minimize the gap between the predicted center and the ground truth center. As described above, the ground truth center was obtained by averaging the centers annotated by the three experts. To visualize the results of treatment zone center identification, the two centers were plotted on the same tangential subtractive map.
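A sketch of this evaluation, using the pixel scale given earlier (254 pixels = 9 mm); the helper names are ours.

```python
import numpy as np

MM_PER_PIXEL = 9.0 / 254  # the 9-mm topography diameter spans 254 pixels

def consensus_center(expert_centers) -> np.ndarray:
    """Ground truth center: the mean of the three experts' (x, y) annotations."""
    return np.asarray(expert_centers, dtype=float).mean(axis=0)

def deviation_mm(pred_xy, truth_xy) -> float:
    """Euclidean distance between predicted and ground truth centers, in mm."""
    d_px = float(np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(truth_xy, float)))
    return d_px * MM_PER_PIXEL

# Sanity check: a 6.32-px deviation corresponds to ~0.22 mm,
# matching the mean validation deviation reported below.
```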
Results
Data Description
A total of 6328 maps of 2044 patients treated with the OK lens were included in this study. The average age at the initiation of OK lens wear was 13.56 ± 2.78 years, ranging from 8 to 23 years. Among them, 909 were male (44.47%) and 1135 were female (55.53%). As a result, 2996 axial subtractive maps and 3332 tangential subtractive maps were obtained and anonymized to develop the deep learning algorithms. 
Performance of Treatment Zone Boundary Identification
For the identification of the treatment zone boundary in axial subtractive maps, the pairwise interclinician IoUs were 0.54 ± 0.11, 0.50 ± 0.11, and 0.83 ± 0.09 (all P < 0.01). Although the three experienced clinicians showed significant agreement, only one segmentation result was randomly chosen as the ground truth for each axial subtractive map, to reduce possible reader bias. Based on this ground truth data set, we first trained the FCN model on the training set and then evaluated it on the independent validation set. Training performance was a mean ± SD IoU of 0.92 ± 0.04 and a DSC of 0.98 ± 0.02. The results of the segmentation model on the validation set were very promising, with an IoU of 0.90 ± 0.06 and a DSC of 0.94 ± 0.04. The distribution of IoU values is illustrated in Figure 3A. These results indicate that the FCN deep learning structure can accurately identify the treatment zone boundary in the axial subtractive map. Figure 4 shows examples of the best and worst segmentation of the treatment zone.
Figure 3.
 
Histogram of the performance in validation data sets. (A) Histogram of IoU for treatment zone identification in the validation set of axial subtractive maps. Larger IoU values indicate good segmentation of treatment zones. (B) Histogram of deviation (millimeters) for treatment zone center identification in the validation set of tangential subtractive maps. Small values of deviation indicate good identification of treatment zone centers.
Figure 4.
Examples of treatment zone identification in the axial subtractive maps with the best (A) and the worst (B) performance. The white contour was annotated by the AI; the black contour was annotated by the human expert. (A) Good segmentation of the treatment zone, with the highest IoU of 0.96. (B) Poor segmentation of the treatment zone, with the lowest IoU of 0.66, due to the relatively large decentration of the OK lens and the resulting unclear treatment zone boundary. Cases with this type of corneal change usually lead to borderline unacceptable visual quality and require adjustment of the lens parameters to achieve better centration.
Performance of Treatment Zone Center Identification
Similarly, we trained and evaluated the CNN model for treatment zone center identification using the training and validation sets of tangential subtractive maps, respectively. In the training stage, the CNN model achieved an average deviation of 5.33 ± 4.42 pixels (0.19 ± 0.16 mm) between the ground truth and predicted centers. The performance on the validation set was 6.32 ± 6.23 pixels, or 0.22 ± 0.22 mm. The distribution of deviations in the validation set is plotted in Figure 3B. These results demonstrate that the deep learning model can identify the treatment zone center with high accuracy. Figure 5 shows examples of the best and worst identification of the treatment zone center.
Figure 5.
Examples of treatment zone center identification in the tangential subtractive maps with the best (A) and the worst (B) performance. The white dot was annotated by the AI; the black dots were annotated by the three human experts, and their averaged location is shown as a red dot. (A) Good identification of the treatment zone center, with the smallest deviation of 0.01 mm. (B) Poor identification of the treatment zone center, with the greatest deviation of 1.66 mm, due to a rare case called "central island" (i.e., an abnormally raised ridge on the cornea after a failed OK lens treatment) that misled the AI's judgment.
Discussion
Deep learning is a branch of artificial neural networks characterized by many layers, large numbers of neurons, and complicated network structures.12 By training the connectivity matrices, a deep learning network can effectively learn abstract representations of the training samples and achieve unprecedented performance in a variety of inference tasks. To our knowledge, this is the first time deep learning has been applied to facilitate OK lens treatment. We found that a deep learning–based approach can identify the boundary and the center of the treatment zone automatically and almost instantly, with precision comparable to that of human experts.
Hiraoka et al.9 were among the first to attempt to define the treatment zone after OK lens treatment with computer assistance. They manually outlined the contour of the treatment zone with 16 discrete dots based on the annotator's personal experience and then approximated an ellipse and its center using a customized analysis program written in a third-party language (MATLAB; The MathWorks, Natick, MA, USA). The authors reported good reproducibility for the technique (repeatability between examiners was not reported). A later study by our team that adopted this technique showed a much poorer level of intraindividual repeatability,30 indicating that the performance of the approach relies heavily on the annotator's experience. More recently, Mei et al.11 introduced another technique based on image analysis software (Image-Pro Plus; Media Cybernetics Corporation, Rockville, MD, USA). Again, the boundary of the treatment zone was depicted manually by the annotator. The technique was reported to show "excellent" reproducibility within one examiner and repeatability between two examiners. However, the authors acknowledged several inherent drawbacks that restrict its application in practice. For instance, the treatment zone could not be drawn completely when its boundary was discontinuous owing to lens decentration or incomplete acquisition of image data by the topographer. In addition, they pointed out that the manual depiction required highly detailed operation of the software; therefore, they did not recommend the method for extensive analysis.
Unlike the aforementioned studies, our approach is based on deep learning algorithms that learn the abstract experience of professional experts, so the trained algorithms can mimic the experts' decision making in evaluating the treatment zone. Our results show that the developed algorithms performed very well in identifying both the boundary and the center of the treatment zone, with a level of accuracy that fully meets the needs of clinical practice. Compared with previous techniques, the current approach has several advantages. First, the annotation was conducted by three independent experts, rather than one or two, to avoid individual bias. Second, the large data set covered all possible patterns of the treatment zone, including irregular, discontinuous boundaries (e.g., Fig. 4B) and even rare cases such as "central island" (e.g., Fig. 5B). This diversity ensures the generalizability and robustness of our algorithms in real practice. Additionally, the developed algorithms can assist practitioners in completing the time-consuming evaluation task almost instantly, which can significantly improve clinical efficiency. Furthermore, our deep learning–based algorithms can be continually improved by learning from the growing data accumulated in practice, leading to better accuracy. The most notable advantage is that the fully automated procedure provides a standardized approach, which could minimize judgment variation between follow-up visits and between clinicians.
However, the study also has several limitations. First, the deep learning algorithms require all input images to be in the same format, so users need to export images to an identical standard before applying the algorithms. This can be addressed in future development by training on images of different formats. Second, the images were all captured using the Pentacam HR tomographer, and it is unknown how well the algorithms would perform on images from other topographers. Given the common nature of topographical images, however, the performance should be comparable; although other instruments use different image formats, the procedures and algorithms are transferrable, which should shorten the development process, and once images from other tomographers become available, the current models can be retrained to analyze them. Third, although the approach achieved satisfactory performance on average, errors may occur in rare cases (e.g., Figs. 4B and 5B). Such errors can be easily noticed by clinicians in practice: with the AI annotations overlaid on the images, clinicians can readily evaluate the magnitude of treatment zone dislocation or the overlapping proportion.
In conclusion, this study shows that deep learning models can assess the treatment zones and centers after orthokeratology with performance competitive with that of human experts and can be employed as an automated solution to facilitate the assessment and reduce interindividual subjectivity during orthokeratology follow-up visits. However, as with other algorithm-based approaches, the performance of the system needs further evaluation on other data sets for external validation; the fully automated system has therefore been published online for public testing and application (see Supplemental Appendix and Video). The system could also be employed as a teaching platform to train practitioners' evaluation of lens fitting in the future.
Acknowledgments
The annotations of the topographical maps were supported in part by Chao Zhou, Jianhua Li, and Jiwen Yang. 
Supported by grants from the Hunan Provincial Science and Technology Plan Project (2019SK2051), the Science Fund for Distinguished Young Scientists (2019JJ20034) from Hunan Provincial Science and Technology Department, the Innovation Methodology Project (2017IM010700) of the Ministry of Science and Technology, the Fundamental Research Funds for the Central Universities of Central South University (2019ZZTS367), the Science Research Foundation of Aier Eye Hospital Group (AR1903D7), and Hunan Province International Science and Technology Cooperation Base (2020CB1002). 
Disclosure: Y. Tang, None; Z. Chen, None; W. Wang, None; L. Wen, None; L. Zhou, None; M. Wang, None; F. Tang, None; H. Tang, None; W. Lan, None; Z. Yang, None 
References
1. VanderVeen DK, Kraker RT, Pineles SL, et al. Use of orthokeratology for the prevention of myopic progression in children: a report by the American Academy of Ophthalmology. Ophthalmology. 2019;126:623–636.
2. Lee YC, Wang JH, Chiu CJ. Effect of orthokeratology on myopia progression: twelve-year results of a retrospective cohort study. BMC Ophthalmol. 2017;17:243.
3. Holden BA, Fricke TR, Wilson DA, et al. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050. Ophthalmology. 2016;123:1036–1042.
4. Hiraoka T, Okamoto C, Ishii Y, Kakita T, Oshika T. Contrast sensitivity function and ocular higher-order aberrations following overnight orthokeratology. Invest Ophthalmol Vis Sci. 2007;48:550–556.
5. Santodomingo-Rubido J, Villa-Collar C, Gilmartin B, Gutierrez-Ortega R. Factors preventing myopia progression with orthokeratology correction. Optom Vis Sci. 2013;90:1225–1236.
6. Chen Z, Niu L, Xue F, et al. Impact of pupil diameter on axial growth in orthokeratology. Optom Vis Sci. 2012;89:1636–1640.
7. Hiraoka T, Okamoto C, Ishii Y, Okamoto F, Oshika T. Recovery of corneal irregular astigmatism, ocular higher-order aberrations, and contrast sensitivity after discontinuation of overnight orthokeratology. Br J Ophthalmol. 2009;93:203–208.
8. Carracedo G, Espinosa-Vidal TM, Martinez-Alberquilla I, Batres L. The topographical effect of optical zone diameter in orthokeratology contact lenses in high myopes. J Ophthalmol. 2019;2019:1082472.
9. Hiraoka T, Mihashi T, Okamoto C, Okamoto F, Hirohara Y, Oshika T. Influence of induced decentered orthokeratology lens on ocular higher-order wavefront aberrations and contrast sensitivity function. J Cataract Refract Surg. 2009;35:1918–1926.
10. Chen Z, Xue F, Zhou J, et al. Prediction of orthokeratology lens decentration with corneal elevation. Optom Vis Sci. 2017;94:903–907.
11. Mei Y, Tang Z, Li Z, Yang X. Repeatability and reproducibility of quantitative corneal shape analysis after orthokeratology treatment using Image-Pro Plus software. J Ophthalmol. 2016;2016:1732476.
12. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444.
13. Stead WW. Clinical implications and challenges of artificial intelligence and deep learning. JAMA. 2018;320:1107–1108.
14. Esteva A, Robicquet A, Ramsundar B, et al. A guide to deep learning in healthcare. Nat Med. 2019;25:24–29.
15. Hinton G. Deep learning—a technology with the potential to transform health care. JAMA. 2018;320:1101–1102.
16. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172:1122–1131.e9.
17. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.
18. Singh VK, Rashwan HA, Romani S, et al. Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Syst Appl. 2020;139:112855.
19. Ting DSW, Cheung CY, Nguyen Q, et al. Deep learning in estimating prevalence and systemic risk factors for diabetic retinopathy: a multi-ethnic study. NPJ Digit Med. 2019;2:24.
20. Grewal PS, Oloumi F, Rubin U, Tennant MTS. Deep learning in ophthalmology: a review. Can J Ophthalmol. 2018;53:309–313.
21. Jiang J, Lian L, Wang F, Zhou L, Zhang X, Song E. Comparison of toric and spherical orthokeratology lenses in patients with astigmatism. J Ophthalmol. 2019;2019:4275269.
22. Tabernero J, Klyce SD, Sarver EJ, Artal P. Functional optical zone of the cornea. Invest Ophthalmol Vis Sci. 2007;48:1053–1060.
23. Chung B, Lee H, Roberts CJ, et al. Decentration measurements using Placido corneal tangential curvature topography and Scheimpflug tomography pachymetry difference maps after small-incision lenticule extraction. J Cataract Refract Surg. 2019;45:1067–1073.
24. Lee H, Roberts CJ, Arba-Mosquera S, Kang DSY, Reinstein DZ, Kim TI. Relationship between decentration and induced corneal higher-order aberrations following small-incision lenticule extraction procedure. Invest Ophthalmol Vis Sci. 2018;59:2316–2324.
25. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society; 2016:770–778.
26. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society; 2015:3431–3440.
27. Jin X, Li X, Xiao H, et al. Video scene parsing with predictive feature learning. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE Computer Society; 2017:5581–5589.
28. Klein A, Warszawski J, Hillengaß J, Maier-Hein KH. Automatic bone segmentation in whole-body CT images. Int J Comput Assist Radiol Surg. 2019;14:21–29.
29. Bokhovkin A, Burnaev E. Boundary loss for remote sensing imagery semantic segmentation. In: Lu H, Tang H, Wang Z, eds. Advances in Neural Networks—ISNN 2019. Cham, Switzerland: Springer International; 2019:388–401.
30. Li X, Wang L, Chen Z, Yang Z. Influence of treatment zone decentration on corneal higher-order wavefront aberrations and axial length elongation after orthokeratology. Chin J Optom Ophthalmol Vis Sci. 2017;2019(9):540–547.