**Purpose**:
The purpose of this study was to propose a new algorithm for the segmentation and thickness measurement of pathological corneas with irregular layers using a two-stage graph search and ray tracing.

**Methods**:
In the first stage, a graph with only a gradient edge cost is used to segment the air-epithelium and endothelium-aqueous boundaries. In the second stage, a graph with gradient, directional, and multiplier edge costs is used to correct the segmentation. The optical coherence tomography (OCT) image is flattened using the air-epithelium boundary, and a graph search is used to segment the epithelium-Bowman's and Bowman's-stroma boundaries. Then, the OCT image is flattened using the endothelium-aqueous boundary, and a graph search is used to segment the Descemet's membrane. Ray tracing is used to correct the inter-boundary distances, and then the thickness is measured using the shortest distance. The proposed algorithm was trained and evaluated using 190 OCT images manually segmented by trained operators.

**Results**:
The mean and standard deviation of the unsigned errors of the algorithm-operator and inter-operator were 0.89 ± 1.03 and 0.77 ± 0.68 pixels in segmentation and 3.62 ± 3.98 and 2.95 ± 2.52 µm in thickness measurement.

**Conclusions**:
Our proposed algorithm produces accurate segmentation and thickness measurements compared with those of the manual operators.

**Translational Relevance**:
Our algorithm could be useful in clinical practice.

Optical coherence tomography (OCT)^{1} is a common technique for the diagnosis of numerous eye diseases, such as dry eye, keratoconus, Fuchs dystrophy, and corneal graft rejection, and for managing their progression.^{2–9} The cornea has five layers: epithelium, Bowman's, stroma, Descemet's membrane, and endothelium.^{10} In OCT images, five corneal layer boundaries are usually present: air-epithelium (EP), epithelium-Bowman's (BW), Bowman's-stroma (ST), Descemet's membrane (DM), and endothelium-aqueous (EN). To measure the thickness of the different corneal layers, the layer boundaries are segmented, the locations of the boundaries are corrected to account for the refraction of the OCT light, and the point-wise distances between successive boundaries are calculated. Manual measurement of the thickness is challenging for three reasons. First, manual segmentation has low repeatability and reproducibility,^{11} and it is time-consuming.^{11–13} Second, the points of each boundary need to be corrected for both distance and direction using multiple refractive indices for the cornea, but they are often mistakenly corrected for distance only, using a single refractive index for the whole cornea.^{14} Third, the thickness is measured as the difference in the axial coordinates, which is not accurate. Therefore, the development of robust automatic algorithms for the segmentation and thickness measurement of the corneal layers is necessary for accurate results.

Most of the existing automatic methods focus on the segmentation of the corneal layer boundaries without considering the thickness measurement, which is important. In addition, they focus on regular-shaped corneas whose layer boundaries can be modeled using known prior models.^{11–13,15–26} In these methods, the corneal layer boundaries are modeled using circles,^{22} parabolas,^{11–13,16,17,19} fourth-order polynomials,^{15,18} ellipses,^{24,25} or Zernike polynomials.^{23} Some of these methods first extract the highest-intensity points of each boundary from the OCT image or its gradient and fit them to a model to estimate the boundary.^{11,13,15,17,22} The accuracy of these methods depends on the quality of the OCT images, and they are most likely to fail on images with a low signal-to-noise ratio (SNR). In other methods, instead of extracting the highest-intensity points, graph theory with dynamic programming, or level sets, is used to search for the minimum-cost paths in the image according to a designed cost function; such a path is more likely to pass through the boundary.^{12,19,21,23,25,26} However, when segmenting corneal OCT images with low SNR, these methods still use a model to approximate, or limit the search region of, each boundary in the low-SNR regions.^{27–30}

Fang et al.^{28} used a hybrid approach that combines convolutional neural networks^{31} and graph search to segment retinal layer boundaries. They train a convolutional neural network on patches from all boundaries as well as the background, use the network to predict the likelihood of each pixel belonging to a boundary or the background, and then apply a graph search to these likelihood maps to segment each boundary. However, corneal OCT images usually have low SNR at the periphery,^{12} and the segmentation needs to be handled properly in these regions. The methods in References 27, 29, and 30 use a U-shaped network (e.g., U-Net^{32}) to semantically segment each layer as a region. However, Dos Santos et al.^{27} crop the OCT images to remove the low-SNR regions and do not post-process the output segmentation, which is not practical. Mathai et al. use curve fitting for post-processing, which assumes a model for the layer boundary. Thus, the limitations of the existing segmentation methods are that (1) they are not robust to OCT images with low SNR, (2) they do not segment layer boundaries with arbitrary shapes, (3) they segment fewer layer boundaries, and (4) they place no emphasis on thickness measurement.^{11–13}

We trained our method using 60 images and evaluated it using 130 OCT images from multiple datasets. We use a 2D ray-tracing algorithm to correct the locations of the layer boundaries for the refraction of the imaging light, and then we measure the thickness profiles of the different corneal layers using the shortest distance. There are four contributions in this paper:

- 1. We present a graph-based method for the segmentation of corneal OCT images without assuming any model to better handle the irregularity of the layer boundaries in pathological eyes.
- 2. We segment five corneal layer boundaries, which is necessary for generating thickness profiles for the different corneal layers.
- 3. We present an accurate method to correct the inter-boundary distances using ray tracing, and we measure the thickness of the corneal layers, which is important because the diagnostic measures of many corneal diseases are based on thickness values.
- 4. We compare the performance of our segmentation algorithm with three existing methods using our and their datasets.

The third dataset was adopted from LaRocca et al.,^{12} and it contained 20 OCT images of normal controls obtained using a different OCT machine. Therefore, we had a total dataset of 190 OCT images (i.e., 100 normal OCT images and 90 pathological OCT images). The OCT images in our study were manually segmented by two trained operators for quantitative comparison with the automatic method. Both operators had more than 3 years of previous experience with corneal OCT images. The OCT images in our previous study^{11} were manually segmented by two operators, one of whom was the first manual operator in this study. The OCT images in LaRocca's dataset^{12} were manually segmented by two different experienced and trained operators.

The pixels \(\mathcal{P}\) of the image were used to represent the vertices \(\mathcal{V}\) of the graph \(\mathcal{G}\),^{33} in addition to two terminal vertices: a source vertex *s* and a target vertex *t* (i.e., \(\mathcal{V} = \mathcal{P} \cup \{ {s,t} \}\)). Each image pixel was connected to its immediate five neighbors using weighted directed links (i.e., \({{\cal E}_N}\)). The terminals *s* and *t* were connected to the vertices of the leftmost and rightmost columns, respectively, using constant-weight links (i.e., \({{\cal E}_T}\)). The graph edges \({\cal E}\) included the neighborhood edges \({{\cal E}_N}\) and the terminal edges \({{\cal E}_T}\) (i.e., \({\cal E} = {{\cal E}_N} \cup {{\cal E}_T}\)). An illustration of the constructed graph is shown in Figure 1. Any set of connected edges between the source *s* and the target *t* constructs an *s*-*t* path, and its cost is the sum of the weights of its edges. The segmentation of a corneal layer boundary was formulated as finding the minimum-cost *s*-*t* path according to a designed cost function. The terminal edge cost was set to a constant value to give an equal chance to any path. The proposed neighborhood edge cost *C*_{ab}, between two vertices *a* and *b*, consisted of three terms:

- • A gradient cost \(C_{ab}^G\), which represented the main cost term based on the intensity values of the gradient image.
- • A directional cost \(C_{ab}^D\), which measured the error between the segmentation and a polynomial representing the general shape of the boundary.
- • A multiplier \(C_{ab}^M\), which kept the cost of the central edges intact but reduced the cost of the peripheral vertical edges to encourage the segmentation to move downward.

where *e* is the exponential function, *g*_{a} and *g*_{b} are the normalized gradient values at the vertices *a* and *b*, respectively, and σ is a scaling constant. We used the exponential function because it changes slowly near *g* = 0 and *g* = 1, so the weak edges (i.e., *g* ≈ 0) have roughly the same high cost and the strong edges (i.e., *g* ≈ 1) have roughly the same low cost. In addition, we multiplied the gradient costs of the two vertices *a* and *b* because the product acts as a similarity measure between them (i.e., it gives a lower cost when the gradients of the two vertices are very close). The directional cost was computed using:

where Δ*y*_{ref} is the difference in the axial coordinates of a reference direction and Δ*y* is the difference in the axial coordinates of the vertices *a* and *b*. The multiplier cost was computed using:

where *x* is the transversal coordinate. The function ϕ(*x*) is a rectangular function given by:

where *W* is the image width. The function ρ is given by:

where Δ*x* and Δ*y* are the differences in the transversal and axial coordinates of the vertices *a* and *b*, respectively, and ω_{x} and ω_{y} are weighting constants ≤ 1. The function ρ can be rewritten as:

When ω_{x} > ω_{y}, vertical movement has the least cost, which encourages the graph search to move vertically, and vice versa. After constructing the graph and computing its edge costs, the Bellman-Ford algorithm^{34} was used to search for the minimum-cost *s*-*t* path. We used the Bellman-Ford algorithm because it does not require the graph weights to be non-zero (i.e., there is no need to add a stability constant), and its Matlab implementation is more efficient than that of Dijkstra's algorithm.
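The construction above can be sketched in Python. This is a minimal illustration, not the paper's implementation: it uses only a gradient-based edge cost with an assumed form exp(−σ·*g*_{a}·*g*_{b}), connects each pixel to three (rather than five) neighbors, and substitutes SciPy's Bellman-Ford for the Matlab one; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import bellman_ford

def segment_boundary(g, sigma=4.0):
    """Segment one boundary as a minimum-cost s-t path.

    g: (H, W) gradient image normalized to [0, 1]; strong edges are near 1.
    """
    H, W = g.shape
    n = H * W
    s, t = n, n + 1                          # terminal vertices
    idx = lambda r, c: r * W + c

    rows, cols, data = [], [], []
    for c in range(W - 1):
        for r in range(H):
            for dr in (-1, 0, 1):            # subset of the paper's 5 neighbors
                r2 = r + dr
                if 0 <= r2 < H:
                    # lower cost when both endpoints lie on a strong edge
                    # (illustrative stand-in for the gradient cost term)
                    w = float(np.exp(-sigma * g[r, c] * g[r2, c + 1]))
                    rows.append(idx(r, c)); cols.append(idx(r2, c + 1)); data.append(w)
    for r in range(H):                       # constant-weight terminal links
        rows.append(s); cols.append(idx(r, 0)); data.append(1.0)
        rows.append(idx(r, W - 1)); cols.append(t); data.append(1.0)

    G = csr_matrix((data, (rows, cols)), shape=(n + 2, n + 2))
    # Bellman-Ford does not need strictly positive weights, unlike Dijkstra
    dist, pred = bellman_ford(G, directed=True, indices=s,
                              return_predecessors=True)

    path, v = [], pred[t]
    while v != s:                            # backtrack from t to s
        path.append(divmod(int(v), W))       # (row, col) of each path pixel
        v = pred[v]
    path.sort(key=lambda p: p[1])
    return [r for r, _ in path]              # boundary row for every column
```

On a synthetic gradient image with a single bright row, the recovered path follows that row across all columns.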

A horizontal gradient *f*_{x} was obtained by filtering the OCT image using the filter *h*_{n}, and a vertical gradient *f*_{y} was obtained by filtering the image using the filter \(h_n^T\), where *T* is the transpose operator. These filters robustly captured the horizontal and vertical edges. For the segmentation of the EP and EN boundaries, we combined *f*_{x} and *f*_{y} using:

where λ is a weighting function applied to the horizontal gradient *f*_{x}. The function λ was computed using:

where *W* is the width of the image and *w* is a constant. The weighting function λ gave more weight to the peripheral regions and less weight to the central region of the horizontal gradient *f*_{x}, which usually contains the central artifact. Then, we locally normalized the gradient image *f* to strengthen the weak edges using:

where *g* is the normalized image, μ_{l} is the local mean, and σ_{l} is the local standard deviation.^{35} Then, *g* was scaled between 0 and 1 using a min-max normalization.^{36}
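The local normalization step, (*f* − μ_{l})/σ_{l} followed by min-max scaling to [0, 1], can be sketched as follows; the window size and the small ε guard against division by zero are assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(f, win=15, eps=1e-6):
    """Locally normalize an image: subtract the local mean, divide by the
    local standard deviation, then rescale to [0, 1] (min-max)."""
    f = f.astype(float)
    mu = uniform_filter(f, size=win)                # local mean
    var = uniform_filter(f * f, size=win) - mu**2   # local variance
    g = (f - mu) / np.sqrt(np.maximum(var, 0.0) + eps)
    return (g - g.min()) / (g.max() - g.min() + eps)  # min-max scaling
```

Dividing by the local standard deviation is what strengthens weak edges: a faint transition in a flat, dark region gets amplified relative to its neighborhood.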

The EP and EN boundaries are the most prominent boundaries in corneal OCT images.^{12} Segmenting them correctly is very important for segmenting the inner corneal layer boundaries. A typical example of a corneal OCT image is shown in Figure 3a with common artifacts, such as the saturation artifact, the horizontal artifact, and speckle noise.^{12} First, the OCT image was smoothed using a 5 × 5 Gaussian filter^{37} to reduce the speckle noise. Then, the horizontal gradient of the OCT image was obtained by filtering the smoothed image with a horizontal filter *h*_{11} using (8), and its absolute value was taken. The vertical gradient was obtained by filtering the smoothed image with a vertical filter \(h_{11}^T\) using (8), and its absolute value was taken. The gradient image was obtained by combining the absolute horizontal and vertical gradients using (9). Then, the gradient image was locally normalized using (11) and scaled between 0 and 1 (e.g., Fig. 3b). The normalized gradient highlights both the dark-to-bright transition (i.e., the EP boundary) and the bright-to-dark transition (i.e., the EN boundary). Some false edges appeared at the top and bottom parts of the normalized gradient image because of the image filtering^{37}; therefore, to prevent the graph search from following these false edges, we cut their continuity by setting the top-left, top-right, and bottom central regions to zero.

The most prominent boundary (i.e., usually the EP boundary) was segmented first by finding the minimum-cost *s*-*t* path in the constructed graph (e.g., Fig. 4a). Then, the segmented boundary was occluded in the normalized gradient image by setting a window of 100 pixels around it to zero, and the edge costs of the graph were updated. The second-most prominent boundary (i.e., usually the EN boundary) was segmented by finding the minimum-cost *s*-*t* path in the updated graph (e.g., Fig. 4a). The means of the two boundaries were used to distinguish between them (i.e., the EP boundary lies above the EN boundary). The initial segmentation usually had defects at the peripheral parts with low SNR and was shifted from the true boundary due to the image filtering^{37} (e.g., Fig. 4b). We hypothesize that the graph search tends to move upward toward the regions with higher SNR; therefore, to discourage it from curling up toward higher-SNR regions, we included the directional and multiplier costs defined in (1). To remove the defective parts of the segmentation, the segmented boundary was smoothed using a 1 × 25 moving-average filter and then filtered using the *h*_{27} horizontal gradient filter. The locations of the maximum absolute values of the gradient indicate the locations of the defects. The reference direction in the directional cost \(C_{ab}^D\) was derived by fitting a second-order polynomial to the initial segmentation after removing its defective parts. We used a second-order polynomial because it has been used in the literature to represent the corneal layer boundaries,^{11–13,20,38,39} and, more importantly, it has exactly one local minimum, which can represent the general shape of each corneal layer boundary with only one apex.^{40} The graph search was encouraged to move vertically by setting ω_{x} > ω_{y} in the multiplier cost \(C_{ab}^M\) in (4). Another directed graph was constructed using the normalized gradient image, and the edge cost was computed using (1). The EP and EN boundaries were segmented by finding the minimum-cost *s*-*t* paths in the graph. Finally, the obtained segmentation of the EP and EN boundaries was shifted, to align it with the boundaries in the OCT image, and smoothed. The axial locations of the segmented boundaries had discrete values (i.e., whole pixels), so to obtain smooth transitions with sub-pixel values, the axial locations were smoothed using a 1 × 15 moving-average filter (e.g., Fig. 4d).

Typical thickness ranges of the corneal layers^{8,9,41–43} were used to limit the search region for each inner boundary (e.g., Figs. 5a, 5b). To segment each inner boundary, the flattened image was smoothed using a horizontal median filter of size 1 × *win* to strengthen the boundary, filtered using a vertical filter, locally normalized using (11), and scaled between 0 and 1 (e.g., Figs. 5c, 5d). The BW boundary was the most prominent inner boundary, followed by the ST and DM boundaries. The parameters used with each boundary were obtained experimentally from the training dataset, and their details are summarized in Table 1. A directed graph was constructed using the normalized image, and the edge cost was computed using (1), where the directional term was computed using the nearest layer boundary. In the flattened image, each boundary was nearly horizontal, so the graph search was encouraged to move horizontally using (4). Each boundary was segmented by finding the minimum-cost *s*-*t* path in the constructed graph (e.g., Figs. 5e, 5f). The segmentation of the inner boundaries was rotated back to the original OCT image and smoothed using a 1 × 15 moving-average filter. The final segmentation of all boundaries is shown in Figure 6a, with the corresponding regions highlighted in Figure 6b.

The locations of the segmented boundaries were then corrected to account for the refraction of the OCT imaging light.^{14} A refractive index of 1.401 was used for the corneal epithelium and 1.376 for the rest of the corneal layers.^{44} First, the segmentation was converted into millimeters using the scan depth (i.e., 2.18 mm) and width (i.e., 6 mm) of our OCT machine. An iterative ray-tracing algorithm was applied to correct the refraction of the OCT imaging light.^{45} In the ray-tracing algorithm, we applied the vector form of Snell's law at the boundary between every two successive layers (illustrated in Fig. 7). Initially, the direction of the incident ray was assumed to be orthogonal to the corneal apex (i.e., parallel to the axial direction) at each point of the boundary (e.g., *p*_{1} in Fig. 7) and given by:

where Δ*x* and Δ*y* are the differences in the transversal and axial directions, respectively. At each boundary point, the cosine of the incidence angle was computed using:

and the direction of the refracted ray at the point *p*_{2} was computed using:

where *n*_{1} and *n*_{2} are the refractive indices of the incidence and refraction media.
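The vector form of Snell's law at a single boundary point can be sketched as follows. This is a generic 2D refraction step under the stated refractive indices; the orientation convention for the normal and the function name are assumptions, and the paper's iterative tracing adds the inter-boundary distance correction on top of this.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law in 2D.

    d  : unit direction of the incident ray
    n  : unit normal at the boundary point, oriented toward the incident side
    n1 : refractive index of the incidence medium
    n2 : refractive index of the refraction medium
    Returns the unit direction of the refracted ray, or None on total
    internal reflection (which does not occur for air -> cornea).
    """
    d = np.asarray(d, float); n = np.asarray(n, float)
    eta = n1 / n2
    cos_i = -np.dot(n, d)                    # cosine of the incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)       # sin^2 of the refraction angle
    if sin2_t > 1.0:
        return None                          # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n
```

At normal incidence (the assumed initial direction, parallel to the axial direction) the ray passes through unchanged; at oblique incidence it bends toward the normal when entering the denser cornea.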

The manual segmentation of Elsawy's and LaRocca's datasets was described in the corresponding studies.^{11,12} For the datasets of this study, the two trained operators were instructed to segment all the OCT images in the training and testing datasets (i.e., totaling 120 segmented OCT images). Then, the automatic segmentation was applied to all the images, including the images in the datasets of References 11 and 12. Ray tracing was used to correct the locations of the segmented corneal layer boundaries to account for light refraction. Finally, the thickness of the corneal layers was measured. Additionally, we implemented the methods of Elsawy et al.,^{11} LaRocca et al.,^{12} and Zhang et al.^{13} to compare their performance with our proposed segmentation method on our datasets as well as their datasets. LaRocca et al.^{12} used a graph with an edge cost given by:

where *f*_{y,a} and *f*_{y,b} are the vertical gradients at the vertices *a* and *b*, respectively, obtained using (8), and *K* is a stability constant.

Given two segmentations, *B*^{1} and *B*^{2}, of the same boundary, we calculated the mean unsigned boundary error (UBE) between them using:

where *N* is the number of points of each boundary. We also calculated the corneal region similarity between two corneal regions (i.e., regions bounded by the EP and EN boundaries), *R*^{1} and *R*^{2}, using the Dice similarity coefficient, which is given by:

Finally, we calculated the unsigned error between two thickness measurements, *T*^{1} and *T*^{2}, using:
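The three evaluation measures can be sketched as follows; boundaries and thickness profiles are assumed to be sampled at the same columns so that points pair up index-wise, which is an assumption about the paper's exact pairing.

```python
import numpy as np

def mean_ube(b1, b2):
    """Mean unsigned boundary error between two boundaries, given as
    axial (row) coordinates sampled at the same N columns."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    return np.mean(np.abs(b1 - b2))

def dice(r1, r2):
    """Dice similarity coefficient between two binary region masks."""
    r1, r2 = np.asarray(r1, bool), np.asarray(r2, bool)
    return 2.0 * np.logical_and(r1, r2).sum() / (r1.sum() + r2.sum())

def mean_thickness_error(t1, t2):
    """Mean unsigned error between two thickness profiles (same units)."""
    return np.mean(np.abs(np.asarray(t1, float) - np.asarray(t2, float)))
```

The UBE is reported in pixels and the thickness error in micrometers; the Dice coefficient is dimensionless, with 1 indicating identical regions.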

The testing datasets were our testing dataset, Elsawy's dataset,^{11} and 20 OCT images from LaRocca's dataset^{12} (i.e., a total of 130 OCT images). None of the testing datasets were used in training the methods. The OCT images included different artifacts and noise, such as the saturation artifact, speckle noise, and low SNR.

There were significant differences between the mean UBE of the algorithm-operator and that of the inter-operator on the testing dataset (0.89 ± 1.03 vs. 0.77 ± 0.68 pixels; *P* < 0.001), Elsawy's dataset (0.76 ± 0.79 vs. 0.95 ± 0.82 pixels; *P* < 0.001), and LaRocca's dataset (1.53 ± 2.66 vs. 1.61 ± 2.18 pixels; *P* < 0.001). The mean UBE of the algorithm-operator was significantly less than the mean UBE of the inter-operator for the EP, BW, and EN boundaries on our dataset and Elsawy's dataset (*P* < 0.001); the difference was insignificant only for the DM boundary on our testing dataset (*P* = 0.4051). The mean UBE of the algorithm-operator was significantly less than the mean UBE of the inter-operator for the EP boundary on LaRocca's dataset (*P* < 0.001). This is because our method was trained using a dataset obtained from an OCT machine different from the one used to obtain LaRocca's dataset. The mean UBE of the algorithm-operator was significantly larger than the mean UBE of the inter-operator for the ST and DM boundaries on the testing dataset, which contained pathological eyes (*P* < 0.001). This suggests that our method performed better on normal eyes for those layer boundaries while still staying close to the inter-operator errors for the pathological eyes.

There was no significant difference between the mean corneal region similarity of the algorithm-operator and that of the inter-operator on the testing dataset (*P* = 0.3765), but there were significant differences on Elsawy's dataset (0.9956 ± 0.0013 vs. 0.9949 ± 0.0014; *P* = 0.0005) and on LaRocca's dataset (0.9908 ± 0.0054 vs. 0.9897 ± 0.0028; *P* = 0.0256).

The comparison results with the existing methods^{11–13} are summarized in Tables 6, 7, and 8. There were significant differences between the mean UBE of our proposed method and the methods of Elsawy et al.,^{11} LaRocca et al.,^{12} and Zhang et al.^{13} on the testing dataset (0.89 ± 1.03 vs. 1.85 ± 8.80, 2.00 ± 5.10, and 5.81 ± 17.82 pixels, respectively; *P* < 0.001), Elsawy's dataset (0.76 ± 0.79 vs. 0.72 ± 0.68, 1.05 ± 1.55, and 6.41 ± 22.70 pixels, respectively; *P* < 0.001), and LaRocca's dataset (1.53 ± 2.66 vs. 4.87 ± 16.15, 3.03 ± 6.94, and 4.84 ± 6.65 pixels, respectively; *P* < 0.001). A summary of the best-performing method for each boundary is shown in Table 9 for all datasets. The proposed method was better than the other methods on the testing dataset. However, it was inferior to Elsawy's method for the EP, DM, and EN boundaries on Elsawy's dataset. This is because the proposed method does not fit the segmentation to a model, in order to remain generic, whereas Elsawy et al.^{11} used polynomial models, which were more accurate for the segmentation of normal OCT images. The proposed method was better than the other methods for the EP and BW boundaries on LaRocca's dataset. These results suggest that the proposed method is better when the corneal layer boundary has an irregular shape, but less favorable when the boundary can be modeled using a specific model, as in the case of normal OCT images.

There were significant differences between the mean unsigned thickness error of the algorithm-operator and that of the inter-operator on the testing dataset (3.62 ± 3.98 vs. 2.95 ± 2.52 µm; *P* < 0.001) and Elsawy's dataset (3.21 ± 2.90 vs. 2.99 ± 2.54 µm; *P* < 0.001). On average, the difference in thickness measurement was less than the optical resolution of the imaging machine (i.e., 3 µm). This confirms that the thickness measurement using the proposed graph segmentation can reproduce the thickness measurements of the manual operators.
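Measuring the thickness as the shortest distance, rather than as the difference in axial coordinates, can be sketched as follows; this is a brute-force nearest-point search over refraction-corrected boundary points, and the paper's exact pairing scheme is not reproduced here.

```python
import numpy as np

def thickness_profile(b1, b2):
    """Thickness as the shortest Euclidean distance from each point of
    boundary b1 to boundary b2, with points given as (x, y) in millimeters.

    The axial difference overestimates the thickness wherever the
    boundaries are tilted; the shortest distance does not."""
    b1 = np.asarray(b1, float)      # (N, 2)
    b2 = np.asarray(b2, float)      # (M, 2)
    # pairwise distances (N, M), then the minimum over b2 for each b1 point
    d = np.linalg.norm(b1[:, None, :] - b2[None, :, :], axis=2)
    return d.min(axis=1)
```

For two parallel horizontal boundaries the shortest distance and the axial difference coincide; they diverge at the tilted corneal periphery, which is where the axial difference inflates the thickness.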

The thicknesses of the corneal layers are the basis of the diagnostic measures of many corneal diseases.^{2–9} Therefore, an automated algorithm to segment the layer boundaries in OCT images and measure the layer thicknesses is highly needed. Existing methods have limitations in segmenting pathological corneas with irregular boundaries, and they do not segment all the boundaries. In addition, they do not consider the thickness measurement, which is of great interest. In this work, we proposed a graph-based algorithm that segments five corneal layer boundaries in pathological cases, accounts for light refraction, and measures the thickness of the different layers. To our knowledge, there is no other algorithm that segments five corneal layer boundaries in pathological corneas that have irregular layer boundaries. The proposed algorithm was validated against manual segmentation from different operators on three datasets. The datasets contained a total of 130 OCT images of pathological and healthy corneas, and they had the common artifacts that typically exist in corneal OCT images, to ensure the robustness of the algorithm. The proposed algorithm segments the corneal layer boundaries accurately: the mean unsigned boundary error between the algorithm and the mean operator is less than one pixel for all boundaries except the DM. The segmentation of the DM boundary is less accurate, which may be due to the low SNR of the images. Another reason could be that we used a large search window for the DM boundary to accommodate the possible variations of its location in pathological eyes, which increases the chance of error. The proposed segmentation had a corneal region similarity comparable to the inter-operator mean across the testing datasets; on Elsawy's and LaRocca's datasets, the significant differences were in favor of our algorithm. The obtained results suggest that the proposed segmentation algorithm could be equivalent to manual operators.

We compared the proposed method with Elsawy's method,^{11} LaRocca's method,^{12} and Zhang's method^{13} on our and their datasets. Elsawy's method^{11} performed better than the proposed method in segmenting three layer boundaries on Elsawy's dataset because their method was tuned on this dataset, which included only normal eyes. In addition, they used a polynomial model to segment the layer boundaries, which was more accurate for normal corneas. However, Elsawy's method^{11} was not able to generalize to the pathological OCT images in the testing dataset. In addition, the proposed segmentation method could segment more layer boundaries than the other methods.^{12,13} The graphical comparison of the proposed segmentation method and the other segmentation methods^{11–13} also showed that the proposed method performed better.

**A. Elsawy,** Non-Provisional Patents (Application Nos. 8992023 and 61809518) and PCT/US2018/013409 (M.A. and A.E.); **G. Gregori,** None; **T. Eleiwa,** None; **M. Abdel-Mottaleb,** None; **M.A. Shousha,** None

*Science*. 1991; 254: 1178–1181. [CrossRef] [PubMed]

*Invest Ophth Vis Sci*. 2004; 45: U28.

*J Refract Surg*. 2010; 26: 259–271. [CrossRef] [PubMed]

*BMC Ophthalmol*. 2008; 8: 1. [CrossRef] [PubMed]

*Am J Ophthalmol*. 2017; 178: 27–37. [CrossRef] [PubMed]

*Ophthalmology*. 2011; 118: 1291–1296. [PubMed]

*Ophthalmic Surg Lasers Imaging*. 2011; 42(Suppl): S15–S27. [CrossRef] [PubMed]

*Ophthalmology*. 2014; 121: 988–993. [PubMed]

*Ophthalmology*. 2010; 117: 1220–1227. [PubMed]

*Indian J Ophthalmol*. 2018; 66: 190–194. [PubMed]

*Transl Vis Sci Techn*. 2019; 8: 39.

*Biomed Opt Express*. 2011; 2: 1524–1538. [PubMed]

*Ieee Access*. 2017; 5: 10352–10363.

*J Refract Surg*. 1995; 11: 100–105. [PubMed]

*Lect Notes Comput Sci*. 2010; 6313: 44–57.

*Prog Biomed Optics Imag Proc SPIE*. 2019;10949.

*J Med Signals Sens*. 2014; 4: 171–180. [PubMed]

*Biomed Opt Express*. 2018; 9: 2716–2732. [PubMed]

*Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images*. SPIE; 2002.

*J Innov Opt Heal Sci*. 2012; 5: 1250030-1–1250030-9.

*Lect Notes Comput Sci*. 2017; 10554: 109–117.

*J Biomed Opt*. 2013; 18: 56003. [PubMed]

*Eye Vision*. 2015; 2: 1. [PubMed]

*Biomed Signal Proces*. 2016; 25: 91–98.

*Biomed Opt Express*. 2019; 10: 622–641. [PubMed]

*Biomed Opt Express*. 2017; 8: 2732–2744. [PubMed]

*Proc IEEE Int Symp Biomed Imaging*. 2019;1432–1436.

*Comput Biol Med*. 2019; 114: 103445. [PubMed]

*Commun ACM*. 2017; 60: 84–90.

*Lect Notes Comput Sc*. 2015; 9351: 234–241.

*Introduction to graph theory*. Upper Saddle River, NJ: Prentice Hall; 1996.

*Appl Math Lett*. 1993; 6: 3–6.

*IEEE Signal Process Mag*. 2003; 20: 43–52.

*Data mining: concepts and techniques*. New York, NY: Elsevier; 2011.

*Digital Image Processing*. Boston, MA: Addison-Wesley; 1992;2.

*Ophthalmology*. 2007; 114: 1124–1132. [PubMed]

*Ophthalmology*. 2006; 113: 792–799.e792. [PubMed]

*Calculus: Concepts and contexts*. Boston, MA: Cengage Learning; 2009.

*J Refract Surg*. 2016; 32: 27–32. [PubMed]

*J Cataract Refract Surg*. 2009; 35: 1055–1062. [PubMed]

*J Refract Surg*. 2010; 26: 127–133. [PubMed]

*J Biomed Opt*. 2012; 17: 116010. [PubMed]

*An introduction to ray tracing*. New York, NY: Elsevier; 1989.