Open Access
Articles  |   October 2020
Pathological-Corneas Layer Segmentation and Thickness Measurement in OCT Images
Author Affiliations & Notes
  • Amr Elsawy
    Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, FL, USA
    Electrical and Computer Engineering, University of Miami, Miami, FL, USA
  • Giovanni Gregori
    Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, FL, USA
  • Taher Eleiwa
    Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, FL, USA
    Ophthalmology Department, Benha Faculty of Medicine, Egypt
  • Mohamed Abdel-Mottaleb
    Electrical and Computer Engineering, University of Miami, Miami, FL, USA
  • Mohamed Abou Shousha
    Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, FL, USA
    Electrical and Computer Engineering, University of Miami, Miami, FL, USA
    Biomedical Engineering, University of Miami, Miami, FL, USA
  • Correspondence: Mohamed Abou Shousha, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, 900 NW 17th Street, Miami, FL 33136, USA. e-mail: mshousha@med.miami.edu 
Translational Vision Science & Technology October 2020, Vol.9, 24. doi:https://doi.org/10.1167/tvst.9.11.24
Abstract

Purpose: The purpose of this study was to propose a new algorithm for the segmentation and thickness measurement of pathological corneas with irregular layers using a two-stage graph search and ray tracing.

Methods: In the first stage, a graph with only a gradient edge cost is used to segment the air-epithelium and endothelium-aqueous boundaries. In the second stage, a graph with gradient, directional, and multiplier edge costs is used to correct the segmentation. The optical coherence tomography (OCT) image is flattened using the air-epithelium boundary, and a graph search is used to segment the epithelium-Bowman's and Bowman's-stroma boundaries. Then, the OCT image is flattened using the endothelium-aqueous boundary, and a graph search is used to segment the Descemet's membrane. Ray tracing is used to correct the inter-boundary distances, and the thickness is then measured using the shortest distance. The proposed algorithm was trained and evaluated using 190 OCT images manually segmented by trained operators.

Results: The algorithm-operator and inter-operator unsigned errors (mean ± standard deviation) were 0.89 ± 1.03 and 0.77 ± 0.68 pixels in segmentation, and 3.62 ± 3.98 and 2.95 ± 2.52 µm in thickness measurement.

Conclusions: Our proposed algorithm can produce accurate segmentation and thickness measurements compared with the manual operators.

Translational Relevance: Our algorithm could potentially be useful in clinical practice.

Introduction
Measuring the thickness of different corneal layers using optical coherence tomography (OCT)1 is a common technique for diagnosing numerous eye diseases, such as dry eye, keratoconus, Fuchs dystrophy, and corneal graft rejection, and for monitoring their progression.2–9 The cornea has five layers: epithelium, Bowman's, stroma, Descemet's membrane, and endothelium.10 In OCT images, five corneal layer boundaries are usually present: air-epithelium (EP), epithelium-Bowman's (BW), Bowman's-stroma (ST), Descemet's membrane (DM), and endothelium-aqueous (EN). To measure the thickness of the different corneal layers, the layer boundaries are segmented, the locations of the layer boundaries are corrected to account for the refraction of the OCT light, and the point-wise distances between successive boundaries are calculated. Manual thickness measurement is challenging for three reasons. First, manual segmentation has low repeatability and reproducibility,11 and it is time-consuming.11–13 Second, the points of each boundary need to be corrected for both distance and direction using the refractive indices of the individual corneal layers, but they are often mistakenly corrected for distance only, using a single refractive index for the whole cornea.14 Third, the thickness is commonly measured as the difference in axial coordinates, which is inaccurate. Therefore, robust automatic algorithms for the segmentation and thickness measurement of the corneal layers are necessary for accurate results. Most existing automatic methods focus on the segmentation of the corneal layer boundaries without considering the thickness measurement, which is the clinically important quantity. In addition, they focus on regular-shaped corneas whose layer boundaries can be described by known prior models.11–13,15–26 In these methods, the corneal layer boundaries are modeled using circles,22 parabolas,11–13,16,17,19 fourth-order polynomials,15,18 ellipses,24,25 or Zernike polynomials.23 Some of these methods first extract the highest-intensity points of each boundary from the OCT image or its gradient and fit them to a model to estimate the boundary.11,13,15,17,22 The accuracy of these methods depends on the quality of the OCT images, and they are likely to fail on images with a low signal-to-noise ratio (SNR). Other methods, instead of extracting the highest-intensity points, use graph theory with dynamic programming or level sets to search for the minimum-cost path in the image, according to a designed cost function, which is more likely to pass through the boundary.12,19,21,23,25,26 However, when segmenting corneal OCT images with low SNR, these methods still rely on a model to approximate the boundary or to limit its search region in low-SNR areas. 
Recently, some studies have used deep learning for the segmentation of retinal and corneal layer boundaries.27–30 Fang et al.28 used a hybrid approach that combines convolutional neural networks31 and graph search to segment retinal layer boundaries. They trained a convolutional neural network on patches from all boundaries as well as the background, used the network to predict the likelihood of each pixel belonging to a boundary or to the background, and then applied a graph search to these likelihood maps to segment each boundary. However, corneal OCT images usually have low SNR at the periphery,12 and the segmentation needs to be handled properly in these regions. The methods in References 27, 29, and 30 use a U-shaped network (e.g., U-Net32) to semantically segment each layer as a region. However, Dos Santos et al.27 crop the OCT images to remove the low-SNR regions and do not post-process the output segmentation, which is not practical. Mathai et al.29 use curve fitting for post-processing, which assumes a model for the layer boundary. In summary, the limitations of the existing segmentation methods are that (1) they are not robust to OCT images with low SNR, (2) they do not segment layer boundaries with arbitrary shapes, (3) they segment fewer layer boundaries, and (4) they place no emphasis on thickness measurement. 
In this work, we propose a graph-based algorithm for the segmentation and thickness measurement of the corneal layers of pathological eyes. We use a two-stage graph-based segmentation method to segment pathological corneas whose layer boundaries are irregular and do not follow a specific model, and we compare our method to three existing methods.11–13 We trained our method using 60 images and evaluated it using 130 OCT images from multiple datasets. We use a 2D ray-tracing algorithm to correct the locations of the layer boundaries for the refraction of the imaging light, and then we measure the thickness profiles of the different corneal layers using the shortest distance. This paper makes four contributions: 
  • 1. We present a graph-based method for the segmentation of corneal OCT images without assuming any model to better handle the irregularity of the layer boundaries in pathological eyes.
  • 2. We segment five corneal layer boundaries, which is necessary to generate thickness profiles for the different corneal layers.
  • 3. We present an accurate method to correct the inter-boundary distances using ray tracing, and we measure the thickness of the corneal layers, which is important because the diagnostic measures of many corneal diseases are based on thickness values.
  • 4. We compare the performance of our segmentation algorithm with three existing methods using both our datasets and theirs.
Methods
Dataset Description
In this study, radial OCT scans of the central cornea were obtained using a high-definition OCT machine (HD-OCT; Envisu R2210, Bioptigen, Buffalo Grove, IL) from 67 eyes of 53 patients recruited at Bascom Palmer Eye Institute (BPEI). The presence of the specular reflection was used to ensure that the scans were centered at the corneal apex. All patients signed a written informed consent approved by the University of Miami Institutional Review Boards (IRBs). The images have a size of 1024 × 1000 pixels; the axial depth of the machine is 2.18 mm, its transversal width is 6 mm, and its optical resolution is 3 µm. A set of 120 images was randomly collected from all eyes using a computer script. The dataset contained equal numbers of OCT images from patients diagnosed with keratoconus, Fuchs dystrophy, and dry eye, and from normal controls (i.e., 30 each), and it was evenly divided into a training dataset of 60 OCT images and a testing dataset of 60 OCT images. In addition to our dataset, we used two datasets from other studies for further assessment of our proposed method and for comparison with the other methods. One dataset was obtained from our previous study11 and contained 50 OCT images of normal controls acquired with the same OCT machine; the other was obtained from LaRocca et al.12 and contained 20 OCT images of normal controls acquired with a different OCT machine. In total, we therefore had 190 OCT images (i.e., 100 normal and 90 pathological). The OCT images in our study were manually segmented by two trained operators for quantitative comparison with the automatic method; both operators had more than 3 years of experience with corneal OCT images. The OCT images in our previous study11 were manually segmented by two operators, one of whom was the first manual operator in this study. The OCT images in LaRocca's dataset12 were manually segmented by two different experienced and trained operators. 
Graph Construction
A graph \(\mathcal{G}\) of vertices \(\mathcal{V}\) and edges \({\cal E}\) was constructed from the given image.33 The pixels \(\mathcal{P}\) of the image were used to represent the vertices \(\mathcal{V}\) of the graph \(\mathcal{G}\) in addition to two terminal vertices: a source vertex s and a target vertex t (i.e., \(\mathcal{V} = \mathcal{P} \cup \{ {s,t} \}\)). Each image pixel was connected to its immediate five neighbors using weighted directed links (i.e., \({{\cal E}_N}\)). The terminals s and t were connected to the vertices of the leftmost and rightmost columns, respectively, using constant-weight links (i.e., \({{\cal E}_T}\)). The graph edges \({\cal E}\) included the neighborhood edges \({{\cal E}_N}\) and the terminal edges \({{\cal E}_T}\) (i.e., \({\cal E} = {{\cal E}_N} \cup {{\cal E}_T}\)). An illustration of the constructed graph is shown in Figure 1. Any set of connected edges between the source s and target t vertices constructs an s-t path, and its cost is the sum of the weights of its edges. The segmentation of a corneal layer boundary was formulated as finding the minimum-cost s-t path according to some cost function. The terminal edge cost was set to a constant value to give equal chance to any path. The proposed neighborhood edge cost Cab, between two vertices a and b, consisted of three terms: 
  • A gradient cost \(C_{ab}^G\), which represented the main cost term based on the intensity values of the gradient image.
  • A directional cost \(C_{ab}^D\), which measured the error between the segmentation and a polynomial representing the general shape of the boundary.
  • A multiplier \(C_{ab}^M\), which kept the cost of the central edges intact but reduced the cost of the peripheral vertical edges to encourage the segmentation to move downward.
Figure 1.
 
The construction of the graph from the image where the image pixels are used as the vertices of the graph in addition to source and target vertices. (a) Each pixel is connected to its immediate neighbors using five-connectivity, and (b) the source vertex is connected to the image leftmost pixels and the target vertex is connected to the image rightmost pixels.
We combined the three terms as follows:  
\begin{equation}{C_{ab}} = \left( {C_{ab}^G + \alpha C_{ab}^D} \right)C_{ab}^M\end{equation}
(1)
where we added \(C_{ab}^G\) and \(C_{ab}^D\) so that either term can contribute to the edge cost, which is important at the periphery where the gradient intensity is weak; the directional cost is weighted by a constant α that controls its contribution. We multiplied the sum by the multiplier cost to ensure that both the sum and the multiplier contribute to the edge cost. In other words, we combined the gradient and directional costs using a “logical OR” operation and combined their sum with the multiplier cost using a “logical AND” operation. The gradient cost was computed using:  
\begin{equation}C_{ab}^G = {e^{ - \sigma g_a^2}} \cdot {e^{ - \sigma g_b^2}} = {e^{ - \sigma \left( {g_a^2 + g_b^2} \right)}}\end{equation}
(2)
where e is the exponential function, ga and gb are the normalized gradient values at the vertices a and b, respectively, and σ is a scaling constant. We used the exponential function because it changes slowly near g = 0 and g = 1, so the weak edges (i.e., g ≈ 0) all have roughly the same high cost and the strong edges (i.e., g ≈ 1) all have roughly the same low cost. In addition, we multiplied the gradient costs of the two vertices a and b because the product acts as a similarity measure between them (i.e., it gives a lower cost when the two vertices have very close gradients). The directional cost was computed using:  
\begin{equation}C_{ab}^D = \left| {\Delta {y_{ref}} - \Delta y} \right|\end{equation}
(3)
where Δyref is the difference in the axial coordinates of a reference direction and Δy is the difference in the axial coordinates of the vertices a and b. The multiplier cost was computed using:  
\begin{equation}C_{ab}^M = \underbrace {1 \cdot \varphi \left( x \right)}_{central\,multiplier} + \underbrace {\rho \cdot \left[ {1 - \varphi \left( x \right)} \right]}_{peripheral\,multiplier}\end{equation}
(4)
where x is the transversal coordinate. The function ϕ(x) is a rectangular function given by:  
\begin{equation}\varphi \left( x \right) = \left\{ \begin{array}{@{}*{2}{c}@{}} 1&{{\rm{if\ }}\left| {x - W/2} \right| \le W/4}\\ {0,}&{{\rm{otherwise}}} \end{array}\right.\end{equation}
(5)
where W is the image width. The function ρ is given by:  
\begin{equation}\rho = \frac{{{\omega _x}\left| {\Delta x} \right| + {\omega _y}\left| {\Delta y} \right|}}{{\left| {\Delta x} \right| + \left| {\Delta y} \right|}}\end{equation}
(6)
where Δx and Δy are the differences in the transversal and axial coordinates of the vertices a and b, respectively, and ωx and ωy are weighting constants ≤ 1. The function ρ can be rewritten as:  
\begin{equation}\rho = \left\{ \begin{array}{@{}*{2}{c}@{}} {{\omega _x},}&{{\rm{if\ }}\left| {\Delta y} \right| = 0}\\ {{\omega _y},}&{{\rm{if\ }}\left| {\Delta x} \right| = 0}\\ { \le 1,}&{{\rm{if\ }}\left| {\Delta x} \right| = \left| {\Delta y} \right| = 1} \end{array}\right.\end{equation}
(7)
 
Therefore, if ωx >  ωy, vertical movement has the lowest cost, which encourages the graph search to move vertically, and vice versa. After constructing the graph and computing its edge costs, the Bellman-Ford algorithm34 was used to search for the minimum-cost s-t path. We used the Bellman-Ford algorithm because it does not require the graph weights to be strictly positive (i.e., there is no need to add a stability constant), and its MATLAB implementation was more efficient than that of Dijkstra's algorithm. 
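To make the first stage concrete, the sketch below builds the directed graph described above as a sparse adjacency matrix, with the gradient-only edge cost of (2), and extracts one boundary with SciPy's Bellman-Ford routine. This is a minimal Python sketch, not the paper's MATLAB implementation; the exact 5-neighbor pattern, the value of σ, and the terminal edge weight are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import bellman_ford

def min_cost_boundary(g, sigma=40.0):
    """First-stage search: minimum-cost s-t path under the gradient cost (2).

    g: normalized gradient image in [0, 1], shape (H, W).
    Returns the axial (row) location of the boundary per column.
    """
    H, W = g.shape
    n = H * W
    s, t = n, n + 1                                # terminal vertices
    idx = lambda r, c: r * W + c

    rows, cols, costs = [], [], []
    # Assumed 5-connectivity for a left-to-right search:
    # right, up-right, down-right, up, and down neighbors.
    moves = [(0, 1), (-1, 1), (1, 1), (-1, 0), (1, 0)]
    for r in range(H):
        for c in range(W):
            for dr, dc in moves:
                rr, cc = r + dr, c + dc
                if 0 <= rr < H and 0 <= cc < W:
                    rows.append(idx(r, c))
                    cols.append(idx(rr, cc))
                    # Eq. (2): low cost where both pixels sit on a strong edge.
                    costs.append(np.exp(-sigma * (g[r, c]**2 + g[rr, cc]**2)))
    # Constant-weight terminal links to the leftmost and rightmost columns.
    for r in range(H):
        rows += [s, idx(r, W - 1)]
        cols += [idx(r, 0), t]
        costs += [1.0, 1.0]

    graph = coo_matrix((costs, (rows, cols)), shape=(n + 2, n + 2)).tocsr()
    _, pred = bellman_ford(graph, directed=True, indices=s,
                           return_predecessors=True)

    boundary = np.full(W, np.nan)
    v = pred[t]
    while v != s and v >= 0:                       # walk back from t to s
        r, c = divmod(int(v), W)
        boundary[c] = r                            # one row index per column
        v = pred[v]
    return boundary
```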
Image Filtering
For the segmentation of the outer (or inner) layer boundaries, we smoothed the image using a Gaussian (or median) filter and then applied a gradient filter of the form:  
\begin{equation}{h_n} = \left[ {\begin{array}{*{20}{c}} { - \overrightarrow {{1_{\left\lfloor {\frac{n}{2}} \right\rfloor }}} }&0&{\overrightarrow {{1_{\left\lfloor {\frac{n}{2}} \right\rfloor }}} } \end{array}} \right]\end{equation}
(8)
where \(\overrightarrow {{1_{\lfloor {\frac{n}{2}} \rfloor }}} \) is an all-ones row vector of length \(\lfloor {\frac{n}{2}} \rfloor \). A horizontal gradient fx was obtained by filtering the OCT image using the filter hn and a vertical gradient fy was obtained by filtering the image using the filter \(h_n^T\) where T is the transpose operator. These filters robustly captured the horizontal and vertical edges. For the segmentation of the EP and EN boundaries, we combined fx and fy using:  
\begin{equation}f = \lambda \left| {{f_x}} \right| + \left| {{f_y}} \right|\end{equation}
(9)
where λ is a weighting function that gives less weight to the central region of the horizontal gradient fx. The function λ was computed using:  
\begin{equation}\lambda = 1 - {e^{ - {{\left( {\frac{{x - W/2}}{w}} \right)}^2}}}\end{equation}
(10)
where W is the width of the image and w is a constant. The weighting function λ gave more weight to the peripheral regions and less weight to the central region of the horizontal gradient fx, which usually contains the central artifact. Then, we locally normalized the gradient image f to strengthen the weak edges using:  
\begin{equation}g = \frac{{f - {\mu _l}}}{{{\sigma _l}}}\end{equation}
(11)
where g is the normalized image, μl is the local mean, and σl is the local standard deviation.35 Then, g is scaled between 0 and 1 using a min-max normalization.36 
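The following Python sketch strings the filtering pipeline of (8) through (11) together. The Gaussian σ, the weighting width w, and the local-normalization window size are assumptions chosen for illustration; only the form of the kernel and the weighting function follow the text.

```python
import numpy as np
from scipy import ndimage

def gradient_kernel(n):
    """Eq. (8): floor(n/2) ones on each side of a central zero."""
    half = np.ones(n // 2)
    return np.concatenate([-half, [0.0], half])[np.newaxis, :]

def normalized_gradient(img, n=11, w=50.0, win=31):
    """Gradient image for the outer boundaries, per Eqs. (8)-(11)."""
    smooth = ndimage.gaussian_filter(img.astype(float), sigma=1.0)  # assumed sigma
    h = gradient_kernel(n)
    fx = np.abs(ndimage.convolve(smooth, h))       # horizontal gradient |f_x|
    fy = np.abs(ndimage.convolve(smooth, h.T))     # vertical gradient |f_y|

    W = img.shape[1]
    x = np.arange(W)
    # Eq. (10): down-weight the central columns carrying the saturation artifact.
    lam = 1.0 - np.exp(-((x - W / 2.0) / w) ** 2)
    f = lam[np.newaxis, :] * fx + fy               # Eq. (9)

    # Eq. (11): local normalization with a sliding mean and standard deviation.
    mu = ndimage.uniform_filter(f, size=win)
    var = ndimage.uniform_filter(f ** 2, size=win) - mu ** 2
    g = (f - mu) / np.sqrt(np.maximum(var, 1e-12))
    return (g - g.min()) / (g.max() - g.min())     # min-max scale to [0, 1]
```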
Segmentation Algorithm
The flowchart of the segmentation algorithm is shown in Figure 2 and each step is discussed in detail in the following subsections. 
Figure 2.
 
A flowchart that illustrates the steps of the proposed segmentation algorithm.
Image Resizing
The obtained OCT images had a large size of 1024 × 1000 pixels; therefore, to speed up the segmentation, they were resized to 512 × 500 pixels using decimation. 
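A minimal sketch of this step, assuming plain factor-of-two subsampling; a full decimation would apply an anti-aliasing low-pass filter first, and the Gaussian smoothing used later in the pipeline serves a similar purpose.

```python
import numpy as np

def decimate_by_two(img: np.ndarray) -> np.ndarray:
    # Keep every other row and column: 1024 x 1000 -> 512 x 500.
    return img[::2, ::2]
```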
Segmentation of Outer Boundaries
The outer boundaries are the EP and EN boundaries, which are the most and second-most prominent boundaries in the corneal OCT image.12 Segmenting them correctly is essential for segmenting the inner corneal layer boundaries. A typical corneal OCT image is shown in Figure 3a with common artifacts, such as the saturation artifact, the horizontal artifact, and speckle noise.12 First, the OCT image was smoothed using a 5 × 5 Gaussian filter37 to reduce the speckle noise. Then, the horizontal gradient of the OCT image was obtained by filtering the smoothed image with the horizontal filter h11 from (8), and its absolute value was taken; the vertical gradient was obtained by filtering the smoothed image with the vertical filter \(h_{11}^T\), and its absolute value was taken. The gradient image was obtained by combining the absolute horizontal and vertical gradients using (9). Then, the gradient image was locally normalized using (11) and scaled between 0 and 1 (e.g., Fig. 3b). The normalized gradient highlights both the dark-to-bright transition (i.e., the EP boundary) and the bright-to-dark transition (i.e., the EN boundary). Some false edges appeared at the top and bottom parts of the normalized gradient image because of the image filtering37; therefore, to prevent the graph search from following these false edges, we broke their continuity by setting the top-left, top-right, and bottom-central regions to zero. 
Figure 3.
 
(a) An unaveraged optical coherence tomography (OCT) image and (b) the locally normalized gradient of the OCT image.
A directed graph was constructed from the normalized gradient image, and the edge cost was computed using (2) (i.e., using the gradient cost only). The most prominent boundary (i.e., usually the EP boundary) was segmented by finding the minimum-cost s-t path in the constructed graph (e.g., Fig. 4a). Then, the segmented boundary was occluded in the normalized gradient image by setting a window of 100 pixels around it to zero, and the edge costs of the graph were updated. The second-most prominent boundary (i.e., usually the EN boundary) was segmented by finding the minimum-cost s-t path in the updated graph (e.g., Fig. 4a). The mean axial location of the two boundaries was used to distinguish between them (i.e., the EP boundary lies above the EN boundary). The initial segmentation usually had defects at the peripheral parts with low SNR and was shifted from the true boundary because of the image filtering37 (e.g., Fig. 4b). We hypothesize that the graph search tends to drift upward toward regions with higher SNR; therefore, to discourage the search from curling up toward these regions, we included the directional and multiplier costs as defined in (1). To remove the defective parts of the segmentation, the segmented boundary was smoothed using a 1 × 25 moving-average filter and then filtered with the h27 horizontal gradient filter; the locations of the maxima of the absolute gradient indicate the locations of the defects. The reference direction in the directional cost \(C_{ab}^D\) was derived by fitting a second-order polynomial to the initial segmentation after removing its defective parts. We used a second-order polynomial because it has been used in the literature to represent the corneal layer boundaries,11–13,20,38,39 and, more importantly, it has exactly one local minimum, so it can represent the general shape of a corneal layer boundary with a single apex.40 The graph search was encouraged to move vertically by setting ωx > ωy in the multiplier cost \(C_{ab}^M\) in (4). Another directed graph was then constructed from the normalized gradient image with the edge cost computed using (1), and the EP and EN boundaries were segmented by finding the minimum-cost s-t paths in this graph (e.g., Fig. 4c). Finally, the obtained segmentation of the EP and EN boundaries was shifted, to align with the boundaries in the OCT image, and smoothed. Because the axial locations of the segmented boundaries had discrete (whole-pixel) values, they were smoothed using a 1 × 15 moving-average filter to obtain smooth, sub-pixel transitions (e.g., Fig. 4d). 
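The defect-removal and reference-direction steps can be sketched as follows. The defect cutoff is an assumption (the text locates defects at the maxima of the absolute h27 gradient but gives no threshold); the 1 × 25 moving average, the h27 kernel, and the second-order fit follow the description above.

```python
import numpy as np

def reference_direction(boundary, thresh=3.0):
    """Remove defective parts of an initial boundary and fit the reference parabola."""
    k = np.ones(25) / 25.0
    smooth = np.convolve(boundary, k, mode="same")      # 1 x 25 moving average
    half = np.ones(13)
    h27 = np.concatenate([-half, [0.0], half])
    jumps = np.abs(np.convolve(smooth, h27, mode="same"))
    good = jumps < thresh * np.median(jumps + 1e-9)     # assumed defect cutoff
    x = np.arange(boundary.size)
    coeffs = np.polyfit(x[good], boundary[good], deg=2) # one-apex parabola
    ref = np.polyval(coeffs, x)
    return np.diff(ref)                                 # delta-y_ref per column, for Eq. (3)
```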
Figure 4.
 
The segmentation steps. (a) The initial segmentation using a gradient term, (b) zoomed regions that show the defects in the initial segmentation, (c) the second segmentation using gradient, directional and multiplier terms, and (d) the same zoomed regions after the second segmentation.
Segmentation of the Inner Boundaries
The OCT image was flattened using the nearest outer boundary (i.e., EP or EN) to make the inner boundary nearly horizontal. The flattening was done by circularly shifting each column of the image so that the outer boundary became a horizontal line in the flattened image. The flattened image was then cropped based on the known thickness of the corneal layers in OCT images8,9,41–43 to limit the search region for the inner boundary (e.g., Figs. 5a, 5b). To segment each inner boundary, the flattened image was smoothed using a horizontal median filter of size 1 × win to strengthen the boundary, filtered using a vertical gradient filter, locally normalized using (11), and scaled between 0 and 1 (e.g., Figs. 5c, 5d). The BW boundary was the most prominent inner boundary, followed by the ST and DM boundaries. The parameters used for each boundary were determined experimentally on the training dataset and are summarized in Table 1. A directed graph was constructed from the normalized image, and the edge cost was computed using (1), where the directional term was computed using the nearest layer boundary. In the flattened image, each boundary is nearly horizontal, so the graph search was encouraged to move horizontally using (4). Each boundary was segmented by finding the minimum-cost s-t path in the constructed graph (e.g., Figs. 5e, 5f). The segmentation of the inner boundaries was then shifted back to the original OCT image coordinates and smoothed using a 1 × 15 moving-average filter. The final segmentation of all boundaries is shown in Figure 6a, with zoomed regions in Figure 6b. 
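A sketch of the flattening step, assuming the outer boundary has already been estimated for every column; the target row row0 is an illustrative choice.

```python
import numpy as np

def flatten(img, outer_boundary, row0=None):
    """Circularly shift each column so `outer_boundary` lands on row `row0`."""
    H, W = img.shape
    if row0 is None:
        row0 = int(round(float(np.mean(outer_boundary))))
    shifts = row0 - np.round(outer_boundary).astype(int)
    flat = np.empty_like(img)
    for c in range(W):
        flat[:, c] = np.roll(img[:, c], shifts[c])   # per-column circular shift
    return flat, shifts  # negate `shifts` to map a segmentation back
```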
Figure 5.
 
(a) An OCT image flattened, using the air-epithelium boundary, and cropped. (b) The OCT image flattened, using the endothelium-aqueous boundary, and cropped. (c) A locally normalized gradient of a, (d) a locally normalized gradient of b, (e) the segmentation of the epithelium-Bowman's and Bowman's-stroma boundaries, and (f) the segmentation of the Descemet's membrane.
Table 1.
 
The Parameters Used for the Segmentation of BW, ST, and DM Boundaries Using EP and EN Boundaries
Figure 6.
 
(a) The result of the automatic segmentation of the air-epithelium (blue color), epithelium-Bowman's (cyan color), Bowman's-stroma (green color), Descemet's (yellow color), and endothelium-aqueous (orange color) boundaries, and (b) zoomed regions of the OCT image.
2D Ray-Tracing Algorithm and Thickness Measurement
The cornea is transparent, and the OCT light is refracted as it passes through it. Moreover, the corneal layers have different refractive indices.14 A refractive index of 1.401 was used for the corneal epithelium and 1.376 for the remaining corneal layers.44 First, the segmentation was converted into millimeters using the scan depth (i.e., 2.18 mm) and width (i.e., 6 mm) of our OCT machine. An iterative ray-tracing algorithm was then applied to correct for the refraction of the OCT imaging light.45 In the ray-tracing algorithm, we applied the vector form of Snell's law at the boundary between every two successive layers (illustrated in Fig. 7). Initially, the direction of the incident ray was assumed to be orthogonal to the corneal apex (i.e., parallel to the axial direction) at each point of the boundary (e.g., p1 in Fig. 7) and given by:  
\begin{equation}\vec{I} = \langle \begin{array}{*{20}{c}} 0&1 \end{array}\rangle \end{equation}
(12)
and the normal to the boundary was computed from the differences between successive boundary points using:
Figure 7.
 
A diagram that illustrates Snell's law, where \(\vec{I}\) is the incident ray, \(\vec{R}\) is the refracted ray, \(\vec{N}\) is the normal to the boundary, θ1 is the incidence angle, θ2 is the refraction angle, and n1 and n2 are the refractive indices.
 
\begin{equation}\vec{N} = \langle \begin{array}{*{20}{c}} { - {\rm{\Delta y}}}&{{\rm{\Delta x}}} \end{array}\rangle \end{equation}
(13)
where Δx and Δy are the differences in the transversal and axial directions, respectively. At each boundary point, the cosine of the incidence angle was computed using:  
\begin{equation}\cos {\theta _1} = - \vec{N} \cdot \vec{I}\end{equation}
(14)
where \(\vec{N}\) is the normal to the boundary and \(\vec{I}\) is the incident ray. The cosine of the refraction angle θ2 was computed using:  
\begin{equation}\cos {\theta _2} = \sqrt {1 - {{\left( {\frac{{{n_1}}}{{{n_2}}}} \right)}^2}\left( {1 - {{\left( {\cos {\theta _1}} \right)}^2}} \right)} \end{equation}
(15)
where n1 and n2 are the refractive indices of the incidence and refraction media. 
The refracted ray R was computed using:  
\begin{equation}\vec{R} = \frac{{{n_1}}}{{{n_2}}}\vec{I} + \left( {\frac{{{n_1}}}{{{n_2}}}\cos {\theta _1} - \cos {\theta _2}} \right)\vec{N}\end{equation}
(16)
 
The optical path length through each layer (e.g., p1p2 in Fig. 7) was corrected to the geometric path length (e.g., p1p′2 in Fig. 7) using:  
\begin{equation}{l^{geo}} = \frac{{{l^{opt}}}}{{{n_2}}}\end{equation}
(17)
 
The updated location of the point p2 was computed using:  
\begin{equation}p{{\rm{^{\prime}}}_2} = {p_1} + {l^{geo}} \cdot \vec{R}\end{equation}
(18)
 
After correcting each layer boundary, the boundary was uniformly resampled and used to correct its successor. Then, the thickness of each corneal layer was measured as the distance between its two boundaries; the thickness at a boundary point was computed as the shortest distance from that point to the successive boundary. In this study, the thicknesses of the epithelium, Bowman's layer, stroma, Descemet's membrane, and the total cornea were measured. 
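The refraction correction of (12) through (18) and the shortest-distance thickness can be sketched as below. The function boundaries and the per-boundary loop structure are assumptions about how the published equations fit together; the refractive indices follow the text (1.0 for air, 1.401 for the epithelium, and 1.376 for the deeper layers).

```python
import numpy as np
from scipy.spatial.distance import cdist

def refract(points, incident, n1, n2):
    """Vector Snell's law, Eqs. (13)-(16), at every boundary point.

    points: (N, 2) array of (x, y) in mm; incident: (N, 2) unit directions.
    """
    d = np.gradient(points, axis=0)                    # local tangent (dx, dy)
    normal = np.stack([-d[:, 1], d[:, 0]], axis=1)     # Eq. (13), then normalized
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    cos1 = -np.sum(normal * incident, axis=1)          # Eq. (14)
    eta = n1 / n2
    cos2 = np.sqrt(1.0 - eta**2 * (1.0 - cos1**2))     # Eq. (15)
    return eta * incident + (eta * cos1 - cos2)[:, None] * normal  # Eq. (16)

def correct_points(p1, optical_len, refracted, n2):
    """Eqs. (17)-(18): optical to geometric path length, then move each point."""
    geo = optical_len / n2
    return p1 + geo[:, None] * refracted

def layer_thickness(upper, lower):
    """Thickness as the shortest distance from each upper point to the lower boundary."""
    return cdist(upper, lower).min(axis=1)

# Usage sketch: start at the air-epithelium boundary with vertical incident
# rays I = (0, 1) per Eq. (12), n1 = 1.0 (air) and n2 = 1.401 (epithelium),
# then repeat boundary by boundary with n = 1.376 for the deeper layers,
# resampling each corrected boundary before correcting its successor.
```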
Evaluation
To evaluate the performance of the segmentation and thickness measurement produced by the proposed algorithm, two trained manual operators participated in the study. In addition, the manual segmentations of other operators were obtained from previous studies along with their datasets.11,12 For the datasets of this study, the two trained operators were instructed to segment all the OCT images in the training and testing datasets (i.e., 120 OCT images in total). The automatic algorithm was then used to segment all the OCT images, including those in the datasets of References 11 and 12. Ray tracing was used to correct the locations of the segmented corneal layer boundaries to account for light refraction, and the thickness of the corneal layers was measured. Additionally, we implemented the methods of Elsawy et al.,11 LaRocca et al.,12 and Zhang et al.13 to compare their performance with that of our proposed segmentation method on our datasets as well as theirs. LaRocca et al.12 used a graph with an edge cost given by:  
\begin{equation}{C_{ab}} = 2 - {f_{y,a}} - {f_{y,b}} + K\end{equation}
(19)
where fy,a and fy,b are the vertical gradients at the vertices a and b, respectively, obtained using (8) and K is a stability constant. 
Quantitative Measures of the Segmentation
To quantitatively compare two segmented boundaries, B1 and B2, we calculated the mean unsigned boundary error (UBE) between them using:  
\begin{equation}\frac{1}{N}\mathop \sum \limits_{i = 1}^N \left| {B_i^1 - B_i^2} \right|\end{equation}
(20)
where N is the number of points in each boundary. We also calculated the corneal region similarity between two corneal regions (i.e., the regions bounded by the EP and EN boundaries), R1 and R2, using the Dice similarity coefficient, which is given by:  
\begin{equation}Dice = \frac{{2\left| {{R^1} \cap {R^2}} \right|}}{{\left| {{R^1}} \right| + \left| {{R^2}} \right|}}\end{equation}
(21)
 
Quantitative Measure of the Thickness
To quantitatively compare two thickness measurements, we calculated the mean and standard deviation of each thickness profile. In addition, we computed the mean unsigned thickness error (UTE) in microns between two thickness profiles, T1 and T2, using:  
\begin{equation}\frac{1}{M}\mathop \sum \limits_{j = 1}^M \left| {T_j^1 - T_j^2} \right|\end{equation}
(22)
where M is the number of points in each thickness profile. To validate the algorithm, we computed these quantitative measures between the two operators (i.e., inter-operator) and between the operators and the algorithm (i.e., algorithm-operator). In all comparisons, we used the Wilcoxon rank sum test (RST) with a significance level of 0.05. Values are presented as mean ± standard deviation. 
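For completeness, the three quantitative measures translate directly into code; this is a minimal sketch of (20) through (22), with the corneal regions assumed to be given as binary masks.

```python
import numpy as np

def mean_ube(b1, b2):
    """Mean unsigned boundary error, Eq. (20), in pixels."""
    return np.mean(np.abs(np.asarray(b1) - np.asarray(b2)))

def dice(r1, r2):
    """Dice similarity coefficient, Eq. (21), for binary corneal-region masks."""
    inter = np.logical_and(r1, r2).sum()
    return 2.0 * inter / (r1.sum() + r2.sum())

def mean_ute(t1, t2):
    """Mean unsigned thickness error, Eq. (22), in microns."""
    return np.mean(np.abs(np.asarray(t1) - np.asarray(t2)))
```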
Results
Datasets
The graph segmentation method was trained on a dataset of 60 OCT images that included different corneal pathologies, and it was tested on three datasets of 60 (i.e., from this study), 50 (i.e., Elsawy's dataset11), and 20 (i.e., LaRocca's dataset12) OCT images, respectively (i.e., a total of 130 OCT images). None of the testing images was used in training the methods. The OCT images included different artifacts and noise, such as the saturation artifact, speckle noise, and low SNR. 
Segmentation Results
The mean UBE between the algorithm and the operators (i.e., algorithm-operator) and the mean UBE between the manual operators (i.e., inter-operator) are summarized in Table 2, Table 3, and Table 4. There were significant differences between the algorithm-operator mean UBE and the inter-operator mean UBE on the testing dataset (0.89 ± 1.03 vs. 0.77 ± 0.68 pixels; P < 0.001), Elsawy's dataset (0.76 ± 0.79 vs. 0.95 ± 0.82 pixels; P < 0.001), and LaRocca's dataset (1.53 ± 2.66 vs. 1.61 ± 2.18 pixels; P < 0.001). The algorithm-operator mean UBE was significantly smaller than the inter-operator mean UBE for the EP, BW, and EN boundaries on our dataset and Elsawy's dataset (P < 0.001); the difference was insignificant only for the DM boundary on our testing dataset (P = 0.4051). On LaRocca's dataset, the algorithm-operator mean UBE was significantly smaller than the inter-operator mean UBE only for the EP boundary (P < 0.001), likely because our method was trained on a dataset obtained from an OCT machine different from the one used to acquire LaRocca's dataset. The algorithm-operator mean UBE was significantly larger than the inter-operator mean UBE for the ST and DM boundaries on the testing dataset, which contained pathological eyes (P < 0.001). This suggests that our method performed better on normal eyes for those layer boundaries while remaining close to the inter-operator errors for pathological eyes. 
Table 2.
 
Results of the Mean Unsigned Boundary Error Between the Operators and the Algorithm on the Testing Dataset
Table 3.
 
Results of the Mean Unsigned Boundary Error Between the Operators and the Algorithm on Elsawy's Dataset11
Table 4.
 
Results of the Mean Unsigned Boundary Error Between the Operators and the Algorithm on LaRocca's Dataset12
The mean algorithm-operator corneal-region similarity and the mean inter-operator corneal-region similarity are summarized in Table 5 for the three datasets. There was no significant difference between the algorithm-operator and inter-operator mean corneal region similarity on the testing dataset (0.9959 ± 0.0008 vs. 0.9957 ± 0.0010; P = 0.3765), but there were significant differences on Elsawy's dataset (0.9956 ± 0.0013 vs. 0.9949 ± 0.0014; P = 0.0005) and on LaRocca's dataset (0.9908 ± 0.0054 vs. 0.9897 ± 0.0028; P = 0.0256). 
Table 5.
 
Results of the Corneal-Region Dice Similarity Coefficient Between the Operators and Between the Operators and the Algorithm
Comparison of the Proposed Segmentation with Other Methods
The results of the other methods11–13 were reported based on our implementation, using images of size 512 × 500 pixels for consistency across all methods and datasets. The mean UBE results for our proposed segmentation method and the three implemented methods11–13 are summarized in Table 6, Table 7, and Table 8. There were significant differences between the mean UBE of our proposed method and the methods of Elsawy et al.,11 LaRocca et al.,12 and Zhang et al.13 on the testing dataset (0.89 ± 1.03 vs. 1.85 ± 8.80, 2.00 ± 5.10, and 5.81 ± 17.82 pixels, respectively; P < 0.001), Elsawy's dataset (0.76 ± 0.79 vs. 0.72 ± 0.68, 1.05 ± 1.55, and 6.41 ± 22.70 pixels, respectively; P < 0.001), and LaRocca's dataset (1.53 ± 2.66 vs. 4.87 ± 16.15, 3.03 ± 6.94, and 4.84 ± 6.65 pixels, respectively; P < 0.001). A summary of the best-performing method for each boundary on all datasets is given in Table 9. The proposed method outperformed the other methods on the testing dataset. However, it was inferior to Elsawy's method for the EP, DM, and EN boundaries on Elsawy's dataset. This is because the proposed method, to remain generic, does not fit the segmentation to a model, whereas Elsawy et al.11 used polynomial models, which are more accurate for segmenting normal OCT images. The proposed method was better than the other methods for the EP and BW boundaries on LaRocca's dataset. These results suggest that the proposed method is preferable when the corneal layer boundaries have irregular shapes, but less favorable when the boundaries can be described by a specific model, as in normal OCT images. 
Table 6.
 
Comparison of the Mean Unsigned Boundary Error of the Proposed Method With the Methods in References 11, 12, and 13 on the Testing Dataset
Table 7.
 
Comparison of the Mean Unsigned Boundary Error of the Proposed Method With Three Methods11–13 on Elsawy's Dataset
Table 8.
 
Comparison of the Mean Unsigned Boundary Error of the Proposed Method With Three Methods11–13 on LaRocca's Dataset
Table 9.
 
Summary of the Best-Performing Method for Each Boundary on all Datasets
Thickness Measurement Results
The thickness measurement results are summarized in Table 10 and Table 11. We used the testing dataset and Elsawy's dataset because both have a radial scan pattern, so the thickness measurement is reliable in 2D. According to the study in Reference 41, the mean thicknesses of the central cornea, epithelium, Bowman's layer, stroma, and Descemet's membrane-endothelium in healthy eyes are 555.50 ± 29.64, 54.60 ± 4.25, 16.70 ± 1.73, 467.51 ± 28.91, and 16.74 ± 1.66 µm, respectively. The measured thickness values in Table 11, which were obtained from a dataset of healthy eyes, are close to these reported values. In addition, the thickness values calculated from the manual segmentation are close to those calculated from the graph segmentation. Based on the studies in References 8, 9, and 42, corneal diseases can cause thinning or thickening of the cornea. The results in Table 10 were obtained from a dataset of healthy and diseased eyes, so the layer thickness measurements had larger variability; they remain within the normal range but with a larger standard deviation. Again, the measured thickness values from the manual operators are close to those from the graph segmentation. The mean UTE results in µm are summarized in Table 12 and Table 13. The algorithm-operator mean UTE was close to the inter-operator mean UTE on the testing dataset (3.62 ± 3.98 vs. 2.95 ± 2.52 µm; P < 0.001) and on Elsawy's dataset (3.21 ± 2.90 vs. 2.99 ± 2.54 µm; P < 0.001). On average, the difference in thickness measurement was less than the optical resolution of the imaging machine (i.e., 3 µm). This confirms that the thickness measurement based on the proposed graph segmentation can reproduce the thickness measurement of the manual operators. 
Table 10.
 
Thickness Measurement in µm on the Testing Dataset
Table 11.
 
Thickness Measurement in µm on Elsawy's Dataset
Table 12.
 
Mean Unsigned Thickness Error in µm on the Testing Dataset Between the Operators and Between the Operators and the Algorithm
Table 13.
 
Mean Unsigned Thickness Error in µm on Elsawy's Dataset Between the Operators and Between the Operators and the Algorithm
Segmentation Examples
Graphical examples of the proposed segmentation against the two operators are shown in Figure 8 and Figure 9 for OCT images from our testing dataset; the proposed segmentation is close to the manual segmentation. Graphical examples of the proposed segmentation against the expert and the trained operator are shown in Figure 10 and Figure 11 for OCT images from LaRocca's dataset; the proposed segmentation is closer to the expert than to the trained operator. For a visual comparison of the proposed segmentation method against the methods in References 11, 12, and 13, examples of the segmentation produced by the different methods are shown in Figure 12 and Figure 13; the proposed method is closer to the mean of the two operators than the other methods. An example of the worst performance of the proposed method is shown in Figure 14, where the epithelium-Bowman's boundary is mislocated because of the size of the search window; the other methods, however, had many dislocated boundaries in several images. 
Figure 8.
 
(a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for an optical coherence tomography image from our testing dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 9.
 
(a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for another optical coherence tomography image from our testing dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 10.
 
(a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for an optical coherence tomography image from LaRocca's dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 11.
 
(a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for another optical coherence tomography image from LaRocca's dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 12.
 
(a) Graphical comparison of different segmentation methods and the mean segmentation of the trained operators for an optical coherence tomography image from our testing dataset, (b) zoomed top box, (c) zoomed bottom box, and (d) zoomed middle box.
Figure 13.
 
(a) Graphical comparison of different segmentation methods and the mean segmentation of the trained operators for an optical coherence tomography image from our testing dataset, (b) zoomed top-left box, (c) zoomed bottom-left box, and (d) zoomed middle-central box.
Figure 14.
 
(a) Graphical example of the worst performance of the proposed method where the epithelium-Bowman's boundary is mislocated due to the used search region for the boundary, (b) zoomed top-left box, and (c) zoomed top-right box.
Discussion
Measuring the thickness of different corneal layers using OCT images is used for the diagnosis of different corneal diseases.2–9 Therefore, an automated algorithm that segments the layer boundaries in OCT images and measures the layer thicknesses is highly needed. Existing methods have limitations in segmenting pathological corneas with irregular boundaries, and they do not segment all boundaries. In addition, they do not consider the thickness measurement, which is of great interest. In this work, we proposed a graph-based algorithm that segments five corneal layer boundaries in pathological cases, accounts for light refraction, and measures the thickness of the different layers. To our knowledge, there is no other algorithm that segments five corneal layer boundaries in pathological corneas with irregular layer boundaries. The proposed algorithm was validated against manual segmentation from different operators on three datasets. The datasets contained a total of 130 OCT images of pathological and healthy corneas and included the common artifacts that typically exist in corneal OCT images, to ensure the robustness of the algorithm. The proposed algorithm segments the corneal layer boundaries accurately: the mean unsigned boundary error between the algorithm and the mean operator is less than one pixel for all boundaries except the DM. The segmentation of the DM boundary is less accurate, which may be due to the low SNR of the images; another possible reason is that we used a large search window for the DM boundary to accommodate the variation of its location in pathological eyes, which increases the chance of error. The proposed segmentation had a corneal region similarity comparable to the inter-operator mean corneal region similarity on the testing dataset, and on Elsawy's and LaRocca's datasets the differences were significant in favor of our algorithm. These results suggest that the proposed segmentation algorithm could be equivalent to manual operators. 
To compare the performance of our proposed method with existing segmentation methods, we implemented the methods in References 11, 12, and 13 and evaluated them on our datasets as well as theirs. Our proposed segmentation method performed significantly better than LaRocca's method12 and Zhang's method13 on both our datasets and theirs. Elsawy's method11 performed better than the proposed method in segmenting three layer boundaries on Elsawy's dataset because it was tuned on that dataset, which included only normal eyes, and because it used a polynomial model to segment the layer boundaries, which is more accurate in that setting. However, Elsawy's method11 was not able to generalize to the pathological OCT images in the testing dataset. In addition, the proposed segmentation method segments more layer boundaries than the other methods.12,13 
The graphical comparison of the proposed segmentation method with the trained operators showed that it was close to the manual segmentation of the trained operators on our testing dataset, and closer to the expert than to the trained operator on LaRocca's dataset.12 In addition, the graphical comparison with the other segmentation methods11–13 showed that the proposed method performed better. 
The corneal layer thickness measurements obtained using the proposed method have excellent reliability, with accuracy up to the optical resolution of the machine. In addition, the measured layer thicknesses were close to those reported in the literature. The algorithm had a mean unsigned thickness error within 1 µm of the inter-operator error for the epithelium and Bowman's layers as well as for the total corneal thickness, and a mean unsigned thickness error below the optical resolution of the imaging machine (i.e., 3 µm) for the Descemet's membrane. Our proposed algorithm could be a potential tool to assist in clinical practice because it overcomes the low repeatability, low reproducibility, and laborious work of manual segmentation. 
There are some limitations to the proposed segmentation method. First, the directional term in the cost function depends on having a good estimate of the reference polynomial from the initial segmentation, which is affected by the detection of the low-SNR regions and by the size of the reliable part of the initial segmentation. Second, the proposed method can fail on severely pathological corneal OCT images if the layer boundaries are unclear or missing entirely. Finally, this study is limited to segmentation and thickness measurement in 2D OCT images; however, the thickness measurement can be generalized to 3D using the segmentation of a complete OCT scan. 
Conclusion
In this paper, we proposed an automatic algorithm for the segmentation and thickness measurement of the corneal layers in OCT images of normal as well as pathological corneas. The algorithm was validated against manual operators on different datasets and compared with counterpart methods from the literature. It can segment five corneal layer boundaries in both pathological and healthy corneas, performing significantly better than several existing methods, and its thickness measurement is accurate up to the optical resolution of the machine. Our proposed algorithm could be a potential tool to assist in clinical practice. 
Acknowledgments
The authors thank LaRocca for providing them with the dataset used in his study. 
United States Non-Provisional Patent (Application No. 14/247903) and United States Provisional Patent (Application No. 62/445,106) (M.A.). United States Non-Provisional Patents (Application No. 8992023 and 61809518), and PCT/US2018/013409 (M.A. and A.E.). Patents and PCT are owned by University of Miami and licensed to Resolve Ophthalmics, LLC. M.A. is an equity holder and sits on the Board of Directors for Resolve Ophthalmics, LLC. 
Supported by a NEI K23 award (K23EY026118), NEI core center grant to the University of Miami (P30 EY014801), and Research to Prevent Blindness (RPB). The funding organization had no role in the design or conduct of this research. 
Disclosure: A. Elsawy, Non-Provisional Patents (Application No. 8992023 and 61809518), and PCT/ US2018/013409 (M.A. and A.E.); G. Gregori, None; T. Eleiwa, None; M. Abdel-Mottaleb, None; M.A. Shousha, None 
References
Huang D, Swanson EA, Lin CP, et al. Optical coherence tomography. Science. 1991; 254: 1178–1181. [CrossRef] [PubMed]
Haque S, Jones L, Simpson T. Corneal and epithelial thickness in keratoconus: a comparison of ultrasonic pachymetry, orbscan and optical coherence tomography measurements. Invest Ophth Vis Sci. 2004; 45: U28.
Reinstein DZ, Gobbe M, Archer TJ, Silverman RH, Coleman DJ. Epithelial, stromal, and total corneal thickness in keratoconus: three-dimensional display with artemis very-high frequency digital ultrasound. J Refract Surg. 2010; 26: 259–271. [CrossRef] [PubMed]
Patwardhan AA, Khan M, Mollan SP, Haigh P. The importance of central corneal thickness measurements and decision making in general ophthalmology clinics: a masked observational study. BMC Ophthalmol. 2008; 8: 1. [CrossRef] [PubMed]
Abou Shousha M, Yoo SH, Sayed MS, et al. In vivo characteristics of corneal endothelium/Descemet membrane complex for the diagnosis of corneal graft rejection. Am J Ophthalmol. 2017; 178: 27–37. [CrossRef] [PubMed]
Vajzovic LM, Karp CL, Haft P, et al. Ultra high-resolution anterior segment optical coherence tomography in the evaluation of anterior corneal dystrophies and degenerations. Ophthalmology. 2011; 118: 1291–1296. [PubMed]
Wang J, Abou Shousha M, Perez VL, et al. Ultra-high resolution optical coherence tomography for imaging the anterior segment of the eye. Ophthalmic Surg Lasers Imaging. 2011; 42(Suppl): S15–S27. [CrossRef] [PubMed]
Abou Shousha M, Perez VL, Canto APFS, et al. The use of Bowman's layer vertical topographic thickness map in the diagnosis of keratoconus. Ophthalmology. 2014; 121: 988–993. [PubMed]
Abou Shousha M, Perez VL, Wang JH, et al. Use of ultra-high-resolution optical coherence tomography to detect in vivo characteristics of Descemet's membrane in Fuchs' dystrophy. Ophthalmology. 2010; 117: 1220–1227. [PubMed]
Sridhar MS . Anatomy of cornea and ocular surface. Indian J Ophthalmol. 2018; 66: 190–194. [PubMed]
Elsawy A, Abdel-Mottaleb M, Sayed IO, et al. Automatic segmentation of corneal microlayers on optical coherence tomography images. Transl Vis Sci Technol. 2019; 8: 39.
LaRocca F, Chiu SJ, McNabb RP, Kuo AN, Izatt JA, Farsiu S. Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming. Biomed Opt Express. 2011; 2: 1524–1538. [PubMed]
Zhang TQ, Elazab A, Wang XG, et al. A novel technique for robust and fast segmentation of corneal layer interfaces based on spectral-domain optical coherence tomography imaging. IEEE Access. 2017; 5: 10352–10363.
Patel S, Marshall J, Fitzke FW. Refractive-index of the human corneal epithelium and stroma. J Refract Surg. 1995; 11: 100–105. [PubMed]
Eichel JA, Bizheva KK, Clausi DA, Fieguth PW. Automated 3D reconstruction and segmentation from optical coherence tomography. Lect Notes Comput Sci. 2010; 6313: 44–57.
Elsawy A, Abdel-Mottaleb M, Abou Shousha M. Segmentation of corneal optical coherence tomography images using graph search and Radon transform. Prog Biomed Optics Imag Proc SPIE. 2019; 10949.
Elsawy A, Abdel-Mottaleb M, Abou Shousha M. Segmentation of corneal optical coherence tomography images using randomized Hough transform. Presented at: SPIE Medical Imaging, 2019, San Diego, California, United States. 2019; Abstract 10949.
Jahromi MK, Kafieh R, Rabbani H, et al. An automatic algorithm for segmentation of the boundaries of corneal layers in optical coherence tomography images using Gaussian mixture model. J Med Signals Sens. 2014; 4: 171–180. [PubMed]
Keller B, Draelos M, Tang G, et al. Real-time corneal segmentation and 3D needle tracking in intrasurgical OCT. Biomed Opt Express. 2018; 9: 2716–2732. [PubMed]
Li Y, Shekhar R, Huang D. Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images. Proc SPIE. 2002.
Robles VA, Antony BJ, Koehn DR, Anderson MG, Garvin MK. 3D graph-based automated segmentation of corneal layers in anterior-segment optical coherence tomography images of mice. Presented at: SPIE Medical Imaging, 2014, San Diego, California, United States. 2014; Abstract 9038.
Shu P, Sun YK. Automated extraction of the inner contour of the anterior chamber using optical coherence tomography images. J Innov Opt Health Sci. 2012; 5: 1250030-1–1250030-9.
Wagner J, Pezold S, Cattin PC. Model-driven 3-D regularisation for robust segmentation of the refractive corneal surfaces in spiral OCT scans. Lect Notes Comput Sci. 2017; 10554: 109–117.
Williams D, Zheng YL, Bao FJ, Elsheikh A. Automatic segmentation of anterior segment optical coherence tomography images. J Biomed Opt. 2013; 18: 56003. [PubMed]
Williams D, Zheng YL, Bao FJ, Elsheikh A. Fast segmentation of anterior segment optical coherence tomography images using graph cut. Eye Vision. 2015; 2: 1. [PubMed]
Williams D, Zheng YL, Davey PG, Bao FJ, Shen MX, Elsheikh A. Reconstruction of 3D surface maps from anterior segment optical coherence tomography images using graph theory and genetic algorithms. Biomed Signal Process Control. 2016; 25: 91–98.
Dos Santos VA, Schmetterer L, Stegmann H, et al. CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomed Opt Express. 2019; 10: 622–641. [PubMed]
Fang LY, Cunefare D, Wang C, Guymer RH, Li ST, Farsiu S. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. Biomed Opt Express. 2017; 8: 2732–2744. [PubMed]
Mathai TS, Lathrop KL, Galeotti J. Learning to segment corneal tissue interfaces in OCT images. Proc IEEE Int Symp Biomed Imaging. 2019; 1432–1436.
Pekala M, Joshi N, Liu TYA, Bressler NM, DeBuc DC, Burlina P. Deep learning based retinal OCT segmentation. Comput Biol Med. 2019; 114: 103445. [PubMed]
Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017; 60: 84–90.
Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Lect Notes Comput Sci. 2015; 9351: 234–241.
West DB . Introduction to graph theory. Upper Saddle River, NJ: Prentice Hall; 1996.
Goldberg AV, Radzik T. A heuristic improvement of the Bellman-Ford algorithm. Appl Math Lett. 1993; 6: 3–6.
Sage D, Unser M. Teaching image-processing programming in Java. IEEE Signal Process Mag. 2003; 20: 43–52.
Han J, Pei J, Kamber M. Data mining: concepts and techniques. New York, NY: Elsevier; 2011.
Gonzalez RC, Woods RC. Digital Image Processing. Boston, MA: Addison-Wesley; 1992.
Li Y, Netto MV, Shekhar R, Krueger RR, Huang D. A longitudinal study of LASIK flap and stromal thickness with high-speed optical coherence tomography. Ophthalmology. 2007; 114: 1124–1132. [PubMed]
Li Y, Shekhar R, Huang D. Corneal pachymetry mapping with high-speed optical coherence tomography. Ophthalmology. 2006; 113: 792–799.e792. [PubMed]
Stewart J . Calculus: Concepts and contexts. Boston, MA: Cengage Learning; 2009.
Lopez de la Fuente C, Sanchez-Cano A, Segura F, Hospital EO, Pinilla I. Evaluation of total corneal thickness and corneal layers with spectral-domain optical coherence tomography. J Refract Surg. 2016; 32: 27–32. [PubMed]
Prospero Ponce CM, Rocha KM, Smith SD, Krueger RR. Central and peripheral corneal thickness measured with optical coherence tomography, Scheimpflug imaging, and ultrasound pachymetry in normal, keratoconus-suspect, and post-laser in situ keratomileusis eyes. J Cataract Refract Surg. 2009; 35: 1055–1062. [PubMed]
Yazici AT, Bozkurt E, Alagoz C, et al. Central corneal thickness, anterior chamber depth, and pupil diameter measurements using Visante OCT, Orbscan, and Pentacam. J Refract Surg. 2010; 26: 127–133. [PubMed]
Yadav R, Kottaiyan R, Ahmad K, Yoon G. Epithelium and Bowman's layer thickness and light scatter in keratoconic cornea evaluated using ultrahigh resolution optical coherence tomography. J Biomed Opt. 2012; 17: 116010. [PubMed]
Glassner AS . An introduction to ray tracing. New York, NY: Elsevier; 1989.
Figure 1. The construction of the graph from the image, where the image pixels are used as the vertices of the graph in addition to source and target vertices. (a) Each pixel is connected to its immediate neighbors using five-connectivity, and (b) the source vertex is connected to the leftmost image pixels and the target vertex is connected to the rightmost image pixels.
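As a minimal sketch of a search on this graph, the following finds a minimum-cost left-to-right path with Dijkstra's algorithm. The reading of five-connectivity as the two vertical neighbors plus the three right-hand neighbors, the treatment of pixel values as non-negative edge costs paid on entry, and the function name are illustrative assumptions, not the paper's exact formulation.

import heapq
import numpy as np

def min_cost_boundary(cost):
    # Minimum-cost left-to-right path through a non-negative cost image,
    # following the graph of Figure 1: one vertex per pixel, a source
    # tied to the leftmost column, and a target tied to the rightmost.
    rows, cols = cost.shape
    # Assumed five-connectivity: up, down, and the three right-hand moves.
    steps = [(-1, 0), (1, 0), (-1, 1), (0, 1), (1, 1)]
    dist = np.full((rows, cols), np.inf)
    prev = {}
    heap = []
    for r in range(rows):  # source edges: entering a leftmost pixel pays its cost
        dist[r, 0] = cost[r, 0]
        heapq.heappush(heap, (dist[r, 0], (r, 0)))
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    node = (int(dist[:, -1].argmin()), cols - 1)  # cheapest rightmost pixel
    path = [node]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return path[::-1]  # (row, col) vertices of the boundary path

Here cost can be, for example, an inverted locally normalized gradient, so that strong dark-to-bright transitions become cheap to traverse.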
Figure 2. A flowchart that illustrates the steps of the proposed segmentation algorithm.
Figure 3. (a) An unaveraged optical coherence tomography (OCT) image and (b) the locally normalized gradient of the OCT image.
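A minimal sketch of one way to compute such a locally normalized gradient is given below; the axial-gradient choice, the window size, and the mean-magnitude normalization are assumptions rather than the paper's exact operator.

import numpy as np
from scipy.ndimage import uniform_filter

def locally_normalized_gradient(img, win=15, eps=1e-6):
    # Axial (row-wise) gradient divided by its local mean magnitude in a
    # win-by-win window; eps guards against division by zero.
    grad = np.gradient(img.astype(float), axis=0)
    local_mag = uniform_filter(np.abs(grad), size=win)
    return grad / (local_mag + eps)

Dividing by the local magnitude keeps faint boundaries in low-SNR regions comparable in strength to well-illuminated ones.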
Figure 4. The segmentation steps. (a) The initial segmentation using a gradient term, (b) zoomed regions that show the defects in the initial segmentation, (c) the second segmentation using gradient, directional, and multiplier terms, and (d) the same zoomed regions after the second segmentation.
Figure 5. (a) An OCT image flattened using the air-epithelium boundary and cropped. (b) The OCT image flattened using the endothelium-aqueous boundary and cropped. (c) A locally normalized gradient of a, (d) a locally normalized gradient of b, (e) the segmentation of the epithelium-Bowman's and Bowman's-stroma boundaries, and (f) the segmentation of the Descemet's membrane.
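A minimal sketch of flattening by per-column shifts follows; integer shifts and edge replication are simplifications (the paper's interpolation and cropping are not reproduced), and the function name is illustrative.

import numpy as np

def flatten_to_boundary(img, boundary_rows, ref_row=None):
    # Shift every column so the given boundary lands on one reference row.
    rows, cols = img.shape
    b = np.round(np.asarray(boundary_rows)).astype(int)
    if ref_row is None:
        ref_row = int(b.max())  # deepest boundary point as the reference
    out = np.empty_like(img)
    for c in range(cols):
        shift = ref_row - b[c]  # positive shift moves the column down
        src = np.clip(np.arange(rows) - shift, 0, rows - 1)
        out[:, c] = img[src, c]
    return out

After flattening, the boundary of interest is nearly horizontal, so a narrow horizontal band can be cropped and searched, as in panels (a) and (b).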
Figure 6. (a) The result of the automatic segmentation of the air-epithelium (blue color), epithelium-Bowman's (cyan color), Bowman's-stroma (green color), Descemet's (yellow color), and endothelium-aqueous (orange color) boundaries, and (b) zoomed regions of the OCT image.
Figure 8. (a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for an optical coherence tomography image from our testing dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 9. (a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for another optical coherence tomography image from our testing dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 10. (a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for an optical coherence tomography image from LaRocca's dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 11. (a) Graphical comparison of the graph segmentation (orange color) against the manual segmentation of the two trained operators (blue and green colors) for another optical coherence tomography image from LaRocca's dataset, (b) zoomed top-left box, (c) zoomed top-right box, and (d) zoomed bottom box.
Figure 12. (a) Graphical comparison of different segmentation methods and the mean segmentation of the trained operators for an optical coherence tomography image from our testing dataset, (b) zoomed top box, (c) zoomed bottom box, and (d) zoomed middle box.
Figure 13. (a) Graphical comparison of different segmentation methods and the mean segmentation of the trained operators for an optical coherence tomography image from our testing dataset, (b) zoomed top-left box, (c) zoomed bottom-left box, and (d) zoomed middle-central box.
Figure 14. (a) Graphical example of the worst performance of the proposed method, where the epithelium-Bowman's boundary is mislocated because of the search region used for that boundary, (b) zoomed top-left box, and (c) zoomed top-right box.
Table 1. The Parameters Used for the Segmentation of BW, ST, and DM Boundaries Using EP and EN Boundaries
Table 2. Results of the Mean Unsigned Boundary Error Between the Operators and the Algorithm on the Testing Dataset
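For reference, the mean unsigned boundary error reported here and in the following tables is the average absolute difference, in pixels, between two segmentations of the same boundary; a minimal sketch, assuming each segmentation is given as per-column row positions:

import numpy as np

def mean_unsigned_error(boundary_a, boundary_b):
    # Mean absolute row difference, in pixels, between two segmentations
    # of the same boundary sampled at the same columns.
    a = np.asarray(boundary_a, dtype=float)
    b = np.asarray(boundary_b, dtype=float)
    return float(np.mean(np.abs(a - b)))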
Table 3. Results of the Mean Unsigned Boundary Error Between the Operators and the Algorithm on Elsawy's Dataset11
Table 4. Results of the Mean Unsigned Boundary Error Between the Operators and the Algorithm on LaRocca's Dataset12
Table 5. Results of the Corneal-Region Dice Similarity Coefficient Between the Operators and Between the Operators and the Algorithm
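The Dice similarity coefficient in Table 5 is the standard overlap measure 2|A∩B|/(|A| + |B|); a minimal sketch over binary masks follows, where the construction of the corneal-region masks from the segmented boundaries is assumed.

import numpy as np

def dice_coefficient(mask_a, mask_b):
    # Dice similarity coefficient between two binary corneal-region masks.
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0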
Table 6. Comparison of the Mean Unsigned Boundary Error of the Proposed Method With the Methods in References 11, 12, and 13 on the Testing Dataset
Table 7. Comparison of the Mean Unsigned Boundary Error of the Proposed Method With Three Methods11–13 on Elsawy's Dataset
Table 8. Comparison of the Mean Unsigned Boundary Error of the Proposed Method With Three Methods11–13 on LaRocca's Dataset
Table 9. Summary of the Best-Performing Method for Each Boundary on All Datasets
Table 10. Thickness Measurement in µm on the Testing Dataset
Table 11. Thickness Measurement in µm on Elsawy's Dataset
Table 12. Mean Unsigned Thickness Error in µm on the Testing Dataset Between the Operators and Between the Operators and the Algorithm
Table 13. Mean Unsigned Thickness Error in µm on Elsawy's Dataset Between the Operators and Between the Operators and the Algorithm