Open Access
Artificial Intelligence  |   March 2025
Corneal Layer Segmentation in Healthy and Pathological Eyes: A Joint Super-Resolution Generative Adversarial Network and Adaptive Graph Theory Approach
Author Affiliations & Notes
  • Khin Yadanar Win
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
    SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
  • Jipson Wong Hon Fai
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  • Wong Qiu Ying
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  • Chloe Chua Si Qi
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  • Jacqueline Chua
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
    Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  • Damon Wong
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
    SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
    Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  • Marcus Ang
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
    Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  • Leopold Schmetterer
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
    SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
    Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
    School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore
    Centre for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
    Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
    Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
    Fondation Ophtalmologique Adolphe De Rothschild, Paris, France
  • Bingyao Tan
    Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
    SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
    Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  • Correspondence: Bingyao Tan, Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, The Academia, Level 6, Discovery Tower, Singapore 169856, Singapore. e-mail: [email protected] 
Translational Vision Science & Technology March 2025, Vol.14, 19. doi:https://doi.org/10.1167/tvst.14.3.19
Abstract

Purpose: To enhance corneal layer segmentation and thickness measurement in ultra-high axial resolution optical coherence tomography (OCT) images for both healthy and pathological eyes using super-resolution generative adversarial network and adaptive graph theory.

Methods: We combined a super-resolution generative adversarial network (SRGAN) with adaptive graph theory to improve the segmentation accuracy of five corneal layers: epithelium, Bowman's layer, corneal stroma, Descemet's membrane, and endothelium. The fine-tuned SRGAN enhances the contrast and visibility of layer interfaces, particularly Descemet's membrane. For the graph theory–based layer segmentation, search spaces were adapted according to the contrast of each layer. We segmented volumetric high-resolution corneal OCT images of healthy participants, patients who underwent Descemet's membrane endothelial keratoplasty (DMEK), and patients with Fuchs endothelial corneal dystrophy (FECD).

Results: Enface thickness maps were generated over a 4-mm field of view from both healthy and pathological eyes. The measurements showed high reproducibility (intraclass correlation coefficient [ICC] = 0.97) for the whole cornea and stroma and moderate reproducibility for the other layers (ICC = 0.64 for epithelium/Bowman's complex; ICC = 0.53 for endothelium/Descemet's membrane complex). The average thickness errors were 3.5 µm for the total cornea, 4.4 µm for epithelium, 2.5 µm for Bowman's, 4.3 µm for stroma, and 3.0 µm for endothelium/Descemet's membrane complex.

Conclusions: The proposed method consistently outperforms conventional graph search methods across all corneal layer segmentations, which is beneficial for diagnosing and monitoring corneal diseases.

Translational Relevance: Our method can provide precise thickness measurement of multiple corneal layers, which has the potential to improve DMEK monitoring and FECD diagnosis.

Introduction
The human cornea comprises five primary layers: the epithelium (EP), Bowman's layer (BW), the corneal stroma (ST), Descemet's membrane (DM), and the endothelium (ED).1,2 Accurate quantification of the layer thickness facilitates early detection, diagnosis, and management of various corneal diseases, such as Fuchs endothelial corneal dystrophy (FECD) and keratoconus.3–7 For instance, it has been reported that thickness measurements of the endothelium/Descemet's membrane complex (ED/DM complex) have the potential to aid in diagnosing FECD and grading its severity.7 Corneal layer thickness measurements are valuable indicators for monitoring the restoration of corneal endothelial function after Descemet's membrane endothelial keratoplasty (DMEK).8
Optical coherence tomography (OCT)9 is a noninvasive and label-free modality widely used in detecting corneal pathology. However, commercial OCT systems with limited axial resolution cannot resolve all corneal layers, particularly DM and the pre-Descemet's membrane. Ultra-high-resolution OCT (UHROCT) uses a light source with a broadband spectrum to provide an axial resolution of <2 µm. It can resolve all corneal layers with high precision.10–15 Figure 1 illustrates corneal anatomy alongside a cross-sectional ultra-high-resolution OCT image of a human cornea with FECD. This is particularly beneficial in conditions where the thickening of DM and the appearance of guttae are the primary characteristics of the disease progression.16 However, for better quantification of the spatial distribution of tissue alterations, precise layer segmentation is needed.
Figure 1. Illustration of corneal anatomy (left) and a cross-sectional ultra-high-resolution OCT image of a FECD eye (right), where five layers are visible.
We summarized recent efforts in corneal layer segmentation on OCT images in Table 1.17–28 These methods are broadly categorized into conventional or deep learning approaches. Conventional approaches include graph theory,19,23 active contour,29–31 or simply finding the points with strong intensity followed by curve fitting.20–22 Moreover, a few studies aimed to segment finer healthy corneal layers with semiautomatic17 and automatic approaches.22,23 The semiautomatic approaches17,18 assumed a regular-shaped cornea whose layer boundaries could be fitted by a regular curve. Fully automatic methods include random sample consensus (RANSAC) with polynomial curve fitting22 and a two-stage graph search23 to segment all corneal layers. The two-stage graph search,23 while superior to RANSAC,22 still necessitates manually defined fixed search spaces for intralayer segmentation and manual corrections for severe pathological images.7 Moreover, the Hough transform and Kalman filter21 and a Gaussian mixture model20 were used to segment corneal layers (EP, ED, and BW), extrapolating the segmentation into low signal-to-noise ratio (SNR) regions via parabolic fitting.
Table 1. Overview of the Related Work on Corneal Layer Segmentation
On the other hand, deep learning algorithms require training on vast amounts of labeled data. U-Net–based networks are often used as the backbone model,24–26 but they cannot effectively segment layers with arbitrary shapes.24 Elsawy et al.27 introduced PIPE-Net with a pyramidal input, parallel encoders, and a densely connected decoder to segment four corneal layer interfaces; however, their method has a high computational cost due to the multiple encoders and suffers from layer discontinuities. Other deep learning studies introduced a boundary-guided convolutional neural network (BG-CNN)26 and an edge-enhanced convolutional network (EE-Net),28 but these models were applied only to segment thick corneal layers. Although deep learning methods have shown promising results, their dependence on large training datasets poses a significant challenge for the limited data acquired from prototype systems, and manual segmentation is labor-intensive. Notably, due to the limited resolution of commercial OCT systems, fine layers, including DM, were usually not the target for designing segmentation algorithms. Most of the studies focused on the segmentation of prominent boundaries such as EP, ED, and ST,19–21,24–26,28 while some studies using high-definition OCT also included the segmentation of DM in healthy17,18 and pathological corneas.22,23 DM plays a critical role in corneal diseases like FECD and DM detachment, where DM is the primary site of pathological changes. Figure 2 contrasts the healthy cornea (left) against one affected by FECD with local thickening and increased scattering of DM. The heterogeneous appearances of the DM layer in healthy and pathological corneal OCTs make it challenging to develop a single algorithm catering to various conditions. Moreover, the images also show artifacts from central specular reflection and low-SNR regions (labeled in Fig. 2), making segmentation even more challenging.
Figure 2. OCT images of a healthy cornea (left) and one affected by FECD (right). Magnified areas highlight the endothelium layer, showing focal thickening in the FECD-affected cornea compared to the healthy cornea. Notably, both images exhibit central artifacts, and the healthy cornea image shows a low-SNR region at the bottom.
Here, we developed a segmentation algorithm to identify all corneal layers in UHROCT images, for both healthy and pathological corneas. First, we enhanced the contrast of layer boundaries with a pretrained generative adversarial network fine-tuned for OCT images, and subsequently, we used automated adaptive search windows for graph theory–based boundary segmentation, effectively handling the heterogeneous appearance of DM in healthy and pathological conditions. 
Materials and Methods
Dataset
Our study employed a custom-built spectral domain OCT imaging system for corneal imaging.10,11 This system operates in an 800-nm wavelength region using a superluminescent diode. It provides an axial resolution of <2 µm in tissue. A spectrometer interfaced with a line scan camera (2048 pixels, 250 kHz; Octoplus, E2V, Chelmsford, Essex, UK) acquires the interferometric fringes, corresponding to a 1.07-mm image depth in air. The dataset was acquired at the Singapore National Eye Center. The data collection adhered to the ethical guidelines of the Declaration of Helsinki and was approved by the SingHealth Centralized Institutional Review Board. Informed consent was obtained from all participants. The dataset includes 101 eyes from 59 participants: 27 with FECD, 19 who underwent DMEK, and 13 healthy controls. Each volume consists of 255 B-scans acquired using a radial scanning protocol, and each B-scan has 1600 A-scans. A subset of 101 B-scans, randomly selected from these eyes, was manually labeled by a trained clinician.
Proposed Method
We propose a joint approach of a super-resolution generative adversarial network with an adaptive graph search for corneal layer segmentation to generate thickness maps of individual layers. Figure 3 depicts the three-step pipeline: (A) image acquisition and enhancement, (B) corneal layer segmentation, and (C) three-dimensional (3D) thickness map generation. The following subsections elaborate on each step. 
Figure 3. Proposed pipeline for corneal layer segmentation and thickness map generation. Dashed lines represent individual processes, while solid lines indicate the sequence of overarching methods associated with each process.
Image Acquisition and Layer Boundary Enhancement
We obtained OCT B-scans using a radial scanning protocol, as illustrated by the radial lines over the eye's image (Fig. 3A). While EP and ED layers are prominently visible, the layers in between, particularly DM and BW, present less well-defined boundaries. Enhancing the clarity and sharpness of these boundaries can significantly improve their visualization and subsequent boundary segmentation. We employed the Real-ESRGAN (Real-Enhanced Super-Resolution Generative Adversarial Network) algorithm.32 It is a variant of a super-resolution generative adversarial network designed for image upscaling, refining, and improving overall image quality by addressing noise, blur, pixelation, and compression artifacts. 
Real-ESRGAN was pretrained on natural images32 and fine-tuned using OCT image pairs. We used the raw UHROCT images enhanced with histogram equalization as the high-contrast images. The corresponding low-resolution counterparts were generated by the two-stage degradation process used in Real-ESRGAN, including smoothing with a Gaussian filter, resizing using bicubic and bilinear methods, contrast reduction, and adding Gaussian noise. Figure 4 shows the network architecture of Real-ESRGAN and the fine-tuning process. The model comprises two primary components: a generator leveraging a Residual in Residual Dense Block (RRDB) architecture and a U-Net–based discriminator. As shown in Figure 4, the generator is built with RRDBs, consisting of dense blocks with five convolutional layers each, followed by LeakyReLU activation. More information on the RRDB-based generator is available in a previous ESRGAN study.33 For layer boundary enhancement, we fine-tuned Real-ESRGAN on a subset of 2000 images randomly selected from 20 volumes. This fine-tuning process was carried out in PyTorch on a Tesla V100 DGXS 32-GB GPU (NVIDIA Corporation, USA). Fine-tuning used an Adam optimizer with a learning rate of 10⁻⁴ for 1000 epochs. The fine-tuned generator was then applied to our OCT images to generate the boundary-enhanced images. In addition, an ablation study was conducted to evaluate the impact of this component individually, as described in the Ablation Study Setup section.
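To make the pair-generation step concrete, the following is a minimal sketch of how degraded counterparts could be synthesized from histogram-equalized B-scans, assuming OpenCV and NumPy; the kernel size, scale factor, contrast scaling, and noise level shown here are illustrative assumptions rather than the exact settings used for fine-tuning.

```python
import cv2
import numpy as np

def make_training_pair(bscan: np.ndarray, scale: int = 2):
    """bscan: grayscale OCT B-scan as a uint8 array of shape (H, W)."""
    hr = cv2.equalizeHist(bscan)                    # high-contrast target

    # Degraded counterpart: blur -> bicubic downscale -> contrast cut -> noise.
    lr = cv2.GaussianBlur(hr, (5, 5), sigmaX=1.0)
    h, w = lr.shape
    lr = cv2.resize(lr, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    lr = cv2.convertScaleAbs(lr, alpha=0.8, beta=10)        # reduce contrast
    noise = np.random.normal(0.0, 5.0, lr.shape)            # additive Gaussian noise
    lr = np.clip(lr.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return lr, hr
```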
Figure 4. Process of fine-tuning Real-ESRGAN and its generator architecture. Conv, convolution; HR, high resolution; LR, low resolution; LReLU, leaky ReLU; β, residual scaling parameter.32,33
Segmentation of EP and ED Boundaries Using Two-Stage Graph Search
Figure 5 illustrates segmentation of the outer boundaries, EP and ED, using a two-stage graph search.19,23,34,35 Initially, the horizontal gradient Gx and vertical gradient Gy were generated by applying the horizontal filter \(h_n = [1\ 0\ {-1}]_n\) and the vertical filter \(v_n = h_n^T\), respectively, to produce the combined gradient image (Fig. 5A). The gradient values were then locally normalized to scale between 0 and 1. Given EP's well-defined structure and high intensity, a gradient filter with a small kernel size (n = 5) was applied to accentuate the EP edges. Conversely, a larger kernel size (n = 9) was used for the ED boundary, which was less pronounced and had a lower signal intensity at the periphery, to better highlight the subtle boundary edges. The segmentation process began by constructing a graph from EP's normalized gradient image. In this graph, each pixel represented a node, and the edge cost \(C_{ab}^G\) between nodes a and b was computed as an exponential function of their squared normalized gradient values:
\begin{eqnarray} C_{ab}^G = {{e}^{ - \sigma \left( G_{a}^2 + G_{b}^2 \right)}}\end{eqnarray}
(1)
where σ is a scaling constant. The Bellman–Ford algorithm36 was applied to find the shortest path, which corresponded to the EP boundary. Following this, a similar graph construction and search process was used for the ED boundary but with the normalized gradient image prepared for ED segmentation. The vicinity of the already segmented EP (spanning 50 pixels on either side) was occluded to prevent its influence on detecting the ED boundary. Finally, the two boundaries were ordered by their mean axial positions to ensure that the EP boundary lay above the ED boundary.
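As an illustration, a minimal first-stage search over the normalized gradient image might look as follows, assuming SciPy: it assigns the Equation 1 edge costs between each pixel and its three right-hand neighbors, adds a near-zero-cost virtual source and sink so the path can start and end at any row, and recovers the shortest path with the Bellman–Ford routine. The three-neighbor connectivity and the ε edge weight are implementation assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import bellman_ford

def shortest_boundary(G_norm: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """G_norm: gradient image normalized to [0, 1]; returns one row per column."""
    rows, cols = G_norm.shape
    n = rows * cols
    node = lambda r, c: r * cols + c
    g2 = G_norm ** 2
    W = lil_matrix((n + 2, n + 2))                  # +2: virtual source and sink
    for c in range(cols - 1):
        for r in range(rows):
            for dr in (-1, 0, 1):                   # right-up, right, right-down
                r2 = r + dr
                if 0 <= r2 < rows:                  # Eq. (1): low cost on strong edges
                    W[node(r, c), node(r2, c + 1)] = np.exp(
                        -sigma * (g2[r, c] + g2[r2, c + 1]))
    eps = 1e-9
    for r in range(rows):
        W[n, node(r, 0)] = eps                      # source -> any first-column row
        W[node(r, cols - 1), n + 1] = eps           # any last-column row -> sink
    _, pred = bellman_ford(W.tocsr(), directed=True, indices=n,
                           return_predecessors=True)
    boundary = np.full(cols, -1, dtype=int)
    v = pred[n + 1]                                  # backtrack from the sink
    while v != n and v >= 0:
        boundary[v % cols] = v // cols
        v = pred[v]
    return boundary
```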
Figure 5. Flowchart for segmenting EP and ED boundaries from OCT images. The sequence begins with normalized gradient computation, followed by a two-stage graph search to detect the EP and subsequently the ED boundary points.
To address defects and path curling, a second graph search was performed. For each boundary, the combined cost C was computed by integrating the directional cost CD (Equation 2) and the multiplier cost CM (Equation 3) into the gradient edge cost CG, as depicted in Figure 5C.
\begin{eqnarray}C_{ab}^D = \left| {{\rm{\Delta }}{{y}_{ref}} - {\rm{\Delta }}y} \right|\end{eqnarray}
(2)
where Δyref refers to the difference in the axial coordinates of a reference direction, and Δy is the difference in the axial coordinates of the vertices a and b.  
\begin{eqnarray}C_{ab}^M = \underbrace {1 \cdot \phi \left( x \right)}_{\text{central multiplier}} + \underbrace {\rho \cdot \left[ {1 - \phi \left( x \right)} \right]}_{\text{peripheral multiplier}}\end{eqnarray}
(3)
where x represents the transversal coordinate. The combined cost C, as shown in Figure 5C, guides the graph search to progress vertically and adhere to a second-degree polynomial shape. The EP and ED boundaries were delineated by finding the shortest paths using the Bellman–Ford algorithm.36 Further details on graph construction and cost functions are available in previous reports.19,23,34,35
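The text does not spell out the exact rule for combining the three cost terms, so the sketch below simply assumes the multiplier scales the sum of the gradient and directional costs; φ(x) is likewise assumed to be a binary central-region indicator, and ρ and the central fraction are placeholder values.

```python
import numpy as np

def combined_cost(g2_a: float, g2_b: float, dy: int, dy_ref: int,
                  x: int, width: int, sigma: float = 2.0,
                  rho: float = 5.0, central_frac: float = 0.3) -> float:
    cG = np.exp(-sigma * (g2_a + g2_b))        # Eq. (1): gradient edge cost
    cD = abs(dy_ref - dy)                      # Eq. (2): directional cost
    phi = 1.0 if abs(x - width / 2) < central_frac * width / 2 else 0.0
    cM = 1.0 * phi + rho * (1.0 - phi)         # Eq. (3): multiplier cost
    return cM * (cG + cD)                      # assumed combination rule
```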
Segmentation of Intralayer Boundaries
Figure 6 depicts the segmentation of three intralayer boundaries: BW, ST, and DM. Initially, regions of interest (ROIs) were determined by setting certain pixel ranges around the EP and ED boundaries. The ROI for BW and ST was set as the area from 20 pixels above to 100 pixels below the EP boundary. For DM, the ROI extended from 50 pixels above to 10 pixels below the ED boundary. The gradients were then calculated from the ROIs using a vertical kernel \(v_n = [{-1}\ 0\ 1]^T\) for BW and ST and a kernel \(v_n = [1\ 0\ {-1}]^T\) for DM. The images were then flattened according to EP and ED.23 The adaptive graph search was initialized by constructing the graphs on the normalized gradient images and determining the search windows for each layer, as sketched below. As shown in Figure 6, for BW and ST, we first identified the EP peak using the location of the segmented EP boundary on the intensity profile and subsequently identified the two peaks that follow it. The distances between these peaks and the EP peak were measured to determine the estimated target locations for BW and ST. A search space (±4 pixels) was symmetrically centered on these target estimates. For DM, as it is adjacent to the endothelium, we identified a peak near the ED/DM region. The search range for DM was defined using the peak's intensity width as the target estimate, with a symmetrical tolerance of ±3 pixels around it.
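A minimal sketch of the adaptive window selection, assuming SciPy's peak detection on the mean axial intensity profile of the flattened ROI; the prominence threshold is an assumption, while the ±4- and ±3-pixel tolerances follow the text.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

def bw_st_windows(profile: np.ndarray, ep_row: int, tol: int = 4):
    """profile: mean axial intensity of the flattened ROI; ep_row: EP position."""
    peaks, _ = find_peaks(profile[ep_row + 1:], prominence=0.05)
    bw_row, st_row = peaks[:2] + ep_row + 1          # first two peaks after EP
    return (bw_row - tol, bw_row + tol), (st_row - tol, st_row + tol)

def dm_window(profile: np.ndarray, ed_row: int, tol: int = 3):
    """Search range for DM from the width of the peak nearest the ED boundary."""
    peaks, _ = find_peaks(profile, prominence=0.05)
    dm_peak = peaks[np.argmin(np.abs(peaks - ed_row))]
    width = peak_widths(profile, np.array([dm_peak]), rel_height=0.5)[0][0]
    half = int(round(width / 2))
    return dm_peak - half - tol, dm_peak + half + tol
```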
Figure 6. Flowchart for segmenting intralayer boundaries: BW, ST, and DM from OCT images using adaptive graph search. ROI, region of interest.
After constructing graphs and defining the adaptive search spaces, a cost was computed by integrating the gradient edge cost CG, the directional edge cost CD, and the multiplier cost CM. Finally, the Bellman–Ford algorithm36 was applied to find the shortest paths for boundary delineation by minimizing the cost. 
Localized Bisquare Polynomial Curve Fitting
To correct segmentation errors in regions with low SNR, we employed bisquare curve fitting with a second-degree polynomial.37 First, we fitted a bisquare-weighted polynomial curve to the segmented boundary. The segmentation was then adjusted only within the low-SNR area based on the fitted curve. This approach helps prevent overcorrection and preserves the cornea's natural contours. A similar approach was used to mitigate the effects of central artifacts around the corneal apex in OCT images. These artifacts can obscure layer boundaries and lead the algorithm to misidentify artifact edges as actual boundaries, often causing the segmented layer to curve up or down, depending on intensity variations.19,22,23 In artifact-affected areas, we compared the segmented boundary to the fitted curve; if the difference exceeded a set threshold, the artifact-affected region was replaced with the fitted boundary. This method enhances overall segmentation accuracy by minimizing errors caused by central artifacts.
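The bisquare fit can be realized with iteratively reweighted least squares using Tukey's biweight function; a minimal sketch follows, in which the tuning constant 4.685 and the MAD-based scale estimate are standard choices rather than values reported here. The fitted curve would then replace the segmented boundary only in columns flagged as low SNR or artifact affected.

```python
import numpy as np

def bisquare_polyfit(x: np.ndarray, y: np.ndarray, degree: int = 2,
                     n_iter: int = 10, c: float = 4.685) -> np.ndarray:
    """Robust polynomial fit via iteratively reweighted least squares."""
    coeffs = np.polyfit(x, y, degree)               # ordinary LSQ to start
    for _ in range(n_iter):
        r = y - np.polyval(coeffs, x)               # residuals
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.clip(r / (c * s), -1.0, 1.0)
        w = (1.0 - u ** 2) ** 2                     # Tukey bisquare weights
        coeffs = np.polyfit(x, y, degree, w=np.sqrt(w))
    return coeffs
```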
3D Thickness Mapping
The thickness of each corneal layer was converted from pixels to millimeters. A two-dimensional ray-tracing algorithm was employed to correct the refraction errors.23,38 Refraction correction was applied at the segmented boundaries between successive layers using Snell's law. First, the cosine of the refraction angle θ2 was computed using the refractive indices of successive layers (1.401 for the epithelium and 1.376 for other layers).23,39 The direction of the refracted ray was then updated as  
\begin{eqnarray}\vec{R} = \frac{{{{n}_1}}}{{{{n}_2}}}\vec{I} + \left( {\frac{{{{n}_1}}}{{{{n}_2}}}\cos {{\theta }_1} - \cos {{\theta }_2}} \right)\vec{N} \end{eqnarray}
(4)
where n1 and n2 are the refractive indices of the incidence and refracted medium, respectively. \(\vec{I}\) represents the incident ray, and \(\vec{N}\) is the normal ray to the boundary point B1. With \(\vec{R}\), the boundary point B2 was corrected as \(B_2^{\prime}\) using  
\begin{eqnarray}B_2^{\prime} = {{B}_1} + G \cdot \vec{R}\end{eqnarray}
(5)
where G is the geometric distance between a boundary point and the corresponding point on the next boundary (e.g., B1 → B2). This correction process was applied iteratively to all segmented boundaries. The thickness of each corneal layer was then measured as the distance between two adjacent corrected boundaries. Subsequently, the thickness measurements were transformed from polar into Cartesian coordinates, followed by interpolation to construct continuous enface thickness maps. A bull's-eye representation was then superimposed on the thickness maps to show the thickness within each region.
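A compact sketch of the per-boundary correction of Equations 4 and 5, assuming 2D unit vectors with the boundary normal oriented against the incident ray; the example values at the end illustrate a crossing from the epithelium (n = 1.401) into the stroma (n = 1.376) quoted above.

```python
import numpy as np

def refract_ray(I: np.ndarray, N: np.ndarray, n1: float, n2: float) -> np.ndarray:
    """I: unit incident ray; N: unit boundary normal pointing against I."""
    cos1 = -float(np.dot(I, N))                    # cos(theta_1)
    sin2_sq = (n1 / n2) ** 2 * (1.0 - cos1 ** 2)   # Snell's law
    cos2 = np.sqrt(max(0.0, 1.0 - sin2_sq))        # cos(theta_2)
    return (n1 / n2) * I + ((n1 / n2) * cos1 - cos2) * N   # Eq. (4)

def correct_point(B1: np.ndarray, R: np.ndarray, G: float) -> np.ndarray:
    """Eq. (5): propagate the corrected ray over the geometric distance G."""
    return B1 + G * R

# Example: a downward ray at normal incidence passes through unchanged.
R = refract_ray(np.array([0.0, 1.0]), np.array([0.0, -1.0]), 1.401, 1.376)
```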
Ablation Study Setup
To evaluate the contributions of each component in our approach, we conducted an ablation study assessing the impact of Real-ESRGAN and the adaptive search technique on segmentation accuracy. The study involved testing different configurations to isolate the effects of each enhancement. Specifically, we tested (1) a baseline configuration using only a basic graph search without enhancements, (2) a graph search with Real-ESRGAN integration alone, (3) a graph search with adaptive search window integration alone, and (4) our full method combining both enhancements. Each configuration was evaluated based on the unsigned thickness error (UTE) across all corneal layers. 
Evaluation Metrics
We compared the results from our automated segmentation with the ground truth. The evaluation used both boundary-based metrics to assess the precision of boundary localization and region-based metrics to measure the segmentation accuracy within each corneal layer. The primary metric for evaluating boundary segmentation was the unsigned boundary error (UBE), which computes the mean absolute distance in micrometers between corresponding boundary points of the ground-truth and segmented images. Let L denote the total number of layer boundaries in an image F of size X rows and Y columns. Boundaries are labeled incrementally as 1 ≤ l ≤ L in accordance with their vertical positions. We define xp and xgt as the layer boundary points of the segmented image and the ground truth, respectively, where xl,y denotes the point at which the lth boundary passes through column y. Using these notations, the UBE of the lth boundary is defined as
\begin{eqnarray} UB{{E}_l}\left( {{{x}^p},{{x}^{gt}}} \right) = \frac{1}{Y}\mathop \sum \limits_{y = 1}^Y |x_{l,y}^p - x_{l,y}^{gt}|\end{eqnarray}
(6)
 
Signed boundary error (SBE) was also computed to evaluate the directional bias of the segmentation method, that is, whether it overestimates or underestimates the boundaries. SBE computes the average signed distance in micrometers between corresponding points of the segmented and ground-truth boundaries. A negative SBE indicates that a segmented interface was typically positioned above the corresponding manually segmented interface, and a positive SBE indicates that it was located below. The SBE for the lth boundary is defined as follows:
\begin{eqnarray} SB{{E}_l}\left( x^p,x^{gt} \right) = \frac{1}{Y}\mathop \sum \limits_{y = 1}^Y (x_{l,y}^p - x_{l,y}^{gt})\end{eqnarray}
(7)
 
For region-based evaluation, intersection over union (IoU) and the Dice score were calculated. Let Rl denote the layer region situated between the lth and the (l + 1)th boundaries, with \(R_l^p\) and \(R_l^{gt}\) referring to the sets of pixels in the segmented and ground-truth layers, respectively. The Dice coefficient measures the extent of overlap between \(R_l^p\) and \(R_l^{gt}\) as twice the intersection divided by the sum of their sizes, while the IoU quantifies the overlap as the intersection divided by the union. IoU and Dice can be formulated as follows:
\begin{eqnarray} IoU\left( {R_l^p, R_l^{gt}} \right) = \frac{{\left| {R_l^p \cap R_l^{gt}} \right|}}{{\left| {R_l^p \cup R_l^{gt}} \right|}}\end{eqnarray}
(8)
 
\begin{eqnarray} Dice\left( {R_l^p, R_l^{gt}} \right) = \frac{{2 \cdot \left| {R_l^p \cap R_l^{gt}} \right|}}{{\left| {R_l^p} \right| + \left| {R_l^{gt}} \right|}} \end{eqnarray}
(9)
 
To address clinical needs, UTE was also calculated. It measures the absolute difference in layer thickness between \(R_l^p\) and \(R_l^{gt}\). While IoU and Dice coefficient assess the overall segmentation accuracy, UTE pinpoints localized thickness errors. In contrast to IoU and Dice, UTE is insensitive to the absolute position of the boundaries as the thickness of Rl remains unchanged even if adjacent boundaries uniformly shift. UTE can be formulated as follows:  
\begin{eqnarray} && UT{{E}_l}\left( {{{x}^p},{{x}^{gt}}} \right) \nonumber \\ && = \frac{1}{Y}\mathop \sum \limits_{y = 1}^Y \big|\big(x_{l + 1,y}^{gt} - x_{l,y}^{gt}\big) - \big( {x_{l + 1,y}^p - x_{l,y}^p} \big)\big| \quad \end{eqnarray}
(10)
where \(x_{l,y}\) and \(x_{l+1,y}\) denote the positions of the lth and (l + 1)th boundaries at column y, whose difference gives the thickness of the layer region between them. To further evaluate the consistency and reproducibility of the algorithm, we segmented image volumes from two consecutive acquisitions, from which the enface thickness maps were generated. We then evaluated the regional average thickness values of every corneal layer. The coefficient of variation (CoV) and intraclass correlation coefficient (ICC) were used as the performance measures:
\begin{eqnarray} CoV = \frac{\sigma }{\mu } \times 100\% \end{eqnarray}
(11)
 
\begin{eqnarray} ICC = \frac{{\sigma _{between}^2}}{{\sigma _{between}^2 + \sigma _{within}^2}} \end{eqnarray}
(12)
where σ and μ are the standard deviation and mean, \(\sigma_{between}^2\) denotes the variance between groups, and \(\sigma_{within}^2\) the variance within groups.
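For reference, the metrics of Equations 6 to 12 reduce to a few lines of NumPy; this sketch assumes boundaries are given as per-column positions already scaled to micrometers, layer regions as boolean masks, and the simple variance-ratio form of the ICC.

```python
import numpy as np

def ube(xp: np.ndarray, xgt: np.ndarray) -> float:
    return float(np.mean(np.abs(xp - xgt)))            # Eq. (6)

def sbe(xp: np.ndarray, xgt: np.ndarray) -> float:
    return float(np.mean(xp - xgt))                    # Eq. (7)

def iou_dice(mask_p: np.ndarray, mask_gt: np.ndarray):
    inter = np.logical_and(mask_p, mask_gt).sum()      # Eqs. (8)-(9)
    union = np.logical_or(mask_p, mask_gt).sum()
    dice = 2.0 * inter / (mask_p.sum() + mask_gt.sum())
    return inter / union, dice

def ute(xp_top, xp_bot, xgt_top, xgt_bot) -> float:
    return float(np.mean(np.abs((xgt_bot - xgt_top)
                                - (xp_bot - xp_top)))) # Eq. (10)

def cov(values: np.ndarray) -> float:
    return float(np.std(values) / np.mean(values) * 100.0)   # Eq. (11), in %

def icc(groups) -> float:
    """Variance-ratio form of Eq. (12); groups: repeated measurements per eye."""
    means = np.array([np.mean(g) for g in groups])
    var_between = np.var(means, ddof=1)
    var_within = np.mean([np.var(g, ddof=1) for g in groups])
    return float(var_between / (var_between + var_within))
```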
Experimental Results
The EP and ED boundaries were segmented without the adaptive search owing to their well-defined intensity gradients; this was ∼40% faster and gave similar segmentation accuracy (average boundary errors of 2.7 ± 2.0 µm [EP] and 2.6 ± 2.0 µm [ED] with the classical graph search vs. 3.1 ± 1.9 µm [EP] and 2.5 ± 2.1 µm [ED] with the adaptive graph search). For intralayer segmentation, setting a proper graph search space is vital for accurate boundary segmentation. Previous studies relied on predefined fixed search spaces, which may not always yield accurate results because layer thickness varies across conditions. Figure 7 illustrates the effectiveness of an adaptive search space compared with a fixed one in handling healthy images as well as those with DMEK and FECD. The first three rows show results from fixed search spaces, denoted "ED-3," "ED-7," and "ED-12," which represent target estimates of the ED boundary with search spaces of 3 pixels centered on each estimate (i.e., 3, 7, and 12 pixels above the ED boundary, respectively). The fixed spaces either under- or overestimated DM thickness in various conditions due to their lack of search flexibility. In contrast, the adaptive search space performed better in segmenting both healthy and pathological DM. Because the ED-7 search space produced the smallest segmentation errors among the predefined options, all subsequent results for the conventional graph search use ED-7 for consistency. Figure 8 compares the segmentation outcomes of the conventional graph search and our method against the ground truths. While both methods produce good segmentation results for healthy images or those without significant structural changes, the performance of the conventional graph search is limited on pathological images. In contrast, our proposed method demonstrates its robustness by dynamically adjusting to these variations, resulting in more accurate segmentation of pathological conditions. As shown in Figure 8, this capability is evident in the accurate segmentation of areas with focal thickening and guttae at the ED and DM boundaries.
Figure 7. Segmentation results for DM using the fixed search spaces. The top row displays the raw OCT images, followed by segmentation results using search spaces ED-3, ED-7, and ED-12, where ED represents the segmented ED boundary data points. This illustrates the effects of undersegmentation and oversegmentation across a range of image conditions.
Figure 8. Comparison of the corneal segmentation results. These examples demonstrate the improved segmentation of the proposed method, particularly in images with pathological features.
The boundary segmentation errors were quantitatively evaluated using UBE and SBE. A smaller UBE value indicates higher precision; our method achieved UBEs of 2.7 ± 2.0 µm, 2.6 ± 2.0 µm, and 3.9 ± 2.6 µm for EP, ED, and ST, respectively, indicating small deviations from the manually delineated boundaries. Relatively larger deviations were detected at the thinner boundaries of BW (UBE 4.0 ± 3.7 µm) and DM (UBE 2.6 ± 1.7 µm). For comparison, we tested different fixed search spaces for each intracorneal boundary. For the DM boundary, the tested values were ED-3, ED-7, and ED-12, of which ED-7 had the lowest UBE. For the BW layer, we evaluated EP+20, EP+22, EP+24, EP+26, and EP+28, and EP+22 performed best. For the ST layer, the tested values were BW+2, BW+4, BW+6, and BW+8, and BW+6 had the lowest UBE. The best-performing fixed search space for each layer is compared with the adaptive method in Table 2. The adaptive method consistently achieved lower UBEs and SBEs across all intracorneal layers, and t-tests yielded P values <0.0001 for all layers, indicating that the improvements with the adaptive method are statistically significant. IoU and Dice scores are summarized in Table 3. The results indicate accurate segmentation of the whole cornea and the ST layer. The EP layer also obtained high segmentation accuracy, with an IoU >88% and Dice >93%. Thinner layers, including the DM and BW layers, exhibited lower IoU scores of 63% and 59%, along with Dice scores of 76% and 71%, respectively.
Table 2. Boundary-Wise Evaluation of Three Intracorneal Boundaries Showing UBE (Mean Absolute Distance in Micrometers Between Ground-Truth and Segmented Boundary Points) and SBE (Direction of Segmentation Bias, With Positive Values for Overestimations and Negative Values for Underestimations, in Micrometers)
Table 3. Layer-Wise Evaluation: IoU and Dice Score for Total Cornea and Four Corneal Layers
The ablation study presented in Table 4 evaluates the impact of integrating Real-ESRGAN and adaptive search windows on UTE. The outcomes indicate consistent improvement across all layers when these methods are integrated, compared with the conventional graph search. A notable improvement was observed in the ST layer by integrating either Real-ESRGAN or adaptive search windows. With Real-ESRGAN, the EP, BW, and DM layers showed marginal improvement. The use of adaptive search windows improved accuracy more than twofold for the total cornea, EP, and ST layers and marginally improved the BW and ED/DM complex. When comparing the conventional graph search with our joint approach of Real-ESRGAN and adaptive search windows, significant improvements in UTE were observed across all corneal layers. The most notable enhancement was a nearly fivefold improvement in the ST layer, where the UTE decreased from 20.4 µm to 4.3 µm. In the EP layer, the UTE was reduced from 12.0 µm to 4.4 µm, a nearly threefold improvement. In the BW and DM layers, accuracy improved approximately twofold. Figure 9 shows the gradient images of the flattened layers and the segmented boundaries before and after enhancement with Real-ESRGAN. As shown, the gradients of the Real-ESRGAN–enhanced images had intensified boundaries and resulted in better boundary detection. Quantitatively, the peak signal-to-noise ratio (PSNR) of the gradient image changed only slightly (from −20 dB to −21 dB) after enhancement. To validate the impact of Real-ESRGAN, we compared the performance with and without its application in Table 4. With enhancement, UTE consistently improved across all corneal layers.
Table 4. Ablation Study on the Impact of Real-ESRGAN and Adaptive Search on UTE
Figure 9. Comparison of gradient and segmented layer images with and without Real-ESRGAN enhancement. (A, B) Gradient image of flattened EP (red), BW (green), ST (cyan), DM (blue), and ED (yellow) layers for boundary detection without (left) and with (right) Real-ESRGAN enhancement. (C) Segmented layer images with respective boundaries overlaid.
In Figure 10, the Bland–Altman plots demonstrate the degree of agreement between the ground-truth measurements and our method's measurements for the total cornea, EP, BW, ST, and ED/DM. For the total cornea, a minimal mean bias with narrow limits indicates good agreement between the two methods for measuring total corneal thickness. The plots for the EP and ST layers have narrower limits, suggesting more consistent measurements, whereas the BW and ED/DM plots show broader limits, indicating higher variability and less measurement consistency. The plots exhibit greater variability and bias in the pathological conditions (DMEK and FECD) than in healthy conditions. To further quantify these findings, the mean difference ± standard deviation (SD) values were derived from the Bland–Altman plots to compare our adaptive method with the regular graph search. For the total cornea, the regular graph search achieved −0.2 ± 11.6 µm, while our method reduced the variability to 0.8 ± 3.1 µm. For the BW layer, the mean difference was reduced from 2.1 ± 4.3 µm (graph search) to −0.2 ± 2.7 µm with our method. In the EP, ST, and ED/DM layers, the adaptive graph search significantly improved thickness predictions compared with the regular graph search, reducing the mean difference ± SD from −11.5 ± 5.7 µm to −1.8 ± 5.0 µm (EP), from 16.2 ± 14.6 µm to 1.3 ± 4.5 µm (ST), and from −7.1 ± 3.0 µm to 1.6 ± 2.1 µm (ED/DM complex).
Figure 10. Bland–Altman plots for comparing the corneal layer thickness between ground truth and our measurements. Each plot corresponds to a different corneal layer: total cornea, EP, BW, ST, and ED/DM complex, with the mean ± 1.96 standard deviation. Mean Diff, mean difference.
We generated enface thickness maps for individual layers. Figure 11 shows the thickness maps of all corneal layers in healthy, DMEK, and FECD eyes. The top row presents raw OCT B-scans. The subsequent rows display total corneal thickness, EP thickness, BW thickness, ST thickness, and ED/DM thickness maps, respectively. Healthy corneas exhibit uniform thickness across all layers. In contrast, DMEK eyes show slight variations in thickness, particularly in the total corneal and stromal thickness maps. FECD cases reveal significant focal thickening in the ED/DM complex, indicating pathological changes characteristic of FECD. The thickness maps are annotated with cardinal directions (S, superior; N, nasal; T, temporal; I, inferior) for spatial reference, and the color bars provide the scale for thickness measurements in micrometers. 
Figure 11. Enface thickness maps of healthy, DMEK, and FECD eyes. I, inferior; N, nasal; S, superior; T, temporal.
Repeatability tests demonstrated high consistency in measurements with variations under 3% for most layers, except for the ED/DM layer, which had less than 16% variation, as shown in Figure 12. The average ICC values calculated from each layer indicated overall high repeatability for the whole cornea and ST (ICC = 0.97), but lower ICC values were obtained from the EP/BW (ICC = 0.64) and ED/DM (ICC = 0.53) layers, especially under FECD conditions. 
Figure 12. Bar plots of coefficient of variation for repeatability of thickness measurements between two consecutive acquisitions for corneal layers: (A) total cornea, (B) EP/BW complex, (C) ST layer, and (D) ED/DM complex.
Discussion
In this article, we proposed a fully automatic and accurate algorithm that can segment all corneal layers from ultra-high-resolution OCT B-scans. The proposed algorithm was tested on a range of OCT images from healthy control, FECD, and DMEK participants. Our method consistently outperformed the conventional graph search across all corneal layer segmentations (Table 4). The algorithm's ability to segment DM accurately and adaptively in both healthy and pathological conditions addresses a critical gap in existing studies of corneal layer segmentation. This focus on DM segmentation is particularly relevant for advancing the diagnosis and monitoring of FECD and DMEK, where DM is the primary site of pathological changes.
Prior to segmentation, our study integrated Real-ESRGAN to address the challenges presented by low-SNR regions and less-defined layer boundaries. Generative adversarial networks for image super-resolution (SRGANs) such as Real-ESRGAN are broadly used to upscale image resolution and improve the visibility of anatomic details by enhancing contrast, color, and brightness in medical images.40–44 Previous studies, such as those by Ha et al.43 and Sun and Ng,44 demonstrated the effectiveness of SRGANs in enhancing anatomic details in fundus images and in mitigating blooming artifacts in coronary computed tomography angiography, respectively. In our research, the images produced by the fine-tuned Real-ESRGAN showed improved contrast and clearer layer boundaries. These enhancements aid the graph search by intensifying the gradients, which are a critical part of the search process. Our method demonstrated improved segmentation performance, particularly for images with focal thickening and guttae. However, it is important to note that our method does not effectively compensate for artifacts such as eyelash shadows or saturation artifacts, likely due to the model's limited exposure to such data. In addition, the effect of speckle noise was not evaluated in this study. Therefore, future work should focus on integrating artifact-specific data augmentation and OCT-specific noise models during fine-tuning to minimize these effects on segmentation and further refine the enhancement process.
Due to the scarcity of labeled data in our study, it was not feasible to employ deep learning–based segmentation, which necessitates large amounts of high-quality and diverse labeled data for model training. In contrast, our adaptive graph search method needs no labeled training data, making it suitable for limited labeled datasets and for data from lab-built prototype devices. It also improves on previous graph search methods by accounting for thickness variations and structural layer changes through the adaptive search space. Because the graph search is rooted in a conventional image-processing algorithm and does not require labeled data, it can be easily reproduced in clinical settings. Furthermore, it can be used as a tool to generate large amounts of ground-truth labels for future deep learning research in corneal layer segmentation.
Recently, semi-supervised learning has emerged as an alternative for enhancing medical image segmentation with deep learning. This approach is particularly useful when there is a limited amount of labeled data and a large pool of unlabeled data,45 and it could ease the burden of extensive manual annotation. Additionally, the advent of foundation models such as Meta AI's Segment Anything Model (SAM)46 and its adaptation for medical image segmentation, MedSAM,47 has revolutionized image segmentation. MedSAM is reported to perform medical image segmentation across different domains and modalities and has demonstrated performance comparable to models trained specifically on a given domain or modality.47,48 These foundation models can be fine-tuned for new segmentation tasks in less-represented modalities. Furthermore, instead of laborious pixel-by-pixel manual annotation, they could be used as interactive tools to generate ground-truth labels through prompts such as points or bounding boxes, reducing both the time and expertise required for manual data labeling in future research.
Limitations
It is important to acknowledge that while the proposed method shows promising results in segmenting corneal layers in healthy, DMEK, and most FECD cases, the presence of severe membrane deformation or detachment may degrade the segmentation accuracy. In addition, due to the unavailability of a publicly accessible ultra-high-resolution OCT corneal dataset, we were unable to evaluate our algorithm's efficacy across a broader range of pathological conditions or to directly compare it with existing studies. To address these limitations, future research should aim to incorporate more diverse datasets that cover a broader range of corneal diseases to enhance the robustness and generalizability of the algorithm. 
Conclusions
In this article, we propose a fully automatic algorithm for segmenting and mapping the thickness of five corneal layers using ultra-high axial resolution OCT images in both healthy and pathological conditions like FECD and DMEK. By integrating Real-ESRGAN with adaptive graph search, our method achieves significant improvement in segmentation accuracy across all corneal layers when compared to existing graph search methods. Specifically, our method demonstrates a significant advantage in segmenting the ultra-fine DM layer in pathological corneas. This advancement is expected to greatly aid in the diagnosis and evaluation of FECD and DMEK. Future work will aim to extend the proposed method to support the segmentation of corneal layers in other pathological conditions and validate the method on a more extensive and diverse set of pathological eyes. The proposed method can be a potential tool to generate accurate corneal layer segmentation and promote the analysis of corneal layer thickness, which is beneficial for diagnosing and monitoring corneal diseases. Additionally, it could be used to generate ground-truth data for deep learning–based approaches. 
Acknowledgments
Supported by grants from the National Medical Research Council (OFLCG/004c/2018-00; MOH-000249-00; MOH-000647-00; MOH-001001-00; MOH-001015-00; MOH-000500-00; MOH-000707-00; MOH-001072-06; MOH-001286-00); National Research Foundation Singapore (NRF2019-THE002-0006 and NRF-CRP24-2020-0001); Agency for Science, Technology and Research (A20H4b0141); and the Singapore Eye Research Institute & Nanyang Technological University (SERI-NTU Advanced Ocular Engineering (STANCE) Program). 
Disclosure: K.Y. Win, None; J.W.H. Fai, None; W.Q. Ying, None; C.C.S. Qi, None; J. Chua, None; D. Wong, None; M. Ang, None; L. Schmetterer, None; B. Tan, None 
References
Ang M, Baskaran M, Werkmeister RM, et al. Anterior segment optical coherence tomography. Prog Retin Eye Res. 2018; 66: 132–156. [CrossRef] [PubMed]
Meek KM, Knupp C. Corneal structure and transparency. Prog Retin Eye Res. 2015; 49: 1–16. [CrossRef] [PubMed]
Xu Z, Jiang J, Yang C, et al. Value of corneal epithelial and Bowman's layer vertical thickness profiles generated by UHR-OCT for sub-clinical keratoconus diagnosis. Sci Rep. 2016; 6(1): 31550. [CrossRef] [PubMed]
Abou Shousha M, Perez VL, Wang J, et al. Use of ultra-high-resolution optical coherence tomography to detect in vivo characteristics of Descemet's membrane in Fuchs’ dystrophy. Ophthalmology. 2010; 117(6): 1220–1227. [CrossRef] [PubMed]
Abou Shousha M, Perez VL, Canto APFS, et al. The use of Bowman's layer vertical topographic thickness map in the diagnosis of keratoconus. Ophthalmology. 2014; 121(5): 988–993. [CrossRef] [PubMed]
Eleiwa TK, Cook JC, Elsawy AS, et al. Diagnostic performance of three-dimensional endothelium/Descemet membrane complex thickness maps in active corneal graft rejection. Am J Ophthalmol. 2020; 210: 48–58. [CrossRef] [PubMed]
Eleiwa T, Elsawy A, Tolba M, Feuer W, Yoo S, Shousha MA. Diagnostic performance of 3-dimensional thickness of the endothelium–Descemet complex in Fuchs’ endothelial cell corneal dystrophy. Ophthalmology. 2020; 127(7): 874–887. [CrossRef] [PubMed]
Heslinga FG, Lucassen RT, van den Berg MA, et al. Corneal pachymetry by as-oct after Descemet's membrane endothelial keratoplasty. Sci Rep. 2021; 11(1): 13976. [CrossRef] [PubMed]
Huang D, Swanson EA, Lin CP, et al. Optical coherence tomography. Science. 1991; 254(5035): 1178–1181. [CrossRef] [PubMed]
dos Santos VA, Schmetterer L, Gröschl M, Garhöfer G, Werkmeister RM. In vivo tear film thickness measurement and tear film dynamics visualization using spectral domain OCT and an efficient delay estimator. In: Izatt JA, Fujimoto JG, Tuchin VV, eds. Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XX. Vol. 9697. San Francisco, California, USA: SPIE; 2016: 12–22.
Yao X, Devarajan K, Werkmeister RM, et al. In vivo corneal endothelium imaging using ultrahigh resolution OCT. Biomed Optics Express. 2019; 10(11): 5675–5686. [CrossRef]
Werkmeister RM, Sapeta S, Schmidl D, et al. Ultrahigh-resolution OCT imaging of the human cornea. Biomed Optics Express. 2017; 8(2): 1221–1239. [CrossRef]
Tan B, Hosseinaee Z, Han L, Kralj O, Sorbara L, Bizheva K. 250 kHz, 1.5 µm resolution SD-OCT for in-vivo cellular imaging of the human cornea. Biomed Optics Express. 2018; 9(12): 6569–6583. [CrossRef]
Han L, Tan B, Hosseinaee Z, Chen LK, Hileeto D, Bizheva K. Line-scanning SD-OCT for in-vivo, non-contact, volumetric, cellular resolution imaging of the human cornea and limbus. Biomed Optics Express. 2022; 13(7): 4007–4020. [CrossRef]
Bizheva K, Tan B, MacLelan B, et al. Sub-micrometer axial resolution OCT for in-vivo imaging of the cellular structure of healthy and keratoconic human corneas. Biomed Optics Express. 2017; 8(2): 800–812. [CrossRef]
Waring GO, Bourne WM, Edelhauser HF, Kenyon KR. The corneal endothelium: normal and pathologic structure and function. Ophthalmology. 1982; 89(6): 531–590. [CrossRef] [PubMed]
Eichel JA, Mishra AK, Clausi DA, Fieguth PW, Bizheva KK. A novel algorithm for extraction of the layers of the cornea. In Proceedings of the Canadian Conference on Computer and Robot Vision (CRV ’09). Kelowna, Canada: IEEE; 2009: 313–320.
Eichel JA, Bizheva KK, Clausi DA, Fieguth PW. Automated 3D reconstruction and segmentation from optical coherence tomography. Lecture Notes Comput Sci. 2010; 6313: 44–57. [CrossRef]
LaRocca F, Chiu SJ, McNabb RP, Kuo AN, Izatt JA, Farsiu S. Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming. Biomed Optics Express. 2011; 2: 1524–1538. [CrossRef]
Jahromi MK, Kafieh R, Rabbani H, et al. An automatic algorithm for segmentation of the boundaries of corneal layers in optical coherence tomography images using a Gaussian mixture model. J Med Signals Sensors. 2014; 4: 171–180.
Zhang TQ, Elazab A, Wang XG, et al. A novel technique for robust and fast segmentation of corneal layer interfaces based on spectral-domain optical coherence tomography imaging. IEEE Access. 2017; 5: 10352–10363. [CrossRef]
Elsawy A, Abdel-Mottaleb M, Sayed IO, et al. Automatic segmentation of corneal microlayers on optical coherence tomography images. Transl Vis Sci Technol. 2019; 8: 39. [CrossRef] [PubMed]
Elsawy A, Gregori G, Eleiwa T, Abdel-Mottaleb M, Shousha MA. Pathological-corneas layer segmentation and thickness measurement in OCT images. Transl Vis Sci Technol. 2020; 9(11): 24. [CrossRef] [PubMed]
Mathai TS, Lathrop KL, Galeotti J. Learning to segment corneal tissue interfaces in OCT images. In: Proceedings of the IEEE International Symposium on Biomedical Imaging. Venice, Italy: IEEE; 2019: 1432–1436.
Dos Santos VA, Schmetterer L, Stegmann H, et al. CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomed Optics Express. 2019; 10: 622–641. [CrossRef]
Wang L, Shen M, Chang Q, et al. Automated delineation of corneal layers on OCT images using a boundary-guided CNN. Pattern Recognition. 2021; 120: 108158. [CrossRef] [PubMed]
Elsawy A, Abdel-Mottaleb M. PIPE-Net: a pyramidal-input-parallel-encoding network for the segmentation of corneal layer interfaces in OCT images. Comput Biol Med. 2022; 147: 105595. [CrossRef] [PubMed]
Wang L, Shen M, Shi C, et al. EE-Net: an edge-enhanced deep learning network for jointly identifying corneal micro-layers from optical coherence tomography. Biomed Signal Process Control. 2022; 71: 103213. [CrossRef]
Li Y, Shekhar R, Huang D. Corneal pachymetry mapping with high-speed optical coherence tomography. Ophthalmology. 2006; 113(5): 792–799. [CrossRef] [PubMed]
Graglia F, Mari JL, Baikoff G, Sequeira J. Contour detection of the cornea from OCT radial images. In: 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Lyon, France: IEEE; 2007.
Li Y, Shekhar R, Huang D. Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images. In: Sonka M, Michael Fitzpatrick J, eds. Proceedings of SPIE—The International Society for Optical Engineering. Vol. 4684-18. San Diego, California, USA: SPIE; 2002.
Xie L, Dong C, Shan Y. Real-ESRGAN: training real-world blind super-resolution with pure synthetic data. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. IEEE; 2021: 1905–1914.
Wang X, Yu K, Wu S, et al. ESRGAN: enhanced super-resolution generative adversarial networks. In: Leal-Taixé L, Roth S, eds. Proceedings of the European Conference on Computer Vision (ECCV) Workshops. Munich, Germany: Springer-Verlag; 2018.
Chiu SJ, Li XT, Nicholas P, Toth CA, Izatt JA, Farsiu S. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Optics Express. 2010; 18(18): 19413–19428. [CrossRef] [PubMed]
West DB . Introduction to Graph Theory. Upper Saddle River, NJ: Prentice Hall; 1996.
Bellman R . On a routing problem. Q Appl Math. 1958; 16(1): 87–90. [CrossRef]
Holland PW, Welsch RE. Robust regression using iteratively reweighted least-squares. Commun Stat Theory Methods. 1977; A6: 813–827.
Glassner AS . An Introduction to Ray Tracing. New York, NY: Elsevier; 1989.
Yadav R, Kottaiyan R, Ahmad K, Yoon G. Epithelium and Bowman's layer thickness and light scatter in keratoconic cornea evaluated using ultrahigh resolution optical coherence tomography. J Biomed Optics. 2012; 17(11): 116010. [CrossRef]
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vis. 2022; 9(1): 6. [CrossRef]
Aghelan A, Rouhani M. Fine-tuned generative adversarial network-based model for medical image super-resolution. arXiv preprint arXiv:2211.00577. 2022, https://doi.org/10.48550/arXiv.2211.00577. Accessed February 25, 2024.
Mahapatra D, Bozorgtabar B, Garnavi R. Image super-resolution using progressive generative adversarial networks for medical image analysis. Comput Med Imaging Graph. 2019; 71: 30–39. [CrossRef]
Ha A, Sun S, Kim YK, et al. Deep-learning-based enhanced optic-disc photography. PLoS One. 2020; 15(10): e0239913. [CrossRef] [PubMed]
Sun Z, Ng CK. Finetuned super-resolution generative adversarial network (artificial intelligence) model for calcium deblooming in coronary computed tomography angiography. J Pers Med. 2022; 12(9): 1354. [CrossRef] [PubMed]
Jiao R, Zhang Y, Ding L, et al. Learning with limited annotations: a survey on deep semi-supervised learning for medical image segmentation. Comput Biol Med. 2023; 169: 107840. [CrossRef] [PubMed]
Kirillov A, Mintun E, Ravi N, et al. Segment anything. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. Paris, France: IEEE; 2023: 4015–4026.
Ma J, He Y, Li F, Han L, You C, Wang B. Segment anything in medical images. Nat Commun. 2024; 15(1): 654. [CrossRef] [PubMed]
Huang Y, Yang X, Liu L, et al. Segment anything model for medical images? Med Image Anal. 2024; 92: 103061. [CrossRef] [PubMed]
Figure 1. Illustration of corneal anatomy (left) and a cross-sectional ultra-high-resolution OCT image of a FECD eye (right), where five layers are visible.
Figure 2. OCT images of a healthy cornea (left) and one affected by FECD (right). Magnified areas highlight the endothelium layer, showing focal thickening in the FECD-affected cornea compared to the healthy cornea. Notably, both images exhibit central artifacts, and the healthy cornea image shows a low-SNR region at the bottom.
Figure 3. Proposed pipeline for corneal layer segmentation and thickness map generation. Dashed lines represent individual processes, while solid lines indicate the sequence of overarching methods associated with each process.
Figure 4. Process of fine-tuning Real-ESRGAN and its generator architecture. Conv, convolution; HR, high resolution; LR, low resolution; LReLU, leaky ReLU; β, residual scaling parameter.32,33
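For readers who want a concrete picture of the residual scaling parameter β in the generator, the following is a minimal PyTorch sketch of a residual dense block with residual scaling; the channel widths, the number of convolutions, and β = 0.2 are illustrative assumptions, not the exact Real-ESRGAN configuration.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Simplified dense block with residual scaling (beta), loosely
    following the ESRGAN/Real-ESRGAN generator design. Channel counts
    and depth here are illustrative, not the published configuration."""

    def __init__(self, channels: int = 64, growth: int = 32, beta: float = 0.2):
        super().__init__()
        self.beta = beta  # residual scaling parameter (the beta in Figure 4)
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=1)
        self.conv3 = nn.Conv2d(channels + 2 * growth, channels, 3, padding=1)
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.lrelu(self.conv1(x))
        f2 = self.lrelu(self.conv2(torch.cat([x, f1], dim=1)))
        out = self.conv3(torch.cat([x, f1, f2], dim=1))
        return x + self.beta * out  # scale the residual before the skip connection
```

Scaling the residual keeps the fine-tuned generator close to its pretrained behavior early in training, which is one reason this design suits transfer to a new imaging domain.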
Figure 5. Flowchart for segmenting EP and ED boundaries from OCT images. The sequence begins with normalized gradient computation, followed by a two-stage graph search to detect EP and subsequently ED boundary points.
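To make the graph search concrete, here is a minimal sketch of a shortest-path boundary trace over a normalized-gradient cost image, in the spirit of the graph-based approach cited above. In the two-stage scheme, EP is traced first and ED is then searched in the region below it; the function name, the cost definition, and the ±2-row column-to-column connectivity here are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def trace_boundary(image: np.ndarray, max_jump: int = 2) -> np.ndarray:
    """Trace one layer boundary as a minimum-cost left-to-right path.

    Node costs are derived from normalized vertical gradients, so strong
    edges become cheap nodes; dynamic programming then finds the cheapest
    path across columns, allowing +/-max_jump rows between neighbors.
    """
    grad = np.gradient(image.astype(float), axis=0)
    norm = (grad - grad.min()) / (np.ptp(grad) + 1e-9)  # normalize to [0, 1]
    cost = 1.0 - norm                                   # strong edges -> low cost
    rows, cols = cost.shape
    acc = cost.copy()                                   # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)            # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev.min()
    # Backtrack from the cheapest node in the last column.
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path  # boundary row index for each column
```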
Figure 6. Flowchart for segmenting the intralayer boundaries (BW, ST, and DM) from OCT images using adaptive graph search. ROI, region of interest.
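The adaptive step can be pictured as confining the node costs to a per-column band between two already-segmented boundaries before rerunning the search above. The helper below is a hypothetical sketch of that masking step, not the paper's exact implementation; in the actual method the band is adapted to the image rather than fixed (contrast Figure 7).

```python
import numpy as np

def restrict_to_roi(cost: np.ndarray, upper: np.ndarray, lower: np.ndarray) -> np.ndarray:
    """Confine a node-cost image to the band between two known boundaries.

    upper/lower give per-column row indices (e.g., the segmented EP and
    ED boundaries); nodes outside the band receive infinite cost, so the
    graph search from the previous sketch cannot leave the ROI.
    """
    roi_cost = np.full_like(cost, np.inf)
    for c in range(cost.shape[1]):
        top, bot = int(upper[c]), int(lower[c])
        roi_cost[top:bot + 1, c] = cost[top:bot + 1, c]
    return roi_cost
```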
Figure 7. Segmentation results for DM using the fixed search spaces. The top row displays the raw OCT images, followed by segmentation results using search spaces ED-3, ED-7, and ED-12, where ED represents the segmented ED boundary data points. This illustrates the effects of undersegmentation and oversegmentation across a range of image conditions.
Figure 8. Comparison of the corneal segmentation results. These examples demonstrate the improved segmentation of the proposed method, particularly in images with pathological features.
Figure 9. Comparison of gradient and segmented layer images with and without Real-ESRGAN enhancement. (A, B) Gradient image of flattened EP (red), BW (green), ST (cyan), DM (blue), and ED (yellow) layers for boundary detection without (left) and with (right) Real-ESRGAN enhancement. (C) Segmented layer images with respective boundaries overlaid.
Figure 10. Bland–Altman plots comparing the corneal layer thickness between the ground truth and our measurements. Each plot corresponds to a different corneal layer: total cornea, EP, BW, ST, and ED/DM complex, with the mean difference ± 1.96 standard deviations. Mean Diff, mean difference.
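For reference, the statistics behind these plots reduce to a few lines of NumPy; the sketch below computes the bias and the 95% limits of agreement (mean difference ± 1.96 SD). The function name and interface are illustrative.

```python
import numpy as np

def bland_altman(ground_truth: np.ndarray, measured: np.ndarray):
    """Bland-Altman statistics: per-pair means and differences, the mean
    difference (bias), and the 95% limits of agreement."""
    gt = np.asarray(ground_truth, dtype=float)
    m = np.asarray(measured, dtype=float)
    means = (gt + m) / 2.0          # x-axis of the plot
    diffs = m - gt                  # y-axis of the plot
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    return means, diffs, bias, limits
```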
Figure 11. En face thickness maps of healthy, DMEK, and FECD eyes. I, inferior; N, nasal; S, superior; T, temporal.
Figure 12. Bar plots of the coefficient of variation for repeatability of thickness measurements between two consecutive acquisitions for corneal layers: (A) total cornea, (B) EP/BW complex, (C) ST layer, and (D) ED/DM complex.
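As a worked example of the repeatability metric, the sketch below computes a coefficient of variation from two consecutive acquisitions. The exact formulation here (per-eye SD of the two repeats over their mean, averaged across eyes) is an assumption, as conventions vary.

```python
import numpy as np

def coefficient_of_variation(scan1: np.ndarray, scan2: np.ndarray) -> float:
    """Repeatability CoV (%) across two consecutive acquisitions.

    scan1/scan2 hold one thickness value per eye. One common
    formulation (assumed here, not necessarily the paper's exact
    definition): per-eye sample SD of the two repeats divided by their
    mean, averaged over eyes and expressed as a percentage.
    """
    pair = np.stack([scan1, scan2], axis=0).astype(float)
    per_eye_cv = pair.std(axis=0, ddof=1) / pair.mean(axis=0)
    return float(per_eye_cv.mean() * 100.0)
```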
Table 1. Overview of the Related Work on Corneal Layer Segmentation
Table 2. Boundary-Wise Evaluation of Three Intracorneal Boundaries Showing UBE (Mean Absolute Distance in Micrometers Between Ground Truth and Segmented Boundary Points) and SBE (Direction of Segmentation Bias, with Positive Values for Overestimations and Negative Values for Underestimations, in Micrometers)
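The two boundary metrics can be expressed compactly; the sketch below computes UBE and SBE from per-column boundary positions, assuming a known axial pixel size. Names and interface are illustrative.

```python
import numpy as np

def boundary_errors(gt_rows: np.ndarray, seg_rows: np.ndarray, um_per_px: float):
    """Unsigned (UBE) and signed (SBE) boundary errors in micrometers.

    gt_rows/seg_rows are per-column boundary row indices; um_per_px is
    the axial pixel size. Positive SBE indicates overestimation and
    negative SBE underestimation, matching the Table 2 sign convention.
    """
    diff_um = (seg_rows.astype(float) - gt_rows.astype(float)) * um_per_px
    ube = np.abs(diff_um).mean()  # mean absolute distance
    sbe = diff_um.mean()          # mean signed distance (bias direction)
    return ube, sbe
```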
Table 3. Layer-Wise Evaluation: IoU and Dice Score for Total Cornea and Four Corneal Layers
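Both layer-wise metrics follow directly from binary mask overlap; the sketch below is a minimal implementation for a single layer mask.

```python
import numpy as np

def iou_dice(pred: np.ndarray, truth: np.ndarray):
    """IoU and Dice score for one binary layer mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0    # empty masks count as perfect overlap
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```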
Table 4. Ablation Study on the Impact of Real-ESRGAN and Adaptive Search on UTE