Open Access
Articles  |   May 2019
ReLayer: a Free, Online Tool for Extracting Retinal Thickness From Cross-Platform OCT Images
Author Affiliations & Notes
  • Giovanni Ometto
    Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
  • Ismail Moghul
    UCL Cancer Institute, University College London, London, UK
  • Giovanni Montesano
    Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
    University of Milan, School of Ophthalmology, Milan, Italy
    Moorfields Eye Hospital, London, UK
  • Andrew Hunter
    School of Computer Science, University of Lincoln, Lincoln, UK
  • Nikolas Pontikos
    Moorfields Eye Hospital, London, UK
    Institute of Ophthalmology, University College London, London, UK
  • Pete R. Jones
    Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
    Moorfields Eye Hospital, London, UK
    Institute of Ophthalmology, University College London, London, UK
  • Pearse A. Keane
    Moorfields Eye Hospital, London, UK
    Institute of Ophthalmology, University College London, London, UK
    NIHR Biomedical Research Centre (Moorfields Eye Hospital NHS Foundation Trust/University College London), London, UK
  • Xiaoxuan Liu
    Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
    Centre for Translational Inflammation Research, College of Medical and Dental Sciences, University of Birmingham, Edgbaston, Birmingham, UK
  • Alastair K. Denniston
    NIHR Biomedical Research Centre (Moorfields Eye Hospital NHS Foundation Trust/University College London), London, UK
    Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
    Centre for Translational Inflammation Research, College of Medical and Dental Sciences, University of Birmingham, Edgbaston, Birmingham, UK
  • David P. Crabb
    Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
  • Correspondence: Giovanni Ometto, Division of Optometry and Visual Science, School of Health Sciences, City, University of London, Northampton Square, Clerkenwell, London EC1V 0HB, UK. e-mail: giovanni.ometto@city.ac.uk 
Translational Vision Science & Technology May 2019, Vol.8, 25. doi:https://doi.org/10.1167/tvst.8.3.25
Abstract

Purpose: To describe and evaluate a free, online tool for automatically segmenting optical coherence tomography (OCT) images from different devices and computing summary measures such as retinal thickness.

Methods: ReLayer (https://relayer.online) is an online platform to which OCT scan images can be uploaded and analyzed. Results can be downloaded as plaintext (.csv) files. The segmentation method includes a novel, one-dimensional active contour model, designed to locate the inner limiting membrane, inner/outer segment, and retinal pigment epithelium. The method, designed for B-scans from Heidelberg Engineering Spectralis, was adapted for Topcon 3D OCT-2000 and OptoVue AngioVue. The method was applied to scans from healthy and pathological eyes, and was validated against segmentation by the manufacturers, the IOWA Reference Algorithms, and manual segmentation.

Results: Segmentation of a B-scan took ≤1 second. In healthy eyes, the mean difference in retinal thickness between ReLayer and the reference standard was below the resolution of the Spectralis and 3D OCT-2000, and slightly above the resolution of the AngioVue. In pathological eyes, ReLayer performed similarly to IOWA (P = 0.97) and better than Spectralis (P < 0.001).

Conclusions: A free online platform (ReLayer) is capable of segmenting OCT scans with speed, accuracy, and reliability similar to the other tested algorithms, while offering greater accessibility. ReLayer could represent a valuable tool for researchers requiring the full segmentation data that are often not made available by commercial software.

Translational Relevance: ReLayer, a free online platform, provides accessible segmentation of OCT images: data often not available via existing commercial software.

Introduction
Optical coherence tomography (OCT) allows the acquisition of cross-sectional images of the retina (Fig. 1a). Since its invention, OCT has rapidly become an established medical imaging tool, supporting clinicians' diagnoses and decisions, and a fundamental resource in scientific research.1 The key information provided by these images, also called B-scans, is the thickness of the retinal layers, which is essential for detecting, monitoring, and guiding treatment of many eye conditions, including glaucoma, diabetic retinopathy, macular edema, age-related macular degeneration, macular hole, macular pucker, central serous retinopathy, and vitreous traction.2 Currently, these measurements can only be obtained using proprietary software and are not available for export or manipulation (Fig. 1b). This presents a limitation, particularly in scientific research, because access to this information is essential for understanding structural changes of the retina in ocular pathology.
Figure 1
 
(a) A B-scan image from the test data set as exported from the Heidelberg Engineering Spectralis. (b) The same scan with the manufacturer's segmentation of 11 layers, obtained with the proprietary Heyex software, shown superimposed.
To address this problem, segmentation algorithms for the layers in OCT images have been published,3–14 and some have been made freely available as software/code: for example, the IOWA Reference Algorithms v3.8.0 software,7 the Graph-Based Segmentation,8 and the Retina Segmentation Toolbox.9 However, open segmentation remains inaccessible to most clinicians and researchers, who often lack the time, skills, or resources needed to run, compile, or replicate published algorithms/code.
ReLayer (https://relayer.online) is a free, online platform designed to address this accessibility problem and to produce measurements that are as accurate as those from proprietary software. This is achieved by introducing a novel, cross-platform segmentation algorithm that is accessible via web browsers. The platform can be used simply by dragging and dropping image files onto the web interface (Fig. 2). The analysis is run on Matlab R2016a software (MathWorks, Natick, MA) installed on the server. Results are visualized graphically and made available for download in comma-separated-value (.csv) format. ReLayer provides the segmentation of the inner limiting membrane (ILM), retinal pigment epithelium (RPE), and inner segment/outer segment (ISOS) layers. The retinal thickness, calculated as the distance from the ILM to the interface between Bruch's membrane (BM) and the RPE, is computed and visualized on the platform. Here we evaluate the performance of this prototype system, comparing its speed, accuracy, and reliability against other available methods, in scans from different acquisition devices and in scans from patients and healthy volunteers.
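As an illustration of how the downloaded output could be used, the following Python sketch computes retinal thickness from a hypothetical ReLayer .csv export. The column names ("ilm", "rpe"), the one-row-per-A-scan layout, and the axial scale are assumptions made for this example and are not taken from the paper; the real file layout may differ.

```python
# Hypothetical post-processing of a ReLayer .csv export (illustration only).
# Assumed layout: one row per A-scan, with columns "ilm" and "rpe" holding row
# coordinates in pixels.
import csv

AXIAL_UM_PER_PX = 3.87  # approximate Spectralis axial micrometer-to-pixel ratio

def retinal_thickness_um(csv_path):
    """Return ILM-to-RPE/BM thickness in micrometers, one value per A-scan."""
    thickness = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            thickness.append((float(row["rpe"]) - float(row["ilm"])) * AXIAL_UM_PER_PX)
    return thickness
```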
Figure 2
 
The web interface of ReLayer: (a) the main page showing the area dedicated to the drag-and-drop of the scans or the alternative browsing option and the button to launch the analysis. (b) The visualization of the results including the segmentation superimposed on the scans and the interactive, three-dimensional visualization of the retinal thickness.
Materials and Methods
Algorithm
The algorithm was designed to segment retinal layers from 6-mm-wide macular B-scans acquired with the Heidelberg Engineering (Heidelberg, Germany) Spectralis and exported as .tiff image files, the default export format. The algorithm was then adapted to process 6-mm-wide scans exported in the same image format from the Topcon (Tokyo, Japan) 3D OCT-2000 and 3-mm-wide scans from the OptoVue (Fremont, CA) AngioVue. This was achieved by resampling images from the two devices to match the axial and lateral micrometer-to-pixel ratios of the Spectralis. Exported images are 512 × 495 pixels (width × height) from the Spectralis, 512 × 855 from the 3D OCT-2000, and 640 × 304 from the AngioVue. The manufacturers report approximate axial micrometer-to-pixel ratios of 3.87, 2.59, and 3.05 μm, respectively, for the three devices, and axial resolutions of 3.9, 5 to 6, and 5 μm.15–17 For generality, we describe the algorithm using micrometers where possible.
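A minimal sketch of this resampling step is given below, assuming the pixel dimensions and axial ratios quoted above and a lateral ratio derived as scan width divided by exported image width. It is an illustration in Python, not the authors' Matlab implementation, and the interpolation order is our choice.

```python
# Illustrative resampling of a Topcon or OptoVue B-scan to Spectralis pixel spacing.
import numpy as np
from scipy.ndimage import zoom

SPECTRALIS_AXIAL_UM_PER_PX = 3.87           # quoted axial ratio
SPECTRALIS_LATERAL_UM_PER_PX = 6000 / 512   # assumed: 6-mm scan over 512 columns

def resample_to_spectralis(bscan, axial_um_per_px, scan_width_um):
    """Resample a B-scan (rows = depth, cols = lateral) to Spectralis pixel spacing."""
    lateral_um_per_px = scan_width_um / bscan.shape[1]
    zoom_rows = axial_um_per_px / SPECTRALIS_AXIAL_UM_PER_PX
    zoom_cols = lateral_um_per_px / SPECTRALIS_LATERAL_UM_PER_PX
    return zoom(bscan.astype(float), (zoom_rows, zoom_cols), order=1)  # bilinear

# Example: a 6-mm-wide 3D OCT-2000 scan exported as 512 x 855 px (~2.59 um axial).
topcon = np.zeros((855, 512))
resampled = resample_to_spectralis(topcon, axial_um_per_px=2.59, scan_width_um=6000)
```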
The algorithm was developed using Matlab R2016a software (MathWorks) with the Image Processing Toolbox. The algorithm sequentially attempts the identification of three layers in a B-scan, in order: ILM, RPE, and ISOS. The segmentation of each layer is a two-step process, and each segmented layer restricts the search space for the next (Fig. 3). In short, the first step is the detection of a line representing the initial guess for each layer, and is based on the detection of nodal points lying over horizontal edges in the image. The second step corrects each guess by moving it closer to the edges visible in the image. This is achieved using a novel technique based on the active contour model.18,19 If the input to the algorithm is a volume of multiple B-scans, each B-scan is analyzed sequentially.
Figure 3
 
Flowchart of the sequence of operations performed by the algorithm.
Detection of the Initial Guess
The initial guess was obtained through the identification of 36 nodal points spanning the whole width of the scan and connected by linear interpolation. Because of the bright, linear, and quasi-horizontal appearance of the retinal layers in the scans, these points were selected from the horizontal edges, conventionally defined by the magnitude of vertically oriented intensity gradients. To detect the edges, the image was preprocessed using Gaussian filtering (sigma = 3 pixels, kernel size = 6 pixels) to remove noise (Fig. 4a), and the magnitude of the vertical component of the gradient was then calculated using the Sobel gradient operator.20 The result of this operation was a new image of the same size as the original, where the value at each pixel was the magnitude of the vertical gradient at the corresponding location in the original image (Fig. 4b). Then, 36 columns (ci, i = {1, …, 36}), each 14 pixels wide, equally spaced and spanning the whole image width, were selected. The left and right halves of the first and last columns, centered on the edges of the image, were discarded. All values in each column were then averaged across the rows to obtain 36 vertical profiles of the averaged gradient (vpi) (Fig. 4c). The averaging was used to weaken the impact of localized, vertical gradients. These vertical profiles were analyzed to identify their peaks. The peaks were used to select 36 points p(i), i = {1, …, 36}, horizontally centered on the respective column ci and vertically located at the peak. Peaks were identified separately for the initial guesses of the ILM, RPE, and ISOS, to obtain three sets of 36 nodal points: pILM(i), pRPE(i), and pISOS(i), respectively (Fig. 5). Of the two highest peaks in each vpi, the one closer to the top edge of the image was selected as pILM(i) (Fig. 5a). The peak closest to the bottom of the image, among those below the ILM and higher than half the highest peak below the ILM, was defined as pRPE(i) (Fig. 5c). To detect the points of the ISOS, each vpi was multiplied by a gamma probability density function (gpdf), with its origin shifted 20 μm above the RPE, oriented toward the top of the image, and defined by the shape parameter k = 1.84 and scale parameter θ = 58 μm. The resulting statistical mode of this gpdf was approximately 80 μm. The gpdf was designed so that the multiplication vpi * gpdf would strengthen the peaks of vpi close to the peak of the gpdf and cancel out peaks below, or closer than 20 μm to, the RPE. The points pISOS(i) were then selected as the highest peaks in the profiles vpi * gpdf (Fig. 5e). If the algorithm could not identify any of these peaks, the corresponding points for the initial guess were discarded. Finally, the initial guesses for the ILM, RPE, and ISOS were obtained by linear interpolation of the identified nodal points pILM(i), pRPE(i), and pISOS(i).
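The sketch below illustrates this initial-guess step in Python/NumPy under the parameters stated above (Gaussian sigma of 3 pixels, 36 columns of 14 pixels, gamma pdf with k = 1.84 and θ = 58 μm shifted 20 μm above the RPE, and a Spectralis axial scale of about 3.87 μm/pixel). It is an illustrative approximation, not the authors' Matlab code; the function names and peak-finding details are ours.

```python
# Illustrative sketch of the nodal-point detection (not the authors' implementation).
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from scipy.signal import find_peaks
from scipy.stats import gamma

UM_PER_PX = 3.87            # assumed axial micrometer-to-pixel ratio (Spectralis)
N_COLS, COL_W = 36, 14      # number of nodal columns and their width in pixels

def vertical_profiles(bscan):
    """Average the vertical gradient magnitude over 36 equally spaced columns."""
    smoothed = gaussian_filter(bscan.astype(float), sigma=3)   # noise removal
    grad = np.abs(sobel(smoothed, axis=0))                     # vertical Sobel gradient
    h, w = grad.shape
    centers = np.linspace(0, w - 1, N_COLS).round().astype(int)
    profiles = []
    for c in centers:                                          # halves outside the image are dropped
        lo, hi = max(0, c - COL_W // 2), min(w, c + COL_W // 2)
        profiles.append(grad[:, lo:hi].mean(axis=1))
    return centers, np.array(profiles)

def nodal_points(profile):
    """Pick the ILM, ISOS, and RPE rows from one averaged gradient profile."""
    peaks, _ = find_peaks(profile)
    # ILM: of the two highest peaks, the one closest to the top of the image.
    two_highest = peaks[np.argsort(profile[peaks])[-2:]]
    ilm = int(two_highest.min())
    # RPE: the peak closest to the bottom, among peaks below the ILM whose height
    # exceeds half of the highest peak below the ILM.
    below = peaks[peaks > ilm]
    strong = below[profile[below] > 0.5 * profile[below].max()]
    rpe = int(strong.max())
    # ISOS: highest value of the profile weighted by a gamma pdf whose origin sits
    # 20 um above the RPE and which points toward the top of the image
    # (shape k = 1.84, scale theta = 58 um); the pdf is zero at and below its origin.
    rows = np.arange(profile.size)
    dist_um = (rpe - rows) * UM_PER_PX - 20.0                  # distance above the shifted origin
    weight = gamma.pdf(dist_um, a=1.84, scale=58.0)
    isos = int(np.argmax(profile * weight))
    return ilm, isos, rpe
```

The 36 (column center, row) pairs returned for each layer would then be connected by linear interpolation (e.g., np.interp) to form the initial guess.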
Figure 4
 
(a) Example B-scan processed with the Gaussian filter for noise removal; (b) processed with the Sobel gradient operator for edge detection; (c) divided into 36 columns to obtain 36 vertical profiles of the averaged gradient. In red, the vertical profile vp6 obtained by averaging the values across the rows of the sixth column.
Figure 5
 
(a) The vertical profile vp6 and the nodal point pILM(6). The vertical coordinate of the point is identified by the high peak in vp6 closest to the top of the image; (b) the 36 nodal points pILM of the initial guess for ILM. (c) The vertical profile vp6 and the nodal point pRPE(6). The vertical coordinate of the point is identified by the high peak in vp6 below ILM and closest to the bottom margin of the image; (d) the 36 nodal points pRPE of the initial guess for RPE. (e) The vertical profile vp6 (in red), the gpdf starting 20 μm above the RPE (in black), the profile of vpi * gpdf (in blue), and the point pISOS(6). The vertical coordinate was obtained as the highest peak of vpi * gpdf; (f) the 36 nodal points pISOS of the initial guess for ISOS.
Active Contour Model
The second step in the analysis was based on a modified version of the established technique known as the "active contour model" or "snake,"18 frequently used in computerized image analysis for the segmentation of contours in images. Briefly, the model iteratively deforms a line (the initial guess) so that it adheres to the boundaries in an image. This is achieved by solving an energy minimization problem governed by the image energy \(E_{\rm image}\), which pulls the points of the line toward lines and edges in the image, and the internal energy of the line \(E_{\rm internal}\), interpretable as the stretching-bending capabilities of the line, which resists the deformation.
This model has the advantage of being robust to noise and discontinuous boundaries, but it requires priors, such as an initial guess and the weights/parameters defined by the user. The total energy of the contour \(E_{\rm tot}\) that is minimized through the iterations is given by:
\begin{equation}
E_{\rm tot} = \int_0^1 \left[\, k \cdot E_{\rm image}\big(v(s),\, w_{\rm line},\, w_{\rm edge}\big) + E_{\rm internal}\big(v(s),\, \alpha,\, \beta\big) \right] ds{\rm ,}
\end{equation}
where \(v_i\), i = {1, …, 512}, are the points of the contour across the width of the image; \(w_{\rm line}\), \(w_{\rm edge}\), and \(k\) are user-assigned weights that define the impact of lines and edges in the calculation of \(E_{\rm image}\), and the impact of \(E_{\rm image}\) in the calculation of \(E_{\rm tot}\); and \(\alpha\) and \(\beta\) are parameters that control the amount of stretch and curvature of the contour. The proposed algorithm used a modified version of the conventional two-dimensional model that restricts the deformation of the line to the vertical dimension, allowing the points of the line to move only upwards or downwards. This was achieved by removing the horizontal components from the conventional formulation of the minimization problem, reducing its dimensionality and therefore the complexity of the computation. This modified, one-dimensional active contour model was applied three times by the algorithm, once for each of the initial guesses. The parameters used are shown in Table 1. The active contour models of the ILM and ISOS shared the same parameters. Different parameters were used for the RPE to make the contour "stiffer," allowing less sharp bends, to reflect the cross-sectional morphology of the layer. At the end of the 50th iteration, the deformed initial guess represented the final segmentation.
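To make the vertical-only restriction concrete, the following Python sketch implements a generic one-dimensional snake using the classic semi-implicit update. It is an illustration under stated assumptions, not the authors' implementation: the external-energy image, the parameter values, and the function names are ours, and the actual weights used by ReLayer are those listed in Table 1.

```python
# Minimal one-dimensional active contour ("snake") whose points move only vertically.
# E is an external-energy image to be minimized (e.g., the negative vertical gradient
# magnitude) and y0 holds the initial-guess row for every column.
import numpy as np

def snake_1d(E, y0, alpha=0.1, beta=1.0, k=1.0, step=1.0, n_iter=50):
    h, w = E.shape
    Fy = -np.gradient(E, axis=0)                 # vertical external force: -dE/dy
    # Pentadiagonal internal-energy matrix from stretching (alpha) and bending (beta).
    A = np.zeros((w, w))
    for i in range(w):
        A[i, i] = 2 * alpha + 6 * beta
        for off, val in ((1, -alpha - 4 * beta), (2, beta)):
            if i - off >= 0:
                A[i, i - off] = val
            if i + off < w:
                A[i, i + off] = val
    inv = np.linalg.inv(A + step * np.eye(w))    # semi-implicit scheme
    y = y0.astype(float).copy()
    cols = np.arange(w)
    for _ in range(n_iter):                      # e.g., 50 iterations, as in the paper
        rows = np.clip(np.rint(y).astype(int), 0, h - 1)
        y = inv @ (step * y + k * Fy[rows, cols])
        y = np.clip(y, 0, h - 1)
    return y
```

Because only the vertical coordinates evolve, a single linear system of size equal to the image width is solved per iteration, which is the reduction in dimensionality described above.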
Table 1
 
Weights and Parameters of the Active Contours Models
Evaluation
The method was tested against the manufacturers' segmentation for scans acquired from healthy eyes with the Spectralis and AngioVue. The segmentation from the Spectralis was obtained from .vol files exported from a version of the Heyex software (Heidelberg Engineering) enabled for RAW data export. The segmentation from the AngioVue was obtained from the .xml files exported from the device. The manufacturer's segmentation of scans from healthy eyes acquired with the Topcon device was not available, so the method was evaluated against a manual segmentation by an expert clinician (XL). The manual segmentation was aided by a custom tool created for this purpose in Matlab. The tool allowed the clinician to select points on the scans and interpolated them with polynomial fitting lines, as sketched below. The test scans were obtained from three data sets of volunteers and one data set of patients, each acquired with one of the three devices for previous studies. Volunteers were examined by a clinician to exclude any pathology. The fourth data set included randomly selected patients with a range of known retinal pathology. The first data set included 48 raster scans acquired with the Spectralis from 48 healthy eyes of 24 volunteers (protocol approved by the NRES East Midlands Ethics Committee, Ref: 14/EM/1163).21 The second data set consisted of 18 raster scans acquired with the 3D OCT-2000 from 18 healthy eyes of nine volunteers (protocol approved by the City University of London Ethics Committee, Ref: OPT/PR/16-17/36). The third data set included 15 raster scans from 15 healthy eyes of 15 volunteers acquired with the AngioVue (protocol approved by the Humanitas Gavazzeni Hospital Ethics Committee, Ref: 253-17 GAV).22 The fourth data set included 19 raster scans acquired with the Spectralis from 19 eyes of 19 patients with known retinal pathology (including macular edema, age-related macular degeneration, and previous choroidal neovascularization) (protocol approved by the NRES East Midlands Ethics Committee, Ref: 14/EM/1163).21 From each raster scan of a volunteer, a single B-scan was randomly selected and used for the evaluation. The two segmentations from the Spectralis and AngioVue were checked by a clinician (GM) to identify any visible errors and, together with the manual segmentation of the Topcon scans, were used as the Reference Standard (RS) for testing. In the data set of patients with macular edema, one of the five central B-scans was randomly selected for the analysis from each raster scan to capture structural changes. Here, the RS was obtained as the average of two manual segmentations by two clinicians (GM and XL). Areas of the scans where either clinician was unable to identify the layer were excluded from the analysis. On these scans, the RS was compared with the segmentation results of the proposed algorithm as well as with the segmentation from the manufacturer (Heidelberg Engineering) and from the IOWA Reference Algorithms. The full segmentation from IOWA was obtained from the XML files created during the segmentation. The mean absolute distance (MAD) was chosen to quantify the difference between the RS and the tested segmentation for the three layers. The layer here defined as RPE was compared with the BM identified by the Spectralis to reflect the different notations. Retinal thickness, obtained as the distance between ILM, ISOS, and RPE (or BM in the Spectralis), was also measured.
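A minimal sketch of such a point-interpolation step, assuming clinician-marked (column, row) points and a polynomial degree of our choosing (the actual tool's settings are not reported), might look like this:

```python
# Hypothetical sketch of interpolating manually selected points with a polynomial fit
# (not the authors' Matlab tool; the degree is an assumption).
import numpy as np

def manual_boundary(marked_cols, marked_rows, image_width, degree=3):
    """Fit a polynomial through clinician-marked points and evaluate it at every column."""
    deg = min(degree, len(marked_cols) - 1)
    coeffs = np.polyfit(marked_cols, marked_rows, deg)
    return np.polyval(coeffs, np.arange(image_width))
```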
Finally, the micrometer-to-pixel ratios provided by the manufacturers were used to convert measurements in pixels to micrometers. This conversion allowed the MAD to be interpreted against the resolution of each device, with segmentation differences larger than the resolution considered meaningful. MAD values for retinal thickness were calculated for each scan and compared across algorithms using mixed effects models. All statistical calculations were performed in R (R Foundation for Statistical Computing, Vienna, Austria).
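Written out in our own notation (an illustration, not a formula quoted from the paper), the MAD of a tested boundary from the RS is the mean absolute vertical distance between the two, scaled to micrometers:
\begin{equation}
{\rm MAD} = \frac{s}{N}\sum_{i = 1}^{N} \left| y_i^{\rm test} - y_i^{\rm RS} \right|{\rm ,}
\end{equation}
where \(y_i\) is the row coordinate of the boundary in column i, N is the number of columns compared, and s is the axial micrometer-to-pixel ratio of the device.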
Results
The mean time to segment a single B-scan was 0.94 seconds (standard deviation [SD] 0.15) when running on a desktop computer with an Intel (Santa Clara, CA) Core i5-6500 CPU @ 3.20 GHz and 16 GB of RAM. Online processing may take longer, as images need to be uploaded to the server, and can vary depending on the workload of the server, the number of clients connected at the time of processing, and the size of the volume to be processed. For these reasons, the total execution time for a volume can range from approximately 30 seconds to 2 minutes.
In healthy volunteers, the clinical validation of the manufacturers' segmentation did not identify any errors in the scans from the Spectralis; some errors were identified in the segmentation of the RPE and ISOS layers from the AngioVue, but these were considered minor. Table 2 shows the resulting MAD, in pixels and in micrometers, between the layers segmented by ReLayer and the RS for the three devices in scans from healthy eyes. The mean difference (SD) in calculated retinal thickness was 3.45 μm (0.83), 3.63 μm (2.39), and 6.16 μm (3.41), respectively, for the Spectralis, 3D OCT-2000, and AngioVue. These values were lower than the resolution of the Spectralis and the 3D OCT-2000 (3.9 and 5–6 μm, respectively) and slightly above the declared resolution of the AngioVue (5 μm). This means that the mean difference between the proposed segmentation and the RS is negligible for the measurement of retinal thickness from the first two devices under evaluation. Similarly, the comparison of individual layers revealed a mean difference below the resolution of all three instruments, with the exception of the RPE layer segmented on AngioVue scans, where the mean difference was slightly above the resolution. Results of the segmentation are shown for random samples of the three data sets in Figure 6. By inspection, no major differences could be identified between the proposed method and the segmentation used as the RS.
Table 2
 
Distance Between the Proposed Segmentation and the RS, and Difference of the Calculated Thickness for the Three Devices
Figure 6
 
The segmentation results from RS (green) and from ReLayer (dashed red) in a random subset of the test images from healthy subjects for images acquired with: (a) Heidelberg Engineering “Spectralis,” (b) Topcon 3D “OCT-2000,” and (c) OptoVue “AngioVue.”
Table 3 shows the results of the comparison of the ReLayer, Spectralis, and IOWA algorithms against the RS in scans acquired with the Spectralis from eyes with pathology. All thickness measurements obtained with the three segmentation algorithms showed an average difference from the RS that was greater than the published resolution of the device (3.9 μm). All individual layers identified by IOWA deviated from the RS by an amount that was, on average, greater than the resolution of the device. The mean distance between the Spectralis segmentation and the RS was below the resolution, and therefore negligible, only for the ILM. The mean distance between the ReLayer segmentation and the RS was negligible for both the ILM and RPE layers. However, the calculated SD values showed a greater variation around the average distance between ReLayer and the RS than between the other two algorithms and the RS, particularly for the RPE and ISOS. The average MAD for retinal thickness obtained with ReLayer on each scan showed no statistically significant difference from IOWA (P = 0.973; 0.05 ± 0.23 μm maximum error-difference). However, both ReLayer and IOWA showed significantly lower MAD values compared with the Spectralis (P < 0.001; 1.88 ± 0.23 μm and 1.83 ± 0.23 μm maximum error-difference, respectively). Examples of the segmentation results from the three algorithms and the RS are shown in Figure 7 on selected areas, to highlight their behavior in the presence of pathological changes. See Supplementary Figure S1 for the segmentation results on the whole data set.
Table 3
 
Distance Between the Three Segmentation Algorithms and the RS in the Data Set of Scans From Pathological Eyes
Figure 7
 
Segmentation results on six selected areas depicting structural changes from six of the pathological scans used for testing. The RS is shown in green where both clinicians could identify the layer. ReLayer segmentation is represented by a dashed red line, Heidelberg Engineering segmentation by a blue line, and IOWA segmentation by a pink dotted line.
Discussion
In healthy eyes, the segmentation from ReLayer was as accurate as that from the manufacturer in scans obtained from the Heidelberg Engineering Spectralis and as reliable as a manual segmentation by a clinical expert in scans from the Topcon 3D OCT-2000. The calculated thickness differed from the RS by 3.59 and 3.63 μm on average; these values are below the resolution of both instruments and compatible with the measured repeatability of the acquisition devices.23 The mean difference of the segmentation, evaluated for individual layers, was also below the resolution, meaning that further improvements would yield little practical benefit. When compared with the segmentation from the OptoVue AngioVue, the thickness calculated by the proposed method differed by 6.16 μm, 1.16 μm above the published resolution of the device. However, small segmentation errors were noted by the clinician when evaluating the RS segmentation from the AngioVue. These imperfections could have contributed to this difference and represented a limitation in the assessment of the method for this particular device. The brighter appearance of scans taken with this device could also have affected the segmentation by slightly changing the behavior of the proposed algorithm, which is based on the gradient between brighter and darker areas in the image. However, this could be an instance where using the same cross-platform segmentation algorithm would be particularly beneficial, providing a more homogeneous approach to the task. In short, these results indicate that the proposed segmentation was correct and within the expected measurement variability of the devices.23
The evaluation on scans from eyes with pathology revealed nonnegligible differences between all three segmentation methods and the RS. The MAD of the proposed method from the RS was lower than that seen for Spectralis and IOWA. In addition, the greater variation demonstrated by larger SD values suggests that segmentation errors by ReLayer were larger in some areas of these scans. Errors by the proposed algorithm could be due to its design, which is based on several hard thresholds. Alternatively, errors could be caused by isolated edges extending across multiple A-scans and trapping the segmentation in local minima. These results reflect the different behaviors of the three methods rather than identifying one as superior. Segmentation discrepancies with the RS larger than the resolution show that a single, generalizable algorithm capable of accurately segmenting layers in both healthy and pathological scans is still an open challenge. These findings support the idea of using disease-specific algorithms in the presence of ocular conditions.6,24 ReLayer will address this problem with the future introduction of variants of the algorithm, customized for individual conditions, to the platform. The comparison of the average MAD for the calculated retinal thickness showed a similar deviation from the RS for ReLayer and IOWA, which was smaller than that of Spectralis. Notably, the maximum error-difference was below the resolution of the instrument.
Execution is generally slower than the segmentation provided by the manufacturers and IOWA, but it is still fast enough to allow the analysis of raster scans in a clinical setting, for example during a patient's consultation. ReLayer is at a very early stage of its development and improvements are planned in the near future. Execution time can, and will, be improved considerably by translating the code into a compiled language. In addition, we plan to move our service to cloud computing, allowing users all over the world to use the software at the same time with no negative impact on performance.
ReLayer is fully automatic, free, and has no requirements other than access to a web browser. The intuitive drag-and-drop of the scans, the 3D visualization of the thickness profile, and the download of the coordinates of all segmented layers as .csv files make the results easily accessible. For these reasons, we believe that ReLayer represents a useful tool for both researchers and clinicians. Future developments will include the segmentation of new layers, support for OCT scans from Carl Zeiss Meditec (Jena, Germany) Cirrus devices, and support for wide-field scans. Finally, the platform will be upgraded to include disease-specific segmentation, to allow processing of multiple volumes with a single upload, and to allow manual correction of the results.
Acknowledgments
Supported by the Wellcome Trust 200141/Z/15/Z. AKD and PAK receive a proportion of their funding from the Department of Health's NIHR Biomedical Research Centre for Ophthalmology at Moorfields Eye Hospital and UCL Institute of Ophthalmology. GO, XL, DC, and AKD receive a proportion of their funding for this project from the Wellcome Trust, through a Health Improvement Challenge grant. IM is supported by the Biotechnology and Biological Sciences Research Council (Grant No. BB/M009513/1). 
Disclosure: G. Ometto, None; I. Moghul, None; G. Montesano, None; A. Hunter, None; N. Pontikos, None; P.R. Jones, None; P.A. Keane, Deepmind (C), Allergan (R), Bayer (R), Carl Zeiss Meditec (R), Haag-Streit (R), Heidelberg Engineering (R), Novartis (R), Topcon (R); X. Liu, None; A.K. Denniston, None; D.P. Crabb, Roche (F), CenterVue (C), Allergan (R), Santen (R) 
References
1. Fujimoto J, Swanson E. The development, commercialization, and impact of optical coherence tomography. Invest Ophthalmol Vis Sci. 2016; 57: OCT1–OCT13.
2. American Academy of Ophthalmology. What conditions can OCT help to diagnose? Available at: https://www.aao.org/eye-health/treatments/what-does-optical-coherence-tomography-diagnose. Accessed December 1, 2018.
3. DeBuc DC. A review of algorithms for segmentation of retinal image data using optical coherence tomography. In: Ho P-G, ed. Image Segmentation. London, UK: InTech; 2011: 15–54.
4. Lang A, Carass A, Hauser M, et al. Retinal layer segmentation of macular OCT images using boundary classification. Biomed Opt Exp. 2013; 4: 1133–1152.
5. Lang A, Carass A, Sotirchos E, Calabresi P, Prince JL. Segmentation of retinal OCT images using a random forest classifier. Proc SPIE Int Soc Opt Eng. 2013: 86690R.
6. Fang L, Cunefare D, Wang C, Guymer RH, Li S, Farsiu S. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. Biomed Opt Exp. 2017; 8: 2732–2744.
7. Abràmoff MD, Garvin MK, Sonka M. Retinal imaging and image analysis. IEEE Rev Biomed Eng. 2010; 3: 169–208.
8. Teng P. Caserel–an open source software for computer-aided segmentation of retinal layers in optical coherence tomography images. Zenodo. 2013; 10. https://doi.org/10.5281/zenodo.17893.
9. Rathke F, Schmidt S, Schnörr C. Probabilistic intra-retinal layer segmentation in 3-D OCT images using global shape regularization. Med Image Anal. 2014; 18: 781–794.
10. Belghith A, Bowd C, Medeiros FA, et al. Does the location of Bruch's membrane opening change over time? Longitudinal analysis using San Diego Automated Layer Segmentation Algorithm (SALSA). Invest Ophthalmol Vis Sci. 2016; 57: 675–682.
11. Belghith A, Bowd C, Medeiros FA, Weinreb RN, Zangwill LM. Automated segmentation of anterior lamina cribrosa surface: how the lamina cribrosa responds to intraocular pressure change in glaucoma eyes? 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). New York, NY: IEEE; 2015: 222–225.
12. Lee CS, Baughman DM, Lee AY. Deep learning is effective for the classification of OCT images of normal versus age-related macular degeneration. Ophthalmol Retina. 2017; 1: 322–327.
13. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Med. 2018; 24: 1342.
14. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172: 1122–1131.
15. Heidelberg Engineering "Spectralis" datasheet. Available at: https://business-lounge.heidelbergengineering.com/gb/en/products/spectralis/spectralis/downloads/. Accessed December 1, 2018.
16. Topcon "3D OCT-2000" datasheet. Available at: http://www.topcon-medical.eu/files/EU_Downloads/Products/3D_OCT-2000/3D_OCT_2000series_en.brochure.pdf. Accessed December 1, 2018.
17. OptoVue "AngioVue" datasheet. Available at: https://www.optovue.com/products/AngioVue-ifusion. Accessed December 1, 2018.
18. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comp Vis. 1988; 1: 321–331.
19. Farsiu S, Chiu SJ, Izatt JA, Toth CA. Fast detection and segmentation of drusen in retinal optical coherence tomography images. Ophthalmic Technologies XVIII. Bellingham, WA: International Society for Optics and Photonics; 2008: 68440D.
20. Sobel I, Feldman G. A 3x3 isotropic gradient operator for image processing, presented at a talk at the Stanford Artificial Intelligence Project. In: Duda R, Hart P, eds. Pattern Classification and Scene Analysis. John Wiley & Sons; 1968: 271–272.
21. Montesano G, Way C, Ometto G, et al. Optimizing OCT acquisition parameters for assessments of vitreous haze for application in uveitis. Sci Rep. 2018; 8: 1648.
22. Allegrini D, Montesano G, Fogagnolo P, et al. The volume of peripapillary vessels within the retinal nerve fibre layer: an optical coherence tomography angiography study of normal subjects. Br J Ophthalmol. 2018; 102: 611–621.
23. Terry L, Cassels N, Lu K, et al. Automated retinal layer segmentation using spectral domain optical coherence tomography: evaluation of inter-session repeatability and agreement between devices. PLoS One. 2016; 11: e0162001.
24. Chiu SJ, Izatt JA, O'Connell RV, Winter KP, Toth CA, Farsiu S. Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images. Invest Ophthalmol Vis Sci. 2012; 53: 53–61.
Supplement 1