Open Access
Artificial Intelligence  |   October 2022
Synthetic OCT Data Generation to Enhance the Performance of Diagnostic Models for Neurodegenerative Diseases
Author Affiliations & Notes
  • Hajar Danesh
    School of Advanced Technologies in Medicine, Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Isfahan, Iran
  • David H. Steel
    Sunderland Eye Infirmary, Sunderland, Tyne and Wear, UK
    Centre for Transformative Neuroscience and Institute of Biosciences, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  • Jeffry Hogg
Royal Victoria Infirmary Eye Department, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, Tyne and Wear, UK
    Population Health Sciences Institute, Newcastle University, Newcastle Upon Tyne, Tyne and Wear, UK
  • Fereshteh Ashtari
    Isfahan Neurosciences Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
  • Will Innes
Royal Victoria Infirmary Eye Department, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, Tyne and Wear, UK
    School of Computing, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  • Jaume Bacardit
    School of Computing, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  • Anya Hurlbert
    Centre for Transformative Neuroscience and Institute of Biosciences, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  • Jenny C. A. Read
    Centre for Transformative Neuroscience and Institute of Biosciences, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  • Rahele Kafieh
    School of Advanced Technologies in Medicine, Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Isfahan, Iran
    Centre for Transformative Neuroscience and Institute of Biosciences, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
    Department of Engineering, Durham University, South Road, Durham, UK
  • Correspondence: Rahele Kafieh, Department of Engineering, Durham University, South Road, Durham, UK. [email protected] 
Translational Vision Science & Technology October 2022, Vol.11, 10. doi:https://doi.org/10.1167/tvst.11.10.10
Abstract

Purpose: Optical coherence tomography (OCT) has recently emerged as a source for powerful biomarkers in neurodegenerative diseases such as multiple sclerosis (MS) and neuromyelitis optica (NMO). The application of machine learning techniques to the analysis of OCT data has enabled automatic extraction of information with potential to aid the timely diagnosis of neurodegenerative diseases. These algorithms require large amounts of labeled data, but few such OCT data sets are available now.

Methods: To address this challenge, here we propose a synthetic data generation method yielding a tailored augmentation of three-dimensional (3D) OCT data and preserving differences between control and disease data. A 3D active shape model is used to produce synthetic retinal layer boundaries, simulating data from healthy controls (HCs) as well as from patients with MS or NMO.

Results: To evaluate the generated data, retinal thickness maps are extracted and evaluated under a broad range of quality metrics. The results show that the proposed model can generate realistic-appearing synthetic maps. Quantitatively, the image histograms of the synthetic thickness maps agree with the real thickness maps, and the cross-correlations between synthetic and real maps are also high. Finally, we use the generated data as an augmentation technique to train stronger diagnostic models than those using only the real data.

Conclusions: This approach provides valuable data augmentation, which can help overcome key bottlenecks of limited data.

Translational Relevance: By addressing the challenge posed by limited data, the proposed method helps apply machine learning methods to diagnose neurodegenerative diseases from retinal imaging.

Introduction
Multiple sclerosis (MS) is an unpredictable and recurrent disease that affects the nerve cells of the brain and spinal cord and destroys the protective myelin sheath around the nerve fibers, causing vision problems and impaired muscle control. Loss of vision is one of the leading causes of disability in patients with MS. Several studies have reported a correlation between the axonal loss in the optic nerve and the degree of functional disability in patients with MS.1,2 Neuromyelitis optica (NMO) is another neurodegenerative disease that affects the eye and spinal cord and occurs when the immune system attacks healthy cells in the central nervous system. 
Optical coherence tomography (OCT) facilitates cross-sectional imaging of the retina based on interference patterns produced by low-coherence near-infrared light reflected from retinal tissues. OCT has been used as an easy, fast, and noninvasive method for qualitative and quantitative evaluation of retinal changes in neurologic disorders.3,4 This technique makes it possible to reconstruct cross-sectional structural images with an axial resolution of approximately 4 µm.5 
Over the past decades, several clinical and paraclinical procedures have been performed to diagnose neurodegenerative diseases like MS and NMO. Magnetic resonance imaging (MRI) is widely used to diagnose specific inflammatory lesions and tissue atrophy in the brain and spinal cord.6,7 Recently, diagnostic procedures have been increasingly complemented by retinal imaging with OCT, first described in MS by Parisi et al.8 Further studies have shown that two features derived from OCT scans in MS and NMO, namely, the peripapillary retinal nerve fiber layer thickness as a measure of axonal health and the macular volume and ganglion cell and inner plexiform layer (GCIPL) thickness as a measure of neuronal health, are linked to MRI-based measures of myelin health in the posterior visual pathway.9–16 
Machine learning (ML) methods have great promise in ophthalmology for discriminating different diseases.17,18 The main limitation of ML in applications like discrimination of MS and NMO is the availability of large and well-annotated training data sets. Synthetic OCT data could address this issue by supplying additional training data, covering underrepresented classes to reduce bias, and avoiding the privacy issues associated with the collection of real imaging data. 
The construction of synthetic OCT data has been considered by researchers over the past few years.19–22 In recent work,23,24 we used an active shape model (ASM25) to construct synthetic two-dimensional (2D) and three-dimensional (3D) OCT data in the macular region. In this article, we use that model as an augmentation method to generate synthetic 3D OCT boundaries of the macular region from healthy controls (HCs) and patients with MS and NMO. The thickness maps of retinal layers (strong biomarkers of MS and NMO) are then calculated using both synthetic and real data. Three validation strategies are formulated to assess the utility and integrity of the generated data and to justify its use to augment real data in future research. The strategies include histogram comparison methods, comparison of statistical properties between original and synthetic 2D maps (retinal thickness maps), and a standard classification measure to evaluate the efficacy of the synthetic data augmentation method in disease prediction. Figure 1 shows the proposed approach in a graphical abstract. 
Figure 1.
Graphical abstract of the proposed approach.
Materials and Methods
The OCT data set was collected using Spectralis SD-OCT (Heidelberg Engineering, Heidelberg, Germany) at the Kashani Comprehensive MS Center, Isfahan, Iran (details in Ashtari et al.14). It consists of OCT data from HCs (26 eyes) and patients with NMO (30 eyes) and MS (30 eyes). To construct the proposed model, a small number of scans (five OCT volumes) was randomly selected from each class for the training stage and used to synthesize 25 three-dimensional OCT boundaries in each category. In total, 130 OCT volumes from HCs26 were used for further validation of the synthetic data (30 eyes for training and 100 eyes for further comparison with synthetic data). 
Each OCT image stack contains 25 horizontal B-scans (each with 512 A-scans, automatic real-time tracking over nine frames, and an axial resolution of 3.8 µm), scanning a macular area of 6 by 6 mm centered on the fovea. Automated segmentation of retinal layer boundaries was performed using a custom-developed graph-based method27 with reference values presented in Kafieh et al.26 The segmentation results were quality controlled by an ophthalmologist and manually corrected in case of errors, using the method in Montazerin et al.28 To account for eye laterality, 3D OCTs from left eyes are flipped so that the nasal area is located on the right side of the thickness maps. 
The boundaries of intramacular layers were calculated for macular retinal nerve fiber layer (mRNFL), GCIPL, inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), myoid-ellipsoid zone (MEZ), and retinal pigment epithelium (RPE). The reporting is according to APOSTEL recommendations.29 Figure 2 shows an example of a 3D OCT image stack with extracted layers. 
Figure 2.
Example of retinal 3D OCT image stack and the location of retinal layer boundaries.
Synthetic Data Generation Using 3D ASM
Generating synthetic data requires a statistical shape model that can produce new 3D retinal layer boundaries. The model should be general (i.e., able to generate any plausible example of the class it represents), so the thickness and morphology of the synthetic retinal layers must be sufficiently variable. However, the model should also be target oriented, meaning it is constrained to generate only suitable shapes. To achieve this, a statistical model based on prior knowledge is built by analyzing the statistical characteristics of a set of annotated 3D OCT scans in the training stage. The annotations provide a set of feature points in three dimensions by which the training maps are aligned and from which the model extracts their principal components. The trained model captures and generalizes the statistical characteristics of the retinal layer boundaries in the training set, allowing us to synthesize similar layer boundary shapes. 
Annotated 3D OCT scans in the training stage are first cropped to cover a symmetrical 6-mm distance around the fovea. The layer boundaries in the training set must cover the different types of variation we wish the model to represent. We annotate each 3D OCT image stack with a set of points used as landmarks for the alignment. In this study, we use 91 landmark points on each of the eight retinal layer boundaries on each of the 25 B-scans in a stack, for a total of n = 18,200 points per image stack. 
For the ith boundary, the jth landmark point is represented by \(({x_{ij}},{y_{ij}},{z_{ij}})\), in coordinates where x and y correspond to the horizontal and vertical components of each B-scan, z indexes the identity of each B-scan, and j runs from 1 to 91. By definition, \({z_{ij}}\) is the same for all points in a given B-scan. The first and last landmark points are taken to be the left and right edges of the B-scan; thus, by definition, \({x_{i1}} = 1\) and \({x_{i91}} = 512\) in every case, with the values \({y_{i1}}\) and \({y_{i91}}\) giving the height of the ith boundary at the edges. To obtain the other landmark points, we identify the coordinates corresponding to the center of the macula in the given image stack, \(({x_{mac}},{y_{mac}},{z_{mac}})\). For all boundaries, \({x_{i46}}\) is defined to be \({x_{mac}}\). The remaining points \({x_{i2}}, \ldots ,{x_{i45}}\) and \({x_{i47}}, \ldots ,{x_{i90}}\) are then spaced evenly between, respectively, \({x_{i1}} = 1\) and \({x_{i46}} = {x_{mac}}\), and \({x_{i46}} = {x_{mac}}\) and \({x_{i91}} = 512\). Each \({y_{ij}}\) is then the height of the ith boundary at \({x_{ij}}\). 
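To make this sampling scheme concrete, the sketch below (our own illustration rather than the authors' code; the array shapes and variable names are assumptions) places the 91 landmark x-coordinates for one boundary in one B-scan and reads off the corresponding boundary heights.

```python
# Sketch of the landmark placement described above (assumed shapes/names).
import numpy as np

WIDTH = 512  # A-scans per B-scan

def landmark_x(x_mac: float) -> np.ndarray:
    """91 landmark x-coordinates: point 1 at the left edge (x=1), point 46
    at the macular center, point 91 at the right edge (x=512), with the
    remaining points spaced evenly in between."""
    left = np.linspace(1, x_mac, 46)           # points 1..46
    right = np.linspace(x_mac, WIDTH, 46)      # points 46..91
    return np.concatenate([left, right[1:]])   # drop the duplicated x_mac

def sample_landmarks(boundary_height: np.ndarray, x_mac: float) -> np.ndarray:
    """boundary_height: height (y) of one boundary at each of the 512
    A-scans. Returns a (91, 2) array of (x, y) landmarks; the z-coordinate
    is simply the B-scan index. 91 points x 8 boundaries x 25 B-scans
    gives the 18,200 landmarks per stack."""
    xs = landmark_x(x_mac)
    ys = np.interp(xs, np.arange(1, WIDTH + 1), boundary_height)
    return np.stack([xs, ys], axis=1)
```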
We apply Procrustes analysis30 to align all image stacks to a reference image stack, and a point distribution model31 is then constructed. Assuming that the variability within the population occurs along just a few directions in this space, the dimensionality is reduced to a lower-dimensional space using principal component analysis (PCA). Each layer boundary in the training set can now be approximated by the mean shape plus a weighted sum of the first t principal components, and we can synthesize new layer boundaries by assigning different plausible values to the weights of the principal components. Details of this procedure are elaborated in the Supplementary Material. 
In this way, we generate synthetic vectors X, describing the locations of the 18,200 landmark points on layer boundaries of a synthetic OCT stack. We finally interpolate values linearly between the landmark points to recover the complete synthesized boundaries. 
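A minimal sketch of this synthesis step follows, assuming the 18,200 landmarks of each training stack have already been Procrustes-aligned and flattened into one vector per stack; the ±3-standard-deviation bound on each weight is the common ASM plausibility convention, not a value quoted from this paper.

```python
# Point distribution model: PCA over aligned landmark vectors, then new
# shapes as mean + weighted sum of the first t principal components.
import numpy as np
from sklearn.decomposition import PCA

def fit_shape_model(aligned_shapes: np.ndarray, t: int) -> PCA:
    """aligned_shapes: (n_stacks, 18200 * 3) flattened landmark vectors."""
    return PCA(n_components=t).fit(aligned_shapes)

def synthesize_shape(model: PCA, rng: np.random.Generator) -> np.ndarray:
    """Draw each weight b_k within +/- 3 * sqrt(lambda_k) of zero so the
    synthesized shape stays plausible, then reconstruct the landmarks."""
    sd = np.sqrt(model.explained_variance_)       # per-mode standard deviation
    b = rng.uniform(-3 * sd, 3 * sd)              # bounded mode weights
    return model.mean_ + model.components_.T @ b  # flattened landmark vector
```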
Figure 3 shows examples of the synthesized layer boundaries for each of the classes (HC, MS, and NMO). As we focus on retinal thickness maps because of their usefulness as biomarkers for neurologic diseases, it is not necessary to create synthetic textures in order to generate a full OCT image, as we did in Danesh et al.24 
Figure 3.
Example of synthetic retinal layer boundaries in each class (HC, MS, and NMO). The color codes the height (y-value) of each boundary point.
Construction of Retinal Thickness Maps
Retinal thickness maps have a potential role in the diagnosis of neurodegenerative diseases.32–34 They reveal information implicit in the 3D OCT volumes by providing easily interpretable maps for each retinal layer. We, therefore, calculated the thickness of each retinal layer and of the whole retina as 2D maps to be used in the validation strategies discussed below. The thickness of each retinal layer is the distance between consecutive retinal boundaries; similarly, the thickness of the entire retina is obtained as the distance between the first and the last boundaries. Accordingly, macular thickness maps are calculated for all three data classes, demonstrated in Figures 4, 5, and 6 for selected retinal layers (mRNFL, GCIPL, RPE) and the total retinal thickness. The real thickness maps and corresponding synthetic maps are shown in the first and second rows, respectively. Each thickness map has a size of 512 × 25 pixels, but the examples in Figures 4 to 6 are resized to 500 × 500 pixels for better visualization. 
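Because each map is simply the vertical distance between consecutive boundary surfaces, the computation reduces to differences along the boundary axis; a minimal sketch under assumed array shapes:

```python
# Thickness maps from boundary surfaces (assumed shape: 8 boundaries x
# 25 B-scans x 512 A-scans, ordered from innermost to outermost boundary).
import numpy as np

def layer_thickness_maps(boundaries: np.ndarray) -> np.ndarray:
    """Returns (7, 25, 512): one 512 x 25 thickness map per retinal layer,
    each the distance between consecutive boundaries."""
    return np.diff(boundaries, axis=0)

def total_retinal_thickness(boundaries: np.ndarray) -> np.ndarray:
    """Total retina: distance between the first and last boundaries."""
    return boundaries[-1] - boundaries[0]
```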
Figure 4.
Example of thickness maps from mRNFL, GCIPL, RPE, and the total retina in an area of 6 by 6 mm around the fovea in the HC class. The upper row (a) shows the real data, and the lower row (b) shows the synthetic data with the proposed method.
Figure 5.
Example of thickness maps from mRNFL, GCIPL, RPE, and the total retina in an area of 6 by 6 mm around the fovea in the MS class. The upper row (a) shows the real data, and the lower row (b) shows the synthetic data with the proposed method.
Figure 6.
Example of thickness maps from mRNFL, GCIPL, RPE, and the total retina in an area of 6 by 6 mm around the fovea in the NMO class. The upper row (a) shows the real data, and the lower row (b) shows the synthetic data with the proposed method.
Validation Strategies
Three validation strategies are employed to check the usefulness and integrity of the synthesized data and to justify its use as an augmentation method in future research. The first two are statistical. First, we compute image histograms of the thickness maps for a given layer and compare these in real and synthesized maps. This checks that the range of thicknesses is comparable but does not evaluate their spatial pattern. To achieve that, we examine the cross-correlation and mean absolute difference between pairs of thickness maps and compare the distribution obtained for pairs of real maps with that obtained when one image is real and the other synthetic. The last strategy builds classification models to discriminate between HCs, MS, and NMO in order to assess our synthetic data generator as a data augmentation method. We compare the predictive performance of models trained using only real maps versus models trained from a set enriched with our synthetic data. 
Histogram-Based Validation
This validation strategy checks the correspondence between the histograms of the generated 2D maps (retinal thickness maps) and the histograms of the maps in the training set. For this purpose, retinal thickness maps are calculated for the retinal layers (mRNFL, GCIPL, INL, OPL, ONL, MEZ, and RPE). For each retinal layer with a thickness map of 512 × 25 pixels, the histogram is calculated to represent the distribution of thickness values by counting how many of the 512 × 25 pixels fall into each thickness interval. The histogram is then normalized to display relative frequencies, that is, the proportion of pixels falling into each thickness bin, with the sum of the heights equaling 1. Each normalized histogram can then be interpreted as a discretized probability distribution whose value at a given thickness gives the relative likelihood that a randomly chosen pixel of the thickness map takes a value close to that thickness. To quantify the similarity of pairs of normalized histograms in the real training set (H1) and synthetic data (H2), four measurements are used (a sketch implementing all four follows the list below). 
  • i. The correlation coefficient is used to determine the type (direct or inverse) and degree of the relationship between two discretized probability distributions, approximating the similarity of the histograms. This coefficient ranges between 1 and –1 (zero if no correlation exists):  
    \begin{eqnarray}&&C\left( {{H_1},{H_2}} \right) \nonumber\\ &&= \frac{{\mathop \sum \nolimits_I ({H_1}\left( I \right) - \overline {{H_1}} {\rm{\ }})({H_2}\left( I \right) - \overline {{H_2}} {\rm{\ }})}}{{\sqrt {\mathop \sum \nolimits_I {{({H_1}\left( I \right) - \overline {{H_1}} {\rm{\ }})}^2}\mathop \sum \nolimits_I {{({H_2}\left( I \right) - \overline {{H_2}} {\rm{\ }})}^2}} }}\end{eqnarray}
    (1)
    where \(\overline {{H_i}} ,\ i = [ {1,2} ]\) is the mean value of each histogram over the total number of histogram bins, and I denotes the bin number.
  • ii. The chi-square distance calculates the normalized square difference between two histograms, and for identical histograms, this distance equals zero:  
    \begin{equation}{\chi ^2}\left( {{H_1},{H_2}} \right) = \mathop \sum \nolimits_I \frac{{{{({H_1}\left( I \right) - {H_2}\left( I \right)\ )}^2}}}{{{H_1}\left( I \right)}}\end{equation}
    (2)
  • iii. The histogram intersection calculates the similarity of two discretized probability distributions (histograms), with possible values of the intersection lying between 0 (no overlap) and 1 (identical distributions).35 
    \begin{equation}I\left( {{H_1},{H_2}} \right) = \mathop \sum \nolimits_I \min ({H_1}\left( I \right),{H_2}\left( I \right)\ )\end{equation}
    (3)
  • iv. The Hellinger distance is related to the Bhattacharyya coefficient BC(H1,H2) and is used to quantify the similarity between two probability distributions.36 The maximum Hellinger distance is 1, and in the case of best match with a Bhattacharyya coefficient of 1, the Hellinger distance is 0.  
    \begin{equation}BC\left( {{H_1},{H_2}} \right) = \ \mathop \sum \nolimits_I \sqrt {{H_1}\left( I \right)\ {H_2}\left( I \right)} \ \end{equation}
    (4)
     
    \begin{equation}H\left( {{H_1},{H_2}} \right) = \sqrt {1 - BC\left( {{H_1},{H_2}} \right)} \end{equation}
    (5)
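As referenced above, the sketch below implements the four measurements of Equations 1 to 5 on normalized histograms H1 and H2 (1D arrays summing to 1); the small epsilon guarding against empty bins in the chi-square distance is our addition.

```python
# Four histogram-similarity measures (Equations 1-5).
import numpy as np

def correlation(h1: np.ndarray, h2: np.ndarray) -> float:
    """Equation 1: correlation coefficient between the two histograms."""
    d1, d2 = h1 - h1.mean(), h2 - h2.mean()
    return float(np.sum(d1 * d2) / np.sqrt(np.sum(d1**2) * np.sum(d2**2)))

def chi_square(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-12) -> float:
    """Equation 2: normalized square difference; 0 for identical histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + eps)))

def intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Equation 3: overlap between the distributions, in [0, 1]."""
    return float(np.sum(np.minimum(h1, h2)))

def hellinger(h1: np.ndarray, h2: np.ndarray) -> float:
    """Equations 4-5: Hellinger distance via the Bhattacharyya coefficient."""
    bc = np.sum(np.sqrt(h1 * h2))
    return float(np.sqrt(max(0.0, 1.0 - bc)))
```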
Pairwise Comparisons Between Thickness Maps: Cross-Correlation and Mean Absolute Error
We carry out this validation on healthy controls. We have 30 OCT stacks from healthy controls that were used to synthesize 100 synthetic stacks (we refer to these 30 real stacks as the “training set”) and a further 100 OCT stacks from healthy controls that were not used in synthesizing the data (the “validation set”). To assess the degree of variation between thickness maps in healthy controls, we computed the maximum value of the cross-correlation between pairs of thickness maps, one taken from the validation set and one from the training set. This gave us a “real” distribution of maximum cross-correlations that we could then compare with synthetic data (i.e., the distribution of values when one thickness map is taken from the validation set and the other from the synthetic set). We also carried out a similar analysis using the mean absolute error (the difference in the thickness of a given layer between a pair of maps). Again, the training and validation real maps were used to assess the distribution of mean absolute error in real maps, and this was then compared with the distribution obtained when synthetic maps were used instead of real maps. We also carried out this analysis comparing the synthetic maps with the training set they were generated from, in order to see whether the synthetic maps matched these more closely than the validation set. 
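A sketch of the two pairwise metrics follows; the zero-mean, norm-based normalization of the peak cross-correlation is our assumption about how the maximum cross-correlation value was computed.

```python
# Pairwise comparison of two 512 x 25 thickness maps: peak normalized
# cross-correlation and mean absolute error.
import numpy as np
from scipy.signal import correlate2d

def peak_cross_correlation(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Maximum of the normalized 2D cross-correlation between the maps."""
    a, b = map_a - map_a.mean(), map_b - map_b.mean()
    xcorr = correlate2d(a, b, mode="full")
    return float(xcorr.max() / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_absolute_error(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Mean absolute difference in layer thickness between the two maps."""
    return float(np.abs(map_a - map_b).mean())
```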
Synthetic OCT Image Generation as a Data Augmentation Strategy for Diagnostic Tasks
The last validation strategy uses a standard classification task to validate the ability of the synthesis model to serve as a data augmentation method. We train classification models using training sets containing only real maps versus training sets augmented with our synthetic data, in order to determine whether the latter leads to better predictive performance in distinguishing either MS or NMO from HC. Of the available eight retinal thickness maps, the first and second maps (from mRNFL and GCIPL) and the total macular thickness are the layers that discriminate best between HC, MS, and NMO, according to the literature.9–16 Each of these three thickness maps has a size of 512 × 25 pixels, and we used PCA to reduce the dimension of each map to 5. Overall, we construct a 15D space as input features for the classification models. We train two types of binary classifiers, one to discriminate HC from MS and one to discriminate HC from NMO, using a support vector machine (SVM) with a radial basis function kernel in both cases. Stratified fivefold cross-validation was used to evaluate the predictive performance of these models, with nested cross-validation for hyperparameter tuning (C and gamma) based on grid search. The partition into folds was done using the real data only, and in cross-validation iterations when a fold was used for training, we enhanced it with our synthetic data for the experiments with augmentation. This ensured that all test predictions were done on the real data and that the experiments with or without augmentation used the same partitions. A sketch of this fold handling is given below. 
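The sketch illustrates the key point above: synthetic examples are added only to the training split, so every test prediction is made on real data. The `synthesize` callable stands in for the ASM generator and is hypothetical in signature, as is the hyperparameter grid.

```python
# Fold handling for the augmentation experiments (illustrative sketch).
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def evaluate_with_augmentation(X_real, y_real, synthesize, n_synth):
    """X_real: (n, 15) features (5 PCA components per map x 3 maps);
    y_real: binary labels (e.g., HC vs. MS)."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X_real, y_real):
        X_tr, y_tr = X_real[train_idx], y_real[train_idx]
        # Enrich only the training split with synthetic examples.
        X_syn, y_syn = synthesize(X_tr, y_tr, n_synth)
        X_aug = np.vstack([X_tr, X_syn])
        y_aug = np.concatenate([y_tr, y_syn])
        # Nested CV over C and gamma via grid search (grid values are ours).
        grid = GridSearchCV(
            SVC(kernel="rbf"),
            {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]},
            cv=3,
        )
        grid.fit(X_aug, y_aug)
        # Test predictions are always made on real, held-out data.
        scores.append(grid.score(X_real[test_idx], y_real[test_idx]))
    return float(np.mean(scores))
```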
Results
The results from the three proposed validation strategies are presented in this section. Overall, these aim to determine whether the synthesized OCT boundaries faithfully represent the real ones and whether they can be used for augmentation in future research. 
Results of Histogram-Based Validation
Four metrics, including correlation coefficient, chi-square distance, histogram intersection, and Hellinger distance, are used to quantify the similarity of pairs of normalized histograms in the real training set (H1) and synthetic data (H2). 
Figure 7 shows three samples of the generated data with corresponding boundaries in three classes. It should be emphasized that the boundaries are the only necessary data in this article due to the focus on retinal thickness maps rather than B-scans. Therefore, the synthetic images in Figure 7 are provided only for illustrative purposes and are not used in validation stages. 
Figure 7.
Three examples of synthetic images and corresponding boundaries from the three classes.
Five real OCT volumes are randomly selected from each class (HC, MS, and NMO), and 25 three-dimensional retinal layer boundary images are synthesized per class. From each 3D boundary image, thickness maps of seven retinal layers (mRNFL, GCIPL, INL, OPL, ONL, MEZ, RPE) and the total macular thickness are calculated (each with a size of 512 × 25 pixels). The five thickness maps of real data (5 × 512 × 25 values) and 25 thickness maps of synthetic data (25 × 512 × 25 values) are then fed into a two-sample t-test. The P values reported in Table 1 show that real and synthetic data are not significantly different for any layer. 
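A minimal sketch of this per-layer comparison, treating all pixel thickness values of one layer as the two samples:

```python
# Two-sample t-test on per-layer thickness values (as in Table 1).
from scipy.stats import ttest_ind

def layer_p_value(real_maps, synthetic_maps) -> float:
    """real_maps: 5 x 512 x 25 thickness values for one layer;
    synthetic_maps: 25 x 512 x 25 values. Returns the t-test P value."""
    _, p = ttest_ind(real_maps.ravel(), synthetic_maps.ravel())
    return float(p)
```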
Table 1.
Comparison of Thickness of Different Macular Layers Between Groups (Mean ± SD)
The normalized histograms are also compared between different pairs of real training data in Table 2. Table 3 reports the comparison between real and synthetic data from the same class. Finally, Table 4 reports the comparison between real and synthetic data from different classes. 
Table 2.
Averaged Metrics Between All Possible Pairs of Normalized Histograms in the Real Training Set (Values from First and Second Maps [mRNFL and GCIPL] and the Total Macular Thickness)
Table 3.
Averaged Metrics Between All Possible Pairs of Normalized Histograms in the Two Groups of the Real Training Set and Synthetic Data from the Same Class (Values from First and Second Maps [mRNFL and GCIPL] and the Total Macular Thickness)
Table 4.
Averaged Metrics Between Pairs of Normalized Histograms in the Two Groups of the Real Training Set and Synthetic Data from Different Classes (Values from First and Second Maps [mRNFL and GCIPL] and the Total Macular Thickness)
Results of Pairwise Comparisons Between Thickness Maps
Figure 8 shows the distribution of the peak cross-correlation (left column) and the mean absolute error (right column) when comparing pairs of thickness maps for different layers. The green histograms give the distribution expected for real maps, as a reference. The red histograms show the distributions obtained when comparing synthetic maps with real maps (from the cross-validation set; i.e., real maps that were not used in the generation of the synthetic maps). Ideally, these distributions would be identical, with a Kolmogorov–Smirnov (KS) “D” statistic of 0, but in fact they differ (KS-D shown on each panel; all are highly significant). However, the deviations are generally fairly small, and the modes are generally similar. 
Figure 8.
Comparison of pairwise cross-correlations (left column) and mean absolute errors (right column) for real versus synthetic thickness maps of the different retinal layers (rows). The green histograms show the distribution of the specified metric comparing pairs of real thickness maps: one from the cross-validation set of 100 maps and one from the training set of 30 maps. All pairs were used, so the histogram is based on 3000 pairwise comparisons. The red histograms show the distribution for pairs where one is again from the cross-validation set and one is from the set of 100 synthetic maps generated from the training set (10,000 comparisons). Ideally, the red and green distributions would be identical, but they are not. The blue histograms show the distribution for pairs where one is from the training set and one is synthetic (3000 comparisons). The text in red shows the Kolmogorov–Smirnov “D” statistic between the red and green, blue and green, and blue and red distributions (all are very highly significantly different from zero given the large number of pairwise comparisons).
The blue histograms show the distributions when synthetic maps are compared with the training set of real maps they were generated from. One might have expected these to agree more closely, producing higher cross-correlations and lower mean absolute errors. However, no such effect is apparent, and the KS-D between this distribution and the real/real green distribution is in fact slightly smaller than when the cross-validation set is used. The KS-D between the red and blue distributions is shown last, in pink. Unsurprisingly, these values are much lower, since we are now comparing two real/synthetic distributions, although they are in fact still significant. 
Overall, this analysis revealed significant differences between thickness maps derived from real versus synthetic boundaries. However, the agreement seems acceptable: the distribution of the pairwise comparison metrics is very similar for pairs of real maps and for real/synthetic pairs. 
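For reference, the KS “D” statistic between two empirical distributions of pairwise metrics can be computed directly; a minimal sketch:

```python
# Kolmogorov-Smirnov "D" statistic between two metric distributions,
# e.g., real/real versus real/synthetic cross-correlation values.
from scipy.stats import ks_2samp

def ks_d(real_real_values, real_synth_values) -> float:
    d, _ = ks_2samp(real_real_values, real_synth_values)
    return float(d)
```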
Results of Classification-Based Validation
A pilot evaluation is done using two binary SVM classifiers with radial basis function kernels (one to discriminate HC from MS and one to discriminate HC from NMO), with the hyperparameters tuned using grid search. From each of the three classes (HC, MS, and NMO), thirty 3D OCT volumes were randomly selected, and the dimension of each OCT volume is reduced to 15 using the PCA algorithm (as elaborated above). The classification algorithms are trained twice, once with real data only and once with real data augmented with the synthesized data. Stratified fivefold nested cross-validation is used for splitting training/test partitions. 
For classification of MS from HC, in the first trial, only the real training folds of the fivefold cross-validation were used for training. The metrics are reported in Table 5 (real training data). In the second trial, in each splitting iteration, the 24 OCT scans per class assigned to the training data by the fivefold split were selected. The 24 HC OCT scans were used to train one synthesis model, and the 24 MS OCT scans were used to train another, each producing new maps similar to its own inputs. Different numbers of synthesized maps were produced by the synthesis model for each category and added to the sets of 24 real maps. The metrics from the second trial are also reported in Table 5. 
Table 5.
Comparisons of Classification Results for MS/HC Discrimination Between the Real Training Data and Real Data Plus Synthetic OCTs with the Proposed Method, for Different Numbers of Synthetic OCTs
Furthermore, to compare the performance of the method with oversampling methods such as SMOTE,37 instead of synthesizing full 3D OCT data and then generating thickness maps to extract the final feature vector for the classification model, we directly resampled from the thickness-map features of the annotated training set to generate additional training points; the results are shown in Figure 9. For this purpose, we used SMOTE to produce the synthetic points, generating the same number of synthetic points as our method to allow a fair comparison. Accuracy and F1 score are compared, and the results indicate that, although using SMOTE in this way also effectively enhances the training set, the samples generated by our method lead to better performance in all scenarios. The rationale behind this result is that the oversampling technique assumes the feature space behaves as a Euclidean space in which each axis is equally relevant and distances are meaningful.38 The thickness maps may not fully satisfy this assumption. 
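A sketch of this SMOTE baseline in the 15-dimensional feature space, using imblearn; the per-class sampling amounts are illustrative, chosen to match the number of points our method produces.

```python
# SMOTE baseline: oversample directly in the 15-D feature space instead of
# synthesizing full 3D boundaries (sampling amounts are illustrative).
import numpy as np
from imblearn.over_sampling import SMOTE

def smote_augment(X_tr, y_tr, n_extra_per_class: int):
    """Grow each class by n_extra_per_class synthetic feature vectors,
    matching the count produced by the ASM-based method."""
    classes, counts = np.unique(y_tr, return_counts=True)
    target = {c: int(n) + n_extra_per_class for c, n in zip(classes, counts)}
    return SMOTE(sampling_strategy=target).fit_resample(X_tr, y_tr)
```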
Figure 9.
Classification performance for MS/HC discrimination for different training sets: real training data, real data with different numbers of additional synthetic OCTs, and real data with different numbers of data added via SMOTE.
The same procedure was repeated for classification of HC and NMO data, and the results are shown in Table 6 and Figure 10. 
Table 6.
Comparisons of Classification Results for NMO/HC Discrimination Between the Real Training Data and Real Data Plus Synthetic OCTs with the Proposed Method, for Different Numbers of Synthetic OCTs
Figure 10.
Classification performance for NMO/HC discrimination for different training sets: real training data, real data with different numbers of additional synthetic OCTs, and real data with different numbers of data added via SMOTE.
Discussion and Conclusion
In this study, we have shown (1) that the proposed 3D ASM synthesis model can generate realistic-appearing synthetic maps of retinal layer boundaries, (2) that the histogram-based validation shows the histograms of the generated 2D maps (retinal thickness maps) correspond to the histograms of the maps in the training database, (3) that cross-correlations between the generated and real 2D maps show the validity of the synthetic data, and (4) that a standard classification-based validation confirms the efficacy of synthetic data as an augmentation method for training classification algorithms. 
The proposed model is general, and by changing the weight vector, it can generate new examples of different diagnostic classes. However, realistic shapes (resembling layer boundaries from real OCT maps) are ensured by imposing limits on the weight vector. Figures 4 to 6 depict synthetic 2D thickness maps and show their similarity to real data. The similarity between real and synthetic data in horizontal B-scans is also demonstrated in Figure 7. Despite the subtlety of the differences between the three classes (HC, MS, NMO), which often elude expert clinicians, the numerical evaluations show that the proposed model is able to capture them. 
The normalized histograms are compared between different pairs of the data in Tables 2, 3, and 4. Higher correlation coefficients and histogram intersections, as well as lower chi-square distances and Hellinger distances, indicate greater similarity in the compared probability distributions. Comparison of Table 2 with Table 3 indicates that the similarity between all synthetic–real pairs of histograms in the same disease class is as great as the similarity between real–real histogram pairs. In Table 4, pairs of normalized histograms are drawn from the real and synthetic data, across disease classes. These values indicate that real MS and synthetic NMO data are similar in their normalized thickness histograms for total retina, mRNFL, and GCIPL. However, the difference between HC and disease data is captured by the synthetic data, in that the similarity is reduced between HC (real)/MS (synthetic) and HC (real)/NMO (synthetic) total macular volume and GCIPL thickness, in keeping with earlier clinical studies9–16 that indicate the greatest effect of disease on those parameters. 
The cross-correlation analysis is more sensitive, because it assesses not only the distribution of thickness values in a layer but also the pattern of thickness across the retina. This method did reveal small but significant differences between the synthetic maps and real maps. More work will be needed to understand and correct these discrepancies. 
Even in its current state, the proposed model is demonstrably useful as an augmentation method, since including synthetic examples improved classification performance. Our method also avoids the problem of mode collapse observed in other synthetic image construction algorithms such as generative adversarial networks (GANs): provided different weights are used, the ASM is guaranteed to generate different synthetic data points. The internal diversity of the method was also demonstrated in our correlation-based validation: the distribution of cross-correlation values was similar whether synthetic thickness maps were compared with (a) the real “training” maps used to generate them (blue histograms in Fig. 8) or with (b) a distinct set of real “cross-validation” maps (red histograms). If the synthetic maps remained very close to the real maps used to generate them, we would have expected systematically higher correlations in the former comparison. Furthermore, the complexity of the proposed method is lower than that of GAN methods, and it is able to work with a very small training data set, which is not practical for GANs. 
Finally, one important outcome of the proposed synthetic data is that we can provide large amounts of data for training ML algorithms without the privacy concerns that affect real human data, since retinal images are considered protected health information.39 The method could be expanded to produce synthetic OCT image data reflecting different characteristics such as age, ethnicity, and disease severity (e.g., for use in tele-education platforms).40 Thus, the ASM model has potential for aiding the future education of neuro-ophthalmology trainees. 
In conclusion, the proposed method for generating synthetic OCT data can address the problem of limited data sets from patients with neurodegenerative disease, which could help us apply ML approaches even when OCT data are limited. The degree of matching between corresponding layers in synthesized data reflects the realism of the synthetic data and justifies its use to augment real data in future research. 
Acknowledgments
Independent research funded by the National Institute for Health Research and NHSX (Artificial Intelligence, “OCTAHEDRON: Optical Coherence Tomography Automated Heuristics for Early Diagnosis via Retina in Ophthalmology and Neurology,” AI_AWARD01976). 
Disclosure: H. Danesh, None; D.H. Steel, None; J. Hogg, None; F. Ashtari, None; W. Innes, None; J. Bacardit, None; A. Hurlbert, None; J.C.A. Read, None; R. Kafieh, None 
References
Garcia-Martin E, Rodriguez-Mena D, Herrero R, et al. Neuro-ophthalmologic evaluation, quality of life, and functional disability in patients with MS. Neurology. 2013; 81(1): 76–83. [CrossRef] [PubMed]
Rebolleda G, González-López JJ, Muñoz-Negrete FJ, Oblanca N, Costa-Frossard L, Álvarez-Cermeño JC. Color-code agreement among stratus, cirrus, and spectralis optical coherence tomography in relapsing-remitting multiple sclerosis with and without prior optic neuritis. Am J Ophthalmol. 2013; 155(5): 890–897.e2. [CrossRef] [PubMed]
Waldman AT, Liu GT, Lavery AM, et al. Optical coherence tomography and visual evoked potentials in pediatric MS. Neurol Neuroimmunol Neuroinflamm. 2017; 4(4): e356. [CrossRef] [PubMed]
Jindahra P, Hedges TR, Mendoza-Santiesteban CE, Plant GT. Optical coherence tomography of the retina: applications in neurology. Curr Opin Neurol. 2010; 23(1): 16–23. [CrossRef] [PubMed]
Huang D, Swanson EA, Lin CP, et al. Optical Coherence Tomography. Cambridge: Massachusetts Institute of Technology, Whitaker College of Health Sciences and Technology; 1993.
Geraldes R, Ciccarelli O, Barkhof F, et al. The current role of MRI in differentiating multiple sclerosis from its imaging mimics. Nat Rev Neurol. 2018; 14(4): 199. [CrossRef] [PubMed]
Sinnecker T, Schumacher S, Mueller K, et al. MRI phase changes in multiple sclerosis vs neuromyelitis optica lesions at 7T. Neurol Neuroimmunol Neuroinflamm. 2016; 3(4): e259. [CrossRef] [PubMed]
Parisi V, Manni G, Spadaro M, et al. Correlation between morphological and functional retinal impairment in multiple sclerosis patients. Invest Ophthalmol Vis Sci. 1999; 40(11): 2520–2527. [PubMed]
Manogaran P, Vavasour IM, Lange AP, et al. Quantifying visual pathway axonal and myelin loss in multiple sclerosis and neuromyelitis optica. NeuroImage Clin. 2016; 11: 743–750. [CrossRef] [PubMed]
Garcia-Martin E, Pueyo V, Ara J, et al. Effect of optic neuritis on progressive axonal damage in multiple sclerosis patients. Mult Scler J. 2011; 17(7): 830–837. [CrossRef]
Hanson JV, Lukas SC, Pless M, Schippling S. Optical coherence tomography in multiple sclerosis. Semin Neurol. 2016; 36(2): 177–184. [CrossRef] [PubMed]
Albrecht P, Ringelstein M, Müller A, et al. Degeneration of retinal layers in multiple sclerosis subtypes quantified by optical coherence tomography. Mult Scler J. 2012; 18(10): 1422–1429. [CrossRef]
Petzold A, de Boer JF, Schippling S, et al. Optical coherence tomography in multiple sclerosis: a systematic review and meta-analysis. Lancet Neurol. 2010; 9(9): 921–932. [CrossRef] [PubMed]
Ashtari F, Ataei A, Kafieh R, et al. Optical coherence tomography in neuromyelitis optica spectrum disorder and multiple sclerosis: a population-based study. Mult Scler Relat Disord. 2021; 47: 102625. [CrossRef] [PubMed]
Syc SB, Saidha S, Newsome SD, et al. Optical coherence tomography segmentation reveals ganglion cell layer pathology after optic neuritis. Brain. 2012; 135(2): 521–533. [CrossRef] [PubMed]
Saidha S, Syc SB, Ibrahim MA, et al. Primary retinal pathology in multiple sclerosis as detected by optical coherence tomography. Brain. 2011; 134(2): 518–533. [CrossRef] [PubMed]
De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018; 24(9): 1342–1350. [CrossRef] [PubMed]
Yoon J, Han J, Park JI, et al. Optical coherence tomography-based deep-learning model for detecting central serous chorioretinopathy. Sci Rep. 2020; 10(1): 1–9. [PubMed]
Montuoro A, Waldstein SM, Gerendas B, Langs G, Simader C, Schmidt-Erfurth U. Statistical retinal OCT appearance models. Invest Ophthalmol Vis Sci. 2014; 55(13): 4808.
Serranho P, Maduro C, Santos T, Cunha-Vaz J, Bernardes R. Synthetic oct data for image processing performance testing. In: 2011 18th IEEE International Conference on Image Processing. IEEE; 2011: 401–404.
Varnousfaderani ES, Vogl W-D, Wu J, et al. Improve synthetic retinal OCT images with present of pathologies and textural information. In: Medical Imaging 2016: Image Processing. International Society for Optics and Photonics; 2016: 97843V.
Kulkarni P, Lozano D, Zouridakis G, Twa M. A statistical model of retinal optical coherence tomography image data. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2011: 6127–6130.
Danesh H, Maghooli K, Dehghani A, Kafieh R. Automatic production of synthetic labelled OCT images using an active shape model. IET Image Proc. 2020; 14(15): 3812–3818. [CrossRef]
Danesh H, Maghooli K, Dehghani A, Kafieh R. Synthetic OCT data in challenging conditions: three-dimensional OCT and presence of abnormalities. Med Biol Eng Comput. 2022; 60(1): 189–203. [CrossRef] [PubMed]
Cootes TF, Taylor CJ, Cooper DH, Graham J. Active shape models—their training and application. Comput Vis Image Understanding. 1995; 61(1): 38–59. [CrossRef]
Kafieh R, Rabbani H, Hajizadeh F, Abramoff MD, Sonka M. Thickness mapping of eleven retinal layers segmented using the diffusion maps method in normal eyes. J Ophthalmol. 2015; 2015: 259123. [CrossRef] [PubMed]
Kafieh R, Rabbani H, Abramoff MD, Sonka M. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map. Med Image Anal. 2013; 17(8): 907–928. [CrossRef] [PubMed]
Montazerin M, Sajjadifar Z, Khalili Pour E, et al. Livelayer: A semi-automatic software program for segmentation of layers and diabetic macular edema in optical coherence tomography images. Sci Rep. 2021; 11(1): 1–13.
Cruz-Herranz A, Balk LJ, Oberwahrenbrock T, et al. The APOSTEL recommendations for reporting quantitative optical coherence tomography studies. Neurology. 2016; 86(24): 2303–2309. [CrossRef] [PubMed]
Ross A. Procrustes Analysis. Course report. Columbia: Department of Computer Science and Engineering, University of South Carolina; 2004.
Cootes TF, Taylor CJ. Statistical Models of Appearance for Computer Vision. Technical report. Manchester, UK: University of Manchester; 2004.
den Haan J, Verbraak FD, Visser PJ, Bouwman FH. Retinal thickness in Alzheimer's disease: a systematic review and meta-analysis. Alzheimers Dement (Amst). 2017; 6: 162–170. [CrossRef] [PubMed]
Schneider E, Zimmermann H, Oberwahrenbrock T, et al. Optical coherence tomography reveals distinct patterns of retinal damage in neuromyelitis optica and multiple sclerosis. PLoS One. 2013; 8(6): e66151. [CrossRef] [PubMed]
Mujat M, Chan RC, Cense B, et al. Retinal nerve fiber layer thickness map determined from optical coherence tomography images. Optics Express. 2005; 13(23): 9480–9491. [CrossRef] [PubMed]
Lee S, Xin JH, Westland S. Evaluation of image similarity by histogram intersection. Color Res Appl. 2005; 30(4): 265–274. [CrossRef]
Hellinger E. Neue begründung der theorie quadratischer formen von unendlichvielen veränderlichen. J Reine Angewandte Mathematik. 1909; 1909(136): 210–271. [CrossRef]
Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artificial Intelligence Res. 2002; 16: 321–357. [CrossRef]
Manchev N. ML internals: Synthetic Minority Oversampling (SMOTE) Technique. Accessed July 8, 2022.
Farzin H, Abrishami-Moghaddam H, Moin M-S. A novel retinal identification system. EURASIP J Adv Signal Process. 2008; 2008: 1–10. [CrossRef]
Caffery LJ, Taylor M, Gole G, Smith AC. Models of care in tele-ophthalmology: a scoping review. J Telemed Telecare. 2019; 25(2): 106–122. [CrossRef] [PubMed]