Open Access
Special Issue  |   April 2020
Fast and Automated Hyperreflective Foci Segmentation Based on Image Enhancement and Improved 3D U-Net in SD-OCT Volumes with Diabetic Retinopathy
Author Affiliations & Notes
  • Sha Xie
    School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Idowu Paul Okuwobi
    School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Mingchao Li
    School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Yuhan Zhang
    School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Songtao Yuan
    Department of Ophthalmology, First Affiliated Hospital with Nanjing Medical University, Nanjing, China
  • Qiang Chen
    School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Correspondence: Qiang Chen, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. e-mail: chen2qiang@njust.edu.cn 
  • Songtao Yuan, Department of Ophthalmology, First Affiliated Hospital with Nanjing Medical University, Nanjing, China. e-mail: yuansongtao@vip.sina.com 
Translational Vision Science & Technology April 2020, Vol.9, 21. doi:https://doi.org/10.1167/tvst.9.2.21
Abstract

Purpose: To design a robust and automated hyperreflective foci (HRF) segmentation framework for spectral-domain optical coherence tomography (SD-OCT) volumes, especially volumes with low HRF-background contrast.

Methods: HRF in retinal SD-OCT volumes often have low contrast, which makes their segmentation difficult. We therefore proposed a fully automated method for HRF segmentation in SD-OCT volumes with diabetic retinopathy (DR). First, we generated enhanced SD-OCT images from the denoised SD-OCT images with an enhancement method. Then the enhanced images were cascaded with the denoised images as a two-channel input to the network to counter the low contrast of HRF. Finally, we replaced the standard convolution with slice-wise dilated convolution in the last layer of the encoder path of 3D U-Net to capture long-range information.

Results: We evaluated our method using two-fold cross-validation on 33 SD-OCT volumes from 27 patients. The average dice similarity coefficient was 70.73%, significantly higher than that of the existing methods (P < 0.01).

Conclusions: Experimental results demonstrated that the proposed method is faster and achieves more reliable segmentation results than the current HRF segmentation algorithms. We expect that this method will contribute to clinical diagnosis and disease surveillance.

Translational Relevance: Our framework for the automated HRF segmentation of SD-OCT volumes may improve the clinical diagnosis of DR.

Introduction
Diabetic retinopathy (DR), a common microvascular complication of diabetes, not only impairs vision but also increases the risk of life-threatening systemic vascular complications.1 Hyperreflective foci (HRF) are one of the manifestations of diabetes in the retina. HRF are morphological signs of accumulated lipid extravasation, proteinaceous material, and inflammatory cells, and consequently precursors of hard exudates.2,3 They are small and scattered throughout all retinal layers, but are mainly located in the outer retinal layers around fluid accumulation in the intraretinal cystoid spaces.1 Previous work indicated that the number and location of HRF may predict the ultimate treatment outcome for diabetic disease.4,5 Recent studies have shown that the number of HRF increases significantly with the severity of DR, and that HRF are reduced in diabetic patients after treatment and are positively correlated with visual acuity results.6–8 In addition, it has been reported that quantitative changes in HRF can be used to evaluate the effectiveness of medications.9 The correlation between the average number of HRF and the severity of DR indicates that quantifying HRF may be used to estimate the severity of DR and identify eyes that require further tests or treatment.8 Therefore, accurate HRF segmentation is of great significance for monitoring disease progression and treatment response. 
Spectral-domain optical coherence tomography (SD-OCT) has gradually become the main imaging modality because of its fast scanning, high resolution, and high signal-to-noise ratio.10,11 As shown in Figure 1, HRF present as speckles with the following characteristics in SD-OCT images: (1) high and nonuniform intensities, (2) irregular shapes, (3) varying sizes, (4) blurry boundaries, and (5) scattering between the nerve fiber layer/ganglion cell layer (NFL/GCL) and the inner segments/outer segments (IS/OS). Given these characteristics, manual HRF segmentation is error-prone and time-consuming. Thus it is necessary to develop automated HRF segmentation methods to assist clinical diagnosis. 
Figure 1.
 
(a) One B-scan of an SD-OCT volume. The HRF are located between the NFL/GCL and the IS/OS. (b) Scaled-up local region with HRF. The HRF have high and nonuniform intensities, irregular shapes, and varying sizes. (c) Scaled-up local region with HRF. The HRF have blurry boundaries. Two retinal layers (NFL/GCL and IS/OS) are marked with yellow arrows, and HRF are marked with red arrows.
To the best of our knowledge, little prior work focuses on fully automated HRF segmentation in SD-OCT retinal volumes. Okuwobi et al.12 apply an automated grow-cut algorithm to segment and quantify HRF; accurate HRF segmentation is difficult for such traditional methods because of the blurry boundaries and nonuniform intensities. Another, component tree-based HRF segmentation method is proposed by Okuwobi et al.,13 consisting of two parallel processes: region-of-interest generation and HRF estimation. The processes are complicated, and the method is not robust enough because it relies on handcrafted features. Yu et al.14 modify GoogLeNet and train a patch-based classifier to label each pixel as HRF or non-HRF. HRF make up only a small part of the images, which leads to an extreme imbalance between positive and negative samples; however, they only partially handle this class imbalance by random undersampling. Their results contain a significant number of false positives, which are reduced in a manually tuned postprocessing step, and patch-based classification is computationally expensive. Schlegl et al.15 utilize a ResUNet for HRF segmentation. Varga et al.16 apply image processing methods together with artificial neural networks, deep rectifier neural networks, and convolutional neural networks to learn from expert annotations and carry out HRF segmentation. Most HRF cross two to four B-scans in three-dimensional (3D) cubes, and HRF with low intensities and low contrast increase the segmentation difficulty, as shown in Figure 2b; however, these two methods15,16 do not take this into account. 
Figure 2.
 
(a) One B-scan with high HRF-background contrast. (b) One B-scan with low HRF-background contrast. HRF are marked with red arrows.
In this article, we proposed a fully automated deep learning method for HRF segmentation in SD-OCT volumes with DR. To improve the HRF segmentation in the low-contrast images, we generated the additional enhanced images by an image enhancement algorithm and cascaded the enhanced images with denoised images as the input to the network. To obtain long-range information and capture more robust feature representation, we modified the 3D U-Net17 by replacing the standard 3D convolution with slice-wise dilated convolution in the last layer of the encoder path. 
Our proposed method provides more accurate HRF segmentation results than the existing methods. Specifically, the main contributions of our article can be summarized as follows: 
  1. The low-contrast HRF segmentation was improved by combining the enhanced images.
  2. The integration of 3D U-Net architecture and two-dimensional (2D) dilated convolution captured not only spatial information but also multiscale and long-range information, which improved the HRF segmentation accuracy.
  3. Our method achieved the highest segmentation accuracy and the lowest time cost among the state-of-the-art methods.
Methodology
Overview
The aim of this study was to build a deep learning model that segments HRF in SD-OCT volumes and performs well on low-contrast images. Figure 3 shows an overview of the proposed method. The raw images were denoised and enhanced, and the denoised and enhanced images were cropped into small voxel tiles to form the training and validation datasets. The denoised images were the first channel of the input tensors, and the corresponding enhanced images the second channel. After inference on a cube from the test dataset, all outputs were recomposed into a complete HRF segmentation result. Final results were obtained by eliminating false targets outside the NFL/GCL and IS/OS layers. 
Figure 3.
 
Overview of the proposed method.
Materials
An SD-OCT cube covers a 6 (horizontal) × 6 (vertical) × 2 (axial) mm3 area centered on the fovea, corresponding to 512 × 128 × 1024 voxels. A total of 33 SD-OCT cubes from 27 patients diagnosed with varying severities of retinopathy were included in the study. All SD-OCT cubes were acquired with a Cirrus SD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA). The ground truth was generated by two annotators, and its quality was assessed by an expert. This study was conducted in conformity with the research ethics requirements of the institutional review board of the First Affiliated Hospital of Nanjing Medical University. The research was approved by an institutional human subjects committee and followed the tenets of the Declaration of Helsinki. 
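The physical size of one voxel follows directly from these scan parameters, and the strong anisotropy between the B-scan spacing and the in-plane sampling is worth noting (a trivial computation; the helper name is ours):

```python
# One Cirrus SD-OCT cube: 6 x 6 x 2 mm^3 sampled as 512 x 128 x 1024 voxels.
def voxel_resolution_mm(extent_mm, n_voxels):
    """Physical length of one voxel along a scan axis, in millimeters."""
    return extent_mm / n_voxels

horizontal = voxel_resolution_mm(6.0, 512)   # per A-scan within a B-scan
vertical   = voxel_resolution_mm(6.0, 128)   # spacing between B-scans
axial      = voxel_resolution_mm(2.0, 1024)  # per depth sample

print(horizontal, vertical, axial)
```

The ~4x coarser sampling across B-scans (vertical) than within them is one reason the depth direction is treated differently from the in-plane directions later in the network.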
Image Enhancement
First, the raw images were denoised with a bilateral filter to reduce image noise. Some images showed low HRF-background contrast, which led to severe undersegmentation. Thus we applied an enhancement algorithm over the whole dataset to enhance HRF in the SD-OCT images. The enhancement algorithm is shown in Figure 4. The sigmoid transfer function controls the range compression of the input image. Histogram equalization is applied to the output of the sigmoid function, and the resulting image is processed with an orthogonal transform and a log transform. At the same time, a parallel process applies the same two transform-domain functions. Histogram matching combines the two parallel processes through data mapping, and the inverse log and inverse orthogonal transforms are then applied to the mapped data. The results after enhancement are shown in Figure 5. 
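The first two stages of this pipeline can be sketched as follows (a minimal numpy sketch; the gain/cutoff values and the toy low-contrast B-scan are illustrative, and the transform-domain fusion stages are omitted):

```python
import numpy as np

def sigmoid_compress(img, gain=10.0, cutoff=0.5):
    """Sigmoid transfer function: range compression of img (float in [0, 1])."""
    return 1.0 / (1.0 + np.exp(gain * (cutoff - img)))

def histogram_equalize(img, n_bins=256):
    """Classic histogram equalization via the normalized intensity CDF."""
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize so the brightest occupied bin maps to 1
    return np.interp(img.ravel(), bin_edges[:-1], cdf).reshape(img.shape)

# Toy low-contrast B-scan: intensities clustered in a narrow band.
rng = np.random.default_rng(0)
bscan = 0.4 + 0.05 * rng.random((64, 64))
enhanced = histogram_equalize(sigmoid_compress(bscan))
```

After equalization the narrow intensity band is stretched toward the full [0, 1] range, which is the effect the paper relies on to raise HRF-background contrast.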
Figure 4.
 
Pipeline of the enhancement algorithm.
Figure 5.
 
(a) Raw image. (b) Denoised image. (c) Enhanced image.
Dataset Preparation
To reduce overfitting, our models were trained on small voxel tiles extracted from the cubes with a sliding window. In the horizontal and axial directions, the sampling stride was set to 32 in the training phase and 128 in the test phase. In the vertical direction, the stride was set to 3 (the last stride was set to 2 because 128 is not evenly divisible by 3). We set the size of the voxel tiles to 128 × 128 × 3 through comparative experiments. The first channel of the input was the denoised voxel tile, and the second channel was the corresponding enhanced voxel tile. Because HRF make up only a small part of the whole cube, we eliminated voxel tiles without any HRF from the training dataset to balance the positive and negative samples. 
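Under these strides, the tiling of one cube can be sketched as follows (plain Python; `tile_starts` is an illustrative helper that shifts the final window back to cover the edge of the axis, reproducing the last vertical stride of 2 mentioned above):

```python
def tile_starts(n, tile, stride):
    """Start offsets of a length-`tile` sliding window with step `stride`.
    The final window is shifted back so the end of the axis is covered."""
    starts = list(range(0, n - tile + 1, stride))
    if starts[-1] != n - tile:
        starts.append(n - tile)
    return starts

# Test-phase sampling of one 512 x 128 x 1024 cube (horizontal x B-scan x axial):
# stride 128 in the horizontal/axial directions, stride 3 across B-scans.
n_tiles = (len(tile_starts(512, 128, 128))   # 4 horizontal positions
           * len(tile_starts(128, 3, 3))     # 43 B-scan positions (last step is 2)
           * len(tile_starts(1024, 128, 128)))  # 8 axial positions
print(n_tiles)  # 4 * 43 * 8 = 1376 tiles per cube, i.e. 17 cubes -> 23,392 tiles
```

This matches the test-set tile counts reported below (23,392 tiles for 17 cubes).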
We tested all 33 SD-OCT cubes with a two-fold cross experiment to make the results more convincing. In experiment 1, 16 cubes (18,245 voxel tiles) were used for training and validation, and 17 cubes (23,392 voxel tiles) for testing. In experiment 2, 17 cubes (24,499 voxel tiles) were used for training and validation, and 16 cubes (22,016 voxel tiles) for testing. The ratio of training data to validation data was 50:1. None of the test images appeared in the training set, which ensured independence between cubes. 
Network Architecture
Most HRF cross two to four B-scan images in a 3D cube. The 3D U-Net can model spatial correlation and has shown its excellence in medical imaging tasks, so we used it as the basic architecture for HRF segmentation. It has an encoder and a decoder path, each with four resolution steps. In the encoder path, each layer except the last consists of two 3 × 3 × 3 convolutions, each followed by a rectified linear unit (ReLU), and then a 2 × 2 × 2 max pooling with a stride of one in the depth dimension and two in the other dimensions. In the decoder path, each layer contains a 4 × 4 × 4 upconvolution with a stride of one in the depth dimension and two in the other dimensions, followed by two 3 × 3 × 3 convolutions, each followed by a ReLU. Shortcut connections from layers of equal resolution in the encoder path provide the essential high-resolution features to the decoder path. In the last layer, a 1 × 1 × 1 convolution reduces the number of output channels to two. 
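Under these pooling strides the encoder's feature-map sizes evolve as follows (a shape walk only, assuming "same" padding in the depth direction so that the three B-scan slices are preserved; `encoder_shapes` is an illustrative helper):

```python
def encoder_shapes(depth=3, height=128, width=128, n_pools=3):
    """Feature-map sizes down the encoder: each 2x2x2 max pooling uses
    stride 1 in the B-scan (depth) direction and stride 2 in-plane,
    so only the in-plane dimensions shrink."""
    shapes = [(depth, height, width)]
    for _ in range(n_pools):
        height, width = height // 2, width // 2
        shapes.append((depth, height, width))
    return shapes

print(encoder_shapes())
# [(3, 128, 128), (3, 64, 64), (3, 32, 32), (3, 16, 16)]
```

Keeping the depth at 3 throughout the encoder is what makes the slice-wise treatment of the bottom layer possible.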
In this study, we believe that the high complexity of HRF brings difficulties for the 3D U-Net. Given the limited amount of data, increasing the depth of the network may not improve the segmentation results while it increases the number of parameters and the computational cost. Therefore we integrated dilated convolutions to capture multiscale and long-range information instead of adding pooling and upsampling layers. However, replacing all the standard convolutions with dilated convolutions was unrealistic. To enlarge the receptive field while keeping the computational complexity as low as possible, we added the dilated convolutions in the last layer of the encoder path. Because the 3D U-Net architecture already captured the spatial information and the depth of the input was only three, we used three different 2D dilated convolutions to convolve each depth slice of the voxel tile separately, namely slice-wise dilated convolution, and then stacked the outputs in the depth direction. The dilation rate was set to 2. With this structure, our network extracts both 3D and 2D features and thus obtains more robust feature representations than the other models. 
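A minimal numpy sketch of the slice-wise dilated convolution described above (naive loops rather than an optimized framework op; the kernel values and tile size are illustrative). With rate 2, a 3 × 3 kernel covers a 5 × 5 neighborhood:

```python
import numpy as np

def dilated_conv2d(img, kernel, rate=2):
    """'Same'-padded 2D convolution with a dilated kernel (naive tap loop)."""
    kh, kw = kernel.shape
    pad = rate * (kh // 2)
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(kh):          # accumulate one shifted copy per kernel tap,
        for j in range(kw):      # with taps spaced `rate` pixels apart
            out += kernel[i, j] * padded[i * rate : i * rate + img.shape[0],
                                         j * rate : j * rate + img.shape[1]]
    return out

def slicewise_dilated_conv(volume, kernels, rate=2):
    """Convolve each depth slice with its own 2D dilated kernel, then restack."""
    return np.stack([dilated_conv2d(s, k, rate) for s, k in zip(volume, kernels)])

volume = np.ones((3, 16, 16))             # toy (depth, H, W) feature map
kernels = [np.full((3, 3), 1.0 / 9)] * 3  # one averaging kernel per slice
out = slicewise_dilated_conv(volume, kernels)
print(out.shape)  # (3, 16, 16)
```

The depth dimension is untouched: each of the three slices is filtered independently and the outputs are stacked, exactly the structural idea of the slice-wise layer.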
Experimental Results and Analysis
We systematically compared our input with input 1 (only denoised voxel tiles) and input 2 (only enhanced voxel tiles) on our network to assess the performance of the enhancement algorithm. Second, we compared our network with 3D U-Net to evaluate the performance of the slice-wise dilated convolution. Finally, we compared our approach with several published methods, most of which use deep neural networks and one of which is a traditional segmentation method. Experiments were conducted under the TensorFlow framework using an NVIDIA GeForce GTX 1080Ti GPU (supplied by Kunqian, Nanjing, China). We used Adam as the optimizer with a learning rate of 1e-5, Softmax with weighted cross-entropy as the loss function, a batch size of 8, and 10,000 total iterations. 
Qualitative Analysis
Comparison of Different Inputs to Our Network
To assess the performance of the enhancement algorithm, we compared our input with input 1 (only denoised images) and input 2 (only enhanced images) on our network. As the first column of Figure 6 shows, all inputs performed well on the high-contrast images. On the low-contrast images, our method achieved the best performance, and the results of input 2 surpassed those of input 1, indicating that the enhancement algorithm improved performance. However, input 2 alone still cannot segment the HRF completely because the enhancement destroys retinal structure and loses information. 
Figure 6.
 
Segmentation results using different input to our network. Yellow arrows represent the regions of undersegmentation.
Comparison of Our Input to Different Network
To evaluate the performance of the slice-wise dilated convolution, we compared our network with 3D U-Net using our input. The segmentation results are shown in Figure 7. The results of 3D U-Net show more undersegmentation, and more oversegmentation in severe lesions. With dilated convolutions, our network obtains multiscale and long-range information, so it can segment HRF of varying sizes, distinguish different lesions, and achieve a better performance than 3D U-Net. 
Figure 7.
 
Segmentation results using our input to different network. Yellow arrows represent the regions of undersegmentation.
Comparison Against Existing Methods
To explore the performance of the proposed framework, we compared our approach with several published methods. Figure 8 compares the proposed method, the two Okuwobi et al.12,13 methods (the grow-cut-based and component tree-based methods), GoogLeNet based on patch-based classification,14 ResUNet,15 DUNet,18 FCN,19 and U-Net++.20 The input to the Okuwobi et al. methods12,13 and GoogLeNet14 was only input 1, because the former are traditional methods and the latter is a patch-based classifier under the Caffe framework; the input to the other methods was our input. On the high-contrast images shown in column 1 of Figure 8, the results of the grow-cut-based method12 and GoogLeNet14 show obvious undersegmentation, and the component tree-based method13 shows obvious oversegmentation because it merges two HRF into one. The other methods perform similarly. On the low-contrast images, our method performed better than all methods except the grow-cut-based method.12 However, the grow-cut-based method12 exhibits excessive oversegmentation before postprocessing when the images have serious lesions or low contrast, and its large number of false positives cannot be fully removed even in the postprocessing step. The other deep neural network methods are more prone to undersegmentation when the images have low contrast, and to oversegmentation when the images are seriously damaged. These experimental results show that our proposed method handles complicated and weak HRF structures better than the methods mentioned earlier. 
Figure 8.
 
Comparison between the proposed method and other methods. Yellow and green arrows represent the regions of undersegmentation and oversegmentation, respectively.
Although our method can accurately segment HRF in most of the SD-OCT DR volumes, it can oversegment in regions of edemas and vessels, as shown in Figure 9. Even so, our method achieves the best performance among the compared methods in these regions. 
Figure 9.
 
Examples of the oversegmentations with our method. (a) Oversegmentation caused by edemas, (b) oversegmentation caused by vessels.
Quantitative Analysis
The dice similarity coefficient (DSC) was used to quantitatively evaluate our method, which is defined as:  
\begin{eqnarray*} {\rm{DSC}}\left( {A,B} \right) = \frac{2\left| {A \cap B} \right|}{\left| A \right| + \left| B \right|} \end{eqnarray*}
where A is the automated segmentation result, and B is the corresponding ground truth. We also used Precision and Recall to evaluate our method, which are defined as:  
\begin{eqnarray*}{\rm{Precision}} = \frac{{TP}}{{TP + FP}}\end{eqnarray*}
 
\begin{eqnarray*}{\rm{Recall}} = \frac{{TP}}{{TP + FN}}\end{eqnarray*}
where TP, FP, and FN are the numbers of true positives, false positives, and false negatives, respectively. A paired, two-tailed t-test was used to test for significant differences. 
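These three metrics can be implemented directly on binary masks (a numpy sketch; the function names are ours):

```python
import numpy as np

def dsc(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def precision_recall(pred, gt):
    """Precision and recall of a binary prediction against the ground truth."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # true positives
    fp = np.logical_and(pred, ~gt).sum()  # false positives
    fn = np.logical_and(~pred, gt).sum()  # false negatives
    return tp / (tp + fp), tp / (tp + fn)

pred = np.array([[1, 1, 0, 0]])
gt   = np.array([[1, 0, 1, 0]])
print(dsc(pred, gt), precision_recall(pred, gt))  # 0.5 (0.5, 0.5)
```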
We tested all 33 SD-OCT cubes with a two-fold cross experiment. The results of the two independent experiments are shown in Table 1; their similarity indicates that our method achieves consistently good accuracy. 
Table 1.
 
HRF Segmentation Results Obtained with our Method by a Two-Fold Cross Experiment (unit: %)
Table 2 summarizes the quantitative results of the three comparisons described earlier. Additionally, we compared our input with input 1 and input 2 on 3D U-Net. Our input performs better than input 1 and input 2, and a significant difference in DSC was observed between our input and input 1/input 2 on our network (P < 0.0001), which demonstrates the effectiveness of combining the enhanced and denoised images. The difference between input 1 and input 2 on our network was also statistically significant (P < 0.05). Our network also performs better than 3D U-Net with input 2 and with our input, which indicates the effectiveness of the slice-wise dilated convolution. However, our network with input 1 performs similarly to 3D U-Net with input 1. A possible explanation is that the information in the denoised images is limited and has already been sufficiently captured by 3D U-Net, so enlarging the receptive field captures no additional useful long-range information. 
Table 2.
 
Comparisons of Different Methods (unit: %). The highest DSC is in bold.
In fact, some cubes in our dataset are dark, especially the 14th eye in Figure 10. Figure 10 and Table 2 show the evaluation results on the 33 eyes achieved with denoised images (input 1), enhanced images (input 2), and their combination (proposed method) as the input to our network. Our method is superior to the others on almost all cubes. It performs best with a DSC of 70.73%, whereas enhanced images alone achieve a DSC of 70.00%. Denoised images alone perform worst, especially on the 14th, 23rd, and 31st eyes, which indicates that image enhancement effectively overcomes the low-contrast problem, while our method additionally retains the original information and overcomes the information loss caused by the enhancement. 
Figure 10.
 
Comparison of the results on 33 eyes obtained by our network with different input.
Table 2 also shows the evaluation results on the 33 cubes obtained by the proposed method and seven other methods.12–15,18–20 To further evaluate the enhancement algorithm and our framework, we added contrast experiments using input 1 as the input to the other methods. As Table 2 shows, our proposed method performs best with a DSC of 70.73%, and a significant difference in DSC (P < 0.0001) was observed between our method and the others, which indicates state-of-the-art performance and robustness. Generally speaking, the traditional methods12,13 rely heavily on handcrafted features and are not robust enough. The grow-cut-based method12 performed worst among all methods. The component tree-based method13 performed well, but its processes are complicated and its results exhibit more oversegmentation. Yu et al.14 treat segmentation as a patch-based classification problem; although this method achieved the best Precision, it takes a long time to predict a new B-scan, which is impractical for real applications. The residual block of ResUNet and the deformable convolution block of DUNet might make feature extraction of complicated HRF more difficult. Although the performance of U-Net++ was desirable and similar to ours, its number of parameters is huge and its test procedure for obtaining more accurate results is complicated. Table 2 also shows that the results and robustness of ResUNet, DUNet, and U-Net++ were significantly improved by our input, which demonstrates the effectiveness of our input; nevertheless, our proposed method still performed best, which demonstrates the effectiveness of our network as well. The FCN was not a good choice for medical image segmentation: its results did not improve even with our input. 
In summary, our proposed method is simple to train and test while reaching the best overall performance compared with the listed methods. 
Most importantly, the mean running time of our method is 0.83 minutes per volume, the lowest of all the compared methods, as shown in Table 3. The time costs of the component tree-based method13 and U-Net++, whose segmentation results are closest to ours, are two and four times ours, respectively, and the other methods are much slower. 
Table 3.
 
Mean Computational Time of Various Methods for HRF Segmentation. The least time is in bold.
Conclusions
We proposed a robust algorithm for HRF segmentation in SD-OCT volumes with DR. The main idea of the proposed method is to segment HRF completely in low-contrast images. To achieve this, we used the enhancement algorithm to compensate for the low contrast and took the enhanced images as the second channel of the network input. We also introduced dilated convolution in the last layer of the encoder path to enlarge the receptive field. The quantitative and qualitative comparisons on the 33 SD-OCT volumes from 27 patients diagnosed with DR demonstrate that the proposed method is more effective for HRF quantification than the other methods and also performs well on low-contrast images. Therefore we expect that this method will contribute to clinical diagnosis and disease surveillance. 
Acknowledgments
Supported by the National Natural Science Foundation of China (61671242, 61701222), Key R&D Program of Jiangsu Science and Technology Department (BE2018131), and Suzhou Industrial Innovation Project (SS201759). 
Disclosure: S. Xie, None; I.P. Okuwobi, None; M. Li, None; Y. Zhang, None; S. Yuan, None; Q. Chen, None 
References
Bolz M, Schmidt-Erfurth U, Deak G, Mylonas G, Kriechbaum K, Scholda C. Optical coherence tomographic hyperreflective foci: a morphologic sign of lipid extravasation in diabetic macular edema. Ophthalmology. 2009; 116: 914–920.
De BU, Sacconi R, Pierro L, Lattanzio R, Bandello F. Optical coherence tomographic hyperreflective foci in early stages of diabetic retinopathy. Retina. 2014; 35: 449–453.
Cusick M, Chew EY, Chan CC, Kruth HS, Murphy RP, Ferris FL. Histopathology and regression of retinal hard exudates in diabetic retinopathy after reduction of elevated serum lipid levels. Ophthalmology. 2003; 110: 2126–2133. [CrossRef] [PubMed]
Framme C, Schweizer P, Imesch M, Wolf S, Wolf-Schnurrbusch U. Behavior of SD-OCT-detected hyperreflective foci in the retina of anti-VEGF-treated patients with diabetic macular edema. Invest Ophthalmol Vis Sci. 2012; 53: 5814–5818. [CrossRef] [PubMed]
Uji A, Tomoaki T, Kazuaki N, et al. Association between hyperreflective foci in the outer retina, status of photoreceptor layer, and visual acuity in diabetic macular edema. Am J Ophthalmol. 2012; 153: 710–717. [CrossRef] [PubMed]
Congdon NG, Friedman DS, Lietman T. Important causes of visual impairment in the world today. JAMA. 2003; 290: 2057–2060. [CrossRef] [PubMed]
Schreur V, Breuk AD, Venhuizen FG, et al. Retinal hyperreflective foci in type 1 diabetes mellitus. Retina. 2019 Jul 25. doi:10.1097/IAE.0000000000002626. [Epub ahead of print].
Mizukami T, Hotta Y, Katai N. Higher numbers of hyperreflective foci seen in the vitreous on spectral-domain optical coherence tomographic images in eyes with more severe diabetic retinopathy. Ophthalmologica. 2017; 238: 74–80. [CrossRef] [PubMed]
Mugisho OO, Green CR, Squirrell DM, et al. Connexin43 hemichannel block protects against the development of diabetic retinopathy signs in a mouse model of the disease. J Mol Med. 2019; 97: 215–229. [CrossRef] [PubMed]
Niu SJ, de Sisternes L, Chen Q, Rubin DL, Leng T. Fully automated prediction of geographic atrophy growth using quantitative spectral-domain optical coherence tomography biomarkers. Ophthalmology. 2016; 123: 1737–1750. [CrossRef] [PubMed]
Castro Lima V, Rodrigues EB, Nunes RP, Sallum JF, Farah ME, Meyer CH. Simultaneous confocal scanning laser ophthalmoscopy combined with high-resolution spectral-domain optical coherence tomography: a review. J Ophthalmol. 2011; 2011: 743670. [PubMed]
Okuwobi IP, Fan W, Yu CC, et al. Automated segmentation of hyperreflective foci in spectral domain optical coherence tomography with diabetic retinopathy. J Med Imaging. 2018; 5: 014002. [CrossRef]
Okuwobi IP, Ji ZX, Fan W, Yuan S, Bekalo L, Chen Q. Automated quantification of hyperreflective foci in SD-OCT with diabetic retinopathy. IEEE J Biomed Health Inform. 2019 Jul 19. doi:10.1109/JBHI.2019.2929842. [Epub ahead of print].
Yu CC, Xie S, Niu SJ, et al. Hyper-reflective foci segmentation in SD-OCT retinal images with diabetic retinopathy using deep convolutional neural networks. Med Phys. 2019; 46: 4502–4519. [CrossRef] [PubMed]
Schlegl T, Bogunovic H, Klimscha S, et al. Fully automated segmentation of hyperreflective foci in optical coherence tomography images. 2018; arXiv:1805.03278.
Varga L, Kovács A, Grósz T, et al. Automatic segmentation of hyperreflective foci in OCT images. Comput Methods Programs Biomed. 2019; 178: 91–103. [CrossRef] [PubMed]
Çiçek Ö, Abdulkadir A, Lienkamp S, Thomas B, Olaf R. 3D U-Net: learning dense volumetric segmentation from sparse annotation. Med Image Comput Comput-Assist Intervent. 2016; 9901: 424–432. [CrossRef]
Jin Q, Meng Z, Pham TD, Chen Q, Wei L, Su R. DUNet: a deformable network for retinal vessel segmentation. Knowl Based Syst. 2019; 178: 149–162. [CrossRef]
Long J, Shelhamer E, Darrell T, et al. Fully convolutional networks for semantic segmentation. IEEE Conf Comput Vis Pattern Recognit. 2015; 7: 3431–3440.
Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. UNet++: a nested U-Net architecture for medical image segmentation. 2018; arXiv:1807.10165.
Figure 1.
 
(a) One B-scan of an SD-OCT volume. The HRF are located between the NFL/GCL and the IS/OS. (b) Scaled-up local region with HRF. The HRF have high and nonuniform intensities, irregular shapes, and varying sizes. (c) Scaled-up local region with HRF. The HRF have blurry boundaries. Two retinal layers (NFL/GCL and IS/OS) are marked with yellow arrows, and HRF are marked with red arrows.
Figure 2.
 
(a) One B-scan with high HRF-background contrast. (b) One B-scan with low HRF-background contrast. HRF are marked with red arrows.
Figure 3.
 
Overview of the proposed method.
Figure 4.
 
Pipeline of the enhancement algorithm.
Figure 5.
 
(a) Raw image. (b) Denoised image. (c) Enhanced image.
Figure 6.
 
Segmentation results using different inputs to our network. Yellow arrows indicate regions of undersegmentation.
Figure 7.
 
Segmentation results using our input to different networks. Yellow arrows indicate regions of undersegmentation.
Figure 8.
 
Comparison between the proposed method and other methods. Yellow and green arrows indicate regions of undersegmentation and oversegmentation, respectively.
Figure 9.
 
Examples of oversegmentation with our method. (a) Oversegmentation caused by edema. (b) Oversegmentation caused by vessels.
Figure 10.
 
Comparison of the results on 33 eyes obtained by our network with different inputs.
Table 1.
 
HRF Segmentation Results Obtained with our Method by a Two-Fold Cross Experiment (unit: %)
Table 2.
 
Comparisons of Different Methods (unit: %). The highest DSC is in bold.
Table 3.
 
Mean Computational Time of Various Methods for HRF Segmentation. The least time is in bold.