Open Access
Data Science  |   February 2022
Simple Code Implementation for Deep Learning–Based Segmentation to Evaluate Central Serous Chorioretinopathy in Fundus Photography
Author Affiliations & Notes
  • Tae Keun Yoo
    Department of Ophthalmology, Aerospace Medical Center, Korea Air Force, Cheongju, South Korea
    B&VIIT Eye Center, Seoul, South Korea
  • Bo Yi Kim
    Department of Ophthalmology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
  • Hyun Kyo Jeong
    Department of Ophthalmology, 10th Fighter Wing, Republic of Korea Air Force, Suwon, South Korea
  • Hong Kyu Kim
    Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
  • Donghyun Yang
    Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
  • Ik Hee Ryu
    B&VIIT Eye Center, Seoul, South Korea
    Visuworks, Seoul, South Korea
  • Correspondence: Tae Keun Yoo, Department of Ophthalmology, Aerospace Medical Center, Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Chungcheongbuk-do 363-849, South Korea. e-mail: eyetaekeunyoo@gmail.com; fawoo2@yonsei.ac.kr
Translational Vision Science & Technology February 2022, Vol.11, 22. doi:https://doi.org/10.1167/tvst.11.2.22
Abstract

Purpose: Central serous chorioretinopathy (CSC) is a retinal disease that frequently shows resolution and recurrence with serous detachment of the neurosensory retina. Here, we present a deep learning analysis of subretinal fluid (SRF) lesion segmentation in fundus photographs to evaluate CSC.

Methods: We collected 194 fundus photographs of SRF lesions from patients with CSC. Three graders manually annotated the entire SRF area in the retinal images. The dataset was randomly separated into training (90%) and validation (10%) datasets. We used the U-Net segmentation model based on conditional generative adversarial networks (pix2pix) to detect the SRF lesions. The algorithms were trained and validated using Google Colaboratory, so researchers could implement the code without prior coding experience or dedicated computing resources.

Results: The validation results showed that the Jaccard index and Dice coefficient were 0.619 and 0.763, respectively. In most cases, the segmentation results overlapped with most of the reference areas in the annotated images; however, predictions were less accurate for cases with atypical SRF. Using Colaboratory, the proposed segmentation task ran easily in a web-based environment without setup or personal computing resources.

Conclusions: The results suggest that the deep learning model based on U-Net from the pix2pix algorithm is suitable for the automatic segmentation of SRF lesions to evaluate CSC.

Translational Relevance: Our code implementation has the potential to facilitate ophthalmology research; in particular, deep learning–based segmentation can assist in the development of pathological lesion detection solutions.

Introduction
Central serous chorioretinopathy (CSC) is characterized by neurosensory detachment of the retina with an accumulation of subretinal fluid (SRF) in the posterior pole.1 Patients with CSC generally experience decreased visual acuity, blurred central vision, metamorphopsia, and relative central scotoma. CSC predominantly affects middle-aged adults and more commonly occurs in men than in women.2 CSC is generally categorized into acute and chronic conditions. In acute cases, spontaneous resolution is common within several months, with a good visual prognosis. In contrast, SRF usually does not resolve spontaneously without therapeutic intervention in chronic cases. There could be a significant reduction in visual acuity with diffuse retinal pigment epithelium changes.3 The pathogenesis of recurrent and chronic conditions is poorly understood. Therefore, proper interventions are recommended for chronic CSC to improve its long-term outcomes. Patients with CSC should be followed up for a long time to evaluate the status of SRF and to predict the transition from acute to chronic CSC.3 
Color fundus photography is a basic modality for diagnosing ocular diseases and has been widely used in eye clinics and health checkup centers. However, it can be difficult to detect CSC in fundus photographs, especially for young ophthalmologists,4 because CSC presents with varied features such as blurred SRF areas and exudative yellowish deposits. Currently, the diagnostic workup for CSC relies on posterior segment optical coherence tomography (OCT).5 OCT enables ophthalmologists to effectively evaluate SRF by capturing cross-sectional views and the whole three-dimensional volume of the macula-centered retina. However, OCT imaging requires bulky equipment and an experienced examiner, which is not appropriate in a disease screening setting. Owing to the good performance of OCT in the diagnosis of CSC, previous deep learning studies have focused on the OCT image domain rather than fundus photography.6,7 
A previous study using a large dataset demonstrated that a classification model using the InceptionV3 architecture was able to detect CSC accurately from fundus photographs.4 However, previous deep learning studies have not shown the ability to identify detailed pathological lesions to monitor the condition of SRF in fundus photographs with CSC.4,8 Recently, deep learning–based segmentation tasks have improved the diagnostic and decision-making process in various imaging domains.9 U-Net is a well-established deep learning architecture for detection and segmentation tasks in biomedical imaging.10 Because a single U-Net model does not consider the detailed features of the output images, a generative adversarial network (GAN) framework can improve the performance of the U-Net model.11,12 In a GAN architecture, the generator and discriminator operate as adversaries to synthesize more realistic output images.13 Pix2pix is a popular GAN technique using U-Net for image-to-image translation. U-Net based on pix2pix and its variants has been successfully applied to retinal vessel segmentation,11 microscopic image segmentation,14 and virtual staining.15 
Segmentation of SRF lesions is needed to objectively evaluate the severity or status of CSC using fundus photography. In this study, we developed a deep learning–based segmentation model that simplifies the use of pix2pix for segmenting the SRF area in fundus photography to evaluate CSC. We used a dataset of manually segmented fundus photographs containing pathological SRF lesions and nonpathological lesions. After the dataset files are prepared, researchers can train and validate the U-Net model based on pix2pix with a few mouse clicks using Google Colaboratory, which requires only a web browser and a Google account and does not need a personal graphical processing unit (GPU). This study may serve as a guide for leveraging deep learning–based segmentation in ophthalmology imaging domains to optimize patient care. 
Methods
Image Acquisition
We developed a pix2pix deep learning model for automatic segmentation of the SRF area in fundus photographs to detect CSC. For our experiment, we retrospectively reviewed the diagnostic codes and fundus photographs of patients with CSC at the Aerospace Medical Center. Patients with CSC who presented to the hospital between January 1, 2010, and December 31, 2020, were included in the study. However, because the amount of data was small, we also used publicly accessible web data, including the Retinal Fundus Multi-Disease Image Dataset and other studies providing fundus photographs of posterior serous retinal detachment, to improve model generalizability.16,17 In particular, the Retinal Fundus Multi-Disease Image Dataset contained 98 fundus photographs of patients with CSC, which were labeled by two ophthalmologists. This process also aimed to further de-identify the materials. The image data were therefore collected in a variety of settings. We used only macula-centered fundus photographs of the posterior pole of the eyes. Fundus photographs with vitreous haziness or other retinal diseases were excluded from this study. Considering the workload of manual segmentation and the number of training images reported by previous studies using U-Net or conditional GANs,11,18,19 a dataset with more than 100 cases was considered sufficient to train the U-Net segmentation model.20 Finally, the total dataset included fundus photographs and a segmentation image dataset from 194 eyes with CSC from medical centers and publicly accessible datasets. Additionally, we collected 93 fundus photographs of healthy eyes from the same sources to build a classification model to discriminate between CSC and normal retinas. The retrospective collection and analysis of anonymized data were approved by the Ethics Committee of the Aerospace Medical Center (application no. ASMC21IRB001R) and followed the tenets of the Declaration of Helsinki. We confirm that this study was conducted only for noncommercial and academic purposes. 
Annotation
A flowchart of the proposed method is shown in Figure 1, including the definition of SRF lesions, automatic segmentation of SRF lesions by U-Net, and examples of manual segmentation. U-Net and pix2pix must be trained using paired input and corresponding target images. Manual segmentation is a tedious and time-consuming task requiring domain-specific knowledge. Initially, an ophthalmologist manually screened the fundus photographs with CSC. We then asked three graders, namely, two licensed ophthalmologists (graders 1 and 2) and one ophthalmology resident (grader 3), to segment the entire SRF area in the retinal images. To decrease the workload, the task was performed by drawing polygons with mouse clicks for binary segmentation or by free-hand drawing with a mouse, allowing small errors. Because manual segmentation can contain errors, another ophthalmologist reviewed the masks and manually corrected incorrectly classified pixels. The segmentation mainly targeted the edges of an SRF area, blurred SRF areas, yellowish deposits, and surface texture changes; however, subjective judgments were also made when boundaries were ambiguous. The final annotated mask images were created by postprocessing to align the images. To train the pix2pix model, the source (fundus photograph) and corresponding target (binary mask image obtained from the ophthalmologists' drawings) images were placed side by side in a single image file so that the paired relationship across the two domains was maintained after random cropping. The code for automatically combining two images side by side is shown in Supplementary Table S1. According to a previous study, labeling noise in weak annotations is introduced unintentionally by human annotators while moving or clicking a mouse, or by inconsistencies between graders on ambiguous lesions.21 Therefore, our annotation, which relied on polygon labeling and noisy drawing with a mouse, may be classified as weak annotation, which may degrade the segmentation performance of U-Net. We sought to overcome the problem caused by weak annotations using a GAN. For reproducibility of this code implementation, an example dataset including fundus photographs and annotated mask images for segmenting SRF lesions for training and validation is available online at https://data.mendeley.com/datasets/4k64fwnp4k, where it can be downloaded as zip files. To evaluate the segmentation performance, standard reference images were created by merging the annotations from the three graders using majority voting. 
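The full pairing code is provided in Supplementary Table S1. As an illustration only, a minimal sketch of this side-by-side combination step might look like the following; the file names, folder layout, and 256-pixel size are placeholder assumptions rather than the released implementation.

```python
# Minimal sketch of pairing a fundus photograph with its annotated mask side by
# side, as required by pix2pix (paths and size are illustrative placeholders).
from PIL import Image

def combine_pair(fundus_path: str, mask_path: str, out_path: str, size: int = 256) -> None:
    """Save a single image with the fundus photo on the left and the mask on the right."""
    fundus = Image.open(fundus_path).convert("RGB").resize((size, size))
    mask = Image.open(mask_path).convert("RGB").resize((size, size))
    paired = Image.new("RGB", (size * 2, size))   # twice as wide as a single image
    paired.paste(fundus, (0, 0))                  # source domain on the left half
    paired.paste(mask, (size, 0))                 # target domain on the right half
    paired.save(out_path)

# Example (hypothetical file names):
# combine_pair("csc_001.jpg", "csc_001_mask.png", "train/csc_001_pair.jpg")
```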
Figure 1.
 
Flowchart of the proposed method for pathological lesion segmentation to evaluate CSC. (A) Definition of SRF lesions. (B) Annotation process and examples of manual segmentation. (C) Automatic segmentation of SRF lesion by U-Net.
Segmentation Algorithms
Figure 2 shows a diagram of the pix2pix architecture and U-Net structure used in this study. U-Net is the most popular fully convolutional network (FCN) architecture for anatomical image segmentation tasks.22 It learns to segment from the pixel distributions of the provided input and annotation datasets. The basic structure of U-Net consists of two convolutional neural network (CNN) parts: an encoder, which is similar to a typical convolutional network and captures low-level representations, and a decoder, which contains up-convolutions for high-level feature map generation. In addition, skip connections between the encoder and decoder ensure that the network preserves detailed information from the input images. However, U-Net is unable to correct for low-quality manual annotations.10 Pix2pix is a conditional GAN that maps pixels to pixels for image translation tasks.23 In the pix2pix architecture, U-Net serves as the image generator, and a convolutional PatchGAN classifier serves as the discriminator, which performs a quality check on the output images generated by U-Net. Several studies have demonstrated that GAN frameworks can improve the segmentation performance of the U-Net model in fundus photography and dermatoscopy.11,24,25 A previous study using pelvic computed tomography images showed that pix2pix outperformed U-Net and other GAN techniques.12 Accordingly, we performed an experiment using U-Net based on pix2pix for pathological lesion segmentation in fundus photography. 
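To illustrate how the PatchGAN discriminator supervises the U-Net generator, the sketch below shows the conditional GAN objective used by pix2pix, written against TensorFlow/Keras in the style of the tutorial code we adapted. This is a simplified sketch, not the exact released code: the generator and discriminator networks themselves are omitted, and LAMBDA corresponds to the weight described later in the Methods.

```python
# Minimal sketch of the pix2pix objective (assumes `generator` and `discriminator`
# are Keras models as in the TensorFlow pix2pix tutorial; only the losses that
# couple the U-Net generator with the PatchGAN discriminator are shown).
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100  # weight of the L1 reconstruction term relative to the adversarial term

def generator_loss(disc_generated_output, gen_output, target):
    # Adversarial term: the generator tries to make the PatchGAN call its output "real".
    gan_loss = bce(tf.ones_like(disc_generated_output), disc_generated_output)
    # Reconstruction term: the generated mask should match the annotated mask.
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    return gan_loss + LAMBDA * l1_loss

def discriminator_loss(disc_real_output, disc_generated_output):
    # The PatchGAN learns to label (input, real mask) patches as real
    # and (input, generated mask) patches as fake.
    real_loss = bce(tf.ones_like(disc_real_output), disc_real_output)
    generated_loss = bce(tf.zeros_like(disc_generated_output), disc_generated_output)
    return real_loss + generated_loss
```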
Figure 2.
 
Diagram of the pix2pix architecture used in this study. (A) Pix2pix consists of the U-Net generator and PatchGAN discriminator networks. (B) The U-Net model was used for an image generator in the pix2pix architecture.
We also adopted the original U-Net without the GAN architecture and the classic FCN-8s model as baselines to compare against the segmentation performance of the U-Net based on pix2pix. FCN-8s is a popular algorithm that uses a basic convolutional network architecture for segmentation without the symmetric encoder-decoder skip connections used in U-Net.26 Previous studies have used FCN-8s as a baseline model to compare segmentation performance.25,27 The code for the original U-Net and FCN-8s is available in a publicly accessible source (https://github.com/divamgupta/image-segmentation-keras). 
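As an illustration only, and assuming the interface documented in that repository (model constructors plus a train method that takes image and annotation folders), the baselines might be trained as sketched below. The folder paths are placeholders, and the repository's README should be consulted for the current API before relying on this.

```python
# Illustrative sketch of training the baseline models with the
# image-segmentation-keras library (folder paths are placeholders; verify the
# exact interface against the repository's documentation).
from keras_segmentation.models.unet import unet
from keras_segmentation.models.fcn import fcn_8

for build in (unet, fcn_8):
    model = build(n_classes=2, input_height=256, input_width=256)
    model.train(
        train_images="csc/segmentation/train_images/",       # fundus photographs
        train_annotations="csc/segmentation/train_masks/",    # per-pixel class labels
        checkpoints_path=f"checkpoints/{build.__name__}",
        epochs=20,
    )
    # Predict a segmentation map for one validation image.
    model.predict_segmentation(inp="csc/segmentation/test_images/example.jpg",
                               out_fname=f"{build.__name__}_example_out.png")
```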
Discrimination Algorithm
In our pilot experiment during the development of the segmentation model (Supplementary Fig. S1 and Supplementary Table S2), we calculated the segmentation performance for two training scenarios: one dataset consisting of only CSC cases and the other consisting of both healthy and CSC cases. Because the segmentation performance decreased significantly when training on a mixture of normal and CSC retinas, we built a CNN model that discriminated normal from abnormal retinas before inputting the fundus photographs to the U-Net, and then fed the images classified as abnormal to the U-Net in a two-stage cascade network. InceptionV3, one of the most widely used CNN architectures, was trained via standard transfer learning using the pretrained network as a feature extractor.28 The training was performed using the CSC images in the training dataset and additional healthy retinal images. Because our study focuses on segmentation, and InceptionV3 transfer learning is already a popular, well-documented method for image classification, the code for the discriminator is not included in our Colaboratory code file. 
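Although the discriminator code is not part of the released notebook, the following is a rough sketch of the standard InceptionV3 transfer-learning recipe referred to above; the classification head, hyperparameters, and dataset objects are illustrative assumptions rather than the study's exact configuration.

```python
# Sketch of standard InceptionV3 transfer learning for CSC vs. normal
# classification (head design and hyperparameters are illustrative assumptions).
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # use the pretrained network as a fixed feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # CSC (1) vs. normal (0)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be tf.data datasets of (image, label) pairs preprocessed
# with tf.keras.applications.inception_v3.preprocess_input, for example:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```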
Software Summary
Code Implementation
In this study, we focused on the implementation of U-Net based on pix2pix on Google Colaboratory, which provides an appropriate range of computing resources (GPUs), memory, and disk space for free. A previous study has already shown that Google Colaboratory can be an easy platform for studying multiple biological imaging domains using deep learning.29 The web browser–based Jupyter Notebooks provided by Colaboratory can interactively run Python code, which is currently the most widely used language for deploying machine learning applications. The original pix2pix code is available on the TensorFlow webpage (https://www.tensorflow.org/tutorials/generative/pix2pix) under the Apache License v2.0, which allows users to use, modify, and redistribute the source code, and we modified it to use our sample dataset. Our code for segmenting SRF lesions in CSC is also available online at https://data.mendeley.com/datasets/4k64fwnp4k. The code file titled "pix2pix_csc_segmentation.ipynb" provides users with a Jupyter Notebook for Google Colaboratory and a dataset for training and validation. To use this code implementation, researchers do not need prior coding experience; the code runs after logging in to Google Drive and a few mouse clicks. As summarized in Figure 3, the process is as follows. First, we prepared the example dataset in Google Drive. Second, we uploaded the code file to Google Drive and opened it from the Google Drive page in a web browser. Third, we matched the folder locations in the code with the actual folder paths; for example, in our experiment, we saved the training dataset at "csc/segmentation/train/" and the test dataset at "csc/segmentation/test/" on our own Google Drive. Fourth, we clicked the play buttons to the left of the code cells one by one. The second code cell links the datasets to the Colaboratory notebook through Google Drive. The size of the input and annotation images was set to a resolution of 256 × 256 pixels to use the original architecture of the pix2pix and U-Net models. In this experiment, we set lambda, the weight of the L1 reconstruction term, to 100, and we set the number of training iterations to 20,000. Augmentation was not performed beyond right-to-left flipping and random jittering (the random_jitter function in the code), because this experiment focused on describing our annotated data and code implementation. 
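For orientation, the sketch below outlines the data-loading steps that the notebook performs once Google Drive is linked: mounting Drive, splitting each side-by-side file into the fundus photograph and its mask, and applying the flip-and-jitter augmentation. It is a simplified sketch under stated assumptions (JPEG files, a MyDrive prefix on the folder paths, batch size 1), not the notebook's verbatim contents.

```python
# Minimal sketch of the Colab data pipeline (paths, file format, and batch size
# are assumptions; the released notebook is the actual implementation).
from google.colab import drive
import tensorflow as tf

drive.mount('/content/drive')  # links Google Drive to the Colab runtime

IMG_SIZE = 256      # resolution used by the original pix2pix/U-Net architecture

def load_pair(path):
    """Split one side-by-side training file into (fundus, mask) tensors."""
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    w = tf.shape(image)[1] // 2
    fundus, mask = image[:, :w, :], image[:, w:, :]
    fundus = tf.image.resize(tf.cast(fundus, tf.float32), [IMG_SIZE, IMG_SIZE])
    mask = tf.image.resize(tf.cast(mask, tf.float32), [IMG_SIZE, IMG_SIZE])
    # Scale to [-1, 1] as expected by the pix2pix generator.
    return fundus / 127.5 - 1, mask / 127.5 - 1

def random_jitter(fundus, mask):
    """Resize slightly larger, randomly crop back, and flip left-right together."""
    both = tf.stack([fundus, mask], axis=0)
    both = tf.image.resize(both, [286, 286])
    both = tf.image.random_crop(both, size=[2, IMG_SIZE, IMG_SIZE, 3])
    if tf.random.uniform(()) > 0.5:
        both = tf.image.flip_left_right(both)
    return both[0], both[1]

train_ds = (tf.data.Dataset
            .list_files('/content/drive/MyDrive/csc/segmentation/train/*.jpg')
            .map(load_pair)
            .map(random_jitter)
            .shuffle(100)
            .batch(1))
```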
Figure 3.
 
Process for using the Google Colaboratory to run the deep learning–based segmentation to evaluate CSC. (A) Dataset preparation. (B) Graphic user interface of the Google Colaboratory notebook. (C) Google Drive for storing and connecting the dataset. (D) Click button for the code execution. The original pix2pix code is available on the TensorFlow webpage (https://www.tensorflow.org/tutorials/generative/pix2pix) under the Apache License v2.0 (Copyright 2019 The TensorFlow Authors), which allows users to use, modify, and redistribute the source code.
Performance Metrics
The dataset was randomly separated into training (90%; n = 175) and validation (10%; n = 19) sets. To improve the reliability of the segmentation results, we fixed the data separation across the experiments and included it in the released dataset. We compared the predicted segmentation with the standard reference images provided in the annotation dataset. To validate the segmentation ability, we calculated standard metrics: the intersection over union (IoU, also known as the Jaccard index) and the Dice coefficient. Their definitions are given below, where Areference is the area of the standard reference obtained from annotation and Aprediction is the segmentation result of the algorithm:  
\begin{eqnarray*}IoU = Jaccard\;Index = \frac{{\left| {{A_{reference}} \cap {A_{prediction}}} \right|}}{{\left| {{A_{reference}} \cup {A_{prediction}}} \right|}}\end{eqnarray*}

\begin{eqnarray*}Dice\;coefficient = \frac{{2\left| {{A_{reference}} \cap {A_{prediction}}} \right|}}{{\left| {{A_{reference}}} \right| + \left| {{A_{prediction}}} \right|}}\end{eqnarray*}
Additionally, the precision, sensitivity, and specificity at the image pixel level were calculated to compare the models. A detailed explanation of the metrics is provided in a previous review article on deep learning–based image segmentation.9 
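For clarity, these metrics can be computed directly from binary masks. The sketch below is a minimal illustration (function names are ours, and it assumes the masks are binary NumPy arrays of equal shape with nonempty denominators), together with the per-pixel majority voting used to build the standard reference from the three graders.

```python
# Minimal sketch of the pixel-level evaluation metrics and the majority-voting
# reference (assumes binary masks of equal shape; names are illustrative).
import numpy as np

def segmentation_metrics(reference, prediction):
    """Pixel-level metrics comparing a predicted mask with the standard reference."""
    ref = reference.astype(bool)
    pred = prediction.astype(bool)
    tp = np.logical_and(ref, pred).sum()          # correctly detected SRF pixels
    fp = np.logical_and(~ref, pred).sum()
    fn = np.logical_and(ref, ~pred).sum()
    tn = np.logical_and(~ref, ~pred).sum()
    return {
        "iou": tp / (tp + fp + fn),               # Jaccard index
        "dice": 2 * tp / (2 * tp + fp + fn),      # equal to the F1 score
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),            # recall
        "specificity": tn / (tn + fp),
    }

def majority_vote(masks):
    """Per-pixel majority vote over a list of binary masks from the three graders."""
    stacked = np.stack([m.astype(bool) for m in masks])
    return (stacked.sum(axis=0) >= 2).astype(np.uint8)
```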
Experimental Results
The collected fundus photographs were preprocessed to fit the segmentation task without distinguishing the data source and were released at the aforementioned publicly accessible web address. With the U-Net based on pix2pix implemented on Google Colaboratory, training and prediction could be performed by clicking the play buttons to execute each part of the code file. The resources provided by Colaboratory were sufficient to train the pix2pix segmentation model. The training times in the Colaboratory environment varied depending on the network used and the assigned GPU. When we used the provided dataset (19 fundus photographs for validation and 1050 augmented images [by flipping] from 175 fundus photographs and 3 annotators for training), the training session of pix2pix required 78 minutes using a Tesla K80 GPU. 
The two-stage cascade network was designed to input only CSC images into the segmentation model. The discrimination performance for CSC detection is shown in Figure 4. In the initial stage of discriminating between CSC and normal retina, the InceptionV3 deep learning model achieved an area under the receiver operating characteristic curve (AUC) of 0.989 (95% confidence interval [CI], 0.965–0.996), with an accuracy of 97.0% (95% CI, 93.9%–98.8%), sensitivity of 95.5% (95% CI, 91.0%–98.2%), and specificity of 100.0% (95% CI, 95.3%–100.0%). 
Figure 4.
 
Classification performance of the InceptionV3 model for CSC detection and the proposed process for SRF segmentation. The classification performance was measured using five-fold cross-validation. The dataset for training the InceptionV3 included fundus photographs from 194 eyes with CSC and 93 healthy eyes.
The Dice scores of interagreement among the three graders and with the standard reference from majority voting are shown in Figure 5. The annotations were highly correlated with each other but did not match perfectly. The segmentation performance of the U-Net models based on pix2pix is shown in Figure 6. When the U-Net was trained using the data of all three graders at once, it performed better than models trained on a single grader's data. When validated on the test dataset, segmentation using the annotation data from all graders achieved a mean IoU of 0.619 and a mean Dice coefficient of 0.763. The IoU values of the U-Net segmentation models trained on graders 1, 2, and 3 were 0.575, 0.577, and 0.587, respectively, with no statistically significant differences compared with the results of the model trained using the data of all graders. Similarly, the Dice coefficients of the U-Net segmentations from graders 1, 2, and 3 were 0.733, 0.734, and 0.720, respectively, and these differences were also not statistically significant. 
Figure 5.
 
Interagreement for three graders' annotation and reference on the whole dataset. We calculated the scores of (A) IoU (Jaccard index) and (B) Dice coefficient.
Figure 6.
 
Performance of U-Net segmentation models to evaluate CSC in the test dataset validation. The segmentation results were compared with the standard reference (majority voting of the graders). We calculated the scores of (A) IoU (Jaccard index) and (B) Dice coefficient.
We further compared the performance of the U-Net based on pix2pix with that of the original U-Net and FCN-8s as baseline models. Table 1 presents the results of the pixel evaluation metrics for pathological lesion segmentation in different settings. When segmentation was performed by the U-Net based on pix2pix, the pixel precision, sensitivity, specificity, IoU, and F1 scores were 86.2%, 70.2%, 98.8%, 0.619, and 0.763, respectively. The original U-Net (U-Net only) achieved a precision of 82.0%, sensitivity of 70.3%, specificity of 97.4%, IoU of 0.581, and F1 value of 0.723. The classic FCN (FCN-8s) achieved a precision of 82.4%, sensitivity of 70.6%, specificity of 97.8%, IoU of 0.582, and F1 value of 0.726. The U-Net based on pix2pix had better segmentation metrics than the original U-Net (P = 0.044). However, there were no significant differences compared with FCN-8s (P = 0.252). 
Table 1.
 
Performance Comparison Between Different Segmentation Models in the Test Dataset Validation
Representative segmentation examples from the U-Net are shown in Figure 7. In most cases, the segmentation results overlapped with most of the reference areas in the annotated images. However, cases with atypical SRF, such as lesions covering a wide area or with an oval shape, were predicted relatively inaccurately. 
Figure 7.
 
Examples of segmentation of pathological SRF lesions using the proposed U-Net model. We used retinal images from the ISBI challenge 2021 and the Retinal Fundus Multi-Disease Image Dataset (RFMiD), which are publicly available.
Discussion
CSC is an acute or chronic disease in which the amount of SRF changes depending on the disease course and therapeutic intervention. We performed a novel task of SRF lesion segmentation in fundus photographs through a simple code implementation of U-Net based on the pix2pix algorithm. The dataset for this study consisted of fundus photographs and mask images of manually annotated SRF lesions. Our study demonstrated that automated segmentation is effective in measuring the pathological area for objective monitoring of CSC status. Our simple code implementation may represent meaningful progress toward broadening the use of deep learning–based segmentation in the ophthalmology imaging research community. The main challenges for medical segmentation are the lack of training data due to the low prevalence of diseases and the high cost of annotations. In this study, using our small dataset with weak annotations, we found that different SRF patterns were well segmented in pathological fundus photographs with CSC. 
The proposed segmentation model can be applied to monitor SRF progression and assess the efficacy of therapeutic treatments, such as oral medication, intravitreal injection, or focal laser. The automated segmentation algorithm accurately measured the pathological area to determine whether it improved after treatment. The clinical manifestations of CSC, including serous retinal detachment and blurred central vision, may be overlooked or misdiagnosed in eye examinations without OCT.30 Because most ophthalmic examinations are based on fundus photographs, a new method for evaluating CSC in fundus photography is needed. In previous studies, deep learning models in the fundus photography domain only classified pathological cases for CSC diagnosis and did not focus on SRF lesion detection.4,8 In our experiment using a small dataset (Fig. 8), the conventional saliency map based on Grad-CAM highlighted larger areas than the actual SRF and was unable to capture the detailed features of CSC. Segmentation based on U-Net provided a more accurate pathological area of CSC than the activation map from Grad-CAM. A previous study demonstrated that a saliency map may show the approximate location on which the classifier model focuses but cannot accurately highlight diagnostically relevant regions.31 Therefore, our segmentation approach can be used to assess the condition of the disease and can serve as a more useful diagnostic aid by providing detailed pathological features in fundus photographs. However, owing to our small dataset, we cannot confirm that our segmentation algorithm outperforms conventional CNN classifiers. 
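For reference, the Grad-CAM activation maps in Figure 8 follow the standard recipe of weighting the last convolutional feature maps by their pooled gradients. A minimal Keras sketch is given below; the trained classifier, layer name, class index, and preprocessing are assumptions and may differ from our exact setup.

```python
# Minimal Grad-CAM sketch (assumes a trained Keras InceptionV3 binary classifier
# `model` and its last mixed/convolutional layer name; names are illustrative).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_name="mixed10", class_index=0):
    """Return a coarse heat map of the regions driving the class prediction."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        # `image` is assumed to be a single preprocessed array of shape (H, W, 3).
        conv_out, predictions = grad_model(image[np.newaxis, ...])
        score = predictions[:, class_index]
    grads = tape.gradient(score, conv_out)                # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                              # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalize to [0, 1]
```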
Figure 8.
 
Comparison of U-Net–based segmentation results and class activation map based on the InceptionV3 classifier model. The InceptionV3 model was trained via transfer learning to classify CSC and healthy classes. The classification performance of the model is shown in Figure 4.
According to the literature, previous artificial intelligence–related studies on CSC have mainly used CNNs for classification purposes (Table 2). In addition to models that simply distinguish normal retina from CSC,4 one study attempted diagnosis by subdividing the lesion types of chorioretinopathies using an advanced CNN model.32 Because choroidal thickness is closely associated with CSC, another study aimed to estimate choroidal thickness from fundus photographs using a deep learning approach.33 To evaluate the severity of CSC in fundus photography, a deep learning model was developed to classify the SRF into macula-on or macula-off subretinal detachment.8 However, because these previous studies were based on classification algorithms, quantitative evaluation of the retinal involvement of SRF lesions could not be performed in the fundus photography domain. Our proposed segmentation model may overcome the drawbacks of previous studies that investigated CSC using fundus photography. 
Table 2.
 
Summary of the Deep Learning–Based Techniques for CSC in Fundus Photography Imaging Domain
The insufficiency of high-quality annotations remains a major problem in deep learning research. Because CSC is a relatively rare disease, there is no publicly accessible fundus photography database dedicated to CSC. Although our dataset is relatively small, this study can contribute to the ophthalmology data community by adding a fundus photography dataset with CSC annotations. Recently, the open release of data and code has increasingly supported more accurate analysis of disease patterns.34 Users with little to no coding expertise can interactively work through the Colaboratory pipeline by clicking the play buttons.29 We hope that this study supports this trend and that our sample dataset and code can be improved upon by other researchers. For example, because the code contains only random cropping and side-to-side flipping, we expect that additional data augmentation of the input images would improve performance.20 Additionally, by changing the dataset files in the cloud, the U-Net framework based on pix2pix can be applied to other ocular imaging domains and other tasks. 
This study has several limitations. First, the small available training dataset is a major limitation that may affect performance. A previous study reported that the performance of U-Net stabilized after training with more than 200 radiographic images and that data augmentation provided an additional gain in segmentation performance.20 Further data collection or augmentation techniques may be required in future work. Second, only three ophthalmologists performed annotations using polygons or free-hand drawing; more sophisticated annotation with an advanced drawing device would improve segmentation performance. However, weak annotation is often inevitable because fine annotation is time consuming and laborious. Recent studies have shown that segmentation models can be trained effectively from weak annotations with additional machine learning techniques,21 and future work should validate these techniques. Third, no OCT data were available for SRF segmentation. Manual segmentation based on fundus photography is less accurate than segmentation based on volumetric OCT, which means that the reference segmentations drawn by the ophthalmologists may not be sufficiently accurate. Fourth, the clinical effectiveness of the segmentation method has not been verified. We were unable to validate the algorithm with an external validation design because of the lack of both image data and medical record data. Future research should apply the proposed technique to real patients with CSC to evaluate the efficacy of therapeutic interventions. 
Conclusions
Our work suggests that the deep learning model based on U-Net from the pix2pix algorithm is suitable for the automatic segmentation of SRF lesions to evaluate CSC. The automated segmentation algorithm can be used to monitor CSC or to determine whether it has improved after treatment. The proposed method can be further improved by incorporating additional data and advanced machine learning methods. Segmenting different lesions in medical imaging is critical for improving decision-making. Therefore, we believe that our code implementation has the potential to facilitate ophthalmology research; in particular, deep learning–based segmentation can assist in the development of pathological lesion detection solutions. 
Acknowledgments
Disclosure: T.K. Yoo, None; B.Y. Kim, None; H.K. Jeong, None; H.K. Kim, None; D. Yang, None; I.H. Ryu, VISUWORKS, Inc. (E) 
References
Wang M, Munch IC, Hasler PW, Prünte C, Larsen M. Central serous chorioretinopathy. Acta Ophthalmol. 2008; 86(2): 126–145, doi:10.1111/j.1600-0420.2007.00889.x. [CrossRef] [PubMed]
Tsai DC, Huang CC, Chen SJ, et al. Central serous chorioretinopathy and risk of ischaemic stroke: a population-based cohort study. Br J Ophthalmol. 2012; 96(12): 1484–1488, doi:10.1136/bjophthalmol-2012-301810. [CrossRef] [PubMed]
Mohabati D, Boon CJF, Yzer S. Risk of recurrence and transition to chronic disease in acute central serous chorioretinopathy. Clin Ophthalmol. 2020; 14: 1165–1175, doi:10.2147/OPTH.S242926. [CrossRef] [PubMed]
Zhen Y, Chen H, Zhang X, Meng X, Zhang J, Pu J. Assessment of central serous chorioretinopathy depicted on color fundus photographs using deep learning. Retina. 2020; 40(8): 1558–1564, doi:10.1097/IAE.0000000000002621. [CrossRef] [PubMed]
Han L, Carvalho JRL, de Parmann R, et al. Central serous chorioretinopathy analyzed by multimodal imaging. Transl Vis Sci Technol. 2021; 10(1): 15, doi:10.1167/tvst.10.1.15. [CrossRef] [PubMed]
Yoon J, Han J, Park JI, et al. Optical coherence tomography-based deep-learning model for detecting central serous chorioretinopathy. Sci Rep. 2020; 10(1): 18852, doi:10.1038/s41598-020-75816-w. [CrossRef] [PubMed]
Yoo TK, Choi JY, Kim HK. Feasibility study to improve deep learning in OCT diagnosis of rare retinal diseases with few-shot classification. Med Biol Eng Comput. 2021; 59(2): 401–415, doi:10.1007/s11517-021-02321-1. [CrossRef] [PubMed]
Xu F, Liu S, Xiang Y, et al. Deep learning for detecting subretinal fluid and discerning macular status by fundus images in central serous chorioretinopathy. Front Bioeng Biotechnol. 2021; 9: 651340, doi:10.3389/fbioe.2021.651340. [CrossRef] [PubMed]
Rizwan I Haque I, Neubert J. Deep learning approaches to biomedical image segmentation. Informatics in Medicine Unlocked. 2020; 18: 100297, doi:10.1016/j.imu.2020.100297. [CrossRef]
Falk T, Mai D, Bensch R, et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat Methods. 2019; 16(1): 67–70, doi:10.1038/s41592-018-0261-2. [CrossRef] [PubMed]
Son J, Park SJ, Jung KH. Towards accurate segmentation of retinal vessels and the optic disc in fundoscopic images with generative adversarial networks. J Digit Imaging. 2018; 31(6): 923–928, doi:10.1007/s10278-018-0126-3. [PubMed]
Zhang Y, Yue N, Su MY, et al. Improving CBCT Quality to CT level using deep-learning with generative adversarial network. Med Phys. 2021; 48(6): 2816–2826, doi:10.1002/mp.14624. [CrossRef] [PubMed]
Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA. Generative adversarial networks: an overview. IEEE Signal Processing Magazine. 2018; 35(1): 53–65, doi:10.1109/MSP.2017.2765202. [CrossRef]
Jerez D, Stuart E, Schmitt K, et al. A deep learning approach to identifying immunogold particles in electron microscopy images. Sci Rep. 2021; 11(1): 7771, doi:10.1038/s41598-021-87015-2. [CrossRef] [PubMed]
Levy JJ, Azizgolshani N, Andersen MJ, et al. A large-scale internal validation study of unsupervised virtual trichrome staining technologies on nonalcoholic steatohepatitis liver biopsies. Mod Pathol. 2021; 34(4): 808–822, doi:10.1038/s41379-020-00718-1. [CrossRef] [PubMed]
Pachade S, Porwal P, Thulkar D, et al. Retinal fundus multi-disease image dataset (RFMiD): a dataset for multi-disease detection research. Data. 2021; 6(2): 14, doi:10.3390/data6020014. [CrossRef]
Cen LP, Ji J, Lin JW, et al. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nat Commun. 2021; 12(1): 4828, doi:10.1038/s41467-021-25138-w. [CrossRef] [PubMed]
Yoo TK, Choi JY, Kim HK. A generative adversarial network approach to predicting postoperative appearance after orbital decompression surgery for thyroid eye disease. Comput Biol Med. 2020; 118: 103628, doi:10.1016/j.compbiomed.2020.103628. [CrossRef] [PubMed]
Tavakkoli A, Kamran SA, Hossain KF, Zuckerbrod SL. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep. 2020; 10(1): 21580, doi:10.1038/s41598-020-78696-2. [CrossRef] [PubMed]
Nemoto T, Futakami N, Kunieda E, et al. Effects of sample size and data augmentation on U-Net-based automatic segmentation of various organs. Radiol Phys Technol. 2021; 14(3): 318–327, doi:10.1007/s12194-021-00630-6. [CrossRef] [PubMed]
Tajbakhsh N, Jeyaseelan L, Li Q, Chiang JN, Wu Z, Ding X. Embracing imperfect datasets: a review of deep learning solutions for medical image segmentation. Medical Image Analysis. 2020; 63: 101693, doi:10.1016/j.media.2020.101693. [CrossRef] [PubMed]
Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science. New York: Springer International Publishing; 2015: 234–241, doi:10.1007/978-3-319-24574-4_28.
Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: July 21–26, 2017: 1125–1134.
Zhao H, Qiu X, Lu W, Huang H, Jin X. High-quality retinal vessel segmentation using generative adversarial network with a large receptive field. Int J Imaging Syst Technol. 2020; 30(3): 828–842, doi:10.1002/ima.22428. [CrossRef]
Lei B, Xia Z, Jiang F, et al. Skin lesion segmentation via generative adversarial networks with dual discriminators. Med Image Anal. 2020; 64: 101716, doi:10.1016/j.media.2020.101716. [CrossRef] [PubMed]
Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2017; 39(4): 640–651, doi:10.1109/TPAMI.2016.2572683. [CrossRef] [PubMed]
Wang Z, Zhong Y, Yao M, et al. Automated segmentation of macular edema for the diagnosis of ocular disease using deep learning method. Sci Rep. 2021; 11(1): 13392, doi:10.1038/s41598-021-92458-8. [CrossRef] [PubMed]
Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172(5): 1122–1131.e9, doi:10.1016/j.cell.2018.02.010. [CrossRef] [PubMed]
von Chamier L, Laine RF, Jukkala J, et al. Democratising deep learning for microscopy with ZeroCostDL4Mic. Nat Commun. 2021; 12(1): 2276, doi:10.1038/s41467-021-22518-0. [CrossRef] [PubMed]
Sahoo NK, Singh SR, Rajendran A, Shukla D, Chhablani J. Masqueraders of central serous chorioretinopathy. Surv Ophthalmol. 2019; 64(1): 30–44, doi:10.1016/j.survophthal.2018.09.001. [CrossRef] [PubMed]
Saporta A, Gui X, Agrawal A, et al. Deep learning saliency maps do not accurately highlight diagnostically relevant regions for medical image interpretation. medRxiv. Published online March 2, 2021:2021.02.28.21252634, doi:10.1101/2021.02.28.21252634.
Wen Y, Chen L, Qiao L, et al. On automatic detection of central serous chorioretinopathy and central exudative chorioretinopathy in fundus images. In: 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Seoul, South Korea; December 16–19, 2020: 1161–1165, doi:10.1109/BIBM49941.2020.9313274.
Komuku Y, Ide A, Fukuyama H, et al. Choroidal thickness estimation from colour fundus photographs by adaptive binarisation and deep learning, according to central serous chorioretinopathy status. Sci Rep. 2020; 10(1): 5640, doi:10.1038/s41598-020-62347-7. [CrossRef] [PubMed]
Zarbin MA, Lee AY, Keane PA, Chiang MF. Data science in translational vision science and technology. Transl Vis Sci Technol. 2021; 10(8): 20–20, doi:10.1167/tvst.10.8.20. [CrossRef] [PubMed]