Open Access
Artificial Intelligence  |   October 2022
Application of Deep Learning for Automated Detection of Polypoidal Choroidal Vasculopathy in Spectral Domain Optical Coherence Tomography
Author Affiliations & Notes
  • Papis Wongchaisuwat
    Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
  • Ranida Thamphithak
    Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
  • Peerakarn Jitpukdee
    Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
  • Nida Wongchaisuwat
    Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
  • Correspondence: Nida Wongchaisuwat, Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, 2 Wanglang Road, Bangkoknoi, Bangkok 10700, Thailand. e-mail: nida.wog@mahidol.edu 
Translational Vision Science & Technology October 2022, Vol.11, 16. doi:https://doi.org/10.1167/tvst.11.10.16
Abstract

Objective: To develop an automated polypoidal choroidal vasculopathy (PCV) screening model to distinguish PCV from wet age-related macular degeneration (wet AMD).

Methods: A retrospective review of spectral domain optical coherence tomography (SD-OCT) images was undertaken. The included SD-OCT images were classified into two distinct categories (PCV or wet AMD) prior to the development of the PCV screening model. The automated detection of PCV using the developed model was compared with the results of gold-standard fundus fluorescein angiography and indocyanine green (FFA + ICG) angiography. A framework of SHapley Additive exPlanations was used to interpret the results from the model.

Results: A total of 2334 SD-OCT images were enrolled for training purposes, and an additional 1171 SD-OCT images were used for external validation. The ResNet attention model yielded superior performance with average area under the curve values of 0.8 and 0.81 for the training and external validation data sets, respectively. The sensitivity/specificity calculated at a patient level was 100%/60% and 85%/71% for the training and external validation data sets, respectively.

Conclusions: A conventional FFA + ICG investigation to differentiate PCV from wet AMD requires intense health care resources and adversely affects patients. A deep learning algorithm is proposed to automatically distinguish PCV from wet AMD. The developed algorithm exhibited promising performance for further development into an alternative PCV screening tool. Enhancement of the model's performance with additional data is needed prior to implementation of this diagnostic tool in real-world clinical practice. The main limitation of the proposed model is that disease signs are not visible in every SD-OCT image.

Translational Relevance: Deep learning algorithms were applied to differentiate PCV from wet AMD based on OCT images, which could benefit the diagnostic process and minimize the risks of ICG angiography.

Introduction
Age-related macular degeneration (AMD) is a leading cause of vision impairment and loss among older adults. It is classified into dry or wet AMD. Wet AMD is often used synonymously with exudative AMD, although the terms are not identical: neovascular AMD (nvAMD) can be either exudative or nonexudative. Wet AMD is characterized by hemorrhage or exudate leaking from abnormal vessels in the macular area (serosanguinous maculopathy). Polypoidal choroidal vasculopathy (PCV) presents with clinical features similar to those found in wet AMD and is increasingly considered a subtype of wet AMD. Both conditions are associated with frequent relapse and require long-term follow-up treatment with intravitreous injections of anti–vascular endothelial growth factor (anti-VEGF).1,2 The prevalence of PCV is significantly higher in many East Asian countries than in Caucasian populations.3,4 
Ophthalmic imaging technologies are widely used for the diagnosis of macular diseases, including spectral domain optical coherence tomography (SD-OCT) and simultaneous fundus fluorescein and indocyanine green (FFA + ICG) angiography. SD-OCT images are commonly used to diagnose AMD. The three main SD-OCT findings in macular diseases are pigment epithelial detachment (PED), subretinal fluid (SRF), and intraretinal fluid, which in the macular area is typically referred to as cystoid macular edema (CME); diabetic macular edema, the term for macular edema caused by diabetic retinopathy, is closely related to CME (Fig. 1). All three of these features are observed on SD-OCT images in both wet AMD and PCV. 
Figure 1.
 
Characteristics of serosanguinous maculopathy observed by SD-OCT.
In clinical practice, PCV and wet AMD are closely related and share the common characteristics of serosanguinous maculopathy. Simultaneous FFA + ICG is considered the gold-standard investigation for diagnosis of both conditions; the presence of a hypercyanescent polypoidal lesion in the early phase of ICG is required for a diagnosis of PCV. Although the mainstay treatment for both diseases is intravitreal anti-VEGF injections, it is important to differentiate between wet AMD and PCV because laser photodynamic therapy is an additional treatment for PCV. FFA + ICG is an invasive and time-consuming investigation, and it also exposes patients to the risk of dye-related allergy or kidney injury and, more rarely, anaphylactic shock or renal shutdown. In contrast, SD-OCT is noninvasive, fast, less expensive, and more practical for routine clinical practice. Wet AMD and PCV share very similar characteristics on SD-OCT images, but PCV additionally shows sharply peaked or notched PED, double-hump PED, and branching vascular networks (BVNs). A thumb-like polyp containing hyperreflective rings, with or without an internal hyporeflective lumen, may also be observed.5 Clinical interpretation of SD-OCT for PCV screening can reach a sensitivity of 89% to 95% and a specificity of 85% to 93%.6,7 
Artificial intelligence (AI), and specifically deep learning (DL), is a promising tool for helping ophthalmologists with screening, diagnosis, and treatment recommendation. An automated system that distinguishes wet AMD from PCV using SD-OCT images would be less expensive, safer, and less time-consuming than FFA + ICG. It would also help to compensate for the scarcity of retinal specialists in rural areas and lessen the need for referrals to secondary or tertiary care hospitals. Using such a system, cases could be properly diagnosed and treated early, and patients who need referral could be referred on the basis of a strong diagnostic suspicion. 
In this study, we propose automated detection of PCV based solely on cross-sectional SD-OCT images using deep learning algorithms. Several deep convolutional neural networks with advanced techniques were thoroughly investigated to enhance model performance. We relied on standard network architectures with a transfer learning technique and on simplified versions of standard networks with an attention technique. A cross-validation technique was used when training these models to verify performance stability. Finally, a framework of SHapley Additive exPlanations (SHAP)8 was implemented to interpret the results of the proposed models. 
Model experiments with data application, an evaluation process, and the SHAP explainable framework distinguish our work from others. Our main contribution is applying several deep learning models with advanced techniques to differentiate PCV from wet AMD using SD-OCT images. In addition, we introduced evaluation performance at the patient level by taking multiple cross-sectional images from the same person into account. To the best of our knowledge, no previous study has trained similar deep learning algorithms using Thai population data. Finally, we adopted the SHAP framework to better understand the model's functionality. 
Related Work
In recent years, AI has been applied to various domains, including health care, and specifically to ophthalmology. Novel algorithms have been applied to tasks such as disease diagnosis in ophthalmology, as described in several previous studies.9–20 Most of these algorithms were introduced to automatically detect and differentiate macular diseases, such as AMD.21 Trained on large numbers of medical images, automated image analysis models have achieved results approaching those of human graders. Fundus photographs and SD-OCT are among the most common imaging tests used to diagnose retinal diseases. 
A fundus photograph is normally used to document, and sometimes diagnose, certain eye conditions, including macular diseases such as AMD and PCV. An automated prediagnosis of AMD using typical machine learning models to detect the appearance of drusen in fundus images has been proposed.22 Even though such machine learning methods perform relatively well, considerable effort is needed to engineer the features they rely on. To address this, deep learning models, especially convolutional neural networks (CNNs) and their variations, have gained popularity over the past few years. A custom-designed CNN was employed to automatically and accurately diagnose AMD at an early stage.23 Another deep convolutional neural network–based model was introduced to predict exudative AMD from fundus images.24 In addition, DeepSeeNet25 and its extension26 were proposed to detect individual AMD risk factors, which were further used to classify disease severity stages. Similar works by Burlina et al.,27 Grassmann et al.,28 and Liu et al.29 used CNN-based models for this task. Another pool of research relied on the transfer learning concept, which applies knowledge learned from one task to another: networks pretrained on a large standard data set were fine-tuned on retinal image data sets, as proposed in previous studies.30–34 
Ophthalmologists typically use SD-OCT to support diagnosis by observing the macula's distinctive layers, mapping abnormal characteristics, and measuring central retinal thickness. Early work applied traditional machine learning methods, coupled with specific techniques, to detect different stages of disease from SD-OCT images.35–38 Other research aimed to identify and segment specific macular fluid from SD-OCT images using a CNN-based autoencoder.39,40 In addition, a large pool of work has relied on deep neural networks to classify input SD-OCT images into different categories, such as AMD versus normal. Some of these works trained CNNs from scratch,41–44 while others relied on transfer learning.45–50 More complex techniques have also been considered when training deep neural networks, such as segmenting retinal components51–53 and incorporating a novel residual unit subsuming atrous separable convolution.54 In addition, an attention technique55 has been adopted for automated retinal image localization and recognition. For example, Fang et al.56 introduced a lesion-aware convolutional neural network (LACNN) using a soft attention map to identify lesion locations within SD-OCT images. Mishra et al.57 developed multilevel CNNs with dual attention, while Wu et al.58 focused the model on specific parts of an image. A joint-attention network consisting of a supervised encoding network and an unsupervised attention network has also been introduced.59 
Although various models have been proposed to distinguish types of AMD, very few studies have focused on the detection of PCV. Most PCV-related works aimed to automate segmentation of PED or to quantify PED volume. Optical coherence tomography angiography (OCTA) and multiple imaging systems were used to evaluate the three-dimensional characteristics of polypoidal structures and BVNs in PCV.60 Another work by Xu et al.61 introduced dual-stage deep neural networks for PED segmentation in PCV. More recent work has targeted the diagnosis of PCV directly. Yang et al.62 distinguished nvAMD from PCV using ICG angiography images, and Xu et al.63 used a bimodal convolutional neural network to differentiate AMD from PCV using fundus and SD-OCT images. 
Methodology
This study developed and validated an automated detection system to categorize a given SD-OCT image as either PCV or wet AMD. Multiple deep CNNs were implemented and compared against the findings of retinal specialists, which were considered the gold-standard evaluation method. The end-to-end flow process chart of our proposed system is illustrated in Figure 2. This retrospective cross-sectional study included SD-OCT imaging of patients who attended the outpatient department of the Department of Ophthalmology of the Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, from August 2019 to April 2021. The protocol for this study was approved by the Siriraj Institutional Review Board (approval no. 176/2563(IRB1)), and the requirement to obtain written informed consent was waived due to the retrospective nature of the study. 
Figure 2.
 
End-to-end flow process chart.
Data Collection Process and Data Preparation
A retrospective review of chart records was performed, and SD-OCT images were collected and preprocessed. Macular SD-OCT images were routinely acquired via cross-sectional scans with SPECTRALIS (Heidelberg Engineering, Heidelberg, Germany). Twenty-five horizontal raster scans covered the center of the fovea, 20 × 20 degrees horizontally and vertically. Automatic real-time averaging was employed, with the enhanced depth imaging feature turned off. These images were extracted and saved as JPEG files with no patient name or identification. An example of a cross-sectional SD-OCT scan is shown in Figure 1.
These cross-sectional SD-OCT images were manually classified and labeled as either PCV or wet AMD by retinal subspecialists. SD-OCT scan images without abnormal lesions were excluded from the study, and poor-quality images were eliminated prior to model training. All SD-OCT scans in the PCV training group were from cases of PCV confirmed by FFA + ICG. Diagnosis of PCV was based on the EVEREST diagnostic criteria: the presence of focal subretinal hyperfluorescence on ICG within the first 6 minutes plus at least one of the following: nodular appearance of polyp(s) on stereoscopic fundus examination, hypofluorescent halo around nodule(s), presence of a branching vascular network, pulsation of polyp(s) on dynamic ICG, orange subretinal nodules on color fundus photography, or massive submacular hemorrhage (4 disc areas in size).64 
The first set of collected data (set 1) was used to develop the CNNs, whose performance was internally verified. It was divided into training, validation, and test data sets comprising approximately 80%, 10%, and 10% of the images, respectively. The second data set (set 2) was collected for external validation: the selected model, with fine-tuned parameters obtained from the model development process, was evaluated on this set. The categorization of images into the different groups is summarized in Table 1. 
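As a rough illustration of this split, the following Python sketch divides the set 1 images into approximately 80%/10%/10% subsets. It is not the authors' code; the grouping by patient eye (the hypothetical `eye_id` column), the file name `set1_labels.csv`, and the use of scikit-learn's `GroupShuffleSplit` are assumptions made here to keep all images from one eye within a single subset.

```python
# A minimal sketch (not the authors' exact procedure) of an ~80/10/10 split.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("set1_labels.csv")  # hypothetical columns: image_path, label, eye_id

# First split off ~20% of eyes, then split that holdout in half (validation/test).
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, holdout_idx = next(gss.split(df, groups=df["eye_id"]))
holdout = df.iloc[holdout_idx]

gss2 = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=42)
val_idx, test_idx = next(gss2.split(holdout, groups=holdout["eye_id"]))

train_df = df.iloc[train_idx]      # ~80% of images
val_df = holdout.iloc[val_idx]     # ~10%
test_df = holdout.iloc[test_idx]   # ~10%
```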
Table 1.
 
Number of SD-OCT Images and Corresponding Subjects (Patient Eyes) in the Model Development and External Validation Data Sets
Table 1.
 
Number of SD-OCT Images and Corresponding Subjects (Patient Eyes) in the Model Development and External Validation Data Sets
Model Development and Verification
CNNs generally yield superior performance in computer vision and image-processing tasks.65 Several experiments among various choices of networks and techniques were performed to identify a suitable model architecture and relevant parameters. The accuracy of the developed systems was repeatedly evaluated until the desired level of performance was achieved. A cross-validation technique was employed to enhance model performance and verify model stability. 
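A minimal sketch of such a cross-validation loop is shown below, assuming k = 5 folds (the number of folds is not stated in the paper) and using hypothetical helpers `load_images` and `build_model` in place of the actual data pipeline and network; the per-fold AUCs are averaged, mirroring the model selection criterion described later.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# Hypothetical helpers: load_images returns (images, labels) with 1 = PCV,
# 0 = wet AMD; build_model returns a compiled Keras binary classifier.
X, y = load_images("set1")

aucs = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (tr, va) in enumerate(skf.split(X, y)):
    model = build_model()  # e.g., ResNet with attention
    model.fit(X[tr], y[tr], validation_data=(X[va], y[va]), epochs=30)
    p = model.predict(X[va]).ravel()
    aucs.append(roc_auc_score(y[va], p))
    print(f"fold {fold}: AUC = {aucs[-1]:.3f}")

print(f"average AUC across folds: {np.mean(aucs):.3f}")
```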
For the transfer learning technique, we retrieved the standard VGG16 and ResNet50 source models with weights pretrained on the ImageNet data set. The pretrained models were fine-tuned for our specific task of identifying PCV and wet AMD characteristics. Specifically, only the fully connected layers (the head of the network) were fine-tuned while the remaining layers were frozen. With transfer learning, the full standard models could be developed with much less computational power and computation time. 
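A minimal Keras sketch of this setup is given below: the ImageNet-pretrained ResNet50 backbone is frozen and only a newly added head is trained. The input size, head architecture, and hyperparameters are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained backbone, frozen so only the new head learns from SD-OCT data.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # output = P(PCV)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```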
Simplified versions of the standard models were also trained in full. With smaller and simpler networks, the weights of the whole model could be trained from scratch. In this work, a joint-attention ResNet-50 was implemented using the simplified ResNet-50. We also trained Optic-Net,54 which consists of a residual unit subsuming atrous separable convolution as well as a novel building block, on our data and further improved it with attention blocks; the resulting joint-attention Optic-Net was also implemented. In summary, we considered four groups of simplified models: ResNet,66 ResNet with attention,55 Optic-Net,54 and joint-attention Optic-Net.59 
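To illustrate the general idea of combining residual connections with an attention mechanism, the sketch below adds a simple sigmoid-gated spatial attention mask to a residual block in Keras. This is only one possible formulation; the attention actually used by the authors (who cite Vaswani et al.55 and the joint-attention network59) may differ, and the filter counts and input size are arbitrary.

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_residual_block(x, filters):
    """Residual block with a simple spatial attention gate (illustrative only)."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    # Attention mask in [0, 1], one weight per spatial location,
    # broadcast across channels to emphasize lesion-like regions.
    mask = layers.Conv2D(1, 1, padding="same", activation="sigmoid")(y)
    y = y * mask
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

# Example: a tiny attention-augmented classifier on 224x224 inputs.
inputs = tf.keras.Input(shape=(224, 224, 3))
h = attention_residual_block(inputs, 32)
h = layers.MaxPooling2D()(h)
h = attention_residual_block(h, 64)
h = layers.GlobalAveragePooling2D()(h)
outputs = layers.Dense(1, activation="sigmoid")(h)
model = tf.keras.Model(inputs, outputs)
```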
From our experiments, the best network was selected based on the average performance across the validation and test sets. The selected network was then further evaluated using the external validation data set. The probability threshold used to differentiate the two classes (PCV and wet AMD) at the image level, denoted threshold_i, was fine-tuned to yield the highest performance. Multiple evaluation metrics were then computed from the model's findings on external validation. A confusion matrix was constructed, and sensitivity (recall), specificity, overall accuracy, and F1 score were calculated, with PCV considered the positive class. Receiver operating characteristic curves were generated, and the area under the curve (AUC) was calculated. AUC was mainly used when comparing different models and parameter settings, whereas the F1 score was used to fine-tune the probability threshold. 
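The sketch below illustrates this image-level evaluation: it sweeps candidate values of threshold_i, keeps the one maximizing F1 with PCV as the positive class, and reports the remaining metrics. The arrays `y_true` (1 = PCV, 0 = wet AMD) and `y_prob` (predicted PCV probabilities) are assumed to come from the trained model; this is an illustrative sketch rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def evaluate(y_true, y_prob, threshold):
    """Image-level metrics with PCV treated as the positive class."""
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),
    }

# Sweep candidate image-level thresholds and keep the one with the best F1.
thresholds = np.linspace(0.05, 0.95, 19)
threshold_i = max(thresholds,
                  key=lambda t: f1_score(y_true, (y_prob >= t).astype(int)))
print(threshold_i, evaluate(y_true, y_prob, threshold_i))
```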
Several cross-sectional SD-OCT images from each patient's scan were used as input to the networks. In addition to image-based evaluation metrics, patient-based metrics were computed by considering all cross-sectional images extracted from the same eye of the same patient. The patient-level classification threshold, defined as threshold_p, was also fine-tuned. Finally, the selected model with the optimized thresholds was subjected to the external validation process. 
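A minimal sketch of this patient-level aggregation is shown below: per-image PCV probabilities are averaged within each patient eye, and the mean is compared against threshold_p. The column names (`eye_id`, `prob_pcv`, `label`) are hypothetical, and the value 0.32 is the cutoff reported later in the Results.

```python
import pandas as pd

# Hypothetical per-image predictions: one row per cross-sectional SD-OCT image.
preds = pd.DataFrame({"eye_id": eye_ids, "prob_pcv": y_prob, "label": y_true})

# Average the predicted PCV probability over all images from the same eye.
per_eye = preds.groupby("eye_id").agg(mean_prob=("prob_pcv", "mean"),
                                      label=("label", "first"))

threshold_p = 0.32  # patient-level cutoff reported in the Results
per_eye["pred_pcv"] = (per_eye["mean_prob"] > threshold_p).astype(int)
print(per_eye.head())
```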
To better understand how the proposed models made their predictions, we adopted a unified framework named SHAP for interpreting the results. SHAP values are derived from the well-known Shapley value in game theory and quantify the importance of each feature. Applied to an image classification task such as ours, each pixel is assigned an estimated SHAP value: pixels with positive SHAP values indicate an increased probability of the class of interest, while negative SHAP values indicate a reduced probability. 
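The sketch below shows one common way to obtain such pixel-level attributions for a Keras image classifier with the shap library; the choice of `GradientExplainer`, the background sample of 50 training images, and the variable names `model`, `X_train`, and `X_test` are assumptions for illustration, not the authors' exact SHAP configuration.

```python
import numpy as np
import shap

# Background sample used to approximate the expected model output.
background = X_train[np.random.choice(len(X_train), 50, replace=False)]
explainer = shap.GradientExplainer(model, background)

# Per-pixel SHAP values for a few held-out SD-OCT images.
shap_values = explainer.shap_values(X_test[:4])

# Red pixels (positive SHAP values) push the prediction toward PCV,
# blue pixels (negative values) push it toward wet AMD.
shap.image_plot(shap_values, X_test[:4])
```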
Results
Multiple models were extensively tested and compared to obtain the desired performance, as well as verify their generalizability. Considering all data folds, the average AUC of the validation and test sets in set 1 data was computed to enhance the reliability of the reported results. As shown in Table 2, the ResNet with attention model yielded the highest average AUC. Hence, this model was selected for external validation, the result of which showed 0.81 AUC (Fig. 3). 
Table 2.
 
AUC Performance and the Average AUC of Experimented Models on Set 1 Data
Figure 3.
 
The AUC from external validation of the best-performing model.
Using the selected model, we fine-tuned the probability threshold on the test set of set 1 to achieve the highest F1 score. After varying threshold_i, a threshold of 0.75 was found to yield the highest F1 score of 0.66. This fine-tuned probability threshold was then used to verify the model's performance during external validation (Table 3). We also evaluated the results from a patient perspective by considering all SD-OCT images generated from the same patient: the mean predicted PCV probability across all images was computed for each patient eye. Setting threshold_p at 0.32 yielded the best performance, with the highest F1 score. In other words, if the mean prediction score over all images from a patient eye was greater than 0.32, that eye was diagnosed as having PCV. Finally, we tested the preferred model with the fine-tuned thresholds on the external validation set, as shown in Table 3. 
Table 3.
 
Diagnostic Performance of the ResNet Attention Model With Fine-Tuned Parameters in the Test Set of the Model Development Part (Set 1) and in the External Validation Part (Set 2)
In addition, we computed SHAP values using the ResNet with attention model, which provided the best performance in our experiments. Across multiple images predicted to be PCV, we observed pixels with large positive SHAP values (red-labeled pixels) clustered at a high peaked PED (Fig. 4A; probability 1.0) and pixels with some positive SHAP values scattered at a notched PED (Fig. 4B; probability 0.48). A nonspecific SD-OCT image of SRF without PED, which can be observed in both wet AMD and PCV, showed few pixels with positive SHAP values (Fig. 4C; probability 0.45). In contrast to the PCV images, far fewer pixels with positive SHAP values were observed in images classified as wet AMD (Fig. 4D; probability 0.06). 
Figure 4.
 
SHAP values on SD-OCT images when applying the ResNet attention model.
Discussion
In this study, CNNs to differentiate PCV from wet AMD using cross-sectional SD-OCT images were proposed. Four simplified networks with more advanced techniques (ResNet, ResNet with attention, Optic-Net, and joint-attention Optic-Net) enhanced model performance beyond the traditional transfer learning technique. The attention mechanism was adopted to focus on the areas of interest most relevant to our task. These models achieved reasonably desirable AUC values, ranging from 0.76 to 0.80, on the model development data. 
Compared to previous work, we adopted various advanced techniques in terms of both network architecture and the validation mechanism in order to differentiate PCV from wet AMD. In our study, the attention technique enhanced the model's performance compared to traditional supervised learning CNNs. However, Optic-Net did not outperform the residual attention networks. Complicated networks typically require large data sets in order to learn hidden insights and perform well. The observed inferior performance of the more advanced Optic-Net architecture in our study can likely be explained by the small size of our data set. 
To our knowledge, no prior work has focused on PCV identification in the Thai population. Because data collection involving human subjects in the medical domain is always challenging, a relatively small data set was included and analyzed in this study. A study by Yang et al.62 used AI models trained on a publicly available AI platform to diagnose PCV based on ICG images; AI performance was comparable to that of retinal specialists, with an accuracy of 0.83. To the best of our knowledge, only two studies have used OCT images to train AI models to detect PCV. Xu et al.63 proposed a model that takes a fundus image and an OCT image simultaneously as bimodal input and produces a four-way output: wet AMD (excluding PCV), dry AMD (atrophic and early-stage AMD), PCV, and normal. Their OCT-based model alone reached an accuracy of 83.2%, and the bimodal DL model combining fundus and OCT images improved the accuracy to 87.4%. Another study, by Chou et al.,67 showed a comparable result: they proposed a bimodal model with an ensemble stacking technique, combining color fundus photographs (CFPs) and clinical features of OCT biomarkers, to distinguish between PCV and nvAMD with an accuracy of 83.67%, a sensitivity of 80.76%, and a specificity of 84.72%. In our proposed system, we evaluated the model at a patient level, which is more suitable for clinical application, and we relied solely on OCT images without taking fundus images into consideration. Because only a small data set was available, a cross-validation technique was implemented to enhance model stability, and we selected the best model based on the average AUC across the validation and test sets to strengthen the reliability of the model. The AUC from external validation is similar to the average AUC obtained on the model development data, which implies a desirable level of generalizability. 
According to the observed SHAP values, polyp lesions in PCV lie underneath the retinal pigment epithelium (RPE) layer, so RPE bulging and peaked, steep-sloped, or notched PED at the lesion site are observed on SD-OCT images. Pixels with large positive SHAP values clustered at the steep slope of a PED (Fig. 4A) indicated specific features with a high probability of PCV, whereas pixels with only very low positive SHAP values (Fig. 4D) indicated a high probability of wet AMD. However, inferior performance of the algorithm was still observed in some SD-OCT images of PCV with prominent notched PED (Fig. 4B). 
A main limitation in differentiating SD-OCT images of serosanguinous macular lesions was that only a few cuts of each SD-OCT scan passed through the polyp lesion sites and therefore yielded high positive SHAP values. In many other cuts from the same patient, polypoidal lesions were not obviously seen on the SD-OCT images; these cuts may have exhibited intraretinal fluid, subretinal fluid, or nonspecific PED, which gave unreliable SHAP values and decreased the performance of the automated algorithms (Fig. 4C). To enhance the performance of the algorithm, fine-tuning the patient-level threshold with additional data could better identify eyes in which polyp lesions are detected. 
In medical practice, multiple cross-sectional SD-OCT images are generally obtained, and all of these scans should be considered collectively to make a clear diagnosis. We therefore provided the final prediction at the patient level, using a threshold_p cutoff of 0.32. With this low patient-level threshold, fewer eyes are classified as wet AMD, favoring sensitivity for PCV. Our proposed method provided a desirable patient-level performance, with a sensitivity of 0.85 and a specificity of 0.71 on external validation. As expected, higher sensitivity for detecting PCV was observed when applying the patient-level threshold than when applying the image-level threshold. 
Combined multimodal retinal imaging techniques and noninvasive retinal investigations (e.g., OCT angiography) have been employed to identify polypoidal lesions and aid PCV diagnosis without the need for invasive FFA + ICG.68–71 However, variable sensitivity has been observed in detecting polypoidal lesions with OCTA because the flow signal can be obscured by the retinal pigment epithelium or by dense hemorrhage overlying the polypoidal lesions. The main advantage of our proposed model is that deep learning algorithms are used to differentiate PCV from wet AMD using cross-sectional SD-OCT images only, and the algorithm yielded promising performance despite the small sample size. The main limitations are the limited amount and quality of data, a common problem in the medical field, especially in relatively new applications such as PCV identification. Once the model is implemented in clinical practice, newly acquired data can be used to continuously improve it. 
Conclusion
An automated detection system to differentiate PCV from wet AMD using SD-OCT images was proposed in this study. Typical transfer learning techniques with standard networks were compared with more advanced techniques and novel network architectures. The residual attention networks provided superior performance at the image level based on a limited amount of locally collected data, and further analysis at the patient level performed better still, which suggests a possible benefit of this technology in real-world clinical practice. The SHAP value technique was implemented for interpretation purposes. Using the available data, both promising performance of the proposed model and an explanation of how it interprets the data were achieved. The collection of additional data to further train the model may enhance model performance. 
Acknowledgments
The authors thank Jirat Boonphun for contributing the results of standard networks using the transfer learning technique as the benchmark. 
Supported by a grant from the Kasetsart University Research and Development Institute. However, any opinions, findings, and conclusions or recommendations in this document are those of the authors and do not necessarily reflect views of the sponsor. 
Disclosure: P. Wongchaisuwat, None; R. Thamphithak, None; P. Jitpukdee, None; N. Wongchaisuwat, None 
References
Kim JB, Nirwan RS, Kuriyan AE. Polypoidal choroidal vasculopathy. Curr OphthalmoL Rep. 2017; 5(2): 176–186. [CrossRef] [PubMed]
Kumar A, Kumawat D, Sundar MD, et al. Polypoidal choroidal vasculopathy: a comprehensive clinical update. Ther Adv Ophthalmol. 2019; 11: 251584141983115. [CrossRef]
Sho K, Takahashi K, Yamada H, et al. Polypoidal choroidal vasculopathy: incidence, demographic features, and clinical characteristics. Arch Ophthalmol. 2003; 121(10): 1392–1396. [CrossRef] [PubMed]
Wong C, Yanagi Y, Lee WK, et al. Age-related macular degeneration and polypoidal choroidal vasculopathy in Asians. Prog Retin Eye Res. 2016; 53: 107–139. [CrossRef] [PubMed]
Cheung CMG, Lai TYY, Ruamviboonsuk P, et al. Polypoidal choroidal vasculopathy: definition, pathogenesis, diagnosis, and management. Ophthalmology. 2018; 125(5): 708–724. [CrossRef] [PubMed]
De Salvo G, Vaz-Pareira S, Keane PA, et al. Sensitivity and specificity of spectral-domain optical coherence tomography in detecting idiopathic polypoidal choroidal vasculopathy. Am J Ophthalmol. 2014; 158(6): 1228–1238.e1. [CrossRef] [PubMed]
Liu R, Li J, Li Z, et al. Distinguishing polypoidal choroidal vasculopathy from typical neovascular age-related macular degeneration based on spectral domain optical coherence tomography. Retina. 2016; 36(4): 778–786. [CrossRef] [PubMed]
Lundberg SM, Lee SI. A unified approach to interpreting model predictions. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, CA, USA; 2017.
De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018; 24(9): 1342–1350. [CrossRef] [PubMed]
Du XL, Li WB, Hu BJ. Application of artificial intelligence in ophthalmology. Int J Ophthalmol. 2018; 11(9): 1555. [PubMed]
Grewal PS, Oloumi F, Rubin U, Tennant MT. Deep learning in ophthalmology: a review. Can J Ophthalmol. 2018; 53(4): 309–313. [CrossRef] [PubMed]
Rahimy E . Deep learning applications in ophthalmology. Curr Opin Ophthalmol. 2018; 29(3): 254–260. [CrossRef] [PubMed]
Akkara JD, Kuriakose A. Role of artificial intelligence and machine learning in ophthalmology. Kerala J Ophthalmol. 2019; 31(2): 150. [CrossRef]
Balyen L, Peto T. Promising artificial intelligence-machine learning-deep learning algorithms in ophthalmology. Asia Pac J Ophthalmol (Phila). 2019; 8(3): 264–272. [PubMed]
Date RC, Jesudasen SJ, Weng CY. Applications of deep learning and artificial intelligence in retina. Int Ophthalmol Clin. 2019; 59(1): 39–57. [CrossRef] [PubMed]
Kapoor R, Walters SP, Al-Aswad LA. The current state of artificial intelligence in ophthalmology. Surv Ophthalmol. 2019; 64(2): 233–240. [CrossRef] [PubMed]
Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019; 103(2): 167–175. [CrossRef] [PubMed]
Badar M, Haris M, Fatima A. Application of deep learning for retinal image analysis: a review. Comput Sci Rev. 2020; 35: 100203. [CrossRef]
Moraru AD, Costin D, Moraru RL, Branisteanu DC. Artificial intelligence and deep learning in ophthalmology-present and future. Exp Ther Med. 2020; 20(4): 3469–3473. [PubMed]
Sengupta S, Singh A, Leopold HA, Gulati T, Lakshminarayanan V. Ophthalmic diagnosis using deep learning with fundus images—a critical review. Artificial Intelligence Med. 2020; 102: 101758. [CrossRef]
Virgili G, Menchini F, Casazza G, et al. Optical coherence tomography (OCT) for detection of macular oedema in patients with diabetic retinopathy. Cochrane Database Syst Rev. 2015(1).
García-Floriano A, Ferreira-Santiago Á, Camacho-Nieto O, Yáñez-Márquez C. A machine learning approach to medical image classification: detecting age-related macular degeneration in fundus images. Comput Electrical Eng. 2019; 75: 218–229. [CrossRef]
Tan JH, Bhandary SV, Sivaprasad S, et al. Age-related macular degeneration detection using deep convolutional neural network. Future Generation Comput Syst. 2018; 87: 127–135. [CrossRef]
Matsuba S, Tabuchi H, Ohsugi H, et al. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration. Int Ophthalmol. 2019; 39(6): 1269–1275. [CrossRef] [PubMed]
Peng Y, Dharssi S, Chen Q, et al. DeepSeeNet: a deep learning model for automated classification of patient-based age-related macular degeneration severity from color fundus photographs. Ophthalmology. 2019; 126(4): 565–575. [CrossRef] [PubMed]
Chen Q, Peng Y, Keenan T, Dharssi S, Agro E. A multi-task deep learning model for the classification of age-related macular degeneration. AMIA Joint Summits Transl Sci Proc. 2019; 2019: 505–514.
Burlina PM, Joshi N, Pacheco KD, Freund DE, Kong J, Bressler NM. Use of deep learning for detailed severity characterization and estimation of 5-year risk among patients with age-related macular degeneration. JAMA Ophthalmol. 2018; 136(12): 1359–1366. [CrossRef] [PubMed]
Grassmann F, Mengelkamp J, Brandl C, et al. A deep learning algorithm for prediction of age-related eye disease study severity scale for age-related macular degeneration from color fundus photography. Ophthalmology. 2018; 125(9): 1410–1420. [CrossRef] [PubMed]
Liu H, Wong DWK, Fu H, Xu Y, Liu J. DeepAMD: detect early age-related macular degeneration by applying deep learning in a multiple instance learning framework. In: Jawahar C, Li H, Mori G, Schindler K, eds. Computer Vision ACCV 2018. Lecture Notes in Computer Science, vol 11365. Cham: Springer; 2019, https://doi.org/10.1007/978-3-030-20873-8_40.
Burlina P, Freund DE, Joshi N, Wolfson Y, Bressler N. Detection of age-related macular degeneration via deep learning. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). Prague, Czech Republic: IEEE; 2016: 184–188.
Burlina P, Pacheco KD, Joshi N, Freund DE, Bressler NM. Comparing humans and deep learning performance for grading AMD: a study in using universal deep features and transfer learning for automated AMD analysis. Comput Biol Med. 2017; 82: 80–86. [CrossRef] [PubMed]
Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017; 135(11): 1170–1176. [CrossRef] [PubMed]
Burlina P, Joshi N, Pacheco KD, Freund DE, Kong J, Bressler NM. Utility of deep learning methods for referability classification of age-related macular degeneration. JAMA Ophthalmol. 2018; 136(11): 1305–1307. [CrossRef] [PubMed]
Govindaiah A, Smith RT, Bhuiyan A. A new and improved method for automated screening of age-related macular degeneration using ensemble deep neural networks. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Honolulu, HI, USA: IEEE; 2018: 702–705.
Srinivasan PP, Kim LA, Mettu PS, et al. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed Optics Express. 2014; 5(10): 3568–3577. [CrossRef]
Deng J, Xie X, Terry L, et al. Age-related macular degeneration detection and stage classification using choroidal OCT images. In: Campilho A, Karray F, eds. Image Analysis and Recognition. ICIAR 2016. Lecture Notes in Computer Science, vol 9730. Cham: Springer; 2016, https://doi.org/10.1007/978-3-319-41501-7_79.
Wang Y, Zhang Y, Yao Z, Zhao R, Zhou F. Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images. Biomed Optics Express. 2016; 7(12): 4928–4940.
Venhuizen FG, van Ginneken B, van Asten F, et al. Automated staging of age-related macular degeneration using optical coherence tomography. Invest Ophthalmol & Vis Sci. 2017; 58(4): 2318–2328. [PubMed]
Schlegl T, Waldstein SM, Bogunovic H, et al. Fully automated detection and quantification of macular fluid in OCT using deep learning. Ophthalmology. 2018; 125(4): 549–558. [PubMed]
Pekala M, Joshi N, Liu TA, Bressler NM, DeBuc DC, Burlina P. Deep learning based retinal OCT segmentation. Comput Biol Med. 2019; 114: 103445. [PubMed]
Lee CS, Baughman DM, Lee AY. Deep learning is effective for classifying normal versus age-related macular degeneration OCT images. Ophthalmol Retina. 2017; 1(4): 322–327. [PubMed]
Kaymak S, Serener A. Automated age-related macular degeneration and diabetic macular edema detection on OCT images using deep learning. In: 2018 IEEE 14th International Conference on Intelligent Computer Communication and Processing (ICCP). Cluj-Napoca, Romania: IEEE; 2018.
Kuwayama S, Ayatsuka Y, Yanagisono D, et al. Automated detection of macular diseases by optical coherence tomography and artificial intelligence machine learning of optical coherence tomography images. J Ophthalmol. 2019;6319581.
Sunija AP, Kar S, Gayathri S, Gopi VP, Palanisamy P. Octnet: a lightweight CNN for retinal disease classification from optical coherence tomography images. Comput Methods Programs Biomed. 2021; 200: 105877. [PubMed]
Treder M, Lauermann J, Eter N. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning. Graefes Arch Clin Exp Ophthalmol. 2018; 256(2): 259–265. [PubMed]
Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172(5): 1122–1131.e9. [PubMed]
Karri SP, Chakraborty D, Chatterjee J. Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration. Biomed Optics Express. 2017; 8: 579.
Lu W, Tong Y, Yu Y, Xing Y, Chen C. Deep learning-based automated classification of multi-categorical abnormalities from optical coherence tomography images. Transl Vis Sci Technol. 2018; 7(6): 41. [PubMed]
Perdomo O, Otálora S, González FA, Meriaudeau F, Müller H. OCT-NET: a convolutional network for automatic classification of normal and diabetic macular edema using sd-oct volumes. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). Washington, DC, USA: IEEE; 2018.
Motozawa N, An G, Takagi S, et al. Optical coherence tomography-based deep-learning models for classifying normal and age-related macular degeneration and exudative and non-exudative age-related macular degeneration changes. Ophthalmol Ther. 2019; 8(4): 527–539. [PubMed]
Li X, Shen L, Shen M, Tan F, Qiu CS. Deep learning based early stage diabetic retinopathy detection using optical coherence tomography. Neurocomputing. 2019; 369: 134–144.
Lu D, Heisler M, Lee S, et al. Deep-learning based multiclass retinal fluid segmentation and detection in optical coherence tomography images using a fully convolutional neural network. Med Image Anal. 2019; 54: 100–110. [PubMed]
Wang J, Hormel TT, Gao L, et al. Automated diagnosis and segmentation of choroidal neovascularization in OCT angiography using deep learning. Biomed Optics Express. 2020; 11(2): 927–944.
Kamran SA, Saha S, Sabbir AS, Tavakkoli A. Optic-net: A novel convolutional neural network for diagnosis of retinal diseases from optical tomography images. In: 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA). Boca Raton, FL, USA: IEEE; 2019.
Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17). Red Hook, NY, USA: Curran Associates Inc.; 2017: 6000–6010.
Fang L, Wang C, Li S, Rabbani H, Chen X, Liu Z. Attention to lesion: lesion-aware convolutional neural network for retinal optical coherence tomography image classification. IEEE Trans Med Imaging. 2019; 38(8): 1959–1970. [PubMed]
Mishra SS, Mandal B, Puhan N. Multi-level dual-attention based CNN for macular optical coherence tomography classification. IEEE Signal Proc Lett. 2019; 26(12): 1793–1797.
Wu J, Zhang Y, Wang J, et al. AttenNet: deep attention based retinal disease classification in OCT images. In: International Conference on Multimedia Modeling. Daejeon, South Korea: Springer-Verlag; 2020: 565–576.
Kamran SA, Tavakkoli A, Zuckerbrod SL. Improving robustness using joint attention network for detecting retinal degeneration from optical coherence tomography images. arXiv preprint arXiv:.08094. 2020.
Chi YT, Yang CH, Cheng CK. Optical coherence tomography angiography for assessment of the 3-dimensional structures of polypoidal choroidal vasculopathy. JAMA Ophthalmol. 2017; 135(12): 1310–1316. [PubMed]
Xu Y, Yan K, Kim J, et al. Dual-stage deep learning framework for pigment epithelium detachment segmentation in polypoidal choroidal vasculopathy. Biomed Optics Express. 2017; 8(9): 4061–4076.
Yang J, Zhang C, Wang E, Chen Y, Yu W. Utility of a public-available artificial intelligence in diagnosis of polypoidal choroidal vasculopathy. Graefes Arch Clin Exp Ophthalmol. 2020; 258(1): 17–21. [PubMed]
Xu Y, Yan K, Kim J, et al. Automated diagnoses of age-related macular degeneration and polypoidal choroidal vasculopathy using bi-modal deep convolutional neural networks. Br J Ophthalmol. 2020; 105(4): 561–566. [PubMed]
Koh A, Lee WK, Chen LJ, et al. EVEREST study: efficacy and safety of verteporfin photodynamic therapy in combination with ranibizumab or alone versus ranibizumab monotherapy in patients with symptomatic macular polypoidal choroidal vasculopathy. Retina. 2012; 32(8): 1453–1464. [PubMed]
Khan A, Sohail A, Zahoora U, Qureshi AS. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Rev. 2020; 53(8): 5455–5516.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA; 2016: 770–778.
Chou YB, Hsu CH, Chen WS, et al. Deep learning and ensemble stacking technique for differentiating polypoidal choroidal vasculopathy from neovascular age-related macular degeneration. Sci Rep. 2021; 11(1): 1–9. [PubMed]
Takayama K, Ito Y, Kaneko H, et al. Comparison of indocyanine green angiography and optical coherence tomographic angiography in polypoidal choroidal vasculopathy. Eye. 2017; 31(1): 45–52. [PubMed]
Tanaka K, Mori R, Kawamura A, Nakashizuka H, Wakatsuki Y, Yuzawa M. Comparison of OCT angiography and indocyanine green angiographic findings with subtypes of polypoidal choroidal vasculopathy. Br J Ophthalmol. 2017; 101(1): 51–55. [PubMed]
Wang Y, Yang J, Li B, Yuan M, Chen Y. Detection rate and diagnostic value of optical coherence tomography angiography in the diagnosis of polypoidal choroidal vasculopathy: a systematic review and meta-analysis. J Ophthalmol. 2019;6837601.
Talisa E, Kokame GT, Kaneko KN, Lian R, Lai JC, Wee R. Sensitivity and specificity of detecting polypoidal choroidal vasculopathy with en face optical coherence tomography and optical coherence tomography angiography. Retina. 2019; 39(7): 1343–1352. [PubMed]