Articles  |   April 2015
Automatic Segmentation of Polypoidal Choroidal Vasculopathy from Indocyanine Green Angiography Using Spatial and Temporal Patterns
Author Affiliations & Notes
  • Wei-Yang Lin
    Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
  • Sheng-Chang Yang
    Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
  • Shih-Jen Chen
    Department of Ophthalmology, Taipei Veterans General Hospital, Taiwan
    School of Medicine, National Yang Ming University, Taipei 11217, Taiwan
  • Chia-Ling Tsai
    Computer Science Department, Iona College, New Rochelle, New York, USA
  • Shuo-Zhao Du
    Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
  • Tock-Han Lim
    National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore
  • Correspondence: Shih-Jen Chen, Department of Ophthalmology, Taipei Veterans General Hospital, Taiwan; sjchen@vghtpe.gov.tw 
Translational Vision Science & Technology April 2015, Vol.4, 7. doi:https://doi.org/10.1167/tvst.4.2.7
Abstract

Purpose: To develop a computer-aided diagnostic tool for automated detection and quantification of polypoidal regions in indocyanine green angiography (ICGA) images.

Methods: The ICGA sequences of 59 polypoidal choroidal vasculopathy (PCV) treatment-naïve patients from five Asian countries (Hong Kong, Singapore, South Korea, Taiwan, and Thailand) were provided by the EVEREST study. The reading center provided the ground truth for the presence of polypoidal regions. The proposed detection algorithm used both temporal and spatial features to characterize the severity of polypoidal lesions in ICGA sequences. Leave-one-out cross-validation was carried out so that each patient was used once as the validation sample. For each patient, a fixed detection threshold of 0.5 on the severity was applied to obtain sensitivity, specificity, and balanced accuracy with respect to the ground truth.

Results: Our system achieved an average balanced accuracy of 0.9126 (sensitivity = 0.9125, specificity = 0.9127) for detection of polyps in the 59 ICGA sequences. Among the 222 features extracted from each ICGA sequence, the spatial variances exhibited the best discriminative power in distinguishing between polyp and nonpolyp regions. The results also indicated the importance of combining spatial and temporal features to further improve detection accuracy.

Conclusions: The developed software provides, for the first time, a means of detecting and quantifying polypoidal regions in ICGA images.

Translational Relevance: This preliminary study demonstrated a computer-aided diagnostic tool that enables objective evaluation of PCV and its progression. Using the proposed system, ophthalmologists can easily visualize polypoidal regions and obtain quantitative information about polyps.

Introduction
Polypoidal choroidal vasculopathy (PCV) is present in 40% to 50% of Asian patients with exudative maculopathy.1,2 Diagnosis of PCV depends on multimodal imaging studies, including stereo color fundus photographs, fluorescein angiography, indocyanine green angiography (ICGA), and optical coherence tomography. Of these, ICGA is the gold standard for diagnosis of PCV.3,4 In ICGA, polyps appear as single or multiple clusters of nodular hyperfluorescence within 6 minutes and persist into the late phase, either washing out or remaining hyperfluorescent.5 Other features of PCV may include a branching vascular network (BVN), pulsatile polyps, a hypofluorescent halo, and late-phase hyperfluorescent plaque.4,6,7 The location and size of polyps in ICGA are important features for surgeons deciding the area of thermal laser application or photodynamic therapy (PDT), because PDT achieves a higher polyp regression rate than monotherapy with anti-vascular endothelial growth factor (anti-VEGF) agents.6
However, diagnosis of polyps in ICGA remains a challenge. Generally, the polyps are described as a "dilated network of inner choroidal vessels with terminal hyperfluorescent aneurysm-like dilatations."8 Yet the aneurysm-like dilatation may lie within the network, and the polyps may range in size from numerous small hyperfluorescent dots to coil-like large vessel deformations.9 The variable caliber, tortuosity, and unusual course of the vessels of the BVN, together with the inner choroidal vascular complex, may interfere with localization of polyps. In addition, the hyperfluorescence of polyps, as well as of the choroidal vessels and BVN beneath the retinal pigment epithelium (RPE), can be imaged together with hyperfluorescent lesions above the RPE (e.g., retinal microaneurysms in diabetic retinopathy), making the differentiation of polyps difficult.
Our previous work on characterizing classic choroidal neovascularization (CNV) in fluorescein angiography (FA) introduced a method that trains the computer with temporal features (intensity change over time) of the CNV; the resulting severity map of the CNV also provides an objective measurement of treatment response.10 The hypothesis of this study is that polypoidal lesions in an ICGA sequence can be characterized and recognized with higher accuracy by a computerized system that uses quantitative information on both the temporal and spatial variations of fluorescence in the sequence.
Materials and Methods
Data Description
The goal of the EVEREST study6 was to evaluate treatment outcomes for patients with symptomatic macular PCV. In addition to meeting the angiographic criteria of PCV, patients were required to have lesion sizes smaller than 5400 μm and best corrected visual acuity between 24 and 73 letters (Snellen equivalent of 0.06–0.5). Exclusion criteria included any prior treatment of the study eye with focal laser, PDT, or intraocular surgery. Patients with a history of angioid streaks, high myopia, glaucoma, or RPE tear were also excluded. The study collected ICGA images from 61 PCV treatment-naïve patients from five Asian countries (Hong Kong, Singapore, South Korea, Taiwan, and Thailand). All ICGA images were captured using a confocal scanning laser ophthalmoscope (cSLO; Heidelberg Retina Angiograph Spectralis, Heidelberg Engineering, Heidelberg, Germany) at 1536 pixels per 30° arc angle; the resulting images were 768 × 768 pixels or larger. A standardized imaging protocol was applied to facilitate the analysis and characterization of PCV lesions. The protocol provided a standardized image set containing a dynamic angiography (first 30 seconds after injection) and still images (captured at approximately 1, 3, 5, 10, and 20 minutes after injection). In this study, we used only the still images as input to the proposed system. The videos taken by the cSLO system were not used because of motion distortion caused by eye movement; in addition, the early phase dynamic angiograms usually had high variations in brightness. Two patients lacked still images at some of the predefined time points and were therefore excluded from our experimental validation, leaving 59 subjects from the EVEREST study. Polyp areas were manually annotated by the investigators in the reading center, and these manually labeled polyp locations served as the reference standard in our experiments.
The ICGA images and their reference standard were made available to us by the courtesy of the investigators of the EVEREST trial. 
Image Registration
The proposed method for detecting polyps in ICGA images consisted of three main steps (Fig. 1). We used existing software11 for the first step, image registration, and wrote our own code for the other steps. The whole system was implemented in Matlab R2012a (The MathWorks, Inc., Natick, MA) and Visual Studio C++ 2010 under the Microsoft Windows 7 operating system (Microsoft Corp., Redmond, WA). To compensate for frame-to-frame eye movement, we first performed image registration so that the input ICGA sequence was spatially aligned. After transforming the input images into a common coordinate system, we could characterize the temporal behavior of spatially corresponding pixels. In this study, we chose a registration technique called the Edge-Driven Dual-Bootstrap Iterative Closest Point algorithm (Edge-Driven DB-ICP).11 This algorithm is fully automatic and can handle images with substantial, nonlinear intensity variations, which are typical of ICGA images.
Figure 1
 
The procedure for detecting polyps in an ICGA sequence contains three main steps. First, we perform image registration on the input sequence using the Edge-Driven DB-ICP algorithm.11 Then, we extract spatial and temporal features from the spatially aligned ICGA sequence. Finally, we apply the AdaBoost algorithm to choose the best features and combine them to produce a strong classifier. The output of our system provides both a probability map and a severity map.
Feature Extraction
To facilitate the characterization of polyps in an ICGA sequence, we extracted features from both the temporal and spatial domains, with a total of 222 quantitative features for each set of mapped pixels in the sequence. The temporal features included the intensity values (In), normalized intensity values (ηn), slope of intensity change (θn), regression coefficients (α and β), and the mean and variance of intensity values at five time points (i.e., 1, 3, 5, 10, and 20 minutes). The spatial features included the mean and variance of the intensity values in a ring-shaped area for each image frame (Fig. 2). These features are summarized in Table 1 and their details are provided in the Appendix.
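The two feature families above can be sketched in Python (the paper's implementation was in Matlab; the function names here are illustrative, and the ring-sampling scheme follows the P = 8R description in Figure 2):

```python
import numpy as np

def ring_spatial_features(img, cy, cx, R):
    """Mean and variance of intensity sampled at P = 8*R points
    on a circle of radius R centered at pixel (cy, cx)."""
    P = 8 * R
    angles = 2 * np.pi * np.arange(P) / P
    ys = np.clip(np.round(cy + R * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + R * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    samples = img[ys, xs].astype(float)
    return samples.mean(), samples.var()

def temporal_slope(intensities, times):
    """Least-squares slope of intensity over time, one of the
    temporal features (slope of y = alpha + beta * t)."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(intensities, dtype=float)
    beta = ((t - t.mean()) * (y - y.mean())).sum() / ((t - t.mean()) ** 2).sum()
    return beta
```

Applying `ring_spatial_features` at each registered pixel and each of the five time points, together with the temporal statistics, would populate the 222-dimensional feature vector described in Table 1.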
Figure 2
 
We extract the spatial features using a set of P points on a circle of radius R, with P = 8R in our experiments. During classifier training, the AdaBoost algorithm learns from the training samples to determine appropriate values of R.
Table 1
 
Summary of 222 Quantitative Features Used in Our Experiments
Training, Classification, and Generation of Probability Map
The training data contained feature vectors extracted from polyp regions (positive samples) and from nonpolyp regions (negative samples). In our experiment, we randomly selected 40 positive and 40 negative samples from each ICGA sequence. Given the training samples, the enhanced AdaBoost algorithm12 iteratively constructed weak classifiers. In each iteration of the training process, the weighting coefficients associated with the training samples were adjusted so that misclassified samples received greater weights. In the next iteration, a new weak classifier was trained using the training samples with the updated weights. As a result, previously misclassified samples exerted more influence on the training of subsequent classifiers. After T iterations, the resulting strong classifier can be written as

g(x) = Σ_{t=1}^{T} ωt ht(x) / Σ_{t=1}^{T} ωt,

where x denotes a 222-dimensional feature vector and ωt represents a measure of classification accuracy for the weak classifier ht. The number of iterations T was set to 400 in our experiment. Given the feature vector x extracted from a pixel location, the strong classifier g(x) generates a probability value P indicating how likely it is that the location belongs to a polyp region.
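A minimal sketch of the strong-classifier combination is shown below. This is a generic normalized weighted vote over binary weak-classifier outputs, not a reproduction of the enhanced AdaBoost variant of reference 12; the function name is illustrative:

```python
import numpy as np

def strong_classifier_probability(weak_outputs, weights):
    """Combine T weak-classifier outputs h_t(x) in {0, 1} with
    accuracy weights w_t into a score in [0, 1] via a normalized
    weighted vote; scores above 0.5 correspond to a majority vote
    for the polyp class."""
    h = np.asarray(weak_outputs, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * h).sum() / w.sum())
```

For example, three weak classifiers voting (1, 1, 0) with weights (2, 1, 1) yield a score of 0.75, which exceeds the 0.5 decision threshold.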
Converting Probability Map to Severity Map
To achieve better visualization of polyp regions, we applied contrast enhancement to the probability map; the result is called a "severity map." The transformation between a probability map and a severity map was defined as

f(P) = 0.5 (P − Pmin) / (0.5 − Pmin) for P < 0.5,
f(P) = 0.5 + 0.5 (P − 0.5) / (Pmax − 0.5) for P ≥ 0.5,

where P denotes a probability value, and Pmax and Pmin denote the maximum and minimum values in the probability map, respectively. The aim of this function is to highlight the information contained in the range of available probability values (i.e., [Pmin, Pmax]). It is worth noting that the probability value 0.5 is not altered by the transformation. Thus, one can perform classification on a probability map or a severity map using 0.5 as the threshold value, and the resulting classifications will be identical. Figure 3 shows the result of converting a probability map (Fig. 3b) into a severity map (Fig. 3c). One can observe a considerable enhancement in image contrast in this sample result.
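The contrast-stretching step can be sketched as follows. This assumes, as in the cases discussed in this article, that the probability values straddle 0.5 (Pmin < 0.5 < Pmax); the function name is illustrative:

```python
import numpy as np

def severity_map(prob_map):
    """Piecewise-linear contrast stretch of a probability map to [0, 1]
    that maps Pmin -> 0 and Pmax -> 1 while leaving the decision
    threshold 0.5 unchanged. Assumes Pmin < 0.5 < Pmax."""
    p = np.asarray(prob_map, dtype=float)
    p_min, p_max = p.min(), p.max()
    s = np.empty_like(p)
    lo = p < 0.5
    s[lo] = 0.5 * (p[lo] - p_min) / (0.5 - p_min)
    s[~lo] = 0.5 + 0.5 * (p[~lo] - 0.5) / (p_max - 0.5)
    return s
```

With the example given later in the Discussion (Pmin = 0.3, Pmax = 0.7), this mapping sends 0.3 to 0.0, 0.5 to 0.5, and 0.7 to 1.0.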
Figure 3
 
(a) The ICGA image captured at 10 minutes after injection. (b) The probability map and (c) the severity map are displayed using the same color bar (shown in the middle). The color bar extends from dark blue to red to represent the lowest to the highest values (i.e., 0.0 ∼ 1.0). The center of the color bar corresponds to the value of 0.5, which is typically used as the classification threshold. Higher scores in these two maps indicate a higher likelihood of polypoidal lesions. The severity map contains the same information as the probability map but yields better visualization of polyp regions.
Results
The performance of our system was evaluated using the 59 ICGA sequences (one per patient) collected in the EVEREST trial. We adopted a leave-one-out scheme for computing the accuracy of polyp detection: 58 of the 59 sequences were used to train an AdaBoost classifier, the remaining sequence was used for testing, and the procedure was repeated so that each of the 59 sequences served once as the test case.10 We measured the correctness of the detection results using sensitivity, specificity, and balanced accuracy. The sensitivity and specificity can be written as

Sensitivity = TP / (TP + FN),
Specificity = TN / (TN + FP),

where TP denotes true positives (polyp regions correctly identified as polyp); FN denotes false negatives (polyp regions incorrectly identified as nonpolyp); TN denotes true negatives (nonpolyp regions correctly identified as nonpolyp); and FP denotes false positives (nonpolyp regions incorrectly identified as polyp). An example result is shown in Figure 4, where Figure 4c was obtained using the reference standard in Figure 4a and the severity map in Figure 4b. The balanced accuracy is defined as

Balanced accuracy = (Sensitivity + Specificity) / 2,

which avoids the inflated performance that can occur when the sample sizes of the positive (polyp present) and negative (polyp absent) groups are highly imbalanced. The results of polyp detection on the 59 subjects are shown in Table 2. Our proposed system achieved an average balanced accuracy of 0.9126 (sensitivity = 0.9125, specificity = 0.9127). Figure 5 shows the two extreme results (best and worst) from this experiment.
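The pixel-wise evaluation above can be sketched directly from the definitions (the function name is illustrative):

```python
import numpy as np

def detection_metrics(pred, truth):
    """Sensitivity, specificity, and balanced accuracy computed
    pixel-wise from a boolean prediction mask and a boolean
    reference-standard mask."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # polyp pixels correctly detected
    fn = np.sum(~pred & truth)   # polyp pixels missed
    tn = np.sum(~pred & ~truth)  # nonpolyp pixels correctly rejected
    fp = np.sum(pred & ~truth)   # nonpolyp pixels flagged as polyp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, 0.5 * (sens + spec)
```

Thresholding the severity map at 0.5 yields the prediction mask; comparing it against the reading-center annotation yields the per-sequence metrics reported in Table 2.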
Figure 4
 
(a) The red contours, drawn by the ophthalmologist, enclose polyp regions. This serves as the reference standard used to validate the proposed method. (b) The severity map of the polyps is displayed using a color map. (c) The true positive (orange), false negative (dark blue), true negative (yellow-green), and false positive (light blue) regions are determined by comparing the AdaBoost classification result to the reference standard. The resulting sensitivity and specificity are 0.9965 and 0.9807, respectively, in this case.
Table 2
 
Results of Polyp Detection Using 59 ICGA Sequences From the EVEREST Study
Figure 5
 
Examples of polyp detection results in two extreme cases. (a, b, c) A case with a sensitivity of 1.0 and a specificity of 0.9769; the resulting balanced accuracy is 0.9884. (d, e, f) One of the two cases with balanced accuracy below 0.8; in this case, the sensitivity is 0.4306 and the specificity is 0.9164. (a, d) The manually labeled polyp regions (i.e., reference standard) are enclosed in red contours. (b, e) The severity maps generated by our polyp detection system. (c, f) The true positive, false negative, true negative, and false positive regions are displayed in orange, dark blue, yellow-green, and light blue, respectively. In contrast to (c), which has no false negative (dark blue) spots, (f) contains many small dark blue spots that the system failed to recognize as polyps. The poor sensitivity in this case is probably due to the lack of early phase video data.
Two cases had balanced accuracy below 80%; both had poor sensitivity but acceptable specificity. Both cases have multiple, relatively small polypoidal lesions, making it very challenging for automatic detection algorithms, or even experienced ophthalmologists, to identify the polypoidal regions exactly. In addition, the polyps in case 19 appear hyperfluorescent in the early phase but become hypofluorescent much more rapidly in the middle phase; the intensity changes in this case are quite different from those of the other cases. For most of the cases in the EVEREST trial, polyps appear hyperfluorescent in the early phase and their fluorescence continues to increase in the middle phase.
Discussion
Using the ICGA images from treatment-naïve patients with PCV as the training and testing base, our system achieved an accuracy of 91.3% in delineating polyps. This robustness was due to the combined analysis of spatial and temporal features. Features extracted from an image captured at a fixed time point are called spatial features; in this article, the spatial features included the mean and variance of intensity values over a ring-shaped region (Fig. 2). The higher the spatial variance, the more likely it is that a polypoidal structure exists. In contrast, features extracted from a sequence of still images captured over a series of time steps are called temporal features; the temporal features were designed to characterize intensity changes over time.
Notice that our previous work on CNV segmentation10 used only temporal features (i.e., intensity, slope, and regression coefficients). However, in ICGA sequences, some tissues (e.g., arteries, veins, BVN) may have temporal characteristics similar to those of polyps, so relying solely on temporal features inevitably causes ambiguity in detecting polyps. This observation motivated us to exploit features in the spatial domain. Table 3 summarizes the results of using the temporal features, the spatial features, and the combined features. The significance of these results is threefold. First, the spatial features achieved higher balanced accuracy than the temporal features, indicating that the spatial features are more discriminative. Second, the spatial features achieved much higher specificity than the temporal features; in other words, by taking spatial characteristics into account, tissues with temporal patterns similar to polyps were less likely to be misclassified as polyp (for a representative example, compare Figs. 6e, 6f). This confirmed our hypothesis that temporal features alone are insufficient to distinguish polyps from other tissues, and showed how the spatial features contribute to the characterization of polyps in addition to the temporal features. Third, the overall performance was significantly improved by combining temporal and spatial features, clearly demonstrating the benefit of using both. Because the temporal and spatial features were extracted from different domains, their corresponding polyp detection outcomes may not agree with each other; consequently, a misclassification made by one may be suppressed by the other. This synergistic effect led to much better discriminative capability and made the proposed polyp detection system much more reliable.
Figure 6 shows polyp detection results with and without the spatial features, allowing the reader to see the differences.
Table 3
 
Results of Polyp Detection Using Different Types of Features
Figure 6
 
Examples showing the benefits of using the combined features in polyp detection. (a, d) ICGA images with manually drawn polyp regions. (b, e) Detection results obtained using only temporal features. (c, f) Detection results obtained using temporal and spatial features. Note that the true positive, false negative, true negative, and false positive regions are displayed in orange, dark blue, yellow-green, and light blue, respectively. (a, b, c) In this case, the temporal features are insufficient to correctly identify the polyps; combining temporal and spatial features yields a substantial improvement in sensitivity (i.e., true positive orange spots). (d, e, f) In this case, the spatial features help reduce false positives (i.e., light blue spots) while maintaining acceptable detection accuracy. By including the spatial features, our proposed method achieves a balanced accuracy of 0.9304 in this case.
Since a total of 222 features were created from the ICGA sequences, it was of great interest to know whether some features have better discriminative power than others. A feature with high discriminative capability yields a low error rate for a particular classification problem. During the AdaBoost training process, the algorithm iteratively finds the features with the lowest error rates and uses them to construct weak classifiers; in other words, if a feature is more discriminative than the others, it is selected earlier during training. For each ICGA sequence, we carried out an AdaBoost procedure using the other 58 sequences as training samples and obtained the order in which the 222 features were iteratively selected. Of course, the rankings of the selected features were not necessarily the same for different training sets. Thus, we computed the mean rank and the corresponding SD for each of the 222 features.
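The mean-rank computation across leave-one-out folds can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def feature_mean_ranks(selection_orders):
    """Given, for each leave-one-out fold, the order in which feature
    indices were selected by AdaBoost, return each feature's mean rank
    and SD across folds (rank 1 = selected first)."""
    selection_orders = np.asarray(selection_orders)  # shape: (n_folds, n_features)
    n_folds, n_features = selection_orders.shape
    ranks = np.empty_like(selection_orders, dtype=float)
    for f in range(n_folds):
        # the feature selected k-th in fold f receives rank k + 1
        ranks[f, selection_orders[f]] = np.arange(1, n_features + 1)
    return ranks.mean(axis=0), ranks.std(axis=0)
```

Sorting features by the resulting mean rank gives a table of the form shown in Table 4, with the most consistently early-selected (most discriminative) features at the top.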
Table 4 presents the top 30 features, according to their mean ranks, chosen by the AdaBoost algorithm. It is worth noting that the spatial variance VAR_4^{24,3} consistently ranked first during the leave-one-out cross-validation process for all 59 sequences. Among the top 10 features, seven were spatial variances from the late phase (i.e., t3 = 5, t4 = 10, and t5 = 20 minutes), indicating that the intensity variance on a ring-shaped area yields better discriminative power in the late phase. This is because polyps may remain hyperfluorescent while the surrounding tissue becomes hypofluorescent in the late phase of angiography.5 In addition, the radii of the top 10 features ranged from one to nine pixels; about 70% of the polyp sizes were within this range (i.e., smaller than 270 μm in diameter). Overall, the majority of the top 30 features (28 of 30) were spatial features. These results clearly demonstrated the importance of using spatial features in polyp detection and emphasized that the detection of polyps depends largely on the contrast between polyps and their surrounding background rather than on intensity change over time.
Table 4
 
Top 30 Features Selected by the AdaBoost Algorithm
As shown in Figure 1, our system generated both quantitative and qualitative results (i.e., probability and severity maps). Since the values in a probability map are typically concentrated in a narrow range, we expanded them to a wider range for better visualization. For example, consider a probability map with maximum and minimum values of 0.7 and 0.3, which are rescaled to 1.0 and 0.0, respectively, by the function f(P). The ranges of pixel values before and after transformation are 0.4 and 1.0, respectively; thus, the resulting severity map has a wider range of pixel values and yields better contrast than the original probability map. Put simply, the probability map is suitable for providing quantitative information (i.e., absolute probability values), and the severity map is suitable for qualitative analysis of polypoidal regions (i.e., showing relative levels of severity between different locations in the choroid).
To provide a clinically useful tool for diagnosing PCV, several issues remain to be explored in future research. First, it remains a challenge to use the early phase video in the proposed system. It is reasonable to expect that the information contained in the early phase (i.e., the first 30 seconds after injection) could better characterize polyps. However, in clinical practice, a technician usually needs to manually adjust the brightness of the video during the early phase, so the intensity changes in the video are due not only to the circulation of indocyanine green dye but also to the manual brightness adjustment. This imposes a technical challenge for taking the early phase video into consideration; to overcome it, we will need a filtering method for rejecting the undesirable component in the early phase video. Second, our method only detects polyps at this stage. To consider pathological conditions associated with PCV (e.g., BVN, pigment epithelial detachment, and so on), we will need to develop a more sophisticated scheme for this multicategory classification problem.
Our current study was based on the EVEREST trial, which enrolled only eyes with lesion sizes smaller than 5400 μm and fair visual acuity (between 0.06 and 0.5). However, PCV has a wide clinical presentation, ranging from a small central serous chorioretinopathy-like picture with excellent visual acuity13 to massive submacular hemorrhage or diffuse RPE atrophy and fibrosis with poor visual outcome. The characterization of such polyps, or of polyps after treatment, will be of interest and could be tested with our system in the future.
Conclusion
In conclusion, this work showed for the first time that polypoidal regions in an ICGA sequence can be reliably detected using temporal and spatial features. The proposed polyp detection system provides both quantitative and qualitative results, which could be useful for the diagnosis of PCV and the assessment of treatment outcomes. The results of this preliminary study further support efforts to develop computer-aided tools that can automatically classify other pathologic conditions commonly found in PCV.
Acknowledgments
The authors thank the investigators in the EVEREST trial for sharing the image data: Adrian Koh, Won Ki Lee, Lee-Jen Chen, Hakyoung Kim, Timothy Lai, and Paisan Ruamviboonsuk. Supported by grants from the Ministry of Science and Technology, Taiwan (Grant No. 102-2221-E-194-056). 
Disclosure: W.Y. Lin, None; S.C. Yang, None; S.J. Chen, None; C.L. Tsai, None; S.Z. Du, None; T.H. Lim, None 
References
1. Gomi F, Ohji M, Sayanagi K, et al. One-year outcomes of photodynamic therapy in age-related macular degeneration and polypoidal choroidal vasculopathy in Japanese patients. Ophthalmology. 2008; 115: 141–146.
2. Chang YC, Wu WC. Polypoidal choroidal vasculopathy in Taiwanese patients. Ophthalmic Surg Lasers Imaging. 2009; 40: 576–581.
3. Koh AH, Chen LJ, Chen SJ, et al. Polypoidal choroidal vasculopathy: evidence-based guidelines for clinical diagnosis and treatment. Retina. 2013; 33: 686–716.
4. Lim TH, Laude A, Tan CS. Polypoidal choroidal vasculopathy: an angiographic discussion. Eye. 2010; 24: 483–490.
5. Silva RM, Figueira J, Cachulo ML, Duarte L, de Abreu JRF, Cunha-Vaz JG. Polypoidal choroidal vasculopathy and photodynamic therapy with verteporfin. Graefes Arch Clin Exp Ophthalmol. 2005; 243: 973–979.
6. Koh A, Lee WK, Chen LJ, et al. EVEREST study: efficacy and safety of verteporfin photodynamic therapy in combination with ranibizumab or alone versus ranibizumab monotherapy in patients with symptomatic macular polypoidal choroidal vasculopathy. Retina. 2012; 32: 1453–1464.
7. Kang SW, Chung SE, Shin WJ, Lee JH. Polypoidal choroidal vasculopathy and late geographic hyperfluorescence on indocyanine green angiography. Br J Ophthalmol. 2009; 93: 759–764.
8. Leal S, Silva R, Figueira J, et al. Photodynamic therapy with verteporfin in polypoidal choroidal vasculopathy: results after 3 years of follow-up. Retina. 2010; 30: 1197–1205.
9. Yuzawa M, Mori R, Kawamura A. The origins of polypoidal choroidal vasculopathy. Br J Ophthalmol. 2005; 89: 602–607.
10. Tsai CL, Yang YL, Chen SJ, Lin KS, Chan CH, Lin WY. Automatic characterization of classic choroidal neovascularization by using AdaBoost for supervised learning. Invest Ophthalmol Vis Sci. 2011; 52: 2767–2774.
11. Tsai CL, Li CY, Yang G, Lin KS. The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence. IEEE Trans Med Imaging. 2010; 29: 636–649.
12. Domingo C, Watanabe O. MadaBoost: a modification of AdaBoost. Annual Conference on Computational Learning Theory. 2000; 180–189.
13. Yannuzzi LA, Freund KB, Goldbaum M, et al. Polypoidal choroidal vasculopathy masquerading as central serous chorioretinopathy. Ophthalmology. 2000; 107: 767–777.
14. Recktenwald GW. Numerical methods with MATLAB: implementations and applications. Upper Saddle River, NJ: Prentice Hall; 2000.
Appendix
Table 1 summarizes the 222 quantitative features that are proposed in this study. In the following, we will explain the properties that make the proposed features suitable for the detection of polyps. 
Temporal Intensity Values
The standard diagnosis of PCV relies largely on the change of pixel intensity over the ICGA sequence. Thus, it is natural to choose intensity values at different time intervals as features in our system. We used In to denote the intensity value at time tn, where n = 1 … 5 (i.e., 1, 3, 5, 10, and 20 minutes). 
Normalized Temporal Intensity Values
In order to better reflect the extent of fluorescence leakage in an ICGA sequence, we subtracted the initial intensity value from the intensity values obtained at later time points. These features, called normalized intensity values, were calculated as ηn = In − I1. This simple procedure also helps reduce intensity fluctuations due to illumination variations in the image background. 
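The normalized intensity computation above amounts to a single array subtraction. The sketch below is a minimal illustration in Python/NumPy; it is not part of the original paper, and the function name and the 5 × H × W array layout are our own assumptions:

```python
import numpy as np

def normalized_intensity(I):
    """Normalized temporal intensity features eta_n = I_n - I_1.

    I: array of shape (5, H, W) holding the registered ICGA frames
       at 1, 3, 5, 10, and 20 minutes.
    Returns an array of shape (4, H, W) with each later frame
    normalized against the first frame.
    """
    return I[1:] - I[0]
```

Because the same first frame is subtracted at every pixel, a constant illumination offset in the background cancels out, which is the fluctuation-reduction effect mentioned above.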
Temporal Slope
The slope of the temporal intensity profile represents the rate of change in fluorescence over time. In the proposed method, the n-th slope was calculated as θn = (In − In−1)/(tn − tn−1). Note that the intensity value at the very beginning was assumed to be zero (i.e., I0 = 0 and t0 = 0). Consequently, we can derive slope values indicating the average leakage speed of fluorescence over different time intervals. 
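With the virtual zero frame prepended, the slope features reduce to elementwise finite differences. The sketch below (Python/NumPy) is our own illustration under assumed names and array shapes, not the authors' code:

```python
import numpy as np

def temporal_slopes(I, t):
    """Slope features theta_n = (I_n - I_{n-1}) / (t_n - t_{n-1}).

    A virtual frame I_0 = 0 at t_0 = 0 is prepended, as assumed in
    the text. I: (5, H, W) intensities; t: (5,) times in minutes.
    Returns an array of shape (5, H, W), one slope per interval.
    """
    I_ext = np.concatenate([np.zeros_like(I[:1]), I], axis=0)
    t_ext = np.concatenate([[0.0], t])
    dt = np.diff(t_ext)                           # interval lengths
    return np.diff(I_ext, axis=0) / dt[:, None, None]
```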
Temporal Regression Coefficients
Regression is the process of finding an analytical function that approximates a set of experimental measurements. In particular, a set of time–intensity data points {tn, In} can be approximated by f(t) = αt + β. The coefficients α and β are the slope and intercept, respectively, of the line that approximates the data points. The values of α and β can be determined by solving the normal equations.14 In addition to performing regression on the whole ICGA sequence, we also computed regression coefficients (denoted by α5min and β5min) from the first 5 minutes. 
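The line fit f(t) = αt + β can be computed for every pixel at once with a single vectorized least-squares call; `numpy.linalg.lstsq` solves the same minimization as the normal equations. This sketch uses our own function and variable names for illustration, not the paper's implementation:

```python
import numpy as np

def regression_coeffs(I, t):
    """Least-squares fit of f(t) = alpha*t + beta at every pixel.

    I: (N, H, W) intensities at acquisition times t (shape (N,)).
    Returns alpha and beta, each of shape (H, W).
    """
    N, H, W = I.shape
    A = np.stack([t, np.ones_like(t)], axis=1)     # (N, 2) design matrix
    y = I.reshape(N, -1)                           # one column per pixel
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    alpha, beta = coeffs
    return alpha.reshape(H, W), beta.reshape(H, W)
```

The 5-minute coefficients α5min and β5min would then correspond to calling the same routine on only the frames with t ≤ 5 minutes (the first three frames).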
Temporal Mean
A typical presentation of polyps in ICGA images is the presence of nodular hyperfluorescence. The polyps usually appear within the first 5 minutes and persist into the late phase. Therefore, the average intensity value around the polyp area is higher than that of other areas. We used MEAN to denote the mean of the intensity values In; similarly, MEANη denotes the mean of the normalized intensity values ηn. 
Temporal Variance
Because the intensity of the polyp region increases rapidly in the early and middle phases, its variance tends to be larger than that of the background region. Based on this temporal fluorescence characteristic, we can define a discriminative feature using the variance of the temporal intensities In. VAR and VAR5min denote the temporal variance over the whole sequence and over the first 5 minutes, respectively. 
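The temporal mean and variance features of the two sections above are plain per-pixel statistics along the time axis. A minimal Python/NumPy sketch (the function name and the 5-minute cutoff argument are assumptions for illustration, not the authors' code):

```python
import numpy as np

def temporal_stats(I, t, early=5.0):
    """Per-pixel temporal mean and variance features.

    I: (5, H, W) registered ICGA frames; t: (5,) times in minutes.
    Returns the mean and variance over the whole sequence (MEAN, VAR)
    and the variance over the frames with t <= early (VAR_5min).
    """
    mean_all = I.mean(axis=0)
    var_all = I.var(axis=0)
    var_early = I[t <= early].var(axis=0)
    return mean_all, var_all, var_early
```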
Spatial Mean and Variance
As documented in the literature, a polyp typically has a nodular appearance in fundus ICGA images. Inspired by this diagnostic criterion, we proposed a novel feature, called spatial mean and variance, to reflect how intensity changes in a ring-shaped area. These spatial features were defined on a circular region centered at each pixel location. Figure 2 shows an example of a central pixel gc and its neighboring pixels {g0, g1, …, gP−1}, which are uniformly sampled from a circle with radius R. We used first- and second-order statistics (i.e., mean and variance) to encode the information contained in this ring-shaped region. Formally, given a pixel gc on the image captured at time tn, we computed the mean (denoted by μnP,R) and variance (denoted by VARnP,R) from a set of P points on a circle of radius R. One can adjust the resolution of these spatial features using the parameter R, whereas the quantization in the angular space is determined by P. In our experiments, we set R = 1, 2, …, 20 and P = 8R. Thus, at each fixed time point tn, we generated 20 spatial means and 20 spatial variances from 20 circles with various radii. Since there were five different time points, we obtained a total of 100 spatial means and 100 spatial variances. 
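The ring-shaped sampling can be sketched as follows: P = 8R points are placed at uniform angles on a circle of radius R around a pixel, the image is sampled at those (generally non-integer) positions, and the mean and variance of the samples are returned. The Python/NumPy version below is our own illustration, not the authors' code; it assumes bilinear interpolation at the sample points and omits image-border handling:

```python
import numpy as np

def ring_stats(img, cy, cx, R):
    """Spatial mean and variance over P = 8R points on a circle.

    Samples img by bilinear interpolation at uniformly spaced angles
    around the center pixel (cy, cx). The circle must lie strictly
    inside the image, since border handling is omitted.
    """
    P = 8 * R
    angles = 2.0 * np.pi * np.arange(P) / P
    ys = cy + R * np.sin(angles)
    xs = cx + R * np.cos(angles)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    fy, fx = ys - y0, xs - x0
    # Bilinear interpolation from the four surrounding pixels.
    g = (img[y0, x0] * (1 - fy) * (1 - fx)
         + img[y0 + 1, x0] * fy * (1 - fx)
         + img[y0, x0 + 1] * (1 - fy) * fx
         + img[y0 + 1, x0 + 1] * fy * fx)
    return g.mean(), g.var()
```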
These features create a large pool of potentially useful candidates. In this study, we used the AdaBoost algorithm to evaluate the importance of individual features and to identify the most discriminative ones among them.  
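As a sketch of how boosting doubles as feature selection: when each weak learner is a decision stump that thresholds a single feature, the feature chosen in each round is, by construction, the most discriminative one under the current sample weights. The minimal discrete AdaBoost below is our own illustration with an exhaustive stump search (Python/NumPy), not the authors' implementation:

```python
import numpy as np

def adaboost_select(X, y, rounds=10):
    """Discrete AdaBoost with decision stumps; returns the feature
    index selected in each boosting round.

    X: (n_samples, n_features) feature matrix; y: labels in {-1, +1}.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                  # uniform sample weights
    chosen = []
    for _ in range(rounds):
        best_err, best = np.inf, None
        for j in range(d):                   # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = np.where(X[:, j] > thr, s, -s)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, s)
        j, thr, s = best
        err = float(np.clip(best_err, 1e-10, 1 - 1e-10))
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] > thr, s, -s)
        w = w * np.exp(-alpha * y * pred)    # upweight the mistakes
        w /= w.sum()
        chosen.append(j)
    return chosen
```

The indices in `chosen` rank the features by the rounds in which they were selected, which is the sense in which Table 4 reports the top 30 features.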
Figure 1
 
The procedure for detecting polyps in an ICGA sequence contains three main steps. First, we perform image registration on the input sequence using the edge-driven DB-ICP algorithm.11 Then, we extract spatial and temporal features from the spatially aligned ICGA sequence. Finally, we apply the AdaBoost algorithm to choose the best features and combine them to produce a strong classifier. The output of our system provides both a probability map and a severity map.
Figure 2
 
We extract the spatial features using a set of P points on a circle of radius R. We set P = 8R in our experiments. During classifier training, the AdaBoost algorithm learns from the training samples to determine the appropriate values of R.
Figure 3
 
(a) The ICGA image captured at 10 minutes after injection. (b) The probability map and (c) the severity map are displayed using the same color bar (shown in the middle). The color bar extends from dark blue to red, representing the lowest to the highest values (i.e., 0.0 ∼ 1.0). The center of the color bar corresponds to the value of 0.5, which is typically used as the classification threshold. A higher score in these two maps indicates a higher likelihood of polypoidal lesions. The severity map contains similar information to the probability map but yields better visualization of polyp regions.
Figure 4
 
(a) The red contours, drawn by the ophthalmologist, enclose polyp regions. This serves as the reference standard used to validate the proposed method. (b) The severity map of polyps is displayed using a color map. (c) The true positive (orange), false negative (dark blue), true negative (yellow-green), and false positive (light blue) regions are determined by comparing the AdaBoost classification result to the reference standard. The resulting sensitivity and specificity are 0.9965 and 0.9807, respectively, in this case.
Figure 5
 
Examples of polyp detection results in two extreme cases. (a, b, c): a case with sensitivity of 1.0 and specificity of 0.9769; the resulting balanced accuracy is 0.9884. (d, e, f): one of the two cases with balanced accuracy below 0.8; in this case, the sensitivity is 0.4306 and the specificity is 0.9164. (a, d) The manually labeled polyp regions (i.e., the reference standard) are enclosed in red contours. (b, e) The severity maps generated by our polyp detection system. (c, f) The true positive, false negative, true negative, and false positive regions are displayed in orange, dark blue, yellow-green, and light blue, respectively. In contrast to (c), which contains no false negative (dark blue) spots, (f) contains many small dark blue spots that the system fails to recognize as polyps. The poor sensitivity in this case is probably due to the lack of early-phase video data.
Figure 6
 
Examples showing the benefits of using the combined features in polyp detection. (a, d): ICGA images with manually drawn polyp regions. (b, e): detection results obtained using only temporal features. (c, f): detection results obtained using temporal and spatial features. Note that the true positive, false negative, true negative, and false positive regions are displayed in orange, dark blue, yellow-green, and light blue, respectively. (a, b, c) In this case, the temporal features are insufficient to correctly identify polyps. By combining temporal and spatial features, this case shows a substantial improvement in sensitivity (i.e., true positive orange spots). (d, e, f) In this case, the spatial features help reduce false positive results (i.e., light blue spots) while maintaining acceptable detection accuracy. By including the spatial features, our proposed method achieves a balanced accuracy of 0.9304 in this case.
Table 1
 
Summary of 222 Quantitative Features Used in Our Experiments
Table 2
 
Results of Polyp Detection Using 59 ICGA Sequences From the EVEREST Study
Table 3
 
Results of Polyp Detection Using Different Types of Features
Table 4
 
Top 30 Features Selected by the AdaBoost Algorithm