**Purpose:**
To develop a computer-aided diagnostic tool for automated detection and quantification of polypoidal regions in indocyanine green angiography (ICGA) images.

**Methods:**
The ICGA sequences of 59 polypoidal choroidal vasculopathy (PCV) treatment–naïve patients from five Asian countries (Hong Kong, Singapore, South Korea, Taiwan, and Thailand) were provided by the EVEREST study. Ground truth for the presence of polypoidal regions was provided by the reading center. The proposed detection algorithm used both temporal and spatial features to characterize the severity of polypoidal lesions in ICGA sequences. Leave-one-out cross-validation was carried out so that each patient was used once as the validation sample. For each patient, a fixed detection threshold of 0.5 on the severity map was applied to obtain sensitivity, specificity, and balanced accuracy with respect to the ground truth.

**Results:**
Our system achieved an average accuracy of 0.9126 (sensitivity = 0.9125, specificity = 0.9127) for detection of polyps in the 59 ICGA sequences. Among the 222 features extracted from each ICGA sequence, the spatial variances exhibited the best discriminative power in distinguishing between polyp and nonpolyp regions. The results also indicated the importance of combining spatial and temporal features to further improve detection accuracy.

**Conclusions:**
The developed software provided a means of detecting and quantifying polypoidal regions in ICGA images for the first time.

**Translational Relevance:**
This preliminary study demonstrated a computer-aided diagnostic tool, which enables objective evaluation of PCV and its progression. Ophthalmologists can easily visualize the polypoidal regions and obtain quantitative information about polyps by using the proposed system.

^{1,2}Diagnosis of PCV depends on multimodal imaging studies, including stereo color fundus photographs, fluorescein angiography, indocyanine green angiography (ICGA), and optical coherence tomography. Of these, ICGA is the gold standard for the diagnosis of PCV.

^{3,4}In ICGA, polyps appear as single or multiple clusters of nodular hyperfluorescence within the first 6 minutes and persist into the late phase, either showing washout of fluorescence or remaining hyperfluorescent.

^{5}Other features of PCV may include a branching vascular network (BVN), pulsatile polyps, the presence of a hypofluorescent halo, and a late-phase hyperfluorescent plaque.

^{4,6,7}The location and size of polyps in ICGA are important features for surgeons in deciding the area of thermal laser application or photodynamic therapy (PDT), because PDT achieves a higher polyp regression rate compared with monotherapy using antivascular endothelial growth factor (anti-VEGF) agents.


^{8}Yet the aneurysm-like dilatations may lie within the network, and the polyps may range in size from numerous small hyperfluorescent dots to coil-like large vessel deformations.

^{9}The variable caliber, tortuosity, and unusual course of the vessels of the BVN, together with the inner choroidal vascular complex, may interfere with localization of polyps. In addition, the hyperfluorescence of polyps, as well as that of the choroidal vessels and BVN beneath the retinal pigment epithelium (RPE), can be imaged together with hyperfluorescent lesions above the RPE (e.g., retinal microaneurysms in diabetic retinopathy), making the differentiation of polyps difficult.

^{10}The hypothesis of this study is that polypoidal lesions in an ICGA sequence can be characterized and recognized with higher accuracy by a computerized system using quantitative information on the temporal and spatial variations of the fluorescence in the sequence.

The aim of the EVEREST study^{6} was to evaluate the treatment outcomes for patients with symptomatic macular PCV. In addition to meeting the angiographic criteria of PCV, patients had to have a lesion size smaller than 5400 μm and a best-corrected visual acuity between 24 and 73 letters (Snellen equivalent of 0.06–0.5). Exclusion criteria included any prior treatment of the study eye with focal laser, PDT, or intraocular surgery. Patients with a history of angioid streaks, high myopia, glaucoma, or RPE tear were also excluded. This study collected ICGA images from 61 PCV treatment–naïve patients from five Asian countries (Hong Kong, Singapore, South Korea, Taiwan, and Thailand). All ICGA images were captured using a confocal scanning laser ophthalmoscope (cSLO; Heidelberg Retina Angiograph Spectralis, Heidelberg Engineering, Heidelberg, Germany) with a pixelation of 1536 pixels per 30° arc angle; the resulting image resolution was 768 × 768 pixels or higher. A standardized imaging protocol was applied to facilitate the analysis and characterization of PCV lesions. The protocol provided a standardized image set containing a dynamic angiography (the first 30 seconds after injection) and still images (captured at approximately 1, 3, 5, 10, and 20 minutes after injection). In this study, we used only the still images as input to the proposed system. The videos taken by the cSLO system were not used because of the motion distortion caused by eye movement; in addition, the early-phase dynamic angiograms usually had high variations in brightness. Two patients did not have still images at some of the predefined time points, so we excluded them from our experimental validation (i.e., 59 subjects from the EVEREST study were used). Polyp areas were manually annotated by the investigators in the reading center, and the manually labeled polyp locations served as the reference standard in our experiments.
The ICGA images and their reference standard were made available to us through the courtesy of the investigators of the EVEREST trial.

We used publicly available code^{11} for the first step of image registration and wrote our own code for the other steps. The whole system was implemented using Matlab R2012a (The MathWorks, Inc., Natick, MA) and Visual Studio C++ 2010 under the Microsoft Windows 7 operating system (Microsoft Corp., Redmond, WA). To compensate for frame-to-frame eye movement, we first performed image registration so that the input ICGA sequence was spatially aligned. After transforming the input images into a common coordinate system, we could characterize the temporal behavior of spatially corresponding pixels. In this study, we chose a registration technique called the Edge-Driven Dual-Bootstrap Iterative Closest Point algorithm (Edge-Driven DB-ICP).

^{11}This algorithm is fully automatic and can deal with images exhibiting substantial, nonlinear intensity variations, which are typical of ICGA images.
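Edge-Driven DB-ICP itself is too involved for a short example, but the purpose of the registration step — mapping each frame into a common coordinate system — can be illustrated with a simple integer translation. This is a hedged Python sketch (the original system was Matlab), and the integer shift is only a stand-in for the full transform estimated by the registration algorithm:

```python
import numpy as np

def align_frame(frame, dy, dx):
    """Shift a frame by (dy, dx) pixels into the reference coordinate
    system, zero-padding the exposed border. An integer translation is a
    simplification standing in for the full DB-ICP-estimated transform."""
    H, W = frame.shape
    out = np.zeros_like(frame)
    src_y = slice(max(0, -dy), min(H, H - dy))
    dst_y = slice(max(0, dy), min(H, H + dy))
    src_x = slice(max(0, -dx), min(W, W - dx))
    dst_x = slice(max(0, dx), min(W, W + dx))
    out[dst_y, dst_x] = frame[src_y, src_x]
    return out

# Reference frame with one bright landmark, and a later frame in which
# eye movement has displaced the landmark by (+2, +3) pixels (toy data).
ref = np.zeros((32, 32)); ref[10, 10] = 1.0
moved = np.zeros((32, 32)); moved[12, 13] = 1.0

# Undo the estimated motion to bring the frame into common coordinates.
aligned = align_frame(moved, -2, -3)
print(aligned[10, 10])  # 1.0 -- landmark back at its reference position
```

Once all frames are aligned this way, the intensity values at one (row, column) location across frames form the per-pixel temporal profile used by the later feature-extraction steps.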

**Figure 1**


The temporal features included the intensity values (*I*_{n}), normalized intensity values (*η*_{n}), slopes of intensity change (*θ*_{n}), regression coefficients (*α* and *β*), and the mean and variance of the intensity values at five different time points (i.e., 1, 3, 5, 10, and 20 minutes). The spatial features included the mean and variance of the intensity values in a ring-shaped area for each image frame (Fig. 2). These features are summarized in Table 1 and their details are provided in the Appendix.

**Figure 2**


**Table 1**

The AdaBoost algorithm^{12} iteratively constructed the weak classifiers. In each iteration of the training process, the weighting coefficients associated with the training samples were adjusted so that misclassified samples received greater weights. In the next iteration, a new weak classifier was trained using the training samples with the updated weights. As a result, previously misclassified samples exerted more influence on the training of the subsequent classifier. After *T* iterations, the resulting strong classifier can be written as

*g*(**x**) = ∑_{*t*=1}^{*T*} *ω*_{t} *h*_{t}(**x**),

where **x** denotes a 222-dimensional feature vector and *ω*_{t} represents a measure of classification accuracy for the weak classifier *h*_{t}. The number of iterations *T* was set to 400 in our experiment. Given the feature vector **x** extracted from a pixel location, the strong classifier *g*(**x**) generates a probability value *P* indicating how likely this location belongs to a polyp region.
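As a rough illustration of this boosting step (not the authors' Matlab implementation), the strong classifier can be approximated with scikit-learn's `AdaBoostClassifier`, whose default weak learner is a depth-1 decision stump. The data below are synthetic stand-ins for the 222-dimensional per-pixel feature vectors:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the per-pixel training data: 222 features per
# pixel, label 1 = polyp, 0 = nonpolyp (hypothetical data for illustration).
X = rng.normal(size=(1000, 222))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# T = 400 boosting rounds; each round re-weights misclassified samples so
# they exert more influence on the next weak classifier h_t.
clf = AdaBoostClassifier(n_estimators=400, random_state=0)
clf.fit(X, y)

# The strong classifier outputs a probability P per pixel, i.e., how likely
# the corresponding location belongs to a polyp region.
P = clf.predict_proba(X)[:, 1]
print(P.shape, float(P.min()), float(P.max()))
```

Applying the fitted classifier to every pixel of a registered ICGA sequence would yield the probability map described next.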

The probability map was converted into a severity map through a contrast-stretching function *f*(*P*), where *P* denotes a probability value, and *P*_{max} and *P*_{min} denote the maximum and minimum values in the probability map, respectively. The aim of this function is to highlight the information contained in the range of available probability values (i.e., [*P*_{min}, *P*_{max}]). It is worth noting that the probability value 0.5 is not altered by the transformation. Thus, one can perform classification on either a probability map or a severity map by using 0.5 as the threshold value; the obtained classification results will be identical. Figure 3 shows the result of converting a probability map (Fig. 3b) into a severity map (Fig. 3c). One can observe a considerable enhancement in image contrast in this sample result.
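The exact form of *f*(*P*) is not reproduced here; a piecewise-linear stretch that maps [*P*_{min}, *P*_{max}] onto [0, 1] while leaving 0.5 fixed is one transformation consistent with the properties described in the text. A minimal sketch under that assumption:

```python
import numpy as np

def severity_map(prob_map):
    """Assumed piecewise-linear stretch: [P_min, 0.5] -> [0, 0.5] and
    [0.5, P_max] -> [0.5, 1], so the decision threshold 0.5 is unchanged
    and classifying either map at 0.5 gives identical results."""
    p = np.asarray(prob_map, dtype=float)
    p_min, p_max = p.min(), p.max()
    s = np.empty_like(p)
    low = p < 0.5
    # Guard against degenerate ranges (all values on one side of 0.5).
    if p_min < 0.5:
        s[low] = 0.5 * (p[low] - p_min) / (0.5 - p_min)
    else:
        s[low] = p[low]
    if p_max > 0.5:
        s[~low] = 0.5 + 0.5 * (p[~low] - 0.5) / (p_max - 0.5)
    else:
        s[~low] = p[~low]
    return s

# Toy probability map spanning [0.30, 0.70] (range 0.4); after stretching,
# the severity map spans the full [0, 1] range (range 1.0).
prob = np.array([[0.30, 0.45], [0.50, 0.70]])
sev = severity_map(prob)
print(sev)
```

Note how the input range of 0.4 is stretched to 1.0, matching the contrast-enhancement behavior the text reports, while the pixel at exactly 0.5 is left unchanged.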

**Figure 3**


^{10}We measured the correctness of the detection results using sensitivity, specificity, and balanced accuracy. The sensitivity and specificity can be written as

sensitivity = TP / (TP + FN), specificity = TN / (TN + FP),

where TP denotes true positives (polyp regions correctly identified as polyp); FN denotes false negatives (polyp regions missed by the detector); TN denotes true negatives (nonpolyp regions correctly identified as nonpolyp); and FP denotes false positives (nonpolyp regions incorrectly identified as polyp). An example result is shown in Figure 4, where Figure 4c was obtained using the reference standard in Figure 4a and the severity map in Figure 4b. The balanced accuracy is defined as

balanced accuracy = (sensitivity + specificity) / 2,

which avoids the inflated performance that occurs when the sample sizes of the positive (polyp present) and negative (polyp absent) groups are highly imbalanced. The results of polyp detection on the 59 subjects are shown in Table 2. Our proposed system achieved an average balanced accuracy of 0.9126 (sensitivity = 0.9125, specificity = 0.9127). Figure 5 shows the two extreme results (best and worst) from this experiment.
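These metrics can be computed directly from a reference mask and a thresholded severity map. A minimal sketch on hypothetical binary maps:

```python
import numpy as np

def polyp_metrics(reference, detected):
    """Sensitivity, specificity, and balanced accuracy from two binary
    maps (True/1 = polyp), following the definitions in the text."""
    ref = np.asarray(reference, dtype=bool)
    det = np.asarray(detected, dtype=bool)
    tp = np.sum(ref & det)    # polyp pixels correctly detected
    fn = np.sum(ref & ~det)   # polyp pixels missed
    tn = np.sum(~ref & ~det)  # nonpolyp pixels correctly rejected
    fp = np.sum(~ref & det)   # nonpolyp pixels wrongly flagged as polyp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    balanced_accuracy = 0.5 * (sensitivity + specificity)
    return sensitivity, specificity, balanced_accuracy

# Toy 2x4 maps (hypothetical): 3 polyp pixels in the reference, of which
# 2 are detected; 1 nonpolyp pixel is falsely flagged.
ref = np.array([[1, 1, 0, 0], [1, 0, 0, 0]])
det = np.array([[1, 0, 0, 0], [1, 0, 1, 0]])
sens, spec, bacc = polyp_metrics(ref, det)
print(sens, spec, bacc)  # 2/3, 0.8, and their average
```

Because balanced accuracy averages the two rates, a detector that labeled every pixel as nonpolyp would score only 0.5 even though nonpolyp pixels vastly outnumber polyp pixels.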

**Figure 4**


**Table 2**

**Figure 5**


A previous approach^{10} used only temporal features (i.e., intensity, slope, and regression coefficients). However, in ICGA sequences, some tissues (e.g., arteries, veins, BVN) may have temporal characteristics similar to those of polyps. Thus, relying solely on temporal features inevitably causes some ambiguity in detecting polyps. This observation motivated us to exploit features in the spatial domain. Table 3 summarizes the results of using the temporal features, the spatial features, and the combined features. The significance of these results is threefold. First, the spatial features achieved higher balanced accuracy than the temporal features, indicating that the spatial features are more discriminative. Second, the spatial features achieved much higher specificity than the temporal features; in other words, by taking spatial characteristics into account, tissues with temporal patterns similar to polyps were less likely to be misclassified as polyp (for a representative example, compare Figs. 6e and 6f). This confirmed our hypothesis that temporal features alone are insufficient to distinguish polyps from other tissues, and it showed how the spatial features contribute to the characterization of polyps beyond the temporal features. Third, the overall performance was significantly improved by combining temporal and spatial features, clearly demonstrating the benefit of using both. Because the temporal and spatial features were extracted from different domains, their corresponding polyp detection outcomes may not agree with each other; consequently, a misclassification made by one may be suppressed by the other. This synergistic effect led to much better discriminative capability and made the proposed polyp detection system much more reliable. Figure 6 shows polyp detection results with and without the spatial features so that the reader can see the differences.

**Table 3**

**Figure 6**


Most of the top-ranked features were spatial variances extracted at the late time points (i.e., *t*_{3} = 5, *t*_{4} = 10, and *t*_{5} = 20 minutes). This indicated that the intensity variance on a ring-shaped area yields better discriminative power at the late phase, because polyps may remain hyperfluorescent while the surrounding tissue becomes hypofluorescent in the late phase of angiography.

^{5}In addition, the radii of the top 10 features ranged from one to nine pixels; about 70% of the polyp sizes fell within this range (i.e., smaller than 270 μm in diameter). Overall, the majority of the top 30 features (28 of 30) were spatial features. These results clearly demonstrated the importance of spatial features in polyp detection and emphasized that detecting polyps depends largely on the contrast between polyps and the surrounding background rather than on intensity change over time.

**Table 4**

The contrast enhancement comes from the transformation function *f*(*P*): the ranges of pixel values before and after transformation are 0.4 and 1.0, respectively. Thus, the resulting severity map has a wider range of pixel values and yields better contrast than the original probability map. Put more simply, the probability map is suitable for providing quantitative information (i.e., absolute probability values), whereas the severity map is suitable for qualitative analysis of polypoidal regions (i.e., showing relative levels of severity between different locations in the choroid).

Untreated or persistent polyps may lead^{13} to massive submacular hemorrhage or diffuse RPE atrophy and fibrosis, with poor visual outcomes. The characterization of such polyps, or of polyps after treatment, will be of interest and could be tested with our system in the future.

**W.Y. Lin**, None;

**S.C. Yang**, None;

**S.J. Chen**, None;

**C.L. Tsai**, None;

**S.Z. Du**, None;

**T.H. Lim**, None

**References**

1. *Ophthalmology*. 2008; 115: 141–146.
2. *Ophthalmic Surg Lasers Imaging*. 2009; 40: 576–581.
3. *Retina*. 2013; 33: 686–716.
4. *Eye*. 2010; 24: 483–490.
5. *Graefes Arch Clin Exp Ophthalmol*. 2005; 243: 973–979.
6. *Retina*. 2012; 32: 1453–1464.
7. *Br J Ophthalmol*. 2009; 93: 759–764.
8. *Retina*. 2010; 30: 1197–1205.
9. *Br J Ophthalmol*. 2005; 89: 602–607.
10. *Invest Ophthalmol Vis Sci*. 2011; 52: 2767–2774.
11. *IEEE Trans Med Imaging*. 2010; 29: 636–649.
12. *Annual Conference on Computational Learning Theory*. 2000: 180–189.
13. *Ophthalmology*. 2000; 107: 767–777.
14. *Numerical Methods with MATLAB: Implementations and Applications*. Upper Saddle River, NJ: Prentice Hall; 2000.

**Appendix**

We use *I*_{n} to denote the intensity values at time *t*_{n}, where *n* = 1 … 5 (i.e., 1, 3, 5, 10, and 20 minutes).

The normalized intensity values were computed as *η*_{n} = *I*_{n} − *I*_{1}. This simple procedure also helps reduce intensity fluctuations due to illumination variations in the image background.

The *n*-th slope was calculated as *θ*_{n} = (*I*_{n} − *I*_{n−1}) / (*t*_{n} − *t*_{n−1}). Note that the intensity value at the very beginning was assumed to be zero (i.e., *I*_{0} = 0 and *t*_{0} = 0). Consequently, we can derive slope values indicating the average leakage speed of fluorescence in different time intervals.
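The normalized intensities and slopes defined above are straightforward to compute per pixel. A short Python sketch (the intensity values are hypothetical; the original implementation was in Matlab):

```python
import numpy as np

# Acquisition times of the still images (minutes) and example intensity
# values I_1..I_5 at one pixel (hypothetical numbers for illustration).
t = np.array([1.0, 3.0, 5.0, 10.0, 20.0])
I = np.array([40.0, 90.0, 100.0, 80.0, 60.0])

# Normalized intensities: eta_n = I_n - I_1 (suppresses background offset).
eta = I - I[0]

# Slopes theta_n = (I_n - I_{n-1}) / (t_n - t_{n-1}), with I_0 = 0 and
# t_0 = 0, i.e., the average leakage speed of fluorescence per interval.
I_prev = np.concatenate(([0.0], I[:-1]))
t_prev = np.concatenate(([0.0], t[:-1]))
theta = (I - I_prev) / (t - t_prev)

print(eta)    # [0. 50. 60. 40. 20.]
print(theta)  # [40. 25.  5. -4. -2.]
```

For this toy pixel the slopes turn negative after 5 minutes, the washout pattern the temporal features are designed to capture.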

The data points {*t*_{n}, *I*_{n}} can be approximated by the linear model *f*(*t*) = *αt* + *β*. The coefficients *α* and *β* are, respectively, the slope and intercept of the line that approximates the data points. The values of *α* and *β* can be determined by solving the normal equation.^{14} In addition to performing regression on the whole ICGA sequence, we also computed regression coefficients (denoted by *α*_{5min} and *β*_{5min}) from the first 5 minutes.
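The normal-equation solution for the two coefficients can be sketched as follows (intensity values are hypothetical, for illustration):

```python
import numpy as np

t = np.array([1.0, 3.0, 5.0, 10.0, 20.0])     # still-image times (minutes)
I = np.array([40.0, 90.0, 100.0, 80.0, 60.0])  # intensities at one pixel

def fit_line(t, I):
    """Least-squares fit of f(t) = alpha*t + beta by solving the normal
    equation (A^T A) x = A^T I, where A = [t, 1]."""
    A = np.column_stack((t, np.ones_like(t)))
    alpha, beta = np.linalg.solve(A.T @ A, A.T @ I)
    return alpha, beta

# Coefficients over the whole sequence ...
alpha, beta = fit_line(t, I)
# ... and over the first 5 minutes only (alpha_5min, beta_5min).
alpha_5, beta_5 = fit_line(t[t <= 5], I[t <= 5])
print(alpha, beta, alpha_5, beta_5)
```

Fitting both windows captures different behavior: the 5-minute fit tracks the initial leakage speed, while the whole-sequence fit reflects whether fluorescence persists or washes out.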

The temporal mean was computed from the intensity values *I*_{n} and, similarly, from the normalized values *η*_{n}. The VAR and VAR_{5min} denote the temporal variance of the whole sequence and the temporal variance within the first 5 minutes, respectively.

The spatial features were computed from a center pixel *g*_{c} and its neighboring pixels {*g*_{0}, *g*_{1}, …, *g*_{P−1}}, which are uniformly sampled from a circle with radius *R*. We used first- and second-order statistics (i.e., mean and variance) to encode the information contained in this ring-shaped region. Formally, given a pixel *g*_{c} on the image captured at time *t*_{n}, we computed the mean (denoted by *μ*_{n}^{P,R}) and the variance (denoted by VAR_{n}^{P,R}) from a set of *P* points on a circle of radius *R*. One can adjust the resolution of these spatial features using the parameter *R*, whereas the quantization in the angular space is determined by *P*. In our experiments, we set *R* = 1, 2, …, 20 and *P* = 8*R*. Thus, at each fixed time point *t*_{n}, we can generate 20 spatial means and 20 spatial variances from 20 circles with various radii. Since there were five different time points, we had 100 spatial means and 100 spatial variances in total. This creates a large pool of potentially useful features. In this study, we used the AdaBoost algorithm to evaluate the importance of individual features and to find the most discriminative ones among them.
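The ring-shaped sampling can be sketched as below. Nearest-pixel sampling of the circle points is an assumption here (the paper does not state its interpolation scheme), and the synthetic image simply mimics a bright polyp-like disc on a dark background:

```python
import numpy as np

def ring_stats(image, row, col, R):
    """Mean and variance over P = 8*R points uniformly sampled on a circle
    of radius R around (row, col). Nearest-pixel sampling is assumed."""
    P = 8 * R
    angles = 2.0 * np.pi * np.arange(P) / P
    rr = np.rint(row + R * np.sin(angles)).astype(int)
    cc = np.rint(col + R * np.cos(angles)).astype(int)
    rr = np.clip(rr, 0, image.shape[0] - 1)
    cc = np.clip(cc, 0, image.shape[1] - 1)
    samples = image[rr, cc].astype(float)
    return samples.mean(), samples.var()

# Synthetic frame: a bright disc of radius 4 centered at (32, 32).
img = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 4 ** 2] = 1.0

# A small ring lies entirely inside the disc (uniform -> zero variance),
# while a larger ring straddles its edge (mixed values -> higher variance).
m_in, v_in = ring_stats(img, 32, 32, 2)
m_edge, v_edge = ring_stats(img, 32, 32, 4)
print(v_in, v_edge)
```

Scanning *R* from 1 to 20 at every pixel and time point, as described above, is what produces the 100 spatial means and 100 spatial variances; rings whose radius matches a polyp's edge are exactly the ones whose variance carries discriminative contrast.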