July 2024
Volume 13, Issue 7
Open Access
Artificial Intelligence  |   July 2024
Multi-Plexus Nonperfusion Area Segmentation in Widefield OCT Angiography Using a Deep Convolutional Neural Network
Author Affiliations & Notes
  • Yukun Guo
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
    Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
  • Tristan T. Hormel
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Min Gao
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
    Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
  • Qisheng You
    Kresge Eye Institute, Wayne State University, Detroit, MI, USA
  • Jie Wang
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Christina J. Flaxel
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Steven T. Bailey
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Thomas S. Hwang
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Yali Jia
    Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
    Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
  • Correspondence: Yali Jia, Casey Eye Institute, Oregon Health & Science University, 515 SW Campus Drive, Portland, OR 97239, USA. e-mail: jiaya@ohsu.edu 
Translational Vision Science & Technology July 2024, Vol.13, 15. doi:https://doi.org/10.1167/tvst.13.7.15
Abstract

Purpose: To train and validate a convolutional neural network to segment nonperfusion areas (NPAs) in multiple retinal vascular plexuses on widefield optical coherence tomography angiography (OCTA).

Methods: This cross-sectional study included 202 participants: 39 healthy participants and participants with a full range of diabetic retinopathy (DR) severities (diabetes mellitus without retinopathy, mild to moderate non-proliferative DR, severe non-proliferative DR, and proliferative DR). Consecutive 6 × 6-mm OCTA scans at the central macula, optic disc, and temporal region in one eye of each participant in a clinical DR study were acquired with a 70-kHz commercial OCT system (RTVue-XR). Widefield OCTA en face images were generated by montaging the scans from these three regions. A projection-resolved OCTA algorithm was applied to remove projection artifacts at the voxel scale. A deep convolutional neural network with a parallel U-Net module was designed to detect NPAs and distinguish signal reduction artifacts from flow deficits in the superficial vascular complex (SVC), intermediate capillary plexus (ICP), and deep capillary plexus (DCP). Expert graders manually labeled NPAs and signal reduction artifacts to provide the ground truth. Sixfold cross-validation was used to evaluate the proposed algorithm on the entire dataset.

Results: The proposed algorithm showed high agreement with the manually delineated ground truth for NPA detection in three retinal vascular plexuses on widefield OCTA (mean ± SD F-score: SVC, 0.84 ± 0.05; ICP, 0.87 ± 0.04; DCP, 0.83 ± 0.07). The extrafoveal avascular area in the DCP showed the best sensitivity for differentiating eyes with diabetes but no retinopathy (77%) from healthy controls and for differentiating DR by severity: DR versus no DR, 77%; referable DR (rDR) versus non-referable DR (nrDR), 79%; vision-threatening DR (vtDR) versus non–vision-threatening DR (nvtDR), 60%. The DCP also showed the best area under the receiver operating characteristic curve for distinguishing diabetes from healthy controls (96%), DR versus no DR (95%), and rDR versus nrDR (96%). The three-plexus-combined OCTA achieved the best result in differentiating vtDR and nvtDR (81.0%).

Conclusions: A deep learning network can accurately segment NPAs in individual retinal vascular plexuses and improve DR diagnostic accuracy.

Translational Relevance: Using a deep learning method to segment nonperfusion areas in widefield OCTA can potentially improve the diagnostic accuracy of diabetic retinopathy by OCT/OCTA systems.

Introduction
Capillary nonperfusion is a key pathologic feature of diabetic retinopathy (DR).1 Numerous studies have shown that the nonperfusion area (NPA) measured with optical coherence tomography angiography (OCTA) is associated with DR clinical severity.2–5 However, the NPA in DR is frequently present outside the central macula.6–8 Evaluating and quantifying extramacular NPAs using widefield OCTA may further improve OCTA-based evaluation for DR. 
There are several automated or semi-automated algorithms for segmenting NPAs from OCTA data. Our group first reported automated methods for NPA quantification based on either vessel density maps or capillary distance maps.9–11 Schottenhamml et al.12 proposed an automated method for quantifying capillary dropout based on the intercapillary area. Krawitz et al.13 also described the quantification of NPAs in 3 × 3-mm OCTA central macular scans. Their study required manually registering and excluding low-quality scans, which would be impractical in clinical practice. In general, traditional image processing methods such as the above are vulnerable to noise and artifacts. To overcome this limitation, our group and others reported deep learning–based methods to segment NPAs.14–17 Using the powerful feature extraction capability of deep-learning models, these methods greatly improved NPA segmentation accuracy.18 However, most of these algorithms were tested on OCTA images with a small field of view.9–12 Because widefield OCTAs can have significantly different resolutions, signal-to-noise ratios, and severity of shadow artifacts, the reported algorithms cannot be assumed to perform comparably in widefield OCTA images. Shadow artifacts, frequently encountered in widefield OCTA,19 share features with the NPA in OCTA images. Rule-based algorithms often cannot distinguish between shadow artifacts and NPAs and require manual correction to achieve accurate segmentation. 
Our previous work on deep-learning-based widefield NPA quantification focused on the superficial vascular complex (SVC).17 However, the importance of the deep vascular complex in DR has been well documented.11 In this study, we propose a deep learning–based method to quantify the NPA on widefield OCTA with a 17 × 6-mm field of view in the SVC, intermediate capillary plexus (ICP), and deep capillary plexus (DCP) in diabetic eyes. We also examined the relationship in this dataset between NPAs in each of these complexes/plexuses and clinical DR severity. 
Methods
Data Acquisition
The institutional review board of the Oregon Health & Science University approved this study, which was performed in accordance with the tenets of the Declaration of Helsinki. The OCT and OCTA scans were acquired using a 6 × 6-mm scan pattern centered on the macula and the adjacent areas nasal and temporal to the macular scan using a 70-kHz OCT AngioVue system (RTVue-XR; Optovue/Visionix, Lombard, IL). Two repeated B-scans were obtained at 400 raster positions, and each B-scan contained 400 A-lines. The system operates at a central wavelength of 850 nm with a full-width half-maximum bandwidth of 45 nm. At each region, an x-fast scan and a y-fast scan were acquired, registered, and merged, minimizing motion artifacts. OCTA data were processed using a commercial version of the split-spectrum amplitude-decorrelation angiography (SSADA) algorithm.8 The widefield OCTA en face images were acquired by montaging the three regional scans. 
Dataset Preparation
In this study, we quantified the NPA in the SVC (including nerve fiber layer plexus and ganglion cell layer plexus), ICP, and DCP in en face widefield OCTA images. We first applied a guided bidirectional graph search algorithm20 to segment retinal layer boundaries. This improved the accuracy of retinal layer segmentation by introducing a bidirectional graph search and path-merging algorithms.20 Then, we generated the SVC, ICP, and DCP en face images by a maximum projection method21 to calculate the maximum value along the A-line of OCT data within the relevant slabs.3 To reduce the influence of noise on NPA quantification, we applied a deep learning-based retinal capillary reconstruction algorithm previously developed by our group; this algorithm can eliminate the noise and enhance the vasculature signal in OCTA data.22 The reconstructed angiograms were fused with the original images by pixel-wise averaging to reduce the noise of the vascular plexus. Widefield OCTA en face images were generated by montaging the scans from three regions: central macula, temporal region, and optic disc.23 The montage algorithm is based on the structural similarity quantified by the Speeded-Up Robust Features (SURF) descriptor.24 Temporal to the macula, the ICP and DCP merge into a single layer,25 which creates a dilemma: Should this region be displayed as the ICP or the DCP? We chose to include this region in the DCP only to avoid duplicating the same data in both layers. This means that the temporal region lacks ICP images. 
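The maximum projection step used to generate each plexus en face image can be sketched as follows. This is a minimal illustration with hypothetical names and list-based volumes; the actual implementation operates on the segmented slab boundaries of each plexus within the OCTA volume:

```python
def en_face_max_projection(volume, top, bottom):
    """Project a 3D OCTA slab to a 2D en face image.

    volume[x][y] is an A-line (flow values along depth); top[x][y]
    and bottom[x][y] are slab boundary indices from retinal layer
    segmentation. Names and data layout are illustrative only.
    """
    en_face = []
    for x, plane in enumerate(volume):
        row = []
        for y, a_line in enumerate(plane):
            # Keep the maximum flow value within the slab for this A-line
            row.append(max(a_line[top[x][y]:bottom[x][y]]))
        en_face.append(row)
    return en_face
```

Maximum projection preserves the brightest (highest-decorrelation) voxel per A-line, which favors capillary signal over background noise relative to mean projection.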
We removed the projection artifacts from the deeper layers using a previously reported algorithm, which uses OCT reflectance to enhance the OCTA signal and suppress the projection artifacts.26 To help detect shadow artifacts that can cause localized decreased flow signal affecting NPA segmentation, we also generated the OCT reflectance en face images and inner retinal thickness maps, as in our previous work.17,27 The reflectance en face image was projected from the target slabs from OCT data using a mean projection method. The inner retinal thickness map (from the inner limiting membrane to the outer plexiform layer) for each scan was calculated using the retinal layer boundary segmentation results. The thickness map and the OCT image were registered using the same transformations as the OCTA data. 
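The inner retinal thickness map is derived directly from the layer boundary segmentation. A minimal sketch, assuming boundary positions are stored as axial pixel indices and an illustrative axial pixel size (both names are assumptions, not the authors' code):

```python
def inner_retina_thickness_map(ilm, opl, axial_res_um=3.0):
    """Thickness map of the inner retina from layer boundaries.

    ilm[x][y] and opl[x][y] are axial pixel indices of the inner
    limiting membrane and outer plexiform layer boundaries;
    axial_res_um is an assumed axial pixel size in micrometers.
    """
    return [[(opl[x][y] - ilm[x][y]) * axial_res_um
             for y in range(len(ilm[x]))]
            for x in range(len(ilm))]
```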
When training a deep learning–based method, the reliability of the ground truth is critical. To reduce manual delineation time, we applied our previously trained convolutional neural network model17 to our dataset (all three vascular en face images) to produce pseudo–ground truth maps. Three certified graders (YG, QY, MG) then independently corrected the NPAs and shadow artifacts based on these initial maps. 
Development of the Deep Learning System
We designed a convolutional neural network to segment NPAs in three retinal vascular plexuses on widefield OCTA images (Fig. 1). The network contains five subnetworks of two different types: (1) multiscale U-net modules (Fig. 1, D1–D4) and (2) a fusion module (Fig. 1, D5). The multiscale U-net module is a U-net–like convolutional network that incorporates multiscale feature extraction modules. The input to the network includes the SVC thickness map (Fig. 1A), the en face reflectance image of the inner retina (Fig. 1B), and the OCTA en face images of the SVC, ICP, and DCP (Fig. 1C). The first subnetwork, D1, takes the thickness map and reflectance image as input to extract features from areas affected by shadow artifacts. The second subnetwork, D2, extracts features from the widefield OCTA en face images. The outputs of D1 and D2 are then combined by concatenating their feature maps along the channel axis and fed into D3 and D4, where we expect the network to learn both NPA- and shadow artifact–associated features. D3 and D4 form a parallel learning structure that widens the network so it can learn redundant information for better discrimination between shadow artifacts and NPAs. The last subnetwork, D5, which contains two convolutional layers, fuses the outputs of D3 and D4 (again concatenated along the channel axis) and produces the final segmentation result. 
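The fusion steps described above (combining the D1/D2 outputs, and later the D3/D4 outputs) amount to stacking feature maps along the channel dimension while leaving the spatial dimensions unchanged. A minimal pure-Python sketch with channel-first nested lists; the actual network of course operates on learned tensors:

```python
def concat_channels(*feature_maps):
    """Fuse branch outputs by concatenating along the channel axis.

    Each feature map is a channel-first nested list [C][H][W]. The
    spatial sizes must match, as they do for the branch outputs that
    feed the fusion stages. Illustrative sketch only.
    """
    h, w = len(feature_maps[0][0]), len(feature_maps[0][0][0])
    fused = []
    for fmap in feature_maps:
        # Every channel must share the same spatial size
        assert all(len(c) == h and len(c[0]) == w for c in fmap)
        fused.extend(fmap)  # channels stack; H and W are unchanged
    return fused
```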
Figure 1.
 
The architecture of the deep-learning algorithm for NPA segmentation on widefield OCTA in three retinal vascular plexuses. The thickness map of the inner retina (A) and the reflectance image acquired by projecting the OCT reflectance within the inner retina (B) were fed to one branch network (D1) to extract features associated with signal reduction areas. The en face angiogram of the retinal vascular area (C) was fed to another branch network (D2) to extract NPA features. Then, the network combined these features (by concatenating feature maps along the channel axis) and fed them to the branches D3 and D4 to segment NPAs and signal reduction artifacts. The last branch combined (i.e., concatenates feature maps along the channel axis) the output of D3 and D4 to output the result, including NPA (E, blue) and signal reduction artifact areas (E, yellow).
Evaluation and Statistical Analysis
To evaluate the performance of our proposed algorithm, we applied sixfold cross-validation to our dataset. The entire dataset was split into six subsets, each containing a similar proportion of different DR severities. In total, six cross-validations were completed to evaluate the performance of the proposed method across the entire dataset. For each training and validation step, one subset was used for testing, while the remaining five subsets were used for training. Care was taken to ensure that the scans from the same participant were not included in both the testing and training datasets. Considering hardware limitations, the model was trained on the 6 × 6-mm field of view OCT/OCTA scans. In the evaluation step, the three retinal regions were montaged into the wide field of view OCT/OCTA scan. The agreement between the manually delineated ground truth and the output of our algorithm was quantified on the montaged images using the F1 score,  
\begin{equation} F_1\ \mathrm{score} = \frac{2 \times \mathrm{TP}}{2 \times \mathrm{TP} + \mathrm{FP} + \mathrm{FN}} \tag{1} \end{equation}
where TP is the number of true-positive predictions (the number of pixels that were correctly predicted to be NPA); TN is the number of true-negative predictions (the number of pixels that were correctly predicted to be normal/perfused); FP is the number of false-positive predictions (the number of pixels that were incorrectly predicted to be NPA); and FN is the number of false-negative predictions (the number of pixels that were incorrectly predicted to be normal). 
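Equation 1 can be computed directly from a pair of binary masks. A minimal sketch in pure Python, with the prediction and ground-truth masks flattened to 0/1 sequences:

```python
def f1_score(pred, truth):
    """Pixelwise F1 score (Eq. 1) between a predicted and a
    ground-truth binary NPA mask, both flat sequences of 0/1."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return 2 * tp / (2 * tp + fp + fn)
```

Note that, as in Equation 1, true negatives do not enter the F1 score, which makes it well suited to NPA segmentation, where most pixels are perfused background.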
We also explored the ability of the NPA segmentations generated by this network to diagnose DR at different severity levels. To align with clinical scenarios, we performed diagnosis at multiple levels: (1) healthy controls versus patients with diabetes mellitus (any stage of DR, including diabetes without retinopathy); (2) eyes with DR versus eyes without retinopathy (with or without diabetes); (3) referable diabetic retinopathy (rDR; moderate to severe nonproliferative diabetic retinopathy [NPDR], proliferative diabetic retinopathy [PDR], or DR with diabetic macular edema) versus non-referable diabetic retinopathy (nrDR; mild NPDR or no retinopathy); and (4) vision-threatening diabetic retinopathy (vtDR; severe NPDR, PDR, or DR with diabetic macular edema) versus non–vision-threatening diabetic retinopathy (nvtDR; moderate NPDR or less severe DR, diabetes without DR, or healthy). The severity of DR was graded by trained retinal specialists based on the Early Treatment of Diabetic Retinopathy Study (ETDRS) scale using clinical examination and fundus photography.28 The sensitivity with specificity fixed at 95% and the area under the receiver operating characteristic curve (AUC) were used to evaluate the diagnostic performance on the three retinal plexuses separately and jointly. We also quantified the correlation between NPA and DR severity and between NPA and best-corrected visual acuity (VA) using Spearman’s and Pearson’s correlation tests, respectively. During these evaluations, we excluded the 1-mm-diameter circle centered on the fovea in all three vascular plexuses, an exclusion shown in a previous NPA segmentation study to improve the diagnostic power of NPA measurements.9 The reason is that the foveal avascular zone (FAZ) is avascular even in healthy eyes, and its size varies substantially without reflecting any underlying pathology. 
For similar reasons, we also excluded the 2-mm-diameter circle centered on the optic disc in disc scans in all three vascular plexuses. 
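Fixing specificity at 95% amounts to choosing the NPA threshold from the negative group and then reading off sensitivity in the positive group. A minimal sketch of this procedure, under the assumption that a larger NPA indicates more severe disease (illustrative code, not the authors' analysis pipeline):

```python
import math

def sensitivity_at_specificity(neg, pos, specificity=0.95):
    """Sensitivity when the NPA threshold is fixed so that the given
    fraction of negatives (e.g., healthy eyes) is classified negative.

    neg and pos are NPA measurements for the negative and positive
    groups; values at or below the threshold are called negative.
    """
    ordered = sorted(neg)
    # Smallest threshold that keeps >= `specificity` of the
    # negative group below or at it
    k = math.ceil(specificity * len(ordered))
    threshold = ordered[k - 1]
    return sum(1 for v in pos if v > threshold) / len(pos)
```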
Results
Study Participation
We acquired a total of 404 montaged widefield scans from one eye from 202 participants (each eye had two repeated scans), including 39 healthy control participants, 41 participants with diabetes without retinopathy, 75 participants with mild or moderate NPDR, and 47 participants with severe NPDR or PDR. We excluded 48 montaged widefield scans from 24 participants (four healthy control participants, nine participants with diabetes without retinopathy, seven participants with mild or moderate NPDR, and four participants with severe NPDR or PDR) for low signal quality (signal strength index <50) or strong motion artifacts. Table 1 shows the clinical characteristics of the participants. 
Table 1.
 
Clinical Characteristics of the Study Participants
Algorithm Evaluation
On visual inspection, the proposed method output was consistent with the manually delineated NPA and shadow artifact ground truths in both healthy and PDR eyes (Fig. 2). The quantitative assessment confirmed a high agreement between the proposed method output and the manually delineated ground truth (Table 2), with average F1 scores (mean ± SD) of 0.81 ± 0.07, 0.87 ± 0.05, and 0.84 ± 0.05 in the SVC, ICP, and DCP, respectively. The performance for these different severities was similar (P = 0.07). 
Figure 2.
 
Nonperfusion area segmentation results on widefield OCTA in a healthy control eye (A–C) and an eye with PDR (D–F). In each case, the first row shows montaged widefield en face views of the SVC, ICP, and DCP; the second row shows the manually delineated ground truth NPAs (green) and shadow areas (magenta); and the third row shows the automated segmentation result, with NPAs (blue) and shadow artifact areas (yellow). The output closely matches the ground truth. The blue circles in A1 indicate the foveal avascular zone (1-mm diameter) and optic disc (2-mm diameter) regions excluded in the DR diagnosis experiments.
Table 2.
 
Agreement Between Algorithm Output and Ground Truth for Different Diabetic Retinopathy Severities
Performance in Diabetic Retinopathy Diagnosis
We evaluated the diagnostic power of NPAs (excluding the areas within the FAZ and the optic disc region) measured on the montaged en face images of the three vascular plexuses for diagnosing DR at multiple severity levels (Table 3). With specificity fixed at 95%, the combined multi-plexus measurement (NPA measured separately in the SVC, ICP, and DCP, then summed) achieved the highest sensitivity for distinguishing patients with diabetes from healthy participants and for detecting patients with different severities of DR. The NPA in the DCP had the highest AUC for distinguishing patients with diabetes from healthy control participants, for detecting patients with DR among those without, and for distinguishing rDR from nrDR (mild NPDR, diabetes without DR, or healthy). The combined three-plexus measurement achieved the best AUC for distinguishing vtDR from nvtDR (moderate NPDR or less severe DR, diabetes without DR, or healthy). 
Table 3.
 
Nonperfusion Area Diagnostic Accuracy
Correlation of Nonperfusion Area With DR Severity and VA
The DCP NPA showed the highest correlation with DR severity. The best-corrected VA showed a negative correlation with the NPA, and the DCP NPA had the strongest correlation with the best-corrected VA (Table 4). 
Table 4.
 
Correlation Coefficients for Nonperfusion Area with DR Severity and Best-Corrected VA in Different Regions
Discussion
In this study, we designed and validated a deep learning–based method to quantify NPAs in three vascular plexuses in widefield en face OCTA images. We performed sixfold cross-validation to comprehensively evaluate the performance of our algorithm. The algorithm showed high agreement with a manually delineated ground truth (average F1 score >0.83). We also demonstrated that NPAs (excluding FAZ and optic disc region) in the SVC, ICP, and DCP correlate with DR severity and VA. 
Because NPA quantification in OCTA images with a wider field of view has been shown to have higher sensitivity for the assessment of DR than quantification from a smaller field of view,17 multiple methods have been proposed to quantify NPAs in widefield OCTA images.4,29,30 However, due to the influence of signal attenuation (which may be caused by pupil vignetting, vitreous floaters, or eyelash shadows, among other sources), some of these methods require manual intervention to exclude low-signal artifacts4 or remain semiautomated.31 To avoid manual intervention in the quantification process, we previously proposed a deep learning–based method27 to automatically exclude shadow artifact areas from NPA quantification. In this study, we also included the reflectance en face image of the SVC and the thickness map of the inner retina as inputs, which enabled our algorithm to distinguish shadow artifacts from NPAs. 
The proposed algorithm can automatically quantify NPAs in widefield OCTA scans of the ICP and DCP in addition to the SVC. We have previously reported on an algorithm to quantify the NPAs in deeper vascular plexuses on 3 × 3-mm central macular scans.11 The proposed algorithm in this study is the first automated method for quantifying NPAs in multiple plexuses in widefield OCTA images. The deeper plexuses are subject to projection artifacts and stronger noise, interfering with accurate quantification. We employed a reflectance-based projection-resolved OCTA algorithm26 to suppress the projection artifacts in the data processing step. We also adopted a retinal capillary reconstruction algorithm22 to enhance the capillaries and eliminate the background noise. These preprocessing steps suppressed artifacts and enhanced the images sufficiently in the ICP and DCP to allow the model to learn to distinguish NPAs from signal reduction artifacts. 
Different plexuses achieved varying levels of performance for DR characterization and diagnosis. Among the quantifications in the SVC, ICP, and DCP and the combined NPA, the DCP and combined measurements showed higher sensitivity for distinguishing patients with diabetes from healthy participants and for detecting patients with different clinical severities of DR. This may be because the DCP is more vulnerable to damage from hyperglycemia in the early stages of the disease.31 However, the combined plexus had a higher diagnostic accuracy than any single plexus for differentiating vision-threatening from non–vision-threatening DR (Table 3). The DCP NPAs also had a stronger correlation with DR severity and best-corrected VA (Table 4). The importance of the DCP in DR is consistent with observations from histologic studies.32 Its relationship with visual acuity has been demonstrated in other studies.33–35 However, it appears that in more advanced stages of the disease the more superficial layers may carry additional information that is independent of changes in the DCP. Interestingly, our previous work quantifying NPA in three layers found the SVC to be the most diagnostic.9,11 This may be because those findings were based on the central 3 × 3-mm area, whereas ischemic changes in the peripheral DCP are more closely related to the overall progression of retinopathy.34 It is also possible that improved image processing made a difference in this study, allowing a more accurate characterization of the DCP. The retinal capillary reconstruction algorithm, not used in our previous work, may have been important in reducing the noise encountered in the DCP. Whatever the cause, there is clearly a difference in how DR affects the different vascular layers. Improved imaging techniques and signal processing may further elucidate these differences. 
This study has several limitations. One is that our algorithm categorized the overlapping region of true NPAs and strong shadow artifacts as shadow areas because the precise location of the flow signal cannot be identified by graders. Solving this issue will require additional innovations, as signal cannot be assessed in regions where shadow artifacts obscure it. We also limited our investigation to eyes with DR, but multiple diseases, such as vein occlusions,36 paracentral acute middle maculopathy,37 and some rare genetic diseases,38,39 may also result in NPAs. Future work could explore the ability of our method to characterize NPAs across multiple diseases. Another limitation of our study is that our healthy controls were not age matched to the participants with DR, which means that the diagnostic accuracy for distinguishing between healthy and DR cases may not reflect performance in a real-world clinical environment. This shortcoming will be addressed in future work as our available datasets mature. 
In conclusion, the proposed deep learning–based algorithm can automatically quantify NPA in three vascular plexuses in widefield OCTA images with high accuracy. The algorithm can also distinguish shadow artifacts from NPAs, and NPA measurements made with this approach could correctly diagnose DR at multiple levels. 
Acknowledgments
Supported by grants from the National Institutes of Health (R01 EY027833, R01 EY035410, R01 EY036429, R01 EY024544, R01 EY031394, R01 EY023285, T32 EY023211, UL1TR002369, P30 EY010572); the Malcolm M. Marquis, MD, Endowed Fund for Innovation; an unrestricted departmental funding grant and the Dr. H. James and Carole Free Catalyst Award from Research to Prevent Blindness; the Edward N. & Della L. Thome Memorial Foundation Award; and grants from the BrightFocus Foundation (G2020168, M20230081). 
Disclosure: Y. Guo, Optovue/Visionix (P), Genentech (P, R); T.T. Hormel, None; M. Gao, None; Q. You, None; J. Wang, Optovue/Visionix (P, R), Genentech (P, R); C.J. Flaxel, None; S.T. Bailey, None; T.S. Hwang, None; Y. Jia, Optovue/Visionix (P, R), Optos (P), Genentech (P, R, F) 
References
Hwang TS, Jia Y, Gao SS, et al. Optical coherence tomography angiography features of diabetic retinopathy. Retina. 2015; 35(11): 2371–2376. [CrossRef] [PubMed]
Wykoff CC, Yu HJ, Avery RL, Ehlers JP, Tadayoni R, Sadda SR. Retinal non-perfusion in diabetic retinopathy. Eye (Lond). 2022; 36(2): 249–256. [CrossRef] [PubMed]
Hormel TT, Jia Y, Jian Y, et al. Plexus-specific retinal vascular anatomy and pathologies as seen by projection-resolved optical coherence tomographic angiography. Prog Retin Eye Res. 2021; 80: 100878. [CrossRef] [PubMed]
Alibhai AY, De Pretto LR, Moult EM, et al. Quantification of retinal capillary nonperfusion in diabetics using wide-field optical coherence tomography angiogerphy. Retina. 2020; 40(3): 412–420. [CrossRef] [PubMed]
You QS, Wang J, Guo Y, et al. Optical coherence tomography angiography avascular area association with 1-year treatment requirement and disease progression in diabetic retinopathy. Am J Ophthalmol. 2020; 217: 268–277. [CrossRef] [PubMed]
Hirano T, Kakihara S, Toriyama Y, Nittala MG, Murata T, Sadda S. Wide-field en face swept-source optical coherence tomography angiography using extended field imaging in diabetic retinopathy. Br J Ophthalmol. 2018; 102(9): 1199–1203. [CrossRef] [PubMed]
Nicholson L, Ramu J, Chan EW, et al. Retinal nonperfusion characteristics on ultra-widefield angiography in eyes with severe nonproliferative diabetic retinopathy and proliferative diabetic retinopathy. JAMA Ophthalmol. 2019; 137(6): 626–631. [CrossRef] [PubMed]
Couturier A, Rey PA, Erginay A, et al. Widefield OCT-angiography and fluorescein angiography assessments of nonperfusion in diabetic retinopathy and edema treated with anti–vascular endothelial growth factor. Ophthalmology. 2019; 126(12): 1685–1694. [CrossRef] [PubMed]
Zhang M, Hwang TS, Dongye C, Wilson DJ, Huang D, Jia Y. Automated quantification of nonperfusion in three retinal plexuses using projection-resolved optical coherence tomography angiography in diabetic retinopathy. Invest Opthalmol Vis Sci. 2016; 57(13): 5101–5106. [CrossRef]
Hwang TS, Gao SS, Liu L, et al. Automated quantification of capillary nonperfusion using optical coherence tomography angiography in diabetic retinopathy. JAMA Ophthalmol. 2016; 134(4): 367–373. [CrossRef] [PubMed]
Hwang TS, Hagag AM, Wang J, et al. Automated quantification of nonperfusion areas in 3 vascular plexuses with optical coherence tomography angiography in eyes of patients with diabetes. JAMA Ophthalmol. 2018; 136(8): 929–936. [CrossRef] [PubMed]
Schottenhamml J, Moult EM, Ploner S, et al. An automatic, intercapillary area based algorithm for quantifying diabetes related capillary dropout using OCT angiography. Retina. 2016; 36(suppl 1): S93–S101. [PubMed]
Krawitz BD, Phillips E, Bavier RD, et al. Parafoveal nonperfusion analysis in diabetic retinopathy using optical coherence tomography angiography. Transl Vis Sci Technol. 2018; 7(4): 4. [CrossRef] [PubMed]
Guo Y, Camino A, Wang J, Huang D, Hwang TS, Jia Y. MEDnet, a neural network for automated detection of avascular area in OCT angiography. Biomed Opt Express. 2018; 9(11): 5147–5158. [CrossRef] [PubMed]
Nagasato D, Tabuchi H, Masumoto H, et al. Automated detection of a nonperfusion area caused by retinal vein occlusion in optical coherence tomography angiography images using deep learning. PLoS One. 2019; 14(11): e0223965. [CrossRef] [PubMed]
Wang J, Hormel TT, You Q, et al. Robust non-perfusion area detection in three retinal plexuses using convolutional neural network in OCT angiography. Biomed Opt Express. 2020; 11(1): 330–345. [CrossRef] [PubMed]
Guo Y, Hormel TT, Gao L, et al. Quantification of nonperfusion area in montaged widefield OCT angiography using deep learning in diabetic retinopathy. Ophthalmol Sci. 2021; 1(2): 100027. [CrossRef] [PubMed]
Hormel TT, Hwang TS, Bailey ST, Wilson DJ, Huang D, Jia Y. Artificial intelligence in OCT angiography. Prog Retin Eye Res. 2021; 85: 100965. [CrossRef] [PubMed]
Hormel TT, Huang D, Jia Y. Artifacts and artifact removal in optical coherence tomographic angiography. Quant Imaging Med Surg. 2021; 11(3): 1120–1133. [CrossRef] [PubMed]
Guo Y, Camino A, Zhang M, et al. Automated segmentation of retinal layer boundaries and capillary plexuses in wide-field optical coherence tomographic angiography. Biomed Opt Express. 2018; 9(9): 4429–4442. [CrossRef] [PubMed]
Hormel TT, Wang J, Bailey ST, Hwang TS, Huang D, Jia Y. Maximum value projection produces better en face OCT angiograms than mean value projection. Biomed Opt Express. 2018; 9(12): 6412–6424. [CrossRef] [PubMed]
Gao M, Hormel TT, Wang J, et al. An open-source deep learning network for reconstruction of high-resolution OCT angiograms of retinal intermediate and deep capillary plexuses. Transl Vis Sci Technol. 2021; 10(13): 13. [CrossRef] [PubMed]
Wang J, Camino A, Hua X, et al. Invariant features-based automated registration and montage for wide-field OCT angiography. Biomed Opt Express. 2019; 10(1): 120–136. [CrossRef] [PubMed]
Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-Up Robust Features (SURF). Comput Vis Image Und. 2008; 110(3): 346–359. [CrossRef]
Campbell JP, Zhang M, Hwang TS, et al. Detailed vascular anatomy of the human retina by projection-resolved optical coherence tomography angiography. Sci Rep. 2017; 7(1): 1–11. [PubMed]
Wang J, Zhang M, Hwang TS, et al. Reflectance-based projection-resolved optical coherence tomography angiography [Invited]. Biomed Opt Express. 2017; 8(3): 1536–1548. [CrossRef] [PubMed]
Guo Y, Hormel TT, Xiong H, et al. Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography. Biomed Opt Express. 2019; 10(7): 3257–3268. [CrossRef] [PubMed]
Early Treatment Diabetic Retinopathy Study Research Group. Fundus photographic risk factors for progression of diabetic retinopathy: ETDRS Report Number 12. Ophthalmology. 1991; 98(5): 823–833. [CrossRef] [PubMed]
Garg I, Uwakwe C, Le R, et al. Nonperfusion area and other vascular metrics by wider field swept-source OCT angiography as biomarkers of diabetic retinopathy severity. Ophthalmol Sci. 2022; 2(2): 100144. [CrossRef] [PubMed]
De Pretto LR, Moult EM, Alibhai AY, et al. Controlling for artifacts in widefield optical coherence tomography angiography measurements of non-perfusion area. Sci Rep. 2019; 9(1): 9096. [CrossRef] [PubMed]
Sambhav K, Abu-Amero KK, Chalam KV. Deep capillary macular perfusion indices obtained with OCT angiography correlate with degree of nonproliferative diabetic retinopathy. Eur J Ophthalmol. 2017; 27(6): 716–729.
Bek T. Transretinal histopathological changes in capillary-free areas of diabetic retinopathy. Acta Ophthalmol. 1994; 72(4): 409–415. [CrossRef]
Dupas B, Minvielle W, Bonnin S, et al. Association between vessel density and visual acuity in patients with diabetic retinopathy and poorly controlled type 1 diabetes. JAMA Ophthalmol. 2018; 136(7): 721–728. [CrossRef] [PubMed]
Seknazi D, Coscas F, Sellam A, et al. Optical coherence tomography angiography in retinal vein occlusion. Retina. 2018; 38(8): 1562–1570. [CrossRef] [PubMed]
Silva PS, Dela Cruz AJ, Ledesma MG, et al. Diabetic retinopathy severity and peripheral lesions are associated with nonperfusion on ultrawide field angiography. Ophthalmology. 2015; 122(12): 2465–2472. [CrossRef] [PubMed]
Shiraki A, Sakimoto S, Tsuboi K, et al. Evaluation of retinal nonperfusion in branch retinal vein occlusion using wide-field optical coherence tomography angiography. Acta Ophthalmol. 2019; 97(6): e913–e918. [CrossRef] [PubMed]
Ghasemi Falavarjani K, Phasukkijwatana N, Freund KB, et al. En face optical coherence tomography analysis to assess the spectrum of perivenular ischemia and paracentral acute middle maculopathy in retinal vein occlusion. Am J Ophthalmol. 2017; 177: 131–138. [CrossRef] [PubMed]
Boese EA, Jain N, Jia Y, et al. Characterization of chorioretinopathy associated with mitochondrial trifunctional protein disorders: long-term follow-up of 21 cases. Ophthalmology. 2016; 123(10): 2183–2195. [CrossRef] [PubMed]
Agrawal KU, Kalafatis NE, Shields CL. Coats plus syndrome with new observation of drusenoid retinal pigment epithelial detachments in a teenager. Am J Ophthalmol Case Rep. 2022; 28: 101713. [CrossRef] [PubMed]
Figure 1.
 
The architecture of the deep-learning algorithm for NPA segmentation on widefield OCTA in three retinal vascular plexuses. The thickness map of the inner retina (A) and the reflectance image obtained by projecting the OCT reflectance within the inner retina (B) were fed to one branch network (D1) to extract features associated with signal reduction areas. The en face angiogram of the retinal vascular area (C) was fed to another branch network (D2) to extract NPA features. The network then combined these features by concatenating the feature maps along the channel axis and fed them to branches D3 and D4 to segment NPAs and signal reduction artifacts, respectively. The last branch combined the outputs of D3 and D4, again by channel-wise concatenation, to produce the final result, including NPAs (E, blue) and signal reduction artifact areas (E, yellow).
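The channel-wise fusion described in the caption can be sketched in a few lines. The branch names (D1, D2) and the tensor shapes below are illustrative assumptions for a sketch, not the authors' implementation details.

```python
import numpy as np

def concat_channels(*feature_maps):
    """Concatenate feature maps along the channel axis of (N, C, H, W) tensors."""
    return np.concatenate(feature_maps, axis=1)

# Hypothetical shapes: batch of 1, 64-channel features on a 304 x 304 grid.
d1_features = np.zeros((1, 64, 304, 304))  # from thickness map + reflectance branch (D1)
d2_features = np.zeros((1, 64, 304, 304))  # from en face angiogram branch (D2)

# Fused tensor fed onward to the segmentation branches (D3, D4 in the figure).
fused = concat_channels(d1_features, d2_features)
print(fused.shape)  # (1, 128, 304, 304): channels stack, spatial dims unchanged
```

Concatenating along the channel axis (rather than summing) lets the downstream branches learn their own weighting of structural and angiographic features.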
Figure 2.
 
Nonperfusion area segmentation results on widefield OCTA in a healthy control (A–C) and an eye with PDR (D–F). In each case, the first row shows montaged widefield en face views of the SVC, ICP, and DCP; the second row shows the ground truth, with manually delineated NPAs (green) and shadow areas (magenta); and the third row shows the automated segmentation, with NPAs (blue) and shadow artifact areas (yellow). The output appears to match the ground truth closely. The blue circles in A1 indicate the foveal avascular zone (1-mm diameter) and optic disc (2-mm diameter) regions excluded in the DR diagnosis experiments.
Table 1.
 
Clinical Characteristics of the Study Participants
Table 2.
 
Agreement Between Algorithm Output and Ground Truth for Different Diabetic Retinopathy Severities
Table 3.
 
Nonperfusion Area Diagnostic Accuracy
Table 4.
 
Correlation Coefficients for Nonperfusion Area with DR Severity and Best-Corrected VA in Different Regions