Open Access
Quantitative Analysis of Vascular Abnormalities in Full-Term Infants With Mild Familial Exudative Vitreoretinopathy
Author Affiliations & Notes
  • Peng Li
    College of Electronic and Information Engineering, Tongji University, Shanghai, China
    Department of Electronic and Information Engineering, Tongji Zhejiang College, Jiaxing, China
  • Jia Liu
    Optometry Center, Jiaxing Maternity and Child Health Care Hospital, Jiaxing, China
  • Correspondence: Peng Li, Tongji Zhejiang College, No. 168 Business Road, Zhejiang 314005, China. e-mail: airrob@163.com
Translational Vision Science & Technology, March 2023, Vol. 12, 16. doi: https://doi.org/10.1167/tvst.12.3.16
Abstract

Purpose: Our goal was to build a system combining deep convolutional neural networks (DCNNs) and feature extraction algorithms to automatically extract and quantify vascular abnormalities in posterior pole retinal images of full-term infants clinically diagnosed with mild familial exudative vitreoretinopathy (FEVR).

Methods: Using posterior pole retinal images taken from 4628 full-term infants (9256 eyes), we created data sets, trained DCNNs, and performed tests and comparisons. From the segmented images, our system extracted peripapillary vascular densities, mean tortuosities, and maximum diameter ratios within the region of interest. These features were then compared statistically with those of normal eyes.

Results: On the test data set, the trained system obtained a sensitivity of 0.78 and a specificity of 0.98 for vascular segmentation, and a sensitivity of 0.94 and a specificity of 0.99 for optic disc segmentation. On the comparison data set, retinal images with mild FEVR showed a significant increase in vascular density compared with normal images (5.3211% ± 0.7600% vs. 4.5998% ± 0.6586%) and a significant increase in maximum diameter ratio (1.8805 ± 0.3197 vs. 1.5087 ± 0.2877), while the mean tortuosity was significantly decreased (2.1018 ± 0.2933 × 10⁴ cm⁻³ vs. 3.3344 ± 0.3890 × 10⁴ cm⁻³). All differences were statistically significant.

Conclusions: Our system could automatically segment posterior pole retinal images and extract vascular features associated with mild FEVR. Quantitative analysis of these parameters may help ophthalmologists in the early detection of FEVR.

Translational Relevance: This system may contribute to the early detection of FEVR and facilitate the promotion of artificial intelligence–assisted diagnostic techniques in clinical applications.

Introduction
Familial exudative vitreoretinopathy (FEVR) is an inherited ocular disease characterized by abnormal retinal vascular development, first described and named by Criswick and Schepens in 1969.1 The clinical presentation and course of the disease are varied, with fundus changes in the posterior region resembling retinopathy of prematurity (ROP)2 and lifelong vasculopathy in severe cases. The slowly progressive nature of this disease3 makes the early detection of mild FEVR (stages I and II of the clinical staging)4 very important.
According to Pendergast and Trese,5 mild FEVR is characterized by a peripheral avascular retina or retinal neovascularization at the junction between vascular and avascular retina. Abnormalities of vascular structures have also been reported3 and found in posterior pole images of eyes with mild FEVR. When the retinal periphery cannot be observed, posterior pole images can also provide clues for the diagnosis of FEVR.3 Therefore, it is of great interest to detect subtle morphologic changes in posterior pole retinal images.
Yuan et al.6 manually extracted and quantified clinical features of the retinal vasculature in posterior pole retinal images of adults with mild FEVR. However, manual extraction is very time-consuming and poorly reproducible.7
In recent years, a significant amount of technical work has focused on the automatic processing of the retinal vasculature. From unsupervised image- and feature-based approaches8–10 to supervised deep learning models, performance has improved significantly.11–13 Despite this progress, these studies have typically focused on individual functions, neglecting downstream tasks such as feature measurement and quantitative analysis.11
In addition, most reports on FEVR have studied children, adolescents, or adults.4 However, Tang et al.14 found that the prevalence of FEVR in Chinese newborns is not low (1.19% in full-term infants). The diagnosis of FEVR requires a combination of fluorescein angiography (FA),4 genetic testing, and other examinations, and many hospitals performing universal newborn screening are not equipped for such differential diagnoses. Comprehensive fundus examinations of neonates using a wide-field fundus imaging system (Ret Cam) have been reported to clinically diagnose neonatal ocular abnormalities such as retinal hemorrhage (RH) and FEVR.14–16
In this study, we used posterior pole retinal images taken from full-term infants with a Ret Cam III, constructed data sets, trained deep convolutional neural networks (DCNNs) for segmentation, automatically extracted features of posterior pole vascular abnormalities in eyes with mild FEVR, and compared them with those of normal eyes.
Methods
This study was approved by the Ethics Review Committee of Jiaxing Maternal and Child Health Care Hospital, China, and followed the principles of the Declaration of Helsinki. 
Materials and Data Sets
Retinal images were collected from full-term infants born between January 2021 and June 2021 at a tertiary care hospital (Jiaxing Maternal and Child Health Hospital, Jiaxing, China). Full-term infants were examined within 72 hours of birth after a 30-minute fast. After pupil dilation, the medical staff used a Ret Cam III (Natus Medical, Inc., Pleasanton, CA, USA) to collect retinal images from different fields of view, first from the right eye and then from the left eye. The size of these retinal images was 1600 × 1200 pixels, and from them we selected the posterior pole retinal images to construct three separate data sets.
Images were included in the data sets if they met the following criteria: (1) they were from full-term infants with a gestational age of 37 weeks or more; (2) the infants had no history of asphyxia or oxygen supplementation; and (3) for images labeled as mild FEVR, the clinical diagnosis was based on small peripheral vascular abnormalities, such as areas of retinal nonperfusion, increased posterior retinal vascular branching, and retinal neovascularization.
We assigned a reference label to each image (mild FEVR, comprising stages I and II, or normal). The reference was determined by the diagnostic consensus of three ophthalmologists, each with more than 10 years of experience.
As shown in Table 1, we created three separate data sets using 7644, 1532, and 80 retinal images collected from 3822, 766, and 40 different full-term infants, respectively. A total of 48 cases (1.03%) of mild FEVR were detected among the 4628 infants.
Table 1. Characteristics of the Three Data Sets
Table 2. Independent Samples Test of the Three Features
System Structure
The structure of our system is shown in Figure 1. We divided the whole process into three main parts. First, the input retinal image was segmented by two DCNNs separately (A). Then, the system located and calculated the region of interest (ROI) based on the segmented image (B). Finally, the system extracted features from the ROI of the segmented image (C).
Figure 1. Structure of the proposed system.
Figure 2. Fivefold cross-validation of segmentation on the training data set.
Training and Evaluation of DCNNs
As shown in Figure 1, at the beginning of our system, the input image was segmented by two DCNNs separately: one segmented the optic disc (OD) for localization and calculation of the ROI, and the other segmented the retinal vasculature. To train the vascular segmentation DCNN, we used the 7644 images of the training data set; we also randomly selected 1528 images from the training data set to train the OD segmentation DCNN.
Retinal images in the training and test data sets were manually annotated by a professional ophthalmologist for training and testing. The ophthalmologist used medical imaging annotation software to label the OD and a brush annotation tool to annotate the vasculature.
For the DCNNs, our system used two open-source modified U-Nets based on an encoder-decoder architecture (Retina U-Net).17 During training, we set the number of interlayer convolutional kernels to 64, the momentum to 0.9, the learning rate to 0.0001, the decay rate to 1e-6, and the gradient clipping threshold to 5.0. Our system used stochastic gradient descent and processed 16 images per batch. Both DCNNs were initialized from a Gaussian distribution N(0, 0.01).
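As a rough illustration of these settings, the following Python sketch configures a tf.keras optimizer and weight initializer with the hyperparameters listed above (learning rate 0.0001, momentum 0.9, decay 1e-6, gradient clipping at 5.0, Gaussian initialization N(0, 0.01), batch size 16). It assumes a Keras-style training setup; the study used TensorFlow 1.13 with the open-source Retina U-Net code, whose exact training script is not reproduced here, so the convolutional layer shown is only a placeholder.

import tensorflow as tf

BATCH_SIZE = 16  # images processed per batch, as stated in the text

# Inverse-time decay reproduces the classic Keras "decay" behaviour:
# lr_t = lr_0 / (1 + decay_rate * step)
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=1e-4, decay_steps=1, decay_rate=1e-6)

# Stochastic gradient descent with momentum and gradient clipping at 5.0.
optimizer = tf.keras.optimizers.SGD(
    learning_rate=lr_schedule, momentum=0.9, clipvalue=5.0)

# Gaussian initialization N(0, 0.01) for the convolutional kernels.
kernel_init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.01)

# Placeholder block with 64 inter-layer kernels; the full Retina U-Net
# encoder-decoder is defined in the cited open-source project.
example_conv = tf.keras.layers.Conv2D(
    64, 3, padding="same", activation="relu", kernel_initializer=kernel_init)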
We used the area under the receiver operating characteristic curve (AUC) to measure the performance of the segmentation networks during training. To avoid overfitting and underfitting, we divided the training data set into five parts, randomly selected four parts for training and used the remaining part for validation, and repeated this procedure five times (fivefold cross-validation). We used the Scikit-Learn library (French Institute for Research in Computer Science and Automation, Rocquencourt, France) to calculate the AUC scores and calculated the 95% confidence intervals using the formula of Hanley and McNeil.18
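A minimal sketch of this evaluation is shown below, assuming flattened pixel-wise labels and predicted probabilities and a user-supplied train_and_predict callback (a hypothetical helper, not the authors' code); the Hanley–McNeil standard error formula gives the 95% confidence interval.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

def hanley_mcneil_ci(auc, n_pos, n_neg, z=1.96):
    # 95% CI for an AUC using the Hanley & McNeil (1982) standard error.
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    se = np.sqrt((auc * (1.0 - auc)
                  + (n_pos - 1.0) * (q1 - auc ** 2)
                  + (n_neg - 1.0) * (q2 - auc ** 2)) / (n_pos * n_neg))
    return auc - z * se, auc + z * se

def crossval_auc(images, masks, train_and_predict, n_splits=5, seed=0):
    # Fivefold cross-validation: train on four folds, score AUC on the held-out fold.
    aucs = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(images):
        probs = train_and_predict(train_idx, test_idx)       # per-pixel probabilities, shape (n, H, W)
        y_true = (masks[test_idx].ravel() > 0).astype(int)   # ground-truth pixels as 0/1
        auc = roc_auc_score(y_true, probs.ravel())
        n_pos = int(y_true.sum())
        lo, hi = hanley_mcneil_ci(auc, n_pos, y_true.size - n_pos)
        aucs.append(auc)
        print(f"fold AUC = {auc:.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
    return float(np.mean(aucs))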
Based on the AUC scores, we selected the best configuration and conducted performance tests, calculating the sensitivity and specificity of each segmentation network.
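Pixel-wise sensitivity and specificity can be computed from the confusion counts of the binarized segmentation against the manual annotation; a small NumPy sketch under that assumed convention:

import numpy as np

def sensitivity_specificity(pred_mask, true_mask):
    # Pixel-wise sensitivity (true-positive rate) and specificity (true-negative rate).
    pred = pred_mask.astype(bool).ravel()
    true = true_mask.astype(bool).ravel()
    tp = np.sum(pred & true)
    tn = np.sum(~pred & ~true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    return tp / (tp + fn), tn / (tn + fp)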
Location and Calculation of ROI
After converting the OD segmentation images to grayscale, our system used the Canny19 edge detection algorithm to obtain binarized edges, compressed the data points with the Reumann–Witkam20 algorithm to obtain the contours, found the convex hulls with the Sklansky21 algorithm, and calculated the minimum bounding rectangle of each contour with the rotating calipers algorithm.22
Our system then iteratively solved for the minimum enclosing circle of the four rectangle corners and calculated the circle center and radius. Finally, a circular ROI was obtained from this center and three times the diameter of the enclosing circle. To focus on the vasculature, we fused the retinal vessel and OD segmentation images within the ROI.
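The steps above map naturally onto standard OpenCV primitives (Canny edges, Sklansky convex hull, rotating-calipers minimum-area rectangle, minimum enclosing circle). The sketch below follows that pipeline under the assumptions that the OD segmentation is a three-channel image and that the ROI radius is three times the OD radius; the Reumann–Witkam point compression step is approximated here by OpenCV's built-in contour simplification.

import cv2
import numpy as np

def locate_roi(od_segmentation_bgr, roi_scale=3.0):
    # Locate a circular ROI around the optic disc (OD) from its segmentation image.
    gray = cv2.cvtColor(od_segmentation_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                        # binarized OD edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = np.vstack([c.reshape(-1, 2) for c in contours])
    hull = cv2.convexHull(points)                           # Sklansky convex hull
    rect = cv2.minAreaRect(hull)                            # rotating-calipers minimum-area rectangle
    corners = cv2.boxPoints(rect)                           # four corner points of the rectangle
    (cx, cy), od_radius = cv2.minEnclosingCircle(corners)   # circle enclosing the four corners
    return (int(cx), int(cy)), int(roi_scale * od_radius)   # ROI center and radius

def apply_roi(image, center, radius):
    # Mask everything outside the circular ROI so later features use only the ROI.
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, center, radius, 255, thickness=-1)
    return cv2.bitwise_and(image, image, mask=mask)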
Feature Extraction
The system inverted the color of the OD segmentation images to define the object and background for processing. For the vascular segmentation images, the system used the Canny edge detection algorithm19 to detect contours and calculated the vessel widths by Euclidean distance transform, from which the maximum diameter ratios were obtained. The system also used the Zhang–Suen fast parallel thinning algorithm23 to extract the vessel centerlines and used the total squared curvature normalized by arc length as the measure of vascular tortuosity.24 We analyzed only the vascular densities, widths, and tortuosities within the ROI; for vessel widths we took the maximum ratios, and for tortuosities we took the means.
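A sketch of the width and tortuosity measurements is shown below, assuming a binary vessel mask and an already-ordered centerline path (extracting ordered paths from the skeleton, and the exact pairing used to form the maximum diameter ratio, are not spelled out in the text and are left out here). The skimage skeletonize function implements the Zhang–Suen thinning scheme for 2D images.

import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_widths(vessel_mask):
    # Local vessel diameters: Euclidean distance transform sampled on the centerline.
    dist = ndimage.distance_transform_edt(vessel_mask > 0)  # radius to nearest background pixel
    skeleton = skeletonize(vessel_mask > 0)                 # Zhang-Suen thinning
    return 2.0 * dist[skeleton]                             # diameters along the centerline

def tortuosity(centerline_xy):
    # Total squared curvature normalized by arc length for one ordered centerline,
    # where centerline_xy is an (N, 2) array of ordered pixel coordinates.
    x = centerline_xy[:, 0].astype(float)
    y = centerline_xy[:, 1].astype(float)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    ds = np.hypot(dx, dy)                                   # local arc-length element
    curvature = np.abs(dx * ddy - dy * ddx) / np.maximum(ds, 1e-8) ** 3
    return float(np.sum(curvature ** 2 * ds) / ds.sum())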
Statistical Analysis
We used the Shapiro–Wilk test to assess the normality of the features extracted from retinal images with mild FEVR and from normal images. Independent-samples t-tests were used to compare differences between them when the features conformed to a normal distribution, and Wilcoxon rank-sum tests were used when they did not. P values less than 0.05 were considered statistically significant. Statistical analyses were performed with SPSS Statistics 26.0 (IBM, Armonk, NY, USA).
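The same decision logic can be reproduced with SciPy; a minimal sketch follows (the published analysis used SPSS, so this is only an equivalent open-source illustration).

from scipy import stats

def compare_groups(fevr_values, normal_values, alpha=0.05):
    # Shapiro-Wilk normality check, then an independent-samples t-test if both
    # groups look normally distributed, otherwise the Wilcoxon rank-sum test.
    _, p_fevr = stats.shapiro(fevr_values)
    _, p_norm = stats.shapiro(normal_values)
    if p_fevr > alpha and p_norm > alpha:
        stat, p = stats.ttest_ind(fevr_values, normal_values)
    else:
        stat, p = stats.ranksums(fevr_values, normal_values)
    return stat, p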
Random Forest Algorithm
The random forest (RF)25 algorithm is a classical, easy-to-implement supervised learning algorithm that performs well in classification and regression and can also capture interactions between features during training.
We constructed the RF model with the ensemble module of the Scikit-Learn library. To train the model, we evaluated the number of trees in the forest from 1 to 20 and selected the best-performing setting of seven trees (n_estimators = 7). We also performed a regression analysis using the random forest for different combinations of features (from one to three) and evaluated the agreement of the fitted results with the manual reading results using κ values.
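A sketch of this evaluation with scikit-learn is given below, assuming a feature matrix X (density, mean tortuosity, maximum diameter ratio) and binary manual-reading labels y; because the text describes a regression analysis, a RandomForestRegressor could be substituted for the classifier shown here, and train/test splitting is omitted for brevity.

from itertools import combinations
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

FEATURES = ["density", "mean_tortuosity", "max_diameter_ratio"]

def evaluate_combinations(X, y, n_estimators=7, seed=0):
    # Fit a seven-tree random forest on every feature combination (one to three
    # features) and report agreement (kappa) with the manual reading labels.
    results = {}
    for k in range(1, len(FEATURES) + 1):
        for combo in combinations(range(len(FEATURES)), k):
            cols = list(combo)
            model = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
            model.fit(X[:, cols], y)
            pred = model.predict(X[:, cols])
            name = "+".join(FEATURES[i] for i in combo)
            results[name] = (cohen_kappa_score(y, pred), accuracy_score(y, pred))
    return results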
Experiments
All networks were implemented in TensorFlow 1.13 (Google, Mountain View, CA, USA) and evaluated on a computer with an NVIDIA GeForce RTX 3090 GPU, 64 GB of RAM, and an Intel Core i9-12900KF CPU. All models and feature extraction algorithms were implemented in Python 3.6.5 (Python Software Foundation, Wilmington, DE, USA). We trained our DCNNs on the training data set, tested their performance on the test data set, and evaluated the extracted features on the comparison data set using SPSS 26.0 and the RF algorithm.
Results
Training of the Segmentation Networks
On the training data set, the average AUC score was 0.9286 for retinal vascular segmentation (Fig. 2A) and 0.9845 for optic disc segmentation (Fig. 2B). Based on the results of the fivefold cross-validation, we selected the best performance (the purple curve), with an overall average AUC score of 0.9601.
Figure 3. Processing of retinal images with mild FEVR. (A) Input retinal image. (B) Temporal image of the input image, which was only used to show abnormalities in the periphery and was not added to the data set. (C) Vascular segmentation image. (D) OD segmentation image. (E) OD segmentation image with ROI. (F) Fusion of vascular and OD segmentation images with ROI. (G) ROI segmentation image with vasculature and OD. (H) ROI segmentation image with vasculature and inverted OD (after binarization).
Performance of the Segmentation Networks
The trained network for retinal vascular segmentation obtained a sensitivity of 0.78 and a specificity of 0.98 on the test data set. Previous studies using convolutional neural networks to segment the retinal vasculature in adults obtained sensitivities of 0.7 to 0.9 and specificities of 0.8 to 0.9.26 Vessels in neonatal retinal images have lower contrast and less uniform illumination than those in adult images. Mao et al.27 obtained a sensitivity of 0.72 and a specificity of 0.99 for neonatal retinal vascular segmentation. In this work, the network used for OD segmentation reached a sensitivity of 0.94 and a specificity of 0.99.
Figure 3 shows the processing flow for an input retinal image with mild FEVR. Throughout the process, the image remained at its original size (1600 × 1200 pixels), which facilitated the fusion of the outputs of the two segmentation DCNNs and avoided the pixel loss and positional errors that arise during image scaling.28 After binarization, we inverted the OD so that the vascular density could be calculated as the ratio of the vessel area to the ROI area excluding the OD.29
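Under that convention, the density calculation reduces to a ratio of pixel counts; a small sketch assuming boolean masks of the original image size (a hypothetical helper, not the authors' code):

def vascular_density(vessel_mask, od_mask, roi_mask):
    # Vessel area inside the ROI divided by the ROI area excluding the OD.
    roi_without_od = roi_mask & ~od_mask
    vessel_in_roi = vessel_mask & roi_without_od
    return float(vessel_in_roi.sum()) / float(roi_without_od.sum())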
We also performed segmentation and feature extraction on the normal retinal images; the processing flow is shown in Figure 4.
Figure 4. Processing of the normal retinal images. (A) Input retinal image. (B) Vascular segmentation image. (C) OD segmentation image. (D) OD segmentation image with ROI. (E) Fusion of vascular and OD segmentation images with ROI. (F) ROI segmentation image with vasculature and inverted OD (after binarization).
Statistical Analysis
We performed an independent-samples test comparing the features extracted from the posterior pole retinal images with mild FEVR in the comparison data set (vessel densities, mean tortuosities, and maximum diameter ratios within the ROI) with those extracted from the normal images. The results are shown in Table 2 and indicate a significant difference between the two groups (P < 0.001).
As shown in Figure 5, there were significant differences between the features extracted from images with mild FEVR and those from normal images, indicating that these features could be used effectively as a basis for diagnosis.
Figure 5. Quantifications of extracted features (95% confidence interval).
Random Forest Algorithm
We applied the RF algorithm to evaluate different combinations of the three features; the κ values measuring agreement between the regression results and the manual reading results are presented in Table 3.
Table 3. The κ Values for Different Feature Combinations
As can be seen from Table 3 and Figure 6, the best performance was obtained with all three features. In this condition, we obtained an overall accuracy of 88.75% and a κ value of 0.775 on the comparison data set, with a sensitivity of 0.8780 and a specificity of 0.8537. The RF algorithm also indicated the importance of the three features, with importances of 0.0825 for density, 0.5288 for mean tortuosity, and 0.3887 for maximum diameter ratio; vascular tortuosity was therefore the most important single feature.
Figure 6. Quantifications of κ values for different combinations. 1, vascular densities; 2, mean tortuosities; 3, maximum diameter ratios. One, two, and three refer to the number of features.
Discussion
In the current study, we developed a system capable of automatically segmenting and extracting abnormal vascular features from posterior pole retinal images (acquired by Ret Cam III) of full-term infants. We also performed quantitative analysis on the features within the ROI (densities, mean tortuosities, and maximum diameter ratios). The results showed that these features differed significantly between full-term infants with mild FEVR and normal controls. 
Unlike Yuan et al.,6 who manually extracted abnormal features from posterior pole retinal images of adults with mild FEVR, we used two DCNNs to automatically segment the OD and the retinal vasculature, respectively. The OD segmentation images were used to localize and calculate the ROI, while the vascular segmentation images were used to extract features within the ROI. This helps to overcome the disadvantages of manual extraction, which is time-consuming, subject to reader variation, and poorly reproducible.7
We have not found reports that automatically segmented and extracted features from posterior pole retinal images of full-term infants with mild FEVR. Meanwhile, we noted that Redd et al.30 used a U-Net to segment the retinal vasculature and OD for the automatic diagnosis of ROP, and that Yildiz et al.31 and Mao et al.27 also used U-Nets to segment the retinal vasculature and OD in studies of plus disease in retinopathy of prematurity.
Unlike these studies, we used whole images when training the networks and kept the size consistent with the original image (1600 × 1200 pixels) throughout the whole process, which facilitated operations such as fusion and inversion. A similar study by Kim et al.28 showed that retinal appearance assessment based on the whole image provided more accurate and reliable performance than quadrant-based assessment; of course, a direct comparison of performance is not appropriate because of the differences in data sets and tasks.
Many reports have described abnormalities of the retinal vasculature in patients with mild FEVR.3,6,32 Therefore, we designed the system to automatically extract vascular features (densities, mean tortuosities, and maximum diameter ratios) within the ROI. All three parameters showed significant differences between infants with mild FEVR and normal infants. These quantitative features could serve as supporting evidence to assist physicians in the clinical diagnosis of mild FEVR during newborn screening.
In a previous study, Yuan et al.6 found that in patients with mild FEVR, more vessels radiated from the OD within two OD diameters (24.53 ± 3.1 vs. 21.39 ± 2.65); similarly, our study showed a significant increase in vessel density within the ROI (three OD diameters): 5.3211% ± 0.7600% vs. 4.5998% ± 0.6586%.
In OMIM 133780 (Online Mendelian Inheritance in Man; https://www.ncbi.nlm.nih.gov/omim, provided in the public domain by the National Center for Biotechnology Information, Bethesda, MD, USA), stretched vessels are described among the abnormalities of patients with mild FEVR. Compared with normal eyes, we found a significant reduction in mean vascular tortuosity within the ROI (2.1018 ± 0.2933 × 10⁴ cm⁻³ vs. 3.3344 ± 0.3890 × 10⁴ cm⁻³). In addition, Kashani et al.32 reported that patients with mild FEVR had dilated veins due to hypoxia; correspondingly, the maximum vascular diameter ratio within the ROI was significantly increased in our study (1.8805 ± 0.3197 vs. 1.5087 ± 0.2877).
To our knowledge, this is the first time that automatically extracted quantitative vascular features (densities, mean tortuosities, and maximum diameter ratios) have been applied to analyze abnormalities in full-term infants with mild FEVR.
After identifying the significant differences in vascular features in the comparison data set, we developed an RF classifier, which achieved its best performance with a combination of all three features. In our model, the mean vascular tortuosity was the most important single feature.
At the same time, the agreement of our random forest model with the manual reading results (κ = 0.775) suggests that more related features are needed to improve performance. Investigating the features learned by the DCNNs will be our next task.
FA and genetic testing are very important to confirm the diagnosis of FEVR.32 Performing FA in neonates requires close collaboration among anesthesiology and neonatology departments and dedicated pediatric angiography facilities, and many hospitals performing universal neonatal screening at the primary level cannot offer such differential diagnoses. Comprehensive fundus examinations of neonates using a wide-field fundus imaging system (Ret Cam) have been reported to detect neonatal RH, FEVR, and other neonatal ocular abnormalities.14–16 We therefore found it worthwhile to study vascular abnormalities in full-term infants with mild FEVR using images acquired with the Ret Cam III.
Limitations
The images used to construct the data sets were a potential limitation of our system. They were taken only at Jiaxing Maternal and Child Health Care Hospital, China, between January 2021 and June 2021, and we selected only posterior pole field images, which limits the generalizability of the model. Meanwhile, low-quality images caused by light leakage, blur, or underexposure could affect the performance of the system. Zhou et al.33 proposed that automatic analysis of retinal vascular morphology from color retinal images should include automatic image grading, segmentation, and morphologic feature measurement.
Future Work
In future work, we will continue to study the visualization of the features learned by the DCNNs and search for more relevant features to enhance the performance of our system, and we will investigate an image preprocessing module to automatically and objectively grade and filter the images incorporated into the data sets. We will also work with other hospitals to obtain more retinal images of full-term infants.
Conclusion
We presented a system that automatically segments posterior pole retinal images of full-term infants, extracts vascular features, and quantifies vascular abnormalities associated with mild FEVR. This quantitative analysis may help in the early detection of FEVR.
Acknowledgments
The authors thank Liu Jia of Jiaxing Maternal and Child Health Hospital, who provided the retinal images of full-term infants used for this study. We also thank the ophthalmologists who provided much help, including the manual diagnoses.
Supported by the Jiaxing Science and Technology Bureau project “Retinal Vascular Quantitative Analysis Tool for Retina Images Based on Deep Convolutional Neural Networks” (2022AY10011). 
Disclosure: P. Li, None; J. Liu, None 
References
1. Criswick VG, Schepens CL. Familial exudative vitreoretinopathy. Am J Ophthalmol. 1969; 68(4): 578–594.
2. Miyakubo H, Hashimoto K, Miyakubo S. Retinal vascular pattern in familial exudative vitreoretinopathy. Ophthalmology. 1984; 91(12): 1524–1530.
3. Boonstra FN, van Nouhuys CE, Schuil J, et al. Clinical and molecular evaluation of probands and family members with familial exudative vitreoretinopathy. Invest Ophthalmol Vis Sci. 2009; 50(9): 4379–4385.
4. Ranchod TM, Ho LY, Drenser KA, et al. Clinical presentation of familial exudative vitreoretinopathy. Ophthalmology. 2011; 118(10): 2070–2075.
5. Pendergast SD, Trese MT. Familial exudative vitreoretinopathy: results of surgical management. Ophthalmology. 1998; 105(6): 1015–1023.
6. Yuan M, Yang Y, Yu S, et al. Posterior pole retinal abnormalities in mild asymptomatic FEVR. Invest Ophthalmol Vis Sci. 2015; 56(1): 458–463.
7. Couper DJ, Klein R, Hubbard LD, et al. Reliability of retinal photography in the assessment of retinal microvascular characteristics: the Atherosclerosis Risk in Communities Study. Am J Ophthalmol. 2002; 133(1): 78–88.
8. Relan D, MacGillivray T, Ballerini L, Trucco E. Automatic retinal vessel classification using a least square-support vector machine in VAMPIRE. In: 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Chicago, IL: IEEE; 2014: 142–145.
9. Vázquez SG, Cancela B, Barreira N, et al. Improving retinal artery and vein classification using a minimal path approach. Mach Vis Appl. 2013; 24(5): 919–930.
10. Alam M, Son T, Toslak D, et al. Combining ODR and blood vessel tracking for artery–vein classification and analysis in color fundus images. Transl Vis Sci Technol. 2018; 7(2): 23.
11. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer; 2015: 234–241.
12. Perez-Rovira A, MacGillivray T, Trucco E, et al. VAMPIRE: vessel assessment and measurement platform for images of the REtina. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Boston, MA: IEEE; 2011: 3391–3394.
13. Muller J, Alonso-Caneiro D, Read SA, et al. Application of deep learning methods for binarization of the choroid in optical coherence tomography images. Transl Vis Sci Technol. 2022; 11(2): 23.
14. Tang H, Li N, Li Z, et al. Fundus examination of 199 851 newborns by digital imaging in China: a multicentre cross-sectional study. Br J Ophthalmol. 2018; 102(12): 1742–1746.
15. Li LH, Li N, Zhao JY, et al. Findings of perinatal ocular examination performed on 3573 healthy full-term newborns. Br J Ophthalmol. 2013; 97(5): 588–591.
16. Vinekar A, Govindaraj I, Jayadev C, et al. Universal ocular screening of 1021 term infants using wide-field digital imaging in a single public hospital in India—a pilot study. Acta Ophthalmol. 2015; 93(5): E372–E376.
17. Jaeger PF, Kohl SA, Bickelhaupt S, et al. Retina U-Net: embarrassingly simple exploitation of segmentation supervision for medical object detection. In: Machine Learning for Health Workshop. PMLR; 2020: 171–183.
18. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982; 143(1): 29–36.
19. Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell. 1986; 8(6): 679–698.
20. Reumann K, Witkam APM. Optimizing curve segmentation in computer graphics. Int Comput Symp. 1974: 467–472.
21. Sklansky J, Nahin PJ. A parallel mechanism for describing silhouettes. IEEE Trans Comput. 1972; 100(11): 1233–1239.
22. Gomez JIV, Melchor MM, Lozada JCH. Optimal coverage path planning based on the rotating calipers algorithm. In: 2017 International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE). Cuernavaca, Mexico: IEEE; 2017: 140–144.
23. Zhang TY, Suen CY. A fast parallel algorithm for thinning digital patterns. Commun ACM. 1984; 27(3): 236–239.
24. Turior R, Onkaew D, Uyyanonvara B, et al. Quantification and classification of retinal vessel tortuosity. Sci Asia. 2013; 39: 265–277.
25. Breiman L. Random forests. Mach Learn. 2001; 45(1): 5–32.
26. Wang S, Yin Y, Cao G, et al. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing. 2015; 149: 708–717.
27. Mao J, Luo Y, Liu L, et al. Automated diagnosis and quantitative analysis of plus disease in retinopathy of prematurity based on deep convolutional neural networks. Acta Ophthalmol. 2020; 98(3): E339–E345.
28. Kim SJ, Campbell JP, Kalpathy-Cramer J, et al. Accuracy and reliability of eye-based vs quadrant-based diagnosis of plus disease in retinopathy of prematurity. JAMA Ophthalmol. 2018; 136(6): 648–655.
29. Sprödhuber A, Wolz J, Budai A, et al. The role of retinal vascular density as a screening tool for aging and stroke. Ophthalmic Res. 2018; 60(1): 1–8.
30. Redd TK, Campbell JP, Brown JM, et al. Evaluation of a deep learning image assessment system for detecting severe retinopathy of prematurity. Br J Ophthalmol. 2019; 103(5): 580–584.
31. Yildiz VM, Tian P, Yildiz I, et al. Plus disease in retinopathy of prematurity: convolutional neural network performance using a combined neural network and feature extraction approach. Transl Vis Sci Technol. 2020; 9(2): 10.
32. Kashani AH, Brown KT, Chang E, et al. Diversity of retinal vascular anomalies in patients with familial exudative vitreoretinopathy. Ophthalmology. 2014; 121(11): 2220–2227.
33. Zhou Y, Wagner SK, Chia MA, et al. AutoMorph: automated retinal vascular morphology quantification via a deep learning pipeline. Transl Vis Sci Technol. 2022; 11(7): 12.