Open Access
Articles  |   June 2021
Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images
Author Affiliations & Notes
  • Bo Zheng
    School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
    Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
  • Qin Jiang
    Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
  • Bing Lu
    School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
    Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
  • Kai He
    School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
    Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
  • Mao-Nian Wu
    School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
    Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
  • Xiu-Lan Hao
    School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
    Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
  • Hong-Xia Zhou
    School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
    Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
    College of Computer and Information, Hehai University, Nanjing, Jiangsu, China
  • Shao-Jun Zhu
    School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
    Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
  • Wei-Hua Yang
    Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
  • Correspondence: Wei-Hua Yang, Affiliated Eye Hospital of Nanjing Medical University, 138 Han-zhong Road, Nanjing 210029, Jiangsu, China. e-mail: benben0606@139.com 
  • Shao-Jun Zhu, School of Information Engineering Huzhou University, Huzhou 313000, Zhejiang, China. e-mail: zhushaojun@zjhu.edu.cn 
  • Footnotes
    *  BZ and QJ are co-first authors.
Translational Vision Science & Technology June 2021, Vol.10, 20. doi:https://doi.org/10.1167/tvst.10.7.20
Abstract

Purpose: There is a large discrepancy between the numbers of ophthalmologists and patients in China. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked.

Methods: A total of 2000 fundus images were collected; 3 different 5-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The clinical diagnostic results were compared with the models' diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretability methods were used to obtain the model's area of focus in the fundus image.

Results: The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, the diagnosis consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best results of sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively.

Conclusions: This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases. It can be used to obtain the diagnostic category of fundus images and the model's area of focus.

Translational Relevance: This study will help primary-level doctors provide effective services to ophthalmic patients.

Introduction
Retinal vein occlusion, high myopia, glaucoma, and diabetic retinopathy (DR) are common ophthalmological diseases that can usually be diagnosed from fundus images.1–3 Retinal vein occlusion is the most common fundus vascular disease in the elderly, and timely treatment may prevent blindness. Myopia is highly prevalent in China: more than 90% of college students are myopic, and high myopia may cause other severe fundus diseases, making early prevention and early screening essential.4 Glaucoma and DR are the diseases with the highest rates of blindness. Fundus images can be obtained with a non-mydriatic fundus color camera, and professional ophthalmologists can diagnose these eye diseases by viewing the images. 
There are more than 40 million patients with fundus diseases and about 44,800 ophthalmologists in China, but only about 3000 doctors can perform professional diagnosis of fundus diseases.4,5 Moreover, the regional development of China is unbalanced.4 Most ophthalmologists in the country are concentrated in the economically developed eastern coastal areas,5 and there is a huge gap in the level of diagnosis and treatment between hospitals at or above the city level and county-level and community hospitals.4 Compared with ophthalmologists in large hospitals, the technical level of ophthalmologists in county-level and community hospitals is still relatively low.5 Thus, basic-level hospitals cannot meet the demands of the growing number of ophthalmology patients. To address this problem, a five-category (normal and 4 common fundus diseases) intelligent auxiliary diagnosis model is designed in this study that can assist nonprofessional ophthalmologists at the basic level in the preliminary diagnosis of patients and help them achieve accurate referrals and disease classification. This approach can help solve the problem of basic-level doctors being unable to provide effective services to ophthalmology patients due to the large discrepancy between the numbers of doctors and patients. 
In recent years, traditional machine learning methods were usually used to extract manually selected features for diagnosing eye diseases. For example, traditional machine learning techniques can detect the thickness of the retinal nerve fiber layer (RNFL) in myopia, or extract the optic disk and optic cup from fundus images and calculate the cup-to-disk ratio to help diagnose glaucoma.6–9 Macular edema,10,11 exudates,11 cotton-wool spots,12 microaneurysms,13,14 and neovascularization on the optic disk15 can be detected to support early DR diagnosis. However, although traditional machine learning can automate the extraction of manually selected features, some complex features are difficult for humans to discover; therefore, newer deep learning methods that automatically learn features during training have increasingly been adopted and provide better recognition performance. 
Deep learning has developed rapidly since 2012. Researchers can automatically extract image features through a convolutional neural network (CNN), allowing a closer integration of artificial intelligence and ophthalmology. Nagasato et al. used both a deep learning method and a support vector machine (SVM) to detect central retinal vein occlusion and branch retinal vein occlusion in ultra-wide-field fundus images and compared the results of the two methods; the deep learning method achieved better sensitivity and specificity than the SVM.16,17 Christopher et al. used deep learning models to distinguish glaucomatous optic neuropathy (GON) from healthy eyes and compared the results of three deep learning models and transfer learning.18 Liu et al. established a deep learning system to diagnose GON, reaching a sensitivity of 96.2% and a specificity of 97.7%.19 Medeiros et al. built a deep learning model to measure the thickness of the RNFL from fundus images to detect changes in glaucoma.20 In 2016, the Google team trained a deep learning model to diagnose DR and even automatically grade DR from fundus images.21 Since then, an increasing number of studies have used deep learning models to diagnose DR.22–25 Many studies have focused on using deep learning methods to diagnose related fundus diseases,26–29 but currently, the diagnosis of fundus diseases largely consists of single-disease diagnosis: one model diagnoses one disease. Unfortunately, models that can diagnose only a single disease cannot meet the needs of doctors in basic hospitals, who must evaluate large numbers of patients who may have different ophthalmic diseases. 
This study uses transfer learning to design a five-category intelligent auxiliary diagnosis model for common fundus diseases. The model detects normal eyes and four common fundus diseases from fundus images, and it can reveal the area of focus in an image through interpretable methods. The models developed in this study can help nonprofessional ophthalmologists at the primary level perform initial patient evaluations, determine their conditions, and make timely referrals, allowing basic-level hospitals to provide more effective patient services. 
Materials and Methods
Data Source
The images used in the study were mainly acquired from the Affiliated Eye Hospital of Nanjing Medical University and the Intelligent Ophthalmology Database of the Zhejiang Academy of Mathematical Medicine. The images were obtained from multiple models of non-mydriatic fundus cameras, so the image sizes vary. In this study, 2000 fundus images (400 per category) were used to train the 5-category intelligent auxiliary diagnosis models for common fundus diseases, and 1134 images were used for model testing. There were no restrictions on the age or gender of the patients represented by the images. In addition, the images were all anonymized: all patient-related personal information was removed to avoid infringing on patient privacy; thus, there are no relevant patient statistics. 
The quality of the images selected for this study was high, allowing ophthalmologists to clearly determine whether an image shows a normal fundus or one of the four diseases. Each fundus image is either normal or shows a single disease (i.e., any given image can be diagnosed only as normal or classified as one disease). The ground-truth diagnoses of the fundus images were determined independently by two professional ophthalmologists. When the two doctors agreed, their diagnosis was taken as the final result; when they disagreed, the result given by an expert ophthalmologist was adopted as the final clinical diagnosis. 
The criteria for diagnosing each disease through color fundus photographs are mainly based on a classic textbook30 and related guidelines. The diagnostic criteria for venous occlusion are retinal blood stasis, retinal hemorrhage, edema, and exudation; for high myopia, a leopard-shaped (tessellated) fundus, choroidal atrophy spots, and atrophic changes of the macular area; for glaucoma, comprehensive judgments such as an enlarged cup-to-disk ratio, narrowing of the disk rim, and localized or diffuse RNFL defects; and for DR, the criteria from the Diabetic Retinopathy PPP 2019 of the American Academy of Ophthalmology (AAO).31 
Model Training
For transfer learning, this study uses a ResNet32 model with its initial parameters pretrained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)33 dataset. Using 400 images each of normal eyes and 4 common fundus diseases, a total of 2000 fundus images were used to train a 5-category intelligent auxiliary diagnosis model for common fundus diseases. The trained model can classify normal fundus images and four common fundus diseases and reveal the area of focus in the image through visualization methods. 
Various ResNet models exist, including ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152; the main difference among them is network depth. This research adopts ResNet-50 for transfer learning. The basic network structure of the ResNet-50 model includes convolutional layers, pooling layers, activation layers, and a fully connected layer. The "50" in ResNet-50 refers to an architecture with an initial 7 × 7 convolutional layer and 16 building blocks (each containing 3 convolutional layers), for a total of 49 convolutional layers, plus a fully connected layer. In this study, 400 images of each type were selected to train the models. The images were preprocessed and resized; the inputs to ResNet-50 were 224 × 224. The final five-category intelligent auxiliary diagnosis model was obtained after training. 
Two main methods were used to perform transfer learning. The first method preserved the network structure of the ResNet model and changed only the final fully connected layer to output five categories. The second method changed the fully connected layer structure: the original fully connected layer was replaced with two fully connected layers (with ReLU activation and dropout), which output the final classification results. In theory, the model using the first method has a simpler structure and lower complexity, whereas the model using the second method has higher complexity and stronger learning ability. During transfer training, the convolutional layers were first frozen, leaving their pretrained parameters unchanged while the parameters of the fully connected layer were updated on each batch of training data; the model was then retrained, during which all parameters were updated iteratively. Three models were trained in this study: the first transfer method was adopted for models 1 and 2, whereas the second transfer method was adopted for model 3. The model structures after transfer learning are shown in Figure 1.
Figure 1.
 
Structure of the three models.
Model Interpretability
Deep learning models are a "black box": given an input image, the model outputs only the category to which the image belongs, that is, the diagnosis predicted by the model. However, such diagnostic results alone are not very convincing; some basis for the judgment must also be provided. In this study, the model's area of focus is obtained by visualizing a class-activation heat map34 and by the LIME method,35 which reveal the basis on which the model judged the image category. 
The ResNet-50 model requires input images sized 224 × 224, but the images in this study were all larger than 224 × 224, so they had to be scaled during preprocessing. Scaling a non-square image can deform the fundus, in which case the visualized area of focus may also be deformed. Therefore, when preprocessing the input image, the extra black borders are first removed and the image is then expanded to a square based on the length of its long side before scaling. Models 2 and 3 were trained using this preprocessing method; model 1 was not. Model 3 further differs from model 2 in its transfer learning method. The differences among the three models are shown in Table 1.
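The border-removal and square-padding step can be sketched as follows. The near-black threshold is an assumption for illustration; the paper does not report the exact value used.

```python
import numpy as np
from PIL import Image


def preprocess_fundus(img, size=224, black_thresh=10):
    """Remove the extra black borders around the fundus, expand the crop
    to a square based on its long side, and resize to the network input
    size. `black_thresh` (the near-black cutoff) is an assumed value."""
    arr = np.asarray(img.convert("RGB"))
    rows, cols = np.where(arr.max(axis=2) > black_thresh)  # non-black pixels
    if rows.size:  # crop to the bounding box of the visible fundus
        arr = arr[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    h, w = arr.shape[:2]
    side = max(h, w)  # pad to a square so scaling does not deform the fundus
    canvas = np.zeros((side, side, 3), dtype=arr.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = arr
    return Image.fromarray(canvas).resize((size, size), Image.BILINEAR)
```

Because the fundus is padded rather than stretched, the heat maps computed on the 224 × 224 input map back to the original image without distortion.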
Table 1.
 
Differences Between the Three Models
Statistical Analysis
The statistical analyses were performed using SPSS 22.0 statistical software. Count data are expressed as numbers of images and percentages. The sensitivity, specificity, F1-score, and other indicators of the diagnostic models were calculated for the normal fundus images and the four common disease categories; then, receiver operating characteristic (ROC) curves were plotted. The kappa test was used to evaluate the consistency between the expert diagnosis group and the model diagnostic results. Taking the results of the expert diagnosis group as the ground truth, a kappa value of 0.61 to 0.80 was considered significantly consistent, and a kappa value >0.80 was considered highly consistent. 
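For readers reproducing the analysis outside SPSS, the per-class indicators and the kappa statistic follow directly from the five-by-five confusion matrix of expert versus model diagnoses. A minimal sketch (standard definitions, not tied to the study's software):

```python
import numpy as np


def per_class_metrics(cm):
    """One-vs-rest sensitivity, specificity, and F1-score per class from
    a confusion matrix (rows: expert diagnosis, columns: model output)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp  # cases of the class the model missed
    fp = cm.sum(axis=0) - tp  # other cases the model assigned to the class
    tn = cm.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, f1


def cohen_kappa(cm):
    """Cohen's kappa: agreement between model and experts beyond chance."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n                              # raw agreement
    p_chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2  # by chance
    return (p_observed - p_chance) / (1 - p_chance)
```

A kappa above 0.80 from `cohen_kappa` corresponds to the "highly consistent" band used in the text.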
Results
A total of 1134 fundus images were used to test the 5-category intelligent auxiliary diagnosis model for common fundus diseases. The expert diagnosis group diagnosed 300 images as normal fundus, 162 as RVO, 308 as high myopia, 126 as glaucoma, and 238 as DR. Model 1 diagnosed 299 fundus images as normal fundus, 182 as RVO, 282 as high myopia, 157 as glaucoma, and 214 as DR. Model 2 diagnosed 298 fundus images as normal fundus, 176 as RVO, 273 as high myopia, 156 as glaucoma, and 231 as DR. Model 3 diagnosed 299 fundus images as normal fundus, 161 as RVO, 281 as high myopia, 160 as glaucoma, and 233 as DR. The diagnostic results of the three models are listed in Tables 2, 3, and 4, respectively. 
Table 2.
 
Diagnostic Results of Model 1
Table 3.
 
Diagnostic Results of Model 2
Table 4.
 
Diagnostic Results of Model 3
Compared with the expert diagnosis group, the true positive rate of the 3 intelligent auxiliary diagnosis models in diagnosing normal fundus is almost 100%. The highest sensitivities for RVO, high myopia, glaucoma, and DR diagnoses are 88.27% (model 1), 89.94% (model 1), 95.24% (models 2 and 3), and 88.24% (model 2), respectively. The specificities of the 3 models for all diseases are above 95%, and the specificities for normal fundus and high myopia are above 99%, indicating the models' low misdiagnosis rates. The sensitivity of the 3 models for diagnosing normal fundus is extremely high; the lowest is 99.33%. The lowest sensitivity for glaucoma diagnosis was 92.86%, whereas the sensitivities for RVO, high myopia, and DR diagnoses were lower, mostly below 90%. The area under the ROC curve (AUC) values of the 3 models for all diseases were approximately 0.90 or higher; model 2 obtained the highest AUC of 0.998 (for normal fundus). A comparison of the evaluation index results of the 3 models is shown in Table 5.
Table 5.
 
Evaluation Index Results of the Three Models
Among the 3 models, models 1 and 2 were trained using the same transfer learning method; they differ primarily in whether the black borders were removed during image preprocessing, and there are only small differences between their evaluation index results. The black borders were removed for both models 2 and 3 during preprocessing, but their transfer learning methods differ; apart from the larger differences in their RVO and DR sensitivity results, the other evaluation indicators of these two models were relatively close. Model 1 obtained the highest sensitivities for RVO and high myopia diagnoses, model 2 obtained the best results for glaucoma and DR diagnoses, and model 3 obtained the best results for normal fundus diagnosis. A comparison of the ROC curves of the 3 models when diagnosing normal fundus and the 4 common fundus diseases is shown in Figure 2, and the heat maps and LIME maps are shown in Figure 3.
Figure 2.
 
ROC curves of the three models for normal fundus and four common fundus diseases.
Figure 3.
 
Heat maps and LIME maps of the three models.
The three models classify only normal fundus, venous occlusion, high myopia, glaucoma, and DR from color fundus images, so they cannot identify other diseases. A further 20 fundus images, all diagnosed as macular degeneration (MD) by the expert diagnosis group, were used to test the 3 models. The 3 models classified all 20 fundus images as one of the 4 common fundus diseases; none was diagnosed as normal fundus. The diagnostic results of the 3 models are listed in Table 6.
Table 6.
 
Diagnosis Results of MD by the Three Models
Discussion
In 2012, the AlexNet model achieved the best classification results in the ILSVRC competition. Since then, deep learning has developed rapidly, and many CNN-based classification models have been proposed. The AlexNet network structure has 7 layers, whereas the Visual Geometry Group (VGG) model has up to 19 layers. The ResNet model selected in this study is available in variants with 18, 34, 50, 101, and 152 layers, among others. These networks are deep and can extract fundus image features well. However, the deeper variants (such as ResNet-101 and ResNet-152) have more parameters, require more computing resources, and take longer to train. Therefore, ResNet-50 was selected for this study; it extracts features well and has a moderate number of parameters. 
Table 5 shows that the accuracy scores of the models' diagnostic results lie largely between 90% and 92%, which is not particularly high. The main reason is that the training samples were limited: only 400 fundus images of each type were used to fine-tune the models. To achieve higher accuracy, sensitivity, and specificity, data augmentation methods and generative models could be used to create additional samples and expand the training set. The comparison of the diagnostic results of models 1 and 2 showed that the difference in accuracy between the 2 models is small, indicating that removing the black borders from the input image has little effect on the model training results. However, there are certain differences in the sensitivity and specificity of the diagnostic results for the different common diseases. 
In Table 5, the sensitivities for high myopia and DR are low. However, models 2 and 3 do not diagnose diseased images as normal; they only misdiagnose one disease as another. Primary hospitals usually provide only preliminary screenings, so when a patient's diagnosis is abnormal, the patient is recommended to go to a higher-level hospital for further diagnosis and confirmation. Consequently, even if the predicted disease category is wrong, the correct diagnosis will be obtained after the referral, and missed diagnoses are reduced. 
The 3 models in this study diagnosed 6.8% to 8.8% of the high myopia fundus images as glaucoma, which is a high misdiagnosis rate. The lesion area of high myopia in fundus images occurs mainly near the optic disk and the posterior pole, whereas glaucoma is mainly diagnosed from the cup-to-disk ratio of the optic disk area and the thickness of the RNFL. Because the two diseases share common lesion areas (the optic disk and posterior pole), even clinical experts have difficulty making an accurate diagnosis from fundus images alone without other clinical results. 
Similar to the misdiagnosis of high myopia as glaucoma, the 3 models diagnosed 9.2% to 11.3% of the DR fundus images as RVO. Both DR and RVO are retinal vascular diseases. Mild central retinal vein occlusion (CRVO) and DR can be difficult to diagnose based solely on single-modal fundus image data. In such cases, clinical experts generally need to ask the patient about their systemic disease history, and some cases that are difficult to identify need to undergo fundus angiography or optical coherence tomography angiography (OCTA) to confirm the diagnosis or determine whether the condition is a combination of DR and CRVO. 
The intelligent auxiliary diagnosis models presented here output both the image category and the area of interest for new fundus images. The area of interest is visualized using the Grad-CAM (class-activation heat map) and LIME methods. Figure 3 shows that the heat map results are largely accurate: the marked key areas generally match the areas that ophthalmologists attend to during diagnosis. In contrast, the areas marked by the LIME algorithm are less accurate and differ notably from the areas of interest to ophthalmologists during diagnosis; they do not meet the clinical diagnosis and treatment requirements of ophthalmologists. 
The images used for training are of high quality, and most can be clearly diagnosed as normal or as one common fundus disease. However, many borderline cases exist in the real world, so the models in this paper can output the top three diagnoses with their probabilities; borderline cases may thus receive two or three candidate diagnoses. At the same time, the models can diagnose only the four common fundus diseases considered here; if used clinically, other diseases would be classified as one of the four, although none of the other-disease images was diagnosed as normal in Table 6. This is a limitation of the models. Currently, no algorithm can diagnose all fundus diseases. In the future, the model will be improved to diagnose more fundus diseases and adapt to more situations (e.g., borderline cases and low-quality images). 
Acknowledgments
Supported by the National Natural Science Foundation of China (61906066), the Natural Science Foundation of Zhejiang Province (LQ18F020002) and the Zhejiang Medical and Health Research Project (2018PY066). 
Disclosure: B. Zheng, None; Q. Jiang, None; B. Lu, None; K. He, None; M.-N. Wu, None; X.-L. Hao, None; H.-X. Zhou, None; S.-J. Zhu, None; W.-H. Yang, None 
References
Nagasawa T, Tabuchi H, Masumoto H, et al. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy. Int Ophthalmol. 2019; 39: 2153–2159. [CrossRef] [PubMed]
Hagiwara Y, Koh JEW, Tan JH, et al. Computer-aided diagnosis of glaucoma using fundus images: a review. Comput Methods Programs Biomed. 2018; 165: 1–12. [CrossRef] [PubMed]
Yang WH, Zheng B, Wu MN, et al. An evaluation system of fundus photograph-based intelligent diagnostic technology for diabetic retinopathy and applicability for research. Diabetes Ther. 2019; 10: 1811–1822. [CrossRef] [PubMed]
Propaganda Department. Transcript of the regular press conference of the National Health Commission on June 5, 2020 [EB/OL]. Available at: http://www.nhc.gov.cn/xcs/s3574/202006/1f519d91873948d88a77a35a427c3944.shtml. Published June 5, 2020.
Southern Weekend. Fundus disease is the leading cause of blindness; Li Suyan, deputy to the National People's Congress: promoting standardized treatment of eye diseases [EB/OL]. Available at: https://mp.weixin.qq.com/s/BpB1jJi0eWRfoOghvyZDMQ. Published March 9, 2021.
Salman Haleem M, Han L, van Hemert J, Li B. Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review. Comput Med Imaging Graph. 2013; 37: 581–596. [CrossRef] [PubMed]
Raja C, Gangatharan N. A hybrid swarm algorithm for optimizing glaucoma diagnosis. Comput Biol Med. 2015; 63: 196–207. [CrossRef] [PubMed]
Singh A, Dutta MK, ParthaSarathi M, Uher V, Burget R. Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image. Comput Methods Programs Biomed. 2016; 124: 108–120. [CrossRef] [PubMed]
Haleem MS, Han L, Hemert JV, et al. Regional image features model for automatic classification between normal and glaucoma in fundus and scanning laser ophthalmoscopy (SLO) images. J Med Syst. 2016; 40(6): 132. [CrossRef] [PubMed]
Hassan T, Akram MU, Hassan B, Syed AM, Bazaz SA. Automated segmentation of subretinal layers for the detection of macular edema. Appl Opt. 2016; 55(3): 454–461. [CrossRef] [PubMed]
Akram MU, Tariq A, Khan SA, Javed MY. Automated detection of exudates and macula for grading of diabetic macular edema. Comput Methods Programs Biomed. 2014; 114(2): 141–152. [CrossRef] [PubMed]
Niemeijer M, van Ginneken B, Russell SR, Suttorp-Schulten MS, Abràmoff MD. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis. Invest Opthalmol Vis Sci. 2007; 48(5): 2260–2267. [CrossRef]
Wang S, Tang HL, Al Turk LI, et al. Localizing microaneurysms in fundus images through singular spectrum analysis. IEEE Trans Biomed Eng. 2017; 64(5): 990–1002. [CrossRef] [PubMed]
Wu J, Xin J, Hong L, You J. New hierarchical approach for microaneurysms detection with matched filter and machine learning. Annu Int Conf IEEE Eng Med Biol Soc. 2015; 2015: 4322–4325.
Yu S, Xiao D, Kanagasingam Y. Automatic detection of neovascularization on optic disk region with feature extraction and support vector machine. Annu Int Conf IEEE Eng Med Biol Soc. 2016; 2016: 1324–1327.
Nagasato D, Tabuchi H, Ohsugi H, et al. Deep neural network-based method for detecting central retinal vein occlusion using ultrawide-field fundus ophthalmoscopy. J Ophthalmol. 2018; 2018: 1875431. [CrossRef] [PubMed]
Nagasato D, Tabuchi H, Ohsugi H, et al. Deep-learning classifier with ultrawide-field fundus ophthalmoscopy for detecting branch retinal vein occlusion. Int J Ophthalmol. 2019; 12(1): 94–99. [PubMed]
Christopher M, Belghith A, Bowd C, et al. Performance of deep learning architectures and transfer learning for detecting glaucomatous optic neuropathy in fundus photographs. Sci Rep. 2018; 8(1): 16685. [CrossRef] [PubMed]
Liu H, Li L, Wormstone IM, et al. Development and validation of a deep learning system to detect glaucomatous optic neuropathy using fundus photographs. JAMA Ophthalmol. 2019; 137(12): 1353–1360. [CrossRef] [PubMed]
Medeiros FA, Jammal AA, Mariottoni EB. Detection of progressive glaucomatous optic nerve damage on fundus photographs with deep learning. Ophthalmology. 2020; 128(3): 383–392. [CrossRef] [PubMed]
Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016; 316(22): 2402–2410. [CrossRef] [PubMed]
Raman R, Srinivasan S, Virmani S, et al. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye (Lond). 2019; 33(1): 97–109. [CrossRef] [PubMed]
Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172(5): 1122–1131. [CrossRef] [PubMed]
Raju M, Pagidimarri V, Barreto R, et al. Development of a deep learning algorithm for automatic diagnosis of diabetic retinopathy. Stud Health Technol Inform. 2017; 245(1): 559–563. [PubMed]
Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, et al. Artificial intelligence in retina. Prog Retin Eye Res. 2018; 67: 1–29. [CrossRef] [PubMed]
Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018; 2(3): 158–164. [CrossRef] [PubMed]
Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018; 172(5): 1122–1131. [CrossRef] [PubMed]
Long E, Lin H, Liu Z, et al. An artificial intelligence platform for the multihospital collaborative management of congenital cataracts. Nat Biomed Eng. 2017; 1(2): 1–8. [CrossRef]
Rohm M, Tresp V, Müller M, et al. Predicting visual acuity by using machine learning in patients treated for neovascular age-related macular degeneration. Ophthalmology. 2018; 125(7): 1028–1036. [CrossRef] [PubMed]
Chinese Medical Association. Clinical Diagnosis and Treatment Guidelines: Ophthalmology Section. Beijing: People's Medical Publishing House; 2007.
American Academy of Ophthalmology. Diabetic Retinopathy PPP 2019. Available at: https://www.aao.org/preferred-practice-pattern/diabetic-retinopathy-ppp. 2019.
He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.org 2016. Available at: https://ieeexplore.ieee.org/document/7780459.
Deng J, Dong W, Socher R, et al. ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision & Pattern Recognition. Available at: https://ieeexplore.ieee.org/document/5206848. 2009.
Selvaraju RR, Cogswell M, Das A, et al. Grad-cam: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE international conference on computer vision. Available at: https://openaccess.thecvf.com/content_ICCV_2017/papers/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf. 2017: 618–626.
Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. Available at: https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf. 2016: 1135–1144.