December 2023 | Volume 12, Issue 12 | Open Access
Artificial Intelligence
Bridging the Camera Domain Gap With Image-to-Image Translation Improves Glaucoma Diagnosis
Author Affiliations & Notes
  • Shuang He
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
    Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Sanil Joseph
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
    Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
    Lions Aravind Institute of Community Ophthalmology, Aravind Eye Care System, Madurai, India
  • Gabriella Bulloch
    Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
  • Feng Jiang
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
    Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Hariharasubramanian Kasturibai
    Aravind Eye Hospital and Post Graduate Institute, Madurai, India
  • Ramasamy Kim
    Aravind Eye Hospital and Post Graduate Institute, Madurai, India
  • Thulasiraj D. Ravilla
    Lions Aravind Institute of Community Ophthalmology, Aravind Eye Care System, Madurai, India
  • Yueye Wang
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
    Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  • Danli Shi
    School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  • Mingguang He
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
    Aravind Eye Hospital and Post Graduate Institute, Madurai, India
  • Correspondence: Mingguang He, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China. e-mail: mingguang.he@polyu.edu.hk 
  • Danli Shi, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China. e-mail: danli.shi@polyu.edu.hk 
  • Footnotes
     SH, SJ, and GB contributed equally to this work.
Translational Vision Science & Technology December 2023, Vol.12, 20. doi:https://doi.org/10.1167/tvst.12.12.20
Abstract

Purpose: To improve the automated diagnosis of glaucomatous optic neuropathy (GON), we propose a generative adversarial network (GAN) model that translates Optain images to Topcon images.

Methods: We trained the GAN model on 725 paired images from Topcon and Optain cameras and externally validated it using an additional 843 paired images collected from the Aravind Eye Hospital in India. An optic disc segmentation model was used to assess the disparities in disc parameters across cameras. The performance of the translated images was evaluated using root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), 95% limits of agreement (LOA), Pearson's correlations, and Cohen's Kappa coefficient. The evaluation compared the performance of the GON model on Topcon photographs as a reference to that of Optain photographs and GAN-translated photographs.

Results: The GAN model significantly reduced false positive GON diagnoses on Optain images. The RMSE, PSNR, and SSIM of the GAN-translated images were 0.067, 14.31, and 0.64, respectively. The mean differences in vertical cup-to-disc ratio (VCDR) and cup-to-disc area ratio between Topcon and GAN images were both 0.03, with 95% LOA ranging from −0.09 to 0.15 and from −0.05 to 0.10, respectively. Pearson correlation coefficients increased from 0.61 to 0.85 for VCDR and from 0.70 to 0.89 for cup-to-disc area ratio, and Cohen's Kappa improved from 0.32 to 0.60 after GAN translation.

Conclusions: Image-to-image translation across cameras can be achieved with a GAN, resolving the problem of disc overexposure in Optain cameras.

Translational Relevance: Our approach enhances the generalizability of deep learning diagnostic models, ensuring their performance on cameras not represented in the original training data set.

Introduction
Glaucoma is a leading cause of blindness that affects over 70 million people worldwide, and its prevalence continues to grow.1–3 The disease typically originates from a state of high intraocular pressure, which subsequently precipitates irreversible loss of retinal ganglion cells, optic nerve degeneration, and visual field defects. Most people with glaucoma have no obvious symptoms in the early stages,4 but without timely diagnosis and treatment, irreversible vision loss is inevitable, making early screening of great importance.5 
Deep learning is an evolving area in ophthalmology, where algorithms trained on fundus images have revolutionized the prospects of automated diagnosis for eye diseases.6–8 Studies have reported the performance of deep learning models built on images from a variety of camera models, and it has been observed that cross-camera performance discrepancies could limit the application of these models,9–11 owing to the nonidentical specifications of different camera brands.12–15 Most diagnostic algorithms have been trained on images from the camera brands most commonly used in clinical practice, such as Topcon, Canon, Zeiss, and Orburg, all of which are large desktop models. Recently, portable fundus cameras, such as Optain, have come to the forefront, offering alternative approaches for remote ophthalmic screening due to their light weight, small size, and affordable pricing.16–21 However, the Optain camera generates overexposed images within the optic disc region, and GON diagnostic models have not been trained specifically on its images.22 Because the visibility of the optic disc is critical in diagnosing GON, this limits the diagnostic accuracy of deep learning models, posing a threat to the performance and uptake of Optain in real-world settings. 
Generative adversarial networks (GANs) have been proposed as a means to address this challenge by improving cross-camera consistency. GANs consist of two competing deep neural networks, a generator and a discriminator, which together enable image-to-image translation between visual domains.23,24 The generator produces fake data based on learned features, and the discriminator distinguishes the fake data from real examples. GAN architectures have previously demonstrated good performance in image-to-image translation tasks25,26 and could resolve cross-camera performance discrepancies for automated image analysis algorithms. 
This paper proposes a cross-camera domain adaptation method to transform images captured by portable cameras into images that closely resemble those acquired by standard cameras used in clinical practice. 
Methods
Image Data
Patients aged 18 years or above who attended the outpatient ophthalmic clinic at Guangdong Provincial People's Hospital were invited to participate in this study. For every patient, two sets of fundus photographs were captured by trained staff; one set using a Topcon TRC-NW8 camera (field of view of 45 degrees) and the other using an Optain OPTFC01 camera (field of view of 50 degrees). We excluded images when they were deemed of poor quality, were considered ungradable by manual grading, or a matching Optain/Topcon pair was missing. This study protocol was approved by the Guangdong Provincial People's Hospital Institutional Review Board (KY-Q-2021-032-01) and adhered to the tenets of the Declaration of Helsinki. Informed consent was obtained from all participants prior to recruiting them into the study. 
External Data Set
We used a similar approach to gather paired fundus images for external validation at Aravind Eye Hospital in Madurai, India. This part of the study was approved by the Aravind Eye Hospital Institutional Review Board and adhered to the tenets of the Declaration of Helsinki. A total of 843 pairs of Topcon and Optain fundus images were collected and included images of participants with diabetic retinopathy (DR), glaucoma, and those with no retinal pathology. 
Image-to-Image Translation
First, we extracted vessels from the paired data obtained with the Topcon and Optain cameras to generate vessel maps using the Retina-based Microvascular Health Assessment System (RMHAS),27 a U-Net-based retinal artery/vein/optic disc segmentation and measurement system. We registered the fundus images by applying the AKAZE key point detector to the corresponding vessel maps, followed by nearest-neighbor distance ratio feature matching and random sample consensus (RANSAC)28 to estimate homography matrices and reject outliers (examples of image registration are shown in Supplementary Fig. S1). A validity restriction was applied before the warping transformation: the estimated scale was restricted to a range of 0.8 to 1.3, and the absolute rotation was limited to less than 4 radians. Image pairs with poor registration performance, specifically those with a Dice coefficient below 0.5, were excluded; this threshold was set empirically on our data set. 
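For concreteness, the registration step can be sketched roughly as follows with OpenCV. The function names, matcher configuration, and RANSAC reprojection threshold are illustrative assumptions rather than the authors' exact implementation; the scale and rotation bounds mirror the text.

```python
# A minimal sketch of vessel-map registration with AKAZE + NNDR + RANSAC,
# assuming 8-bit vessel maps; details beyond the text are assumptions.
import cv2
import numpy as np

def estimate_homography(src_vessels, dst_vessels, ratio=0.8):
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(src_vessels, None)
    kp2, des2 = akaze.detectAndCompute(dst_vessels, None)

    # Nearest-neighbor distance ratio test; AKAZE descriptors are binary,
    # so Hamming distance is the appropriate norm.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:
        return None  # a homography needs at least 4 correspondences

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Validity restriction from the text: scale in [0.8, 1.3] and absolute
    # rotation below the stated radian bound, before warping.
    scale = np.sqrt(abs(np.linalg.det(H[:2, :2])))   # approximate scale
    rot = abs(np.arctan2(H[1, 0], H[0, 0]))          # approximate rotation
    if not (0.8 <= scale <= 1.3) or rot >= 4.0:
        return None
    return H

def dice(a, b):
    # Dice coefficient used to reject poorly registered pairs (< 0.5).
    a, b = a > 0, b > 0
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)
```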
In this study, we utilized pix2pixHD,23 an advanced conditional GAN incorporating a multiscale discriminator and residual blocks, to perform high-resolution image-to-image translation in cross-camera scenarios. The algorithm trains a generator network and a discriminator network simultaneously. The generator takes an input image and tries to generate a photorealistic output image resembling the target image. The discriminator, in turn, tries to differentiate between the real target image and the generated output, providing feedback to the generator on how to improve its output. The paired photographs recruited from Guangdong Provincial People's Hospital were randomly divided into training, validation, and test sets at an 8:1:1 ratio at the patient level. Optain fundus images were used as input, whereas Topcon fundus images were used as the target labels for GAN training. Over successive training iterations, the generator learns to produce increasingly realistic synthetic data, whereas the discriminator learns to better distinguish between real and synthetic data. This iterative process continued until the adversarial game approached equilibrium, yielding a final GAN model whose translated images most closely resembled real Topcon color fundus images. Examples of paired fundus images taken by Topcon and Optain cameras and the GAN-translated results are shown in Supplementary Figure S2. 
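To make the adversarial game concrete, the sketch below shows a single LSGAN-style conditional training step in PyTorch. Here, netG and netD stand in for the pix2pixHD generator and multiscale discriminator, and the feature-matching and perceptual losses of the full pix2pixHD objective are omitted; this is an illustration of the training principle, not the authors' training code.

```python
# Illustrative conditional-GAN step: the discriminator scores
# (input, target) pairs, and the generator tries to make
# (input, fake) pairs indistinguishable from them.
import torch
import torch.nn.functional as F

def gan_step(netG, netD, optG, optD, optain_img, topcon_img):
    # --- Discriminator update: real Topcon pairs vs. detached fakes ---
    fake = netG(optain_img)
    d_real = netD(torch.cat([optain_img, topcon_img], dim=1))
    d_fake = netD(torch.cat([optain_img, fake.detach()], dim=1))
    loss_d = 0.5 * (F.mse_loss(d_real, torch.ones_like(d_real))
                    + F.mse_loss(d_fake, torch.zeros_like(d_fake)))
    optD.zero_grad(); loss_d.backward(); optD.step()

    # --- Generator update: fool the discriminator on the same pair ---
    d_fake = netD(torch.cat([optain_img, fake], dim=1))
    loss_g = F.mse_loss(d_fake, torch.ones_like(d_fake))
    optG.zero_grad(); loss_g.backward(); optG.step()
    return loss_d.item(), loss_g.item()
```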
GON Model
In this study, we utilized a previously developed deep learning model for GON screening. The algorithm was extensively described in a previous study,29 wherein it was trained on over 200,000 fundus photographs obtained from various ophthalmic clinics and institutions in China, captured with different fundus camera models, such as Topcon, Canon, Heidelberg, and Digital Retinography System. The data were stored on a web-based cloud resource platform (www.labelme.org). The deep learning model was developed using the Inception version 3 architecture and included disease classification, image quality assessment, and macular region detection. The GON model classified each image as "low risk," "medium risk," or "high risk." A false positive diagnosis was defined as a case in which the model predicted low risk on the Topcon image but medium to high risk on the paired Optain image. 
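As a minimal sketch of this setup, the snippet below wires an Inception v3 backbone to a three-class risk head and encodes the false positive rule stated above. The model weights, preprocessing, and class ordering are assumptions; the authors' actual GON model is described in reference 29 and is not reproduced here.

```python
# Hypothetical three-class GON risk inference with an Inception v3 backbone,
# plus the false positive rule described above. Weights, preprocessing, and
# class order are assumed, not the authors' released artifacts.
import torch
from torchvision import models, transforms

RISK = ["low", "medium", "high"]

def make_model(num_classes=3):
    net = models.inception_v3(weights=None, aux_logits=True)
    net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
    return net.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),  # Inception v3 input size
    transforms.ToTensor(),
])

@torch.no_grad()
def predict(net, pil_image):
    logits = net(preprocess(pil_image).unsqueeze(0))
    return RISK[int(logits.argmax(dim=1))]

def is_false_positive(topcon_pred, optain_pred):
    # False positive: low risk on Topcon, medium/high risk on Optain.
    return topcon_pred == "low" and optain_pred in ("medium", "high")
```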
Disc-Cup Measurement
A previously validated deep learning model for disc-cup segmentation was used to generate optic disc parameters; its development is described elsewhere (doi: https://doi.org/10.1101/2023.11.06.23298106). In brief, we used pix2pixHD to achieve disc and cup segmentation.23 The images were first cropped so that only the field of view was included before being input into the model. The batch size was 4 and the learning rate was 0.0002. Training ran for a total of 100 epochs, and the model with the highest Dice index on the validation set was selected. 
The vertical cup-to-disc ratio (VCDR) was computed by dividing the vertical diameter of the cup by the vertical diameter of the disc. Both disc area and cup area were measured in pixels. The cup-to-disc area ratio was defined as the ratio of the cup area to the disc area. 
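As a minimal sketch, both parameters can be computed from binary disc and cup segmentation masks as follows (mask names and the row-as-vertical-axis convention are assumptions):

```python
# VCDR and cup-to-disc area ratio from 2-D boolean segmentation masks.
import numpy as np

def disc_parameters(disc_mask, cup_mask):
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    # Vertical diameters: row extent of each mask along the vertical axis.
    vcdr = (cup_rows.max() - cup_rows.min() + 1) / \
           (disc_rows.max() - disc_rows.min() + 1)
    # Areas in pixels, as defined in the text.
    area_ratio = cup_mask.sum() / disc_mask.sum()
    return vcdr, area_ratio
```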
Evaluation Metrics
To provide objective evidence for the accuracy of the translated images, we conducted a quantitative evaluation based on two widely used metrics: the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR).30 SSIM measures the similarity in structural information between two images, with a value of 1 indicating perfect similarity and 0 indicating no similarity. PSNR, on the other hand, measures the quality of an image in terms of the difference between corresponding pixels, with a higher value indicating less distortion. Additionally, we used the root mean square error (RMSE) index to assess the agreement of optic disc parameters between Topcon and translated images. RMSE measures the degree of variation per pixel due to image processing, with a value closer to 0 indicating greater similarity between the images. 
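A sketch of these three metrics using scikit-image (version 0.19 or later for the channel_axis argument) and NumPy, assuming RGB images scaled to [0, 1]:

```python
# SSIM, PSNR, and per-pixel RMSE between a Topcon image and its GAN
# counterpart; data_range assumes intensities normalized to [0, 1].
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_metrics(topcon, translated):
    ssim = structural_similarity(topcon, translated,
                                 channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(topcon, translated, data_range=1.0)
    rmse = float(np.sqrt(np.mean((topcon - translated) ** 2)))
    return ssim, psnr, rmse
```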
We calculated the mean difference in VCDR and cup-to-disc area ratio for the Topcon and Optain cameras, as well as for the Topcon and GAN-transformed images. To assess the agreement of optic disc parameters between these camera types, we calculated the 95% limits of agreement (LOA), defined as the mean difference ± 1.96 times the standard deviation (SD). Furthermore, we used the intraclass correlation coefficient (ICC) to compare the agreement of optic disc parameters between Topcon and Optain, and between Topcon and GAN-translated images. ICC values range from 0 to 1, with 0 indicating no reliability and 1 indicating perfect reliability. Additionally, we examined the strength of association between optic disc parameters using Pearson's correlation tests, denoting the coefficients Ra (Topcon vs. Optain) and Rb (Topcon vs. GAN-translated images). 
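These agreement statistics reduce to a few lines. The sketch below (array names assumed) computes the mean difference, 95% LOA, and Pearson's r for one disc parameter measured on two cameras:

```python
# Bland-Altman style agreement between paired measurements.
import numpy as np
from scipy.stats import pearsonr

def agreement(reference, comparison):
    reference, comparison = np.asarray(reference), np.asarray(comparison)
    diff = comparison - reference
    mean_diff = diff.mean()
    # 95% limits of agreement: mean difference +/- 1.96 SD.
    loa = (mean_diff - 1.96 * diff.std(), mean_diff + 1.96 * diff.std())
    r, p = pearsonr(reference, comparison)
    return mean_diff, loa, r, p
```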
Metrics including exact agreement (%) and Cohen's Kappa coefficient (and its associated P value) were used for the consistency analyses. The evaluation was conducted by comparing the model's performance on Topcon photographs as a reference to that of Optain photographs and GAN-translated photographs. Cohen's Kappa coefficient ranges from −1 to 1 and is commonly interpreted as follows: 0.40 to 0.60 as moderate, 0.60 to 0.80 as substantial, and 0.80 to 1.00 as almost perfect agreement. Statistical analyses were conducted using Stata version 15.0 software (StataCorp, College Station, TX, USA) and Python version 3.6. 
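As a sketch of the consistency analysis (the study used Stata; scikit-learn is an assumed stand-in here), exact agreement and linearly weighted kappa over the three ordinal risk grades can be computed as:

```python
# Exact agreement (%) and linearly weighted Cohen's kappa between the GON
# grades assigned to Topcon images and to Optain / GAN-translated images.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def consistency(topcon_grades, other_grades):
    topcon = np.asarray(topcon_grades)
    other = np.asarray(other_grades)
    exact = 100.0 * np.mean(topcon == other)
    kappa = cohen_kappa_score(topcon, other, weights="linear")
    return exact, kappa
```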
Results
We recruited a total of 485 patients; 189 Topcon-Optain image pairs were excluded from model development because of registration failure caused by low image quality. The workflow of the inclusion/exclusion process is illustrated in Figure 1. Examples of fundus photographs taken by Topcon and Optain and the translated output of the GAN model are shown in Figure 2. For the same eye, fundus images taken with the Optain camera appeared overexposed in the optic disc region compared with Topcon images, and this was improved by translation via the GAN model. 
Figure 1. Workflow of the study.
Figure 2. Examples of paired fundus images taken by Topcon and Optain cameras and the GAN-translated result.
The confusion matrices depicting the performance of the GON model on the paired external data are presented in Figure 3. Specifically, Figure 3A illustrates the GON model's performance on Topcon versus Optain images, revealing a high number of false positive diagnoses (n = 99). To address the issue of disc overexposure, we used the GAN model to translate the false positive images and evaluated them again with the GON model. The resulting confusion matrix between the transformed images and Topcon is displayed in Figure 3B. The GAN-transformed images yielded considerably fewer false positive diagnoses (n = 8) than the original Optain images. The GAN-translated images achieved a PSNR of 14.31 and an SSIM of 0.64, indicating moderate similarity with the Topcon images. The RMSE of the translated images was 0.067, reflecting a small amount of per-pixel variation due to processing. 
Figure 3. Confusion matrix of GON model performance.
Tables 1 and 2 separately present a comparison of optic disc parameters between different cameras and the translated output generated by our model. The mean difference between Topcon and Optain cameras for both VCDR and cup-to-disc area ratio was −0.07, with 95% limits of agreement ranging from −0.23 to 0.08, and −0.21 to 0.06, respectively. On the other hand, the mean difference between Topcon and GAN transformed images for both VCDR and cup-to-disc area ratio was 0.03, with 95% limits of agreement ranging from −0.09 to 0.15 and −0.05 to 0.10, respectively. After the GAN translation, the Pearson correlation coefficients compared to Topcon increased from 0.61 to 0.85 in VCDR and from 0.70 to 0.89 in cup-to-disc area ratio. 
Table 1. Vertical Cup-to-Disc Ratio of Topcon Photos as Reference Compared With Optain Photos and Photos Translated by GAN
Table 2. Cup-to-Disc Area Ratio of Topcon Photos as Reference Compared With Optain Photos and Photos Translated by GAN
Table 3 lists the agreement and Cohen's linearly weighted kappa (κw) of GON model performances with Topcon photographs as reference. Notably, the consistency performance of the GON model improved significantly after GAN translation. The κw value increased from 0.32 to 0.60 when comparing Optain and GAN-translated photographs, indicating a substantial improvement in agreement. 
Table 3. Agreement of Deep Learning Performance by Using Topcon Photographs as Reference Compared With Optain Photographs and Photographs Translated by GAN
Discussion
This study successfully demonstrated that Optain fundus images can be translated into a Topcon-like format, overcoming a current limitation of portable fundus cameras for glaucoma diagnosis. Although the external validation set differed ethnically from the East Asian data used for training, our GAN model performed well on it. The GAN reduced false positive GON diagnoses on images taken by the portable Optain camera by adjusting the optic disc region of Optain photographs. This made them more similar to Topcon-derived fundus images, as evidenced by PSNR, SSIM, and RMSE, thereby allowing diagnostic algorithms to diagnose GON more accurately. 
The convenience of portable, handheld fundus cameras has improved prospects for remote ophthalmic examination in recent years.17,31–35 However, image properties differ between fundus cameras, which may affect the performance of deep learning models that automatically diagnose GON from these images. In this study, the Optain camera tended to overexpose the optic disc region, resulting in variable disc segmentation compared with the Topcon camera. Given that the optic nerve is the primary site of glaucomatous change, such camera-induced inconsistencies raise doubts about the reliability of modern portable fundus cameras; hence, we applied GANs to resolve the cross-camera discrepancies. Converting color fundus photographs obtained by portable fundus cameras into results comparable to those of standard clinical cameras is crucial to ensure the generalizability and reliability of portable cameras in real-world applications and their integration into existing care delivery models.36,37 
GANs have the potential to facilitate cross-camera image synthesis through feature extraction and data augmentation, translating images from one camera into the style of another. However, this approach has seldom been explored to bridge the cross-camera domain gap for GON diagnosis.38,39 Although GANs have become popular tools in retinal imaging for improving image quality and synthesizing imaging findings,40,41 their use for cross-camera image translation remains largely unexplored. Our study represents a first step toward using portable fundus cameras without retraining and revalidating a new algorithm for each camera model. The potential applications of GANs in this setting are therefore broad and could extend to other cross-camera diagnostic algorithms, including those for age-related macular degeneration and DR. Application of similar techniques to different diagnostic frameworks should be explored in future studies to ensure diagnostic consistency, thereby giving portable fundus cameras the greatest chance of success as a screening tool for posterior segment conditions in general practice as well as in rural outreach programs. 
This study harnessed a novel GAN-based design for image-to-image translation across camera models, which can enhance the acceptability of deep learning algorithms in ophthalmology and the use of portable fundus cameras in general and rural practice. Despite the novelty of the method and these findings, this study has several limitations. First, images from the Optain camera were not added to the training set of the GON model, so the performance of a fine-tuned model could not be compared with the approach presented here. Second, our study focused only on reducing false positive diagnoses in the GON model caused by disc overexposure in the Optain camera; the causes of false negative diagnoses have not been explored. Third, although the transformed images yielded good results when tested on the GON model, further research is needed to apply GANs to other camera types and to explore their efficacy in models for other ophthalmic conditions. It is worth noting that, despite not including task-specific transformations, the proposed method was still effective. Moving forward, there is potential for task-specific optimization, such as optic disc parameter-guided translation or other purpose-guided translation, to further enhance performance. 
Conclusions
This study validated a cross-camera domain adaptation method to convert Optain portable fundus images to Topcon-like images, resulting in better consistency between cameras and their evaluation parameters. For cameras with fixed illumination issues, this method provides a novel solution to convert images across cameras, and may help optimize the range of camera models applied to diagnostic deep learning models in ophthalmology. 
Acknowledgments
Disclosure: S. He, None; S. Joseph, None; G. Bulloch, None; F. Jiang, None; H. Kasturibai, None; R. Kim, None; T.D. Ravilla, None; Y. Wang, None; D. Shi, None; M. He, None 
References
1. Cook C, Foster P. Epidemiology of glaucoma: what's new? Can J Ophthalmol. 2012; 47(3): 223–226.
2. Jonas JB, Aung T, Bourne RR, Bron AM, Ritch R, Panda-Jonas S. Glaucoma. Lancet. 2017; 390(10108): 2183–2193.
3. Shaikh Y, Yu F, Coleman AL. Burden of undetected and untreated glaucoma in the United States. Am J Ophthalmol. 2014; 158(6): 1121–1129.e1121.
4. Soh Z, Yu M, Betzler BK, et al. The global extent of undetected glaucoma in adults: a systematic review and meta-analysis. Ophthalmology. 2021; 128(10): 1393–1404.
5. Stein JD, Khawaja AP, Weizer JS. Glaucoma in adults-screening, diagnosis, and management: a review. JAMA. 2021; 325(2): 164–174.
6. He M, Li Z, Liu C, Shi D, Tan Z. Deployment of artificial intelligence in real-world practice: opportunity and challenge. Asia Pac J Ophthalmol (Phila). 2020; 9(4): 299–307.
7. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019; 103(2): 167–175.
8. Balyen L, Peto T. Promising artificial intelligence-machine learning-deep learning algorithms in ophthalmology. Asia Pac J Ophthalmol (Phila). 2019; 8(3): 264–272.
9. Wu JH, Liu TYA, Hsu WT, Ho JH, Lee CC. Performance and limitation of machine learning algorithms for diabetic retinopathy screening: meta-analysis. J Med Internet Res. 2021; 23(7): e23863.
10. Li JO, Liu H, Ting DSJ, et al. Digital technology, tele-medicine and artificial intelligence in ophthalmology: a global perspective. Prog Retin Eye Res. 2021; 82: 100900.
11. Dong V, Sevgi DD, Kar SS, Srivastava SK, Ehlers JP, Madabhushi A. Evaluating the utility of deep learning for predicting therapeutic response in diabetic eye disease. Front Ophthalmol (Lausanne). 2022; 2: 852107.
12. Fantaguzzi F, Servillo A, Sacconi R, Tombolini B, Bandello F, Querques G. Comparison of peripheral extension, acquisition time, and image chromaticity of Optos, Clarus, and EIDON systems. Graefes Arch Clin Exp Ophthalmol. 2023; 261(5): 1289–1297.
13. Han YS, Pathipati M, Pan C, et al. Comparison of telemedicine screening of diabetic retinopathy by mydriatic smartphone-based vs nonmydriatic tabletop camera-based fundus imaging. J Vitreoretin Dis. 2021; 5(3): 199–207.
14. Midena E, Zennaro L, Lapo C, et al. Handheld fundus camera for diabetic retinopathy screening: a comparison study with table-top fundus camera in real-life setting. J Clin Med. 2022; 11(9): 2352.
15. Xiao B, Liao Q, Li Y, et al. Validation of handheld fundus camera with mydriasis for retinal imaging of diabetic retinopathy screening in China: a prospective comparison study. BMJ Open. 2020; 10(10): e040196.
16. Yao X, Son T, Ma J. Developing portable widefield fundus camera for teleophthalmology: technical challenges and potential solutions. Exp Biol Med (Maywood). 2022; 247(4): 289–299.
17. Grauslund J. Diabetic retinopathy screening in the emerging era of artificial intelligence. Diabetologia. 2022; 65(9): 1415–1423.
18. Palermo BJ, D'Amico SL, Kim BY, Brady CJ. Sensitivity and specificity of handheld fundus cameras for eye disease: a systematic review and pooled analysis. Surv Ophthalmol. 2022; 67(5): 1531–1539.
19. Lu L, Ausayakhun S, Ausayakuhn S, et al. Diagnostic accuracy of handheld fundus photography: a comparative study of three commercially available cameras. PLoS Digit Health. 2022; 1(11): e0000131.
20. Das S, Kuht HJ, De Silva I, et al. Feasibility and clinical utility of handheld fundus cameras for retinal imaging. Eye (Lond). 2023; 37(2): 274–279.
21. Kubin AM, Wirkkala J, Keskitalo A, Ohtonen P, Hautala N. Handheld fundus camera performance, image quality and outcomes of diabetic retinopathy grading in a pilot screening study. Acta Ophthalmol. 2021; 99(8): e1415–e1420.
22. He S, Bulloch G, Zhang L, et al. Cross-camera performance of deep learning algorithms to diagnose common ophthalmic diseases: a comparative study highlighting feasibility to portable fundus camera use. Curr Eye Res. 2023; 48(9): 857–863.
23. Wang TC, Liu MY, Zhu JY, Tao A, Kautz J, Catanzaro B. High-resolution image synthesis and semantic manipulation with conditional GANs. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2018: 8798–8807.
24. Chen JS, Coyner AS, Chan RVP, et al. Deepfakes in ophthalmology: applications and realism of synthetic retinal images from generative adversarial networks. Ophthalmol Sci. 2021; 1(4): 100079.
25. Shi D, He S, Yang J, Zheng Y, He M. One-shot retinal artery and vein segmentation via cross-modality pretraining. Ophthalmol Sci. 2023; 4(2): 100363.
26. Shi D, Zhang W, He S, et al. Translation of color fundus photography into fluorescein angiography using deep learning for enhanced diabetic retinopathy screening. Ophthalmol Sci. 2023; 3(4): 100401.
27. Shi D, Lin Z, Wang W, et al. A deep learning system for fully automated retinal vessel measurement in high throughput image analysis. Front Cardiovasc Med. 2022; 9: 823436.
28. Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM. 1981; 24(6): 381–395.
29. Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology. 2018; 125(8): 1199–1206.
30. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004; 13(4): 600–612.
31. Malerbi FK, Andrade RE. Real-world diabetic retinopathy screening with a handheld fundus camera in a high-burden setting. Acta Ophthalmol. 2022; 100(8): e1771.
32. Zapata MA, Martin R, Garcia-Arumi C, et al. Remote screening of retinal and optic disc diseases using handheld nonmydriatic cameras in programmed routine occupational health checkups onsite at work centers. Graefes Arch Clin Exp Ophthalmol. 2021; 259(3): 575–583.
33. Lin TC, Chiang YH, Hsu CL, Liao LS, Chen YY, Chen SJ. Image quality and diagnostic accuracy of a handheld nonmydriatic fundus camera: feasibility of a telemedical approach in screening retinal diseases. J Chin Med Assoc. 2020; 83(10): 962–966.
34. Ruan S, Liu Y, Hu WT, et al. A new handheld fundus camera combined with visual artificial intelligence facilitates diabetic retinopathy screening. Int J Ophthalmol. 2022; 15(4): 620–627.
35. Rajalakshmi R, Prathiba V, Arulmalar S, Usha M. Review of retinal cameras for global coverage of diabetic retinopathy screening. Eye (Lond). 2021; 35(1): 162–172.
36. Salamone F, Sibilio S, Masullo M. Assessment of the performance of a portable, low-cost and open-source device for luminance mapping through a DIY approach for massive application from a human-centred perspective. Sensors (Basel). 2022; 22(20): 7706.
37. Jiang P, Liu J, Luo Q, Pang B, Xiao D, Cao D. Development of automatic portable pathology scanner and its evaluation for clinical practice. J Digit Imaging. 2023; 36(3): 1110–1122.
38. Tran NT, Tran VH, Nguyen NB, Nguyen TK, Cheung NM. On data augmentation for GAN training. IEEE Trans Image Process. 2021; 30: 1882–1897.
39. You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vis (Lond). 2022; 9(1): 6.
40. Kazeminia S, Baur C, Kuijper A, et al. GANs for medical image analysis. Artif Intell Med. 2020; 109: 101938.
41. Wang Z, Lim G, Ng WY, et al. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol. 2021; 32(5): 459–467.