Open Access
Articles | April 2016
Suitability of a Low-Cost, Handheld, Nonmydriatic Retinograph for Diabetic Retinopathy Diagnosis
Author Affiliations & Notes
  • Gwenolé Quellec
    Inserm, UMR 1101, Brest, F-29200 France
  • Loïc Bazin
    Service d'Ophthalmologie, CHRU Brest, Brest, F-29200 France
  • Guy Cazuguel
    Inserm, UMR 1101, Brest, F-29200 France
    Institut Mines-Telecom, Telecom Bretagne, UEB, Dpt ITI, Brest, F-29200 France
  • Ivan Delafoy
    Service d'Ophthalmologie, CHRU Brest, Brest, F-29200 France
  • Béatrice Cochener
    Inserm, UMR 1101, Brest, F-29200 France
    Service d'Ophthalmologie, CHRU Brest, Brest, F-29200 France
    Univ Bretagne Occidentale, Brest, F-29200 France
  • Mathieu Lamard
    Inserm, UMR 1101, Brest, F-29200 France
    Univ Bretagne Occidentale, Brest, F-29200 France
  • Correspondence: Gwenole Quellec, Inserm, UMR 1101 LaTIM, Bâtiment 1 - 1er étage, CHRU Morvan - 2, Av. Foch, 29609 Brest CEDEX–France. e-mail: gwenole.quellec@inserm.fr 
Translational Vision Science & Technology April 2016, Vol. 5, 16. doi: https://doi.org/10.1167/tvst.5.2.16
Abstract

Purpose: We assessed the suitability of a low-cost, handheld, nonmydriatic retinograph, namely the Horus DEC 200, for diabetic retinopathy (DR) diagnosis. Two factors were considered: ease of image acquisition and image quality.

Methods: One operator acquired fundus photographs from 54 patients using the Horus and AFC-330, a more expensive, nonportable retinograph. Satisfaction surveys were filled out by patients. Then, two retinologists subjectively assessed image quality and graded DR severity in one eye of each patient. Objective image quality indices also were computed.

Results: During image acquisition, patients had difficulty locating the fixation target inside the Horus: as a fallback, 53.7% of them had to fixate external points with the contralateral eye, as opposed to none of them using the AFC-330 (P < 0.0001). This issue lengthened image acquisition. Images obtained with the Horus were of significantly lower quality according to the experts (P = 0.0002 and P = 0.0004) and to the objective criterion (P < 0.0001). As a result, up to 20.4% of eyes were inadequate for interpretation, as opposed to 9.3% using the AFC-330. However, no significant difference was found in terms of DR severity according to either expert (P = 0.557 and P = 0.156).

Conclusions: The Horus can be used to screen DR, but at the cost of longer examination times and higher proportions of patients referred to an ophthalmologist due to inadequate image quality.

Translational Relevance: The Horus is adequate to screen DR, for instance in primary care centers or in mobile imaging units.

Introduction
Diabetic retinopathy (DR) is a substantial worldwide public health burden: in 2010, it was estimated to affect 93 million people.1 In particular, it is the leading cause of blindness in the working-age population of the United States and European Union.2,3 Detecting DR in the at-risk population (diabetics), generally using eye fundus photography, is crucial for providing timely treatment of DR and, therefore, preventing visual loss.4 In the past decade, faced with the growth of the at-risk population,5 retinal screening programs for diabetic retinopathy have expanded rapidly.6–11 To extend these screening programs into rural areas, through highly distributed primary care facilities12,13 or through mobile imaging units,14–16 it would be beneficial to have access to low-cost, portable, easy-to-operate fundus cameras delivering high image quality. This is of particular importance in low- and middle-income countries.17,18 Additionally, mydriatic fundus photography, which requires pharmacologically dilating the pupil, limits the widespread use of fundus photography; nonmydriatic fundus cameras are therefore a better option. 
A few low-cost, handheld, nonmydriatic eye fundus cameras are now commercially available: the Smartscope Pro (Optomed, Oulu, Finland), commercialized in the United States as the Pictor (Volk Optical, Mentor, OH, USA), the VersaCam DS-10 (Nidek, Gamagori, Japan), the Horus DEC 200 (MiiS, Hsinchu, Taiwan), as well as the Genesis-D (Kowa, Nagoya, Japan), although the last is bulkier. These devices offer advanced features comparable to more expensive, nonportable fundus cameras: internal fixation targets, autofocus, and high-resolution images (e.g., 5 megapixels). Besides these commercially available solutions, various prototypes are under development,19,20 including by Epipole (Dunfermline, UK)21 and IDx (Iowa City, IA).22 
Smartphone-based solutions also have been proposed. Fundus images can be taken directly with a smartphone, using more or less compact adaptors: Peek Vision (Nesta, London, UK),23 PanOptic + iExaminer (Welch Allyn, Skaneateles Falls, NY), and D-Eye (D-EYE, Padova, Italy).24 However, these solutions currently have drawbacks: Peek Vision and D-Eye are mydriatic, and PanOptic and D-Eye have limited fields of view (25° and 20°, respectively). Note that the Fundus-ON-phone (Remidio, Bengaluru, India) also is smartphone-based, but it is not handheld. 
Two studies have compared low-cost, handheld, mydriatic devices with their more expensive, nonportable counterparts. The first study, involving six patients, compared a handheld prototype built by its authors with a TRC-50EX camera (TopCon Medical Systems, Tokyo, Japan).20 The two mydriatic modalities were comparable in terms of image quality. The second study compared the D-Eye adaptor, mounted on an iPhone 5 (Apple, Cupertino, CA), with retinal slit-lamp examination, also after pupil dilation.24 The two mydriatic modalities were comparable in terms of DR diagnostic performance (κ = 0.78). 
To the best of our knowledge, no such comparison has been published for nonmydriatic cameras. The purpose of this study was to compare the Horus DEC 200 (MiiS) to the AFC-330 (Nidek) in terms of ease of image acquisition, image quality, and DR diagnostic performance. 
Methods
Patient Recruitment
Consecutive patients consulting for DR detection at the Endocrinology and Ophthalmology Departments of Brest University Hospital between December 2014 and April 2015 were invited to participate in this study. All study participants provided written informed consent. The study adhered to the tenets of the Declaration of Helsinki. 
Compared Retinographs
Two nonmydriatic retinographs were compared in this study (see Fig. 1): an AFC-330 by Nidek and a handheld Horus DEC 200 by MiiS. The Nidek retinograph, denoted by R1, is used routinely in the ophthalmology department. The MiiS retinograph, denoted by R2, was acquired specifically for this study. The characteristics of these two retinographs are compared in Table 1. It should be noted that both retinographs have the same field of view (45°) and are equipped with an internal fixation target. 
Figure 1. Compared retinographs.

Table 1. Characteristics of the Retinographs.
Retinograph Operator
The retinographs were operated by a single retinologist from the Ophthalmology Department (LB). Before the study began, the operator practiced using the MiiS camera on 10 healthy subjects for 2 weeks. He did not have to practice with the Nidek camera, which he already used daily. 
Examination Protocol
All examinations were conducted in a low light environment. First, both eyes were photographed with one retinograph. Then, the patient rested for 10 minutes to let his or her eyes recover. Finally, both eyes were photographed with the second retinograph. The order in which retinographs were used, as well as the order in which the left and right eye were photographed (the same order for both retinographs), was determined randomly before the examination. 
In DR screening networks, acquiring one 45° photograph centered on the fovea and one 45° photograph centered on the optic disc is common practice.8,9 Therefore, for each retinograph, the operator tried to obtain one good-quality macula-centered photograph and one good-quality optic disc-centered photograph of both eyes, with the help of the internal fixation target of each retinograph. Whenever the patient was not able to locate the fixation target, a common problem with some retinographs, the operator asked him or her to fixate external points with the contralateral eye. To assess image quality, the operator relied on image previews on the retinograph's display screen. The examination of each eye, with each retinograph, was timed. If the operator could not obtain good-quality images of a given eye after 5 minutes, he interrupted the examination of that eye with the current retinograph. 
At the end of the examination, the patient was asked to fill out a satisfaction survey. He or she had to rate each retinograph subjectively according to three criteria: general comfort, in terms of body posture and perceived examination duration; eye comfort, in terms of luminous flash intensity and perceived fixation duration; and ease of finding the fixation point. Each criterion was assessed on a four-level scale (bad, fair, good, excellent). 
After the examination, photographs were stored on a computer as JPEG images, organized in one folder per retinograph and per patient. Folder paths were stored in a database, together with the patient's age and sex. 
Image Interpretation by Retinologists
Images were interpreted by two retinologists from Brest University Hospital, namely experts E1 and E2, using 15-inch MacBook Pro laptops (Apple). Experts did not have access to the patients' medical records: besides images themselves, they only had access to the patients' age and sex. 
Two reading sessions were organized. To minimize biases due to reading order, the patient dataset was divided into two subsets, denoted by D1 and D2, each containing data from one-half of the patients, and reading sessions were organized as follows. During the first session, expert E1 interpreted images acquired by retinograph R1 in D1 and images acquired by retinograph R2 in D2; expert E2 interpreted images acquired by retinograph R2 in D1 and images acquired by retinograph R1 in D2. During the second session, expert E1 interpreted images acquired by retinograph R2 in D1 and images acquired by retinograph R1 in D2; expert E2 interpreted images acquired by retinograph R1 in D1 and images acquired by retinograph R2 in D2. To minimize recall bias, the second reading session was organized 2 weeks after the first one. Obviously, while reading images acquired by one retinograph, experts were blinded to those acquired by the other one. 
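For concreteness, this counterbalanced schedule can be expressed in a few lines of code. The following sketch is our own illustration (the study published no code; the function name is hypothetical) and simply reproduces the expert/retinograph/subset assignment described above:

    from itertools import product

    def retinograph_for(expert: str, session: int, subset: str) -> str:
        """Which retinograph (R1 or R2) the given expert reads for the given
        patient subset (D1 or D2) in the given session (1 or 2)."""
        # Each of the three factors flips the assignment between R1 and R2.
        flip = (expert == "E2") ^ (session == 2) ^ (subset == "D2")
        return "R2" if flip else "R1"

    for session, expert, subset in product((1, 2), ("E1", "E2"), ("D1", "D2")):
        print(f"session {session}: {expert} reads "
              f"{retinograph_for(expert, session, subset)} on {subset}")

Across the two sessions, each expert thus reads every patient's images from both retinographs exactly once.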
For each patient, the order in which the left and right eyes were interpreted was determined randomly before the first reading session; the same reading order was used in both sessions for both experts and both retinographs. For each eye, experts were asked to rate image quality and grade DR, as described hereafter. 
Qualitative Image Quality – Clarity
To assess image quality for a given eye of a given patient, each expert was asked to select one image centered on the optic disc and one image centered on the fovea: for each view, each expert subjectively selected the most suitable image for pathology detection. 
The quality of the selected images was first assessed qualitatively, according to a criterion called “clarity.”25,26 Let C denote a circular region centered on the fovea whose radius equals the diameter of the optic disc. Images were graded on a four-level scale: 
  • Excellent (4): small blood vessels are clearly visible and very sharp inside C, and the nerve fiber layer is visible. 
  • Good (3): small blood vessels are clearly visible, but not sharp, inside C, or the nerve fiber layer is not visible. 
  • Fair (2): small blood vessels are not clearly visible inside C, but third-generation blood vessels can be identified inside C. 
  • Inadequate (1): third-generation blood vessels cannot be identified inside C. 
To improve intergrader agreement, E1 and E2 jointly selected four representative images (outside D1 and D2) before the first reading session: one image per quality level. These images were used as reference during reading sessions. 
Quantitative Image Quality – Size of the Interpretable Area
The above clarity score does not account for several problems that may affect only parts of fundus images, overexposure and underexposure in particular. Therefore, for each selected image, experts additionally determined the largest circle solely including interpretable parts of the image.27,28 That circle was determined using the ImageJ software (National Institutes of Health [NIH], Bethesda, MD), version 1.49. Its diameter (in pixels), divided by the diameter of the camera's field of view (also in pixels), was used as the quantitative quality score. 
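As a minimal illustration (our own sketch, not the study's software; the function name and example numbers are hypothetical), the score reduces to a ratio of two diameters:

    def interpretable_area_score(circle_diameter_px: float,
                                 fov_diameter_px: float) -> float:
        """Quantitative quality score: diameter of the largest circle containing
        only interpretable retina, relative to the diameter of the camera's
        field of view. Both measurements are in pixels (e.g., read in ImageJ)."""
        if fov_diameter_px <= 0:
            raise ValueError("field-of-view diameter must be positive")
        return circle_diameter_px / fov_diameter_px

    # Hypothetical example: a 1710-pixel interpretable circle within a
    # 1900-pixel field of view yields a score of 0.9.
    print(interpretable_area_score(1710, 1900))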
Diabetic Retinopathy Grading
Then, for each eye of each patient, experts graded DR according to the international clinical DR scale.29 One of the following scores was assigned to each eye: inadequate for interpretation, no DR, mild nonproliferative DR (NPDR), moderate NPDR, severe NPDR, or proliferative DR (PDR). 
Automatic Quality Assessment
Image quality also was assessed automatically, using in-house software, for comparison with the above subjective criteria. A quality index was defined by measuring the fractal dimension of the visible blood vasculature. For that purpose, blood vessels were segmented in the image using mathematical morphology30 and the fractal dimension was measured using the box-counting technique.31 Intuitively, fractal dimension increases with the number of visible vessel generations, which makes it a good quality index.32 
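The box-counting step can be sketched as follows: a minimal Python/NumPy illustration, assuming a nonempty binary vessel mask has already been produced by segmentation (e.g., with mathematical morphology30); the study's in-house implementation was not published.

    import numpy as np

    def box_counting_dimension(vessel_mask, box_sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the box-counting (fractal) dimension of a 2-D boolean
        vessel mask (True where a vessel pixel was segmented; assumed nonempty).
        N(s) is the number of s-by-s boxes containing at least one vessel
        pixel; the dimension is the slope of log N(s) versus log(1/s)."""
        counts = []
        for s in box_sizes:
            # Trim the mask so it divides evenly into s-by-s boxes.
            h = (vessel_mask.shape[0] // s) * s
            w = (vessel_mask.shape[1] // s) * s
            trimmed = vessel_mask[:h, :w]
            # Mark each box containing at least one vessel pixel, then count.
            boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
            counts.append(boxes.sum())
        # Linear fit log N(s) = -D log s + c; the dimension D is minus the slope.
        slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
        return -slope

A well-imaged vasculature, with many visible vessel generations, fills space more densely across scales and yields a higher dimension; a blurred image, in which only large vessels survive segmentation, yields a lower one.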
Main Outcome Measure
Our main objective was to compare retinographs R1 and R2 in terms of image quality and ability to diagnose DR. When comparing ordinal variables (qualitative quality scores, DR severity levels), paired Wilcoxon signed-rank tests were used to assess the significance of differences.33 When comparing continuous variables (examination duration, quantitative quality scores), paired t-tests were used. The interpretation is straightforward for examination duration and image quality: faster examinations and higher quality values indicate a better retinograph. Regarding DR severity, higher levels generally indicate that more lesions could be identified: when photographs of the same eye of the same patient are read by the same expert, a higher level suggests that the retinograph is more informative. 
For a deeper insight into the interobserver and interdevice agreements, Cohen's κ with linear weights was also used.34 All statistical analyses were performed using the MedCalc software (MedCalc Software bvba, Ostend, Belgium), version 15.8. 
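The study ran these analyses in MedCalc; for readers who prefer scriptable tools, equivalent tests are available in SciPy and scikit-learn. The sketch below uses made-up paired data (the arrays are hypothetical, not the study's measurements), and the binomial-test line is included only to illustrate the call, since the paper does not spell out the null hypothesis of its exact binomial test.

    import numpy as np
    from scipy.stats import wilcoxon, ttest_rel, spearmanr, binomtest
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical paired per-eye data for retinographs R1 and R2.
    clarity_r1 = np.array([4, 3, 3, 4, 2, 3, 4, 3])   # ordinal clarity grades
    clarity_r2 = np.array([3, 3, 2, 3, 2, 2, 3, 3])
    duration_r1 = np.array([54., 40., 62., 33., 71., 45., 50., 58.])   # seconds
    duration_r2 = np.array([110., 95., 130., 80., 140., 100., 120., 115.])
    fractal_dim = np.array([1.42, 1.38, 1.31, 1.45, 1.22, 1.35, 1.44, 1.37])

    # Ordinal variables (clarity, DR severity): paired Wilcoxon signed-rank test.
    print(wilcoxon(clarity_r1, clarity_r2))
    # Continuous variables (durations, quantitative scores): paired t-test.
    print(ttest_rel(duration_r1, duration_r2))
    # Agreement between graders or devices: Cohen's kappa with linear weights.
    print(cohen_kappa_score(clarity_r1, clarity_r2, weights="linear"))
    # Correlation between the automatic and a subjective index: Spearman's rho.
    print(spearmanr(fractal_dim, clarity_r1))
    # Exact binomial test (API illustration only), e.g., 29 of 54 vs. p = 0.5.
    print(binomtest(29, n=54, p=0.5))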
Results
We recruited 54 patients in this study: 25 females and 29 males. Patients were 60 years old on average (SD 13, minimum 29, maximum 88). Statistical analysis was performed using a single eye per patient: the eye that was interpreted first (see Image Interpretation by Retinologists). As a result, 29 right and 25 left eyes were analyzed. 
All patients were able to locate the fixation target using the Nidek retinograph (R1). Using the MiiS retinograph (R2), the operator had to ask 29 patients (53.7%) to fixate external points with the contralateral eye. According to an exact binomial test, the difference is highly significant (P < 0.0001). On average, image acquisition lasted 54.4 seconds per eye (SD 44.9, minimum 12, maximum 262) using R1 and 115.9 seconds per eye (SD 62.4, minimum 28, maximum 300) using R2. The difference also is highly significant (P < 0.0001). 
Results of the satisfaction survey are reported in Table 2. For all criteria, except general comfort, the Nidek retinograph (R1) was significantly better appreciated by patients. No significant difference was found between retinographs in terms of patient's general comfort. 
Table 2. Results of the Satisfaction Survey at the End of the Examination – Histograms of Subjective Scores and Paired Wilcoxon Signed-Rank Tests.
Typical successful photographs obtained using R1 and R2 are presented in Figure 2. 
Figure 2. Typical successful photographs obtained with R1 and R2. Patient 1 had mild NPDR. Patient 2 had moderate NPDR. Patient 3 had moderate NPDR, as well as dry age-related macular degeneration.
Interobserver analyses for qualitative quality assessment (clarity), performed for each retinograph, are reported in Table 3. Interdevice analyses, according to each expert, are reported in Table 4. The interobserver agreement was close to good using R1 (κ = 0.570) and moderate using R2 (κ = 0.518). The interdevice agreement was very low for expert E1 (κ = 0.190) and low for expert E2 (κ = 0.269). Both experts found that images acquired using the Nidek retinograph (R1) were of significantly higher clarity (P = 0.0002 for expert E1, P = 0.0004 for expert E2). 
Table 3. Interobserver Agreement for Qualitative Image Quality Evaluation (Clarity).

Table 4. Interdevice Agreement for Qualitative Image Quality Evaluation (Clarity).
Results of quantitative image quality assessment (size of the interpretable image area) by both experts are reported in Table 5. No significant difference was found between R1 and R2, except for macula-centered images according to one expert (E2). 
Table 5. Results of Quantitative Image Quality Evaluation (Relative Size of the Interpretable Area) – Paired t-Tests.
Regarding the objective image quality index (fractal dimension), Spearman correlations between that index and both subjective quality indices are reported in Table 6. It can be observed that fractal dimension is better correlated with clarity (0.636 ≤ ρ ≤ 0.745) than with the size of the interpretable area (0.489 ≤ ρ ≤ 0.636). According to that criterion, image quality is significantly higher using the Nidek retinograph (P < 0.0001). 
Table 6. Spearman Correlations Between the Automatic Quality Index (Fractal Dimension) and Subjective Quality Indices.
Regarding DR grading, expert E1 was not able to interpret 5 eyes (9.3%) using R1, the Nidek retinograph, and 8 eyes (14.8%) using R2, the MiiS retinograph. Expert E2 was not able to interpret 5 eyes (9.3%) using R1 and 11 eyes (20.4%) using R2. According to an exact binomial test, E2 could interpret significantly fewer images using R2 than using R1 (P = 0.0097); the difference was not significant in the case of E1 (P = 0.123). 
Interobserver analyses for DR grading, performed for each retinograph, are reported in Table 7. Only eyes successfully interpreted by both experts were considered in these analyses. Interdevice analyses for DR grading, according to each expert, are reported in Table 8. Similarly, only eyes successfully interpreted using both retinographs were considered in these analyses. Interobserver agreement was high for both devices; agreement was even better for R2, the MiiS retinograph (κ = 0.742), than for R1, the Nidek retinograph (κ = 0.688). Interdevice agreement was high for expert E1 (κ = 0.630) and very high for expert E2 (κ = 0.869). No significant difference was found between the diagnoses of E1 using R1 and his diagnoses using R2 (P = 0.557). The same applies to the diagnoses of E2 (P = 0.156). 
Table 7. Interobserver Agreement for DR Grading Using Images From Each Retinograph.

Table 8. Interdevice Agreement for DR Grading According to Each Expert.
Discussion
Two nonmydriatic retinographs were compared in this study, in terms of image acquisition and in terms of image interpretation: the low-cost, handheld Horus DEC 200 (MiiS) and the commonly used AFC-330 (Nidek). 
The first lesson of this study is that patients had difficulty locating the fixation target inside the MiiS retinograph: the operator had to ask half of them (53.7%) to fixate external points with the contralateral eye, as opposed to none of them using the Nidek retinograph. This alternative fixation solution sometimes resulted in centration problems, as illustrated in Figure 2d; in that case, additional photographs had to be taken to image all four macular quadrants. Operator-dependent problems, such as difficulty focusing on the retina, also explain why more photographs had to be taken using the MiiS retinograph. As a result of these patient- and operator-dependent problems, image acquisition took twice as long using the low-cost retinograph. This impacted the comfort of patients, whose eyes were exposed to more flashes. However, patients did not complain about their body posture or about examination duration. Note that a slit-lamp adaptor is now available for the MiiS camera, which should improve image acquisition, at the cost of reduced mobility. 
The second main lesson of this study is that image clarity, as assessed by two expert readers, was significantly better for the Nidek retinograph than for the MiiS retinograph. Clarity was generally considered "fair" to "good" for the latter, but rarely "excellent": clarity was rated excellent by expert E1 in 11.1% of patients for the MiiS retinograph, as opposed to 44.4% for the Nidek retinograph, and by expert E2 in 7.4% of patients for the MiiS retinograph, as opposed to 25.9% for the Nidek retinograph. As a consequence, interobserver agreement is much higher than interdevice agreement (see Tables 3 and 4). The size of the interpretable area, on the other hand, was not significantly different overall from one retinograph to the other. This brings us to the third main lesson. Because of lower image clarity, fewer eyes could be interpreted based on images acquired with the MiiS retinograph: expert E1 could interpret 85.2% of eyes and expert E2 could interpret 79.6% of eyes. In comparison, both experts could interpret 90.7% of eyes based on images acquired with the Nidek retinograph. The difference is significant for one of the experts. It should be noted, however, that in a DR screening network, patients whose images are inadequate for interpretation are referred to an ophthalmologist for a face-to-face examination.8 Therefore, if a referable DR case is not detected but marked as inadequate for interpretation, the patient's safety is not compromised, although screening efficiency is reduced. The last main lesson is that, among eyes that could be interpreted, there was no significant difference between retinographs in terms of DR diagnosis. This is illustrated by the fact that interobserver agreement is at the same level as interdevice agreement (see Tables 7 and 8). 
This study has one main limitation: all acquisitions were made by a single operator, who had been using the Nidek retinograph regularly for a few months, which partly explains why the Nidek images were more easily obtained and of better quality. In fact, the operator felt that, with some practice and with clear explanations given to the patient, regarding target fixation in particular, image acquisition using the MiiS retinograph was acceptable in most cases. Another limitation is that this operator is a junior ophthalmologist, who probably is not representative of all operators in screening networks. However, we have shown that image quality can be assessed automatically (through fractal dimension analysis), which may help nonophthalmologist operators reject bad-quality images. 
In conclusion, although the Nidek retinograph is the better option, in terms of both image acquisition and image interpretation, the MiiS retinograph is an acceptable solution. However, it comes at a cost: longer examination times and a higher proportion of patients referred to an ophthalmologist due to inadequate image quality. In the long term, this solution is not necessarily cost-effective, and it probably should not replace a nonportable device in existing screening programs. However, it may help reach patients who otherwise would not have been screened at all, through mobile imaging units or through highly distributed primary care facilities. Patients with positive screening results that lead to ophthalmology referrals may be motivated to consult an ophthalmologist simply by knowing that they have concerning eye findings.35 These benefits would significantly improve quality of care for the diabetic population. 
Acknowledgments
The authors thank DAMIE sas (Orvault, France) for lending them the Horus DEC 200 retinograph. 
Disclosure: G. Quellec, None; L. Bazin, None; G. Cazuguel, None; I. Delafoy, None; B. Cochener, None; M. Lamard, None 
References
Yau JW, Rogers SL, Kawasaki R, et al. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care. 2012; 35: 556–564.
Klonoff DC, Schwartz DM. An economic analysis of interventions for diabetes. Diabetes Care. 2000; 23: 390–404.
Sjolie AK, Stephenson J, Aldington S, et al. Retinopathy and vision loss in insulin-dependent diabetes in Europe. The EURODIAB IDDM Complications Study. Ophthalmology. 1997; 104: 252–260.
Kinyoun JL, Martin DC, Fujimoto WY, Leonetti DL. Ophthalmoscopy versus fundus photographs for detecting and grading diabetic retinopathy. Invest Ophthalmol Vis Sci. 1992; 33: 1888–1893.
Mokdad AH, Bowman RA, Ford ES, et al. The continuing epidemics of obesity and diabetes in the United States. JAMA. 2001; 286: 1195–1200.
Mohamed Q, Gillies MC, Wong TY. Management of diabetic retinopathy: a systematic review. JAMA. 2007; 298: 902–916.
Looker HC, Nyangoma SO, Cromie DT, et al. Rates of referable eye disease in the Scottish National Diabetic Retinopathy Screening Programme. Br J Ophthalmol. 2014; 98: 790–795.
Erginay A, Chabouis A, Viens-Bitker C, et al. OPHDIAT: quality-assurance programme plan and performance of the network. Diabetes Metab. 2008; 34: 235–242.
Abràmoff MD, Suttorp-Schulten MSA. Web-based screening for diabetic retinopathy in a primary care population: the EyeCheck project. Telemed J E-Health. 2005; 11: 668–674.
Cuadros J, Bresnick G. EyePACS: an adaptable telemedicine system for diabetic retinopathy screening. J Diabetes Sci Technol. 2009; 3: 509–516.
Ogunyemi O, Terrien E, Eccles A, et al. Teleretinal screening for diabetic retinopathy in six Los Angeles urban safety-net clinics: initial findings. AMIA Annu Symp Proc. 2011; 2011: 1027–1035.
Taylor CR, Merin LM, Salunga AM, et al. Improving diabetic retinopathy screening ratios using telemedicine-based digital retinal imaging technology: the Vine Hill study. Diabetes Care. 2007; 30: 574–578.
Askew D, Schluter PJ, Spurling G, et al. Diabetic retinopathy screening in general practice: a pilot study. Aust Fam Physician. 2009; 38: 650–656.
Boucher MC, Desroches G, Garcia-Salinas R, et al. Teleophthalmology screening for diabetic retinopathy through mobile imaging units within Canada. Can J Ophthalmol. 2008; 43: 658–668.
Hautala N, Aikkila R, Korpelainen J, et al. Marked reductions in visual impairment due to diabetic retinopathy achieved by efficient screening and timely treatment. Acta Ophthalmol. 2014; 92: 582–587.
Bismuth P, Bismuth M, Dupuoy J, et al. [‘Local' or ‘mobile' screening for diabetic retinopathy in general practice by nonmydriatic digital cameras? about the Diabetes Midi-Pyrénées network activity between 2005 and 2010]. Rev Prat. 2012; 62: 1359–1363.
Murthy KR, Murthy PR, Kapur A, Owens DR. Mobile diabetes eye care: experience in developing countries. Diabetes Res Clin Pract. 2012; 97: 343–349.
John S, Sengupta S, Reddy SJ, et al. The Sankara Nethralaya mobile teleophthalmology model for comprehensive eye care delivery in rural India. Telemed J E-Health. 2012; 18: 382–387.
Haddock LJ, Kim DY, Mukai S. Simple, inexpensive technique for high-quality smartphone fundus photography in human and animal eyes. J Ophthalmol. 2013; 2013: 518479.
Tran K, Mendel TA, Holbrook KL, Yates PA. Construction of an inexpensive, hand-held fundus camera through modification of a consumer ‘point-and-shoot' camera. Invest Ophthalmol Vis Sci. 2012; 53: 7600–7607.
Robertson C. How to build and sell an inexpensive retinal fundus camera. Conf Proc IEEE Eng Med Biol Soc. 2014; 2014.
Talmage E, DeHoog E, Abràmoff MD. Low cost fundus imaging for automated disease screening. Conf Proc IEEE Eng Med Biol Soc. 2014; 2014.
Giardini ME, Livingstone IA, Jordan S, et al. A smartphone based ophthalmoscope. Conf Proc IEEE Eng Med Biol Soc. 2014; 2014: 2177–2180.
Russo A, Morescalchi F, Costagliola C, Delcassi L, Semeraro F. Comparison of smartphone ophthalmoscopy with slit-lamp biomicroscopy for grading diabetic retinopathy. Am J Ophthalmol. 2015; 159: 360–364.
Fleming AD, Philip S, Goatman KA, Olson JA, Sharp PF. Automated assessment of diabetic retinal image quality based on clarity and field definition. Invest Ophthalmol Vis Sci. 2006; 47: 1120–1125.
Niemeijer M, Abràmoff MD, van Ginneken B. Image structure clustering for image quality verification of color retina images in diabetic retinopathy screening. Med Image Anal. 2006; 10: 888–898.
Hansen AB, Sander B, Larsen M, et al. Screening for diabetic retinopathy using a digital nonmydriatic camera compared with standard 35-mm stereo colour transparencies. Acta Ophthalmol Scand. 2004; 82: 656–665.
Neubauer AS, Rothschuh A, Ulbig MW, Blum M. Digital fundus image grading with the nonmydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy. Acta Ophthalmol. 2008; 86: 177–182.
Wilkinson CP, Ferris FL III, Klein RE, et al. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology. 2003; 110: 1677–1682.
Zana F, Klein JC. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. IEEE Trans Image Process. 2001; 10: 1010–1019.
Jelinek H, et al. Fractal analysis of the normal human retinal vasculature. Int J Ophthalmol Vis Sci. 2009; 8: 9788.
Wainwright A, Liew G, Burlutsky G, et al. Effect of image quality, color, and format on the measurement of retinal vascular fractal dimension. Invest Ophthalmol Vis Sci. 2010; 51: 5525–5529.
Wilcoxon F. Individual comparisons by ranking methods. Biom Bull. 1945; 1: 80–83.
Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960; 20: 37–46.
Abdellaoui M, Marrakchi M, Benatiya IA, Tahri H. [Screening for diabetic retinopathy by nonmydriatic retinal camera in the region of Fez]. J Fr Ophtalmol. 2016; 39: 48–54.