Open Access
Methods  |   October 2021
A Novel, Smartphone-Based, Teleophthalmology-Enabled, Widefield Fundus Imaging Device With an Autocapture Algorithm
Author Affiliations & Notes
  • Anand Sivaraman
    Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
  • Shanmuganathan Nagarajan
    Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
  • Sivasundara Vadivel
    Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
  • Sreetama Dutt
    Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
  • Priyamvada Tiwari
    Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
  • Srikanth Narayana
    Department of Eye and Retinal Diseases, Diacon Hospital, Bangalore, Karnataka, India
  • Divya Parthasarathy Rao
    Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
  • Correspondence: Anand Sivaraman, Research & Development, Remidio Innovative Solutions Pvt. Ltd., No. 1-51-2/12, II Floor, Vacuum Techniques Compound, 1st Cross Road, Phase-I, Peenya, Bangalore, Karnataka 560058, India. e-mail: anand@remidio.com 
Translational Vision Science & Technology October 2021, Vol. 10, 21. doi: https://doi.org/10.1167/tvst.10.12.21
Abstract

Purpose: Widefield imaging can detect signs of retinal pathology extending beyond the posterior pole and is currently moving to the forefront of posterior segment imaging. We report a novel, smartphone-based, telemedicine-enabled, mydriatic, widefield retinal imaging device with autofocus and autocapture capabilities to be used by non-specialist operators.

Methods: The Remidio Vistaro uses an annular illumination design that eliminates Purkinje reflexes without the use of cross-polarizers. The measured resolution using the US Air Force target test was 64 line pairs (lp)/mm in the center, 57 lp/mm in the middle, and 45 lp/mm in the periphery of a single-shot retinal image. An autocapture algorithm was developed to capture images automatically upon reaching the correct working distance. The field of view (FOV) was validated using both model and real eyes. A pilot study was conducted to objectively assess image quality. The FOVs of montaged images from the Vistaro were compared with regulatory-approved widefield and ultra-widefield devices.

Results: The FOV of the Vistaro was found to be approximately 65° in one shot. Automatic image capture was achieved in 80% of patient examinations within an average of 10 to 15 seconds. Consensus grading of image quality among three graders showed that 91.6% of the images were clinically useful. A two-field montage on the Vistaro was shown to exceed the cumulative FOV of a seven-field Early Treatment Diabetic Retinopathy Study image.

Conclusions: A novel, smartphone-based, portable, mydriatic, widefield imaging device can view the retina beyond the posterior pole with a FOV of 65° in one shot.

Translational Relevance: Smartphone-based widefield imaging can be widely used to screen for retinal pathologies beyond the posterior pole.

Introduction
Widefield fundus imaging allows visualization of the retina beyond the posterior pole and has increasingly become an important standard of care for imaging the posterior segment. Conditions such as retinal vascular pathologies, uveal and retinal inflammatory disorders, intraocular tumors, and retinal degeneration, among others, require imaging of the peripheral retina for effective diagnosis, assessment of disease progression, and disease management.1,2 The widefield imaging (WFI) and ultra-widefield imaging (UWFI) devices available today provide a field of view (FOV) of 100° to 200° and are wholly or partially based on scanning laser ophthalmoscope (SLO) designs. The ability to capture images of the peripheral retina has provided previously unavailable insights into the importance of disease-related peripheral pathology. However, despite being technologically advanced and providing a wider view of the retina, these devices are bulky and expensive, which limits their application to high-end private hospitals and clinics. Limitations of UWFI systems also include abnormal color, eyelash artifacts, and peripheral distortion.3
Today, the gold standard for fundus photography remains the Early Treatment Diabetic Retinopathy Study (ETDRS) seven-field mydriatic tabletop system that allows imaging of the posterior 75° of the retina.4–6 Technology has transitioned from the use of desktop-based cameras to smartphone-based devices for fundus imaging. Studies have demonstrated and validated the application of three-field, 45° fundus photography using portable handheld devices based on smartphones for effective screening for retinal pathologies.7–9 Such studies highlight the potential of smartphone-based fundus imaging to screen for vision-threatening disease, especially in developing countries where 80% of the affected people live in remote, rural areas.10 Compared with the available SLO-based widefield and ultra-widefield devices that capture single-shot images between 130° and 200°, WFI using smartphone-based devices has provided a maximum FOV of just over 50° in a single-shot image, albeit with reflex artifacts. By montaging four or five images together, however, a FOV of 100° has been demonstrated in previous studies in sufficiently dilated pupils measuring 8 mm.4,11
Here, we report a unique smartphone-based, mydriatic, portable, wide-angle imaging device, the Vistaro (Remidio Innovative Solutions Pvt. Ltd., Bangalore, India), that works on a patented annular illumination technology to capture retinal pathologies extending beyond the posterior pole in adults. It can be operated in either a handheld mode or a stabilized mode mounted on a chin rest. To our knowledge, this optical design is the first of its kind in widefield retinal imaging that works with a minimum pupil size of 5 mm for a FOV of up to 65° from the center of the pupil. It separates the illumination and imaging pathways using a donut-shaped image mask with an optically opaque central aperture, and it eliminates Purkinje and corneal artifacts without using cross-polarizers. Thus, it can yield high-quality images with a FOV of up to 65° captured in a single shot and approximately 90° when two fields are auto-montaged.
Materials and Methods
Optical Design and Hardware
Figure 1 shows an ophthalmologic imaging apparatus (100) that can capture reflex-free retinal images of a subject's eye (E) by separating the illumination axis (105) and the imaging axis (110). The imaging apparatus (100) comprises light sources (115, 120) arranged on the illumination axis. It has an objective lens (160), a porosity mirror (165), a porosity tube (170), a collimating lens (175), a converging lens (180), and an imaging module (181) placed along the imaging axis (110). The imaging module (181) may be the camera of a smartphone, a digital single-lens reflex camera, or any other monocular or binocular device for viewing the retina.
Figure 1.
Schematic diagram of the optical design of the hardware (left) and a photograph of the Vistaro handheld device (right).
The imaging apparatus further comprises two ring-shaped shields (125, 130), a diffuser (135), a first condenser lens (140), a second condenser lens (145), at least one projection lens (150), and a transparent plate (155) placed along the illumination axis (105). The diffuser (135) can be made from glass, plastic, or any other substrate that allows near-uniform illumination. The first shield (125) has a central opaque portion (182), on which an observation light source (120) is mounted, and an outer pair of coaxial annular transparent regions (190). The diffuser (135) is placed in front of the observation light source (120). The second shield (130) has a central opening portion (184) and is mounted on the diffuser (135); it is adapted to receive the illumination light beam emitted from the observation light source (a light-emitting diode [LED]; 120) mounted on the first shield (125).
The crucial optical parameters of the device, such as sharpness, contrast, image quality, and resolution, were modeled using Zemax (Kirkland, WA) and validated with US Air Force target tests (Edmund Optics, Barrington, NJ) (Figs. 2, 3). The resolution of a single-shot retinal image was determined to be 64 line pairs (lp)/mm in the center, 57 lp/mm in the middle, and 45 lp/mm in the periphery (Fig. 3). The analysis showed that the device is capable of producing high-quality images of the retina while meeting the minimum resolution requirements of the ISO 10940 standard of the International Organization for Standardization (Table).
Figure 2.
A modulation transfer function plot and spot diagram for analyzing the Vistaro image quality.
Figure 3.
The US Air Force target resolution test showing the resolution of the Vistaro in a single-shot retinal image. The images show the resolution for the center, middle, and periphery in a single-shot retinal image. The image is on a smartphone sensor and hence has a certain level of thermal noise associated with it, as expected from such sensors.
Table.
ISO 10940 Requirements for Images Captured by Fundus Cameras on Digital Sensors
With the portability and simplicity of a smartphone-based design in mind, we utilized the iPhone SE 2 smartphone and developed an LED-based continuous illumination design for the device. The phone uses a Sony Exmor RS sensor (Sony Corporation, Tokyo, Japan) with a pixel size of 1.22 µm and a contrast ratio of 1400:1. It captures images at a resolution of 4032 × 3024 pixels (∼12 megapixels). The resulting image is viewed on the phone screen at a resolution of 750 × 1334 pixels and a pixel density of ∼326 pixels per inch and can also be screen mirrored via Apple TV (Apple Inc., Cupertino, CA). Furthermore, the resolution of a real-time, square image on the smartphone sensor was 2845 × 2845 pixels, which resulted in approximately 64.3 pixels per retinal degree. The working prototype weighs 830 g, and the dimensions of the device are approximately 195 mm × 85 mm × 200 mm. The device has external fixation targets and can be operated by non-specialist operators with appropriate training and minimal supervision. The device can be used for up to 5 hours continuously after a full charge cycle.
Software
One of the principal challenges faced by operators in screening for retinal pathologies, especially in remote areas, is the lack of internet connectivity, which limits timely diagnosis and triaging of patients. We addressed this limitation by developing unique patient management software for the device that can capture and montage images offline and store them on a secure Health Insurance Portability and Accountability Act (HIPAA)-compliant cloud server powered by the Google Cloud Platform (Alphabet, Mountain View, CA) when the device is connected to the internet. We used the DualAlign i2k Retina platform (DualAlign, Clifton Park, NY) for montaging images. This procedure ensures that clinically relevant information is retained and available for immediate diagnosis upon integrating the application with a third-party video conferencing application (e.g., Zoom, Google Meet, Microsoft Teams) to connect to a specialist in real time or in a feed-forward mechanism. A customized desktop-based interface can also be provided to support Digital Imaging and Communications in Medicine (DICOM)-based transfer for institutions with centralized electronic medical records (EMR) and a Picture Archiving and Communication System (PACS). We focused on the operational simplicity of the device by designing an algorithm to detect the quality of the images so that non-specialist technicians can capture clinically useful images with basic training.
Capturing good-quality fundus images depends largely on the operator, particularly on maintaining the proper camera-to-eye distance to reduce haziness and artifacts. In the case of WFI, the process is even more sensitive to the working distance, as the imaging system must be positioned closer to the eye than for standard 45° imaging. Because of this extreme sensitivity to the working distance, acquiring meaningful images of the far peripheral retina with these systems ordinarily requires a skilled technician. By automating the detection of the correct working distance and triggering image capture at that point, our aim was to (1) alleviate stress experienced by the technician and reduce the level of expertise required to capture good-quality widefield retinal images, without the risk of contact with the patient's cornea while adjusting the working distance; and (2) reduce the patient's gaze time prior to image capture. When the correct working distance has been detected, the autofocus capability of the native device software can be triggered to avoid capturing blurred images.
To assist in detection of the correct working distance, the device relies on light from two green working-distance LEDs reflecting off the cornea (Purkinje reflexes), making these LEDs visible in the live view of the camera when imaging a patient's eye. To automate the detection of the correct working distance in software, we exploited the observation that these LEDs appeared sharpest and brightest at the correct working distance and appeared only within an acceptable bounding region, which we marked with square overlays on the live view. At the correct working distance, the working-distance LEDs were switched off and image capture was triggered. We faced challenges at four different levels during implementation of the automatic working-distance detection software. The first was to establish a pattern and devise an appropriate image processing algorithm to distinguish between frames at correct versus incorrect working distances. The second was to find a diverse group of volunteers to validate, fine-tune, and generalize the algorithm. The third was to find a balance between completely eliminating false positives and preventing the algorithm from becoming too sensitive. The fourth was to determine the optimal camera settings to be applied during automatic image capture.
When we were devising the algorithm, our first objective was to programmatically isolate the regions illuminated by the working-distance LEDs. For this purpose, we captured several training retinal images with the working-distance LEDs turned on. We initially analyzed 20 images of a model eye captured under different illumination conditions at various (correct and incorrect) working distances. Trials with the model eye enabled us to gather significant insights before moving on to real, dilated eyes of subjects. When we split the sample images into their respective red, green, and blue pixel intensities, we observed that the blue intensity was least scattered by the retina and consistently peaked only in the regions illuminated by the working-distance LEDs. This was possible because the blue intensity of the live view was significantly reduced or kept low through the use of an interference filter. This unique property of the blue channel held even for real eye images, where the optic disc region that was well illuminated in the red and green channels had a negligible blue component. Therefore, we were able to programmatically isolate the regions illuminated by the working-distance LEDs using the blue channel pixel values alone. We first split the given blue channel image horizontally into two halves (left and right). In each half, we would expect one and only one bright region corresponding to one of the working-distance LEDs. We located this bright region by applying adaptive binary thresholding to the blue channel and then generating spatial image moments of the thresholded image using the OpenCV moments function. The underlying computation is as follows (a code sketch follows the equations below):
  • The adaptive binary thresholding is achieved, in our case, by thresholding at the average of the median and maximum blue pixel values.
  • For a two-dimensional continuous function, f(x, y), the moment (sometimes called “raw moment”) of order (p + q) is defined as  
    \begin{eqnarray*} {M_{pq}} = \int\limits_{ - \infty }^\infty {\int\limits_{ - \infty }^\infty {{x^p}{y^q}f( {x,y} )\,dxdy} } \end{eqnarray*}
    for p, q = 0, 1, 2, …. Adapting this to a scalar (grayscale) image with pixel intensities I(x, y), the raw image moments (Mij) are calculated by  
    \begin{eqnarray*} {M_{ij}} = \sum\limits_x {\sum\limits_y {{x^i}{y^j}I(x,y)} } . \end{eqnarray*}
The image moments data for each half image provide relevant details, such as centroid coordinates, spread value, and peak pixel intensity of the bright region corresponding to the respective working distance LED. We denote these using functions as centroid(L), spread(L), and intensity(L) for the left halves and centroid(R), spread(R), and intensity(R) for the right halves. We noted these values for images captured across different control parameters such as overall image brightness (illumination) and correct versus incorrect working distance. 
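To make this concrete, the following is a minimal Python/OpenCV sketch of the per-frame feature extraction described above. The function names (analyze_half, led_features) are ours, and the scalar spread computed from the second central moments is one plausible reading of the "spread value" used in the text, not a verbatim reproduction of the production algorithm.

import cv2
import numpy as np

def analyze_half(blue_half):
    # Adaptive binary threshold: midpoint between the median and the
    # maximum blue pixel value, as described in the text.
    thresh_val = (np.median(blue_half) + blue_half.max()) / 2.0
    _, mask = cv2.threshold(blue_half, thresh_val, 255, cv2.THRESH_BINARY)

    # Spatial image moments of the thresholded region (OpenCV moments()).
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no bright region found in this half

    # Centroid from the first-order raw moments.
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])

    # Spread: width of the bright region from the normalized second
    # central moments (an assumed, variance-based measure).
    spread = np.sqrt(m["mu20"] / m["m00"] + m["mu02"] / m["m00"])

    # Peak intensity of the bright region.
    peak = int(blue_half.max())
    return centroid, spread, peak

def led_features(frame_bgr):
    # The blue channel is least scattered by the retina (OpenCV stores BGR).
    blue = frame_bgr[:, :, 0]
    mid = blue.shape[1] // 2
    # One working-distance LED is expected in each half.
    return analyze_half(blue[:, :mid]), analyze_half(blue[:, mid:])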
As detailed in Figures 4A to 4E, the blue channel distribution and the corresponding moments data revealed underlying patterns that we implemented in a generic image processing algorithm (Fig. 5) to detect frames corresponding to the correct working distance. Based on a quantitative analysis of 25 real eye images, we identified three deciding factors for our algorithm (a sketch of these rules in code follows the list):
  • 1. Peak intensity was in general always high for both the left and right LEDs at the correct working distance.
  • 2. Spread was always low for both left and right LEDs at the correct working distance. For cases at the correct working distance where peak intensity was unusually low, the spread was even lower.
  • 3. The peak intensity difference between the left and right LEDs was always found to be negligible and within an intensity difference of 15 when at the correct working distance.
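A minimal sketch of these three rules as a single frame classifier follows. Only the left–right peak-intensity difference bound of 15 is stated in the text; the other thresholds are illustrative placeholders for the empirically tuned values described below.

# Illustrative thresholds; only MAX_INTENSITY_DIFF = 15 is from the text.
INTENSITY_MIN = 200
SPREAD_MAX = 6.0
LOW_INTENSITY_SPREAD_MAX = 3.0
MAX_INTENSITY_DIFF = 15

def at_correct_working_distance(left, right):
    # left/right are the (centroid, spread, peak) tuples from led_features().
    if left is None or right is None:
        return False
    (_, spread_l, peak_l), (_, spread_r, peak_r) = left, right

    # Rule 3: left and right peak intensities must be nearly equal.
    if abs(peak_l - peak_r) > MAX_INTENSITY_DIFF:
        return False

    # Rules 1 and 2: high peaks with low spread; when the peaks are
    # unusually low (low illumination), the spread must be even lower.
    if min(peak_l, peak_r) >= INTENSITY_MIN:
        return max(spread_l, spread_r) <= SPREAD_MAX
    return max(spread_l, spread_r) <= LOW_INTENSITY_SPREAD_MAX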
Figure 4.
(A, center) Real eye image captured at the correct working distance and standard illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (A, far left) Intensity distribution of the blue channel along the line joining centroid(L) and centroid(R) of the regions illuminated by the working distance LEDs. (A, far right) Information obtained from the moments data of the regions illuminated by the working distance LEDs. (B, center) Real eye image captured at the correct working distance and very low illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (B, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are low due to the low illumination; (B, far right) however, to compensate, the spread of the blue channel peak is accordingly also low. (C, center) Real eye image captured at an incorrect working distance (too far) and standard illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (C, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are comparable to the low intensities observed for the correct working distance under low illumination (B); (C, far right) however, the spread of the blue channel peak is marginally higher. (D, center) Real eye image captured at an incorrect working distance (too far) and low illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (D, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are significantly low, (D, far right) and the spread of the blue channel peak is significantly high. (E, center) Real eye image captured at an incorrect working distance (too close) and standard illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (E, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are high, similar to those at the correct working distance (A); (E, far right) however, the spread of the blue channel peak is significantly higher.
Figure 5.
(Left) Flow diagram showing the image preprocessing steps to create the autocapture algorithm on the Vistaro. (Right) The decision tree executed to classify each frame as correct versus incorrect working distance.
As we iteratively tested our algorithm on over 30 dilated subjects, we incrementally eliminated insignificant parameters, such as the Euclidean distance between the two LED points, and fine-tuned the vital parameters by computing optimum threshold values of the spread, the peak intensities of the left and right LEDs, and their maximum allowable difference. When we attempted to eliminate false positives from the static images on which our algorithm was trained, we found that the initial threshold values of the decision parameters were extremely sensitive and resulted in poor user experience in the first few field trials. We overcame this problem by relaxing the threshold values and, to maintain accuracy, increasing the minimum number of contiguous frames that had to be classified as correct working distance before autocapture could be triggered. We also used the autofocus capability provided by iOS prior to the final capture to ensure that the retina was always in focus to produce the sharpest images. Figure 6 shows the autocapture algorithm in action.
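Combining the helpers sketched above, the trigger logic can be read as a per-frame debounce: autocapture fires only after a minimum run of contiguous correct-working-distance frames. The frame count is illustrative, and the camera object with turn_off_working_distance_leds(), autofocus(), and capture() methods is a hypothetical stand-in for the device and iOS camera APIs.

REQUIRED_CONSECUTIVE_FRAMES = 5  # illustrative; tuned empirically in practice

consecutive = 0

def on_new_frame(frame_bgr, camera):
    global consecutive
    left, right = led_features(frame_bgr)
    if at_correct_working_distance(left, right):
        consecutive += 1
    else:
        consecutive = 0  # any miss resets the run

    if consecutive >= REQUIRED_CONSECUTIVE_FRAMES:
        camera.turn_off_working_distance_leds()  # hypothetical device API
        camera.autofocus()  # native autofocus before the final capture
        camera.capture()
        consecutive = 0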
Figure 6.
(Left) The device working distance is too far. (Center) The device is at the correct working distance, with the working distance LEDs within the square live view overlays. (Right) Autocaptured image.
Optical Design Validation
The FOV of the device was evaluated first using a model eye with standard measurements and then verified with images captured on a real eye. The concentric arcs on the model eye started at 30°, with a 10° graduation for each subsequent arc. When the model eye was photographed with the Vistaro (Fig. 7, top left), the FOV was estimated to be just over 70°, measured to the sharpest limit at the periphery, in a single-shot image. When we calculated the same on an image of an actual eye, the FOV was found to be up to 65°. The difference in FOV can be attributed to the change in medium between a real eye (aqueous media) and a model eye (air media). We then compared the FOV with a real eye image of the same subject captured on the iCare EIDON Widefield TrueColor Confocal Fundus Imaging System (CenterVue, Padua, Italy). The FOV of a single-shot image on the EIDON is 60°, as noted by the manufacturer. We measured the FOV of a single-shot image from both the Vistaro and the EIDON using the following formula: (total horizontal or vertical length ÷ distance between the center of the macula and the optic disc) × 15°.
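Written as an equation, with ℓ the total horizontal or vertical extent of the image and d the measured distance from the center of the macula to the optic disc (both in the same pixel units), the formula assumes that the disc-to-macula separation subtends approximately 15° of visual angle:

\begin{eqnarray*} \mathrm{FOV} = \frac{\ell}{d} \times 15^{\circ} \end{eqnarray*}

For the Vistaro's measured 56.36°, for example, this corresponds to an image extent of roughly 3.76 (≈56.36 ÷ 15) disc-to-macula distances.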
Figure 7.
(Top left) A model eye captured with a FOV of up to 70° using the Vistaro. The concentric arcs mark an increase in measurement by 10°, with the first one corresponding to 30°. (Top right) Real eye single-shot image captured on the Vistaro showing a FOV of up to 65°. The measured FOV of a single-shot image on the Vistaro was 56.36° compared with that of a single-shot image from the EIDON (bottom center).
With this formula, the measured FOV of a single-shot EIDON image was 53.14°, and that of a single-shot Vistaro image was 56.36°.
The Vistaro was found to be compliant with the key standards pertaining to fundus cameras (ISO 10940:2009, ISO 15004-1, and ISO 15004-2:2007 for light hazard safety and International Electrotechnical Commission [IEC] 60601-1:2005 for electrical safety). 
Image Quality Assessment
To assess the quality of images captured on the Vistaro, we conducted a single-arm, cross-sectional pilot study at a tertiary diabetes care hospital over a period of 4 days in January 2021. The study was approved by the Institutional Ethics Committee and was conducted as per the tenets of the Declaration of Helsinki. Thirty-four consecutive diabetic patients who were >18 years of age, provided written informed consent, and had no significant media opacity (such as vitreous hemorrhage) were recruited for the study while undergoing routine examination at the hospital. Of these, two patients refused dilation and were excluded; thus, a total of 32 patients were included, and basic demographic details of age and gender were recorded. The patients underwent routine slit-lamp examination before being dilated with 1% tropicamide solution. After 15 minutes, all participants underwent mydriatic imaging by a minimally trained ophthalmic technician. The imaging protocol included one disc-centered and one macula-centered image captured per eye. The time taken to capture an image was noted manually, as was the number of cases in which the autocapture algorithm successfully captured images. A maximum of two additional attempts were made if an image was of insufficient quality. For each patient, the best-captured disc and macula images (two images per eye, four images in total per patient) were anonymized, stored on a secure cloud server, and sent to three independent graders for image quality assessment. From the same patient set, one disc-centered and one macula-centered image were montaged for nine patients and stored on the same cloud server after masking the patient details. All of the graders were practicing vitreoretinal specialists, and they provided an image quality grading on a per-image basis. Images were graded as follows:
  • Grade 1 (excellent)—Graders can visualize third-order vessels (capillaries) on the image, and the image is sharp. If capillaries are visible, microaneurysms should also be visible. Any blurring or glare in the peripheral third of the image may be disregarded, and the image may be graded based on the clarity of the remaining two-thirds of the image.
  • Grade 2 (acceptable)—Graders can identify only second-order vessels (vessels larger than capillaries but smaller than the central arcade). If these vessels are visible, larger retinal hemorrhages and subhyaloid hemorrhages should be visible. Blurring or glare is present in more than a third of the image but less than half of the image.
  • Grade 3 (ungradable)—Vessels cannot be clearly identified. Blurring or glare spanning half or more of the image may require the image to be graded as ungradable.
Images graded as excellent and acceptable were considered clinically useful. 
FOV Comparison With Regulatory-Approved Devices With Traceable Specifications
We also compared the FOV of the Vistaro with a regulatory-approved widefield device (EIDON Widefield TrueColor Confocal Fundus Imaging System) and an ultra-widefield device (Daytona; Optos, Inc., Marlborough, MA) by overlaying a standard seven-field ETDRS montage on images from all three devices. The same subject was imaged on all three devices after pupillary mydriasis, with a pupil size of 6.2 mm. For the comparison between the EIDON and the Vistaro, we used a montage of disc- and macula-centered images to overlay the ETDRS seven-field scale; for the Daytona, we used a single-shot image to map the same cumulative FOV (Figs. 8A–8C). We also mapped the FOV of a single image and of a two-shot montage in terms of disc diameters (Figs. 9A, 9B).
Figure 8.
(A) ETDRS seven-field overlay on a two-field montage captured on the EIDON. (B) ETDRS seven-field overlay on a two-field montage captured on the Vistaro. The FOV exceeded the cumulative FOV of the seven-field ETDRS image. (C) ETDRS seven-field overlay on a single-shot captured on the Optos Daytona.
Figure 9.
(A) Measuring a single-shot image on the Vistaro in terms of disc diameters (∼11.5). (B) Measuring a two-field montage on the Vistaro in terms of disc diameters (∼14.5).
Results
The autocapture algorithm helped the technician capture retinal images within an average span of 10 to 15 seconds per image. The algorithm was used successfully to capture images in 80% of the patients. 
The mean age of the study cohort of 34 recruited patients was 57.2 years (standard deviation, 13.3), and all had established diabetes. From the 32 patients included in the study, 128 single-shot images were evaluated by three independent vitreoretinal specialists. Two-field montages for nine patients were also graded for image quality by the same graders.
Consensus grading of single-image quality by the three independent vitreoretinal specialists showed that 91.6% of the images were clinically useful. Three patients were sensitive to the light source and moved their eyes during image acquisition, leading to poorly focused images that were deemed not clinically useful. The agreements (Cohen's kappa) between each of the three graders and the consensus were 0.75, 0.65, and 0.80, respectively. The montages were also graded for image quality by all three graders, and 100% of the montages were graded as clinically useful on a consensus basis.
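As an illustration, per-grader agreement with the consensus can be computed as follows; the paper does not state its statistical tooling, so the use of scikit-learn here is an assumption.

from sklearn.metrics import cohen_kappa_score

def grader_agreements(grades_by_grader, consensus):
    # grades_by_grader: three lists of per-image grades (1, 2, or 3);
    # consensus: the consensus grade for each of the same images.
    return [cohen_kappa_score(g, consensus) for g in grades_by_grader]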
Overlaying the seven-field ETDRS montage on the two-field montages from EIDON (Fig. 8A) and the Vistaro (Fig. 8B) showed that both devices exceeded the cumulative FOV of the gold standard, with the Vistaro showing a larger FOV horizontally. Additionally, with regard to disc diameters, it was found that a single-shot 65° FOV image was equivalent to ∼11.5 disc diameters (Fig. 9A), and the montage was an equivalent of ∼14.5 disc diameters (Fig. 9B). The measured FOV of a two-field montage on the Vistaro was approximately 90° (Fig. 10A), and that of a three-field montage can be up to approximately 120°, depending on the images captured using external fixation targets (Fig. 10B). 
Figure 10.
(A) Two-field montage captured on the Vistaro. The cumulative FOV was 90°. (B) A three-field montage captured on the Vistaro. The cumulative FOV was up to 120°. It should be noted that the FOV is subject to the operator using the external fixation targets.
Discussion
Widefield retinal imaging is useful in detecting pathologies extending beyond the posterior pole and is currently moving to the forefront of posterior segment imaging. The current gold standard for retinal imaging requires a seven-field montage, with each field having a FOV of 30°, to view the posterior 75° of the retina. Although there are regulatory-approved widefield and ultra-widefield devices to image the peripheral retina, almost all of them are limited in their application due to their inability to produce true-color retinal images, the possibility of missing minute details of retinal pathologies in the early and late stages, peripheral distortion, or operational complexity.3 We demonstrated that a portable, smartphone-based WFI device with a FOV of 65° was useful in imaging the retina beyond the posterior pole, and a two-field montage covered well beyond the seven-field ETDRS FOV. The device was compared to a regulatory-approved WFI device to map the area covered by a two-field montage, and the same area was mapped on a single-shot UWFI device. The US Air Force target test card showed that the resolution for the center of a single-shot image captured on the iPhone SE 2 was 64 lp/mm; in the middle (between center and periphery), it was 57 lp/mm; and, in the periphery, it was 45 lp/mm. The device was on par with the ISO standards for different parameters pertaining to fundus cameras working with digital sensors. The resolution of a real-time, square image on the smartphone sensor was 2845 × 2845 pixels, resulting in approximately 64.3 pixels per retinal degree. This was almost two times the minimum image resolution requirement of 30 pixels per degree described by the UK National Health Service for screening purposes.12
The device was designed with the main limiting factors that operators face in WFI in mind, particularly the sensitivity of the working distance, as well as other contributing factors, such as flash, gain, and gamma, that affect exposure.13,14 A unique autocapture algorithm was developed to capture images automatically when the correct working distance has been reached. This was done to help reduce operator-associated errors in WFI due to incorrect working distances and to eliminate unwanted artifacts; as a result, more operators can be trained with basic instructions to capture clinically useful retinal images. The device can accommodate patients with refractive errors within a range of −20 diopters (D) to +20 D. The unique patient-management software enables offline montaging of images to provide clinically relevant details when sent to specialists over third-party applications, in real time or on a feed-forward basis using telemedicine. The same software also allows for secure data storage on a HIPAA-compliant cloud server upon connection to the internet. The gradability and image quality of the device were assessed in a pilot trial on patients with diabetes. Based on consensus grading among three vitreoretinal specialists, 91.6% of the images produced by the device were clinically useful. With the two-field montage giving an effective FOV of 88.75°, a montage of three or more fields would conform to the definitions of UWFI as per the guidelines mentioned previously (Figs. 10A, 10B).15 A critical criterion in this case is the way the image is captured, as keeping the optic disc in different positions using the external fixation targets ultimately affects the cumulative FOV. Two-field montages were 100% clinically useful, based on consensus grading among the same graders. In the same study, the autocapture algorithm successfully captured images in 80% of patients.
The early concepts for obtaining widefield and ultra-widefield images included trans-scleral, trans-palpebral, and trans-pars-planar modes of illumination. The Equator Plus lens (Volk Optical, Mentor, OH) and the Panoret-1000 camera (Medibell Medical Vision Technologies, Inc., Haifa, Israel), built on this principle, could photograph a field of approximately 148°, from equator to equator and a little beyond, using trans-scleral illumination.16 However, clinical deployment of trans-scleral illumination was unsuccessful due to factors such as contact-mode imaging, which imposed potential risks of inflammation, contamination, and abrasion of the sclera and cornea. Image quality was highly dependent on the illumination location, and the device required the simultaneous use of both hands for imaging.17,18 In trans-pars-planar illumination, it is necessary to carefully control the illumination spot size and to identify the location of the pars plana, which can be challenging for technicians attempting to capture repeatable images.19 Although trans-palpebral illumination is a non-contact method that delivers light through the palpebra (upper eyelid), data on the quality and repeatability of images captured using this technique remain inadequate.20 Another challenge is the requirement for separate adjustment and optimization of the imaging and illumination subsystems.21
The advent of confocal imaging and scanning laser ophthalmoscopy marked a significant advancement in WFI and UWFI. The handheld Staurenghi 230 SLO Retina Lens (Ocular Instruments, Bellevue, WA) used two biconvex aspheric lenses and a two-element convex–concave lens incorporated into a confocal scanning laser ophthalmoscopy system, the Heidelberg Retinal Angiography (HRA) Spectralis (Heidelberg Engineering, Heidelberg, Germany), to increase the FOV from 100° to 150° of the retina.3,22 The Mirante Scanning Laser Ophthalmoscope (Nidek, San Jose, CA), a confocal SLO-based device, is a traditional UWFI device with a WFI adapter covering a FOV of 163° of the retina to provide a pan-retinal view up to the ora serrata when montaged. It uses monochromatic lasers in the red–green–blue spectrum to image the retina and produce pseudocolors.23 The Clarus fundus camera (Carl Zeiss Meditec, Jena, Germany) is a UWFI device that covers up to 133° of the retina in a single image using confocal SLO optics and traditional true-color imaging, a technique referred to as broad line fundus imaging, to minimize lash and lid artifacts and provide higher resolution images.24 The Daytona (Optos by Nikon, Tokyo, Japan) is a UWFI device that uses an ellipsoid mirror along with red and green light, unlike conventional devices with white LEDs, to enhance visualization of retinal substructures; this results in a pseudocolor image but covers up to 200° of the retina. These devices can work with pupils as small as 2.5 mm.3,25 A notable factor in SLO-based devices is that their FOV is measured from the center of the eye. An additional advantage of many present WFI and UWFI systems is the possibility of simultaneous acquisition of fundus fluorescein angiography, indocyanine angiography, red-free photography, fundus photography, and fundus autofluorescence (FAF).3
The need to implement WFI into standard retina practice comes from the observation that disease in the posterior pole does not always reflect the severity of disease in the retinal periphery, and vice versa.22 Several studies have provided interesting insights into the potential associations between macular and peripheral changes in conditions such as age-related macular degeneration, retinal vascular pathologies such as vein occlusions, and proliferative diabetic retinopathy.26 It has also been shown that peripheral lesions are predictive of increased risk of disease progression in diabetic retinopathy.26,27 Additionally, several high-risk diabetic retinopathy features, such as intraretinal microvascular abnormalities or neovascularization peripheral to the standard two-field 45° photographs used during screening, can be missed.3,26 Peripheral involvement is also an important component of posterior uveitis, vasculitis, infectious retinitis, retinitis pigmentosa, and ocular melanomas, among many others.3 Detecting these conditions requires a clear image beyond the posterior pole to ensure that typical findings are not missed, which in turn demands a wider field of view beyond the posterior pole and high image quality: two attributes addressed by widefield devices such as the Vistaro and the EIDON Widefield TrueColor Confocal Fundus Imaging System.
Transitioning from high-end tabletop devices to smartphones has been the norm in portable, conventional 45° fundus imaging over the past decade.28–30 Extending this to WFI implies a transition in image quality as well. The main camera-related factors contributing to poor-quality fundus images are the smaller pixel size of the sensors and the lack of separate pathways for illumination and imaging.13,31 When an eye is illuminated with a light beam, part of the beam is reflected off the surface of the cornea and the lens. This results in reflection artifacts appearing on the fundus image and reduces its quality. The CellScope Retina (University of Michigan, Ann Arbor, MI) is a portable, smartphone-based, widefield fundus imaging device reported in the literature.4 It consists of a battery-powered, 3D-printed optical and hardware system built around a smartphone, with a FOV of 50° in a single shot at a dilated pupil size of 8 mm. Semiautomated widefield imaging with a FOV of up to 100° is possible in the form of a montage of more than three images.4 The device uses polarization filters to eliminate the aforementioned reflex artifacts; however, these often result in non-uniformly illuminated images, with the disc and nerve fiber layer tending to be more exposed than the macula. The annular illumination-based design of the Vistaro has no cross-polarizers like those in the CellScope Retina; hence, no light reflected from the retina is rejected. The retina is illuminated uniformly, and images are free of corneal artifacts. The Vistaro gives a FOV of up to 65° from the center of the pupil in one shot and an effective FOV of approximately 89° in a montage of two images. With smartphones already validated for screening in primary care facilities, a smartphone-based WFI device to screen for signs of retinal disease extending beyond the posterior pole would be a major boost to the healthcare system.29,32,33
The EIDON Widefield TrueColor Confocal Fundus Imaging System is a WFI system that uses white LED illumination combined with a confocal aperture to produce high-definition images with a FOV of up to 60°. The technology allows reflected light to pass through a small confocal aperture, avoiding any scattering or reflection of light outside the focal plane that could otherwise cause blurring. This results in a sharp, high-contrast image of an object layer located within the focal plane.34 It can work with non-mydriatic pupils over 2.5 mm in diameter. However, one critical feature of the EIDON is that it produces a greater blue component than conventional flash-based fundus cameras, so the images differ from the conventional fundus images that ophthalmologists are used to seeing in routine examinations.34 As per the manufacturer's details, the EIDON works in the range of −12 D to +15 D; thus, it cannot focus on the posterior pole and detect retinal conditions related to pathological myopia in patients with myopia greater than 12 D.
A major limitation of current SLO-based widefield devices such as the Mirante Scanning Laser Ophthalmoscope (Nidek) and the Daytona (Optos) is the variation in retinal color due to the use of lasers and image processing algorithms. The "pseudocolor" image produced using different laser wavelengths to illuminate the retina may not provide enough information to reconstruct the true color of the retina; hence, certain features detectable with the full visible spectrum may be missed.35 Macular resolution has also been found to be lower in UWFI than in standard desktop cameras. Other limitations include peripheral distortion, making lesions appear larger than they truly are, and lash artifacts.3 For the Daytona, the overall image was reported to be stretched by a factor of 1.12 horizontally, and the extreme peripheral part of the image was magnified approximately 1.5 to 2 times, because an ellipsoid rather than spherical mirror is used in the optical design.36 Although such devices cover a larger part of the retina in a non-mydriatic mode, UWFI has been found to have only moderate agreement and lower image gradability rates compared with dilated fundus examination.37 Devices such as the Ocular Instruments Staurenghi 230 SLO Retina Lens are contact-based systems that require skilled photographers and extremely cooperative patients to image the retina. Devices such as the EIDON (CenterVue) and Clarus (Carl Zeiss Meditec) produce true-color images, but they are bulky and expensive and thus of limited use for screening. The Vistaro costs a tenth of the high-end tabletop cameras. It is also portable and telemedicine friendly, and it can be used for screening purposes in rural and remote areas as well.
One major limitation of the Vistaro is the need for a minimum pupil size of 5 mm; thus, it requires eye dilation. Although the risk of angle closure is minimal, mydriasis can interrupt near vision and add time to the image capture process.38 In contrast, SLO-based, non-mydriatic, widefield and ultra-widefield cameras can capture an image with a pupil size of 2.5 mm or larger. Another limitation is that the device is semiautomated and uses external fixation targets. The device requires manual maneuvering in handheld mode when more than two fields are required to capture the far retinal periphery, and the subsequent montage may be prone to errors if the technician is not well trained in the image acquisition process. A feature offered by current WFI and UWFI devices but not by the Vistaro is the simultaneous acquisition of fundus fluorescein angiography, indocyanine angiography, and FAF; however, this capability is offset by the substantial additional cost of those devices. We plan to address some of these limitations in the next version of the device, wherein we intend to make the device non-mydriatic, bring in automated internal fixation targets, and introduce FAF. The autocapture algorithm was not able to capture clinically useful images in 20% of patients, as it requires a certain level of patient cooperation and clear media. However, the algorithm was primarily designed to reduce the time and stress involved in capturing clinically useful images, which it effectively did. The level of patient cooperation and media opacity remain subjective factors in this regard.
The optical design outlined in this paper makes it possible to capture up to 65° from the center of the pupil in a single shot and approximately 90° in an offline, automated montage of two images, without compromising image quality. It offers a more efficient way to view the seven-field ETDRS-equivalent retinal field, and more images can be montaged to obtain an ultra-widefield view of the retina. Because the device is simple to use, repeatable, high-quality images can be obtained. Because the optical design is implemented on smartphone-based hardware, it is a low-cost alternative to the otherwise bulky and expensive SLO-based widefield and ultra-widefield imaging systems, making the device useful for screening and diagnosing patients in resource-constrained settings. The autocapture algorithm ensures that an operator can capture images with basic training, eliminating the need for skilled technicians. The next step is a clinical validation study to test the device in a real-world setting, as there is tremendous potential for widefield imaging using smartphones with the proper optical design.
Acknowledgments
The authors thank Adeeb Ulla Baig, Venkatesh Surabathula, Vighnesh M.J., Selvaraj K., Rachit Gupta, Mathew Dominic, Vasudev M.G., Aishwarya Nilawar, Somanath B., and Venkatesh Reddy B.J. from Remidio Innovative Solutions Pvt. Ltd. for helping us with the prototyping, user-based analysis, software iterations, and acquisition of images with the Remidio Vistaro in house and in the field. We also thank Chaitra Jayadev, MD, from Narayana Nethralaya, Bangalore, for helping us with the comparison of images using the CenterVue EIDON and Nikon Optos, and we express our heartfelt gratitude to Usha Sharma, MD, and Sahana G.V., from Diacon Hospital, Bangalore, for grading the images. We also acknowledge partial financial support from the Seva Foundation, USA, for development of the camera system.
Disclosure: A. Sivaraman, Remidio Innovative Solutions (E); S. Nagarajan, Remidio Innovative Solutions (E); S. Vadivel, Remidio Innovative Solutions (E); S. Dutt, Remidio Innovative Solutions (E); P. Tiwari, Remidio Innovative Solutions (E); S. Narayana, None; D.P. Rao, Remidio Innovative Solutions (E) 
References
1. Liu TYA, Arevalo JF. Wide-field imaging in proliferative diabetic retinopathy. Int J Retina Vitr. 2019; 5(suppl 1): 20.
2. Nguyen NV, Vigil EM, Hassan M, et al. Comparison of montage with conventional stereoscopic seven-field photographs for assessment of ETDRS diabetic retinopathy severity. Int J Retina Vitr. 2019; 5(1): 51.
3. Bilgeç MD, Erol N, Topbaş S. Wide-field retinal imaging in adults and children. In: Nowinska A, ed. Novel Diagnostic Methods in Ophthalmology. London: IntechOpen; 2019: 47–66.
4. Kim TN, Myers F, Reber C, et al. A smartphone-based tool for rapid, portable, and automated wide-field retinal imaging. Transl Vis Sci Technol. 2018; 7(5): 21.
5. Rajalakshmi R, Arulmalar S, Usha M, et al. Validation of smartphone-based retinal photography for diabetic retinopathy screening. PLoS One. 2015; 10(9): e0138285.
6. Ghasemi Falavarjani K, Wang K, Khadamy J, Sadda SR. Ultra-wide-field imaging in diabetic retinopathy; an overview. J Curr Ophthalmol. 2016; 28(2): 57–60.
7. Natarajan S, Jain A, Krishnan R, Rogye A, Sivaprasad S. Diagnostic accuracy of community-based diabetic retinopathy screening with an offline artificial intelligence system on a smartphone. JAMA Ophthalmol. 2019; 137(10): 1182–1188.
8. Sengupta S, Sindal MD, Baskaran P, Pan U, Venkatesh R. Sensitivity and specificity of smartphone-based retinal imaging for diabetic retinopathy: a comparative study. Ophthalmol Retina. 2019; 3(2): 146–153.
9. Sosale AR. Screening for diabetic retinopathy—is the use of artificial intelligence and cost-effective fundus imaging the answer? Int J Diabetes Dev Ctries. 2019; 39(1): 1–3.
10. Wintergerst MWM, Mishra DK, Hartmann L, et al. Diabetic retinopathy screening using smartphone-based fundus imaging in India. Ophthalmology. 2020; 127(11): 1529–1538.
11. Patel TP, Kim TN, Yu G, et al. Smartphone-based, rapid, wide-field fundus photography for diagnosis of pediatric retinal diseases. Transl Vis Sci Technol. 2019; 8(3): 29.
12. Maamari RN, Keenan JD, Fletcher DA, Margolis TP. A mobile phone-based retinal camera for portable wide field imaging. Br J Ophthalmol. 2014; 98(4): 438–441.
13. Veiga D, Pereira C, Ferreira M, Gonçalves L, Monteiro J. Quality evaluation of digital fundus images through combined measures. J Med Imaging (Bellingham). 2014; 1(1): 014001.
14. Bartling H, Wanger P, Martin L. Automated quality evaluation of digital fundus photographs. Acta Ophthalmol (Copenh). 2009; 87(6): 643–647.
15. Patel SN, Shi A, Wibbelsman TD, Klufas MA. Ultra-widefield retinal imaging: an update on recent advances. Ther Adv Ophthalmol. 2020; 12: 2515841419899495.
16. Shields CL, Materin M, Epstein J, Shields JA. Wide-angle imaging of the ocular fundus. Rev Ophthalmol. 2003; 10: 2.
17. Toslak D, Chau F, Erol MK, Liu C, Chan RVP, Son T, et al. Trans-pars-planar illumination enables a 200° ultra-wide field pediatric fundus camera for easy examination of the retina. Biomed Opt Express. 2019; 11(1): 68–76.
18. Toslak D, Thapa D, Chen Y, Erol MK, Paul Chan RV, Yao X. Wide-field fundus imaging with trans-palpebral illumination. Proc SPIE Int Soc Opt Eng. 2017; 10045: 100451X.
19. Wang B, Toslak D, Alam MN, Chan RVP, Yao X. Contact-free trans-pars-planar illumination enables snapshot fundus camera for nonmydriatic wide field photography. Sci Rep. 2018; 8(1): 8768.
20. Toslak D, Thapa D, Chen Y, Erol MK, Paul Chan RV, Yao X. Trans-palpebral illumination: an approach for wide-angle fundus photography without the need for pupil dilation. Opt Lett. 2016; 41(12): 2688–2691.
21. Toslak D, Ayata A, Liu C, Erol MK, Yao X. Wide-field smartphone fundus video camera based on miniaturized indirect ophthalmoscopy. Retina. 2018; 38(2): 438–441.
22. Kim EL. Wide-field imaging of retinal diseases. US Ophthalmic Rev. 2015; 8(2): 125–131.
23. Bawankule P, Narnaware S. Mirante: adding new dimensions to ultra-wide-field imaging system. Egypt Retina J. 2020; 7(2): 50.
24. Chen A, Dang S, Chung MM, et al. Quantitative comparison of fundus images by two ultra-wide field fundus cameras. Ophthalmol Retina. 2020; 5(5): 450–457.
25. Borrelli E, Querques L, Lattanzio R, et al. Nonmydriatic widefield retinal imaging with an automatic white LED confocal imaging system compared with dilated ophthalmoscopy in screening for diabetic retinopathy. Acta Diabetol. 2020; 57(9): 1043–1047.
26. Quinn N, Csincsik L, Flynn E, et al. The clinical relevance of visualising the peripheral retina. Prog Retin Eye Res. 2019; 68: 83–109.
27. Ghasemi Falavarjani K, Wang K, Khadamy J, Sadda SR. Ultra-wide-field imaging in diabetic retinopathy; an overview. J Curr Ophthalmol. 2016; 28(2): 57–60.
28. Sengupta S, Sindal MD, Besirli CG, et al. Screening for vision-threatening diabetic retinopathy in South India: comparing portable non-mydriatic and standard fundus cameras and clinical exam. Eye (Lond). 2018; 32(2): 375–383.
29. Prathiba V, Rajalakshmi R, Arulmalar S, et al. Accuracy of the smartphone-based nonmydriatic retinal camera in the detection of sight-threatening diabetic retinopathy. Indian J Ophthalmol. 2020; 68(suppl 1): S42–S46.
30. Sosale B, Aravind SR, Murthy H, et al. Simple, Mobile-based Artificial Intelligence Algorithm in the detection of Diabetic Retinopathy (SMART) study. BMJ Open Diabetes Res Care. 2020; 8(1): e000892.
31. Marrugo AG, Millán MS, Cristóbal G, Gabarda S, Abril HC. No-reference quality metrics for eye fundus imaging. In: Real P, Diaz-Pernil D, Molina-Abril H, Berciano A, Kropatsch W, eds. Computer Analysis of Images and Patterns. Berlin: Springer Science+Business Media; 2011: 486–493.
32. Sengupta S, Sindal MD, Baskaran P, Pan U, Venkatesh R. Sensitivity and specificity of smartphone-based retinal imaging for diabetic retinopathy. Ophthalmol Retina. 2019; 3(2): 146–153.
33. Natarajan S, Jain A, Krishnan R, Rogye A, Sivaprasad S. Diagnostic accuracy of community-based diabetic retinopathy screening with an offline artificial intelligence system on a smartphone. JAMA Ophthalmol. 2019; 137(10): 1182–1188.
34. Sarao V, Veritti D, Borrelli E, Sadda SVR, Poletti E, Lanzetta P. A comparison between a white LED confocal imaging system and a conventional flash fundus camera using chromaticity analysis. BMC Ophthalmol. 2019; 19(1): 231.
35. Bartsch D-U, Freeman WR, Lopez AM. A false use of "true color." Arch Ophthalmol. 2002; 120(5): 675–676.
36. Oishi A, Hidaka J, Yoshimura N. Quantification of the image obtained with a wide-field scanning ophthalmoscope. Invest Ophthalmol Vis Sci. 2014; 55(4): 2424–2431.
37. Kumar V, Surve A, Kumawat D, et al. Ultra-wide field retinal imaging: a wider clinical perspective. Indian J Ophthalmol. 2021; 69(4): 824–835.
38. Wolfs RC, Grobbee DE, Hofman A, de Jong PT. Risk of acute angle-closure glaucoma after diagnostic mydriasis in nonselected subjects: the Rotterdam Study. Invest Ophthalmol Vis Sci. 1997; 38(12): 2683–2687.
Figure 1. Schematic diagram of the optical design of the hardware (left) and a photograph of the Vistaro handheld device (right).
Figure 2. A modulation transfer function plot and spot diagram for analyzing the Vistaro image quality.
Figure 3. US Air Force target resolution test showing the resolution of the Vistaro at the center, middle, and periphery of a single-shot retinal image. Because the image is captured on a smartphone sensor, it carries a certain level of thermal noise, as expected from such sensors.
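For readers who want to convert USAF 1951 group/element readings into line pairs per millimeter, the standard relation is lp/mm = 2^(group + (element − 1)/6). The short Python sketch below applies it; the group/element pairs shown are our inference of readings consistent with the reported 64, 57, and 45 lp/mm, not values stated in the paper.

```python
def usaf_resolution_lp_per_mm(group: int, element: int) -> float:
    """Resolution of a USAF 1951 target element in line pairs per mm."""
    return 2 ** (group + (element - 1) / 6)

# Group/element pairs consistent with the reported values (inferred):
print(round(usaf_resolution_lp_per_mm(6, 1)))  # 64 lp/mm (center)
print(round(usaf_resolution_lp_per_mm(5, 6)))  # 57 lp/mm (middle)
print(round(usaf_resolution_lp_per_mm(5, 4)))  # 45 lp/mm (periphery)
```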
Figure 4. (A, center) Real eye image captured at the correct working distance and standard illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (A, far left) Intensity distribution of the blue channel along the line joining centroid(L) and centroid(R) of the regions illuminated by the working distance LEDs. (A, far right) Information obtained from the moments data of the regions illuminated by the working distance LEDs. (B, center) Real eye image captured at the correct working distance and very low illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (B, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are low due to the low illumination; (B, far right) however, the spread of the blue channel peak is correspondingly low. (C, center) Real eye image captured at an incorrect working distance (too far) and standard illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (C, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are comparable to the low intensities observed at the correct working distance under low illumination (B); (C, far right) however, the spread of the blue channel peak is marginally higher. (D, center) Real eye image captured at an incorrect working distance (too far) and low illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (D, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are markedly low, (D, far right) and the spread of the blue channel peak is markedly high. (E, center) Real eye image captured at an incorrect working distance (too close) and standard illumination, with the working distance LEDs turned on, along with the red, green, and blue channel images of the central region of interest. (E, far left) As observed in the blue channel distribution and corresponding moments data, the maximum intensities are high, similar to those at the correct working distance (A); (E, far right) however, the spread of the blue channel peak is significantly higher.
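To make the feature extraction in this figure concrete, here is a minimal, illustrative Python/OpenCV sketch (not the device's actual implementation; the threshold and function names are hypothetical) that segments the two working distance LED reflections in the blue channel, finds their centroids, and measures the peak intensity and moment-based spread of the profile along the line joining them.

```python
import cv2
import numpy as np

def led_profile_features(bgr_roi: np.ndarray, led_thresh: int = 200):
    """Illustrative working distance features from the central ROI.

    Segments the two working distance LED reflections in the blue
    channel, then samples the intensity profile along the line joining
    centroid(L) and centroid(R) to get its peak and moment-based spread.
    """
    blue = bgr_roi[:, :, 0]  # OpenCV channel order is B, G, R
    _, mask = cv2.threshold(blue, led_thresh, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n_labels < 3:  # background plus at least two LED blobs required
        return None   # LEDs not detected: reject the frame outright
    # Keep the two largest non-background blobs as the LED regions.
    largest = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1][:2] + 1
    (xL, yL), (xR, yR) = centroids[largest[0]], centroids[largest[1]]
    # Sample the blue channel along the line joining the two centroids.
    n = int(np.hypot(xR - xL, yR - yL)) + 1
    xs = np.linspace(xL, xR, n).astype(int)
    ys = np.linspace(yL, yR, n).astype(int)
    profile = blue[ys, xs].astype(float)
    peak = profile.max()
    # Moment-based spread: intensity-weighted standard deviation of
    # position, a simple measure of how diffuse the LED peaks are.
    pos = np.arange(n)
    mean = (pos * profile).sum() / profile.sum()
    spread = np.sqrt(((pos - mean) ** 2 * profile).sum() / profile.sum())
    return peak, spread
```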
Figure 5. (Left) Flow diagram showing the image preprocessing steps of the autocapture algorithm on the Vistaro. (Right) The decision tree executed to classify each frame as being at the correct versus an incorrect working distance.
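A minimal sketch of what such a per-frame decision tree could look like, using the peak and spread features from the previous sketch; the cut-off values below are placeholders, not the calibrated thresholds used on the device.

```python
def at_correct_working_distance(peak: float, spread: float,
                                peak_min: float = 60.0,
                                spread_max: float = 18.0) -> bool:
    """Placeholder decision tree mirroring the behavior in Figure 4.

    At the correct distance the LED peaks stay sharp (low spread) even
    when the illumination, and hence the peak intensity, is low; too
    close or too far, the peaks smear out (high spread).
    """
    if spread > spread_max:
        return False  # diffuse peaks: device is too close or too far
    if peak < peak_min and spread > spread_max / 2:
        return False  # dim and moderately diffuse: likely too far
    return True       # sharp peaks: trigger autocapture

# Usage with the feature extractor above (hypothetical frame loop):
# features = led_profile_features(frame_roi)
# if features and at_correct_working_distance(*features):
#     capture(frame)
```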
Figure 6. (Left) The device working distance is too far. (Center) The device is at the correct working distance, with the working distance LEDs within the square live view overlays. (Right) Autocaptured image.
Figure 7. (Top left) A model eye captured with a FOV of up to 70° using the Vistaro. The concentric arcs are spaced at 10° intervals, with the innermost corresponding to 30°. (Top right) Real eye single-shot image captured on the Vistaro showing a FOV of up to 65°. The measured FOV of a single-shot image on the Vistaro was 56.36°, compared with that of a single-shot image from the EIDON (bottom center).
Figure 8. (A) ETDRS seven-field overlay on a two-field montage captured on the EIDON. (B) ETDRS seven-field overlay on a two-field montage captured on the Vistaro; the FOV exceeded the cumulative FOV of the seven-field ETDRS image. (C) ETDRS seven-field overlay on a single-shot image captured on the Optos Daytona.
Figure 9. (A) Measuring a single-shot image on the Vistaro in terms of disc diameters (∼11.5). (B) Measuring a two-field montage on the Vistaro in terms of disc diameters (∼14.5).
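As a rough cross-check, using common approximations that are not stated in the paper (an average optic disc diameter of about 1.5 mm and roughly 0.29 mm of retina per degree of visual angle), a span of ∼11.5 disc diameters corresponds to about 11.5 × 1.5 ≈ 17.3 mm, or on the order of 17.3/0.29 ≈ 60°, broadly consistent with the 56° to 65° single-shot FOV reported above.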
Figure 10. (A) Two-field montage captured on the Vistaro; the cumulative FOV was 90°. (B) Three-field montage captured on the Vistaro; the cumulative FOV was up to 120°. Note that the achieved FOV depends on the operator's use of the external fixation targets.
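For illustration, the sketch below shows a generic way to assemble such a two-field montage with standard OpenCV feature matching and homography estimation. It is not the Vistaro montaging software, and all parameters are illustrative; it assumes the two fields overlap enough to yield at least a handful of reliable feature matches.

```python
import cv2
import numpy as np

def montage_two_fields(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Stitch two overlapping fundus fields (BGR images) into one montage."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    # Homography mapping points in img2 into img1's coordinate frame.
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Warp the second field onto a canvas wide enough for both fields.
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (2 * w, h))
    canvas[:h, :w] = np.maximum(canvas[:h, :w], img1)  # crude max blend
    return canvas
```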
Table. ISO 10940 Requirements for Images Captured by Fundus Cameras on Digital Sensors