December 2022
Volume 11, Issue 12
Open Access
Refractive Intervention
Development and Validation of a Digital (Peek) Near Visual Acuity Test for Clinical Practice, Community-Based Survey, and Research
Author Affiliations & Notes
  • Marzieh Katibeh
    Peek Vision, Berkhamsted, UK
    Department of Ophthalmology, Aarhus University Hospital, Aarhus, Denmark
  • Sandip Das Sanyam
    Sagarmatha Choudhary Eye Hospital, Lahan, Nepal
  • Elanor Watts
    Peek Vision, Berkhamsted, UK
    Tennent Institute of Ophthalmology, Glasgow, UK
  • Nigel M. Bolster
    Peek Vision, Berkhamsted, UK
    International Centre for Eye Health, Clinical Research Department, London School of Hygiene and Tropical Medicine, London, UK
  • Reena Yadav
    Sagarmatha Choudhary Eye Hospital, Lahan, Nepal
  • Abhishek Roshan
    Sagarmatha Choudhary Eye Hospital, Lahan, Nepal
  • Sailesh K. Mishra
    Nepal Netra Jyoti Sangh, Kathmandu, Nepal
  • Matthew J. Burton
    International Centre for Eye Health, Clinical Research Department, London School of Hygiene and Tropical Medicine, London, UK
    National Institute for Health Research Biomedical Research Centre for Ophthalmology at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
  • Andrew Bastawrous
    Peek Vision, Berkhamsted, UK
    International Centre for Eye Health, Clinical Research Department, London School of Hygiene and Tropical Medicine, London, UK
  • Correspondence: Elanor Watts, Peek Vision, 90a High Street, Berkhamsted, Hertfordshire HP4 2BL, UK. [email protected] 
Translational Vision Science & Technology December 2022, Vol. 11, 18. doi: https://doi.org/10.1167/tvst.11.12.18
Abstract

Purpose: Unaddressed near vision impairment (NVI) affects more than 500 million people. Testing near vision is necessary to identify those in need of services. To make such testing readily accessible, we have developed and validated a new smartphone-based near visual acuity (NVA) test: Peek Near Vision (PeekNV).

Methods: Two forms of the PeekNV test were developed: (1) quantitative measurement of NVA, and (2) binary screening test for presence or absence of NVI. The validity study was carried out with 483 participants in Sagarmatha Choudhary Eye Hospital, Lahan, Nepal, using a conventional Tumbling “E” Near Point Vision Chart as the reference standard. Bland–Altman limits of agreement (LoA) were used to evaluate test agreement and test–retest repeatability. NVI screening was assessed using Cohen's kappa coefficient, sensitivity, and specificity.

Results: The mean difference between PeekNV and chart NVA results was 0.008 logMAR units (95% confidence interval [CI], −0.005 to 0.021) in right eye data, and the 95% LoA between PeekNV and chart testing were within 0.235 and −0.218 logMAR. As an NVI screening tool, the overall agreement between tests was 92.9% (κ = 0.85). The positive predictive value of PeekNV was 93.2% (95% CI, 89.6% to 96.9%), and the negative predictive value was 92.7% (95% CI, 88.9% to 96.4%). PeekNV had a faster NVI screening time (11.6 seconds; 95% CI, 10.5 to 12.6) than the chart (14.9 seconds; 95% CI, 13.5 to 16.2; P < 0.001).

Conclusions: The PeekNV smartphone-based test produces rapid NVA test results, comparable to those of an accepted NV test.

Translational Relevance: PeekNV is a validated, reliable option for NV testing for use with smartphones or digital devices.

Introduction
The World Health Organization (WHO), in the International Classification of Diseases, 11th Revision (ICD-11), defined near vision impairment (NVI) as “presenting near visual acuity worse than N6” at 40 cm, elsewhere described as “presenting near visual acuity worse than N6 or 0.8M with existing correction.”1,2 This equates to a logMAR of approximately 0.27 (Table 1). Although distance vision is more often assessed, NVI has the potential to substantially limit function and quality of life, as well as population-level productivity.3,4 For example, trial evidence shows that the provision of presbyopia correction can increase productivity for agricultural activities requiring good near vision.5 Presbyopia poses a barrier to a diverse range of activities, including reading, writing, use of mobile phones, interpretation of facial expressions, and close practical tasks such as sewing. In addition to practical and social difficulties, NVI can be dangerous—for example, with an increased risk of accidental ingestion of foreign bodies.6 
Table 1.
 
Conversion of NVA Measurement Units
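The conversions in Table 1 can be reproduced from the standard relationship between M-notation and logMAR. A minimal sketch, under two stated assumptions (the common approximation N8 ≈ 1.0M, and logMAR = log10 of M-size divided by test distance in metres):

```python
from math import log10

def near_logmar(m_size: float, distance_m: float = 0.4) -> float:
    """Near acuity in logMAR for an optotype of a given M-size read at a
    given distance (MAR in arcmin equals M-size / distance in metres)."""
    return log10(m_size / distance_m)

def n_to_m(n_points: float) -> float:
    """Rough N-point to M-size conversion, assuming N8 is close to 1.0M."""
    return n_points / 8.0

# N6 at 40 cm is roughly 0.75M, i.e. about 0.27 logMAR (the NVI threshold)
print(round(near_logmar(n_to_m(6)), 2))  # → 0.27
```

This recovers the approximately 0.27 logMAR figure quoted for the N6 threshold at 40 cm.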
Uncorrected presbyopia was estimated to be responsible for 510 million of the world's 1.1 billion people with visual impairment in 2020.7 In most cases, presbyopia is easily addressed by the provision of near correction spectacles. The unmet need for optical correction varies substantially among regions: it is higher in low- and middle-income countries and approaches 90% in central Sub-Saharan Africa.7,8 Following the release of the World Report on Vision in 2019, NVI has been prioritized by the WHO, with major emphasis being placed on the global reporting of effective refractive error coverage (eREC).9 WHO strongly urges countries to measure and report both near and distance eREC at the population level.10 Therefore, there is a need for reliable tools for community vision screening and population-based surveys, such as the Rapid Assessment of Avoidable Blindness (RAAB), to identify people with NVI and enable estimation of eREC for distance and near.11 The ideal tool for this task should provide a quick and reliable test of distance visual acuity and near visual acuity (NVA) that can be easily integrated into testing methodology, including increasingly favored digital data collection tools, and which meets the Visual Acuity Measurement Standard, as set out by the International Council of Ophthalmology.12 
Despite the impact of presbyopia on the lives of so many people, testing of NVA is much less standardized than that for distance vision. Most existing chart and digital NVA tests have been found to have minimal or no evidence of validation.13,14 In addition to NVA tests, near reading acuity test charts are available; however, these are language specific and dependent on literacy, and they test cognitive functions beyond vision. Smartphones and tablets can also be used to test vision, allowing for automated digital data collection and transfer. Although smartphone access once posed a barrier to use in low- and middle-income countries, two-thirds of the global population now own a mobile phone, 39% of the population of Sub-Saharan Africa is predicted to be accessing mobile internet by 2025, and a WHO smartphone hearing test has been adopted in 179 of 195 countries globally.15–17 The ability to clearly see phone and computer screens is now a priority for many people. Technology has proven useful in testing distance vision in a wide range of environments and contexts, including community and school eye health programs and population surveys, as has been successfully demonstrated by tests including Peek Acuity and Peek Contrast Sensitivity.18–22 Of tests of its type, Peek Acuity has been shown to have superior test–retest reliability and the strongest correlation with Early Treatment Diabetic Retinopathy Study (ETDRS) visual acuities measured in clinics, according to a recent systematic review.23 The aim of this study was to design and validate an equivalent digital tool for measuring near vision, with attributes comparable to those of an accepted conventional NVA testing method. 
Methods
Product Development
Following literature review, international experts in the field of eye health programs and digital technology were surveyed for their views regarding the characteristics of an ideal NVA test. Survey respondents included internal experts within Peek (including optometrists and ophthalmologists working internationally, researchers, and software developers) and external experts working at the London School of Hygiene and Tropical Medicine International Centre for Eye Health, the International Myopia Institute, Brien Holden Vision Institute, and WHO. Some respondents were interviewed further. Peek Near Vision (PeekNV) was developed by combining the results from the above with existing Peek Acuity technology. Based on the review and expert opinion, the following desirable test characteristics were identified: 
  • Provides full quantitative NVA measurement
  • Identifies NVI at a threshold for onward referral for services
  • Prompts provision of prescription-ready readers
  • Can be integrated into population survey data collection to measure the frequency of NVI
  • Uses Tumbling “E” single optotype test as the preferred format
  • Usable in populations with low levels of literacy
  • Short test time
Test Design
A single tumbling “E” optotype is shown in one of four orientations (0°, 90°, 180°, or 270°) to reduce barriers posed by language, literacy, or age. A bounding box simulates the crowding effect of a standard ETDRS chart using a crowding bar with thickness equal to the limb of the optotype and a gap between optotype and crowding bar equal to half the total optotype size. This contour interaction format matches that used by the reference standard chart. Optotypes are presented at the following sizes: N64, N50.4, N40, N32, N25.6, N20, N16, N12.8, N10, N8, N6, and N3.2, following a logarithmic progression to enable comparison with existing charts and to maintain clinical relevance, while excluding sizes that could not be accurately portrayed by current smartphone technology due to pixel size. 
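The size progression and crowding geometry described above can be sketched in a few lines. This is an illustration only: the N-size list is taken from the text, and the 5×5 limb construction of the tumbling "E" is assumed.

```python
from math import log10

# Optotype sizes from the text (N notation)
SIZES_N = [64, 50.4, 40, 32, 25.6, 20, 16, 12.8, 10, 8, 6, 3.2]

def crowding_geometry(optotype_height: float):
    """Crowding-box dimensions for a 5x5 tumbling-E: bar thickness equals
    one limb (1/5 of height); the optotype-to-bar gap equals half the
    total optotype size."""
    bar_thickness = optotype_height / 5.0
    gap = optotype_height / 2.0
    return bar_thickness, gap

# Successive sizes approximate 0.1-logMAR steps, with the sizes that
# cannot be rendered accurately at current pixel densities omitted
steps = [round(log10(a / b), 2) for a, b in zip(SIZES_N, SIZES_N[1:])]
print(steps)
```

Printing `steps` shows steps of roughly 0.1 log units throughout, with a larger final jump from N6 to N3.2 where intermediate sizes were excluded.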
Validity Study Design
Although no single gold standard test exists for NVA, a conventional standard test was selected that could fulfill these requirements as a comparator: Tumbling “E” Near Point Vision Chart (Precision Vision, Woodstock, IL). Two sub-studies separately assessed (1) the use of PeekNV for near vision screening (a binary screening test for presence or absence of NVI), and (2) as a quantitative measurement of NVA. Sample size was calculated for sub-study 1 using kappa statistics as per Fleiss’ formula and for sub-study 2 based on Bland–Altman limits of agreement (LoA).24,25 Formulae are available in the Supplementary Materials. These calculations produced minimum required sample sizes of 115 and 273, respectively. 
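The Bland–Altman side of such a calculation can be illustrated with a commonly used approximation: the variance of an estimated 95% limit of agreement is roughly 3s²/n, so n ≈ 3(1.96·s/δ)² for a desired CI half-width δ. This is a sketch only; the exact formulae used by the study are in the Supplementary Materials, and the SD and precision inputs below are hypothetical.

```python
from math import ceil

def loa_sample_size(sd_diff: float, ci_half_width: float, z: float = 1.96) -> int:
    """Approximate n so the 95% CI around each 95% limit of agreement has
    the requested half-width; Var(LoA) is roughly 3*sd^2/n."""
    return ceil(3 * (z * sd_diff / ci_half_width) ** 2)

# hypothetical inputs: SD of paired differences 0.15 logMAR, desired
# CI half-width 0.05 logMAR
print(loa_sample_size(0.15, 0.05))  # → 104
```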
Participants
This was a cross-sectional observational validity/methods comparison study. Participants were recruited from Sagarmatha Choudhary Eye Hospital outpatient clinic in Lahan, Nepal. Patients with a wide range of eye problems, including refractive error, self-present or are referred to the hospital. No patients who would not otherwise have been attending the clinic were recruited. Other diagnoses and the results of an ophthalmic examination by an ophthalmologist were collected and considered as part of the eligibility assessment. A dedicated coordinator assessed the inclusion and exclusion criteria and obtained informed consent. The inclusion and exclusion criteria were as follows. 
Inclusion Criteria
  • 18 years of age or older
  • Patients attending Sagarmatha Choudhary Eye Hospital outpatient clinic
  • Able to give informed consent and carry out the tests
Exclusion Criteria
  • Uncorrected distance visual acuity worse than 6/12 in either eye, or any diagnosed macular disease affecting central vision (to reduce patient factors that might affect the repeatability of results)
  • Postoperative/intraoperative complications
  • Have received mydriatic drops
  • Declined to partake in the trial or complete all tests
  • Symptoms of COVID-19
NVA Measurement
Within each sub-study, participants’ uncorrected NVA was tested for monocular right eye vision, monocular left eye vision, and binocular vision, with a conventional Tumbling “E” Near Point Vision Chart and with a PeekNV smartphone-based test using Sony Xperia 10 II smartphone devices (Sony Corporation, Tokyo, Japan). Vision was tested without correction to maximize the range of near acuities available for analysis. This was carried out by two examiners in sequence—first examiner (A or B), second examiner, and again first examiner—thus allowing analysis of inter- and intra-rater repeatability. Therefore, in total, each participant's NVA was measured 18 times (Fig. 1). Data were collected using ODK Collect software, which randomized the order of tests (chart or app first) and the starting examiner. Randomization took place after enrollment of the participant and collection of baseline participant information and was embedded in the app; therefore, it was completely concealed. 
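The 18-measurement count follows from crossing the three factors described; a sketch (examiner assignment shown for one of the four randomized orders):

```python
from itertools import product

eyes = ["right eye", "left eye", "binocular"]
methods = ["chart", "app"]
# one of the four randomized orders: examiner A, then B, then A again
examinations = ["exam 1 (A)", "exam 2 (B)", "exam 3 (A)"]

# 3 examinations x 2 methods x 3 eyes = 18 NVA measurements per participant
schedule = list(product(examinations, methods, eyes))
print(len(schedule))  # → 18
```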
Figure 1.
 
Study flow diagram for the study design. Participants were first assessed for eligibility and then consented and were enrolled in the sub-study in progress at the time (sub-study 1 or 2). They would be automatically allocated to one of four random testing orders, commencing either with examiner A or B and with the chart or app, ultimately undertaking a total of 18 visual acuity tests in the order shown.
The Sony Xperia 10 II was chosen because its pixel density allowed the width of each arm of every selected optotype to remain within the <5% tolerance of the European Standards (EN)/International Organization for Standardization (ISO) 8596 standard, with the exception of N10, whose arms were 5.6% wider than the theoretical width required for a test at 40 cm. 
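This tolerance check can be sketched by rounding the theoretical limb width to a whole number of pixels and comparing. An illustration only: the display pixel density used below is an assumed value (not a figure from the paper), and the 5×5 limb construction, with one limb subtending the MAR, is assumed.

```python
import math

def limb_width_error(logmar: float, distance_m: float, ppi: float) -> float:
    """Fractional rendering error when the theoretical limb width of a
    5x5 tumbling-E (one limb subtends the MAR) is rounded to a whole
    number of pixels."""
    mar_arcmin = 10 ** logmar
    # physical limb width at the test distance, in millimetres
    limb_mm = distance_m * 1000 * math.tan(math.radians(mar_arcmin / 60))
    px_mm = 25.4 / ppi                     # one pixel, in millimetres
    pixels = max(1, round(limb_mm / px_mm))
    return pixels * px_mm / limb_mm - 1

# a 0.0-logMAR optotype at 40 cm on a ~457 ppi display (assumed density)
error = limb_width_error(0.0, 0.4, 457)
```

Smaller optotypes span fewer pixels, so rounding introduces proportionally larger errors, which is why very small N sizes cannot be rendered within tolerance on current displays.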
Testing was performed within 10 × 10 × 8-foot booths constructed from polyvinyl chloride canvas. Within each station, a chin rest was used to stabilize the participant's head position, and a stand angled at 45° was positioned 40 cm away from the participant's eyes. This stand was used to hold the smartphone and Tumbling “E” Near Point Vision Chart (see Supplementary Fig. S1). Rooms were lit by light-emitting diode bulbs and checked every day with a manual lux meter (Restriction of Hazardous Substances [RoHS]-compliant, 2019) to ensure that the ambient light fell within the range of 150 to 180 lux. Illumination was also recorded automatically by a smartphone lux meter. Where needed, adjustments (e.g., shifting the table position) were made to maintain symmetrical lighting between the two booths. The time taken by each test (from display of the first “E” optotype to the end of the test) was recorded for 132 individual quantitative NVA tests (66 chart and 66 app) and 180 NVI screening tests (90 chart and 90 app). 
Data Collection and Management
Simple demographics, distance visual acuity, and the main eye condition leading to clinic attendance were entered into a custom-built form using the ODK software application; participants were then guided to station A or B according to the randomization order, and the NV tests were conducted and recorded in the ODK form. The PeekNV test application was integrated into the ODK form, such that test results were automatically recorded. Results of the chart NV test were entered manually by the examiners. Data collection with this test can be undertaken while offline, allowing for use in remote locations; data synchronization is then carried out later when an internet connection is available. Data were uploaded and transferred via a secure server supported by the London School of Hygiene and Tropical Medicine (LSHTM). No personal identifiers or confidential information were recorded, allowing the transfer of information internationally without risk to data protection. Data were recorded digitally within password-protected devices, and paper records (including consent forms) were stored in locked cabinets. At the end of the project, records will be stored for a minimum period of 7 years, as per legal mandate. 
Data Analysis
All VA measurements were converted to logMAR for analysis. Data cleaning was carried out using Microsoft Excel (Microsoft Corporation, Redmond, WA), and statistical analysis was undertaken using the Stata 16 software package (StataCorp LLC, College Station, TX). Bland–Altman plots were created using R 4.2.0 (R Foundation for Statistical Computing, Vienna, Austria). The Bland–Altman LoA technique was used to evaluate the test–retest repeatability of quantitative NVA by the PeekNV test and the conventional Tumbling “E” test and to assess agreement between PeekNV NVA testing and conventional Tumbling “E” testing. 
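The core of the Bland–Altman agreement analysis can be sketched in a few lines (toy paired readings, for illustration only):

```python
import statistics

def bland_altman(a, b, z=1.96):
    """Mean difference (bias) and 95% limits of agreement between two
    paired series of logMAR measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)   # sample SD of the paired differences
    return bias, (bias - z * sd, bias + z * sd)

# toy paired logMAR readings (app vs. chart), for illustration only
app_va =   [0.30, 0.20, 0.10, 0.40, 0.00, 0.30]
chart_va = [0.30, 0.10, 0.10, 0.30, 0.10, 0.30]
bias, (lower, upper) = bland_altman(app_va, chart_va)
```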
Treating the conventional Tumbling “E” test as a reference standard, the validity of the screening function of the PeekNV test for identification of NVI was assessed using Cohen's kappa coefficient, sensitivity, and specificity. The repeatability of screening results by PeekNV and chart was reported as crude agreement percentage and Cohen's kappa, accompanied by 95% confidence intervals (CIs). Because the Bland–Altman assessment assumes independent data, sensitivity, specificity, and kappa agreement were reported separately for the right eye, left eye, and both eyes for each test. A logistic regression model within a generalized estimating equation (GEE) was used to assess the association of demographic characteristics and test elements with the agreement between PeekNV and the conventional test. The difference in time taken by the conventional test and the PeekNV test was compared using a linear regression model within the GEE. 
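The screening statistics can all be computed from the 2×2 confusion table of app results against the chart reference standard; a minimal sketch (the counts below are invented for illustration):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table of app
    screening results against the chart reference standard."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    p_observed = (tp + tn) / n
    # chance agreement expected from the marginal totals
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa

# invented counts: 45 true positives, 5 false positives, 5 false
# negatives, 45 true negatives
sens, spec, kappa = screening_metrics(45, 5, 5, 45)
```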
Ethical Approval
The study was approved by the ethics committees of the London School of Hygiene & Tropical Medicine (ref. 22945) and the Nepal Health Research Council (proposal ID 207-2021). Informed consent was obtained from all participants after they were given an explanation of the nature and objectives of the study and examination process. All participants gave written (or thumbprint) consent to participate. The study was conducted in accordance with the tenets of the Declaration of Helsinki. 
Results
Participant Characteristics
Of the 483 individuals who participated in this validity study, 183 participated in sub-study 1 for NVI screening and 300 participated in sub-study 2 for quantitative NVA measurement (Fig. 1). The participants’ demographics, distance visual acuity, and main eye conditions for the two sub-studies are presented separately in Table 2.
Table 2.
 
Characteristics of Study Participants
Sub-Study 1: Binary NVI Screening
The first sub-study assessed the PeekNV test as a screening tool for NVI according to the WHO definition: inability to read N6 at 40 cm. The prevalence of NVI in the participants’ right eyes was 42.1% (n = 77) as measured by the Tumbling “E” chart. The agreement and screening attributes of PeekNV results compared to the Tumbling “E” chart (reference standard) are presented in Table 3. There was 92.9% agreement between the standard chart and app screening results, with a kappa agreement of 0.85. When treating the Tumbling “E” Near Point Vision Chart as the reference standard, PeekNV had a sensitivity of 89.6% and specificity of 95.3% for identifying NVI. The positive predictive value was 93.2% and negative predictive value was 92.7%. Table 4 shows the repeatability of each NV test when it was performed by the same examiner on the same eye of the same person. All comparisons showed acceptable and comparable kappa scores for both the PeekNV test and Tumbling “E” chart. 
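These predictive values are consistent with the reported sensitivity, specificity, and prevalence via Bayes' rule; a sketch using the paper's rounded figures (small rounding differences are to be expected):

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity, and prevalence (Bayes)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# rounded figures from sub-study 1: sensitivity 89.6%, specificity 95.3%,
# right-eye NVI prevalence 42.1%
ppv, npv = predictive_values(0.896, 0.953, 0.421)
```

Evaluating this gives PPV and NPV close to the reported 93.2% and 92.7%.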
Table 3.
 
Accuracy of PeekNV as a NVI Screening Test
Table 4.
 
Repeatability of Peek and Chart NVI Screening Tests
Based on a GEE regression model, other factors were found to have no significant impact on the agreement level: level of light or lux (P = 0.85); order of tests (i.e., app first or chart first) (P = 0.73); examiner (P = 0.14); participants’ age (P = 0.15); participants’ gender (P = 0.19); or the examined eye (i.e., right, left, or binocular) (P = 0.86). 
Sub-Study 2: Quantitative NVA Measurement
In assessment of right eye data from the first examination, the mean difference (bias) between the PeekNV measurement and Tumbling “E” Near Point Vision Chart measurement was 0.008 logMAR (95% confidence interval [CI], −0.005 to 0.021), and the 95% LoA were within 0.235 to −0.218 logMAR. Correlation (scatterplots) and Bland–Altman plots for comparisons are shown in Figure 2. The LoA and mean difference (bias) in other combinations of paired app and chart near vision examinations (including left eye and binocular, in the first, second, and third examinations) are provided in Table 5 and in the Supplementary Figures. As shown, the mean difference in all comparisons is close to 0, with a 95% CI that covers 0. The LoA are also in an acceptable range (close to 0.2 logMAR) in all comparisons, meaning that the difference between the app and chart results is not statistically or clinically significant. 
Figure 2.
 
Bland–Altman and scatterplots comparing PeekNV versus chart NVA results (right eye) and the quantitative NVA results from PeekNV smartphone-based testing and the conventional Tumbling “E” Near Point Vision Chart. (A) Bland–Altman plot demonstrating the difference between measurements from the two different testing modalities. The mean difference (bias) between tests is indicated by the solid line, 95% CIs of this difference are indicated by dotted blue lines, and 95% LoA are indicated by dotted red lines. E1 represents results from first examination; OD indicates results from right eye testing. Results from other tests (left eye and binocular testing) are available in the Supplementary Figures. (B) Scatter plot to represent correlation between testing modalities. Y axis = PeekNV smartphone-based app test results. X axis = Tumbling “E” Near Point Vision Chart test results.
Table 5.
 
Pairwise Comparisons of Quantitative NVA Results Measured by the PeekNV App and the Conventional Tumbling “E” Chart
The repeatability of each test was measured by comparison of the same eye tested by the same examiner using the same method. Figure 3A shows the repeatability of the NVA measured by the PeekNV test in the right eye during the first examination. The mean difference between the two PeekNV measurements was 0.034 logMAR (95% CI, 0.021 to 0.047), and the 95% LoA were within −0.194 to 0.262 logMAR. The same two measurements by the conventional Tumbling “E” Near Point Vision Chart (Fig. 3B) had a mean difference of 0.025 logMAR (95% CI, 0.013 to 0.037), and the 95% LoA were within −0.180 to 0.230 logMAR. As shown, the 95% LoA and 95% CI of mean difference of measurements by PeekNV versus PeekNV and chart versus chart are very similar. Similar repeatability of NV scores was found in all other comparisons of eye/exam orders (see Table 4 and Supplementary Figs. S2 to S13). 
Figure 3.
 
Bland–Altman plots showing the intra-rater repeatability of NVA testing by the PeekNV app and the chart. (A) This Bland–Altman plot compares the right eye NVA results from PeekNV smartphone-based testing during the first examination (E1) and the third examination (E3), both carried out by the same assessor. (B) This Bland–Altman plot compares the right eye NVA results from conventional Tumbling “E” chart testing during the first examination (E1) and the third examination (E3), both carried out by the same assessor. The mean difference (bias) between the test results is indicated by the solid line, 95% CIs of this difference are indicated by dotted blue lines, and 95% LoA are indicated by dotted red lines.
Timing of the Tests
Figure 4 shows the details of test duration in both sub-studies. There was no statistically significant difference between mean time taken to measure NVA with the Tumbling “E” Near Point Vision Chart and PeekNV (31.36 seconds vs. 33.78 seconds). Time taken to identify the presence or absence of NVI with the conventional Tumbling “E” chart was 14.87 seconds (95% CI, 13.49 to 16.24); the mean time with PeekNV was 11.58 seconds (95% CI, 10.52 to 12.64), making PeekNV 3.29 seconds quicker (P < 0.001). 
Figure 4.
 
Duration of the near vision tests by the PeekNV app versus the chart. The box plot summarizes the time taken by the PeekNV and chart-based testing. Screening for NVI is shown on the left side of the graph, and quantitative NVA scoring is shown on the right side of the graph. As each participant was tested multiple times, and to allow for practice effects and fatigue, first, second, and third tests (examinations) are shown separately. Blue boxes represent chart testing, and red boxes represent app testing.
Discussion
Here we report the results of validation studies for a new smartphone-based NVA test, which we present in two formats: (1) a screening test to identify people with NVI (worse than N6), and (2) a quantitative assessment of the degree of NVI. For both forms of the test, we found high levels of agreement with a reference near vision test and high degrees of repeatability in a controlled environment (within and between assessors). 
Our results showed that the newly developed PeekNV test identified NVI with a positive predictive value of 93.2%, negative predictive value of 92.7%, sensitivity of 89.6%, and specificity of 95.3%. Agreement of the quantitative NVA results between PeekNV and Tumbling “E” Near Point chart testing was within acceptable limits in 95% of comparisons. Given the dynamic variability of accommodation, some variation in results on retesting is to be expected. Repeatability was similar for the standard chart and PeekNV testing. The kappa agreement was higher for the chart in right-eye examinations but was higher for the app than the chart in left-eye and binocular examinations within our study setting (Table 4). Therefore, we believe the degree of variation in PeekNV results is within acceptable limits. The PeekNV test was quicker than the standard chart when used to screen for NVI, although the difference is unlikely to be clinically significant, and there was no significant difference between mean test times for quantitative PeekNV NVA measurement and chart-based testing. Overall, the PeekNV test performed well and was no less repeatable than the Tumbling “E” Near Point chart test, and it was comparable in accuracy. 
PeekNV was tested and validated against the Tumbling “E” Near Point Vision Chart as a test of both monocular and binocular acuity and impairment, confirming that either option can be used. Although monocular visual acuity is a more helpful indicator of ocular pathology, binocular acuity is more closely correlated with an impact on quality of life, making both valuable metrics.26 
Many available digital near vision tests either had insufficient evidence of validity or had attributes incompatible with the needs we identified in the expert survey prior to developing the PeekNV app. Commonly missing attributes included adequate evidence of reliability, suitability for individuals with low literacy, and acceptable test time. Some existing digital tests were designed for individuals to use for at-home testing27 and are often too time consuming to be considered for mass screening or survey use (e.g., 172 seconds for PDI Check, or much longer for some tests).28 Some validated tests, such as MAVERIC, were unfortunately no longer available, or were available only in limited regions.29–31 
Our study has a number of limitations. First, as agreement was found to be superior in those without NVI, agreement between the tests might be lower in a population of exclusively presbyopic individuals. One explanation for this finding might be accommodation fatigue in those who are struggling to see the smaller optotypes. Our study cohort included a large proportion of younger, pre-presbyopic adults, and further research could include a predominantly older participant group. Of course, accuracy in both those with and without presbyopia is important for a useful screening test, and our cohort included participants with a wide range of NVAs. Considering a key purpose of our test is extensive population-based screening, we believe this test can be used with high confidence. Second, in order to directly compare PeekNV with a chart and minimize confounders, this study was carried out in a standardized, controlled setting. Visual acuity results (with any testing tool) are likely to vary much more if, for example, testing distance is not so rigorously controlled. Third, the Tumbling “E” Near Point Vision Chart was treated as a reference standard, as it is a widely accepted NVA testing device. However, unlike in distance acuity measurement, there is a lack of consensus on a reference standard test for NVA. Suggestions have been made on optimal chart choice (e.g., for children of different ages), but in practice various “conventional” charts are used, according to the personal preference of the assessor.32 This lack of a widely agreed-upon standardized near acuity test makes measuring and tackling NVI more challenging and may limit interpretation of the results. Finally, this validity study focused only on adult participants. Although functional NVA testing could be carried out in children without cycloplegia, cycloplegic testing would be required for identification of hyperopia, due to their stronger accommodative abilities. 
Further research involving this test could involve validation using other devices (which must meet pixel density specifications) and in other populations and settings, including less standardized testing environments. In addition, results produced during its use in various research projects and screening programs will be collected and analyzed. There is also potential for the development of additional test types, such as continuous text reading tests. 
Globally, untreated NVI is estimated to affect >500 million individuals.9 This number is expected to increase with aging populations and a growing burden of presbyopia. eREC acts as a key indicator of eye care service uptake and of progress toward universal health coverage.10 Calculating near vision eREC requires identifying individuals whose uncorrected NVA is worse than N6 at 40 cm but whose presenting NVA is N6 or better due to refractive correction (met need). This requires a tool for measuring NVA that can be used as part of a population-based survey. Conventional charts can be costly, may become damaged, and can be difficult to access in low-income settings. A rapid and reliable test that could be integrated into different digital platforms, such as the RAAB, could serve this purpose and could also support the development of a rapid presbyopia-specific population survey tool.33,34 Digital distance visual acuity testing has proved popular in community/school screening programs and research, in part because of the increased screening quality and the improved ease of data management and automatic recording it enables.20 This in turn can improve service uptake.35 Combining this with digital NVA testing, particularly in populations at risk of presbyopia who can be identified, diagnosed, and treated in the same visit, will facilitate the identification and management of NVI. It may also make NVA measurement more appealing to organizers of community health surveys and surveillance programs who already use digital devices for data collection. The PeekNV test validated in this study could serve this purpose. 
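The met-need calculation described above can be made concrete with a short sketch. The record structure and field names below are hypothetical, and the N6 threshold is expressed as ~0.3 logMAR at 40 cm, an approximation for illustration only:

```python
from dataclasses import dataclass

# Approximate logMAR equivalent of N6 at 40 cm (assumed threshold)
N6_THRESHOLD = 0.3

@dataclass
class Participant:
    uncorrected_nva: float  # logMAR, without near correction
    presenting_nva: float   # logMAR, with habitual correction if any

def near_vision_erec(participants):
    """Near vision eREC as met need / total need: of those whose
    uncorrected NVA is worse than N6, the fraction whose presenting
    NVA meets N6 thanks to refractive correction."""
    need = [p for p in participants if p.uncorrected_nva > N6_THRESHOLD]
    if not need:
        return None  # no one in the sample needs near correction
    met = [p for p in need if p.presenting_nva <= N6_THRESHOLD]
    return len(met) / len(need)

cohort = [
    Participant(0.5, 0.1),  # needs correction, well corrected (met need)
    Participant(0.5, 0.5),  # needs correction, uncorrected (unmet need)
    Participant(0.1, 0.1),  # no near vision impairment
]
print(near_vision_erec(cohort))  # 0.5
```

A survey tool such as PeekNV would supply the two acuity fields per participant; the coverage indicator itself is then a simple ratio over the surveyed population.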
Acknowledgments
The authors are very grateful for the hard work of ophthalmic assistants Rabi Shankar Sah and Kamlesh Yadav; eye health workers Ram Narayan Bhandari, Sharban Mandal, and Aasha Chaudhary; Richard Evans and Vince Hewitt, who provided technical software support; Lars Jacobsen, who provided technical support for the data analysis; and Melissa Witte, who provided organizational support. 
MJB and the data collection team in Nepal are supported by the Wellcome Trust (207472/Z/17/Z). This work was also supported by the National Institute for Health Research (NIHR) (using the UK's Official Development Assistance (ODA) Funding) and Wellcome (215633/Z/19/Z) under the NIHR-Wellcome Partnership for Global Health Research. The views expressed are those of the authors and not necessarily those of Wellcome, the NIHR or the Department of Health and Social Care. The funding organizations had no role in the design or conduct of this research. 
Disclosure: M. Katibeh, Peek Vision Ltd. (E); S.D. Sanyam, None; E. Watts, Peek Vision Ltd. (E); N.M. Bolster, Peek Vision Ltd. (E); R. Yadav, None; A. Roshan, None; S.K. Mishra, None; M.J. Burton, The Peek Vision Foundation (S); A. Bastawrous, The Peek Vision Foundation (O), Peek Vision Ltd. (O) 
References
World Health Organization. International Classification of Diseases for Mortality and Morbidity Statistics, 11th Revision. Geneva: World Health Organization; 2018.
World Health Organization. Blindness and vision impairment. Available at: https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment. Accessed December 19, 2022.
The National Academies of Sciences, Engineering, and Medicine. Making Eye Health a Population Health Imperative: Vision for Tomorrow. Washington, DC: National Academies Press; 2016.
Burton MJ, Ramke J, Marques AP, et al. The Lancet Global Health Commission on Global Eye Health: vision beyond 2020. Lancet Glob Health. 2021; 9(4): e489–e551. [CrossRef] [PubMed]
Reddy PA, Congdon N, MacKenzie G, et al. Effect of providing near glasses on productivity among rural Indian tea workers with presbyopia (PROSPER): a randomised trial. Lancet Glob Health. 2018; 6(9): e1019–e1027. [CrossRef] [PubMed]
Sevillano C, Morana MN, Estevez S. Visual involvement in foreign-body intestinal perforations. Arch Soc Esp Oftalmol. 2016; 91(1): 20–22. [CrossRef] [PubMed]
GBD 2019 Blindness and Vision Impairment Collaborators, Vision Loss Expert Group of the Global Burden of Disease Study. Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study. Lancet Glob Health. 2021; 9(2): e130–e143. [CrossRef] [PubMed]
Fricke TR, Tahhan N, Resnikoff S, et al. Global prevalence of presbyopia and vision impairment from uncorrected presbyopia: systematic review, meta-analysis, and modelling. Ophthalmology. 2018; 125(10): 1492–1499. [CrossRef] [PubMed]
World Health Organization. World Report on Vision. Geneva: World Health Organization; 2019.
Keel S, Müller A, Block S, et al. Keeping an eye on eye care: monitoring progress towards effective coverage. Lancet Glob Health. 2021; 9(10): e1460–e1464. [CrossRef] [PubMed]
Kuper H, Polack S, Limburg H. Rapid assessment of avoidable blindness. Community Eye Health. 2006; 19(60): 68–69. [PubMed]
International Council of Ophthalmology. Visual acuity measurement standard. Ital J Ophthalmol. 1988; II(I): 1–15.
Bellsmith KN, Gale MJ, Yang S, et al. Validation of home visual acuity tests for telehealth in the COVID-19 era. JAMA Ophthalmol. 2022; 140(5): 465–471. [CrossRef] [PubMed]
Yeung WK, Dawes P, Pye A, et al. eHealth tools for the self-testing of visual acuity: a scoping review. NPJ Digit Med. 2019; 2: 82. [CrossRef] [PubMed]
Global System for Mobile Communications Association. Global Mobile Trends 2017. London: GSMA; 2017.
Global System for Mobile Communications Association. The Mobile Economy Sub-Saharan Africa 2021. London: GSMA.
De Sousa KC, Smits C, Moore DR, Chada S, Myburgh H, Swanepoel W. Global use and outcomes of the hearWHO mHealth hearing test. Digit Health. 2022; 8:20552076221113204.
Bastawrous A, Rono HK, Livingstone IA, et al. Development and validation of a smartphone-based visual acuity test (peek acuity) for clinical practice and community-based fieldwork. JAMA Ophthalmol. 2015; 133(8): 930–937. [CrossRef] [PubMed]
Rono HK, Bastawrous A, Macleod D, et al. Smartphone-based screening for visual impairment in Kenyan school children: a cluster randomised controlled trial. Lancet Glob Health. 2018; 6(8): e924–e932. [CrossRef] [PubMed]
Andersen T, Jeremiah M, Thamane K, et al. Implementing a school vision screening program in Botswana using smartphone technology. Telemed J E Health. 2020; 26(2): 255–258. [CrossRef] [PubMed]
Bright T, Kuper H, Macleod D, Musendo D, Irunga P, Yip JLY. Population need for primary eye care in Rwanda: a national survey. PLoS One. 2018; 13(5): e0193817. [CrossRef] [PubMed]
Habtamu E, Bastawrous A, Bolster NM, et al. Development and validation of a smartphone-based contrast sensitivity test. Transl Vis Sci Technol. 2019; 8(5): 13. [CrossRef] [PubMed]
Samanta A, Mauntana S, Barsi Z, Yarlagadda B, Nelson PC. Is your vision blurry? A systematic review of home-based visual acuity for telemedicine [published online ahead of print November 22, 2020]. J Telemed Telecare. https://doi.org/10.1177/1357633X20970398.
Norman GR, Streiner DL. Biostatistics: The Bare Essentials. Hamilton, Ontario, Canada: B.C. Decker; 2000.
Lu MJ, Zhong WH, Liu YX, Miao HZ, Li YC, Ji MH. Sample size for assessing agreement between two methods of measurement by Bland-Altman Method. Int J Biostat. 2016; 12(2): 1–8. [CrossRef] [PubMed]
Kidd Man RE, Liang Gan AT, Fenwick EK, et al. Using uniocular visual acuity substantially underestimates the impact of visual impairment on quality of life compared with binocular visual acuity. Ophthalmology. 2020; 127(9): 1145–1151. [CrossRef] [PubMed]
Claessens JLJ, Geuvers JR, Imhof SM, Wisse RPL. Digital tools for the self-assessment of visual acuity: a systematic review. Ophthalmol Ther. 2021; 10(4): 715–730. [CrossRef] [PubMed]
Martin SJ, Rowe KS, Hser N, et al. Compared near-vision testing with the Nintendo 3DS PDI Check Game on the Thai-Burma Border. Asia-Pac J Ophthalmol (Phila). 2019; 8(4): 330–334. [CrossRef] [PubMed]
Aslam TM, Tahir HJ, Parry NR, et al. Automated measurement of visual acuity in pediatric ophthalmic patients using principles of game design and tablet computers. Am J Ophthalmol. 2016; 170: 223–227. [CrossRef] [PubMed]
Mirza N, Tahir HJ, Wang Y, Parry NRA, Murray IJ, Aslam TM. Testing of an automated tablet-based method for the determination of low contrast near visual acuity in ophthalmic patients. Acta Ophthalmol Conf. 2015; 93(S255).
Aslam TM, Parry NR, Murray IJ, et al. Development and testing of an automated computer tablet-based method for self-testing of high and low contrast near visual acuity in ophthalmic patients. Graefes Arch Clin Exp Ophthalmol. 2016; 254(5): 891–899. [CrossRef] [PubMed]
Huurneman B, Boonstra FN. Assessment of near visual acuity in 0–13 year olds with normal and low vision: a systematic review. BMC Ophthalmol. 2016; 16(1): 215. [CrossRef] [PubMed]
Mactaggart I, Limburg H, Bastawrous A, Burton MJ, Kuper H. Rapid assessment of avoidable blindness: looking back, looking forward. Br J Ophthalmol. 2019; 103(11): 1549–1552. [CrossRef] [PubMed]
Mactaggart I, Wallace S, Ramke J, et al. Rapid assessment of avoidable blindness for health service planning. Bull World Health Organ. 2018; 96(10): 726–728. [CrossRef] [PubMed]
Rono H, Bastawrous A, Macleod D, et al. Effectiveness of an mHealth system on access to eye health services in Kenya: a cluster-randomised controlled trial. Lancet Digit Health. 2021; 3(7): e414–e424. [CrossRef] [PubMed]
Figure 1. Study flow diagram for the study design. Participants were first assessed for eligibility, then consented and enrolled into the sub-study in progress at the time (sub-study 1 or 2). They were automatically allocated to one of four random testing orders, commencing either with examiner A or B and with the chart or the app, ultimately undertaking a total of 18 visual acuity tests in the order shown.
Figure 2. Bland–Altman and scatterplots comparing PeekNV versus chart NVA results (right eye): the quantitative NVA results from PeekNV smartphone-based testing and the conventional Tumbling “E” Near Point Vision Chart. (A) Bland–Altman plot demonstrating the difference between measurements from the two testing modalities. The mean difference (bias) between tests is indicated by the solid line, 95% CIs of this difference are indicated by dotted blue lines, and 95% LoA are indicated by dotted red lines. E1 represents results from the first examination; OD indicates results from right eye testing. Results from other tests (left eye and binocular testing) are available in the Supplementary Figures. (B) Scatter plot representing correlation between testing modalities. Y axis = PeekNV smartphone-based app test results. X axis = Tumbling “E” Near Point Vision Chart test results.
Figure 3. Bland–Altman plots showing the intra-rater repeatability of NVA testing by the PeekNV app and the chart. (A) Right eye NVA results from PeekNV smartphone-based testing during the first examination (E1) and the third examination (E3), both carried out by the same assessor. (B) Right eye NVA results from conventional Tumbling “E” chart testing during the first examination (E1) and the third examination (E3), both carried out by the same assessor. The mean difference (bias) between the test results is indicated by the solid line, 95% CIs of this difference are indicated by dotted blue lines, and 95% LoA are indicated by dotted red lines.
Figure 4. Duration of the near vision tests by the PeekNV app versus the chart. The box plot summarizes the time taken by the PeekNV and chart-based testing. Screening for NVI is shown on the left side of the graph, and quantitative NVA scoring is shown on the right side. As each participant was tested multiple times, and to allow for practice effects and fatigue, first, second, and third tests (examinations) are shown separately. Blue boxes represent chart testing, and red boxes represent app testing.
Table 1. Conversion of NVA Measurement Units
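The notations related in Table 1 can be interconverted programmatically. As a hedged sketch (not the study's own conversion table), N-point print sizes can be mapped to approximate logMAR values at the 40 cm test distance using the common typographic approximation that 8-point print corresponds to 1.0 M unit; exact equivalence varies by typeface:

```python
import math

def n_point_to_logmar(n_points, distance_m=0.4):
    """Approximate logMAR for N-point near print at a given distance,
    assuming 8-point print ~ 1.0 M unit (a typographic approximation)."""
    m_units = n_points / 8.0
    return math.log10(m_units / distance_m)

for n in (6, 8, 12, 24):
    print(f"N{n} at 40 cm ~ logMAR {n_point_to_logmar(n):.2f}")
```

Under these assumptions, N6 at 40 cm works out to roughly 0.27 logMAR and N8 to roughly 0.40 logMAR, which is why N6 is a common near-vision screening threshold.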
Table 2. Characteristics of Study Participants
Table 3. Accuracy of PeekNV as an NVI Screening Test
Table 4. Repeatability of Peek and Chart NVI Screening Tests
Table 5. Pairwise Comparisons of Quantitative NVA Results Measured by the PeekNV App and the Conventional Tumbling “E” Chart