Abstract
Purpose:
To evaluate two builds of the digital grating acuity test, “Peekaboo Vision” (PV), in young (6–60 months) populations in two hospital settings (Malawi and United Kingdom).
Methods:
Study 1 evaluated PV in Blantyre, Malawi (N = 58, mean age 33 months); study 2 evaluated an updated build in Glasgow, United Kingdom (N = 60, mean age 44 months). Acuity was measured and remeasured (test-retest) with PV and with Keeler Acuity Cards for Infants (KACI). Bland-Altman techniques were used to compare results and repeatability. Child engagement was compared between tests. Study 2 also included a test-time comparison.
Results:
Study 1 (Malawi): The mean difference between PV and KACI was 0.02 logMAR, with 95% limits of agreement (LoA) of −0.33 to 0.37 logMAR. On test-retest, PV demonstrated 95% LoA of −0.283 to 0.198 logMAR with a coefficient of repeatability (CR) of 0.27; KACI demonstrated wider 95% LoA of −0.427 to 0.323 logMAR and a larger CR of 0.37. PV evidenced higher engagement scores than KACI (P = 0.0005). Study 2 (UK): The mean difference between PV and KACI was 0.01 logMAR; 95% LoA were −0.413 to 0.437 logMAR. Again, on test-retest, PV had narrower LoA (−0.344 to 0.320 logMAR) and a lower CR (0.32) than KACI (LoA −0.432 to 0.407 logMAR, CR 0.42). The two tests did not differ in engagement score (P = 0.5). Test time was ∼1 minute shorter for PV (185 vs. 251 s, P = 0.0021).
Conclusions:
PV gives comparable results to KACI in two pediatric populations in two settings, with benefits in repeatability indices and test duration.
Translational Relevance:
Leveraging tablet technology extends reliable infant acuity testing to bedside, home, and rural settings, including areas where traditional equipment cannot be financed.
Peekaboo Vision had higher engagement scores than did Keeler cards over all three viewing conditions (BEO, RE, LE) (Table 3). An engagement score of 2 was achieved on significantly more occasions for Peekaboo Vision than for Keeler cards (100/174 [57%] versus 79/174 [45%]; McNemar's test for correlated proportions, P = 0.0005).
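The McNemar comparison operates on the discordant pairs (children who reached a score of 2 on one test but not the other). A minimal exact version is sketched below; the discordant counts passed in are hypothetical, chosen only to be consistent with the reported margin (100 − 79 = 21), since the true pair counts are not given here.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test: under the null hypothesis the
    b + c discordant pairs split 50:50, so the smaller count follows
    Binomial(b + c, 0.5)."""
    n, k = b + c, min(b, c)
    p_one_sided = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_one_sided)

# Hypothetical discordant counts (30 vs. 9) consistent with the reported
# excess of 21 score-2 results for Peekaboo Vision.
p = mcnemar_exact(30, 9)
print(round(p, 4))  # 0.0011 -- a split this lopsided is unlikely by chance
```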
Engagement score with respect to age was analyzed for BEO results for both tests. Thirty-six children attained an engagement score of 2 with Peekaboo Vision; their median age was 37 months. Thirty-one children attained an engagement score of 2 with Keeler cards; their median age was 38 months. Engagement scores of 1 were more frequent in younger children, with median ages of 28 months (N = 23) and 25.5 months (N = 18) for Keeler cards and Peekaboo Vision, respectively. Engagement scores of 0 were infrequent, but in study 1 occurred in the same older children (N = 4), with a median age of 36.5 months, using both Keeler cards and Peekaboo Vision.
There was evidence that engagement dropped slightly for Keeler card monocular testing following binocular testing: engagement score dropped from a median of 2 (mean 1.5) for binocular testing (N = 58) to a median of 1 (mean 1.3) for monocular testing (N = 58; Mann-Whitney U test, W [the sum of the ranks of the first group] = 3665, adjusted P = 0.10). With Peekaboo Vision testing, engagement seemed marginally better maintained: engagement score dropped from a median of 2 (mean 1.6) for binocular testing (N = 58) to a median of 1.5 (mean 1.5) for monocular testing (N = 58; Mann-Whitney U test, W = 3565, adjusted P = 0.3).
For Keeler cards, engagement score did not significantly drop between test and retest, with a median score of 1 for both, and a modest decrease in the mean from 1.4 on test to 1.2 on retest (N = 174, Mann-Whitney U test, W = 31,977, adjusted P = 0.06).
For PVb1, ES did not significantly change between test and retest: the median ES was maintained at 2 across test and retest, with the mean decreasing from 1.5 to 1.4 on retest (N = 174, Mann-Whitney U test, W = 31,192, adjusted P = 0.32).
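The W statistics quoted are rank sums: the sum of the ranks of the first group in the pooled ranking, with tied values sharing average ranks. A tie-aware sketch, using illustrative engagement scores rather than study data:

```python
def rank_sum_w(x, y):
    """Sum of the (1-based, tie-averaged) ranks of sample x in the pooled
    ranking of x and y -- the W reported for the Mann-Whitney U tests."""
    pooled = x + y
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        # extend j over a block of tied values
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank over the tie block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(ranks[:len(x)])

# Illustrative engagement scores (0, 1, or 2), not study data.
print(rank_sum_w([2, 2, 1], [1, 1, 0]))  # 14.0
```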
Peekaboo Vision and Keeler cards had very similar engagement scores (Table 3) based on the first test result for all three viewing conditions (BEO, RE, LE). An engagement score of 2 was achieved for slightly fewer Peekaboo Vision tests than for Keeler card tests (118/158 [75%] versus 123/158 [77%]; McNemar's test for correlated proportions, P = 0.5). Engagement scores for BEO results were reviewed with respect to age as for study 1. Forty-seven children attained an engagement score of 2 with Peekaboo Vision; their median age was 54 months. Forty-five children attained an engagement score of 2 with Keeler cards; their median age was also 54 months. As for study 1, engagement scores of 1 were more frequent in younger children, with median ages of 27 months for both Peekaboo Vision (N = 13) and Keeler cards (N = 14). An engagement score of zero occurred only once, for a 17-month-old child using Keeler cards.
There was no convincing evidence that engagement dropped for monocular testing following binocular testing with Keeler cards: average engagement score for binocular testing (N = 49) was median 2 (mean 1.8) and was median 2 (mean 1.7) for monocular testing (N = 49, Mann-Whitney U test, W = 2505, adjusted P = 0.4). For Peekaboo Vision testing, engagement dropped slightly: average engagement score for binocular testing (N = 49) was median 2 (mean 1.9) and was median 2 (mean 1.6) for monocular testing (N = 49, Mann-Whitney U test, W = 2668, adjusted P = 0.02).
Regarding change in engagement on test-retest, for Keeler cards the median ES was maintained at 2, with the mean ES dropping modestly from 1.9 to 1.7 (N = 119, Mann-Whitney U test, W = 14,859, adjusted P = 0.07).
Similarly, PVi did not show a significant change in ES on retest, with median ES of 2 in both groups and the mean decreasing from 1.9 to 1.8 on retest (N = 119, W = 14,763, adjusted P = 0.10).
In both studies, the mean difference in acuities measured by the card-based test and by the digital test is modest, being 0.02 logMAR (95% CI for mean difference: −0.02 to 0.06) for PVb1 (study 1, Malawi) and 0.01 logMAR (95% CI for mean difference: −0.03 to 0.05) for the PVi build (study 2, United Kingdom). When comparing the index with the reference standard, the upper and lower limits of agreement (the interval of two standard deviations of the measurement differences either side of the mean difference) exceeded an octave step but were within 5 logMAR lines (95% LoA −0.33 to 0.37 logMAR for study 1; 95% LoA −0.413 to 0.437 logMAR for study 2). These limits of agreement are similar to those observed when KACI is compared with itself on retest in both studies (study 1: LoA −0.427 to 0.323 logMAR; study 2: LoA −0.432 to 0.407 logMAR).
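These agreement indices can be reproduced directly from paired logMAR scores. A sketch using the two-standard-deviation limits defined above; the paired acuities are illustrative, not study values:

```python
import statistics

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement: the mean of the paired
    differences plus/minus two standard deviations of those differences."""
    d = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(d)
    sd_d = statistics.stdev(d)  # sample SD of the differences
    return mean_d, (mean_d - 2 * sd_d, mean_d + 2 * sd_d)

# Illustrative paired acuities (logMAR), not study data.
pv   = [0.40, 0.22, 0.51, 0.30, 0.12]
kaci = [0.42, 0.20, 0.40, 0.32, 0.10]
mean_diff, (lo, hi) = bland_altman(pv, kaci)
print(round(mean_diff, 3), round(lo, 3), round(hi, 3))  # 0.022 -0.084 0.128
```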
Furthermore, with narrower limits of agreement on test-retest than KACI in both studies, the findings support the use of high-resolution tablet-based technology, such as iPads, as a credible addition to the armamentarium available to clinicians in the assessment of vision in young children. However, as discussed below, there are limitations within these studies, and it cannot be unequivocally concluded that these digital tests represent a replacement of the reference standard.
The number of forced-choice alternatives represents a major difference between the index tests and the two-target Keeler Acuity Card. With PVb1, the staircase continued until two out of two presentations were correct. This rigid stair-casing did not allow progression when an error was made, and only poorer levels were tested thereafter. This method of stair-casing reflects the recommended testing strategy for Keeler cards, as per instructions for use. Assuming a one in four (0.25) probability of the correct target being selected by pure chance at one presentation, the binomial probability of a given level being passed with two digital presentations by pure chance equals 0.063. Using a similar paradigm, albeit with two targets per presentation (as is advised in the handbook accompanying KACI), the probability of two consecutive correct identifications at a given level arising by pure chance is 0.25.
With PVi, a change was made to the codebase altering the stair-casing (Fig. 2B): instead of terminating progression down the staircase after one error and retesting the previous level higher in the staircase, further presentations were given at the same level, and two out of three correct test screen presentations (each with four potential positions) were required for a level to be passed.
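The PVi pass rule can be stated compactly: presentations continue at a level until two are correct (pass) or two are incorrect (fail), so at most three are shown. A sketch of that rule (the function and its name are ours, not taken from the app's codebase):

```python
def level_passed(responses):
    """PVi-style 2-of-3 rule (sketch): a level is passed on the second
    correct response and failed on the second incorrect response.
    responses: booleans in presentation order."""
    correct = wrong = 0
    for r in responses:
        if r:
            correct += 1
        else:
            wrong += 1
        if correct == 2:
            return True
        if wrong == 2:
            return False
    return False

print(level_passed([True, True]))         # True: passed; no third presentation shown
print(level_passed([True, False, True]))  # True: two of three correct
print(level_passed([False, False]))       # False: level failed
```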
Our expectation was that the more nuanced stair-casing in PVi, allowing for staircase reversals, would increase accuracy. Given the significant differences in population and test builds between studies 1 and 2, direct comparison of repeatability indices must be interpreted with caution, though it is interesting to note that PVb1 exhibited apparently superior indices of repeatability compared with PVi. This is likely to relate, in part, to the fewer testable levels at the finer range of acuities with PVb1, contributing to increased clustering around the 0.1 and 0.4 levels (Fig. 4B, left panel).
For PVi, the binomial probability of a level being passed with two out of three correct identifications at a given frequency grating, with random target selection, equals 0.141. If two consecutive presentations are correct for a given level (e.g., at the lower end of the staircase), a third presentation at that level is not offered. The probability of these two consecutive presentations being correctly identified by chance equals 0.063 (as with PVb1). These probabilities follow from the binomial formula

\begin{equation}P\left( x \right) = \frac{N!}{x!\left( N - x \right)!}\,{\pi ^x}{\left( 1 - \pi \right)^{N - x}},\end{equation}

where P(x) = probability of x successes out of N trials, N = number of trials, and π = probability of success on a given trial.12
The four-target setup in the index tests denotes an intrinsically lower probability of levels being passed by chance (0.063 to 0.141 for PVi and 0.063 for PVb1) than for KACI (0.25), which should theoretically increase reliability.
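The chance-pass probabilities quoted above follow directly from the binomial formula:

```python
from math import comb

def binom_pmf(x, n, p):
    """Probability of exactly x chance-level correct choices in n
    presentations, each correct with probability p."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

print(binom_pmf(2, 2, 0.25))  # 0.0625: two of two, four targets (PVb1; reported as 0.063)
print(binom_pmf(2, 3, 0.25))  # 0.140625: exactly two of three, four targets (PVi; ~0.141)
print(binom_pmf(2, 2, 0.5))   # 0.25: two of two, two targets (KACI)
```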
However, there are several other differences between the paradigms that are likely to impact on reliability. Having four target options on a smaller iPad screen makes refixation eye movement detection more challenging and demands a closer working distance to help mitigate this. In turn, this brings the potential to positively bias toward myopes. Refractive status was not evaluated in either study and would be a desirable aspect of future validation studies.
The closer working distance does bring another advantage, however, in that the screen can be reached by the child at arm's length. In this study, one recognized limitation in the methodology relates to the fact that we did not record when children transitioned from a looking response to actively touching the gratings themselves. Although the examiner was not prescriptive regarding looking/pointing/touching behavior, the child was encouraged to tap the touchscreen. The “capture area” around the grating comprised one quarter of the total touchscreen to allow for minor “mis-hits” around a given grating. Touching the grating introduces another major divergence from traditional card-based testing, where touching the gratings is discouraged due to the potential for permanently marking/scratching the card. Children were nevertheless encouraged to point to the KACI grating to limit bias. Given the age groups involved, coordination of hand/arm movement presents another source of potential error, likely to be greater in younger infants and those with concurrent motor impairments.
The typical experience was that older children's natural tendency was to touch the grating. In less-confident, typically younger children, the test frequently commenced as a preferential looking test, but after a few low-frequency presentations, the behavior often changed to pointing/touching. Recording such behavior in future validation work would allow a more nuanced assessment of the accuracy of the digital platform specifically as a preferential looking test. Furthermore, attention to given behaviors within the context of varying acuity and age subgroups would help evaluate confounding influences.
For use as a preferential looking test, a potential shortcoming of the design of the Peekaboo Vision test screens relates to the difference between vertical and horizontal spacing of the four test grating loci. This presents a potential bias whereby refixations horizontally/diagonally may be relatively easier to spot than vertical refixation eye movements. Indeed, this may exaggerate difficulties in deciding upon looking responses in cases of vertical or horizontal strabismus.
Table 2 details the difference in visual angle between the digital platform and Keeler cards. When tested at 25 cm (compared to the recommended 38-cm test distance for Keeler Acuity Cards), the center-to-center distance of the digital grating patches is similar horizontally (1402 vs. 1375 arcmin for Peekaboo Vision) but reduced in the vertical plane to 1045 arcmin, around 25% less than the horizontal visual angle between the gratings in Keeler cards. These differences are more pronounced when the digital test distance is extended to the end of the dynamic range at 50 cm, where the vertical visual angle reduces to just 523 arcmin, translating to a refixation eye movement around 37% of the magnitude expected of Keeler cards. While no examiners reported this to be an issue, a potential improvement for future incarnations may be to match horizontal and vertical grating distances from center, or to limit the number of loci to two wider-spaced target areas, which is an option within PVi, though not a setting evaluated in the present study. Emerging large-display platforms, such as the 12.9-inch iPad Pro,13 provide over a 77% increase in display area compared with the model used in the present study, increasing the scope for wider target spacing in future versions of digital preferential looking tests.
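The visual-angle figures above reduce to simple trigonometry. In the sketch below, the physical separation is back-computed from the 1045-arcmin subtense at 25 cm (it is not stated here); note that for angles this large, doubling the distance does not exactly halve the subtense, because tan is only approximately linear:

```python
import math

def subtense_arcmin(separation_mm, distance_mm):
    """Visual angle (arcmin) subtended by a center-to-center separation
    viewed at a given distance, using the half-angle formula."""
    return 2 * math.degrees(math.atan(separation_mm / (2 * distance_mm))) * 60

# Separation implied by a 1045-arcmin subtense at the 25-cm test distance (~76.6 mm).
sep = 2 * 250 * math.tan(math.radians(1045 / 60 / 2))
print(round(subtense_arcmin(sep, 250)))  # 1045 arcmin at 25 cm (by construction)
print(round(subtense_arcmin(sep, 500)))  # ~526 arcmin at 50 cm, close to the ~523 reported
```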
Table 2 Physical Properties/Sizing Details for KACI and Peekaboo Vision
Table 3 Number of Tests (Comprising BEO, RE, and LE) With Each Engagement Score (0, 1, or 2) for the Two Acuity Tests
Where looking responses are replaced with screen tapping or pointing by the child, the issues pertaining to detection of visual behavior are largely obviated. Indeed, for such populations that can reliably tap proximally to the grating patch, there may be advantages to bringing the grating targets closer together such that the gratings fall to a more central retinal position, providing a better index of central macular/foveal acuity.
Another noteworthy difference between the index and reference tests relates to the number of testable acuity levels in each (18 for KACI and PVb1 versus 25 for the PVi build), as outlined in Table 1. This difference is also likely to impact observed reliability differences. Furthermore, the step size in the digital platforms becomes coarser toward the finer end of the high-frequency grating range. By clumping a large range of acuities into one nominal acuity level, the precision would appear better than for a test that captured more nuanced acuities between steps, but the accuracy may actually be poorer.
With the screen resolution and fixed test distance of 25 cm in study 1, the highest two spatial frequencies possible are created with 1- and 2-pixel grating widths, corresponding to acuity scores of 0.12 and 0.42 logMAR (a doubling of visual angle). True acuities lying between 0.12 and 0.42 logMAR were therefore all scored as 0.42 logMAR (Table 1), resulting in the horizontal clustering of data at 0.42 logMAR in Figure 4B, right panel. In contrast, the Keeler cards with the Children's Additional Set included levels of −0.1, 0.1, 0.2, and 0.3 logMAR: levels untestable with PVb1. This might be expected to have caused Peekaboo Vision to underestimate acuity in children with good acuities, thereby creating overall disagreement between Peekaboo Vision and Keeler cards, but this was not seen, perhaps because the numbers affected were small. For the PVi version (study 2), the adjustable test distance increased the range of measurable thresholds at the better-acuity end of the test, but the differences noted between Peekaboo Vision and Keeler cards remained very small, suggesting there was little or no skew effect with PVb1 in study 1.
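The mapping from stripe width in pixels to logMAR at the 25-cm test distance can be reproduced if a 264-ppi display (typical of 9.7-inch iPad Retina models) is assumed; the pixel density is our assumption, not stated here. The minimum angle of resolution for a grating is the angular subtense of one stripe:

```python
import math

def stripe_logmar(stripe_px, ppi=264, distance_mm=250):
    """logMAR of a grating whose stripe width is stripe_px pixels:
    log10 of the stripe's angular subtense in arcmin."""
    stripe_mm = stripe_px * 25.4 / ppi               # physical stripe width
    arcmin = math.degrees(math.atan(stripe_mm / distance_mm)) * 60
    return math.log10(arcmin)

print(round(stripe_logmar(1), 2))  # 0.12 -> finest displayable level
print(round(stripe_logmar(2), 2))  # 0.42 -> doubling the stripe adds 0.30 logMAR (an octave)
```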
In both studies, an animated smiley face comprised the reward animation, and during testing, after the animation reward was demonstrated, the connection between tapping the grating and the subsequent face animation was reinforced to encourage the children to engage. Children with very poor vision, including those with central scotomas, may not have appreciated the details of the smiley face (within the grating in study 1 or within the animation reward in both studies), creating potential confusion. We did not observe any instances where this created an obvious barrier to testing, as even in those children with very poor vision, orientation of attention toward the lowest frequency gratings on an otherwise featureless screen appeared instinctive. It should be noted, however, that statistical analysis regarding acuity was confined to patients who were deemed reliable in reaching an endpoint with all tests (engagement score 2). In our data, this subset captured children with reasonable vision, with only one child exhibiting acuity poorer than 1.0 logMAR. Consequently, the impact of a child's inability to detect the face features (in the PVb1 grating, or in the reward animation in both studies) may be a factor missed in the present analysis. Future studies should evaluate Peekaboo Vision in children with severe visual loss, as it is not clear whether the present findings are generalizable.
Bittner et al.14 compared a digital gratings test (measuring up to 2.2 logMAR), which also employed a four-target forced-choice paradigm, with the Early Treatment Diabetic Retinopathy Study (ETDRS) chart as the gold standard, in a population of adults with low vision (legally blind) due to retinal disease. Interestingly, they demonstrated good agreement: the digital gratings scaled similarly to ETDRS in their retinitis pigmentosa low-vision group, with ±2 SD within approximately ±0.4 logMAR.
When comparing the index tests (PVb1 and PVi) to the reference standard, we observed around 0.4 logMAR difference in 95% of tests, which is similar to that observed when PVb1 was tested in an adult cohort with artificially degraded vision, using +4, +8, or +16 spherical lenses to evaluate performance in low vision.5
The edge artefact at the junction between grating and the high-frequency background checkerboard presents a potential extra visual cue to children. This was a noticeable finding in PVb1 at the finest gratings and also at the junction between grating and isoluminant background at the edges of the face elements (Fig. 1A). A similar finding regarding increased visibility in relation to grating edge effect was reported in relation to the Teller Acuity Test,15 which prompted the use of the white rings with Keeler Acuity Cards for Children.11 Following this observation with PVb1 during study 1, the white rings were included in PVi to remove this potential cue, and the face details used in PVb1 were removed.
The data presented here suggest Peekaboo Vision has better repeatability than Keeler preferential looking cards: coefficients of repeatability were 0.27 for Peekaboo Vision versus 0.37 for Keeler cards in study 1 and 0.32 for Peekaboo Vision versus 0.42 for Keeler cards in study 2. This is clinically important, particularly when measuring the change in vision of a child over a period of time.
In study 1 (Malawi), children appeared to engage more with Peekaboo Vision than with Keeler cards, while study 2 (United Kingdom) data suggested no difference. This may reflect changes made in the Peekaboo Vision application, such as the removal of the smiley face detail, or it may reflect differences in the populations tested: tablets are less widely available in Malawi than in the United Kingdom, so their relative novelty may have improved engagement. Other factors such as level of vision and ocular/medical conditions may have influenced engagement, but they are difficult to quantify in such heterogeneous groups. Given that each child had to undertake up to 12 acuity tests (BEO, RE, and LE test and retest for both Peekaboo Vision and Keeler cards) plus a 15-minute interval, better engagement might be expected for a single test in typical practice.
Our data suggest a trend toward decreased engagement on retest across all comparisons, which is to be expected in the given age group. This did not, however, appear to reach statistical significance in our analysis in either study for either platform.
A drop in engagement score was also observed when testing monocularly after BEO testing, with mean engagement score dropping from 1.9 to 1.6 (N = 49, Mann-Whitney U test, W = 2668, adjusted P = 0.02). While statistically significant, the clinical significance of this small drop in engagement score is uncertain. It is possible that an intrinsic feature of PVi increases the potential for disinterest, introducing diminishing engagement with repeat testing. Future design adjustments could accelerate progression on the staircase and increase the variety of sounds and animation rewards to maintain engagement through binocular and monocular tests. Improvements in the study methodology are also desirable in future studies, limiting the testing gamut to replicate a test time more typical of a “real-life” clinical setting.
In study 2, we note wider 95% limits of agreement for Keeler test-retest, approaching an octave step, wider than found in study 1 with the same test. This may relate to several factors, particularly the very different populations, as well as intertester variability. The extensive range of testing (12 tests) in such a young population is likely to be the most significant factor and the most likely reason for the observed trend toward decreased ES across all repeat testing.
There are extensive differences between study 1 and study 2, both in population as well as in design of the index tests. Study 1 is a pilot in nature, testing the methodology in advance of study 2 and also informing the development of the formal PVi build. While it is useful to evaluate in broad terms how a digital infant acuity test performs in Malawi, we cannot draw any meaningful conclusions based on comparisons between the two studies due to intrinsic differences between PVb1 and PVi. The next logical study would seek to repeat the methodology with PVi in a similar cohort in Malawi.
Only total session time was noted for study 1. For study 2, the data inclusion form was amended to capture individual test times. The mean time to test (first test, BEO) was over 1 minute shorter for Peekaboo Vision than for Keeler cards (185 vs. 251 s, paired t test, N = 33, P = 0.0021), that is, 26% shorter. This may be partly due to Peekaboo Vision's four-choice paradigm compared with Keeler cards' two-choice paradigm. Shorter test time represents an important potential benefit of Peekaboo Vision, given the short attention span of this age group; another is the potential cost saving in high-throughput orthoptic clinics or screening programs.
Compared to card-based vision tests, tablet-based tests are susceptible to veiling glare and to reflections, preventing use outside. On the other hand, a tablet's Lambertian surface can maintain contrast even if the viewing angle is not perpendicular, and photometric compliance of tablets with British and European Standards is at least as good as gold standard retro-illuminated ETDRS charts.6 Given the potential for variation between reflected light from cards and Lambertian displays, measuring ambient luminance, together with reflected and emitted light from the cards and iPads, respectively, would have been a useful area to explore; however, this was not addressed in the present research, which aimed to evaluate precision and accuracy in a real-life context.
Digital gratings have been used elsewhere in an infant population, using two tablets emulating the Dobson card test.16 Integration of eye tracking on a monitor-based system has also been evaluated with grating acuity in children9; this could increase objectivity and potentially remove the need for a stringent fixed test distance, with live distance measuring between eye and tracker, dynamically adjusting acuity score relative to distance at the moment of refixation.
Furthermore, using a large crowd-sourced data set to train a deep learning convolutional network, the native front-facing mobile camera alone has been demonstrated to predict gaze with an accuracy purported to outperform current state-of-the-art approaches.17 Combining such technology with the high-fidelity Lambertian tablet display and using gaze to guide preferential looking methodologies through an automated staircase culminates in an exciting possibility whereby visual function could be profiled using a software-only solution on a near-ubiquitous mobile platform designed for recreation. Such development could extend the reach of a visual screening program into the patient's home. Indeed, this presents a promising new direction for pediatric vision testing, not only for high-contrast acuity but also for contrast and color assessment.18,19
Ongoing regulatory checks of applications for such measures are desirable given the frequent updates to operating systems and hardware. Expansion of national and international standards for vision-testing equipment to include such ubiquitous mobile technology could help support the safe adoption of tablet-based vision testing into regular practice. Further investigation is required to evaluate the role of the technology in amblyopia screening and to evaluate performance in nonexpert testers.
The authors would like to thank Mario Ettore Giardini, Department of Biomedical Engineering, University of Strathclyde, for his review of the manuscript and Claire Tarbert, NHS GG&C Medical Devices Unit, for her technical help with PVb1 and contribution to photometric compliance evaluation.
Supported by Fiona's Eye Fund (study in Malawi) and by the Queen Elizabeth Diamond Jubilee Trust for equipment and research time (IL) (study in United Kingdom).
Peekaboo Vision is a CE marked medical device. The legal manufacturer for Peekaboo Vision is Scottish Health Innovation Ltd., who handle intellectual property on behalf of the National Health Service in Scotland. None of the authors have a commercial relationship with Scottish Health Innovations Ltd.
Disclosure: I. Livingstone, None; L. Butler, None; E. Misanjo, None; A. Lok, None; D. Middleton, None; J.W. Wilson, None; S. Delfin, None; P. Kayange, None; R. Hamilton, None