Open Access
Articles  |   January 2019
Impact of Children's Postural Variation on Viewing Distance and Estimated Visual Acuity
Author Affiliations & Notes
  • Lisa M. Hamm
    School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
    New Zealand National Eye Centre, The University of Auckland, Auckland, New Zealand
  • Kishan Mistry
    School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
  • Joanna M. Black
    School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
    New Zealand National Eye Centre, The University of Auckland, Auckland, New Zealand
  • Cameron C. Grant
    Department of Paediatrics: Child and Youth Health, The University of Auckland, Auckland, New Zealand
  • Steven C. Dakin
    School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
    New Zealand National Eye Centre, The University of Auckland, Auckland, New Zealand
    UCL Institute of Ophthalmology, University College London, London, UK
  • Correspondence: Lisa M. Hamm, School of Optometry and Vision Science, 85 Park Rd, Grafton, Auckland, 1023, New Zealand. e-mail: [email protected] 
Translational Vision Science & Technology January 2019, Vol.8, 16. doi:https://doi.org/10.1167/tvst.8.1.16
Abstract

Purpose: Reliable estimation of visual acuity requires that observers maintain a constant distance from the target, but use of chin rests is not always feasible. Our aim was to quantify children's movement during community testing and its impact on near (40 cm) and intermediate (150 cm) acuity measures.

Methods: Thirty-three 7-year-old children performed several acuity tests run on a tablet computer, administered in the child's home by a trained lay screener. The tablet webcam was used to derive a continuous estimate of the child's position during testing. We estimated acuity using both the recommended viewing distance and trial-by-trial estimates of the child's physical distance from the screen.

Results: Although initial positioning in the 40-cm viewing distance condition was accurate, on 18% of trials children moved sufficiently to support a 0.1 logMAR improvement in acuity, leading 16% of staircases to overestimate acuity by more than one line. Initial positioning for the 150-cm condition was less accurate, but the longer viewing distance minimized the impact of children's movement on the visual angle of the target. Overall, at 150 cm, acuity was overestimated by more than 0.1 logMAR in 8% of staircases.

Conclusions: Children move substantially during intermediate and near acuity tests despite assessors encouraging maintenance of the correct viewing distance.

Translational Relevance: Real-time estimates of the child's physical distance from the target are possible when assessments are conducted on camera-enabled devices. Correction for movement will likely lead to more accurate measures of near and intermediate visual acuity.

Introduction
Obtaining an accurate measure of a child's recognition acuity is challenging for a host of reasons, including participant inattention, timidity, and lack of familiarity with items they are asked to recognize. Inaccuracies in acuity estimates can also arise when children move closer to the target to get a better look.1 The resulting increase in the angular subtense of the target will lead to overestimation of visual acuity. Although such changes in viewing distance are likely to be inconsequential at 6 m, their impact increases steeply as the recommended testing distance decreases.2 At close viewing distances, movement is not a problem when the positions of the target and observer are fixed (by wall placement and chin rests, respectively), but this is not common clinical practice. Testing within community settings is particularly vulnerable to this type of error because the person administering the test must establish the distance between target and observer in a new environment, often with only the aid of a measuring tape. 
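To make the geometry explicit, a back-of-envelope relation (consistent with the one-line criteria applied later in this paper) is

\[ \Delta\text{logMAR} = \log_{10}\!\left(\frac{d_{0}}{d}\right), \]

where d0 is the recommended viewing distance and d the child's actual distance from the target. A 0.1-logMAR (one line) inflation of measured acuity therefore requires a forward shift of only about 8 cm from a 40-cm test distance (40 × (1 − 10^−0.1) ≈ 8 cm), but roughly 30 cm from 150 cm.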
The potential value of community-based acuity testing at intermediate and near viewing distances is high. In an effort to improve engagement and to reduce the number of assessors required, an intermediate testing distance (1.5 m rather than 3 or 4 m) is now recommended for preschool community screening.3 Near visual acuity tasks (typically conducted at a 40-cm viewing distance) have functional significance because they closely parallel visual skills used for reading.1 Indeed, near tests are sometimes used to assess whether educational support will be provided for a child.1 Near acuity tests have also been recommended for screening4,5 and epidemiological work,6 both because of their relevance to “everyday” vision1 and because conducting both near and distance testing can be a more accurate screen for refractive error than can either test alone.5 This potential value is not always realized in practice; it is acknowledged that accuracy of near acuity measurement is limited by the need to establish and maintain appropriate distance from the target.1 Huurneman1 states that measurement of “[near visual acuity] is time consuming and requires special skills because children are eager to shorten viewing distance.” Those of us who have worked with children are aware of this tendency, but to what extent does it compromise the results of intermediate and near visual acuity tests? 
Research that quantifies such postural variation during acuity testing (in the absence of a chin rest) is scant. A single study tested adult participants with visual acuity charts at near (40 cm) and far (3 m), recording side-on videos to quantify movement. The results indicate that adult participants moved up to 12 cm forward for the near tests and up to 17 cm forward for the 3-m tests.2 These shifts would support a theoretical increase in acuity of more than 0.1 logMAR at near testing distances but would have a negligible impact at the 3-m test distance. Side-on video analysis requires additional equipment and is poorly suited to capturing "natural" movement in diverse community settings. Fortunately, the increased use of electronic (rather than chart-based) acuity tests7–10 introduces the possibility of estimating physical viewing distance using cameras built into the testing device. This approach has been used to record how close adult observers are when reading smartphones,11 but to the authors' knowledge, it has not been applied to the measurement of visual acuity. 
To address this, we set up a system that used the camera built into the testing device (in our case, a Microsoft Surface Pro 4 tablet computer [Microsoft, Inc., Bellevue, WA] running a custom acuity test) to record images of the child during testing; we then wrote software to track a bull's-eye target (attached to the child's occluding glasses) in order to infer the child's distance throughout visual acuity tests. The system was used by lay screeners in the community to measure near (40 cm) and intermediate (150 cm) distance visual acuity in 7-year-old children. Using the collected images and acuity test results, we set out to determine whether children move sufficiently during the course of visual acuity tests to compromise the accuracy of their results. 
Methods
Assessors
We partnered with a longitudinal epidemiological study, Growing Up in New Zealand, during a pilot data collection phase when the children were 7 years old. Lay screeners were trained over 1 week to conduct a 3- to 4-hour assessment of children in their homes. The assessment was broad and included psychological, social, language, and vision testing. Assessors received 2 hours of training on administering our tablet-based visual acuity test and were given a printed manual. To simplify the procedure for assessors, the tablet test automatically set target size, stored children's responses, and managed termination and scoring criteria. Furthermore, the order of tests was fixed and progression between tests was automated. The program included prompts between tests to remind the assessor to check (1) that the appropriate eye was covered and (2) that the child was at the correct distance from the screen. The assessor had to actively confirm both before each acuity test would start. 
Participants
Thirty-three participants attempted the visual acuity tests, but time constraints and some technical issues meant that not all children completed the near visual acuity tasks. Data files and webcam images were available for 26 children at near (40 cm) and 33 children at an intermediate (150 cm) viewing distance. The research followed the tenets of the Declaration of Helsinki. The research was approved by the Health and Disability Ethics Committee of the New Zealand Ministry of Health. The caregivers of all enrolled children provided written, informed consent. 
Testing Equipment
Testing was performed using Microsoft Surface Pro 4 tablet computers. These devices have high-resolution 31-cm LCD displays (2736 × 1824 pixels) and incorporate a front-facing web camera. Screens were gamma-corrected in software, confirmed using a Minolta LS100 photometer (Konica Minolta, Tokyo, Japan). Luminance was 1 cd/m², 150 cd/m², and 300 cd/m² for black, mid-gray, and white, respectively. All screens were fitted with antiglare protectors. Each assessor was provided with a kit including a custom keyboard (to input children's responses), occluding glasses (left eye and right eye, as well as a felt occluding sleeve to cover one side of the glasses for children with habitual refractive correction), a tape measure, a stand with a beaded rope for confirming distance (similar to that used on near visual acuity charts), and a matching sheet for the optotypes. Testing equipment is shown in Figure 1. The occluded side of each set of glasses and each felt sleeve had a bull's-eye sticker attached to it (black inner circle of 1.1 cm diameter and outer black ring of 3.4 cm diameter) to facilitate estimation of viewing distance from webcam images. 
Figure 1
 
Testing equipment. Acuity testing was performed using Microsoft Surface Pro tablet computers. Assessors were provided with an additional stand and distance measure, an input keyboard, as well as a measuring tape and occluding glasses fitted with a bull's-eye target (not shown).
Acuity Testing
Rather than presenting items of a fixed size on lines (from largest to smallest), we presented single items on-screen whose size was controlled by a Bayesian adaptive staircase. Tests were run with custom MATLAB code (MathWorks, Natick, MA; compiled to be executable outside of the MATLAB environment) that utilized Psychtoolbox12 for presentation. We used the QUEST13 algorithm to set trial-by-trial stimulus size and Palamedes14 for posthoc threshold estimation from raw stimulus-response data. 
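For readers unfamiliar with this style of test, the sketch below shows how such an adaptive staircase can be driven with Psychtoolbox's QUEST routines. The prior, target performance level, and stimulus-presentation helper are illustrative assumptions, not the study's exact settings.

```matlab
% Minimal sketch of a QUEST-driven single-optotype staircase (illustrative
% settings; requires Psychtoolbox).
tGuess     = 0.5;    % prior threshold guess (logMAR)
tGuessSd   = 0.5;    % prior standard deviation (logMAR)
pThreshold = 0.62;   % target proportion correct (illustrative)
beta = 3.5; delta = 0.01; gamma = 0.10;   % slope, lapse, and guess rate (10AFC)
q = QuestCreate(tGuess, tGuessSd, pThreshold, beta, delta, gamma);

nTrials = 16;                        % one staircase comprised 16 presentations
for trial = 1:nTrials
    targetLogMAR = QuestQuantile(q);                     % recommended stimulus level
    % Draw the optotype at targetLogMAR and collect the child's response
    % (presentOptotypeAndGetResponse is a hypothetical helper returning 1/0).
    response = presentOptotypeAndGetResponse(targetLogMAR);
    q = QuestUpdate(q, targetLogMAR, response);          % update the posterior
end
% The raw stimulus-response pairs can then be refit post hoc (e.g., with Palamedes).
```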
We used open access picture optotypes (The Auckland Optotypes; https://github.com/dakinlab/OpenOptotypes), which are available in regular and vanishing formats.15 Vanishing optotypes have a stroke that is split between black and white, displayed on a gray background.16 This format allows the optotype to perceptually vanish into the background when displayed beyond an observer's resolution threshold.16 We were interested in including this format because it may be of use for vision testing,17–19 and it may influence movement because of its perceptual difference from regular-format optotypes. 
Each participant attempted eight visual acuity tests. Children started at 150 cm, had their right eye tested with regular, then vanishing optotypes, then their left eye tested with both optotype formats. Children then moved to 40 cm and repeated the four tests in the same order. Each child was required to correctly name or match the 10 shapes prior to testing as part of a familiarization/engagement phase, including animations of each optotype (https://github.com/dakinlab/OpenOptotypes). After the engagement phase, each acuity test (or staircase) consisted of 16 optotype presentations (or trials). Acuity data obtained for each participant are presented in the Table. Overall, visual acuity was better at 150 than 40 cm (F1 = 52.24, P < 0.001) but did not differ significantly based on optotype format or eye. 
Table
 
Participant Acuity Data (in logMAR) and Testing Order
Distance Tracking
During pilot work, we attempted to record distance using facial-tracking software, similar to that used by Ho et al.11 However, the estimation of face width and height was not specific enough to provide measures of absolute distance, and errors occurred when more than one face was in frame. We found that the presence of multiple faces in our images was surprisingly common, as children's parents or siblings were often in frame during acuity testing. We opted instead for a custom image analysis strategy based on localizing and measuring a distinctive target of a known size attached to the occluding device. 
Image Acquisition
The front-facing camera had a horizontal field of view of 76°. For the near (40 cm) task, the webcam resolution was set to 640 × 360 pixels and the full image retained. For the intermediate (150 cm) tasks, the resolution was set to 1920 × 1440 pixels, and images were cropped to 640 × 360. The display operated at a refresh rate of 30 Hz, and we acquired a webcam image every six stimulus frames (i.e., at 5 Hz). This sampling rate allowed us to acquire images, convert them to grayscale, and save them onto the hard drive as pixel matrices in real time. Figure 2A shows an example of a demonstration image at the resolution we captured. Each image from the webcam was time-stamped with the frame number and system time to allow synchronization with the acuity test. 
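A minimal sketch of such an acquisition loop is given below, assuming MATLAB's USB webcam support package; the study's actual capture code is not specified, so the calls, resolution setting, and capture duration shown here are assumptions.

```matlab
% Sketch of timestamped webcam capture at ~5 Hz (assumes the MATLAB Support
% Package for USB Webcams and Image Processing Toolbox for rgb2gray).
cam = webcam;                          % first available (front-facing) camera
cam.Resolution = '640x360';            % near-task resolution (if available)
frameInterval = 1/5;                   % ~5 Hz sampling
frameCount = 0;
tStart = tic;
while toc(tStart) < 60                 % capture for the duration of a staircase
    img  = snapshot(cam);              % RGB frame
    gray = rgb2gray(img);              % convert to grayscale
    frameCount = frameCount + 1;
    stamp = toc(tStart);               % time stamp for synchronization
    save(sprintf('frame_%05d.mat', frameCount), 'gray', 'stamp');
    pause(frameInterval);              % wait before acquiring the next sample
end
clear cam
```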
Figure 2
 
Automated bull's-eye analysis. Each step is described in more detail in the text. Demonstration image (of younger, nonstudy child) used with parental permission.
Automated Identification of Bull's-Eye
Location of Candidate Bull's-Eye
We used hysteresis thresholding (hysthresh.m; Copyright 1996–2005 Peter Kovesi) to identify candidate pixel fragments from the grayscale images (Fig. 2B). Pixel fragments were subsequently eliminated if they were too large or too small to be a substantive component of the bull's-eye (Fig. 2C). We next compared candidate pixel fragments and found the two sharing the most similar centroid coordinates. Note that in an ideal image of a bull's-eye, the inner circle and outer ring have the same centroid coordinates. We then cropped the image to a rectangle including the two candidate pixel fragments (Fig. 2D). 
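One plausible implementation of this step, using Image Processing Toolbox routines and placeholder thresholds, is sketched below; hysthresh.m is Kovesi's function cited above, and grayImage is an assumed input frame.

```matlab
% Sketch of candidate bull's-eye localization (placeholder thresholds and
% area limits; grayImage is a grayscale webcam frame).
inv   = 1 - im2double(grayImage);           % the bull's-eye is dark, so invert
bw    = hysthresh(inv, 0.7, 0.4);           % Kovesi's hysteresis thresholding
cc    = bwconncomp(bw);                     % candidate pixel fragments
stats = regionprops(cc, 'Centroid', 'Area', 'BoundingBox');
stats = stats([stats.Area] > 20 & [stats.Area] < 5000);   % drop too small/large

% Find the two fragments with the most similar centroids (in an ideal
% bull's-eye, the inner circle and outer ring share a centroid).
n = numel(stats);
d = inf(n);
for a = 1:n
    for b = a + 1:n
        d(a, b) = norm(stats(a).Centroid - stats(b).Centroid);
    end
end
[~, idx] = min(d(:));
[i, j]   = ind2sub(size(d), idx);

% Crop to a rectangle containing both candidate fragments.
bb = vertcat(stats(i).BoundingBox, stats(j).BoundingBox);
x1 = min(bb(:, 1));              y1 = min(bb(:, 2));
x2 = max(bb(:, 1) + bb(:, 3));   y2 = max(bb(:, 2) + bb(:, 4));
crop = imcrop(grayImage, [x1, y1, x2 - x1, y2 - y1]);
```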
Confirmation of Bull's-Eye
We obtained two lists of pixel values from horizontal and vertical cross-sections of the cropped image (red and green, respectively, in Fig. 2D), each of which passed through the mean centroid coordinates. We interpolated each array to 512 data points, allowing subpixel estimation of transitions in polarity (points where luminance switched from dark to light or from light to dark, shown in Fig. 2E as points crossing 0). We then checked whether the horizontal and vertical profiles were consistent with a bull's-eye. To be consistent with a bull's-eye, a profile required (1) at least six cross points; (2) a sufficient change in pixel values between dark and light portions (the amplitude, represented by a double-sided vertical arrow touching dotted lines in Fig. 2E, needed to be at least ±1 standard deviation of the root mean square contrast of the original image); and (3) two outer dark portions of similar width (the distance between cross points 1 and 2 and between cross points 5 and 6). Criteria were strict because missing data were easier to identify than incorrect data. 
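A sketch of the horizontal-profile check follows (the vertical profile is handled the same way); the criterion constants, in particular the symmetry tolerance, are illustrative assumptions.

```matlab
function isBull = checkBullseyeProfile(crop, origGray)
% Sketch of the cross-section check for a candidate bull's-eye.
% crop: cropped candidate region; origGray: the original grayscale frame.
cy = round(size(crop, 1) / 2);                            % row near the mean centroid
p  = double(crop(cy, :));
p  = interp1(1:numel(p), p, linspace(1, numel(p), 512));  % upsample to 512 points
p  = p - mean(p);                                         % crossings = polarity flips
cross = find(diff(sign(p)) ~= 0);                         % dark/light transitions

rmsContrast = std(double(origGray(:)));                   % RMS contrast of original image
enoughCross = numel(cross) >= 6;                                   % criterion 1
bigEnough   = (max(p) - min(p)) / 2 >= rmsContrast;                % criterion 2
if enoughCross
    wLeft  = cross(2) - cross(1);                         % outer dark ring, left side
    wRight = cross(end) - cross(end - 1);                 % outer dark ring, right side
    symmetric = abs(wLeft - wRight) <= 0.25 * max(wLeft, wRight);  % criterion 3
else
    symmetric = false;
end
isBull = enoughCross && bigEnough && symmetric;
end
```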
Estimation of Minimum Child's Viewing Distance on Each Trial
Determining the Distance Between Child and Target
If all bull's-eye identification criteria were met, the pixel width and height of the bull's-eye were calculated from the interpolated cross-sections. We used the larger of the two estimates (to minimize the impact of horizontal or vertical tilt) to calculate the child's distance from the screen using simple trigonometry. The distance between the eye and the glasses (back vertex distance of 2 cm, the mean of authors LMH and KM) was then added. 
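The paper describes this step only as "simple trigonometry"; one plausible pinhole-camera formulation, using the 76° horizontal field of view, 640-pixel image width, 3.4-cm outer ring diameter, and 2-cm back vertex distance reported above, is sketched below.

```matlab
function distCm = bullseyeDistance(bullseyePx)
% Estimate child-to-screen distance from the bull's-eye diameter in pixels.
% A pinhole-camera sketch using values reported in the paper; the exact
% formulation the authors used is not given, so this form is an assumption.
hfovDeg   = 76;            % horizontal field of view of the front camera (deg)
imWidthPx = 640;           % width of the analyzed images (pixels)
ringCm    = 3.4;           % physical diameter of the outer bull's-eye ring (cm)

focalPx     = (imWidthPx / 2) / tand(hfovDeg / 2);   % focal length in pixels
camToTarget = ringCm * focalPx / bullseyePx;         % camera-to-bull's-eye distance (cm)
distCm      = camToTarget + 2;                       % add 2-cm back vertex distance
end
```

Under these assumptions, a ring spanning about 37 pixels corresponds to a viewing distance of roughly 40 cm.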
Verification of Distance Over Time
Because the tablet acuity test was programmed to prompt the assessors to remeasure and confirm the appropriate viewing distance before starting each staircase, we treated each staircase as a continuous record of the child's motion. Within this unit, we flagged images that suggested large jumps in position, movement of the child outside of the expected range, or no motion at all. We then manually inspected such flagged frames, as well as all frames in which no bull's-eye could be automatically detected, and a set of randomly sampled images. If a bull's-eye was present in these queried images, subpixel analysis was initiated manually by cuing the center of the bull's-eye. For images in which the bull's-eye was partially out of frame, partially obscured, or in which multiple bull's-eyes were present, subpixel analysis was initiated manually by cuing the center of the correct bull's-eye, provided at least half of the bull's-eye was in frame (examples shown in Fig. 3). In approximately 8% of the images (5030/63,868), the bull's-eye did not appear in the frame because of the angle of the tablet in relation to the child, or was sufficiently obscured to preclude accurate estimation of its diameter. Such images led to missing data points. 
Figure 3
 
Example images in which automation failed. (A) Edge of frame and participant partially obscured, (B) edge of frame and silhouetted by window, (C) multiple bull's-eyes present, (D) partially obscured within frame. Participant's faces are grayed-out to conceal their identity.
Estimating Minimum Distance per Trial
Corrected viewing distance data were processed with the MATLAB smooth function, using a lowess model. Fitted distance estimates for each staircase were segmented into individual trials. From each trial segment, we determined the minimum viewing distance, that is, the point at which the child was closest to the target. Example staircases (note the varying duration) are shown in Figure 4. 
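A sketch of the smoothing and per-trial minimum extraction follows; the variable names and smoothing span are assumptions.

```matlab
% Sketch of smoothing one staircase's distance trace and extracting per-trial
% minima (requires Curve Fitting Toolbox for smooth).
% distCm:   raw per-frame distance estimates in cm (NaN where no bull's-eye found)
% trialIdx: trial number (1-16) associated with each frame
valid = ~isnan(distCm);
smoothed = nan(size(distCm));
smoothed(valid) = smooth(find(valid), distCm(valid), 0.1, 'lowess');  % lowess fit

minPerTrial = nan(16, 1);
for t = 1:16
    vals = smoothed(trialIdx == t);
    if any(~isnan(vals))
        minPerTrial(t) = min(vals, [], 'omitnan');   % closest approach this trial
    end
end
```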
Figure 4
 
Examples of children's movement across individual staircases. Trials are highlighted with alternating gray- and white-shaded regions and defined with text in each upper left corner. Frame-level raw viewing distances are represented by black dots and smoothed data by superimposed yellow lines. Minimum viewing distance per trial is represented by a dotted line and text below. In images where the bull's-eye was more than half obscured or out of frame (missing data points), red dots are placed at the recommended viewing distance.
Missing data points (recall that distance data could not be obtained for 8% of the images) meant that we needed to impose rules concerning the degree to which we could infer the child's position. To this end, we evaluated each data point to ensure that we had a measure of distance (an image with a visible bull's-eye) for at least three of the seven surrounding data points (with the data point in question as the central point). If this condition was met, we retained the fitted data for this position; if not, we rejected it. In some cases, limiting our inferences led to an underestimation of how much a child moved forward. Consider Figure 4C as an example; this participant leaned toward the screen considerably on her first trial. The arc of her movement suggests she came closer than the black dots (raw data points) indicate, but because she went out of the image frame, we could not track the entire arc of her movement. Our choice to limit such inferences provides a conservative estimate of the minimum distance on a trial. If all fitted data for a trial were rejected but even a single measured data point existed, we conservatively took this known point as the minimum for that trial. For some trials, no data (fitted or raw) were available at all, and we could not estimate a minimum distance. There were 140 trials (7%) for which we could not derive a minimum distance for the 150-cm task and 49 (3%) for the 40-cm task. 
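The support rule can be expressed compactly as below, continuing the previous sketch; variable names remain assumptions.

```matlab
% Sketch of the 3-of-7 support rule for retaining fitted positions.
measured = ~isnan(distCm);                    % frames with a measured bull's-eye
support  = movsum(double(measured), 7);       % measured frames in a centred 7-frame window
smoothed(support < 3) = NaN;                  % reject fitted points with too little support
% Per-trial minima are then recomputed from the surviving fitted values; if every
% fitted value in a trial is rejected but at least one raw measurement exists,
% that raw value is taken (conservatively) as the trial minimum; otherwise the
% trial has no distance estimate.
```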
Recalculation of Visual Acuity
Using the minimum distances, we recalculated the visual angle of the optotype on a trial-by-trial basis. Optotype size in pixels was converted to logMAR in two ways: our raw method assumed maintenance of the recommended viewing distance, and our adjusted method used the minimum viewing distance of that specific trial to calculate the logMAR value. We plotted either raw or adjusted stimulus size against binary correct/incorrect performance and used PAL_PFML_Fit14 to perform a maximum likelihood fit of a cumulative normal function. From these psychometric functions, we could estimate raw and adjusted acuity thresholds. The α parameter of the psychometric function (which sets possible acuity outcomes) was constrained to lie between −0.3 and 1.5 logMAR, the β (slope) parameter was set to 3.5, the γ (correct guessing rate) to 10% (10-alternative forced choice), and λ (lapse rate) to 1%. Example raw and adjusted staircases are shown in Figure 5 as gray and yellow plots, respectively. 
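The sketch below shows the geometric adjustment implied by this description, together with a Palamedes fit using the constraints listed above; variable names and the exact call pattern are assumptions based on the toolbox's documented interface.

```matlab
% Sketch of the per-trial adjustment and post hoc psychometric fit (Palamedes).
% rawLogMAR and correct are per-trial vectors (stimulus size assuming the
% recommended distance d0, and 1/0 responses); minDist holds trial minima (cm).
d0 = 40;                                          % or 150 for the intermediate task
adjLogMAR = rawLogMAR + log10(d0 ./ minDist);     % closer child => effectively larger stimulus

PF = @PAL_CumulativeNormal;                       % psychometric function
searchGrid.alpha  = -0.3:0.01:1.5;                % possible acuity outcomes (logMAR)
searchGrid.beta   = 3.5;                          % fixed slope
searchGrid.gamma  = 0.10;                         % guess rate (10-alternative task)
searchGrid.lambda = 0.01;                         % lapse rate
paramsFree = [1 0 0 0];                           % only alpha (threshold) is free
OutOfNum   = ones(size(adjLogMAR));               % one presentation per stimulus level
params = PAL_PFML_Fit(adjLogMAR, correct, OutOfNum, searchGrid, paramsFree, PF);
adjustedAcuity = params(1);                       % adjusted threshold in logMAR
```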
Figure 5
 
Examples of children's raw and adjusted visual acuity staircases. Gray markers show raw and yellow adjusted logMAR values. The dotted lines show threshold estimations for the staircase of the corresponding color. Staircases correspond to those depicted in Figure 4.
In cases where fewer than half of the 16 trials had an associated minimum viewing distance (10 cases), we did not calculate an adjusted visual acuity estimate. In one additional case, the child disengaged halfway through a test and answered eight consecutive trials incorrectly, precluding threshold estimation. In total, 5% (11/236) of staircases could not be used to compare raw and adjusted acuity estimates. 
Analysis
We characterize the position of children (and the impact of postural variation on visual acuity) using binned histograms with overlaid probability distributions. We report the mean, standard deviation, skew, and kurtosis for each. We fit Pearson functions to distance data using the MATLAB function pearspdf.m. We then superimpose the mean and standard deviation (solid lines) and mode (dotted line) on each histogram. Skewness is a measure of distribution symmetry, with a normal distribution having 0 skew. A normal distribution has a kurtosis of 3; higher values indicate that the distribution is more peaked or more prone to outliers. Finally, we define "clinical significance" in this context as a change of more than 0.1 logMAR, or one line on a standard eye chart (calculated theoretically based on the impact of the child's movement on visual angle). In each case, we report the percentage of children with clinically significant movement as well as the percentage of the fitted distribution that meets this criterion. 
We then explored how movement was influenced by staircase-level and trial-level factors using linear regression models (MATLAB's fitlm function) followed by ANOVA. At the level of staircases, we explored the impact of participant, recommended viewing distance, stimulus format (regular or vanishing), tested eye, and estimated acuity. At the trial level, we included optotype identity (for example, heart, tree, car), trial number (1–16), and the relative difficulty of the trial within the staircase. 
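A sketch of the staircase-level model is shown below; the table layout and variable names are assumptions, and the trial-level model follows the same pattern with the trial-level predictors added.

```matlab
% Sketch of the staircase-level model (requires Statistics and Machine
% Learning Toolbox; T is an assumed table with one row per staircase).
T.Participant = categorical(T.Participant);
T.Distance    = categorical(T.Distance);      % 40 or 150 cm
T.Format      = categorical(T.Format);        % regular or vanishing
T.Eye         = categorical(T.Eye);           % right or left

mdl = fitlm(T, 'Movement ~ Distance*Format*Eye + Participant + Acuity');
anova(mdl)                                    % ANOVA on the fitted linear model
```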
Results
Positional Change
Initial Position of Child and Tablet
We first assessed the accuracy of initial positioning, which arguably reflects the assessor's setup of the child and tablet more than the behavior of the child. To do this, we used the viewing distance estimate derived from the first image of each staircase. A bull's-eye was identified in the first image of every staircase except five, all from intermediate-distance tasks. The distribution of initial positions for the available data is shown in Figure 6. Overall, initial positioning was accurate for the near task (with a mean starting position of approximately 40 cm and the mode just behind ideal placement). The intermediate-distance task was skewed toward closer distances, with the mode 4 cm, and the mean 6 cm, closer than the recommended 150-cm viewing distance. Note that forward movement of 8 and 30 cm is sufficient to change visual acuity by one line for our near and intermediate tasks, respectively. Using this criterion, only 1% of staircases for the near task were impacted, whereas for the intermediate task there was more of a tendency for the child to be positioned too close, with 4% of staircases at risk of overestimating acuity (even if the child were to remain perfectly still). 
Figure 6
 
Initial positioning of the child. The green dashed line is where the child ought to be at the start of the test. The fitted distribution, mean, and standard deviation are plotted with solid lines, and mode with a dotted line. Binned frequency data are represented by gray circles.
Trial-by-Trial Position
To assess how much our participants moved, we plotted the minimum distance for each trial in Figure 7. Whereas children appeared to be accurately placed for the near task initially, they moved considerably once the test started, with a mean viewing distance of 36.3 cm (±5.0 cm). On 18% of these trials, children moved more than 8 cm closer, putting them at risk of erroneously gaining a line of acuity. The impact of movement on the intermediate task was reduced (9%), but the distribution had a long tail and was clearly skewed toward closer viewing distances. 
Figure 7
 
Position of children on a trial-by-trial basis. The green dashed line is where the child ought to be during the test. The fitted distribution, mean, and standard deviation are plotted with solid gray lines, and mode with a dotted line. Binned frequency data are represented by gray circles.
Child's Movement From Initial Position
If we further express the position of the child in relation to their initial position (rather than the recommended viewing distance), we can see a disproportionate impact of the child's motion (compared to initial positioning errors) in the near (40 cm) tests. Figure 8 summarizes the same data as presented in Figures 6 and 7 but plots them against the child's change from initial position. Positive numbers represent forward motion, and again, the shaded region is the probability of clinically significant movement (8 cm for near and 30 cm for intermediate). Using these criteria, 15% of trials in near-distance tests and only 1% of trials at the longer distance were impacted. Note that the skew of the distribution indicates more forward than backward motion, and the high kurtosis reflects the substantial number of outlying data points in both distributions. 
Figure 8
 
Summary of children's movement in relation to initial position. Fitted distribution, mean, and standard deviation are plotted with solid lines, and mode with a dotted line. The green dashed line represents no movement. Binned frequency data are represented by gray circles.
Impact of Positional Change on Estimated Acuity
As stated above, if we assume that the child used the resolution he or she had available at the closest viewing distance of each trial, we can recalculate thresholds/acuities based on these positions. We can then compare the raw acuity threshold (based on recommended viewing distance) and the adjusted threshold (based on minimum viewing distance per trial). In the example staircases shown in Figure 5, the difference in acuity estimates between the staircases is shown by the gap between the gray and yellow dotted lines. 
The overall effect of movement on all staircases for each distance is summarized in Figure 9. For the near task, we found that 17% of staircases (17 of 100) changed by at least 0.1 logMAR; for 16 staircases, the acuity estimate was poorer after the adjustment (displayed in red in Figure 9), and on one staircase the acuity result improved. The impact on acuity was lower for the intermediate task, with only 8% of staircases changing by more than one line. 
Figure 9
 
Impact of movement on visual acuity estimates. The green dashed line highlights no change in acuity. The fitted distribution, mean, and standard deviation are plotted with solid lines and mode with a dotted line. Binned frequency data are represented by gray circles.
Which Factors Influence Changes in Position
So far, results have been described in terms of individual trials or individual staircases without considering the child, the eye, the acuity in that eye, or the type of stimuli with which the child was tested. However, each participant completed several staircases. It is possible that some children move more than others, which may be related to the acuity in the tested eye. It is also possible that children move more when looking at regular or vanishing optotypes, or at near or intermediate distances. We created a linear regression model to understand factors that influenced movement at the level of the staircase. We included distance, eye, and stimulus format (including all interactions), as well as participant and acuity. The model had an adjusted R² of 0.626. In the subsequent ANOVA, there were no significant interactions, and the only two significant variables were distance (F1 = 111.3, P < 0.001; movement on the intermediate task [mean 20.5 ± 14.5 cm] was greater than on the near task [mean 7.5 ± 5.3 cm]) and participant (F32 = 7.9, P < 0.001; 12 participants were significant contributors to variability, P < 0.05). Figure 10 summarizes this information visually, showing the difference between the raw and adjusted visual acuity estimates (rather than movement) by participant, separated by near and intermediate tasks. Participants whose movement was significant at the 0.05 level (according to the model) are highlighted with a circle. 
Figure 10
 
Difference between raw and adjusted acuity, by participant. Each marker represents the difference in visual acuity threshold between raw and adjusted for a single acuity test. Participants are sorted by mean difference in logMAR acuity across all near and intermediate acuity tasks. Circles denote right eye and squares represent the left eye. Green represents regular optotypes and blue represents vanishing optotypes. Participant numbers are circled if multiple regression produced a P value of less than 0.05 for the participant in terms of absolute movement. Participant numbers are marked with asterisks if at least one 0.1 logMAR difference between raw and adjusted acuity is estimated. Asterisks are positioned above the participant number for near tasks and below for intermediate tasks.
As can be seen in Figure 10, 11 of the 26 participants who completed the near task (42%, each marked with an asterisk above the participant number) accounted for the 17 impacted staircases (staircases that changed more than 0.1 logMAR are labeled with participant numbers within the red-shaded area). The impact was more child specific for the intermediate task; only 4 of the 33 participants (12%, each marked by an asterisk below the participant number) accounted for the 10 impacted staircases, with two participants having all four staircases compromised. 
It is also possible that factors at the level of the trial impacted movement. Trial number reflects the temporal progression through the test; children may move forward or back as the test progresses. How difficult a trial is for the child may also influence whether the child leans in. Because the QUEST procedure adjusts stimulus size based on responses to make each staircase equally difficult, difficulty can only be assessed at the trial level. To address this, we generated a measure of trial difficulty by converting the on-screen stimulus size to a percentage, from the smallest target shown (100% difficulty) to the largest target shown (0% difficulty). Finally, it is possible that optotype identity (which shape was shown on a particular trial) may influence movement. We used a second regression model to investigate these trial-level factors and included the two relevant factors from the staircase-level analysis (viewing distance and participant). Participant and distance remained significant factors, but only difficulty emerged as a relevant trial-level factor influencing movement (F1 = 86.07, P < 0.001). This relationship followed the predicted direction, with smaller, more difficult targets associated with movement toward the screen. 
Discussion
The finding that children move during vision testing despite encouragement to remain still is not surprising. Tidbury and O'Connor found that even cooperative adults moved during acuity testing, and they suggested that the range of movement would be greater for children.2 In line with this suggestion, Huurneman noted that the primary challenge of near acuity testing is that children are eager to move closer to the targets.1 While not surprising, quantifying and describing the effect is critical if it is to be overcome. Indeed, the impact of postural changes was greater than we anticipated; the finding that this could affect 42% of children performing near testing and 12% of those completing intermediate testing should change how we think about visual acuity assessment at these distances. 
Interpretation of Results
For the near task (40 cm), initial positioning was excellent; assessors were successful at getting the child to start at the correct distance. If children had remained in this position, only 1% of staircases would have been in error by more than one line. Perhaps having the examiner confirm the distance at the start of each staircase, or the availability of a tablet stand, facilitated accurate initial positioning; having the assessor hold a near test is known to increase test variability.2 Although positioning was generally accurate at the beginning of the test, we found that children moved forward substantially during near visual acuity testing. When we examine trials individually, 18% were at risk of clinically significant measurement errors. This was distributed across participants, with 42% of children tested having at least one staircase (out of four) in which they moved close enough to compromise the accuracy of their test by more than one line. 
The intermediate-distance testing (150 cm) told a different story. There was more of a tendency for assessors to place children too close to the display. Despite on-screen reminders to recheck the child's distance, 4% of staircases were initiated close enough to the screen to overestimate acuity by more than one line. This may not be attributable to less-diligent measurement by the assessor; shorter-than-recommended viewing distances may have been due to a lack of appropriate testing locations or to poor cooperation from children. Movement of the child after the test began further increased the impact, with 8% of staircases overestimating acuity by more than one line. This effect was less common than at near, with 12% of children impacted (compared with 42% for near). 
We explored various factors that might influence the extent to which children move during a staircase and a trial. Trial difficulty is perhaps an intuitive factor; when a child is struggling to see the target, they are likely to move toward it to get a better look. Likewise, we might assume that some children would be more likely to move forward than others. Indeed, both emerged as significant factors. However, variability in movement could not be further accounted for by optotype identity or format, eye, trial order, or acuity. The remaining variability is likely to reflect factors that we did not measure: perhaps a child's propensity to follow instruction, their response to success and failure, their engagement with the task, or their rapport with the assessor. 
Taken together, because we cannot predict which children will move under what conditions, and because the impact of postural variation is large, our data suggest that keeping track of where the child is in relation to the target is an important part of obtaining an accurate measure of visual acuity. For near tasks (in our case, 40 cm), ongoing monitoring would be needed to improve accuracy. For intermediate-distance tasks (in our case, 150 cm), correcting positioning at the outset would have compensated for about half of the positional errors we found. 
Note that we tested a specific age-group of children (only 7-year-olds). Additionally, we relied upon trained lay screeners, who were required to set up the acuity test in a new environment for each child. These were the constraining parameters of the child cohort study with which we collaborated. This provided a useful natural environment reflecting current needs for community testing. However, expansion of the study across a wider range of participant ages, assessors, and testing environments would be valuable. 
Improving the Accuracy of Visual Acuity Tests in Light of Children's Movement
Stricter enforcement of viewing distance, and perhaps use of a chin rest, in community settings could be helpful. However, with the shift toward testing on electronic devices, more palatable options (such as the use of a built-in camera, described here) are now feasible. Strategies for utilizing such data could include (1) storing the minimum viewing distance per trial and using this to recalculate psychometric functions after testing; (2) displaying real-time estimates of viewing distance to assessors and showing a warning if it falls outside a permissible range (an option that has been used to try to prevent adults from working too close to their smartphones11; sketched below); (3) pausing the test if viewing distance veers too far from the intended viewing distance; or (4) compensating for the minimum distance during the test (either by adjusting the stimulus size in real time or by adjusting the staircase based on the visual angle of the stimulus). Which strategy will prove the most effective is a question for future research. Such research could have added benefits, because the approach also provides an objective record of the testing environment (e.g., background light levels, visual distraction) and of which eye was being tested on a given run (information that can easily be recorded incorrectly by hand). 
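As an illustration only, strategy (2) might take the following shape; the tolerance, helper function, and input variable are assumptions rather than the study's code.

```matlab
% Illustrative sketch of strategy (2): warn the assessor if the live distance
% estimate drifts outside a tolerance band.
targetCm    = 40;                              % recommended distance for this test
toleranceCm = 4;                               % permissible deviation (assumption)

distCm = bullseyeDistance(latestRingPx);       % live estimate (see earlier sketch;
                                               % latestRingPx is a hypothetical input)
if abs(distCm - targetCm) > toleranceCm
    warning('Viewing distance is %.0f cm; expected %d +/- %d cm.', ...
            distCm, targetCm, toleranceCm);    % or pause the staircase instead
end
```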
Potential for Full Automation and Real-Time Tracking
Saving images for retrospective analysis (as described here) is not feasible for all applications. For widespread application, automated estimation of viewing distance would need to be robust enough not to require manual image checking. The challenges to automation we observed included environmental lighting, out-of-frame or obscured bull's-eye targets (missing data), and multiple bull's-eyes or bull's-eye–like targets in a frame (sometimes producing inaccurate data). The first two challenges can themselves impact the accuracy of acuity testing, through screen glare and inappropriate viewing angles, respectively. In these cases, pausing a test until the issue is corrected is likely to have added benefits for test accuracy (albeit extending the duration of testing). Errors arising from the image analysis strategy (in our case, multiple bull's-eye–like targets; in other cases, multiple faces in frame) would require a solution specific to the tracking method. 
Applications
This project was motivated both by a recent recommendation for 150-cm testing for population-wide preschool screening3 and by the increasing importance attached to near testing (40 cm) for the assessment of functional vision.4,5 Although theoretically valuable, such shorter testing distances amplify testing inaccuracy due to children's movement. Restraining a child's head to counteract this limitation is often not feasible. Therefore, compensating for children's movement in the ways outlined here is likely to improve the accuracy of closer visual acuity testing, allowing the benefits of closer viewing distances to be realized. This approach may have other applications. Assessment of stereoacuity, contrast sensitivity, or any other measure that depends on angular subtense likewise requires an accurate viewing distance, and again chin rests are not always feasible. Such tests could therefore also be made more accurate by measuring and compensating for viewing distance. Additionally, some research actively investigates the effect of viewing distance itself (rather than just its effect on the angular subtense of a given visual target). For example, studies of eye strain and mobile phone use,11,20 accommodation,21 and myopia22,23 often require an accurate continuous measure of viewing distance. Currently, viewing distance is estimated in many different ways, including with a side camera,20 face tracking,11 self-report,22 and a specific commercial device.23 A strategy similar to the one described here may prove to be a cost-effective approach for a broad range of research requiring accurate estimation of viewing distance. Experiments not specifically concerned with vision or viewing distance might also benefit from routine viewing-distance tracking. For example, the degree to which a child with attention-deficit hyperactivity disorder moves is a valuable secondary measure for interpreting performance on a wide range of tasks.24 
Conclusion
We conclude first that children's actual position is different enough from recommended testing distance to compromise the accuracy of acuity measurements at 150- and 40-cm testing distances. Second, continuous estimation of viewing distance is feasible on an electronic device with a built-in camera. Taken together, we suggest that measurement of the actual viewing distance should be considered for any visual acuity testing at or closer than 150 cm. The most effective methods for measuring and compensating for children's postural changes (for example, real time versus post hoc), as well as potential applications, merit further investigation. 
Acknowledgments
We thank Jason Turuwhenua for his advice on localizing bull's-eye targets and Theantay Keo for his assistance with data analysis. The authors also thank the families who participated in the study and acknowledge the contributions of the original Growing Up in New Zealand study investigators: Susan M.B. Morton, Polly E. Atatoa Carr, Arier C. Lee, Dinusha K. Bandara, Jatender Mohal, Jennifer M. Kinloch, Johanna M. Schmidt, Mary R. Hedges, Vivienne C. Ivory, Te Kani R. Kingi, Renee Liang, Lana M. Perese, Elizabeth Peterson, Jan E. Pryor, Elaine Reese, Elizabeth M. Robinson, Karen E. Waldie, Clare R. Wall. 
This project was supported by Cure Kids NZ and The Robert Leitl Trust. Growing Up in New Zealand has been funded by the New Zealand Ministries of Social Development, Health, Education, Justice and Pacific Island Affairs; the former Ministry of Science Innovation and the former Department of Labour (now both part of the Ministry of Business, Innovation and Employment); the former Ministry of Women's Affairs (now the Ministry for Women); the Department of Corrections; the Families Commission, the Social Policy Evaluation and Research Unit; Te Puni Kokiri; New Zealand Police; Sport New Zealand; the Housing New Zealand Corporation; and the former Mental Health Commission, The University of Auckland, and Auckland UniServices Limited. Other support for the study has been provided by the Health Research Council of New Zealand, Statistics New Zealand, the Office of the Children's Commissioner, and the Office of Ethnic Affairs. 
The views reported in this paper are those of the authors and do not necessarily represent the views of the Growing Up in New Zealand investigators who are not authors of this paper. 
Disclosure: L.M. Hamm, None; K. Mistry, None; J.M. Black, None; C.C. Grant, None; S.C. Dakin, None 
References
1. Huurneman B, Boonstra FN. Assessment of near visual acuity in 0–3 year olds with normal and low vision: a systematic review. BMC Ophthalmol. 2016; 16: 215.
2. Tidbury LP, O'Connor AR. Testing vision testing: quantifying the effect of movement on visual acuity measurement. Eye (Lond). 2015; 29: 129–135.
3. Cotter SA, Cyert LA, Miller JM, et al. Vision screening for children 36 to <72 months: recommended practices. Optom Vis Sci. 2015; 92: 6–16.
4. Bušić M, Bjeloš M, Petrovečki M, et al. Zagreb Amblyopia Preschool Screening Study: near and distance visual acuity testing increase the diagnostic accuracy of screening for amblyopia. Croat Med J. 2016; 57: 29–41.
5. Jin P, Zhu J, Zou H, et al. Screening for significant refractive error using a combination of distance visual acuity and near visual acuity. PLoS One. 2015; 10: e0117399.
6. Bourne RRA, Flaxman SR, Braithwaite T, et al. Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis. Lancet Glob Health. 2017; 5: e888–e897.
7. Yamada T, Hatt SR, Leske DA, et al. A new computer-based pediatric vision-screening test. J AAPOS. 2015; 19: 157–162.
8. Beck RW, Moke PS, Turpin AH, et al. A computerized method of visual acuity testing. Am J Ophthalmol. 2003; 135: 194–205.
9. Aslam TM, Tahir HJ, Parry NRA, et al. Automated measurement of visual acuity in pediatric ophthalmic patients using principles of game design and tablet computers. Am J Ophthalmol. 2016; 170: 223–227.
10. Holmes JM, Beck RW, Repka MX, et al. The amblyopia treatment study visual acuity testing protocol. Arch Ophthalmol. 2001; 119: 1345–1353.
11. Ho J, Pointner R, Shih HC, et al. Eye protector: encouraging a healthy viewing distance when using smartphones. In: MobileHCI 2015–Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services. Copenhagen: ACM; 2015: 77–85.
12. Kleiner M, Brainard D, Pelli D. What's new in Psychtoolbox-3? Perception. 2007; 36: 1–16.
13. Watson AB, Pelli DG. QUEST: a Bayesian adaptive psychometric method. Percept Psychophys. 1983; 33: 113–120.
14. Prins N, Kingdom FAA. Palamedes: MATLAB routines for analyzing psychophysical data. 2009. Available at: http://www.palamedestoolbox.org.
15. Hamm LM, Yeoman JP, Anstice N, Dakin SC. The Auckland Optotypes: an open-access pictogram set for measuring recognition acuity. J Vis. 2018; 18: 13.
16. Frisen L. Vanishing optotypes. New type of acuity test letters. Arch Ophthalmol. 1986; 104: 1194–1198.
17. Shah N, Dakin SC, Redmond T, Anderson RS. Vanishing optotype acuity: repeatability and effect of the number of alternatives. Ophthalmic Physiol Opt. 2011; 31: 1–22.
18. Shah N, Anderson R, Tufail A, Egan C, Dakin S. Visual acuity loss in patients with AMD, measured using a vanishing optotype letter chart. Invest Ophthalmol Vis Sci. 2013; 54: 5021.
19. Adoh TO, Woodhouse JM, Oduwaiye KA. The Cardiff test: a new visual acuity test for toddlers and children with intellectual impairment. A preliminary report. Optom Vis Sci. 1992; 69: 427–432.
20. Long J, Cheung R, Duong S, Paynter R, Asper L. Viewing distance and eyestrain symptoms with prolonged viewing of smartphones. Clin Exp Optom. 2017; 100: 133–137.
21. Yeo ACH, Atchison DA, Schmid KL. Children's accommodation during reading of Chinese and English texts. Optom Vis Sci. 2013; 90: 156–163.
22. Ip JM, Saw SM, Rose KA, et al. Role of near work in myopia: findings in a sample of Australian school children. Invest Ophthalmol Vis Sci. 2008; 49: 2903–2910.
23. Haro C, Poulain I, Drobe B. Investigation of working distance in myopic and non-myopic children. Optom Vis Sci. 2000; 77: 189.
24. Hall CL, Valentine AZ, Groom MJ, et al. The clinical utility of the continuous performance test and objective measures of activity for diagnosing and monitoring ADHD in children: a systematic review. Eur Child Adolesc Psychiatry. 2016; 25: 677–699.