September 2024
Volume 13, Issue 9
Open Access
Low Vision Rehabilitation
Estimating Visual Acuity Without a Visual Acuity Chart
Author Affiliations & Notes
  • Yueh-Hsun Wu
    College of Optometry, The Ohio State University, Columbus, OH, USA
    Department of Psychology, University of Minnesota, Minneapolis, MN, USA
  • Deyue Yu
    College of Optometry, The Ohio State University, Columbus, OH, USA
  • Judith E. Goldstein
    Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
  • MiYoung Kwon
    Department of Psychology, Northeastern University, Boston, MA, USA
  • Micaela Gobeille
    The New England College of Optometry, Boston, MA, USA
  • Emily Watson
    College of Optometry, The Ohio State University, Columbus, OH, USA
  • Luc Waked
    College of Optometry, The Ohio State University, Columbus, OH, USA
  • Rachel Gage
    Department of Psychology, University of Minnesota, Minneapolis, MN, USA
  • Chun Wang
    Measurement and Statistics, College of Education, University of Washington, Seattle, WA, USA
  • Gordon E. Legge
    Department of Psychology, University of Minnesota, Minneapolis, MN, USA
  • Correspondence: Yueh-Hsun Wu, College of Optometry, The Ohio State University, 338 W 10th Ave., Columbus, OH 43210, USA. e-mail: wu.6239@osu.edu 
Translational Vision Science & Technology September 2024, Vol. 13, 20. https://doi.org/10.1167/tvst.13.9.20
Abstract

Purpose: This study explored whether visual acuity (VA) can be inferred from self-reported ability to recognize everyday objects using a set of yes/no questions.

Methods: Participants answered 100 yes/no questions designed to assess their ability to recognize familiar objects at typical viewing distances, such as distinguishing between a full moon and a half moon on a clear night. The questions demanded VA levels ranging from normal vision to severe visual impairment. Responses were analyzed using item response theory, and the results were compared with participants' measured VA values. 

Results: We recruited 385 participants from 4 sites in the United States. Participants had a mean age of 56.7 years with VA ranging from −0.3 to 2.0 logarithm of the minimum angle of resolution (logMAR) (mean = 0.58). A strong relationship was observed between participants' estimated vision ability and their VA (r = −0.72). The linear relationship can be used to predict each participant's VA based on their estimated vision ability. The average signed and unsigned prediction errors were 0 and 0.24 logMAR, respectively, with a coefficient of repeatability of 0.59 logMAR between the estimated VA and measured VA. The same linear function was used to determine the VA limit required for each question. For instance, the VA limit for the moon question was 1.0 logMAR.

Conclusions: Yes/no questions about everyday visual activities have the potential to estimate an individual's VA. Future refinements may enhance reliability.

Translational Relevance: The survey provides insights into the real-world visual capabilities of people with low vision, making it potentially useful for telehealth applications.

Introduction
Visual acuity (VA) is a measure of an individual's ability to see small details. It has been widely used to screen for vision disorders, assess treatment effectiveness, and determine eligibility for rehabilitation services. Common methods for measuring VA use letter charts, such as the Snellen and Early Treatment Diabetic Retinopathy Study (ETDRS) charts, or optotype charts such as the Landolt C and tumbling E. It is often assumed that VA measurements from a letter chart are informative about a person's everyday visual capabilities. This study was designed to develop a VA survey to establish the relationship between measured values of VA and daily viewing experiences. 
Past visual function questionnaires (VFQs) have validated the relationship between VA and people's responses. For example, Goldstein et al.1 found that VA was associated with low vision patients' estimated vision abilities on the Activity Inventory2–4 for daily living tasks, such as reading-related activities (e.g., reading medication labels), activities involving seeing objects (e.g., seeing food on a plate), visual-motor tasks (e.g., threading needles), and mobility (e.g., getting in the bathtub). VA was also found to be associated with overall scores and with scores on the near-activities and distance-activities subscales of the National Eye Institute VFQ-25 in people with low vision.5,6 Similar findings were observed with other VFQ instruments, such as the Activities of Daily Vision Scale,7 the 14-item Visual Functioning Index,8 the Visual Activities Questionnaire,9 and the Impact of Vision Impairment.10 
Numerous studies have shown that people with a lower VA are more impaired in specific activities of daily living. For example, West et al.11 found that people with a VA of worse than 20/40 experience difficulty with reading and face recognition. In the same study, mobility was not significantly affected until the VA was worse than 20/200. Rubin et al.12 found that VA was associated significantly with self-reported difficulties reading street signs and watching television after controlling for demographic factors and other vision measurements, like contrast sensitivity, field loss, and glare sensitivity. Arditi et al.13 also demonstrated that VA was associated with properties of visual imagery, including spatial resolution and viewpoint. 
Other studies have developed questionnaires to screen for below-normal acuity. Coren and Hakstian14 demonstrated that responses to two questions regarding difficulty reading book text and recognizing faces could be used to detect whether a participant had a VA worse than 20/40. Owen et al.15 also found that older people with worse overall scores on the National Eye Institute VFQ-25 were more likely to have a VA worse than 20/60. 
These findings suggest a correlation between VA and people's subjective experiences of seeing objects or recognizing object details in daily life. However, no studies have demonstrated that a participant's VA measure could be estimated by their answers to questions concerning their ability to see everyday objects. 
Moreover, previous studies using VFQs or VA screening questions have not considered viewing distance explicitly, which is a critical element of VA measurement. Associating acuity with daily viewing experience requires considering both object size and viewing distance, because the visual angle, and hence the retinal image size, of a familiar object depends on how far away it is viewed. A person with a VA of 20/40 may recognize a friend's face from 6 feet away, but not across a street. 
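To make the geometry explicit, here is a worked example using the standard visual-angle relationship (not data from this study): an object of height h viewed at distance d subtends a visual angle

\[
\theta = 2\arctan \left( \frac{h}{2d} \right) \approx \frac{h}{d}\ \text{radians for small angles.}
\]

A standard 20/20 letter subtends 5 arcmin at its test distance, so at 6 m its height is about 6000 mm × tan(5/60°) ≈ 8.7 mm; halving the viewing distance doubles the angular size of the same object. 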
We designed a set of 100 yes/no questions about a participant's visual experience in seeing everyday objects at particular viewing distances. The questions were selected to span a wide range of demands on acuity. The answers were analyzed by item response theory (IRT). The analysis yielded a vision ability measure for each participant, which we expected to be related closely to acuity. The analysis also produced item difficulty scores, which we expected to be related to the acuity requirements associated with each item. 
Such a questionnaire could be useful in the remote evaluation or self-assessment of VA, linking numerical estimates with real-world seeing abilities. Its development could streamline screening for patient services, particularly in telemedicine, where standard vision tests may not be available. 
Methods
Participants
Participants were recruited between June 2019 and February 2022 from four sites: the University of Minnesota, The Ohio State University, Johns Hopkins University, and the University of Alabama at Birmingham. The research protocol was approved by the institutional review boards at the four sites, and the study followed the tenets of the Declaration of Helsinki. Researchers contacted participants from previous studies or patients at each site. All participants provided either signed or electronic consent before participating in the study. The inclusion criteria were age 18 years or older; availability of VA measurement data from laboratory, clinic, or electronic health records; and the ability to hear and respond to questions in English, in person or over the phone. No specific requirements for VA, visual field, or diagnosis were imposed because the aim was to evaluate a wide range of vision ability and VA. 
VA and survey responses were entered into a database at each center and later combined for analysis. The majority of the VA measurements were performed using the Bailey–Lovie or ETDRS charts, except for some measurements using Snellen charts. All VA results were converted into logarithm of the minimum angle of resolution (logMAR) units for data analysis. Participants with a wide range of VA were recruited. 
VA Survey
The 100 yes/no questions assessed participants' self-reported recognition of object details; for example: "Would you be able to see the eye of a needle, if you hold it at arm's length?" In this example, a commonly seen object, the eye of a needle, is viewed at an approximately standard distance, in this case arm's length. Participants were asked to answer each question assuming good lighting and normal viewing conditions. 
The development of the survey was an iterative process in which the Minnesota team members developed a pool of candidate items spanning a wide range of acuity levels. Items were selected to ensure the scenarios would be broadly familiar to most U.S. participants and to minimize the inclusion of potentially unfamiliar objects. The chosen items were then subjected to informal pilot testing with both normally sighted and low-vision individuals, leading to further refinement through the addition and removal of items based on feedback from these tests. An example of a removed item was, "Can you see someone's footprints in the snow on a snowy day?" The full list of 100 questions can be found in Supplementary Table S1. 
Participants completed the VA survey with an interviewer over the phone or in a clinic at our testing sites. The survey took 30 to 60 minutes to complete. Every participant was encouraged to give either a yes or no response to each question. However, participants were allowed to give a not applicable response if the scenario in the question was unfamiliar. In addition to the 100 yes/no questions, demographic information was collected, including race, ethnicity, age, etiology of visual impairment, age of onset of visual impairment, and the presence of central or peripheral visual field loss. If available, contrast sensitivity was also reported. 
IRT and Analysis
IRT was performed to estimate the relationship between a participant's latent trait (θ; see Equation 1) and responses to the VA survey items. Here, the latent trait is the vision ability to recognize object details. One form of IRT, Rasch analysis,16 has been used to analyze data from past VFQs for evaluating people's vision abilities to perform daily tasks. For example, some VFQs, including the Activity Inventory,2 estimate people's vision ability for performing daily activities using Rasch analysis. However, the estimated vision abilities from prior studies were designed primarily to measure general performance of vision-related tasks, whereas the current study targets a participant's ability to see object details specifically. 
Unlike Rasch analysis, which constrains all item discrimination parameters (a in Equation 1) to be equal, the two-parameter logistic (2PL) IRT model17 used in the current study allowed both the item difficulty parameter (b in Equation 1) and the item discriminability parameter (a in Equation 1) to vary across items. Equation 1 shows how the interaction between the participant's vision ability (θ) and the two item parameters can be represented by a nonlinear probability function called the item characteristic curve (Fig. 1). 
Figure 1.
 
Examples of different item characteristic curves (ICCs). (A) Three ICCs represent different item difficulty levels when sharing the same item discriminability. (B) Three ICCs show different levels of item discriminability when sharing the same level of item difficulty.
An item characteristic curve depicts the probability of a yes response to an item at different levels of vision ability θ. In the current study, the latent trait (θ) is a participant's vision ability to see object details in daily life, with higher θ values indicating better vision ability. We hypothesize that a person's value of θ is associated with their logMAR acuity measured on the acuity chart.  
\[
P\left( X = 1 \mid \theta, a, b \right) = \frac{e^{a\left( \theta - b \right)}}{1 + e^{a\left( \theta - b \right)}}, \qquad (1)
\]
where X is a participant's response to an item (X = 1 for yes; X = 0 for no); P is the probability of a yes response; θ is the participant's estimated vision ability; a is the item's discriminability; and b is the item's difficulty. 
The IRT model's prediction of a participant's yes/no response (X) to a VA survey item depends on both the participant's vision ability (θ) and the item characteristics (a and b). If a participant's estimated vision ability (θ) is higher than an item's difficulty, the model predicts that the participant has a greater than 50% chance of saying yes to it. In the current study, a more difficult item places a greater demand on acuity through smaller object size or greater viewing distance. Items with higher discriminability have item characteristic curves with steeper slopes (Fig. 1B): for such items, the probability of a yes response increases more rapidly as vision ability (θ) increases. 
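As a minimal illustration, Equation 1 can be written directly in R (the language used for the analysis below); the parameter values here are hypothetical, not estimates from the survey:

```r
# 2PL item characteristic curve (Equation 1): probability of a "yes"
# response given vision ability theta, discriminability a, and difficulty b
p_yes <- function(theta, a, b) {
  exp(a * (theta - b)) / (1 + exp(a * (theta - b)))
}

p_yes(theta = 0, a = 2, b = 0)  # 0.50: ability equal to difficulty
p_yes(theta = 1, a = 2, b = 0)  # ~0.88: ability above difficulty
p_yes(theta = 1, a = 5, b = 0)  # ~0.99: same ability, steeper (more discriminating) item
```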
The ‘mirt’ package (version 1.40)18 in R was used to build the 2PL IRT model. After each participant's vision ability (θ) was estimated using the 2PL model, a linear regression was fitted between the estimated vision abilities and measured VA. The linear function converted each participant's vision ability into an estimated VA. The same linear function was used to convert each item's difficulty into an estimated VA threshold for that item. 
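A sketch of this pipeline, assuming a data frame resp of 0/1 item responses (one row per participant, one column per item) and a vector va of measured logMAR acuities; these object names are illustrative, not taken from the study's code:

```r
library(mirt)

# Fit the unidimensional 2PL model to the 100 yes/no items
mod <- mirt(resp, model = 1, itemtype = "2PL")

# Item parameters: a = discriminability, b = difficulty
items <- coef(mod, IRTpars = TRUE, simplify = TRUE)$items

# Each participant's estimated vision ability (theta)
theta <- fscores(mod, method = "EAP")[, 1]

# Linear mapping from vision ability to measured VA (Equation 2)
fit <- lm(va ~ theta)
est_va <- predict(fit)  # estimated VA per participant

# The same line converts item difficulties into item VA thresholds
item_va <- predict(fit, newdata = data.frame(theta = items[, "b"]))
```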
Results
Sample Characteristics
Three hundred eighty-five participants with VA ranging from −0.3 to 2.0 logMAR (mean = 0.58; SD = 0.44; median = 0.51) were recruited from the four testing sites. Supplementary Figure S1 shows a histogram of VA. The Table lists the sample characteristics across the four sites. 
Table.
 
Characteristics of Participants Across The Four Testing Sites
The 2PL IRT Results
Figure 2A shows a Wright map with the distributions of vision abilities and item difficulties aligned on the same dimension. The mean vision ability for the sample was 0.01 logits (SD = 0.94 logits), ranging from −2.81 to 2.03 logits. The item difficulty estimates ranged from −2.65 to 1.29 logits (mean = 0.81 logits; SD = 0.82 logits). Eighty-six percent of participants had estimated vision abilities within the range of item difficulties in the VA survey. The remaining 14% had estimated vision abilities above the maximum item difficulty, except for one participant whose estimated vision ability fell below the minimum item difficulty. For individuals whose estimated vision ability exceeded the maximum item difficulty, the average measured VA was 0.05 logMAR (SD = 0.17). The most difficult item (b = 1.29) was "Would you be able to see the eye of a needle if you hold it at arm's length?" The easiest item (b = −2.65) was "If you are standing in the middle of a parking space in a parking lot, would you be able to see the white lines that border the spot you are standing in?" 
Figure 2.
 
(A) Distributions of estimated thetas and item difficulty. (B) The scatterplot shows the relationship between the estimated item difficulty and item discriminability.
Figure 2B shows the relationship between the estimates of item difficulty and item discriminability. The items on the two ends of the distribution of difficulty had lower discriminability than those in the middle. The average item discriminability estimate was 2.91 (SD = 0.76), ranging from 1.43 to 5.45. The item with the highest discriminability was “If a toothbrush is sitting on the counter in front of you, can you see which end has the bristles?” (a = 5.45, b = −1.31). The item, “When holding a soccer ball an arm's length away, are you able to see the individual black hexagons among the white hexagons that make up the ball?” had the lowest discriminability (a = 1.43, b = −2.22). 
IRT enables modeling of the data and thus evaluation of goodness of fit for items and participants. We evaluated item fit using the signed chi-squared test.6 Five of the 100 items (Q12, Q23, Q35, Q49, and Q79) had significantly worse fit (P < 0.05). Among these five, the item "Can you see if the person eating dinner directly across the table from you is wearing glasses?" had the poorest fit (P < 0.01). The goodness of fit for individuals in our 2PL model was quantified using the Zh statistic,7 a standardized index developed to quantify whether a participant's response pattern fits the IRT model. Participants with a Zh value lower than −1.96 or higher than 1.96 were considered misfits; 60 of the 385 participants met this criterion. For the purpose of evaluating the development of the survey, items and participants with poor fit were not excluded from the following analyses. 
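Both diagnostics are available in the mirt package; a sketch, assuming the fitted mod from above (column names follow mirt's conventions and should be checked against the installed version):

```r
# Item fit: S-X2 (signed chi-squared) statistic, mirt's default
ifit <- itemfit(mod)
misfit_items <- ifit[ifit$p.S_X2 < 0.05, ]  # e.g., 5 items flagged in this study

# Person fit: standardized Zh statistic
pfit <- personfit(mod)
misfit_persons <- which(abs(pfit$Zh) > 1.96)  # e.g., 60 participants flagged
```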
Estimating VA From Estimated Vision Abilities
After each participant's vision ability (θ) was estimated, a linear regression model was fitted between vision abilities and measured VA. In this model, θ was a significant predictor of measured VA (slope = −0.33; 95% confidence interval, −0.37 to −0.30; r = −0.73; R² = 0.53) (Fig. 3). The linear relationship (Equation 2) between estimated vision abilities and measured VA was used to transform each participant's estimated θ into an estimated VA. For example, a person with an estimated θ of 0 would have an estimated VA of 0.59 logMAR.  
\[
\text{Estimated VA} = 0.59 - 0.33 \times \text{vision ability}\ (\theta) \qquad (2)
\]
 
Figure 3.
 
The relationship between vision abilities and measured VA (logMAR).
Figure 4 shows the relationship between the estimated VA and the measured VA. The correlation between estimated VA and measured VA was significant (r = 0.73; P < 0.001). Signed prediction errors were calculated by taking the differences between estimated VA and measured VA, whereas unsigned prediction errors were the absolute values of signed prediction errors. 
Figure 4.
 
The relationship between estimated VA and measured VA. Estimated VA values were transformed from vision ability estimates.
The average signed and unsigned prediction errors were 0 logMAR (SD = 0.30) and 0.24 logMAR (SD = 0.19), respectively. Figure 5A shows the agreement between the estimated VA and the measured VA. The coefficient of repeatability (at the 95% level) between the estimated VA and the measured VA was 0.59 logMAR, and 60% of participants had a prediction error within the ±0.24 logMAR range. We also found a significant correlation between signed prediction errors and measured VA (r = −0.69; P < 0.01), suggesting that participants with better VA tended to have estimated VAs worse than their actual measurements, whereas those with worse measured VA were more likely to have estimated VAs better than their measured VA. A significant positive correlation was also found between the unsigned prediction error and measured VA (r = 0.33; P < 0.001) (Fig. 5B), indicating that worse VA was associated with larger prediction errors. 
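Continuing the sketch above (with est_va and va as defined there), these agreement statistics follow directly from the two VA vectors, using the standard 1.96 × SD definition of the coefficient of repeatability:

```r
signed_err   <- est_va - va   # estimated minus measured VA
unsigned_err <- abs(signed_err)

mean(signed_err)    # ~0 logMAR (the regression centers the errors)
mean(unsigned_err)  # 0.24 logMAR in this study

# Coefficient of repeatability at the 95% level
cor95 <- 1.96 * sd(signed_err)  # 1.96 x 0.30 ~= 0.59 logMAR
```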
Figure 5.
 
(A) The plot represents the agreement between the estimated and measured VA. The signed prediction errors are calculated as the differences between the estimated VA and the measured VA. The solid black line represents the mean of the signed prediction errors. The two black dashed lines indicate the range within ±1.96 SD of the signed prediction errors. The two red dash-dot lines indicate the range of average unsigned prediction errors. (B) The scatterplot shows the relationship between measured VA and unsigned prediction errors.
VA Thresholds for Items
The same linear equation (Equation 2) can also be used to convert each item's estimated difficulty (in logits), the point at which a participant has a 50/50 chance of a yes response, into a VA threshold in logMAR. The transformed VA threshold indicates the VA at which a participant had a 50/50 chance of saying yes or no; a participant with a VA better than the threshold was more likely to say yes to the item. For example, the item "Would you be able to see the eye of a needle if you hold it at arm's length?" had an estimated VA threshold of 0.15 logMAR, the most demanding threshold among the 100 items. In contrast, the item "If you are standing in the middle of a parking space in a parking lot, would you be able to see the white lines that border the spot you are standing in?" had the least demanding threshold, an estimated 1.47 logMAR. 
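These thresholds follow from applying Equation 2 to the item difficulties reported above:

\[
0.59 - 0.33 \times 1.29 \approx 0.16\ \text{logMAR (needle item)}, \qquad 0.59 - 0.33 \times (-2.65) \approx 1.46\ \text{logMAR (parking item)},
\]

consistent with the reported 0.15 and 1.47 logMAR up to rounding of the regression coefficients. 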
Some items had estimated VA thresholds close to frequently used VA screening criteria. For example, the item "If you are standing on a sidewalk, would you be able to see an ant that you are about to step on?" had an estimated VA threshold of 0.29 logMAR, close to the 0.3 logMAR (Snellen 20/40) boundary often used for impaired vision. Another item, "When looking at a bunch of bananas on a table in front of you, are you able to count the individual bananas?" had an estimated VA threshold of 1.0 logMAR (Snellen 20/200), close to the criterion for legal blindness in the United States. The threshold VA values for all 100 items are shown in Supplementary Table S2. 
Discussion
The current study aimed to determine whether a person's VA can be estimated from self-reported answers to questions about their daily viewing experiences. The VA survey includes 100 yes/no questions asking whether a participant can see an object at a given viewing distance. Using IRT to estimate participants' vision abilities from their responses, we observed a strong correlation (r = −0.73) between measured logMAR VA and the estimated vision abilities. We further transformed the estimated abilities into estimated VA using the linear relationship between the two. The average unsigned prediction error of our method was 0.24 logMAR, which translates to approximately 2 lines (12 letters) on a typical VA chart, and the coefficient of repeatability (95%) of 0.59 logMAR is larger than past test–retest reliability findings for VA charts in people with low vision (coefficient of repeatability = 0.2 logMAR).19 
Although previous VFQs have assessed the relationship between VA and overall visual ability to perform daily tasks, our VA survey was designed specifically to measure recognition of object details associated with VA. As expected, the correlation between estimated vision ability and VA from our VA survey was greater than that reported for other VFQ instruments with similar VA ranges, such as the Activity Inventory (r = 0.57, VA median = 0.56 logMAR, interquartile range = 0.38–0.85 logMAR20; r = 0.6, VA range = 0.08–1.64 logMAR21), the National Eye Institute VFQ-25 in low vision (r = 0.52)6 and in patients with age-related macular degeneration (r = −0.50; VA range = 0.18–1.2 logMAR),5 the Activities of Daily Vision Scale (r = 0.20; mean VA = 0.87 logMAR; VA range not reported),7 the 14-item Visual Functioning Index (r = 0.27; VA range = 0 logMAR to hand motion),8 the Visual Activities Questionnaire (r = 0.14–0.36; VA range not reported),9 and the Impact of Vision Impairment (R² = 0.19; VA range = 0–1.3 logMAR).10 The more robust correlation observed in the present study underscores the advantage of including viewing distance and object size in a self-reported questionnaire when the focus is on VA rather than on overall visual ability. 
In contrast with other VFQs, the VA survey estimates both item difficulty and discriminability using a 2PL IRT model. Incorporating a discriminability parameter is especially advantageous in real-world testing contexts because it relaxes the assumption that all items discriminate equally. Moreover, the 2PL model offers a more detailed understanding of item performance, facilitating informed decisions about future item revision and selection. Nevertheless, owing to the additional parameter, the 2PL model usually demands larger sample sizes than Rasch analysis to ensure stable parameter estimation. 
Although the questions in our survey were designed to target participants' VA, it is likely that other factors, such as contrast sensitivity, also influenced their responses. Among the subset of participants with contrast sensitivity measured (n = 291), we found a correlation of −0.51 (P < 0.001) between VA and contrast sensitivity, consistent with previous findings (Xiong et al.,22 r = −0.43 to −0.78; Goldstein et al.,23 r = −0.52). Some survey items could be considered low-contrast tasks, akin to real-world scenarios in which both angular size and contrast contribute to visibility. For example, the item "If you accidentally drop a credit card on the floor, would you be able to see where it is to pick it up?" might place different demands on contrast sensitivity depending on the contrast between the card and the flooring. It is therefore expected that contrast sensitivity sometimes interacts with VA in determining object visibility. Similarly, Owsley and Sloane24 found that contrast sensitivity was associated with real-life object identification and recognition (i.e., faces, road signs, and commonplace objects). However, in a subgroup of participants with VA ranging from 0.3 to 0.4 logMAR who also had contrast sensitivity measured (n = 33), there was no significant correlation between contrast sensitivity and unsigned prediction errors (r = −0.3; P > 0.05). Further studies are needed to determine the precise impact of VA versus contrast sensitivity on participant responses to our items. 
Four additional factors may contribute to the strong yet imperfect mapping between the estimated abilities from our questionnaire and chart measurements of VA. First, letter chart acuity may not entirely represent everyday recognition of object detail. Second, different acuity charts may generate different results.25 Third, at two sites (The Ohio State University and Johns Hopkins University), some VA data were extracted from existing electronic health records and may, in some cases, have been measured with Snellen charts. As many as 20% to 30% of our study participants might have been tested with Snellen charts, which could result in greater prediction errors because the Snellen chart includes fewer letters, non-Sloan letters, and greater increments in size between lines at poorer VA levels. Finally, VA is typically tested under well-controlled conditions with a fixed viewing distance and proper lighting, whereas recognizing object details in real life occurs under a wider range of viewing conditions. 
The misfit results showed that about 15% of our participants and five items had significant misfit. Person misfit in IRT occurs when an individual's responses deviate from the model's predictions based on their estimated ability; item misfit occurs when the response pattern for an item across all participants does not align with the model's predictions. Removing items with higher misfit and adding items to expand the coverage of the VA survey could improve the utility of the survey tool. The 60 participants with significant misfit had a larger unsigned prediction error (mean = 0.33 logMAR) than the other participants (mean = 0.22 logMAR; t(75.71) = 3.71, P < 0.05). Further evaluation of participants with higher misfit values (e.g., for the presence of scotomas or peripheral visual field loss) could also help improve prediction accuracy. 
Limitations
Similar to other VFQs,26,27 we observed less accurate predictions for participants outside the range of VA thresholds covered by the survey. Participants with VA better than 0.15 logMAR or poorer than 1.47 logMAR showed poorer predictions owing to the lack of items in these ranges. The increased unsigned prediction errors in participants with worse VA also represent a limitation of the survey for this group. However, the range of the current survey covers the majority of patients with low vision in the United States.23,28 This limitation could be addressed by adding items targeting acuity outside the range covered in our survey. 
The significant linear relationship between signed prediction errors and measured VA (Fig. 5A) indicates that the linear transformation used in the current study could yield worse predicted VA for participants with better measured VA and better predicted VA for those with worse measured VA. A nonlinear mapping between the scale of vision abilities and the scale of logMAR VA could possibly improve the VA predictions, as could an additional regression correction applied to the predicted VA. 
Our study found that measured VA explained about 53% of the variance in estimated vision abilities (R² = 0.53) across a heterogeneous group with a wide range of acuities. However, we did not explore other influencing factors, such as specific diagnoses or central or peripheral visual field loss. Further research is needed to understand other key factors affecting individuals' ability to discern objects in daily situations. 
The testing time for the 100-item VA survey ranged from 30 to 60 minutes, which may contribute to response fatigue. We plan to develop a computerized adaptive testing method that may decrease the time needed to estimate a participant's VA by automatically selecting informative items contingent on prior responses, while excluding the misfit items identified in the current study. 
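A standard approach to such adaptive selection, offered here as a plausible sketch rather than the authors' implementation, is to administer at each step the item with maximum Fisher information at the current ability estimate; for a 2PL item, I(θ) = a²P(θ)[1 − P(θ)]. This sketch reuses p_yes and the items matrix from the earlier sketches:

```r
# Fisher information of a 2PL item at ability theta
item_info <- function(theta, a, b) {
  p <- p_yes(theta, a, b)
  a^2 * p * (1 - p)
}

# Choose the most informative item not yet administered
next_item <- function(theta_hat, items, administered) {
  info <- item_info(theta_hat, items[, "a"], items[, "b"])
  info[administered] <- -Inf  # skip items already asked (or flagged as misfits)
  which.max(info)
}
```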
Although previous tools for testing VA at home, such as phone apps29,30 or printed charts,31 have demonstrated test–retest reliability, they generally rely on optotypes similar to those used in ETDRS or Snellen charts and often require access to specific technologies or physical materials. This barrier can be significant for individuals with low vision who may not have easy access to these apps or the ability to use printed materials. In contrast, our VA survey can easily be administered over the phone or via a website, making it accessible to a broader audience. The VA survey not only offers a practical alternative, but also validates the link between letter acuity tests and everyday visual experiences. Additionally, the estimated VA thresholds for the items in our survey provide a direct association between real-life situations and specific VA measurements, enhancing the usefulness and relevance of VA in assessing visual function in everyday settings. 
Conclusions
The current study examined the relationship between VA and participants' real-life viewing experiences. By analyzing participants' responses to 100 questions using IRT, we found a strong correlation between the estimated abilities and the measured VA, and the resulting linear relationship can be used to transform estimated abilities into estimated VA. The VA survey shows promise for clinical applications and as a screening tool within telehealth and broader public health settings. Further work is needed to enhance the reliability and applicability of this approach. Future research should focus on adding questionnaire items to extend the range of acuities covered and on considering additional factors such as contrast sensitivity. Developing a computerized adaptive testing method could also reduce testing time, making the VA survey more practical for clinical and telehealth use. 
Acknowledgments
Supported by NIH grants R01 EY002934 and R01 EY025658. 
Disclosure: Y.-H. Wu, None; D. Yu, None; J.E. Goldstein, None; M. Kwon, None; M. Gobeille, None; E. Watson, None; L. Waked, None; R. Gage, None; C. Wang, None; G.E. Legge, None 
References
1. Goldstein JE, Chun MW, Fletcher DC, Deremeik JT, Massof RW; Low Vision Research Network Study Group. Visual ability of patients seeking outpatient low vision services in the United States. JAMA Ophthalmol. 2014; 132(10): 1169–1177, doi:10.1001/jamaophthalmol.2014.1747.
2. Massof RW, Ahmadian L, Grover LL, et al. The Activity Inventory: an adaptive visual function questionnaire. Optom Vis Sci. 2007; 84(8): 763–774, doi:10.1097/OPX.0b013e3181339efd.
3. Massof RW, Hsu CT, Baker FH, et al. Visual disability variables. I: the importance and difficulty of activity goals for a sample of low-vision patients. Arch Phys Med Rehabil. 2005; 86(5): 946–953, doi:10.1016/j.apmr.2004.09.016.
4. Massof RW, Hsu CT, Baker FH, et al. Visual disability variables. II: the difficulty of tasks for a sample of low-vision patients. Arch Phys Med Rehabil. 2005; 86(5): 954–967, doi:10.1016/j.apmr.2004.09.017.
5. Orr P, Rentz AM, Margolis MK, et al. Validation of the National Eye Institute Visual Function Questionnaire-25 (NEI VFQ-25) in age-related macular degeneration. Invest Ophthalmol Vis Sci. 2011; 52(6): 3354–3359, doi:10.1167/iovs.10-5645.
6. Massof RW, Fletcher DC. Evaluation of the NEI visual functioning questionnaire as an interval measure of visual ability in low vision. Vision Res. 2001; 41(3): 397–413, doi:10.1016/S0042-6989(00)00249-2.
7. Mangione CM, Phillips RS, Seddon JM, et al. Development of the "Activities of Daily Vision Scale": a measure of visual functional status. Med Care. 1992; 30(12): 1111–1126.
8. Steinberg EP, Tielsch JM, Schein OD, et al. The VF-14: an index of functional impairment in patients with cataract. Arch Ophthalmol. 1994; 112(5): 630–638, doi:10.1001/archopht.1994.01090170074026.
9. Sloane ME, Ball K, Owsley C, Bruni JR, Roenker DL. The Visual Activities Questionnaire: developing an instrument for assessing problems in everyday visual tasks. In: Noninvasive Assessment of the Visual System. Vol. 1. New York: Optical Society of America; 1992: 26–29, doi:10.1364/NAVS.1992.SuB4.
10. Fink DJ, Terheyden JH, Pondorfer SG, Holz FG, Finger RP. Test–retest reliability of the Impact of Vision Impairment–Very Low Vision Questionnaire. Transl Vis Sci Technol. 2023; 12(6): 6, doi:10.1167/tvst.12.6.6.
11. West SK, Rubin GS, Broman AT, et al. How does visual impairment affect performance on tasks of everyday life? The SEE Project. Arch Ophthalmol. 2002; 120(6): 774–780, doi:10.1001/archopht.120.6.774.
12. Rubin GS, Bandeen-Roche K, Huang GH, et al. The association of multiple visual impairments with self-reported visual disability: SEE Project. Invest Ophthalmol Vis Sci. 2001; 42(1): 64–72.
13. Arditi A, Legge G, Granquist C, Gage R, Clark D. Reduced visual acuity is mirrored in low vision imagery. Br J Psychol. 2021; 112(3): 611–627, doi:10.1111/bjop.12493.
14. Coren S, Hakstian AR. Visual screening without the use of technical equipment: preliminary development of a behaviorally validated questionnaire. Appl Opt. 1987; 26(8): 1468–1472, doi:10.1364/AO.26.001468.
15. Owen CG, Rudnicka AR, Smeeth L, Evans JR, Wormald RP, Fletcher AE. Is the NEI-VFQ-25 a useful tool in identifying visual impairment in an elderly population? BMC Ophthalmol. 2006; 6(1): 24, doi:10.1186/1471-2415-6-24.
16. Rasch G. Probabilistic Models for Some Intelligence and Attainment Tests. San Diego, CA: MESA Press; 1993.
17. Birnbaum A. Some latent trait models and their use in inferring an examinee's ability. In: Lord FM, Novick MR, eds. Statistical Theories of Mental Test Scores. Reading, MA: Addison-Wesley; 1968: 397–479.
18. Chalmers RP. mirt: a multidimensional item response theory package for the R environment. J Stat Softw. 2012; 48: 1–29, doi:10.18637/jss.v048.i06.
19. Kiser AK, Mladenovich D, Eshraghi F, Bourdeau D, Dagnelie G. Reliability and consistency of visual acuity and contrast sensitivity measures in advanced eye disease. Optom Vis Sci. 2005; 82(11): 946–954, doi:10.1097/01.opx.0000187863.12609.7b.
20. Macedo AF, Ramos PL, Hernandez-Moreno L, et al. Visual and health outcomes, measured with the Activity Inventory and the EQ-5D, in visual impairment. Acta Ophthalmol (Copenh). 2017; 95(8): e783–e791, doi:10.1111/aos.13430.
21. Tabrett DR, Latham K. Factors influencing self-reported vision-related activity limitation in the visually impaired. Invest Ophthalmol Vis Sci. 2011; 52(8): 5293–5302, doi:10.1167/iovs.10-7055.
22. Xiong YZ, Kwon M, Bittner AK, Virgili G, Giacomelli G, Legge GE. Relationship between acuity and contrast sensitivity: differences due to eye disease. Invest Ophthalmol Vis Sci. 2020; 61(6): 40, doi:10.1167/iovs.61.6.40.
23. Goldstein JE, Massof RW, Deremeik JT, et al. Baseline traits of low vision patients served by private outpatient clinical centers in the United States. Arch Ophthalmol. 2012; 130(8): 1028–1037, doi:10.1001/archophthalmol.2012.1197.
24. Owsley C, Sloane ME. Contrast sensitivity, acuity, and the perception of "real-world" targets. Br J Ophthalmol. 1987; 71(10): 791–796, doi:10.1136/bjo.71.10.791.
25. Wittich W, Overbury O, Kapusta MA, Watanabe DH. Differences between recognition and resolution acuity in patients undergoing macular hole surgery. Invest Ophthalmol Vis Sci. 2006; 47(8): 3690–3694, doi:10.1167/iovs.05-1307.
26. Goldstein JE, Bradley C, Gross AL, Jackson M, Bressler N, Massof RW. The NEI VFQ-25C: calibrating items in the National Eye Institute Visual Function Questionnaire-25 to enable comparison of outcome measures. Transl Vis Sci Technol. 2022; 11(5): 10, doi:10.1167/tvst.11.5.10.
27. Goldstein JE, Fenwick E, Finger RP, et al. Calibrating the Impact of Vision Impairment (IVI): creation of a sample-independent visual function measure for patient-centered outcomes research. Transl Vis Sci Technol. 2018; 7(6): 38, doi:10.1167/tvst.7.6.38.
28. Goldstein JE, Guo X, Boland MV, Smith KE. Visual acuity: assessment of data quality and usability in an electronic health record system. Ophthalmol Sci. 2023; 3(1): 100215, doi:10.1016/j.xops.2022.100215.
29. Bastawrous A, Rono HK, Livingstone IAT, et al. Development and validation of a smartphone-based visual acuity test (Peek Acuity) for clinical practice and community-based fieldwork. JAMA Ophthalmol. 2015; 133(8): 930–937, doi:10.1001/jamaophthalmol.2015.1468.
30. Han X, Scheetz J, Keel S, et al. Development and validation of a smartphone-based visual acuity test (Vision at Home). Transl Vis Sci Technol. 2019; 8(4): 27, doi:10.1167/tvst.8.4.27.
31. Chen TA, Li J, Schallhorn JM, Sun CQ. Comparing a home vision self-assessment test to office-based Snellen visual acuity. Clin Ophthalmol. 2021; 15: 3205–3211, doi:10.2147/OPTH.S309727.