Low Vision Rehabilitation | October 2023 | Volume 12, Issue 10 | Open Access
Self-Reported Visual Ability Versus Task Performance in Individuals With Ultra-Low Vision
Author Affiliations & Notes
  • Arathy Kartha
    SUNY College of Optometry, New York, NY, USA
  • Ravnit Kaur Singh
    Zanvyl Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
  • Chris Bradley
    Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
  • Gislin Dagnelie
    Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
  • Correspondence: Arathy Kartha, SUNY College of Optometry, 33 W 42nd Street, New York, NY 10036, USA. e-mail: akartha@sunyopt.edu 
Translational Vision Science & Technology October 2023, Vol.12, 14. doi:https://doi.org/10.1167/tvst.12.10.14
Abstract

Purpose: Visual functioning questionnaires are commonly used as patient-reported outcome measures to estimate visual ability. Performance measures, on the other hand, provide a direct measure of visual ability. For individuals with ultra-low vision (ULV; visual acuity (VA) <20/1600), the ultra-low vision visual functioning questionnaire (ULV-VFQ) and the Wilmer VRI—a virtual reality–based performance test—estimate self-reported and actual visual ability, respectively, for activities of daily living. But how well do self-reports from ULV-VFQ predict actual task performance in the Wilmer VRI?

Methods: We administered a subset of 10 matching items from the ULV-VFQ and Wilmer VRI to 27 individuals with ULV. We estimated item measures (task difficulty) and person measures (visual ability) using Rasch analysis for ULV-VFQ and using latent variable signal detection theory for the Wilmer VRI. We then used regression analysis to compare person and item measure estimates from self-reports and task performance.

Results: Item and person measures were modestly correlated between the two instruments, with r² = 0.47 (P = 0.02) for item measures and r² = 0.36 (P = 0.001) for person measures, demonstrating that self-reports are an imperfect predictor of actual task difficulty and performance.

Conclusions: While self-reports impose a lower demand for equipment and personnel, actual task performance should be measured to assess visual ability in ULV.

Translational Relevance: Visual performance measures should be the preferred outcome measure in clinical trials recruiting individuals with ULV. Virtual reality can be used to standardize tasks.

Introduction
Ultra-low vision (ULV) refers to a level of vision where perception is limited to seeing moving shadows or silhouettes and where an individual is unable to read any letter on an Early Treatment Diabetic Retinopathy Study (ETDRS) chart even from a distance of 0.5 m (20/1600 or 1.9 logarithm of the minimum angle of resolution [logMAR]).1 Individuals with ULV may have a congenital disorder of the visual system or be in the advanced stages of an eye disease that causes visual impairment (e.g., inherited retinal degenerations, glaucoma, diabetic retinopathy). A growing number of clinical trials recruit individuals with ULV (e.g., gene therapy or stem cell therapy trials) or restore vision to the level of ULV (e.g., retinal prostheses). Therefore, it is important to estimate visual ability in individuals with ULV as a potential outcome measure for treatments and rehabilitation and as a predictor of real-world visual ability. Assessing visual ability in ULV requires specific tools: individuals with ULV lack form vision, and standard vision charts with letters or gratings that measure visual function (e.g., visual acuity or contrast sensitivity) cannot provide an accurate measure of how vision supports their performance in activities of daily living.
One way to assess functional ability in ULV is to use a visual function questionnaire (VFQ). VFQs have been widely used as outcome measures for various clinical treatments and trials. With proper statistical techniques such as Rasch analysis,2 VFQs allow for estimation of item difficulty and person ability (item measures and person measures, respectively). Ultimately, there is no “one size fits all,” and therefore, different VFQs are needed for different target populations (e.g., geriatric versus pediatric, visually impaired versus dual sensory loss, low vision versus ultra-low vision). The ultra-low vision visual functioning questionnaire (ULV-VFQ) was developed and validated using Rasch analysis to assess self-reported difficulty in performing activities of daily life among individuals with ULV.3–5 It has 150 items that were developed based on a 760-item ULV inventory5 reported by individuals with ULV during focus group discussions. Functional domains in the inventory include detail vision, visuomotor, visual information gathering, and wayfinding. The distribution of ULV-VFQ items across functional domains is in proportion to their relevance, with 107 items for visual information gathering and only 3 for detail vision.
One limitation of questionnaires is that the estimated measures are based on self-reports and thus subject to individual bias.6 Szlyk et al.7 studied perceived and actual performance in individuals with retinitis pigmentosa by comparing scores from self-reports with the results of clinical vision tests (visual acuity, contrast sensitivity, visual fields, electroretinogram) and with actual performance on reading, mobility, and peripheral detection tasks. Correlation coefficients between self-reported ability and task performance ranged from 0.13 to 0.7 for tasks involving reading and mobility, while correlations were weaker for complex tasks involving detail vision and more than one visual domain (e.g., threading a needle, which involves eye–hand coordination and depth perception).
With respect to ULV, the results from Szlyk et al.7 may not be generalizable because reading is a visual domain that is not relevant for ULV (due to the lack of form vision). Performance should be assessed using tasks that are similar to activities of daily life for the target population. This is challenging for tasks other than reading and visual search because it is difficult to standardize settings, methods, and scoring across centers and examiners. One way to standardize the task is by using a virtual reality (VR) system that can present scenarios similar to the real world, yet allow examiners to keep key test parameters uniform across participants and centers. In a further effort to make the outcomes uniform and objective, tasks may be presented using a multiple alternative forced-choice (m-AFC) paradigm where precisely one out of m ≥ 2 response alternatives is defined to be correct, minimizing observer and participant bias. 
Recently, we reported on the development and calibration of a VR-based test (Wilmer VRI) to assess visual information gathering in individuals with ULV.8 Visual information gathering can be defined as the process of actively looking around one's environment to gather visual cues or information and create an internal representation of the environment that can form the basis for visually guided actions. To our knowledge, no studies have compared self-reported visual ability using the ULV-VFQ with task performance in individuals with ULV. The aim of this study is to investigate the relationship between self-reported visual ability and actual task performance in individuals with ULV and to determine how well one measure predicts the other.
Methods
Self-Reported Ability Versus Task Performance
The psychometric properties and item calibrations for ULV-VFQ and Wilmer VRI have been published elsewhere.3,4,8,9 Both the ULV-VFQ and Wilmer VRI were developed to assess visually guided performance in individuals with profound visual impairment. In this study, we used 10 matching items from both tests to assess how well self-reported ability can predict actual task performance. The Table shows the activities used to compare self-reported performance measures and task performance measures. Regression analysis was used to assess how well item and person measures from the ULV-VFQ predict those from the Wilmer VRI. 
Table.
Items Used for Assessment From the ULV-VFQ and Wilmer VRI
Self-reported visual ability was assessed using the ULV-VFQ. Among the 107 visual information items in the ULV-VFQ, 10 items were selected for this study based on the feasibility of these items to be translated into real-world related tasks for ULV (Table). Each item on the questionnaire had five response categories for the participants to rate task difficulty. The response categories were “impossible to see or do visually,” “very difficult,” “somewhat difficult,” “not difficult,” and “not applicable.” These response categories corresponded to a Likert scale from 1 to 4 and NA for “not applicable.” 
Among the 19 activities in the Wilmer VRI,8 we selected 10 that matched the questions in the ULV-VFQ. The remaining VRI items were not considered in this analysis as they did not have a close match in the ULV-VFQ. Items were presented inside a commercial VR headset (FOVE, Inc., Tokyo, Japan) with a resolution of 1280 × 1440 pixels/eye, a field of view of 100 degrees, and a frame rate of 70 Hz. Each item was presented as an m-AFC task with 2 ≤ m ≤ 4 and chance performance level of 1/m. Each item was presented at three levels of contrast, and there were three trials per contrast level resulting in a total of nine trials per item. All participants performed the tasks binocularly with their habitual refractive corrections. No auditory or tactile cues or feedback were provided for any of the tasks. Participants were not allowed to use any low-vision devices during the test. 
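To make the trial structure concrete, the sketch below (our own illustration in Python, not part of the Wilmer VRI software) tallies hypothetical correct/incorrect responses for one item across its nine trials and tests whether performance exceeds the 1/m chance level; the response values are made up for illustration.

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical responses for one item: 3 contrast levels x 3 trials = 9 trials,
# scored 1 for a correct response and 0 otherwise.
responses = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0])
m = 3                    # number of response alternatives for this item
chance = 1.0 / m         # guessing rate for an m-AFC task

prop_correct = responses.mean()
test = binomtest(int(responses.sum()), n=responses.size, p=chance, alternative="greater")
print(prop_correct, test.pvalue)  # is performance above chance for this item?
```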
Psychometric Analysis
Responses to the ULV-VFQ were analyzed using the method of successive dichotomizations (MSD),10 which is a polytomous Rasch model that estimates parameters on the same equal interval scale regardless of the number of response categories. MSD was used to estimate person and item measures (measures of person ability and item difficulty) as well as infit mean square statistics to assess unidimensionality. All analysis was done using the R package msd (https://www.r-project.org). 
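For readers unfamiliar with Rasch-family models, the minimal sketch below illustrates the dichotomous Rasch model that MSD generalizes to rating scales: the probability of success depends only on the difference between person ability and item difficulty on the logit scale. This is an illustration only, not the msd package itself, and the numerical values are hypothetical.

```python
import numpy as np

def rasch_prob(theta, b):
    """Probability of success for a person with ability theta (logit)
    on an item with difficulty b (logit) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical person abilities and item difficulties (logit scale)
person_measures = np.array([-2.0, 0.0, 1.5])
item_measures = np.array([-0.8, 0.5, 3.0])

# Expected success probability for every person-item pair
expected = rasch_prob(person_measures[:, None], item_measures[None, :])
print(np.round(expected, 2))
```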
Task performance from the Wilmer VRI was assessed using an m-AFC paradigm with different chance levels across different items (i.e., different items had different numbers of response alternatives m) (Fig. 1). This required us to use latent variable signal detection theory (SDT) analysis developed by Bradley and Massof11 to estimate item difficulty and person ability. Each correct response was scored as “1,” and each incorrect response was scored as “0” and then converted to relative d prime (d′) units. Data were analyzed using the R program available at https://sourceforge.net/projects/sdt-latent/files/. Item and person measures were estimated on the same relative d′ axis. A more positive item measure indicates greater task difficulty, while a more positive person measure indicates greater ability. Zero d′ on this axis represents chance performance for an average person. 
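The latent variable SDT analysis itself is described by Bradley and Massof.11 As a simpler point of reference, under standard equal-variance Gaussian SDT the proportion correct in an m-AFC task relates to d′ through P(correct) = ∫ φ(x − d′) Φ(x)^(m−1) dx. The sketch below (our own illustration, not the published R program) evaluates that relationship numerically and inverts it.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def pc_mafc(d_prime, m):
    """Proportion correct in an m-AFC task for a given d' under
    equal-variance Gaussian signal detection theory."""
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (m - 1)
    pc, _ = quad(integrand, -np.inf, np.inf)
    return pc

def dprime_from_pc(pc, m):
    """Numerically invert pc_mafc; pc must lie between chance (1/m) and 1."""
    return brentq(lambda d: pc_mafc(d, m) - pc, -5.0, 10.0)

# The towel task in Figure 1 is 3-AFC, so chance performance is 1/3.
print(round(pc_mafc(0.0, 3), 3))          # ~0.333: d' = 0 corresponds to chance
print(round(dprime_from_pc(0.75, 3), 2))  # d' required for 75% correct on a 3-AFC item
```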
Participants
Participants with ULV were recruited through the low-vision clinic at the Wilmer Eye Institute, as well as through other organizations serving the blind (National Federation of the Blind and Foundation Fighting Blindness). Testing was conducted via phone (ULV-VFQ) and at participants’ homes (n = 15) or in the Ultra-Low Vision Lab (n = 12) at Johns Hopkins (Wilmer VRI). The study was approved by the Johns Hopkins University Institutional Review Board. All participants provided signed informed consent prior to the study, and the study conformed to the tenets of the Declaration of Helsinki.
Results
Twenty-seven individuals with ULV completed both the ULV-VFQ and the Wilmer VRI. The age of the participants ranged between 22 and 83 years, with a mean (SD) of 53.9 (19.5) years. There were 14 females (51.9%), and the conditions causing ULV were retinitis pigmentosa (44.4%), diabetic retinopathy (14.8%), glaucoma (14.8%), microphthalmia (3.7%), hydrocephalus (3.7%), Leber congenital amaurosis (3.7%), retinoblastoma (3.7%), cone-rod dystrophy (3.7%), optic atrophy (3.7%), and age-related macular degeneration (AMD) + glaucoma (3.7%). The mean (SD) estimated visual acuity was 3.25 (0.5) logMAR using the Berkeley Rudimentary Vision Test (BRVT).5
Self-Reports Versus Task Performance
Figure 2A shows the comparison of estimated item measures from the two instruments, and Figure 2B compares the estimated person measures. We found significant correlations for both item measures (r = 0.7, P = 0.02) and person measures (r = 0.6, P = 0.001). The r² = 0.47 for item measures shows that self-reported task difficulty explains approximately half the total variance in actual task difficulty. The r² = 0.36 for person measures shows that self-reported visual ability explains only about one-third of the total variance in actual task performance.
Figure 1.
An example of a task showing a black towel on the wall. This was a 3-AFC task where the participant had to report whether the towel was on the left, right, or missing.
Figure 2.
(A) Relationship between item measures from self-reports and task performance. (B) Relationship between person measures from self-reports and task performance.
The equation of the best-fitting line in Figure 2A is y = 4.2x + 2.6, where y represents item measures estimated from self-reports and x represents item measures estimated from task performance. The equation of the best-fitting line in Figure 2B is y = 1.3x – 0.17. These equations represent the best available mapping between self-report measures and task performance measures given our data.
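For illustration, the two published regression equations can be applied directly to convert a task performance measure into a predicted self-report measure, and the inverse mapping follows by solving for x. The sketch below is a simple transcription of the lines above; the function names are ours.

```python
def self_report_item_from_task(x):
    """Figure 2A regression (y = 4.2x + 2.6): predict a self-reported item
    measure (logit) from a task performance item measure (relative d')."""
    return 4.2 * x + 2.6

def self_report_person_from_task(x):
    """Figure 2B regression (y = 1.3x - 0.17): predict a self-reported person
    measure (logit) from a task performance person measure (relative d')."""
    return 1.3 * x - 0.17

def task_item_from_self_report(y):
    """Inverse of the Figure 2A line: map a self-reported item measure (logit)
    to a predicted task performance item measure (relative d')."""
    return (y - 2.6) / 4.2

# Example with a hypothetical task performance item measure of -0.6 relative d'
print(self_report_item_from_task(-0.6))  # predicted self-report item measure, logit
```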
Item measures estimated from MSD ranged from –0.79 to 3.02 logit with a mean (SD) of 0 (1.99) logit (by convention, the mean item measure is 0). Person measures ranged from –4.4 to 3.6 logit with a mean (SD) of –0.51 (2.03) logit. The mean standard error of the item measures was 0.46 logit, while the mean standard error of the person measures was 0.8 logit. The item infit mean squares ranged from 0.97 to 3.24, with 80% of the items within the nominal range of [0.5, 1.5].12 The person infit mean squares ranged from 0.18 to 3.36, with 54% of the individuals within that range.
Item measures estimated using latent variable SDT analysis ranged between –1.1 and –0.17 relative d′ units with a mean of –0.59 d′; the more negative the relative d′ item measure, the easier the task. Person measures ranged between –1.62 and 1.93 relative d′ units with a mean of –0.29 relative d′ units. The mean standard errors for item and person measures were 0.16 and 0.07 relative d′ units, respectively. 
Figure 3 shows that there was a negative relationship between estimated visual acuity using the BRVT and estimated person measures for both self-reports and task performance, confirming that those with better visual acuity (smaller logMAR) tend to have higher visual ability (higher person measures). However, this relationship was stronger and statistically significant only for person measures from task performance (r = 0.6; P = 0.002) and not for person measures estimated from self-reports (r = 0.3; P = 0.12). The accuracy of the regression lines is limited by the crude scale of the BRVT visual acuity estimates, which takes only four distinct levels in our population. Regression analysis likewise showed that visual acuity was a weaker predictor of self-reported person measures (r² = 0.1) than of person measures estimated from task performance (r² = 0.4). The slopes of the regression lines were –1.36 and –1.15 for self-reports and task performance, respectively.
Figure 3.
Relationship between estimated visual acuity in logMAR and person measures estimated using self-reports (open pink triangles) (logit) and task performance (open purple circles) (relative d′). The slopes of the regression lines were –1.36 for self-reported (orange) and –1.15 for task performance (blue).
Discussion
In this study, we investigated the relationship between self-reported visual ability and task performance among individuals with ULV using items in the visual information gathering domain. Our results show that an individual's self-reported ability is not a good predictor of their actual ability, with an r² of 0.36 between the two measures. Our results also show that self-reported task difficulty is only moderately correlated with actual measures of task difficulty, with an r² of 0.47. To our knowledge, this is the first study to report on the relationship between self-reported visual ability and task performance in individuals with ULV.
One of the biggest constraints for assessing actual task performance in ULV is that it can be time-consuming in a clinical setting. The 10 task performance items used in this study can be completed in less than 20 minutes, which is sufficiently short to be incorporated into a ULV evaluation. In comparison, the corresponding ULV-VFQ items may take less than 5 minutes. However, our findings show that self-reports cannot be used as a proxy for actual task performance in individuals with ULV. While self-reports are easier to administer and do not require specially trained or skilled personnel, they can be affected by individual bias. As more clinical trials recruit individuals with ULV, we recommend using an actual performance measure to complement self-reports.
We found a weak association between BRVT visual acuity and both self-reported person measures and task performance person measures. This means that it is not possible to use BRVT visual acuity as a proxy measure for either self-reported or actual visual ability. The weak association could be due in part to the large step sizes in BRVT compared to continuous person measures and the fact that half of our participants were at the floor of the BRVT (i.e., the 3.5 logMAR value assigned to anyone who cannot distinguish the largest BRVT stimulus). 
A key limitation of this study is that task performance was measured in virtual reality and not in the real world. While this was done to standardize the tasks, task performance in VR may differ from task performance in habitual settings or conditions at home. However, some studies have shown that real-world and VR performances are comparable.13–16 A future study is required to show that this is true for our 10 ULV items. Our sample size of 27 individuals with ULV is also relatively small. However, our ULV sample was heterogeneous, with a variety of conditions causing ULV, suggesting that it may be representative of the general ULV population.
In summary, a growing number of vision restoration trials recruit individuals with ULV. Validated outcome measures are important in reporting effectiveness of various treatments. Currently, there are few standardized assessments for individuals with ULV. While VFQs are popular patient-reported outcome measures that can be self-administered or require less skilled personnel to administer, our study underlines the importance of measuring actual task performance. 
Acknowledgments
The authors thank all participants with ULV who volunteered for this study, as well as Will Gee and Chau Tran from Balti Virtual who developed the Wilmer VRI test in virtual reality. 
Supported by Research to Prevent Blindness (GD), R01EY028452 (GD), K99EY033031 (AK). 
Disclosure: A. Kartha, Johns Hopkins Technology Ventures (P); R.K. Singh, None; C. Bradley, None; G. Dagnelie, Johns Hopkins Technology Ventures (P) 
References
1. Geruschat DR, Bittner AK, Dagnelie G. Orientation and mobility assessment in retinal prosthetic clinical trials. Optom Vis Sci. 2012; 89: 1308–1315.
2. Bond TG, Fox CM. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Lawrence Erlbaum; 2001.
3. Dagnelie G, Jeter PE, Adeyemo O, Group PLS. Optimizing the ULV-VFQ for clinical use through item set reduction: psychometric properties and trade-offs. Transl Vis Sci Technol. 2017; 6: 12.
4. Jeter PE, Rozanski C, Massof R, Adeyemo O, Dagnelie G. Development of the ultra-low vision visual functioning questionnaire (ULV-VFQ). Transl Vis Sci Technol. 2017; 6: 11.
5. Adeyemo O, Jeter PE, Rozanski C, et al. Living with ultra-low vision: an inventory of self-reported visually guided activities by individuals with profound visual impairment. Transl Vis Sci Technol. 2017; 6: 10.
6. Warrian KJ, Spaeth GL, Lankaranian D, Lopes JF, Steinmann WC. The effect of personality on measures of quality of life related to vision in glaucoma patients. Br J Ophthalmol. 2009; 93: 310–315.
7. Szlyk JP, Seiple W, Fishman GA, Alexander KR, Grover S, Mahler CL. Perceived and actual performance of daily tasks: relationship to visual function tests in individuals with retinitis pigmentosa. Ophthalmology. 2001; 108: 65–75.
8. Kartha A, Sadeghi R, Bradley C, Tran C, Gee W, Dagnelie G. Measuring visual information gathering in individuals with ultra low vision using virtual reality. Sci Rep. 2023; 13: 3143.
9. Sargur K, Kartha A, Sadeghi R, Bradley C, Dagnelie G. Functional vision assessment in people with ultra-low vision using virtual reality: a reduced version. Invest Ophthalmol Vis Sci. 2022; 63: 4055.
10. Bradley C, Massof RW. Method of successive dichotomizations: an improved method for estimating measures of latent variables from rating scale data. PLoS One. 2018; 13: e0206106.
11. Bradley C, Massof RW. Estimating measures of latent variables from m-alternative forced choice responses. PLoS One. 2019; 14: e0225581.
12. Mokkink LB, Prinsen CAC, Patrick DL, et al. Consensus-based standards for the selection of health measurement instruments (COSMIN) methodology for systematic reviews of patient-reported outcome measures (PROMs). In: COSMIN, ed. User Manual. Amsterdam, The Netherlands; 2018.
13. Bowman EL, Liu L. Individuals with severely impaired vision can learn useful orientation and mobility skills in virtual streets and can use them to improve real street safety. PLoS One. 2017; 12: e0176534.
14. Daibert-Nido M, Pyatova Y, Cheung KG, et al. An audiovisual 3D-immersive stimulation program in hemianopia using a connected device. Am J Case Rep. 2021; 22: e931079.
15. Daibert-Nido M, Pyatova Y, Cheung K, et al. Case report: visual rehabilitation in hemianopia patients. Home-based visual rehabilitation in patients with hemianopia consecutive to brain tumor treatment: feasibility and potential effectiveness. Front Neurol. 2021; 12: 680211.
16. Huang Q, Wu W, Chen X, et al. Evaluating the effect and mechanism of upper limb motor function recovery induced by immersive virtual-reality-based rehabilitation for subacute stroke subjects: study protocol for a randomized controlled trial. Trials. 2019; 20: 104.