Open Access
Articles  |   June 2021
Sensitivity and Stability of Functional Vision Tests in Detecting Subtle Changes Under Multiple Simulated Conditions
Author Affiliations & Notes
  • Zhipeng Chen
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Yijing Zhuang
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Zixuan Xu
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Lily Y. L. Chan
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
    School of Optometry, The Hong Kong Polytechnic University, Hong Kong, SAR, China
  • Shenglan Zhang
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Qingqing Ye
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Lei Feng
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Zhong-Lin Lu
    Division of Arts and Sciences, New York University Shanghai, Shanghai, China
    Center for Neural Science and Department of Psychology, New York University, New York, USA
    New York University–East China Normal University Institute of Brain and Cognitive Neuroscience, Shanghai, China
  • Jinrong Li
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Correspondence: Zhong-Lin Lu, Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, USA; New York University–East China Normal University Institute of Brain and Cognitive Neuroscience, Shanghai, China. e-mail: zhonglin@nyu.edu 
  • Jinrong Li, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China, 510060. e-mail: lijingr3@mail.sysu.edu.cn 
  • Footnotes
    *  ZC and YZ contributed equally to this study.
Translational Vision Science & Technology June 2021, Vol.10, 7. doi:https://doi.org/10.1167/tvst.10.7.7
      Zhipeng Chen, Yijing Zhuang, Zixuan Xu, Lily Y. L. Chan, Shenglan Zhang, Qingqing Ye, Lei Feng, Zhong-Lin Lu, Jinrong Li; Sensitivity and Stability of Functional Vision Tests in Detecting Subtle Changes Under Multiple Simulated Conditions. Trans. Vis. Sci. Tech. 2021;10(7):7. doi: https://doi.org/10.1167/tvst.10.7.7.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: To explore whether subtle changes in visual quality can be detected using different measures of visual function, benchmarked against the quick contrast sensitivity function test (quick CSF).

Methods: Sixty participants, aged 17 to 34 years, were enrolled. Participants' vision was degraded by 0.25 D undercorrection (0.25 D), 60% neutral density filter brightness reduction (60% ND), and 0.8 Bangerter foil optical diffusion (0.8BAN). Visual function tests, including visual acuity and contrast sensitivity (CSV-1000E and quick CSF), were administered with participants' best-corrected vision and under the simulated visual degradation conditions. Test sensitivities in detecting differences were compared.

Results: Statistically significant visual acuity degradation was observed only in the 0.8BAN condition (Pcorrected < 0.001). With the CSV-1000E (outliers removed), significant CS degradation was observed at all spatial frequencies and in the area under the log CSF (AULCSF) in the 0.8BAN condition (Pcorrected < 0.001 for all), and at medium and high spatial frequencies and in AULCSF in the 60% ND condition (Pcorrected,6cpd = 0.002, Pcorrected,12cpd = 0.005, Pcorrected,18cpd = 0.001, Pcorrected,AULCSF < 0.001) and the 0.25 D condition (Pcorrected,6cpd = 0.011, Pcorrected,12cpd = 0.013, Pcorrected,18cpd = 0.015, Pcorrected,AULCSF < 0.001). With the quick CSF, significant CS degradation was observed in all simulated visual conditions at all spatial frequencies, as well as in cutoff frequency and AULCSF (Pcorrected < 0.001 for all). Test-retest reliability of the quick CSF method was high; the coefficient of repeatability ranged from 0.14 to 0.18 logCS.

Conclusions: Compared with visual acuity and chart-based CS tests, the quick CSF method provided more reliable and sensitive measures to detect small visual changes.

Translational Relevance: The quick CSF method can provide sensitive and reliable measures to monitor disease progression and assess treatment outcomes.

Introduction
A variety of assessment methods have been developed to monitor vision changes in ocular and neural diseases. Visual acuity (VA), representing the finest high-contrast detail that an eye can resolve, is usually considered the defining clinical measure of vision. The contrast sensitivity function (CSF), a more comprehensive characterization of spatial vision,1 has, however, generally been shown to be a better predictor of visual performance2–5 and a more sensitive measure of functional vision changes in a variety of ocular and neural diseases, including amblyopia,6,7 cataract,8 glaucoma,9,10 macular degeneration,11 and Parkinson's disease.12 Furthermore, even when acuity appears normal, a variety of ocular and neural diseases have been shown to cause CSF deficits.2,13 
Currently, most clinical CSF tests use preprinted letter or grating charts such as the Pelli-Robson Contrast Sensitivity (CS) chart,14,15 the Functional Acuity Contrast Test16 (a replacement of the original Vistech CS chart), and the Vector Vision CSV-1000 series charts (Vector Vision, Greenville, OH, USA).17 Although these charts are convenient for clinical use, the low measurement precision caused by coarse spatial frequency and contrast sampling limits their applications.18–20 On the other hand, laboratory CSF tests based on adaptive psychophysical procedures, such as the QUEST21 or ψ22 methods, often require a minimum of 50 trials per spatial frequency over five to seven spatial frequencies. The resulting 30- to 60-minute test time makes them impractical for clinical use. 
The challenge of developing a precise and efficient CSF assessment was addressed by the introduction of the quick CSF method.23,24 On the basis of the adaptive Bayesian framework,25 the method can obtain a highly precise (SD < 2–3 dB) CSF assessment in fewer than 100 trials, or five to 10 minutes. Subsequently, Hou et al.26 and Zheng et al.27 implemented the quick CSF method with 10-alternative forced-choice (10AFC) letter and digit identification tasks and further improved its efficiency. The 10AFC quick CSF test can be used to obtain a high-precision (SD < 0.10 log unit) CSF assessment in approximately five minutes or less. Since its development, the quick CSF method has been applied to investigate visual deficits in amblyopia,28 age-related macular degeneration,29 congenital cataract,30 central serous chorioretinopathy,31 and multiple sclerosis,32 as well as effects of visual adaptation33 and peripheral visual function.34 
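To make the shape of the function being estimated concrete, the sketch below implements a log-parabola CSF, a parameterization commonly used in the quick CSF literature (peak gain, peak frequency, and bandwidth). The function name and defaults are illustrative assumptions, and the low-frequency truncation used in the full quick CSF model is omitted here.

```python
import numpy as np

def log_parabola_csf(f, peak_gain=2.0, peak_freq=2.5, bandwidth=3.0):
    """Log contrast sensitivity at spatial frequency f (cpd).

    peak_gain: logCS at the peak; peak_freq: peak location in cpd;
    bandwidth: full width (in octaves) at half the peak sensitivity.
    Illustrative sketch only; truncation omitted.
    """
    half_width = 0.5 * bandwidth * np.log10(2.0)  # half-bandwidth in log10 units
    return peak_gain - np.log10(2.0) * (
        (np.log10(f) - np.log10(peak_freq)) / half_width) ** 2
```

By construction, sensitivity is maximal at `peak_freq` and drops by log10(2) (half sensitivity) at one half-bandwidth above or below the peak in log frequency.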
In this study, we compared the test-retest reliabilities of the Bailey-Lovie VA chart, the CSV-1000E chart, and the quick CSF, as well as their sensitivities in detecting minor visual quality changes in simulated visual conditions using an optical lens, a neutral density filter, and a Bangerter foil. 
Methods
Observers
Sixty healthy subjects, from 17 to 34 years old (mean = 24.40 ± 4.48 years; 24 females), were recruited from the Optometry Clinic of Zhongshan Ophthalmology Center (Guangzhou, China). All subjects participated in the main experiment, and 24 subjects participated in a repeated test session. All subjects were naïve psychophysical observers with normal or corrected-to-normal vision [visual acuity ≤ 0.00 logMAR (20/20)] and without any history of ocular disease or mental illness. Both anterior and posterior eye health was examined with slit lamp and direct ophthalmoscopy to screen for eye diseases before the test. The study protocol was approved by the Ethics Committee of Zhongshan Ophthalmic Center of Sun Yat-sen University and adhered to the tenets of the Declaration of Helsinki. All subjects signed informed consent after they were given written and verbal explanations of the nature and purpose of the study. Additional consent was obtained from the parents or legal guardians of those who were younger than 18 years of age. 
All subjects underwent a comprehensive eye examination and wore their best-corrected full distance refractive correction during the study. All tests were conducted monocularly with the fellow eye occluded. Before testing, pupil size was measured under ambient light with a pupillary distance (PD) ruler. Pupil size ranged from 3 to 4 mm across subjects. 
Simulated Visual Conditions
All subjects underwent testing in the following simulated visual conditions with a randomized test order: (1) normal viewing (BCVA), (2) optical blur with a +0.25 D lens (+0.25 D), (3) brightness reduction with a 60% ND filter (60% ND), and (4) diffusion with a 0.8 Bangerter foil (0.8 BAN). Twenty-four subjects were randomly selected to repeat these tests on different days. 
Visual Function Tests
Visual Acuity
High-contrast (96.9% ± 0.83%) logMAR distance visual acuity was measured at 4 m, for each eye, using the Bailey-Lovie tumbling E chart35 (Precision Vision, Woodstock, IL, USA), which was displayed in a transillumination cabinet at a luminance level of 160 cd/m2.36 The stimuli of the Bailey-Lovie tumbling E chart are shown in Figure 1a.
Figure 1.
 
Stimuli for visual acuity and contrast sensitivity function assessments. (a) The Bailey-Lovie tumbling E chart has five high-contrast optotypes per row ranging in size from 1.0 to −0.3 logMAR. (b) CSV-1000E chart presents four sections of sine-wave grating with different spatial frequencies and contrasts. (c) The stimuli in the quick CSF method are a set of three Sloan digits with different contrasts.
Contrast Sensitivity Function
Monocular CSF in the right eye was measured with the CSV-1000E (Vector Vision) and quick CSF procedures in all the simulated visual conditions. 
The CSV-1000E provides an auto-calibrated fluorescent luminance source that displays the chart at a background luminance of 85 cd/m2 ± 0.1 log unit. The chart consists of four sections, each made of 17 circular patches (1.5-inch diameter) arranged in nine columns. The first column in each section displays a single high-contrast vertical sinewave grating, and the remaining eight columns each consist of two rows of patches. At most one of the two patches in each column contains a vertical sinewave grating. All sinewave gratings within a section have the same spatial frequency (3, 6, 12, and 18 cycles/degree [cpd] for the four sections, respectively). The gratings in each section are arranged with decreasing contrast from left to right, with log reciprocals of contrast from 0.70 to 2.08, 0.91 to 2.29, 0.61 to 1.99, and 0.17 to 1.55 log units in the four sections, respectively (Fig. 1b). Subjects were tested at a distance of 2.5 m and were first directed to observe the high-contrast gratings in the first column. Going through all four sections starting from the top, they were then instructed to identify the location of the grating in each column with a three-alternative forced-choice response: top, bottom, or blank. They were encouraged to guess if a grating was at least partially visible. The lowest contrast level at which the subject correctly identified the location of the "stripes" was recorded for each section. The result is a contrast sensitivity function sampled at four spatial frequencies. 
The quick CSF method was performed at a test distance of 4 m. The stimuli were presented on a gamma-corrected 46-inch LCD monitor (NEC LCD P463), with a mean luminance of 91.2 cd/m2, a resolution of 1920 × 1080 pixels, and a vertical refresh rate of 60 Hz. Ten numeric digits, filtered with a raised cosine filter, were presented as the test stimuli.27 Stimuli with different contrasts were obtained by scaling the intensities of the normalized images with the corresponding contrast values, while stimuli with different spatial frequencies were generated by resizing. There were 128 possible contrasts (evenly distributed in log space from 0.002 to 1) and 19 possible spatial frequencies (evenly distributed in log space from 1.19 to 30.95 cpd).37 In each trial, three filtered digits of the same size but decreasing contrast were presented in a row with a center-to-center distance of 1.1 times digit size (Fig. 1c). The quick CSF method selects the optimal stimuli from the two-dimensional contrast and spatial frequency space by maximizing the expected information gain in each trial and updates the posterior probabilities of the CSF parameters following the subject's response. Before the test, subjects were given five minutes to adapt to the dark test environment. They were then asked to verbally report the digits presented on the screen to the examiner, who entered the verbal responses with a computer keyboard. The stimuli disappeared after the examiner input all the responses. A new trial began 500 ms later. Observers were given the option to report "I don't know," in which case the response was regarded as incorrect. The quick CSF procedure used a 10-alternative forced-choice digit identification task to measure the CSF in 35 trials. The process took approximately five minutes. 
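The stimulus space described above can be reproduced with two log-spaced grids. This is a minimal sketch of the sampling only (variable names are ours); it does not implement the Bayesian expected-information-gain stimulus selection.

```python
import numpy as np

# 128 contrasts evenly distributed in log space from 0.002 to 1,
# and 19 spatial frequencies evenly distributed in log space
# from 1.19 to 30.95 cpd, as described in the text.
contrasts = np.logspace(np.log10(0.002), np.log10(1.0), 128)
frequencies = np.logspace(np.log10(1.19), np.log10(30.95), 19)

# Even spacing in log space means the ratio between neighbors is constant.
log_steps = np.diff(np.log10(contrasts))
assert np.allclose(log_steps, log_steps[0])
```

Each trial's stimulus is then a (contrast, frequency) pair drawn from the 128 × 19 grid.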
Analysis
Log contrast sensitivities (logCS) at spatial frequencies of 3, 6, 12, and 18 cpd, the area under the log CSF (AULCSF), and the cutoff spatial frequency (cutoff) of the CSF were used in data analysis. For the CSV-1000E data, CSF curves were generated by fitting a second-order polynomial to the CS values at the four tested spatial frequencies. AULCSF was calculated according to the method of Yamane et al.,38 in which the fitted function was integrated between the fixed log spatial frequency limits of 0.48 (corresponding to 3 cpd) and 1.26 (18 cpd). The cutoff was then defined as the SF where the threshold is 50% contrast (or CS = 2) based on the best-fitting curve. For the quick CSF method, the AULCSF, used as a summary metric of the CSF,28,39,40 was calculated as the area under the log CSF from 1.5 to 18 cpd.37,41 The cutoff spatial frequency, also defined as the spatial frequency at which logCS is 2.0 (threshold = 0.5),28 was derived from the estimated quick CSF curve. 
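A minimal sketch of the chart-based summary computations described above (the function name and argument layout are ours): fit a second-order polynomial to logCS versus log spatial frequency, integrate between the fixed limits 0.48 and 1.26 log units, and take the cutoff as the highest spatial frequency at which the fitted curve crosses a criterion logCS. The criterion is left configurable, since the two tests use different criterion sensitivities in the text.

```python
import numpy as np

def csf_summary(freqs_cpd, logcs, lo=0.48, hi=1.26, criterion=np.log10(2.0)):
    """AULCSF and cutoff from chart data.

    Fits a quadratic to logCS vs. log10(SF), integrates it between the fixed
    limits, and finds the highest SF where the fit crosses `criterion` logCS.
    """
    poly = np.poly1d(np.polyfit(np.log10(freqs_cpd), logcs, 2))
    aulcsf = np.polyint(poly)(hi) - np.polyint(poly)(lo)  # definite integral
    roots = (poly - criterion).roots                      # crossings of the fit
    real = [r.real for r in roots if abs(r.imag) < 1e-9]
    cutoff = 10 ** max(real) if real else float("nan")    # back to cpd
    return aulcsf, cutoff
```

For example, `csf_summary([3, 6, 12, 18], measured_logcs)` returns the two summary statistics for one CSV-1000E measurement.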
Data analyses were performed using SPSS version 25 for Windows (IBM, Armonk, NY, USA). The VA and logCS values obtained with the three tests were characterized by median and interquartile range (IQR) (Table 1). The normality of the data distributions was assessed using the Shapiro-Wilk test. Because the logCS obtained with the CSV-1000E and quick CSF at the four spatial frequencies, the logMAR VA, and the AULCSF and cutoff frequency estimated from the CSV-1000E did not follow normal distributions, the differences between the four simulated visual conditions were compared using a 4 (spatial frequency) × 4 (condition) two-way nonparametric analysis of variance (ANOVA; Scheirer-Ray-Hare test42) and the Friedman test, respectively. The cutoff and AULCSF obtained from the quick CSF followed normal distributions, and a one-way repeated-measures ANOVA was performed to evaluate the differences among the three simulated conditions and the BCVA condition. When differences were statistically significant (P ≤ 0.05), the Friedman test was followed by the Dunn-Bonferroni post hoc test to compare VA, logCS at each spatial frequency from the CSV-1000E and quick CSF tests, and the cutoff and AULCSF estimated from the CSV-1000E, whereas a post hoc Bonferroni test was carried out to compare the cutoff and AULCSF from the quick CSF method. All tests were two-sided at a type I error rate of 0.05. Additionally, to quantitatively compare the two CSF methods in detecting small visual changes, a multivariate linear regression model was applied to test whether there were statistically significant differences in the observed logCS differences between the simulated visual conditions and the BCVA condition measured with the CSV-1000E and the quick CSF. The regression model takes the following form:  
\begin{eqnarray*}y &=& \alpha + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_1 \cdot x_2 + \beta_5 x_1 \cdot x_3\nonumber\\ && +\, \beta_6 x_2 \cdot x_3 + \beta_7 x_1 \cdot x_2 \cdot x_3 + \varepsilon ,\end{eqnarray*}
where y is the logCS difference between the three simulated visual conditions and the BCVA condition, α is a constant, ε is a random error term, x1 is a scale variable for spatial frequency, x2 and x3 are nominal variables for visual condition and method, β1, β2, and β3 are the coefficients of the main effects, β4, β5, and β6 are the coefficients of the pairwise interactions, and β7 is the coefficient of the three-way interaction. The goodness of fit of the regression model was evaluated using the adjusted coefficient of determination (adjusted R2), which penalizes the inclusion of independent variables that do not improve the fit. 
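The model above can be sketched as an ordinary least squares fit on a design matrix containing the main effects and all interaction products. The data here are synthetic and illustrative (dummy-coded two-level factors rather than the study's full coding); the sketch shows only the mechanics of estimating the coefficients and the adjusted R2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x1 = rng.choice([3.0, 6.0, 12.0, 18.0], n)  # spatial frequency (scale variable)
x2 = rng.integers(0, 2, n).astype(float)    # visual condition (dummy-coded)
x3 = rng.integers(0, 2, n).astype(float)    # method: 0 = CSV-1000E, 1 = quick CSF

# Design matrix: intercept, main effects, pairwise and three-way interactions.
X = np.column_stack([np.ones(n), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3])
beta_true = np.array([0.1, 0.01, 0.05, 0.31, 0.0, 0.0, 0.0, 0.01])
y = X @ beta_true + rng.normal(0.0, 0.01, n)  # synthetic logCS differences

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates

resid = y - X @ beta_hat
r2 = 1.0 - resid.var() / y.var()
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - X.shape[1])  # adjusted R^2
```

With categorical condition and method variables, a formula interface (e.g., statsmodels' `y ~ x1 * C(x2) * C(x3)`) expands to the same set of terms automatically.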
Table 1.
 
Summary of logMAR Visual Acuity and Log Contrast Sensitivity Values Under Different Visual Conditions
Test-retest reliability was evaluated with Bland-Altman plots. The coefficient of repeatability (CoR), defined as 1.96 × the within-subject standard deviation of the test-retest differences (test 2 − test 1), was used to quantify test-retest reliability; a lower value indicates less variability across repeated measures and therefore better reliability. The average difference between test and retest represented the bias. Outliers were identified by the Tukey method (boxplot), with fences set at 1.5 IQR below Q1 and above Q3 [Q1 − 1.5 IQR, Q3 + 1.5 IQR]. The number of outliers was characterized by mean and standard deviation (SD). The above analyses were performed both with and without the outliers excluded. 
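Both reliability statistics are simple to compute; the sketch below implements the CoR and bias from paired test-retest values, and the Tukey fences used for outlier flagging (function names are ours).

```python
import numpy as np

def coefficient_of_repeatability(test1, test2):
    """CoR = 1.96 x SD of the test-retest differences (test 2 - test 1).

    Returns (CoR, bias), where bias is the mean test-retest difference.
    """
    d = np.asarray(test2, float) - np.asarray(test1, float)
    return 1.96 * d.std(ddof=1), d.mean()

def tukey_outliers(values):
    """Boolean mask flagging values outside [Q1 - 1.5 IQR, Q3 + 1.5 IQR]."""
    v = np.asarray(values, float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    return (v < q1 - 1.5 * iqr) | (v > q3 + 1.5 * iqr)
```

A lower CoR means two measurements of the same eye are expected to differ by less than that amount 95% of the time.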
Results
The mean spherical equivalent refractive error of all 60 subjects was −4.49 ± 2.38 D (range, 0.00 to −8.50 D). The mean spherical and cylindrical errors were −4.22 ± 2.31 D and −0.54 ± 0.48 D, respectively. 
Visual Acuity
The median [IQR] logMAR VA values for the four simulated visual conditions are listed in Table 1. The 0.8BAN condition showed statistically significant worsening of logMAR VA compared with the BCVA condition (post hoc multiple comparisons are shown in Figure 2; Pcorrected < 0.001), whereas the other two simulated conditions (+0.25 D and 60% ND) showed no statistically significant difference from the BCVA condition (0.25 D: Pcorrected = 0.056; 60% ND: Pcorrected = 0.093). 
Figure 2.
 
Box plots of logMAR visual acuity (VA) obtained with the Bailey-Lovie tumbling E chart, illustrating changes in logMAR VA under the four visual conditions. Within each box, the middle horizontal line denotes the median logMAR VA; the top and bottom of each box represent the twenty-fifth and seventy-fifth percentiles; the vertical extending line and the error bars denote adjacent values (i.e., the most extreme values within 1.5 interquartile ranges of the twenty-fifth and seventy-fifth percentiles of each condition). Pairwise comparisons showed a statistically significant difference (Pcorrected < 0.001, ***) in VA between the 0.8 Bangerter foil diffusion condition (0.8BAN) and the normal viewing condition (BCVA).
Contrast Sensitivity
The median [IQR] logCS scores at the four spatial frequencies under the four simulated visual conditions, obtained from the CSV-1000E chart and the quick CSF, are presented in Table 1. A 4 (spatial frequency) × 4 (condition) nonparametric two-way ANOVA (Scheirer-Ray-Hare test) was performed on the logCS obtained with the CSV-1000E. Both the main effects and the interaction of condition and spatial frequency reached significance. Post hoc Dunn tests were then carried out to compare logCS at each spatial frequency across the four conditions. 
Similar to VA, there were statistically significant differences between the 0.8BAN condition and the BCVA condition at all four spatial frequencies with the CSV-1000E test (Pcorrected < 0.001 for all; Fig. 3, Table 1). However, logCS provided additional information, showing statistically significant losses in the +0.25 D and 60% ND conditions at 6 cpd (+0.25 D: Pcorrected = 0.011; 60% ND: Pcorrected = 0.002), 12 cpd (+0.25 D: Pcorrected = 0.013; 60% ND: Pcorrected = 0.005), and 18 cpd (+0.25 D: Pcorrected = 0.015; 60% ND: Pcorrected = 0.001) in comparison with the BCVA condition with the outliers removed (Fig. 3, Table 1). Without excluding the outliers (4.50 ± 3.42 outliers), the results were somewhat different: degradation was statistically significant at all spatial frequencies in the 0.8BAN condition (Pcorrected < 0.001), at 6, 12, and 18 cpd in the 60% ND condition (6 cpd: Pcorrected = 0.001; 12 cpd: Pcorrected = 0.003; 18 cpd: Pcorrected = 0.001), and only at 12 cpd in the +0.25 D condition (Pcorrected = 0.006) (Table 1). In terms of AULCSF, a summary statistic that quantifies the entire range of contrast visibility, statistically significant degradation was found in all the simulated conditions in comparison with the BCVA condition (with or without two outliers excluded; Fig. 3, Table 1). For the cutoff spatial frequency, there was no statistically significant difference among the four conditions (with or without 17 outliers excluded; Fig. 3, Table 1). 
Figure 3.
 
Box plot of logCS obtained with CSV-1000E for different spatial frequencies, cutoff frequency and AULCSF under four visual conditions. The details of the box plot are the same as Figure 2. The blue crosses represent individuals whose values were statistical outliers. As shown in the subplots, post hoc multiple pairwise comparisons were also used to examine statistical differences between the three simulated visual degraded conditions and the normal viewing condition (BCVA) with the outliers excluded. One to three asterisks indicate Pcorrected ≤ 0.05, Pcorrected ≤ 0.01, Pcorrected ≤ 0.001, respectively. P values are adjusted for multiple testing.
However, when the quick CSF was used as the CS measure, logCS showed losses under all the simulated visual conditions at the same spatial frequencies tested with the CSV-1000E method (Table 1). The logCS values were generally higher with the CSV-1000E test than with the quick CSF test, especially at the highest frequency (18 cpd), where the CSV-1000E produced a median of 1.25 logCS under the normal viewing condition, almost three times the quick CSF measurement (0.49 logCS). As with the CSV-1000E, a nonparametric two-way ANOVA (Scheirer-Ray-Hare test) was used to compare logCS results obtained with the quick CSF method. There were significant main effects of both SF and condition, with no significant condition-by-SF interaction. Post hoc Dunn tests for pairwise comparisons at each spatial frequency found statistically significant differences in logCS between all simulated visual conditions and the BCVA condition both with and without the outliers removed (Pcorrected < 0.001 for all; one outlier; Fig. 4, Table 1). The cutoff, which is strongly related to VA, was more sensitive than logMAR VA and more accurate than the CSV-1000E in detecting changes (post hoc multiple comparisons are shown in Fig. 4 and Table 1; Pcorrected < 0.001 for all, with and without four outliers removed). The AULCSF in the BCVA condition was significantly higher than in the other three conditions (Pcorrected < 0.001 for all; Fig. 4 and Table 1), reflecting the effects these degradations had on the visual system as a whole.43 
Figure 4.
 
Box plot of log contrast sensitivity (CS), cutoff spatial frequency, and the area under the log contrast sensitivity function (AULCSF) obtained with the quick CSF under four visual conditions. The details of the box plot are the same as in Figure 3. Post hoc multiple comparisons showed statistically significant differences in logCS between the three simulated visually degraded conditions and the normal viewing condition (Pcorrected < 0.001, ***; P values are adjusted for multiple testing) at the same four SFs as the CSV-1000E, as well as in cutoff value and AULCSF, with the outliers excluded.
Additionally, we quantitatively compared the two CSF measures in detecting small visual changes using multivariate linear regression analysis (Table 2). The regression model provided a good account of the data [F(7, 51.14) = 144.89, P < 0.001, adjusted R2 = 0.41]. Specifically, we compared the logCS differences between the three simulated visual conditions and the BCVA condition measured with the two methods. The quick CSF method detected significantly larger logCS differences between the simulated and BCVA conditions than the CSV-1000E (β3 = 0.31, P < 0.001). The three-way interaction indicated that both spatial frequency and visual condition affected the logCS differences between the simulated and BCVA conditions measured with the two methods (β7 = 0.01, P < 0.001). That is, the observed logCS differences from the two methods differed across spatial frequencies and visual conditions. For instance, a stronger degradation of visual quality (0.8BAN) generated a bigger difference in logCS change between the CSV-1000E (average logCS difference between the 0.8BAN and BCVA conditions: 0.56) and the quick CSF method (0.67) than a weaker degradation (average logCS difference between the 60% ND and BCVA conditions: 0.15 and 0.20 for the CSV-1000E and the quick CSF, respectively). 
Table 2.
 
Multivariate Linear Regression of Contrast Sensitivity Discrepancy for Spatial Frequency, Visual Condition, and Measure Variables
Probable outliers at each spatial frequency for the two methods were identified by applying Tukey fences. Owing to apparent skewness, the 0.8BAN condition at 12 cpd and 18 cpd with the quick CSF had some outliers under Tukey's range test. However, compared with the quick CSF, the CSV-1000E had more outliers at each spatial frequency in the absence of an obviously skewed distribution (CSV-1000E: 4.50 ± 3.42 outliers vs. quick CSF: 1 outlier), indicating more variability among individuals within each condition and more bias in the CSV-1000E test. We also noted that a considerable number of outliers were identified in all conditions in the cutoff frequency comparisons for the CSV-1000E (17 outliers, Fig. 3). 
Test-Retest Reliability
Bland-Altman plots of AULCSF, that is, the test-retest difference of the AULCSF value against the mean, are shown in Figure 5. CoR was computed for each visual condition in the subplots. The quick CSF test demonstrated better overall repeatability than the CSV-1000E, with CoR values ranging from 0.14 (95% confidence interval [CI], 0.11–0.20) to 0.18 (95% CI, 0.11–0.20) logCS for the quick CSF compared with 0.29 to 0.42 logCS for the CSV-1000E.43 Bias was low in all conditions for the quick CSF test, ranging from −0.02 to 0.03 logCS from test to retest. Bias values tended toward the positive range, except for the 60% ND condition, indicating a minor increase in logCS on retest relative to the first test. This small improvement likely reflects a practice effect or task-specific learning. 
Figure 5.
 
Bland-Altman plots of the differences vs. the means of the area under the contrast sensitivity function (AULCSF) obtained from the quick CSF test-retest values. The plots show the differences (second AULCSF − first AULCSF) between the AULCSF values obtained from each quick CSF measure, plotted against their means ((second AULCSF + first AULCSF)/2) under the four visual conditions. Each dot represents one data point. The central solid lines indicate the mean differences (MD) between measures. Dashed lines indicate the upper and lower 95% limits of agreement. The CoR, ranging from 0.14 (95% CI, 0.11–0.20) to 0.18 (95% CI, 0.11–0.20) logCS, was used to determine test-retest reliability.
Discussion
Although it is not routinely measured in clinical settings, the CSF can provide useful information that VA charts cannot. The accuracy, sensitivity, and repeatability of CSF measurements make them important for early diagnosis and for monitoring disease progression, and they can also inform the assessment of treatment outcomes. Although chart-based CSF tests are economical, easy to administer, and quick, their results may be misinterpreted because different tests measure different aspects of vision across different spatial frequency ranges. Hence, the importance of CSF measurement in the clinic has been greatly underestimated. 
In this study, we tested the sensitivity of different functional vision assessment methods in detecting minor changes in visual quality under simulated visual degradations. Visual quality was impaired by simulating an under-correction (+0.25 D), a brightness reduction (60% ND), and diffusion by a Bangerter foil that degraded VA (0.8BAN). Our results indicated that the VA chart could detect visual impairment only in the 0.8BAN condition, whereas the CSV-1000E detected impairments in the three simulated conditions only at medium and high spatial frequencies (6, 12, and 18 cpd). It could not detect the impairments of the +0.25 D and 60% ND conditions at the lower SF (3 cpd), which is usually the frequency affected earliest by eye diseases such as glaucoma44,45 and by neurological conditions such as cerebral injury.46 When we measured the CSF with the quick CSF method under the four visual conditions, we found statistically significant differences between the three simulated conditions and the normal viewing condition at all spatial frequencies, as well as in AULCSF and cutoff frequency. In addition, we quantitatively compared the two CSF measures in detecting small visual changes: multivariate linear regression, applied to test whether the observed logCS differences between the simulated conditions and the BCVA condition differed between the two measures, showed that the quick CSF method detected statistically significantly larger logCS differences than the CSV-1000E. The CoRs for the four visual conditions obtained with the quick CSF test also demonstrated high repeatability and stability, whereas the many outliers in the CSV-1000E results reflected the instability of that test. 
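The regression described above can be sketched with a dummy-coded design: the logCS difference (simulated condition minus BCVA) is modeled as a function of an indicator for the measure and the spatial frequency. On simulated data with a built-in 0.15 logCS offset for the quick CSF, ordinary least squares recovers the offset. All variable names and numeric values here are illustrative assumptions, not the study's data or exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated logCS differences (simulated condition minus BCVA) for two
# measures; a 0.15 logCS offset for the quick CSF mimics the larger
# deficits it detects. All numbers are illustrative.
n = 200
measure = rng.integers(0, 2, n)             # 0 = CSV-1000E, 1 = quick CSF
sf = rng.choice([3.0, 6.0, 12.0, 18.0], n)  # spatial frequency (cpd)
diff = 0.10 + 0.15 * measure + 0.005 * sf + rng.normal(0, 0.05, n)

# design matrix: intercept, measure dummy, spatial frequency
X = np.column_stack([np.ones(n), measure, sf])
beta, *_ = np.linalg.lstsq(X, diff, rcond=None)
print(beta)  # ≈ [0.10, 0.15, 0.005]
```

The coefficient on the measure dummy estimates how much larger the detected logCS difference is with the quick CSF than with the CSV-1000E, which is the quantity the regression in the study tests for significance.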
The considerable number of outliers in the cutoff frequency suggests that, in many cases, contrast sensitivities at only four spatial frequencies did not provide good constraints on the second-order polynomial fit. 
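To illustrate why four points constrain the fit poorly, the cutoff estimate can be sketched by fitting a second-order polynomial to logCS as a function of log spatial frequency and solving for the frequency at which the fitted logCS falls to 0 (CS = 1). The data values and the zero-crossing definition of the cutoff are illustrative assumptions, not the study's exact procedure:

```python
import numpy as np

# CSV-1000E spatial frequencies (cpd) and example logCS readings
# (illustrative values, not study data)
freqs = np.array([3.0, 6.0, 12.0, 18.0])
logcs = np.array([1.70, 1.81, 1.40, 0.95])

log_f = np.log10(freqs)
# second-order polynomial in log-log coordinates
a, b, c = np.polyfit(log_f, logcs, deg=2)

# cutoff: the frequency where the fitted logCS falls to 0 (CS = 1),
# i.e. the larger real root of a*x**2 + b*x + c = 0
roots = np.roots([a, b, c])
real = roots[np.isreal(roots)].real
cutoff = 10 ** real.max()
print(f"estimated cutoff = {cutoff:.1f} cpd")
```

With only four samples and three free parameters, a single noisy logCS reading can swing the extrapolated zero crossing substantially, which is one plausible source of the outliers observed in the cutoff estimates.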
The difference in detection performance across methods mainly results from the limitations of the VA test in functional vision assessment and the low sampling resolution of the CSV-1000E. The logCS values were generally higher with the CSV-1000E than with the quick CSF test, as has also been observed in previous studies.47 This is because the quick CSF scales its stimulus size with the displayed SF; its stimuli are therefore substantially smaller than the fixed-size gratings of the CSV-1000E at high SFs. Although VA is ubiquitously used as an indicator of clinical visual function, it only represents spatial resolution at high contrast; it is not uncommon for patients with ocular disorders, such as early-stage cataract and glaucoma, to exhibit normal visual acuity but poor visual quality.48 Such visual degradation may be due to a loss of contrast sensitivity at some spatial frequencies, especially the intermediate frequencies that are highly important for daily vision.46 In our study, we simulated low brightness and a small degree of under-correction, conditions that rarely affect high-contrast spatial resolution; yet a drop in contrast (as when entering a dimly lit room or driving at night) causes a considerable drop in visual performance (reading, face and object recognition), to a much larger degree than could be predicted from foveal VA. CSF charts, on the other hand, often have fixed sampling steps. Their limited grating contrast and SF ranges make them vulnerable to ceiling and floor effects, limiting their sensitivity to subtle changes in visual quality. Furthermore, Kelly et al.17 showed that the test-retest reliability of CSF chart tests such as the CSV-1000 is low: its CoR ranged from 0.37 to 0.50 logCS even with outliers removed. 
This instability makes the CSV-1000E difficult to apply as a criterion-based outcome for clinical evaluation in diagnostic testing and for assessing the treatment effectiveness of eye diseases. The CSV-1000E chart uses 32 grating stimuli (four frequencies and eight contrasts); in contrast, the quick CSF method samples at minimum 720 digital stimuli (12 spatial frequencies and 60 contrasts). Thus, the quick CSF method is not only flexible enough to capture large-scale CS changes across different degrees of visual impairment but also precise enough to capture the small-scale changes common to the progression or remediation of pathological visual function. 
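The difference in sampling density can be illustrated by enumerating the two stimulus spaces. The specific frequency and contrast ranges below are assumptions for illustration; only the grid sizes (4 × 8 for the chart, 12 × 60 for the quick CSF) come from the text:

```python
import numpy as np

# CSV-1000E: four fixed spatial frequencies, eight contrast steps each
chart_freqs = [3.0, 6.0, 12.0, 18.0]            # cpd
chart_grid = [(f, step) for f in chart_freqs for step in range(8)]

# Hypothetical stimulus space for an adaptive CSF procedure; the exact
# ranges are illustrative, not the published quick CSF settings.
qcsf_freqs = np.logspace(np.log10(1.0), np.log10(30.0), 12)      # cpd
qcsf_contrasts = np.logspace(np.log10(0.001), np.log10(1.0), 60)  # Michelson

# every (SF, contrast) pair is a candidate stimulus
qcsf_grid = [(f, c) for f in qcsf_freqs for c in qcsf_contrasts]

print(len(chart_grid), len(qcsf_grid))  # 32 vs 720 candidate stimuli
```

The denser grid is what lets an adaptive procedure place trials near an observer's threshold wherever it lies, instead of being clipped by the coarse fixed steps of a printed chart.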
Despite the prospects demonstrated above, the quick CSF has shortcomings. The measurement relies on observers reliably reporting their visual percept, and numeral recognition may still be challenging for preverbal infants and preschoolers, especially children under three years of age. Further development of the quick CSF method could therefore explore alternative optotypes suited to different age groups. For preschoolers with limited language skills and less-developed cognitive abilities, the numeric stimuli could be replaced with cartoon patterns, which can be filtered and rescaled to different sizes; for preverbal infants, eye-tracking technology could be used to implement a preferential looking version of the CSF measurement. 
To conclude, the quick CSF method offers a promising clinical tool for assessing visual quality. Future research could further evaluate its sensitivity, precision, and reliability in quantifying subtle changes in visual function. When suitably applied, it could serve as a screening tool for evaluating the optical and physiological state of the eye and the visual pathway to detect ocular and neural diseases, monitor disease progression, and assess treatment outcomes. 
Acknowledgments
The image of Bailey-Lovie Tumbling E Chart is courtesy of Precision Vision. The photograph of CSV-1000E is courtesy of VectorVision Ocular Health Inc. 
Supported by the National Key Research & Development Project (2020YFC2003905) and the National Natural Science Foundation of China (81770954) (JL) and by the National Eye Institute (EY021553) (Z-LL). 
Disclosure: Z. Chen, None; Y. Zhuang, None; Z. Xu, None; L.Y.L. Chan, None; S. Zhang, None; Q. Ye, None; L. Feng, None; Z.-L. Lu, US 7938538 (P), WO2013170091 (P), PCT/US2015/028657 (P), Adaptive Sensory Technology, Inc. (F); J. Li, None 
References
Comerford JP . Vision evaluation using contrast sensitivity functions. Am J Optom Physiol Opt. 1983; 60: 394–398. [CrossRef] [PubMed]
Jindra LF, Zemon V. Contrast sensitivity testing: a more complete assessment of vision. J Cataract Refract Surg. 1989; 15: 141–148. [CrossRef] [PubMed]
Owsley C, Sloane ME. Contrast sensitivity, acuity, and the perception of “real-world” targets. Br J Ophthalmol. 1987; 71: 791–796. [CrossRef] [PubMed]
Marmor MF, Gawande A. Effect of visual blur on contrast sensitivity: clinical implications. Ophthalmology. 1988; 95: 139–143. [CrossRef] [PubMed]
Onal S, Yenice O, Cakir S, Temel A. FACT contrast sensitivity as a diagnostic tool in glaucoma. Int Ophthalmol. 2008; 28: 407–412. [CrossRef] [PubMed]
Bradley A, Freeman R. Contrast sensitivity in anisometropic amblyopia. Invest Ophthalmol Vis Sci. 1981; 21: 467–476. [PubMed]
Hess RF, Howell ER. The threshold contrast sensitivity function in strabismic amblyopia: evidence for a two type classification. Vision Res. 1977; 17(9): 1049–1055. [CrossRef] [PubMed]
Hess R, Woo G. Vision through cataracts. Invest Ophthalmol Vis Sci. 1978; 17: 428–435. [PubMed]
Hot A, Dul MW, Swanson WH. Development and evaluation of a contrast sensitivity perimetry test for patients with glaucoma. Invest Ophthalmol Vis Sci. 2008; 49: 3049–3057. [CrossRef] [PubMed]
Ross JE, Bron AJ, Clarke DD. Contrast sensitivity and visual disability in chronic simple glaucoma. Br J Ophthalmol. 1984; 68: 821–827. [CrossRef] [PubMed]
Loshin DS, White J. Contrast sensitivity. The visual rehabilitation of the patient with macular degeneration. Arch Ophthalmol. 1984; 102: 1303–1306. [CrossRef] [PubMed]
Bodis-Wollner I, Marx MS, Mitra S, et al. Visual dysfunction in Parkinson's disease. Loss in spatiotemporal contrast sensitivity. Brain. 1987; 110(Pt 6): 1675–1698. [CrossRef] [PubMed]
Huang C, Tao L, Zhou Y, Lu ZL. Treated amblyopes remain deficient in spatial vision: a contrast sensitivity and external noise study. Vision Res. 2007; 47: 22–34. [CrossRef] [PubMed]
Pelli DG, Robson JG, Wilkins AJ. The design of a new letter chart for measuring contrast sensitivity. Clin Vis Sci. 1988; 2: 187–199.
Regan D, Giaschi DE, Fresco BB. Measurement of glare susceptibility using low-contrast letter charts. Optom Vis Sci. 1993; 70: 969–975. [CrossRef] [PubMed]
Ginsburg AP . Next generation contrast sensitivity testing. Functional Assess Low Vis. 1996: 77–88.
Kelly SA, Pang Y, Klemencic S. Reliability of the CSV-1000 in adults and children. Optom Vis Sci. 2012; 89: 1172–1181. [CrossRef] [PubMed]
Bradley A, Hook J, Haeseker J. A comparison of clinical acuity and contrast sensitivity charts: effect of uncorrected myopia. Ophthalmic Physiol Opt. 1991; 11: 218–226. [CrossRef] [PubMed]
Bühren J, Terzi E, Bach M, Wesemann W, Kohnen T. Measuring contrast sensitivity under different lighting conditions: comparison of three tests. Optom Vis Sci. 2006; 83: 290–298. [CrossRef] [PubMed]
Hohberger B, Laemmer R, Adler W, Juenemann AG, Horn FK. Measuring contrast sensitivity in normal subjects with OPTEC® 6500: influence of age and glare. Graefes Arch Clin Exp Ophthalmol. 2007; 245: 1805–1814. [CrossRef] [PubMed]
Watson AB, Pelli DG. QUEST: a Bayesian adaptive psychometric method. Percept Psychophys. 1983; 33: 113–120. [CrossRef] [PubMed]
Kontsevich LL, Tyler CW. Bayesian adaptive estimation of psychometric slope and threshold. Vis Res. 1999; 39: 2729–2737. [CrossRef] [PubMed]
Lesmes LA, Jeon S-T, Lu Z-L, Dosher BA. Bayesian adaptive estimation of threshold versus contrast external noise functions: The quick TvC method. Vision Res. 2006; 46: 3160–3176. [CrossRef] [PubMed]
Lesmes LA, Lu ZL, Baek J, Albright TD. Bayesian adaptive estimation of the contrast sensitivity function: the quick CSF method. J Vis. 2010; 10(3): 17.1–21. [CrossRef]
Watson AB, Pelli DG. QUEST: a Bayesian adaptive psychometric method. Percept Psychophys. 1983; 33(2): 113–120.
Hou F, Lesmes L, Bex P, Dorr M, Lu Z-L. Using 10AFC to further improve the efficiency of the quick CSF method. J Vis. 2015; 15(9): 2–2. [CrossRef] [PubMed]
Zheng H, Wang C, Cui R, et al. Measuring the contrast sensitivity function using the qCSF method with 10 Digits. Transl Vis Sci Technol. 2018; 7(6): 9–9. [CrossRef] [PubMed]
Hou F, Huang CB, Lesmes L, et al. qCSF in clinical application: efficient characterization and classification of contrast sensitivity functions in amblyopia. Invest Ophthalmol Vis Sci. 2010; 51: 5365–5377. [CrossRef] [PubMed]
Lesmes LA, Wallis J, Lu Z-L, Jackson ML, Bex P. Clinical application of a novel contrast sensitivity test to a low vision population: The quick CSF method. Invest Ophthalmol Vis Sci. 2012; 53: 4358–4358.
Kalia A, Lesmes LA, Dorr M, et al. Development of pattern vision following early and extended blindness. Proc Natl Acad Sci. 2014; 111: 2035–2039. [CrossRef] [PubMed]
Marmalidou A, Kim EL, Silverman R, et al. A Novel Contrast Sensitivity Test as a New Measure of Visual Function in Central Serous Chorioretinopathy. Invest Ophthalmol Vis Sci. 2018; 59: 3126–3126.
Stellmann JP, Young KL, Pottgen J, Dorr M, Heesen C. Introducing a new method to assess vision: Computer-adaptive contrast-sensitivity testing predicts visual functioning better than charts in multiple sclerosis patients. Mult Scler J Exp Transl Clin. 2015; 1: 2055217315596184. [PubMed]
Gepshtein S, Lesmes LA, Albright TD. Sensory adaptation as optimal resource allocation. Proc Natl Acad Sci. 2013; 110: 4368–4373. [CrossRef] [PubMed]
Rosén R, Lundström L, Venkataraman AP, Winter S, Unsbo P. Quick contrast sensitivity measurements in the periphery. J Vis. 2014; 14(8): 3–3. [CrossRef] [PubMed]
Ferris FL, III, Kassoff A, Bresnick GH, Bailey I. New visual acuity charts for clinical research. Am J Ophthalmol. 1982; 94: 91–96. [CrossRef] [PubMed]
Ferris FL, Sperduto RD. Standardized illumination for visual acuity testing in clinical research. Am J Ophthalmol. 1982; 94: 97–98. [CrossRef] [PubMed]
Hou F, Lesmes LA, Kim W, et al. Evaluating the performance of the quick CSF method in detecting contrast sensitivity function changes. J Vis. 2016; 16: 18. [CrossRef] [PubMed]
Yamane N, Miyata K, Samejima T, et al. Ocular higher-order aberrations and contrast sensitivity after conventional laser in situ keratomileusis. Invest Ophthalmol Vis Sci. 2004; 45: 3986–3990. [CrossRef] [PubMed]
Applegate RA, Howland HC, Sharp RP, Cottingham AJ, Yee RW. Corneal aberrations and visual performance after radial keratotomy. J Refract Surg. 1998; 14: 397–407. [CrossRef] [PubMed]
Oshika T, Okamoto C, Samejima T, Tokunaga T, Miyata K. Contrast sensitivity function and ocular higher-order wave front aberrations in normal human eyes. Ophthalmology. 2006; 113: 1807–1812. [CrossRef] [PubMed]
Montés-Micó R, Charman WN. Choice of spatial frequency for contrast sensitivity evaluation after corneal refractive surgery. J Refract Surg. 2001; 17: 646–651. [CrossRef] [PubMed]
Scheirer CJ, Ray WS, Hare N. The analysis of ranked data derived from completely randomized factorial designs. Biometrics. 1976: 429–434.
Dorr M, Elze T, Wang H, et al. New precision metrics for contrast sensitivity testing. IEEE J Biomed Health Inform. 2017; 22: 919–925. [CrossRef] [PubMed]
Horn F, Martus P, Korth M. Comparison of temporal and spatiotemporal contrast-sensitivity tests in normal subjects and glaucoma patients. Ger J Ophthalmol. 1995; 4: 97–102. [PubMed]
Arden G, Jacobson J. A simple grating test for contrast sensitivity: preliminary results indicate value in screening for glaucoma. Invest Ophthalmol Vis Sci. 1978; 17: 23–32. [PubMed]
Bulens C . Application of contrast sensitivity in clinical neurology. 1988. Available at https://repub.eur.nl/pub/51052/.
Thurman SM, Davey PG, McCray KL, Paronian V, Seitz AR. Predicting individual contrast sensitivity functions from acuity and letter contrast sensitivity measurements. J Vis. 2016; 16(15): 15–15. [CrossRef] [PubMed]
Ginsburg A . Spatial filtering and vision: Implications for normal and abnormal vision. Clin Appl Vis Psychophys. 1981: 70–106.
Figure 1.
 
Stimuli for visual acuity and contrast sensitivity function assessments. (a) The Bailey-Lovie tumbling E chart has five high-contrast optotypes per row, ranging in size from 1.0 to −0.3 logMAR. (b) The CSV-1000E chart presents four sections of sine-wave gratings with different spatial frequencies and contrasts. (c) The stimuli in the quick CSF method are a set of three Sloan digits with different contrasts.
Figure 2.
 
Box plots of logMAR visual acuity (VA) obtained with the Bailey-Lovie tumbling E chart, illustrating changes in logMAR VA under four visual conditions. Within each box, the middle horizontal line denotes the median logMAR VA; the top and bottom of each box represent the twenty-fifth and seventy-fifth percentiles; the vertical extending lines and error bars denote adjacent values (i.e., the most extreme values within 1.5 interquartile ranges of the twenty-fifth and seventy-fifth percentiles of each condition). Pairwise comparisons showed a statistically significant difference (Pcorrected < 0.001, ***) in VA between the 0.8 Bangerter foil diffusion condition (0.8BAN) and the normal viewing condition (BCVA).
Figure 3.
 
Box plots of logCS obtained with the CSV-1000E at different spatial frequencies, and of the cutoff frequency and AULCSF, under four visual conditions. The details of the box plots are the same as in Figure 2. Blue crosses represent individuals whose values were statistical outliers. As shown in the subplots, post hoc multiple pairwise comparisons were used to examine statistical differences between the three simulated degraded conditions and the normal viewing condition (BCVA) with the outliers excluded. One, two, and three asterisks indicate Pcorrected ≤ 0.05, ≤ 0.01, and ≤ 0.001, respectively. P values are adjusted for multiple testing.
Figure 4.
 
Box plots of log contrast sensitivity (CS), cutoff spatial frequency, and the area under the log contrast sensitivity function (AULCSF) obtained with the quick CSF under four visual conditions. The details of the box plots are the same as in Figure 3. Post hoc multiple comparisons showed statistically significant differences between the three simulated degraded conditions and the normal viewing condition (Pcorrected < 0.001, ***; P values are adjusted for multiple testing) in logCS at the same four SFs as the CSV-1000E, as well as in cutoff frequency and AULCSF, with the outliers excluded.
Table 1.
 
Summary of logMAR Visual Acuity and Log Contrast Sensitivity Values Under Different Visual Conditions
Table 2.
 
Multivariate Linear Regression of Contrast Sensitivity Discrepancy for Spatial Frequency, Visual Condition, and Measure Variables