August 2023
Volume 12, Issue 8
Open Access
Detectability of Visual Field Defects in Glaucoma Using Moving Versus Static Stimuli for Perimetry
Author Affiliations & Notes
  • Stuart K. Gardiner
    Devers Eye Institute, Legacy Health, Portland, OR, USA
  • Steven L. Mansberger
    Devers Eye Institute, Legacy Health, Portland, OR, USA
  • Correspondence: Stuart K. Gardiner, Devers Eye Institute, Legacy Research Institute, 1225 NE 2nd Ave, Portland, OR 97232, USA. e-mail: sgardiner@deverseye.org 
Translational Vision Science & Technology August 2023, Vol.12, 12. doi:https://doi.org/10.1167/tvst.12.8.12
Abstract

Purpose: We have previously shown that using moving, instead of static, stimuli extends the effective dynamic range of automated perimetry in glaucoma. In this study, we further investigate the effect of using moving stimuli on the detectability of functional loss.

Methods: We used two experimental perimetry paradigms to test 155 subjects with a diagnosis of glaucoma or glaucoma suspect, and 34 healthy control subjects. One test used stimuli moving parallel to the average nerve fiber bundle orientation at each location; the other used static stimuli. Algorithms were otherwise identical. Sensitivities to moving stimuli were transformed to the equivalent values for static stimuli based on a Bland–Altman plot. The proportions of locations outside age-corrected normative limits were compared, and test–retest variability was compared against defect depth for each stimulus type.

Results: More tested locations were below the fifth percentile of the normative range for that location using static stimuli. However, among locations abnormal according to standard clinical perimetry on the same day, 19.2% were abnormal using static stimuli, versus 20.5% using moving stimuli (P = 0.372). Test–retest variability was 44% lower for moving stimuli across the range of defect depths.

Conclusions: When compared with static automated perimetry and expressed on a common scale, moving stimuli extend the effective dynamic range and decrease variability, without decreasing the detectability of known functional defects.

Translational Relevance: Moving stimuli provide a method to improve known problems of current clinical perimetry.

Introduction
Clinicians routinely use static automated perimetry to obtain an assessment of a patient's visual function across the central visual field.1 In regions of the visual field with moderate or severe damage, structural measurements from optical coherence tomography are subject to a pronounced floor effect,2 and so functional testing becomes the primary modality used for the assessment of further disease progression. However, both short-term and long-term variability of automated perimetry increase with the severity of loss,3 such that the floor of the effective dynamic range (below which it is impossible to obtain reliable repeatable measurements in a clinically realistic paradigm) is approximately 15 to 19 dB,4 well above the floor of the technical dynamic range from the instrument.5,6 Better methods are needed to increase the dynamic range and hence allow reliable testing at locations with more severe glaucomatous loss. 
We have recently shown that using a test stimulus for perimetry that moves parallel to the nerve fiber bundles, instead of the stationary stimulus currently used, extends the effective dynamic range, increasing sensitivities so that they remain above the 15 to 19 dB floor until later in the disease.7 In healthy observers, moving visual stimuli are easier to detect than static stimuli of the same contrast,8–10 owing to stimulation of motion-sensitive retinal ganglion cells11 and cortical mechanisms responsible for motion detection12,13; a similar increase in sensitivity was found at glaucomatous locations. The moving stimulus approach still asks the patient to respond whether each presentation was seen or not seen (unlike, e.g., kinetic perimetry), and participants reported preferring the moving stimulus, possibly because they had greater certainty over whether or not they saw it.7 The next step is to assess the relative ability of using moving versus static stimuli for the two main tasks in which perimetry is relied upon: detection of defects (i.e., signal-to-noise for distinguishing damaged locations from those within normal limits) and assessment of rate of progression (i.e., signal-to-noise for determining the rate of change). 
In this study, we assess defect depths using each of the two stimulus types, in a large population that is very experienced with performing automated perimetry. We then determine the test–retest variability at different severities, as a form of signal-to-noise analysis. To fairly compare the stimuli, sensitivities for moving stimuli are first transformed to the equivalent values for static stimuli, and only locations that remain within the effective dynamic range for both are analyzed. The actual detectability of defects that would be achieved in clinical care depends on the testing algorithms used, but by using the exact same locations and testing algorithm for both, and equating the measurement scales, we can assess the effect that is purely due to the stimulus type. This process will allow us to determine whether a moving stimulus could be suitable for the detection of functional defects, or whether it is preferable to use a static stimulus for detection, then gradually introduce stimulus movement once sensitivity decreases to extend the dynamic range in moderate to severe glaucoma. 
Methods
Participants
Data were taken from the ongoing longitudinal Portland Progression Project (P3).3,14,15 Subjects with a clinical diagnosis of glaucoma or glaucoma suspect, at the sole discretion of their clinician, undergo a range of diagnostic tests once every 6 months. Patients are excluded if they have detectable visual field loss from a cause other than glaucoma, with the exception of mild cataract, or if they are unable to produce reliable results when performing automated perimetry. Inclusion and exclusion criteria are kept deliberately liberal to better represent a typical clinical population. On each test date, among other diagnostic tests (intraocular pressure, optical coherence tomography, etc.), participants perform visual field testing on the Humphrey Field Analyzer (HFA) perimeter (Carl-Zeiss Meditec Inc, Dublin, CA), with the SITA Standard testing algorithm,16 using Size III (0.43° diameter) stimuli in a 24-2 grid. It is now known that the reliability indices from the HFA do not reflect test performance accurately,17,18 and so reliability is instead assessed by the technicians performing the test, who monitor fixation using the instrument's built-in camera and provide reminders and encouragement as needed. Results from two test dates were used for this analysis. If results from the two experimental perimetric tests and HFA perimetry were available from more than two dates, the most recent two were used, subject to the requirement that there had been no ocular surgery during that interval. 
A separate group of healthy control eyes was tested cross-sectionally. Participants in this cohort had to be free of any pathology likely to cause a detectable visual field defect (glaucoma, diabetic retinopathy, age-related macular degeneration, etc.) other than mild cataract. All test procedures were the same for these control subjects as for those in the P3 cohort. 
Experimental Perimetry
Subjects underwent two experimental functional tests in the same eye: once using 500 ms Size V static stimuli, and once using 500 ms Size V moving stimuli, with otherwise identical testing algorithms. The order of the testing was randomized at each visit. The worst eye, according to mean deviation from the most recent previous HFA visual field test before beginning the experimental perimetry study, was used for testing on all subsequent test dates. 
Full details of the testing have been previously reported.7 In brief, 34 locations were tested, chosen from those in the 24-2 testing grid, to cover the full extent of that grid while limiting test duration to approximately 5 minutes per stimulus type. A clinically realistic ZEST algorithm was used, with five stimulus presentations at four seed points located at (±9°, ±9°). This was followed by four stimulus presentations at each of the other 30 locations, with the initial prior probability density function being weighted toward the sensitivity of the nearest seed point to make the testing algorithm more efficient. Thus, the total number of stimulus presentations was the same for both static and moving stimuli, with the only potential source of differences in test duration being due to variations in response times. A practice test comprising two presentations at each of the four seed points was performed first to decrease the learning effect. Testing was performed on an Octopus perimeter (Haag-Streit Inc, Köniz, Switzerland), externally controlled by custom-written software in R version 4.2.219 using the Open Perimetry Interface version 1.6,20 on a 10 cd/m2 background. To increase the proportion of locations that were within the effective dynamic range, Size V stimuli were used for both moving and static stimuli. The 500 ms stimuli were used to decrease the frequency of responses occurring after the end of the stimulus presentation time; owing to firmware limitations of the instrument, such responses would be recorded for static stimuli, but not for moving stimuli. As with HFA perimetry, fixation was monitored continuously by the technician using the device's inbuilt camera. Reliability was assessed subjectively by the technician who was conducting and observing the test, and testing was repeated if warranted. Testing was approved by the local institutional review board and adhered to the tenets of the Declaration of Helsinki. 
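The ZEST procedure described above can be sketched as follows. This is an illustrative Python sketch with a simulated observer, not the authors' implementation (the study used custom R software via the Open Perimetry Interface); the psychometric slope, false-response rates, and threshold domain are assumptions for the example.

```python
import math

def zest(respond, n_presentations=20, sigma=2.0, fp=0.03, fn=0.03):
    """Minimal ZEST: maintain a probability density over candidate
    thresholds, present each stimulus at the mean of the density, and
    update the density by Bayes' rule after each seen/not-seen response.

    respond(intensity_db) -> True if the observer reports "seen".
    """
    domain = [float(d) for d in range(0, 41)]        # candidate thresholds (dB)
    pdf = [1.0 / len(domain)] * len(domain)          # uniform prior

    def p_seen(x, t):
        # Cumulative-Gaussian psychometric function, with floor/ceiling
        # set by assumed false-positive (fp) and false-negative (fn) rates.
        p = 0.5 * (1.0 + math.erf((t - x) / (sigma * math.sqrt(2.0))))
        return fp + (1.0 - fp - fn) * p

    for _ in range(n_presentations):
        x = sum(d * p for d, p in zip(domain, pdf))  # present at pdf mean
        seen = respond(x)
        pdf = [p * (p_seen(x, t) if seen else 1.0 - p_seen(x, t))
               for t, p in zip(domain, pdf)]
        total = sum(pdf)
        pdf = [p / total for p in pdf]               # renormalize

    return sum(d * p for d, p in zip(domain, pdf))   # final estimate = pdf mean
```

In the study, the prior at non-seed locations was instead weighted toward the sensitivity of the nearest seed point, and only four or five presentations were used per location.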
At each test location, the moving stimulus used travels in a straight line parallel to the average nerve fiber bundle orientation at that location, according to the equations of Jansonius et al.21 At locations (±9°, ±15°), the stimulus travels at 5°/s: starting 1.25° closer to the blind spot than the designated location, passing through the designated location after 250 ms, and ending 1.25° further from the blind spot after 500 ms. At other eccentricities, stimulus speed was scaled proportional to 1/M, where M = 17.3/(Eccentricity + 0.75) represents the cortical magnification factor,22 so that stimuli of the same contrast are approximately equally resolvable across the visual field.23 The distance travelled by the moving stimulus ranged from 1.1° at the four central locations (±3°, ±3°), to 6.1° at the two temporal locations (27°, ±3°). Videos showing examples of the moving and static stimuli are available as supplementary movies in our prior publication.7 
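The eccentricity-dependent speed scaling can be sketched as below. This is an illustrative Python sketch under the stated assumptions (speed proportional to 1/M, anchored at 5°/s at the (±9°, ±15°) locations); the trajectory geometry along the nerve fiber bundles is not reproduced here.

```python
import math

# Reference: stimulus speed is 5 deg/s at the (+/-9, +/-15) locations.
REF_ECC = math.hypot(9.0, 15.0)
REF_SPEED = 5.0

def cortical_magnification(ecc_deg):
    """Linear cortical magnification factor M = 17.3 / (eccentricity + 0.75),
    in mm of cortex per degree of visual angle."""
    return 17.3 / (ecc_deg + 0.75)

def stimulus_speed(ecc_deg):
    """Stimulus speed scaled proportional to 1/M, so that stimuli of the
    same contrast are approximately equally resolvable across the field."""
    scale = cortical_magnification(REF_ECC) / cortical_magnification(ecc_deg)
    return REF_SPEED * scale
```

Speeds therefore increase with eccentricity; the published travel distances (1.1° to 6.1° over the 500 ms presentation) additionally reflect per-location implementation details not captured by this scaling rule alone.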
Data Analysis
First, a Bland–Altman analysis was used to find the average bias between the two stimulus types, using the equation (Moving – Static) = α + β × (Moving + Static), including both healthy and glaucomatous eyes. This process was used to convert raw moving stimulus sensitivities to the equivalent static stimulus sensitivities, ensuring that they can be compared on a common scale (note that although multiple data points are present for each eye, so long as the number of locations per eye is constant, such clustering only affects the statistical significance of comparisons and not the actual regression line; hence, adjusting for clustering was unnecessary when deriving this conversion). Next, a generalized estimating equation linear model,24 accounting for multiple locations per eye, was used to test whether the rate of age-related decline differed between the two stimulus types (i.e., whether the interaction between age and stimulus type was significant) when expressed on this common scale, using the healthy control eyes. Having established that this rate of decline did not differ significantly, a common correction factor was derived by linear regression of the sensitivities for both stimulus types in the control eyes against age, and this was used to age correct all sensitivities to their equivalent for a 60-year-old individual. 
The most recent age-corrected sensitivities at each location from participants in the P3 cohort were compared against percentiles of the empirical distribution of age-corrected sensitivities in the healthy control eyes. This was used to calculate the percentage of locations that were outside normative limits, using different percentile cutoffs for normal for each stimulus type. This process was performed first using all locations, and secondly restricted only to the subset of those locations that were abnormal according to HFA testing, defined as a total deviation value with a P value of less than 5% on the same test date. 
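The normative-limit comparison uses empirical percentiles of the control distribution at each location. An illustrative Python sketch (the analysis was performed in R; the interpolation method here is an assumption):

```python
def empirical_percentile(values, pct):
    """Empirical percentile (pct in 0-100), with linear interpolation
    between order statistics."""
    s = sorted(values)
    if len(s) == 1:
        return s[0]
    rank = (pct / 100.0) * (len(s) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(s) - 1)
    frac = rank - lo
    return s[lo] * (1.0 - frac) + s[hi] * frac

def outside_normative_limits(sens_db, control_sens_db, pct=5.0):
    """Flag a location if its age-corrected sensitivity falls below the
    chosen percentile of the healthy-control distribution for that
    location (fifth or first percentile in the text)."""
    return sens_db < empirical_percentile(control_sens_db, pct)
```

With only 34 control eyes, the fifth percentile sits just above the lowest one or two observed control values, which is why the text later cautions that these limits are estimated imprecisely.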
Test–retest variability was defined as the difference between the first and second test dates at each location. Because visits were 6 months apart, this result constitutes the sum of the short-term and long-term variability, reflecting the variability that would be seen in clinical practice.3 This information was plotted against average deviation from age-corrected normal, for each stimulus type (again, after converting moving sensitivities to the equivalent static sensitivities). LOESS regression lines were calculated, using smoothing parameter 0.25; that is, the regression line at a given point is based on up to 25% of the total locations with tricubic weighting determined by the difference in deviation from that point on the curve.25 The predicted value on this LOESS curve can then be used as a form of signal-to-noise analysis demonstrating the ability of each stimulus type to identify locations as being abnormal. All analyses were performed in R version 4.2.1.19 
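The LOESS smoothing with tricube weighting can be sketched as follows. This is an illustrative degree-1 local regression in Python (the study used R's implementation; details such as neighbor selection are simplified assumptions):

```python
import math

def loess(x, y, span=0.25):
    """Degree-1 LOESS: for each point, fit a weighted linear regression
    using roughly span * n nearest neighbours, with tricube weights
    w = (1 - d^3)^3 on normalized distance d. Returns (sorted_x, fitted)."""
    n = len(x)
    k = max(2, math.ceil(span * n))
    order = sorted(range(n), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]
    fitted = []
    for x0 in xs:
        h = sorted(abs(xj - x0) for xj in xs)[k - 1] or 1.0  # local bandwidth
        sw = swx = swy = swxx = swxy = 0.0
        for xj, yj in zip(xs, ys):
            d = abs(xj - x0) / h
            if d < 1.0:
                w = (1.0 - d ** 3) ** 3                      # tricube weight
                sw += w; swx += w * xj; swy += w * yj
                swxx += w * xj * xj; swxy += w * xj * yj
        denom = sw * swxx - swx * swx
        if abs(denom) < 1e-12:
            fitted.append(swy / sw)                          # fall back to weighted mean
        else:
            b = (sw * swxy - swx * swy) / denom
            a = (swy - b * swx) / sw
            fitted.append(a + b * x0)
    return xs, fitted
```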
Results
The P3 cohort used in this analysis consisted of 155 subjects, with an average age at their most recent visit of 72.2 years (range, 51–91 years) and an average HFA mean deviation on that visit of −3.2 dB (range, −20.0 to +2.3 dB). Before age correction, the average pointwise sensitivity in the experimental perimetry using static stimuli was 27.1 ± 6.0 dB; using moving stimuli, it was 27.9 ± 4.4 dB. The median interval between visits used for the test–retest analysis was 182 days, with an interquartile range of 176 to 196 days. Three individuals had gaps of 383, 432, and 563 days; these individuals had no evidence of significant visual field change during that time, with changes in their mean deviation of −0.8, +2.0, and −0.9 dB, respectively. The separate healthy control cohort consisted of 34 participants, with an average age of 61.1 years (range, 50–86 years) and an average mean deviation of −0.2 dB (range, −1.9 to +2.0 dB). 
Figure 1 shows a Bland–Altman plot comparing sensitivities from the two stimulus types, before age correction. Based on this plot, moving sensitivities were converted to the equivalent values for static sensitivities using the equation (Moving − StaticEquivalent) = 11.496 − 0.184 × (Moving + StaticEquivalent). After conversion, the interaction between age and stimulus type was not a significant predictor of sensitivity among the healthy control eyes (P = 0.149), and so the same age correction of −0.132 dB/y was applied to both stimulus types. The distributions of pointwise sensitivities for each stimulus, on the static sensitivity decibel scale, after age correction, are shown in the histograms in Figure 2.
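Given the fitted line above, converting a moving-stimulus sensitivity to its static equivalent amounts to solving the Bland–Altman relation for StaticEquivalent. An illustrative Python sketch (the analysis was performed in R; the function names are ours):

```python
ALPHA, BETA = 11.496, -0.184   # fitted Bland-Altman intercept and slope (Figure 1)
AGE_SLOPE = -0.132             # common age-related decline, dB per year
REF_AGE = 60.0                 # sensitivities corrected to a 60-year-old equivalent

def static_equivalent(moving_db):
    """Solve (Moving - Static) = ALPHA + BETA * (Moving + Static) for Static:
    Static = ((1 - BETA) * Moving - ALPHA) / (1 + BETA)."""
    return ((1.0 - BETA) * moving_db - ALPHA) / (1.0 + BETA)

def age_corrected(sens_db, age_years):
    """Correct a sensitivity to its 60-year-old equivalent by removing
    the expected age-related decline."""
    return sens_db - AGE_SLOPE * (age_years - REF_AGE)
```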
Figure 1.
 
Bland–Altman plot comparing sensitivities to moving versus static stimuli, on their native decibel scales, before age correction. The line of best fit was given by (Moving − Static) = 11.496 − 0.184 × (Moving + Static).
Figure 2.
 
Distributions of age-corrected pointwise sensitivities for static (left) and moving (right) stimuli, after correction to the equivalent value for a static stimulus for a patient aged 60 years. Gray bars are for the P3 cohort of glaucoma suspect/glaucoma eyes, on their most recent test date; overlaid black bars are for healthy control eyes. The total number of locations is the same for both stimuli, but there is a wider spread when using static stimuli.
Figure 3 shows stacked bar charts of the proportions of locations in eyes in the P3 cohort that were outside normative limits for that location on the most recent test date, for each stimulus type. Among all locations, there were slightly more locations outside normative limits using static stimuli. Using age-adjusted sensitivities, 23.4% were below the fifth percentile in the healthy control cohort when using static stimuli versus 19.9% using moving stimuli (P < 0.001, McNemar's test), and 13.6% were below the first percentile using static stimuli versus 10.8% using moving stimuli (P < 0.001). Among just those locations that had total deviation outside normal limits on the HFA on the same test date, 19.2% were below the fifth percentile of age-adjusted normative sensitivities using static stimuli versus 20.5% using moving stimuli (P = 0.372); 13.2% were below the first percentile using static stimuli versus 13.1% using moving stimuli (P = 1.000). Figure 4 shows the concordance between stimulus types in detecting locations with age-adjusted sensitivity outside normative limits. While acknowledging the caveat that the experimental perimetry paradigms are not optimized to the same extent as the SITA standard algorithm for the HFA, it is notable that the moving stimulus resulted in slightly higher agreement than the static stimulus with HFA sensitivities. 
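The paired comparisons above use McNemar's test on the discordant locations (abnormal by one stimulus type but not the other). A minimal exact-test sketch in Python; the counts in the usage example are hypothetical, not taken from the study.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test: under the null hypothesis, the b discordant
    pairs of one kind out of n = b + c follow Binomial(n, 0.5).
    Returns the two-sided P value.
    b = locations abnormal by static only; c = abnormal by moving only."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)         # double the smaller tail, capped at 1

# Hypothetical example: 40 locations abnormal by static only,
# 30 abnormal by moving only.
p_value = mcnemar_exact(40, 30)
```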
Figure 3.
 
Percentage of locations that were outside age-corrected normative limits for that location on the most recent test date for each stimulus type. (Top) Percentage among all tested locations. (Bottom) Percentage among locations that had abnormal total deviation value on the same date according to standard clinical perimetry with the HFA.
Figure 4.
 
A Venn diagram showing the proportions of locations whose age-corrected sensitivity was abnormal at the P < 5% level using the two experimental perimetry paradigms, and using current clinical perimetry in the form of the SITA Standard algorithm for the HFA.
Figure 5 shows the test–retest variability for the 105 subjects who have been tested on at least two test dates, defined as the absolute difference between age-corrected sensitivities from their most recent consecutive test dates (after converting moving sensitivities to their static sensitivity equivalent using the equation given above), plotted against the deviation from average age-corrected normal (averaged between those two test dates). The median test–retest variability was 1.82 dB for moving stimuli on the static sensitivity equivalent scale versus 3.48 dB for static stimuli. The LOESS regression line for each stimulus type is shown, based only on locations with a sensitivity of more than 19 dB (on their native scale) for both stimuli, that is, high enough to be within the effective dynamic range for reliable results for both. At low sensitivities, the variability seems to be slightly higher for the moving stimulus, but this is likely an artifact caused by the chosen floor. Locations were excluded from this plot if their sensitivity was below 19 dB for static stimuli (11.3% of locations) and/or for moving stimuli (6.2% of locations), but 2.7% of the remaining included locations had sensitivities to moving stimuli that were actually less than 19 dB after conversion to the static equivalent decibel scale. Thus, static sensitivities are effectively being censored at a higher value on this common scale, decreasing their apparent test–retest variability. Across the rest of the range, test–retest variability was lower using moving stimuli, for the same defect depth, when expressed in equivalent units, indicating an improved signal-to-noise ratio for detecting functional loss. For deviations between −10 dB and +5 dB, the test–retest variability for moving stimuli (the predicted absolute test–retest difference based on the LOESS regression line shown in Fig. 5) is on average 56% of the test–retest variability for static stimuli at the same severity of loss. 
Figure 5.
 
Test–retest variability plotted against defect depth for each stimulus type. Variability was defined as the absolute difference between consecutive test dates (typically 6 months apart). Sensitivities for the moving stimulus were converted to the equivalent values that would be expected for a static stimulus before plotting, based on a Bland–Altman plot comparing the two. Solid lines represent LOESS regression fits to the data. Points and lines in red are for static stimuli; points and lines in blue are for moving stimuli.
Discussion
Current clinical automated perimetry has been largely unchanged for the last thirty years, despite its well-known limitations. Test–retest variability is high, especially at damaged locations,3 and, partly as a consequence of this, the floor of the effective dynamic range is reached well before functional blindness (and indeed well before 0 dB, which is defined not by any physiologic properties but purely by hardware limitations of the instrument used).4 The first of those issues can be reduced, but not eliminated, by making changes to the testing algorithm, perhaps by incorporating structural information.26 The second issue can be attenuated, but not eliminated, by using larger test stimuli.27 We have proposed that further improvements can be made to address both of these problems by using moving, rather than static, test stimuli.7 However, there are two equally reasonable approaches to doing so. A constant amount of movement could be introduced at all stimulus contrasts, with that amount of movement determined only by location within the visual field, as was done here. Alternatively, the distance over which the stimulus moves could be minimal or even zero at healthy locations, increasing as sensitivity decreases. With those two approaches in mind, this study was designed to evaluate the impact of stimulus movement on detectability of defects of different severities. 
Both static and moving stimuli showed fewer locations outside normal limits than the HFA, which could be attributed to differences in the testing algorithm (which was deliberately not optimized, to allow a more transparent comparison) or to the larger stimulus size (both experimental perimetry paradigms used a Size V stimulus, rather than the Size III stimulus that is most commonly used with the HFA and Octopus perimeters); a formal and less caveated comparison between SAP and motion perimetry is needed to determine which test is optimal for the detection and monitoring of functional deficits in glaucoma. This study was designed to perform the perhaps more important comparison between the two experimental paradigms, for which all experimental details other than stimulus movement were kept the same. When considering the entire dataset, there were more locations outside age-corrected normal limits for the static stimulus than for the moving stimulus. However, many of these seem to be false positives, owing to the higher test–retest variability with static stimuli. When restricting the analysis to only those locations that were outside age-corrected normal limits when tested with the HFA, there was no significant difference in defect detectability between the two stimulus types. 
On the basis of defect detectability alone then, there is no clear preference between having a constant amount of motion at all contrasts versus increasing motion with contrast. Therefore, practical considerations become paramount. Having the same amount of stimulus movement at a given location regardless of contrast is simpler. With a projection perimeter, that may make the firmware easier to adapt; although with the advent of tablet-based and headset-based perimetry,28,29 the relevant hardware limitations become negligible. It is possible that having the same amount of movement at all contrasts would make the task easier to understand for patients, hence improving reliability of the results, but that notion has not been tested. Conversely, having zero stimulus movement at healthy locations and then gradually increasing the amount of motion at higher contrasts that are used once sensitivity decreases guarantees that detectability of defects will be the same as with the current testing paradigm and could mean that existing normative databases could be reused, because the paradigm in the normative range would not have changed. 
The testing algorithm used for the experimental perimetry was chosen to allow easy, direct comparison between the two stimulus types at individual locations. For a clinical test, the algorithm would be optimized extensively, for example, by using spatial filtering to decrease variability as is done in the SITA family of algorithms.16 It would also be necessary to collect a much larger normative database: defining "outside normal limits" as being below the fifth percentile for an individual location among only 34 healthy control eyes means that just one observed normal value lies below that point, so the limit may not be estimated very accurately. Indeed, the proportion of locations that were outside these normative limits varied between locations, from 13% to 49% for the static stimulus, and from 3% to 42% for the moving stimulus. Therefore, not much can be read into the actual quantifications of defect detectability for each stimulus type; the important thing is the comparison between the two types and the fact that detectability is not made significantly worse by using moving stimuli. What is also clear from these results is that test–retest variability is significantly decreased by using moving stimuli across a wide range of sensitivities. This outcome supports and extends the results of our earlier study7; in this study, we show that this remains the case even after equalizing the measurement scales. 
A key caveat when interpreting the results is caused by the firmware on the Octopus perimeter that was used to conduct testing. The moving stimulus was implemented by adapting a kinetic perimetry stimulus, which means that patient responses are only recorded during the time over which the stimulus is being presented. By contrast, for the static stimulus, the response window extends for a further 500 ms beyond that time. We did not measure the exact timing of responses within this window, but it is likely that some occurred after the stimulus turned off; thus, a positive response is recorded even when the test subject only detects the stimulus turning off, which is common near the detection threshold.30 It is likely that the true sensitivity to a moving stimulus, when allowing an extended response window, would often be higher than those reported here. It could also be predicted that test–retest variability would be further reduced by extending the response window. Thus, some of the benefits of using a moving stimulus are underestimated in this experiment. The impact on the detectability of defects is harder to predict, but it is certainly possible that extending the response window would tighten the normative limits (by decreasing variability), improving the detectability of defects outside that range. In addition, because the study cohort included some participants with a degree of lens opacity (for example, owing to mild cataract), and visual acuity for moving targets can be degraded by such opacity to a greater extent than for equivalent static targets,31 the increase in sensitivity with the moving stimulus may be further underestimated. 
The moving stimuli follow a fixed, straight line trajectory at each location. It is possible that this factor partly explains the higher sensitivities observed using the moving stimulus, if it passes into less damaged regions, although it should be noted that sensitivities were also higher for moving stimuli at healthy locations. Personalization to an individual's anatomy could be advantageous, although doing so without introducing further variability owing to the mapping process would be challenging; it would likely require a scanning laser ophthalmoscope built into the perimeter, because moving the test subject between instruments inevitably results in at least some degree of ocular cyclotorsion and other fixational variations. Further, the chosen orientations are based on traces of nerve fiber bundles and do not take into account the displacement of receptive fields from those bundles; it is possible that such displacements may differ between motion-sensitive ganglion cells, causing small differences in the structure–function relation between the two stimulus types. 
There are a few further caveats that should have less effect on the results. Fixation was not quantified, and so we cannot formally assess whether the moving stimulus induces greater fixation instability. Such instability may be more common for our experimental perimetry than for clinical testing, owing to the longer stimulus duration, although this factor should apply equally to both moving and static stimuli. Fixational instability may be made more likely by the greater detectability of moving stimuli at the same contrast, but this should have little effect when near the detection threshold for that particular stimulus type. Stimulus movement decreases the precision of location information, and the extent of that caveat depends on the assumption of very high correlation between levels of damage at locations along the same nerve fiber bundle. Subjects had clinical diagnoses of glaucoma or glaucoma suspect, and the advantages and disadvantages of the technique for patients with differing or coexisting ocular pathologies are not yet known; in particular, their defects are less likely to follow nerve fiber bundle orientations. For example, statokinetic dissociation has been reported as a consequence of hemianopsia,32 which could alter the relative usefulness of static versus kinetic stimuli. 
In summary, we found that using moving instead of static stimuli for a clinically realistic automated perimetry task did not reduce the probability that a damaged visual field location (according to the HFA) would have a detectable defect in our experimental paradigm. Meanwhile, the moving stimulus had lower test–retest variability when expressed on equivalent scales across a wide range of severities of functional loss. This outcome supplements the previous finding that the moving stimulus extends the effective dynamic range, allowing reliable measurements at locations with more severe functional loss. Further studies are needed to determine the effect on the accuracy and reliability of measures of the rate of disease progression. For testing at a single visit, however, switching to a moving stimulus seems to hold multiple advantages over current standard clinical perimetry.
Acknowledgments
Supported by NEI R01 EY020922 (to author SKG); NEI R01 EY031686 (to author SKG); Good Samaritan Foundation. 
Disclosure: S.K. Gardiner, Legacy Health (P); S.L. Mansberger, None 
References
1. Stagg BC, Stein JD, Medeiros FA, et al. The frequency of visual field testing in a US nationwide cohort of individuals with open angle glaucoma. Ophthalmol Glaucoma. 2022; 5(6): 587–593.
2. Moghimi S, Bowd C, Zangwill LM, et al. Measurement floors and dynamic ranges of OCT and OCT angiography in glaucoma. Ophthalmology. 2019; 126(7): 980–988.
3. Gardiner SK, Swanson WH, Mansberger SL. Long- and short-term variability of perimetry in glaucoma. Transl Vis Sci Technol. 2022; 11(8): 3.
4. Gardiner SK, Swanson WH, Goren D, Mansberger SL, Demirel S. Assessment of the reliability of standard automated perimetry in regions of glaucomatous damage. Ophthalmology. 2014; 121(7): 1359–1369.
5. Heijl A, Lindgren A, Lindgren G. Test-retest variability in glaucomatous visual fields. Am J Ophthalmol. 1989; 108(2): 130–135.
6. Artes PH, Iwase A, Ohno Y, Kitazawa Y, Chauhan BC. Properties of perimetric threshold estimates from full threshold, SITA standard, and SITA fast strategies. Invest Ophthalmol Vis Sci. 2002; 43(8): 2654–2659.
7. Gardiner SK, Mansberger SL. Moving stimulus perimetry: a new functional test for glaucoma. Transl Vis Sci Technol. 2022; 11(10): 9.
8. Rowe F. Methods of visual field assessment. In: Visual fields via the visual pathway. Hoboken, NJ: Blackwell Publishing Ltd; 2008: 27–51.
9. Gandolfo E, Rossi F, Ermini D, Zingirian M. Early perimetric diagnosis of glaucoma by stato-kinetic dissociation. In: Mills R, Wall M, eds. Perimetry update 1994/1995. Amsterdam/New York: Kugler; 1995: 271–276.
10. Gandolfo E. Stato-kinetic dissociation in subjects with normal and abnormal visual fields. Eur J Ophthalmol. 1996; 6(4): 408–414.
11. Manookin MB, Patterson SS, Linehan CM. Neural mechanisms mediating motion sensitivity in parasol ganglion cells of the primate retina. Neuron. 2018; 97(6): 1327–1340.e4.
12. Born RT, Bradley DC. Structure and function of visual area MT. Annu Rev Neurosci. 2005; 28: 157–189.
13. Bach M, Hoffmann MB. Visual motion detection in man is governed by non-retinal mechanisms. Vision Res. 2000; 40(18): 2379–2385.
14. Gardiner SK, Mansberger SL, Demirel S. Detection of functional change using cluster trend analysis in glaucoma. Invest Ophthalmol Vis Sci. 2017; 58(6): Bio180–Bio190.
15. Gardiner SK. Differences in the relation between perimetric sensitivity and variability between locations across the visual field. Invest Ophthalmol Vis Sci. 2018; 59(8): 3667–3674.
16. Bengtsson B, Olsson J, Heijl A, Rootzen H. A new generation of algorithms for computerized threshold perimetry, SITA. Acta Ophthalmol Scand. 1997; 75(4): 368–375.
17. Heijl A, Patella VM, Flanagan JG, et al. False positive responses in standard automated perimetry. Am J Ophthalmol. 2022; 233: 180–188.
18. Yohannan J, Wang J, Brown J, et al. Evidence-based criteria for assessment of visual field reliability. Ophthalmology. 2017; 124(11): 1612–1620.
19. R Development Core Team. R: A language and environment for statistical computing, 4.0.0 ed. Vienna, Austria: R Foundation for Statistical Computing; 2020.
20. Turpin A, Artes PH, McKendrick AM. The Open Perimetry Interface: an enabling tool for clinical visual psychophysics. J Vis. 2012; 12(11): 22.
21. Jansonius NM, Nevalainen J, Selig B, et al. A mathematical description of nerve fiber bundle trajectories and their variability in the human retina. Vision Res. 2009; 49(17): 2157–2163.
22. Horton JC, Hoyt WF. The representation of the visual field in human striate cortex. A revision of the classic Holmes map. Arch Ophthalmol. 1991; 109(6): 816–824.
23. Johnston A, Wright MJ. Visual motion and cortical velocity. Nature. 1983; 304(5925): 436–438.
24. Liang K, Zeger S. Longitudinal data analysis using generalized linear models. Biometrika. 1986; 73: 13–22.
25. Cleveland WS. Robust locally weighted regression and smoothing scatterplots. J Am Stat Assoc. 1979; 74(368): 829–836.
26. Denniss J, McKendrick AM, Turpin A. Towards patient-tailored perimetry: automated perimetry can be improved by seeding procedures with patient-specific structural information. Transl Vis Sci Technol. 2013; 2(4): 3.
27. Gardiner SK, Demirel S, Goren D, Mansberger SL, Swanson WH. The effect of stimulus size on the reliable stimulus range of perimetry. Transl Vis Sci Technol. 2015; 4(2): 10.
28. Vingrys AJ, Healey JK, Liew S, et al. Validation of a tablet as a tangent perimeter. Transl Vis Sci Technol. 2016; 5(4): 3.
29. Matsumoto C, Yamao S, Nomoto H, et al. Visual field testing with head-mounted perimeter ‘imo’. PLoS One. 2016; 11(8): e0161974.
30. Gardiner SK, Swanson WH, Demirel S, McKendrick AM, Turpin A, Johnson CA. A two-stage neural spiking model of visual contrast detection in perimetry. Vision Res. 2008; 48(18): 1859–1869.
31. Ao M, Li X, Huang C, Hou Z, Qiu W, Wang W. Significant improvement in dynamic visual acuity after cataract surgery: a promising potential parameter for functional vision. PLoS One. 2014; 9(12): e115812.
32. Hayashi R, Yamaguchi S, Narimatsu T, Miyata H, Katsumata Y, Mimura M. Statokinetic dissociation (Riddoch phenomenon) in a patient with homonymous hemianopsia as the first sign of posterior cortical atrophy. Case Rep Neurol. 2017; 9(3): 256–260.
Figure 1.
 
Bland–Altman plot comparing sensitivities to moving versus static stimuli, on their native decibel scales, before age correction. The line of best fit was given by (Moving − Static) = 11.496 − 0.184 × (Moving + Static).
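As an illustrative sketch (not the authors' code), the caption's line of best fit can be inverted algebraically to convert a moving-stimulus sensitivity into its static-stimulus equivalent: rearranging (Moving − Static) = 11.496 − 0.184 × (Moving + Static) for Static gives Static = (1.184 × Moving − 11.496) / 0.816. The function name is our own.

```python
def static_equivalent(moving_db):
    """Convert a moving-stimulus sensitivity (dB) to its static-stimulus
    equivalent by inverting the Bland-Altman line of best fit,
    (Moving - Static) = 11.496 - 0.184 * (Moving + Static)."""
    return (1.184 * moving_db - 11.496) / 0.816
```

For example, a moving-stimulus sensitivity of 30 dB maps to roughly 29.4 dB static-equivalent; the two scales agree exactly at 11.496 / 0.368 ≈ 31.2 dB, where the fitted difference is zero.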
Figure 2.
 
Distributions of age-corrected pointwise sensitivities for static (left) and moving (right) stimuli, after correction to the equivalent value for a static stimulus for a patient aged 60 years. Gray bars are for the P3 cohort of glaucoma suspect/glaucoma eyes, on their most recent test date; overlaid black bars are for healthy control eyes. The total number of locations is the same for both stimuli, but there is a wider spread when using static stimuli.
Figure 3.
 
Percentage of locations that were outside age-corrected normative limits for that location on the most recent test date for each stimulus type. (Top) Percentage among all tested locations. (Bottom) Percentage among locations that had an abnormal total deviation value on the same date according to standard clinical perimetry with the HFA.
Figure 4.
 
A Venn diagram showing the proportions of locations whose age-corrected sensitivity was abnormal at the P < 5% level using the two experimental perimetry paradigms, and using current clinical perimetry in the form of the SITA Standard algorithm for the HFA.
Figure 5.
 
Test–retest variability plotted against defect depth for each stimulus type. Variability was defined as the absolute difference between consecutive test dates (typically 6 months apart). Sensitivities for the moving stimulus were converted to the equivalent values that would be expected for a static stimulus before plotting, based on a Bland–Altman plot comparing the two. Solid lines represent LOESS regression fits to the data. Points and lines in red are for static stimuli; points and lines in blue are for moving stimuli.
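The quantity plotted in Figure 5 can be sketched with a crude binned stand-in for the LOESS fits: pointwise absolute test–retest differences are grouped by depth and summarized per bin. This is a simplified illustration under our own assumptions (the mean of the two sensitivities stands in for defect depth, and the function name, bin width, and use of the median are hypothetical), not the authors' analysis.

```python
import statistics

def variability_by_depth(test, retest, bin_width=3):
    """Group pointwise absolute test-retest differences (dB) into bins of
    sensitivity (using the mean of the two visits as a stand-in for defect
    depth) and return the median variability per bin -- a coarse, binned
    analogue of a LOESS fit of variability against depth."""
    bins = {}
    for s1, s2 in zip(test, retest):
        depth_bin = int((s1 + s2) / 2 // bin_width)
        bins.setdefault(depth_bin, []).append(abs(s1 - s2))
    return {b: statistics.median(v) for b, v in sorted(bins.items())}
```

The figure's pattern corresponds to this summary growing larger in the low-sensitivity bins for static stimuli than for (static-equivalent-scaled) moving stimuli.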