**Purpose**:
To develop a new visual field simulation model that can recreate real-world longitudinal results at a point-wise level from a clinical glaucoma cohort.

**Methods**:
A cohort of 367 glaucoma eyes from 265 participants seen over 10.1 ± 2.5 years was included to obtain estimates of “true” longitudinal visual field point-wise sensitivity and estimates of measurement variability. These two components were then combined to reconstruct visual field results in a manner that accounted for correlated measurement error. To determine how accurately the simulated results reflected the clinical cohort, longitudinal variability estimates of mean deviation (MD) were determined by calculating the SD of the residuals from linear regression models fitted to the MD values over time for each eye in the simulated and clinical cohorts. The new model was compared to a previous model that does not account for spatially correlated errors.
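As an illustration of this variability metric, the following is a minimal Python sketch (not the study's code): fit MD on time with ordinary least squares for one eye and take the SD of the residuals. The MD series below is hypothetical.

```python
import numpy as np

def residual_sd(times, md_values):
    """SD of residuals around an ordinary least-squares linear fit of MD on time."""
    slope, intercept = np.polyfit(times, md_values, 1)
    residuals = np.asarray(md_values) - (slope * np.asarray(times) + intercept)
    return float(np.std(residuals, ddof=1))

# Hypothetical eye: 20 roughly biannual MD values with a slow decline plus noise
rng = np.random.default_rng(0)
t = np.arange(20) * 0.5                           # years of follow-up
md = -2.0 - 0.25 * t + rng.normal(0.0, 1.1, 20)   # dB
print(round(residual_sd(t, md), 2))
```

In the study this quantity was pooled across all eyes in each cohort to compare simulated with clinical variability.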

**Results**:
The SD of all the residuals for the clinical and simulated cohorts was 1.1 dB (95% confidence interval [CI]: 1.1–1.2 dB) and 1.1 dB (95% CI: 1.1–1.1 dB), respectively, whereas it was 0.4 dB (95% CI: 0.4–0.4 dB) using the previous simulation model that did not account for correlated errors.

**Conclusions**:
A new simulation model accounting for correlated measurement errors between visual field locations performed better than a previous model in estimating visual field variability in glaucoma.

**Translational Relevance**:
This model can provide a powerful framework to better understand the use of visual field testing in clinical practice and trials and to evaluate new methods for detecting progression.

^{1,2}It also remains an important outcome measure for clinical trials in glaucoma, as it represents a clinically relevant endpoint.^{3–5} However, detecting progressive visual field loss remains a challenging task due to the extent and complexity of its measurement variability^{6} and nature of change over time^{7} in eyes with visual field damage.^{8,9}

However, how does one define stability in studies testing a new visual field algorithm? Recent studies have used short-term test-retest data to evaluate specificity, since progression can be assumed to be absent over such a time frame.^{10–16} Yet this approach may not capture the full extent of measurement variability present in real-world visual field tests over a long-term period, making it hard to accurately determine the true potential performance of these new methods in clinical practice.

Simulations could also be used to examine the impact of testing frequencies^{17} or paradigms^{18} on the ability to detect progression or sample size requirements in a clinical trial using a visual field endpoint.^{19}

Previous studies have simulated visual field results (primarily to evaluate different thresholding algorithms) using models of an individual's responses to different stimulus intensities (i.e., psychometric functions),^{20–31} often with estimates obtained from an experimental, rather than a clinical, setting.^{32}

Russell and colleagues^{33} recently simulated visual field results using parameters obtained from a large longitudinal cohort of glaucoma patients under routine clinical care. Such an approach is highly advantageous because it allows visual field results to be reconstructed in a manner that closely reflects those expected clinically. However, two important methodological refinements are required to ensure that such simulations better represent real-world results. First, the correlations between the estimates of measurement variability should be accounted for, since real-world visual fields contain such correlations at the individual level, as evident from a previous study demonstrating how accounting for such correlations provided a better fit of longitudinal visual field data.^{34} Second, the assumption of linearity for changes in point-wise visual field sensitivity over time is unlikely to remain valid over a long follow-up duration, with nonlinear models better capturing such changes.^{7}

This study^{35} also included only glaucoma eyes with ≥10 abnormal visual field tests (defined as having a pattern standard deviation [PSD] value with *P* < 0.05 or a Glaucoma Hemifield Test result outside normal limits) over at least 5 years. Participants were also required to have open angles on gonioscopy and a best-corrected visual acuity of 20/40 or better, and they were excluded if they had any other ocular or systemic disease that could affect the optic nerve or the visual field.^{36}

Visual fields were then repeated if found to be unreliable or to contain artifacts.

The sigmoid model^{7} assumes a nonlinear rate of visual field loss, with natural asymptotes occurring at normal levels of sensitivity and the perimetric floor. The model can be expressed as follows: *s* = *γ* / (1 + e^{α + βx}), where *s* denotes the measured sensitivity in decibels, *γ* indicates the estimate of the initial sensitivity, *α* indicates how soon the sigmoid function begins a steep decline, *β* indicates the steepness of this decline, and *x* indicates the time. This regression model was fitted using an iterative feasible generalized nonlinear least squares method (equivalent to maximum likelihood estimation), except for locations where at least two of the three initial tests had a measurement of 0 dB, which were instead fitted with a value of 0 dB throughout the duration of follow-up. An example illustrating four locations fitted with this sigmoid regression model over the entire perimetric range is shown in Figure 1. The parameters of the sigmoid model could then be used to estimate true sensitivities at each location for an eye at any given time point; these derived sensitivity estimates were termed the “sensitivity template.”

Our methods are in essence similar to those used previously,^{33} but the key difference is that the previous method does not account for correlations between test locations in the estimates of measurement variability, which are accounted for by the noise templates in our model. In other words, visual fields were previously simulated by taking a sensitivity template and adding estimates of measurement variability by simply sampling residuals at random from the empirical probability density function (PDF) corresponding to the true fitted level of sensitivity at each location. This difference in methodology can be conceptualized as using an individual-based pattern of measurement variability with our method and a random selection of measurement variability with the previous method. To ensure that the primary comparison was performed between models that did or did not account for such correlated measurement errors, the sigmoid regression model was also used in this model (as opposed to the linear regression model used previously^{33}) to determine the impact of accounting for such correlations.
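The contrast between the two approaches can be sketched conceptually. Everything below is an illustrative assumption rather than the study's implementation: Gaussian noise templates with a shared per-test offset stand in for the empirical residuals, the 54-location grid mirrors a 24-2 test pattern, and conditioning of residuals on the true sensitivity level is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_templates, n_locs = 200, 54

# Hypothetical bank of noise templates: each row holds the residuals (dB) from one
# test; the shared per-test offset makes errors correlated across locations.
noise_templates = (rng.normal(0.0, 2.0, size=(n_templates, n_locs))
                   + rng.normal(0.0, 1.5, size=(n_templates, 1)))

def simulate_uncorrelated(sens_template):
    """Previous approach: draw each location's residual independently at random."""
    rows = rng.integers(0, n_templates, size=n_locs)
    return sens_template + noise_templates[rows, np.arange(n_locs)]

def simulate_correlated(sens_template):
    """New approach: apply one individual's entire noise template at once,
    preserving the correlations between locations within a test."""
    return sens_template + noise_templates[rng.integers(0, n_templates)]

sens = np.full(n_locs, 28.0)  # hypothetical "true" sensitivity template (dB)
mean_corr = [simulate_correlated(sens).mean() for _ in range(500)]
mean_uncorr = [simulate_uncorrelated(sens).mean() for _ in range(500)]
print(np.std(mean_corr) > np.std(mean_uncorr))  # → True
```

Because the correlated draw carries a shared test-wide component into every location, indices averaged across locations (such as MD) fluctuate far more from test to test, which is the behavior the clinical data exhibit.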

*P* < 0.05) was present at two consecutive visits. For the simulation models, we also simulated 100 sequences of visual field tests for each eye in the glaucoma clinical cohort at the same visits at which they were seen clinically.
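A criterion of this kind can be sketched as follows. This is a hypothetical reconstruction, not the study's code: the assumptions that the tested quantity is an ordinary least-squares slope refit with all data available at each visit, the three-test minimum, and the function name are all illustrative.

```python
import numpy as np
from scipy import stats

def progressed(times, values, alpha=0.05, min_tests=3):
    """Flag progression when a significant negative slope (two-sided P < alpha)
    is present at two consecutive visits, refitting the regression with all
    data available up to each visit."""
    consecutive = 0
    for k in range(min_tests, len(times) + 1):
        fit = stats.linregress(times[:k], values[:k])
        if fit.slope < 0 and fit.pvalue < alpha:
            consecutive += 1
            if consecutive == 2:
                return True
        else:
            consecutive = 0
    return False
```

A steadily declining series triggers the flag once two consecutive visits yield a significant negative slope, whereas a flat series never does.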

Our simulation model was based on methods described previously^{33} but incorporated two important refinements to ensure that the results accurately represent findings expected in routine clinical practice. The most important refinement involves accounting for the correlated measurement errors between locations in a given test. This resulted in longitudinal MD variability estimates that were very similar between the simulated and clinical visual fields. Without accounting for such correlations, the estimate of longitudinal variability was reduced to almost a third of that observed clinically, having an SD of 0.4 dB for all MD residuals, which falls within the range of 0.2 to 0.7 dB reported by Russell and colleagues.^{33}

The proportion of eyes detected as having progressed when such correlations were not accounted for was over a third higher than when these correlations were accounted for. When the correlations were accounted for in our proposed simulation model, the proportion of eyes detected as progressing was very similar to that found in the longitudinal clinical cohort. The value of accounting for such correlations has also recently been observed in a different context, where accounting for a global visit effect (i.e., correlated measurement error) resulted in a better fit of longitudinal visual field data.^{34}

The other refinement involves using a sigmoid regression model to capture the nonlinear behavior of visual sensitivity changes at a point-wise level over the entire perimetric range, which a recent study demonstrated to provide the best fit to such data in a large clinical cohort of glaucoma eyes seen over time.^{7} Indeed, we observed that the sigmoid regression model had a lower root mean square error (RMSE) than a linear model in 88% of the 19,084 locations evaluated in this study, with a mean (SD) RMSE of 2.6 (1.9) dB and 2.8 (2.0) dB, respectively.
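An RMSE comparison of this kind can be illustrated with a toy example. The sketch below uses synthetic data and, to stay dependency-light, evaluates the sigmoid at its known generating parameters rather than performing a full nonlinear fit; all numbers are invented.

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error between observed and fitted values."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 15.0, 31)                    # years
true_curve = 30.0 / (1.0 + np.exp(-8.0 + t))      # sigmoid-shaped decline (dB)
obs = true_curve + rng.normal(0.0, 1.0, t.size)   # add measurement noise

# A straight line cannot track the flat plateaus at either end of the range,
# so its residual error exceeds that of the generating sigmoid curve.
slope, intercept = np.polyfit(t, obs, 1)
print(rmse(obs, true_curve) < rmse(obs, slope * t + intercept))  # → True
```

The advantage of the sigmoid is largest at locations that traverse the full perimetric range, where a line systematically misses both asymptotes.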

^{33}Note that the application of the simulation model in this situation helps clinicians understand the variability expected across various disease severities when seeking to identify true change. We observed that the variability of MD values peaked at approximately −18 and 10 dB for true MD and PSD, similar to the previous report of peaks at −20 and 8 dB, respectively, although the magnitude of the SD at these peaks was substantially different (see above).^{33}

The magnitude of the residual variability observed using our simulations is also similar to that from real-world clinical findings in a previous study, where the variability estimates were obtained with a weighted moving average regression analysis of longitudinal visual field data.^{37} For example, they report that 95% of the residual differences fell within a 4.5-dB range at an estimated true MD of −10 dB (and thus having an SD of approximately 1.1 dB), similar to an SD of approximately 1.3 dB from our findings. The variation in MD variability with different levels of visual field damage has also been reported in previous studies using short-term test-retest data.^{11,38} However, the magnitude of the variability presented in those studies appears somewhat smaller than that observed with our longitudinal data, although a direct comparison is difficult due to the limited sample size of those previous studies. Our simulations also provided insights into the variability of the PSD measure, demonstrating how the range of variability can be especially wide for eyes with PSD values between 4 and 8 dB.
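The conversion from a 95% residual range to an SD follows from a normal approximation; as a quick check (1.96 is the standard normal 97.5th percentile):

```python
# Assuming normally distributed residuals, 95% of values fall within
# ±1.96 SD, so a 95% range of width w implies SD = w / (2 * 1.96).
w = 4.5                # dB; reported 95% range at a true MD of -10 dB
sd = w / (2 * 1.96)
print(round(sd, 2))    # → 1.15 (i.e., approximately 1.1 dB)
```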

Currently, the specificity of new methods for detecting progression is typically evaluated using short-term test-retest data^{10–16} (to ensure that no true progression is seen), although this approach is unlikely to truly represent long-term test-retest variability. Instead, the simulation model presented in our study provides a powerful method to estimate the specificity of new algorithms for detecting visual field progression, as results for eyes that are truly stable can be simulated to examine this. The simulated results would exhibit variability characteristics more typical of real-world patients seen in clinical practice, therefore providing a better evaluation of the actual potential real-world performance of a method. The simulations could also be used to examine the impact of testing frequencies^{17} or paradigms^{18} on the ability to detect visual field progression in clinical practice or sample size requirements in glaucoma clinical trials^{19} using point-wise methods of analyses.

^{33}However, there are future opportunities to perform such analyses in even larger longitudinal datasets collated from multiple clinical centers (using the concept of “big data”), which can include up to tens of thousands of patients.^{39,40} Nonetheless, we believe the study's sample size was sufficient and that including a larger sample would only strengthen the conclusions reached. Another limitation to consider when interpreting the variability estimates shown in this study is that they were obtained from participants seen at approximately biannual intervals; such estimates may be slightly higher for patients seen at annual intervals.

**Z. Wu**, None;

**F.A. Medeiros**, Alcon Laboratories (F), Allergan (F, C), Bausch & Lomb (F), Carl Zeiss Meditec (F, C), Heidelberg Engineering (F), Merck (F), National Eye Institute (F), Reichert (F), Topcon (F), Novartis (C)

*Curr Opin Ophthalmol*. 2009; 20: 92.

*Exp Rev Ophthalmol*. 2016; 11: 227–234.

*Invest Ophthalmol Vis Sci*. 2017; 58: BIO20–BIO6.

*Prog Retin Eye Res*. 2016; 56: 107–147.

*Invest Ophthalmol Vis Sci*. 2011; 52: 7842–7851.

*Invest Ophthalmol Vis Sci*. 2012; 53: 5985–5990.

*JAMA Ophthalmol*. 2016; 134: 496–502.

*Acta Ophthalmol*. 2001; 79: 116–120.

*Ophthalmology*. 2001; 108: 1954–1965.

*Ophthalmology*. 2012; 119: 458–467.

*Ophthalmology*. 2014; 121: 2023–2027.

*PLoS One*. 2014; 9: e85654.

*Invest Ophthalmol Vis Sci*. 2015; 56: 6077–6083.

*Trans Vis Sci Tech*. 2016; 5: 2.

*Invest Ophthalmol Vis Sci*. 2017; 58: BIO180–BIO90.

*Am J Ophthalmol*. 2017; 176: 148–156.

*Ophthalmology*. 2017; 124: 786–792.

*Invest Ophthalmol Vis Sci*. 2012; 53: 2770–2776.

*Curr Opin Ophthalmol*. 2012; 23: 144–154.

*Invest Ophthalmol Vis Sci*. 1992; 33: 2966–2974.

*Vision Res*. 1994; 34: 885–912.

*Acta Ophthalmol Scand*. 1997; 75: 368–375.

*Invest Ophthalmol Vis Sci*. 2002; 43: 322–331.

*Invest Ophthalmol Vis Sci*. 2003; 44: 4787–4795.

*Optom Vis Sci*. 2005; 82: 43–51.

*Invest Ophthalmol Vis Sci*. 2007; 48: 1627–1634.

*Trans Vis Sci Tech*. 2013; 2: 3.

*Invest Ophthalmol Vis Sci*. 2014; 55: 3265–3274.

*Optom Vis Sci*. 2015; 92: 70–82.

*Trans Vis Sci Tech*. 2016; 5: 7.

*Invest Ophthalmol Vis Sci*. 2017; 58: 3414–3424.

*Invest Ophthalmol Vis Sci*. 2000; 41: 417–421.

*PLoS One*. 2013; 8: e83595.

*Invest Ophthalmol Vis Sci*. 2015; 56: 4283–4289.

*Ophthalmology*. 2008; 115: 1340–1346.

*Arch Ophthalmol*. 2010; 128: 551–559.

*Invest Ophthalmol Vis Sci*. 2011; 52: 4030–4038.

*Invest Ophthalmol Vis Sci*. 2013; 54: 1345–1351.

*Ophthalmology*. 2017; 125: 352–360.

*Br J Ophthalmol*. In press.