**Purpose**:
With the rise of smartphone devices for monitoring health status remotely, it is tempting to conclude that sampling more often will provide a more sensitive means of detecting changes in health status earlier, at a time when interventions may still improve outcomes.

**Methods**:
We address this question with a model in which observations are generated from a linear-trend model with either independent errors or autocorrelated autoregressive-moving average, or ARMA(1,1), errors.

**Results**:
The results imply a cautionary message that an increase in the sampling frequency may not always lead to a faster detection of trend changes. The benefit of rapid successive observations depends on how observations, taken closely together in time, are correlated.

**Conclusions**:
Shortening the observation period by half while maintaining the same power for detecting change over time can be accomplished by increasing the number of independent observations. However, a strategy to detect progression of disease sooner by taking numerous closely spaced measurements over a shortened interval is limited by the degree of autocorrelation among adjacent observations. We provide a statistical model of disease progression that allows for autocorrelation among successive measurements, and obtain the power of detecting a linear change of specified magnitude when equally spaced observations are taken over a given time interval.

**Translational Relevance**:
New emerging technology for home monitoring of visual function will provide a means to monitor sensory status more frequently. The model proposed here takes into account how successive measurements are correlated, which impacts the number of measurements needed to detect a significant change in status.

As an illustration,^{1,2} assume that the clinician takes observations of visual function or structure every 6 months, wishing to detect a trend change of a certain magnitude within 3 years (that is, *n* = 7 observations). This is a strategy used by many clinicians for conditions, such as glaucoma, that can slowly progress if not treated optimally. The sampling frequency is limited by the resources available to accommodate more follow-up visits per year and by the burden placed on patients who are asked to return more frequently for follow-up measurements. Of course, it would seem beneficial to increase the sampling frequency and take observations every 3 months, every month, every week, and so on, if it would help detect a trend earlier or with more certainty, when a treatment intervention may lead to preservation of vision. Increasing the sampling frequency to improve the ability to detect a change over a given sampling period would certainly bring benefits, as long as the observations we collect are statistically independent, with little correlation between measurements at adjacent time intervals.

Let *n* equally spaced measurements be generated from the model with *t*_{i} = (*i* − 1)/(*n* − 1). Equation 1 describes a stationary linear-trend model. The trend component *T*_{t} = *α* + *βt* expresses the time progression of the measurement, and *ε*_{t} is a measurement error with mean 0 and variance *σ*^{2}. In this section we assume that the errors *ε*_{t} are independent. This is reasonable provided there is no instrument carry-over from one measurement to the next and the linear trend is indeed deterministic. In the following section, we allow for autocorrelations among the errors. Autocorrelation in the errors can arise from instrument carry-over, but also because the progression of many anthropometric signals is not purely deterministic and is affected by stochastic perturbations.

Our interest is in detecting a change in the slope *β*. What is the power of detecting a specified slope change with just *n* = 7 observations equally spaced on the unit-time interval? How does the power change if we take more than seven observations over the same unit-time interval? And what are the consequences of reducing the observation period by half (or to any other fraction of the unit-time interval) and taking seven (or more) observations over the reduced time interval? We start with the obvious: by restricting attention to only the first half of the time interval, we do not obtain observations from the second half of the interval, and we lose the opportunity to learn whether there are changes to the trend during the second half. Trends are typically not stable, and an approach that looks at only part of the time interval certainly limits one's ability to check for changing trends. Assuming that changes in the trend are constant over time is a very strong assumption.

We derive the power of detecting a change in the slope from its baseline value *β*_{Base} to the new value *β*_{Base} + *β*_{∗}, for known significance level *α* (usually, 0.05) and error standard deviation *σ*.

Write the model as *Y*_{i} = *Ȳ* + *β*(*t*_{i} − *t̄*) + *ε*_{i}, with *t*_{i} = (*i* − 1)/(*n* − 1) for *i* = 1, 2, …, *n*, and *t̄* = 1/2. We consider the test of *H*_{0}: *β* = *β*_{Base} against *H*_{1}: *β* = *β*_{Base} + *β*_{∗}, with *β*_{∗} > 0. The standard error of the least squares estimate *β̂* is given by

*σ*_{β̂} = *σ*/[Σ_{i}(*t*_{i} − *t̄*)^{2}]^{1/2} = *σ*[12(*n* − 1)/(*n*(*n* + 1))]^{1/2}; (2)

see Abraham and Ledolter^{3} for the standard error; the last expression follows from straightforward algebra. The critical limit for the hypothesis test is *CL* = *β*_{Base} − *z*_{α}*σ*_{β̂}, where *z*_{α} is the 100*α*th percentile of the standard normal distribution; for significance level *α* = 0.05, *z*_{α} = −1.645. The power of the test is given by

Power = Φ(*β*_{∗}/*σ*_{β̂} + *z*_{α}), (3)

where Φ(.) is the cumulative distribution function of the standard normal distribution. When the *n* observations are equally spaced over the first fraction *P* ≤ 1 of the unit-time interval, *t*_{i} = *P*(*i* − 1)/(*n* − 1), the standard error in Equation 2 is divided by *P*, and the power in Equation 3 becomes Φ(*P*(*β*_{∗}/*σ*)[*n*(*n* + 1)/(12(*n* − 1))]^{1/2} + *z*_{α}).

The ratio *β*_{∗}/*σ* is a critical parameter; the power decreases if smaller changes need to be detected. For *α* = 0.05, *P* = 1 (using the full time interval, such as 3 years), *β*_{∗}/*σ* = 2.5, and *n* = 7, the power is 0.712. Given measurement variability *σ* = 0.4, seven equally spaced observations over the full 3-year time interval allow us to detect an increase of one unit; detecting a smaller change, such as a half-unit change over 3 years, with just seven observations is almost impossible (power 0.294). If we took *n* = 15 observations over the full time period, the power of detecting an increase of one unit is larger (0.910), and we are fairly certain to detect a change of that magnitude. This is expected, as more observations are always better than fewer.
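The power values quoted in this paragraph are easy to reproduce numerically. The following sketch is our own illustration (the helper name `power` is hypothetical, not code from the paper); it evaluates Equation 3 with the standard normal distribution:

```python
from math import sqrt
from statistics import NormalDist

def power(n, beta_star_over_sigma, p=1.0, alpha=0.05):
    """Power (Equation 3) of the one-sided slope test with n equally
    spaced observations over the first fraction p of the unit-time interval."""
    z_alpha = NormalDist().inv_cdf(alpha)           # -1.645 for alpha = 0.05
    se_factor = sqrt(n * (n + 1) / (12 * (n - 1)))  # sigma / sigma_betahat when p = 1
    return NormalDist().cdf(p * beta_star_over_sigma * se_factor + z_alpha)

print(round(power(7, 2.5), 3))    # n = 7 over the full interval -> 0.712
print(round(power(7, 1.25), 3))   # half the change              -> 0.294
print(round(power(15, 2.5), 3))   # n = 15                       -> 0.910
```

The three printed values match the 0.712, 0.294, and 0.910 reported in the text.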

Now suppose we halve the observation period but keep the same number of observations (*n* = 7), which now are equally spaced over the first 1.5 years. Using *P* = 0.5, Equation 3 results in power 0.294. As expected, this power is smaller than the one we get when spreading the seven observations over the whole 3 years, as it is equivalent to the power of detecting half of the change, (0.5)*β*_{∗}, over the full time interval (Equation 3). Of course, one can increase this unacceptably low power by adding more observations and sampling more often. For example, with *n* = 15 observations equally spaced over the first 1.5 years, the power is 0.440. Figure 1 shows that we need 36 observations to achieve the same power (0.712) that we obtain with seven observations equally spaced over 3 years.

What if we reduce the observation period even further (to *P* = 0.3) and increase the number of observations over this short time period even more? The results in Figure 1 show that we need 102 independent observations to attain the same power (0.712) that we obtain with *n* = 7 observations spaced evenly over 3 years. It takes more independent observations to compensate for the reduced observation interval, but the same power can always be achieved by increasing the sampling frequency.
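The sample sizes read off Figure 1 can be recovered by searching for the smallest *n* that restores the benchmark power. A sketch of that search (again our own illustration, with hypothetical helper names):

```python
from math import sqrt
from statistics import NormalDist

def power(n, beta_star_over_sigma, p=1.0, alpha=0.05):
    """Power of the slope test (Equation 3) over the fraction p of the interval."""
    z_alpha = NormalDist().inv_cdf(alpha)
    return NormalDist().cdf(
        p * beta_star_over_sigma * sqrt(n * (n + 1) / (12 * (n - 1))) + z_alpha)

def n_needed(target_power, beta_star_over_sigma, p):
    """Smallest n whose power over [0, p] reaches the target."""
    n = 2
    while power(n, beta_star_over_sigma, p) < target_power:
        n += 1
    return n

benchmark = power(7, 2.5)             # 0.712 with 7 observations over 3 years
print(n_needed(benchmark, 2.5, 0.5))  # half the interval  -> 36
print(n_needed(benchmark, 2.5, 0.3))  # 30% of the interval -> 102
```

The outputs 36 and 102 agree with the counts cited from Figure 1.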

We also use simulation to compare strategies for detecting a trend change of *kσ* after a period of 3 years, for both *k* = 1 and *k* = 2. Independent normal observations with standard deviation *σ* are generated every 6 months. Two different estimation and testing strategies are compared. Strategy 1 uses (only) the seven observations that are available at the end of the 3-year period to test whether the slope of the linear regression through the origin has increased (using significance level 0.05). Strategy 2 starts the testing after the first four observations have been collected (i.e., after having observed the measurement at 18 months) and repeats the test with each successive observation, for a total of four tests. It concludes that the slope has changed when one or more of these tests reject the no-change hypothesis.

The trend component *T*_{t} = *α* + *βt* is often not purely deterministic but also affected by stochastic perturbations *r*_{t} that lead to persistent slow-moving deviations from the deterministic linear trend, analogous to a slow-moving wave. Persistence implies that a signal above the trend line at time *t* tends to be followed by signals that are above the trend line as well. In other words, signals tend to stay above (or below) the trend line for several periods in a row. Such persistence can be modeled with a first-order autoregressive model, *r*_{t} = (1/(1 − *φB*))*ξ*_{t} = *ξ*_{t} + *φξ*_{t−1} + *φ*^{2}*ξ*_{t−2} + …. Here, *B* is the backshift operator, *φ* is the autoregressive parameter (which, for statistical stationarity, has to be between −1 and 1), and *ξ*_{t} are independent mean-zero random variables with variance *σ*_{ξ}^{2}. The autoregressive model for *r*_{t} implies autocorrelations *Cor*(*r*_{t}, *r*_{t−k}) = *φ*^{k} and variance *σ*_{r}^{2} = *σ*_{ξ}^{2}/(1 − *φ*^{2}). The persistence is pronounced when *φ* is positive and close to 1. The autoregressive model becomes the (nonstationary) random walk when *φ* = 1. A random walk can take very long persistent excursions from the deterministic trend line. For a detailed discussion of time series models (including the backshift operator notation, stationarity and nonstationarity, and autoregressive and moving average models) we refer the reader to Abraham and Ledolter^{4} and Box et al.^{5}
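The persistence described above is easy to see in simulation. The short sketch below is our own illustration (not the authors' code); it simulates an AR(1) process and checks that its sample autocorrelations decay like *φ*^{k}:

```python
import random

def simulate_ar1(phi, n, sigma_xi=1.0, seed=1):
    """Simulate an AR(1) process r_t = phi * r_{t-1} + xi_t."""
    rng = random.Random(seed)
    r, out = 0.0, []
    for _ in range(n):
        r = phi * r + rng.gauss(0.0, sigma_xi)
        out.append(r)
    return out

def lag_corr(x, k):
    """Sample autocorrelation of the series x at lag k."""
    n, mx = len(x) - k, sum(x) / len(x)
    num = sum((x[i] - mx) * (x[i + k] - mx) for i in range(n))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

r = simulate_ar1(0.8, 200_000)
print(round(lag_corr(r, 1), 2))   # close to phi   = 0.8
print(round(lag_corr(r, 3), 2))   # close to phi^3 = 0.51
```

Long stretches of the simulated series stay on one side of zero, the "slow-moving wave" behavior described in the text.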

With both the perturbations and independent measurement error present, the model for the deviations becomes (1 − *φB*)*Ỹ*_{t} = *ξ*_{t} + (1 − *φB*)*ε*_{t}, and is known as the autoregressive-moving average, or ARMA(1,1), model: there is just one lagged autoregressive term, and the autocorrelations of the moving average component on the right-hand side of the model are zero after lag 1. It is straightforward to show that the standard deviation and the autocorrelations of the deviations from the linear trend model *Ỹ*_{t} = *Y*_{t} − (*α* + *βt*) are *σ*_{Ỹ} = (*σ*_{r}^{2} + *σ*^{2})^{1/2}, *ρ*_{1} = *φ*/(1 + *σ*^{2}/*σ*_{r}^{2}), and *ρ*_{k} = (*ρ*_{1})*φ*^{k−1} for *k* ≥ 1. For *σ* = 0 (no independent measurement error), the deviations follow a pure AR(1) process with *ρ*_{k} = *φ*^{k}.

As an illustration, consider *φ* = 0.8 and variance ratio *σ*^{2}/*σ*_{r}^{2} = 3. The autocorrelations of the deviations *Ỹ*_{t} = *Y*_{t} − (*α* + *βt*) are *ρ*_{1} = 0.8/(1 + 3) = 0.2 and *ρ*_{k} = (0.2)(0.8)^{k−1} for *k* ≥ 1. While the lag-1 autocorrelation is moderate in size (*ρ*_{1} = 0.2), there is a persistent slow decay in the autocorrelations from lag 1 onward.
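These autocorrelations can be computed directly. A minimal sketch (our own; `arma11_autocorr` is a hypothetical helper name) encodes *ρ*_{1} = *φ*/(1 + variance ratio) and the geometric decay:

```python
def arma11_autocorr(phi, variance_ratio, k):
    """Autocorrelation at lag k of the deviations from the linear trend
    when the errors follow the ARMA(1,1) model of the text:
    rho_1 = phi / (1 + variance_ratio), rho_k = rho_1 * phi**(k - 1)."""
    rho_1 = phi / (1 + variance_ratio)
    return rho_1 * phi ** (k - 1)

# phi = 0.8 and variance ratio sigma^2/sigma_r^2 = 3, as in the example
print(arma11_autocorr(0.8, 3, 1))             # -> 0.2
print(round(arma11_autocorr(0.8, 3, 5), 4))   # 0.2 * 0.8**4 -> 0.0819
```

Even at lag 5 the autocorrelation is still noticeable, illustrating the slow decay.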

It has been shown^{6,7} that errors in regressions of anthropometric time series data on deterministic functions of age follow ARMA(1,1) models. Carrico et al.^{7} show that, in a regression of young-adult blood pressure on linear and quadratic functions of age, body mass index, and height, ARMA(1,1) errors are preferable to AR(1) errors and to errors with compound symmetry.

Assume now that the errors *ε* follow an ARMA(1,1) model, implying an *n* × *n* error covariance matrix *V* with elements *v*_{ij} = *σ*^{2} for *i* = *j* and *v*_{ij} = *σ*^{2}*ρ*_{1}*φ*^{|i−j|−1} for *i* ≠ *j*. The generalized least squares (GLS) estimator of *β* in the model in Equation 5 is given by *β̂*_{GLS} = (*X*^{T}*V*^{−1}*X*)^{−1}*X*^{T}*V*^{−1}(*Y* − *Ȳ*). Here *V* is the *n* × *n* covariance matrix specified above, *X* = (*t*_{1} − *t̄*, *t*_{2} − *t̄*, …, *t*_{n} − *t̄*)^{T} is the *n* × 1 column vector of mean-corrected times, and *Y* − *Ȳ* = (*Y*_{1} − *Ȳ*, *Y*_{2} − *Ȳ*, …, *Y*_{n} − *Ȳ*)^{T} is the *n* × 1 column vector of mean-corrected observations. The superscript T denotes the transpose. The GLS estimator is the most efficient estimator among all linear unbiased estimators, with the smallest sampling variance (*X*^{T}*V*^{−1}*X*)^{−1}.

The standard error of the GLS estimate is *σ*_{β̂} = [(*X*^{T}*V*^{−1}*X*)^{−1}]^{1/2}; see Abraham and Ledolter.^{3} Substituting this standard error into Equations 2 and 3 leads to the power

Power = Φ(*β*_{∗}(*X*^{T}*V*^{−1}*X*)^{1/2} + *z*_{α}). (6)

Consider again *z*_{0.05} = −1.645, *β*_{∗} = 1, *σ* = 0.4, and an observation interval that is reduced from the original 3 years (*P* < 1), but now assume that the error is characterized by the ARMA(1,1) model with weekly autoregressive coefficient *φ*_{W} = 0.8 and variance ratio *σ*^{2}/*σ*_{r}^{2} = 3. The *n* observations on the reduced unit-time interval [0, *P* < 1] are spaced 156*P*/(*n* − 1) weeks apart. Hence, the autoregressive coefficient between successive observations is (*φ*_{W})^{156*P*/(*n*−1)}. This value, together with *σ* = 0.4 and the variance ratio, determines the covariance matrix *V* of the *n* observations equally spaced over the interval [0, *P* < 1].

For the full 3-year observation interval (*P* = 1) and *n* = 7 observations, the power calculated from Equation 6 with *φ*_{W} = 0.8 is still 0.712, the same power we obtain when there is independence. This is because observations are 26 weeks apart (156*P*/(*n* − 1) = 156(1)/6 = 26), the autoregressive coefficient between successive observations, (0.8)^{26} ≈ 0.003, is negligibly small, and *V* is effectively a diagonal matrix with zero autocorrelations. The power is affected only for much larger weekly autoregressive coefficients very close to 1.

Now assume the same error model, with *φ*_{W} = 0.8, *σ* = 0.4, and variance ratio *σ*^{2}/*σ*_{r}^{2} = 3, but a substantially reduced observation interval. With *n* = 100, for illustration, adjacent observations are 0.46 weeks apart, and (*φ*_{W})^{0.46} = (0.8)^{0.46} = 0.90. Off-diagonal elements in the covariance matrix *V* are then large, which indicates that there is little benefit to collecting observations that are so close together in time.
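The Equation 6 power can be evaluated numerically by building the covariance matrix *V* and solving a linear system. The sketch below is our own illustration (`gls_power` and `solve` are hypothetical helpers); it uses the variance ratio 3 from the text's example and takes *P* = 0.3 for the shortened-interval case, an assumption on our part, since the text specifies only the 0.46-week spacing:

```python
from math import sqrt
from statistics import NormalDist

def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def gls_power(n, p, beta_star=1.0, sigma=0.4, phi_w=0.8, variance_ratio=3.0,
              alpha=0.05):
    """Power of Equation 6 for n observations equally spaced over [0, p],
    with ARMA(1,1) errors derived from the weekly coefficient phi_w."""
    phi = phi_w ** (156.0 * p / (n - 1))   # AR coefficient between adjacent observations
    rho1 = phi / (1.0 + variance_ratio)    # lag-1 autocorrelation of the deviations
    t = [p * i / (n - 1) for i in range(n)]
    tbar = sum(t) / n
    x = [ti - tbar for ti in t]            # mean-corrected times X
    v = [[sigma ** 2 * (1.0 if i == j else rho1 * phi ** (abs(i - j) - 1))
          for j in range(n)] for i in range(n)]
    xtvx = sum(xi * wi for xi, wi in zip(x, solve(v, x)))   # X^T V^{-1} X
    return NormalDist().cdf(beta_star * sqrt(xtvx) + NormalDist().inv_cdf(alpha))

print(round(gls_power(n=7, p=1.0), 3))    # 26-week spacing, essentially independent -> 0.712
print(round(gls_power(n=100, p=0.3), 3))  # well below 0.712: dense sampling helps little
```

The first case reproduces the 0.712 from the independence analysis; the second shows the penalty from strong autocorrelation among closely spaced observations.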

The calculations reported in Figure 2 assume *φ* = 0.8 and variance ratio *σ*^{2}/*σ*_{r}^{2} = 3, implying *ρ*_{1} = *φ*/4 = 0.20 with subsequent slow decay. For smaller variance ratios, when trend changes dominate independent measurement errors, our calculations show that increases in the number of required samples are even larger than the ones reported in Figure 2. Larger variance ratios decrease the autocorrelations. For variance ratio *σ*^{2}/*σ*_{r}^{2} = 9, *ρ*_{1} = *φ*/10 = 0.08, and the increases in the number of required samples become smaller. We need 18 samples when the sampling period is reduced to 70% of the initial interval (20 samples are required when the variance ratio is 3). A further advantage of the GLS approach is that it provides the correct standard error for the estimate of *β*. This is important, as standard errors derived under independence are incorrect if errors are autocorrelated. For positive autocorrelation, standard errors assuming independence are too small, which leads to spurious significance.^{8,9}

A related study^{10} is relevant to this discussion. Its authors investigate, through simulations, whether it is better to collect more observations at the beginning and at the end of the observation period (a "wait and see" approach) than to space the observations evenly throughout the observation window. Their finding that the power of detecting a change is increased with the "wait and see" strategy can be predicted from theory without any simulations, as the standard error *σ*_{β̂} of a slope estimate becomes smaller when the settings of the regression predictor are located at the boundary of the experimental region; see Materials, Methods, and Results. Our paper derives results theoretically and also allows for correlation among adjacent observations.

The authors of another study^{11} propose a structural model for the progression that includes the exponential of time and an autoregressive noise component that allows for the temporal correlation among adjacent observations. In their analysis of longitudinal cardiac imaging data, George et al.^{12} consider several models for the temporal and the spatial correlations that can be expected across time and across different image locations. Lawton et al.^{13} consider a longitudinal model for disease progression of multiple sclerosis patients. They argue quite convincingly that the structural regression component should not merely include linear time trends, but also fractional polynomials of time *t*, such as log(*t*) and sqrt(*t*). In addition, their models include parameters for the autocorrelation among adjacent observations. Taketani et al.^{14} study how best to predict, for a given glaucoma patient, the response at a future visit. Their model includes nonlinear components for the time progression (quadratic, exponential, and logistic terms of time), and they consider alternatives to standard least squares estimation by considering robust statistical estimation methods. The study by Chan et al.^{15} makes a convincing argument that longitudinal studies (in their application, the movement of a subject's arm over time) must generalize the noise component to allow for possible temporal correlation. Allowing for autocorrelation helps avoid the common mistake of adopting spurious results regarding the structural progression component of the model.

**References**

1. *Ophthalmology*. 2014; 121: 535–544.
2. *J Ophthalmol*. 2015; 2015: http://dx.doi.org/10.1155/2015/285463.
3. Abraham B, Ledolter J. *Introduction to Regression Modeling*. Belmont: Thomson Higher Education; 2006.
4. Abraham B, Ledolter J. *Statistical Methods for Forecasting*. New York: Wiley; 1983.
5. Box GEP, Jenkins GM, Reinsel GC. *Time Series Analysis, Forecasting and Control*. 3rd ed. New York: Prentice Hall; 1994.
6. *Am J Epidemiol*. 1992; 135: 1166–1177.
7. Carrico et al. *Open J Pediatr*. 2013; 3: 116–126.
8. *J R Stat Soc*. 1971; A134: 229–240.
9. *J Econom*. 1974; 2: 111–120.
10. *Invest Ophthalmol Vis Sci*. 2012; 53: 2770–2776.
11. *Invest Ophthalmol Vis Sci*. 2013; 54: 5505–5513.
12. George et al. *Ann Appl Stat*. 2016; 10: 527–548.
13. Lawton et al. *J Clin Epidemiol*. 2015; 68: 1355–1365.
14. Taketani et al. *Invest Ophthalmol Vis Sci*. 2015; 56: 4076–4082.
15. Chan et al. *Hum Mov Sci*. 2004; 22: 631–648.