To determine which candidate location is likely to be most informative, we shall compute, at each location, the expected change in entropy after the next trial, E(ΔH). This can be achieved using any commonly available MAP algorithm (e.g., QUEST+). Note that since E(ΔH) is computed by the core MAP algorithm, not by MEDTEG itself, a full exposition of its workings is not given here and can be found elsewhere.8 A brief overview is nonetheless provided, as understanding how E(ΔH) is computed at each candidate location is key to understanding the steps that follow.
In short, MAP algorithms require us to specify a prior distribution (expressing our current beliefs about sensitivity in this region of the VF) and a psychometric function (expressing how we expect the probability of a correct response to vary with stimulus magnitude).
For the candidate location's prior, we shall use the PMF already computed in the previous step (using natural-neighbor interpolation).
For the candidate's psychometric function, this “frequency of seeing” curve will have been defined in advance (before testing). In our example MATLAB code, the probability of responding correctly, p(correct), is assumed to be determined by a modified cumulative Gaussian function, Ψ, with one variable parameter (µ, the estimated DLS, in dB) and three fixed parameters (internal noise, σ; lapse rate, λ; and guess rate, γ). Thus,
\begin{equation}
p\left( {correct} \right) = \gamma + \left( {1 - \gamma - \lambda } \right)\left[ {1 - \Psi \left( {x;\left\{ {\mu,\sigma,\gamma,\lambda } \right\}} \right)} \right].
\end{equation}
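For concreteness, the following MATLAB sketch evaluates this function for a set of hypothetical parameter values (the variable names and values are illustrative only and are not taken from the accompanying code):

\begin{verbatim}
% Sketch of the psychometric function above, with hypothetical parameter values
mu     = 25;    % variable parameter: estimated DLS (dB)
sigma  = 3;     % fixed: internal noise (slope of the function, dB)
lambda = 0.05;  % fixed: lapse rate
gamma  = 0.03;  % fixed: guess (false-positive) rate

x = 0:0.1:40;                        % possible stimulus magnitudes (dB)
Psi = normcdf(x, mu, sigma);         % cumulative Gaussian
pCorrect = gamma + (1 - gamma - lambda) .* (1 - Psi);

plot(x, pCorrect);
xlabel('Stimulus magnitude (dB)'); ylabel('p(correct)');
\end{verbatim}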
This psychometric function (illustrated graphically by the blue lines in
Fig. 4) is used by the MAP algorithm to compute the likelihood of each possible response given each possible stimulus, and from these values, one can compute the expected change (reduction) in entropy following the next trial,
E(ΔH).
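As a rough sketch of how this quantity might be obtained for a single candidate stimulus (again with hypothetical variable names and values, and a simple two-response yes/no structure rather than the exact QUEST+ implementation):

\begin{verbatim}
% Sketch: expected change in entropy, E(DeltaH), for one candidate stimulus.
% All values are hypothetical; in practice this computation is performed by
% the MAP algorithm (e.g., QUEST+) over its full parameter domain.
muDomain = 0:40;                                  % possible DLS values (dB)
prior = ones(size(muDomain)) / numel(muDomain);   % prior PMF (here: flat)
sigma = 3; lambda = 0.05; gamma = 0.03;           % fixed parameters
x0 = 20;                                          % candidate stimulus (dB)

% Likelihood of a "seen" (correct) response for each possible value of mu
pSeenGivenMu = gamma + (1 - gamma - lambda) .* (1 - normcdf(x0, muDomain, sigma));

% Posterior PMFs for each possible response (Bayes' rule)
pSeen   = sum(prior .* pSeenGivenMu);
pMissed = 1 - pSeen;
postSeen   = (prior .* pSeenGivenMu)       / pSeen;
postMissed = (prior .* (1 - pSeenGivenMu)) / pMissed;

% Shannon entropy of a PMF (ignoring zero-probability bins)
H = @(p) -sum(p(p > 0) .* log2(p(p > 0)));

% Expected posterior entropy, weighted by the probability of each response,
% and the expected reduction in entropy after the next trial
expectedH  = pSeen * H(postSeen) + pMissed * H(postMissed);
expectedDH = H(prior) - expectedH;
\end{verbatim}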
Note that the steeper the psychometric slope (and also the lower the values of
λ and
γ), the more
informative the patient's response will be. Conversely, if, in extremis, the slope of the psychometric function were completely flat (σ = ∞), then the probability of responding correctly would be roughly 50% regardless of the stimulus magnitude. In that case, however the patient responds, we would learn nothing new about what they can or cannot see. In this way, as shown in
Figure 4, the algorithm will be naturally inclined toward selecting regions of the VF where the response will be more informative (i.e., where
σ is lower).
Note also that if (as is sometimes the case) the psychometric function is assumed to be constant across the VF, then this step of the MEDTEG algorithm could be skipped. Thus, instead of computing the expected change in entropy, E(ΔH), one could simply compute predicted entropy, H, and select the region about which we are currently most uncertain (i.e., “H max” rather than “E(ΔH) max”). An H max approach would be conceptually and computationally simpler but would be unable to take into account changes in response reliability (i.e., as a function of eccentricity and/or mean sensitivity). If, for example, there were regions of the VF where the psychometric slope was very flat, an H max variant of MEDTEG would be liable to get stuck there, repeatedly testing locations where entropy, H, is high but the gain in information, ΔH, is low.
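The difference between the two selection rules can be illustrated with a brief MATLAB sketch, using made-up example data (the variable names and values here are purely hypothetical):

\begin{verbatim}
% Sketch contrasting the two selection rules, using made-up example data.
% candPMFs: one interpolated PMF per candidate location (rows sum to 1);
% eDHcand:  the corresponding E(DeltaH) values, as returned by the MAP algorithm.
candPMFs = [0.25 0.25 0.25 0.25;    % candidate 1: very uncertain (high H) ...
            0.70 0.20 0.05 0.05];   % candidate 2: fairly certain (low H)
eDHcand  = [0.02 0.35];             % ... but candidate 1 has a flat psychometric
                                    % slope, so its response is uninformative

Hrow  = @(p) -sum(p(p > 0) .* log2(p(p > 0)));                  % Shannon entropy
Hcand = arrayfun(@(i) Hrow(candPMFs(i, :)), 1:size(candPMFs, 1));

[~, iHmax]   = max(Hcand);    % "H max" rule selects candidate 1
[~, iEdHmax] = max(eDHcand);  % "E(DeltaH) max" rule selects candidate 2
\end{verbatim}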