Visual prostheses aim to provide artificial vision to blind patients by using implanted electrodes to electrically stimulate the retina,1–4 optic nerve,5 thalamus,6 or visual cortex,7 evoking localized visual percepts. The location of the percept within the patient's egocentric spatial map is known to move in parity with the orientation of the eyes.1,7,8 This apparent movement occurs because eye position plays an important role in the integration of retinotopic visual signals into a consistent spatial map,9 even after blindness.10
For coordination tasks, it is important that the percept location accurately reflects the real world. This requires the orientation of the image sensor to be directly coupled with eye position; however, most present devices use an external camera of fixed orientation, divorcing the camera axis from the pupillary axis.1–3 Recipients of these devices must rely exclusively on head movements to direct their field of view. While retinal implants have been shown to assist in hand–eye coordination tasks,2,11–13 it is likely that the decoupling of the camera and pupillary axes negatively affects performance on these tasks.
Currently, patients are trained to suppress eye movements at all times in order to maintain alignment between the camera and pupillary axes,14,15 but the efficacy of this technique is questionable. Patients have little intuition of the orientation of their eyes8 and have difficulty suppressing eye movements, particularly those associated with nystagmus. We have previously found that suprachoroidal implant recipients made significant eye movements in response to stimuli during a static image localization task, despite being instructed not to.16 A separate study in Argus II recipients found that camera-gaze misalignments occurred frequently during a visual search task, often due to the vestibulo-ocular reflex movements that occur naturally during head scanning, and that patients rely on a series of complex head movements to properly localize objects in daily life.8 Some have suggested that percept localization is so difficult that many patients simply use their devices as light detectors, ignoring any retinotopic information and instead relying solely on head and neck orientation.1,11,17
Other visual prostheses forego the external camera and instead use implanted photodiode arrays, such as the Alpha IMS (Retina Implant AG, Reutlingen, Germany) subretinal implant.4 In these devices, electrode activity is modulated by the light naturally incident on the eye, enabling naturalistic eye scanning. Studies with Alpha IMS recipients have shown that patients exhibit “qualitatively normal” oculomotor behavior when the hardware permits eye scanning.18 Restoring naturalistic eye scanning in camera-based retinal and cortical implants is desirable, as it has implications for perceptual localization and hand–eye coordination and would reduce the cognitive burden on the recipient by facilitating more intuitive interaction with the technology. Implantable intraocular cameras have been proposed as one way of achieving this,19–21 but to our knowledge the clinical feasibility of this approach has not been established. Others have proposed tracking the eye position and dynamically shifting the region of interest (ROI) within a wide-field-of-view image to compensate for eye movements as they occur.22,23 This technique, which we term “gaze compensation,” is the more immediately applicable and clinically relevant of the two approaches.
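For concreteness, the following is a minimal sketch of this ROI-shifting scheme, assuming a calibrated eye tracker that reports gaze angles in degrees and a known pixels-per-degree factor for the camera. The function and parameter names (gaze_compensated_roi, px_per_deg, and so on) are hypothetical and do not correspond to any of the cited systems.

```python
import numpy as np

def gaze_compensated_roi(frame: np.ndarray,
                         gaze_deg: tuple,
                         roi_size: tuple,
                         px_per_deg: float) -> np.ndarray:
    """Crop a region of interest (ROI) from a wide-field camera frame,
    shifted so that the ROI follows the current gaze direction.

    frame      -- full camera image, shape (H, W) or (H, W, 3)
    gaze_deg   -- (azimuth, elevation) of gaze relative to straight
                  ahead, in degrees, as reported by an eye tracker
    roi_size   -- (height, width) in pixels of the crop presented
                  to the implant or simulator
    px_per_deg -- camera calibration factor: pixels per degree of
                  visual angle
    """
    h, w = frame.shape[:2]
    roi_h, roi_w = roi_size
    # Convert the gaze angle to a pixel offset from the frame centre.
    dx = int(round(gaze_deg[0] * px_per_deg))
    dy = int(round(-gaze_deg[1] * px_per_deg))  # image rows grow downward
    # Top-left corner of an ROI centred on the gaze point,
    # clamped so the crop stays inside the frame.
    x0 = max(0, min(w // 2 + dx - roi_w // 2, w - roi_w))
    y0 = max(0, min(h // 2 + dy - roi_h // 2, h - roi_h))
    return frame[y0:y0 + roi_h, x0:x0 + roi_w]
```

With gaze compensation disabled, the same crop is simply taken at the frame centre regardless of eye position, which reproduces the fixed camera-axis behavior of current devices.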
Existing studies have examined the benefits of gaze compensation in prosthetic vision. In a previous study of suprachoroidal retinal implant recipients, we showed that gaze compensation improved performance in a static image localization task, but we did not assess hand–eye coordination specifically.24 Similarly, McIntosh23 showed that subjects under simulated prosthetic vision performed better in a reach-and-grasp task and a visual search task when foveation was restored; however, significant results were found only at high phosphene densities, possibly because the tasks were too difficult to perform at low resolution even with gaze compensation.
Other studies have reported eye position as a confounding factor in hand–eye coordination. Sabbah et al.8 tested the accuracy of epiretinal Argus II patients in a target localization task when the eyes were purposefully held in an eccentric position. They reported that pointing was skewed toward the direction of eye displacement; however, the analysis was limited to directionality and did not quantify the effect of eye displacement magnitude. In a separate study, Argus II patients indicated the location of percepts generated by direct-to-array stimulation during forced eccentric eye movements; after estimating the effect of eye movement, the authors inferred the retinotopic placement of the electrodes from the pointed location.25 A simulated prosthetic vision study in a visually impaired subject found that nystagmus adversely affected performance on a hand–eye coordination task when phosphenes moved in parity with the eyes.22 Two preliminary reports, one in Argus II recipients (Caspi et al. IOVS 2017;58:ARVO E-Abstract 4192) and one under simulated prosthetic vision (Hozumi et al. IOVS 2016;57:ARVO E-Abstract 1958), have demonstrated reduced pointing error in a target localization task when gaze compensation was used. Finally, changes in the optimal camera alignment in Argus II patients have been shown to correlate with long-term changes in eye orientation (Barry et al. IOVS 2017;58:ARVO E-Abstract 4687). It is clear that a relationship between eye position and hand–eye coordination exists, but to our knowledge the specific effect of gaze eccentricity on coordination has not been characterized in the existing literature.
The present study aimed to test the effectiveness of gaze compensation for improving hand–eye coordination in visual prosthesis recipients. We used a prosthetic vision simulator, based on the classic scoreboard model of phosphene vision, with built-in eye tracking to simulate prosthetic vision with and without gaze compensation. Further, we aimed to characterize the relationship between eye position and pointing error in a target localization task under simulated prosthetic vision in order to better understand the effect of eye movements on hand–eye coordination. In contrast to the studies by Caspi et al.25 and Sabbah et al.,8 any pupil eccentricity was spontaneously occurring rather than experimenter controlled. We hypothesized that pointing error in the gaze-compensated condition would be significantly smaller than in the uncompensated condition and comparable to an idealized condition in which phosphenes never moved and camera-gaze misalignments did not arise from eye movement. We further hypothesized that the magnitude and directionality of pointing error would be correlated with eye position, but that these correlations would diminish with gaze compensation.
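As an illustration of this class of simulator, the sketch below renders an image as a regular grid of Gaussian phosphenes, the essence of the scoreboard model: each simulated electrode takes its brightness from the mean intensity of the corresponding image patch. The grid size, phosphene width, and function name are illustrative assumptions, not the parameters of the simulator used in this study.

```python
import numpy as np

def scoreboard_render(roi: np.ndarray,
                      grid: tuple = (20, 20),
                      out_size: int = 400,
                      sigma_px: float = 6.0) -> np.ndarray:
    """Render a grayscale ROI as a regular grid of Gaussian phosphenes,
    following the classic 'scoreboard' model: each simulated electrode
    takes the mean intensity of its image patch as its brightness."""
    gh, gw = grid
    h, w = roi.shape
    # Mean-pool the ROI onto the electrode grid (crop any remainder).
    pooled = roi[:h - h % gh, :w - w % gw].astype(float)
    levels = pooled.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Draw one Gaussian blob per electrode at its grid position.
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    out = np.zeros((out_size, out_size))
    for i in range(gh):
        for j in range(gw):
            cy = (i + 0.5) * out_size / gh
            cx = (j + 0.5) * out_size / gw
            out += levels[i, j] * np.exp(
                -((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma_px ** 2))
    return np.clip(out, 0.0, 255.0)
```

In such a simulator, gaze compensation amounts to passing the renderer the gaze-shifted crop from the earlier sketch rather than a fixed central crop, so that phosphene content follows the eyes.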