Open Access
Estimating Phosphene Locations Using Eye Movements of Suprachoroidal Retinal Prosthesis Users
Author Affiliations & Notes
  • Samuel A. Titchener
    Bionics Institute, East Melbourne, VIC, Australia
    Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
  • Jeroen Goossens
    Donders Institute for Brain Cognition and Behaviour, Radboudumc, the Netherlands
  • Jessica Kvansakul
    Bionics Institute, East Melbourne, VIC, Australia
    Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
  • David A. X. Nayagam
    Bionics Institute, East Melbourne, VIC, Australia
    Department of Pathology, University of Melbourne, Victoria, Australia
    Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
  • Maria Kolic
    Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
  • Elizabeth K. Baglin
    Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
  • Lauren N. Ayton
    Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
    Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
    Department of Optometry and Vision Sciences, University of Melbourne, Melbourne, VIC, Australia
  • Carla J. Abbott
    Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
    Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
  • Chi D. Luu
    Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
    Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
  • Nick Barnes
    Data61, CSIRO, Canberra, ACT, Australia
    Research School of Engineering, Australian National University, ACT, Australia
  • William G. Kentler
    Department of Biomedical Engineering, University of Melbourne, Melbourne, VIC, Australia
  • Mohit N. Shivdasani
    Graduate School of Biomedical Engineering, University of New South Wales, Kensington, NSW, Australia
  • Penelope J. Allen
    Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
    Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
  • Matthew A. Petoe
    Bionics Institute, East Melbourne, VIC, Australia
    Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
  • Correspondence: Samuel A. Titchener, Bionics Institute, 384-388 Albert St, East Melbourne 3002, Australia. e-mail: samtitchener111@gmail.com 
Translational Vision Science & Technology March 2023, Vol.12, 20. doi:https://doi.org/10.1167/tvst.12.3.20
Abstract

Purpose: Accurate mapping of phosphene locations from visual prostheses is vital to encode spatial information. This process may involve the subject pointing to evoked phosphene locations with their finger. Here, we demonstrate phosphene mapping for a retinal implant using eye movements and compare it with retinotopic electrode positions and previous results using conventional finger-based mapping.

Methods: Three suprachoroidal retinal implant recipients (NCT03406416) indicated the spatial position of phosphenes. Electrodes were stimulated individually, and the subjects moved their finger (finger based) or their eyes (gaze based) to the perceived phosphene location. The distortion of the measured phosphene locations from the expected locations (retinotopic electrode locations) was characterized with Procrustes analysis.

Results: The finger-based phosphene locations were compressed spatially relative to the expected locations in all three subjects, but preserved the general retinotopic arrangement (scale factors ranged from 0.37 to 0.83). In two subjects, the gaze-based phosphene locations were similar to the expected locations (scale factors of 0.72 and 0.99). For the third subject, there was no apparent relationship between gaze-based phosphene locations and electrode locations (scale factor of 0.07).

Conclusions: Gaze-based phosphene mapping was achievable in two of three tested retinal prosthesis subjects and their derived phosphene maps correlated well with the retinotopic electrode layout. A third subject could not produce a coherent gaze-based phosphene map, but this may have revealed that their phosphenes were indistinct spatially.

Translational Relevance: Gaze-based phosphene mapping is a viable alternative to conventional finger-based mapping, but may not be suitable for all subjects.

Introduction
The past decade has seen increasing interest in visual prostheses as functional aids for patients with profound vision loss. To date, two devices have been commercialized and marketed: the Argus II epiretinal implant (Vivani Medical, Inc., Emeryville, CA)1,2 and the Alpha IMS/AMS subretinal implant (Retina Implant AG, Reutlingen, Germany), although neither is commercially available any longer.3,4 Several other devices are at the clinical trial stage, such as the Second Generation Suprachoroidal Retinal Implant (Bionic Vision Technologies, Melbourne, Australia),5,6 the Suprachoroidal Transretinal Stimulation implant (Nidek, Aichi, Japan),7 the Prima subretinal implant (Pixium Vision, Paris, France),8 the NR600 epiretinal implant (Nano Retina, Herzliya, Israel),9 the Cortical visual neuroprosthesis for the blind (CORTIVIS Project, Spain),10 the Intracortical Visual Prosthesis (ICVP, Illinois Institute of Technology, Chicago, IL),11 and the Orion cortical visual prosthesis (Vivani Medical, Inc.).12 These devices deliver electrical stimulation via electrodes implanted in the retina or visual cortex to elicit localized visual percepts, termed phosphenes. Coordinated patterns of phosphenes can be used to convey shapes and images, but this requires a model to map image coordinates to the perceived location of each phosphene within the visual field. 
In early human experiments involving stimulation of the visual cortex, it was quickly recognized that the locations of phosphenes could not be sufficiently determined simply by the layout of electrodes.13 The visual cortex contains multiple and differing retinotopic representations of the visual field, and the complex topography of sulci and gyri makes predicting these maps nontrivial (e.g., requiring magnetic resonance imaging distortion–correction and correlation to a structure–function model).14 Phosphene mapping techniques were developed, in which the subject indicated the relative or absolute location of phosphenes by pointing, drawing, verbal description, or marker placement.15–20 The phosphene maps could then be integrated into a vision processing algorithm, enabling more accurate representation of spatial information, with particular regard to the size and position of the prosthetic visual field. 
In retinal implants, it is generally assumed that there is direct correspondence between phosphene locations and retinotopic electrode locations, because in natural vision the correspondence between retinal space and incident light is absolute. For photovoltaic devices, in which light is detected by photodiodes colocated with the stimulating electrodes, this assumption is implicit in the design, whereas for camera-based devices it may be incorporated explicitly into the video processing algorithm.21,22 However, a number of practical considerations may challenge this assumption. Phosphenes may be perceived as large, elongated, irregularly shaped, or indistinct owing to current spread, the geography of the retinal degeneration, and incidental axonal activation.23,24 This factor may cause the center of the phosphene to be offset from the electrode center and, for phosphenes with a complex shape, it may even be unclear which part of the phosphene should be considered the center. Additionally, retinal remodeling is known to occur in retinal degeneration,25 and it is conceivable that this factor could further affect the appearance and location of phosphenes. In our previous work, we have used purposefully scrambled phosphene maps to highlight the correspondence between preserved retinotopy, oculomotor control, and functional performance.26 To further qualify the extent of retinal remodeling, and to quantify the size and position of the prosthetic visual field, it would be advantageous to map the correspondence between electrode position and perceived phosphene location in these same retinal prosthesis recipients. 
Phosphene mapping studies using traditional pointing methods in retinal implant recipients have generally reported approximate correspondence between electrode and phosphene locations, but with some distortions in the mapping and large variation in results across subjects. In one study, one intrascleral implant recipient had reasonable retinotopic correspondence, but a second recipient did not.27 In our first-generation study in three suprachoroidal retinal implant recipients, the “general retinotopic arrangement was preserved” in two subjects, whereas for the third subject all phosphenes had the same appearance, regardless of which electrodes (or electrode combinations) were stimulated.23 Three studies investigating phosphene locations in Argus II epiretinal implant recipients have been published. In one, the location of a single phosphene appeared in the expected quadrant of the visual field in four of the six subjects.28 In the second study (in different subjects), the phosphene locations matched the expected locations based on the layout of the electrodes, but the distances between the phosphenes were expanded considerably relative to the electrode layout.29 Finally, a feasibility study in a single subject demonstrated that eye movements may be used to map the percept location of the implanted electrodes.30 These studies have identified differences in the specific retinal disease subtype, level of disease progression, and retinotopic placement of the electrodes (eccentricity from fovea) between patients as potential factors affecting phosphene appearance and location.23,27,28 This variability in mapping distortions calls for simple and intuitive mapping procedures if video processing algorithms are to correct for them on a patient-to-patient basis. 
It is thought that phosphene maps derived from pointing, drawing, or marker placement may exaggerate the distances between phosphenes owing to open-loop pointing bias, because blind-folded normal-sighted subjects tend to overestimate the eccentricity of peripheral stimuli when pointing in the absence of visual feedback of the location of their hand.31 A second potential source of error in finger-based phosphene mapping is eye position, because eye position at stimulus onset directly affects the perceived location of the phosphenes.16,32 Often, eye position is not monitored during phosphene mapping because conventional eye tracker calibration techniques are not suitable for ultra-low-vision subjects. Instead, the effect of initial eye position is minimized by instructing the subject to maintain fixation on a tactile marker during stimulation using their sense of proprioception.23,27,28 
Gaze-based phosphene mapping, wherein the subject moves their eyes instead of their finger to point to the remembered locations of phosphenes, requires a calibrated eye tracker, but may offer some advantages over finger-based phosphene mapping. First, open-loop pointing bias is eliminated because the task no longer requires pointing. For example, a simulated prosthetic vision study found that gaze-based mapping produced estimates of phosphene locations that more closely matched the true simulated phosphene locations compared with conventional finger-based mapping (Kaskhedikar GP, et al. IOVS 2015;56(7):ARVO E-Abstract 4315; Weinreb S, et al. IOVS 2020;61(7):ARVO E-Abstract 4274). Second, the effect of the initial eye position on phosphene location can be fully accounted for because eye position is monitored. One possible disadvantage of gaze-based mapping is that eye movement occurring during presentation of a phosphene will likely cause movement of the phosphene across the visual field. Previous reports, strictly comparing eye position at stimulation onset versus at the time of pointing, suggest that indicated phosphene locations are more closely correlated with eye position at stimulation onset, rather than at the time of response.29 
Gaze-based phosphene mapping has been shown to be feasible in a study of one Argus II recipient, with the major findings being that the subject had sufficient oculomotor control to direct their gaze to the phosphene location and that the mapped locations matched the location of the electrodes on the array.30 However, in this study the eye tracker was uncalibrated, and hence the analyses were limited to the orientation of relative movements in the captured pupil image rather than gaze orientation and magnitude. 
In the present study, we demonstrate gaze-based phosphene mapping in suprachoroidal retinal implant recipients using a calibration-free stereoscopic eye tracker.33–35 Unlike conventional eye trackers, this system does not require the subject to perform a calibration routine and is, therefore, suitable for use with ultra-low-vision subjects. Results collected using the gaze-based mapping method, as well as a conventional finger-based mapping method, were assessed for correspondence between the measured phosphene locations and the expected phosphene locations (based on retinotopic electrode placement) and related to previously published visual function outcomes in the same subjects. In doing so, we demonstrate that gaze-based mapping yields repeatable measurements without the risk of open-loop pointing bias. 
Methods
Participants
Three subjects (S1–S3) enrolled in a clinical trial (NCT03406416) of a 44-channel suprachoroidal retinal implant participated in the study. The subjects each had end-stage retinitis pigmentosa (bare-light perception only before implantation) and received the implant in the eye with poorer vision at baseline (S1: left eye; S2, S3: right eye). Initial switch-on and fitting began 8 weeks postoperatively, followed by laboratory-based and at-home training. Participant information is summarized in Table 1. The study was approved by the Royal Victorian Eye and Ear Hospital Human Research and Ethics Committee and was carried out in accordance with the tenets of the Declaration of Helsinki with the informed consent of all participants. 
Table 1.
 
Participant Demographics
Suprachoroidal Retinal Implant
Full details of the implant, surgery, and training procedures are available in previous publications.5,6,26,36 Briefly, an array consisting of 44 platinum disc electrodes (1 mm diameter; 1.4 mm pitch) was implanted in the suprachoroidal space of one eye, and its electrodes were available for stimulation. The electrode array was connected via a subcutaneous lead-wire to a pair of stimulators implanted above the ear, and stimulation commands were transmitted wirelessly to the stimulators by an external unit. In normal use of the device, electrode activity is modulated by images captured by a head-mounted camera. However, in this study the research software controlled the stimulation sequence delivered to each electrode directly. The subjects had already undergone a device-fitting process that identified the electrodes (or shorted pairs of electrodes, to adhere to maximum per-electrode charge density limits) that yielded phosphenes, and established the operational stimulation parameters for those electrodes. Figure 1 displays near-infrared fundus imaging (Heidelberg Spectralis, Heidelberg, Germany) showing the implant location within the suprachoroidal space of each subject. Preoperative imaging is available for comparison in Supplementary Figure S1.
Figure 1.
 
Near-infrared fundus images showing the 44-channel suprachoroidal retinal implant within the eye for each subject. The dashed blue line indicates the edge of the implant. Electrodes are visible as bright circles (some are hidden behind pigmentation). Concentric red circles indicate 10° eccentricities of visual field centered on the fovea according to the Drasdo and Fowler schematic eye.37,38 Green circles denote the subset of electrodes that were selected for stimulation in the phosphene mapping tasks. Green ovals encompassing two neighboring electrodes indicate they were operated as a shorted pair to adhere to per-electrode charge density limits.
Phosphene Mapping Tasks
The subjects each performed two phosphene mapping tasks, which required them to indicate the perceived location of phosphenes within their visual field to produce a map of phosphene locations. In the first task, the subjects pointed with their finger on an easel to indicate phosphene locations (finger-based mapping task). In the second task, they used eye movement to indicate phosphene locations (gaze-based mapping task). The two tasks were performed on different dates (up to 44 weeks apart), with an additional finger-based touchscreen task performed within the gaze-based session for S1 only (Table 2). A subset of electrodes, shown in Figure 1 circled in green (A–F), was selected for use in the tasks with the aim of sampling locations from as large an extent of visual field as possible. Circles encompassing two neighboring electrodes indicate that those electrodes were operated as a shorted pair to adhere to per-electrode charge density limits. Stimulation pulse trains used anodic phase–first biphasic constant current pulses, with 500 µs phase width and 500 µs interphase gap. The stimulation level, duration, and frequency for each electrode (or shorted pair) were chosen at the beginning of each session based on verbal feedback from the subject so as to produce readily localizable phosphenes. These settings are summarized in Table 2.
Table 2.
 
Stimulation Parameters for Each Electrode (or Shorted Pair of Electrodes) in the Finger Pointing and Eye Gaze Phosphene Mapping Tasks
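The rationale for shorting neighboring electrodes can be illustrated with a simple charge density calculation from the pulse parameters above. The sketch below is illustrative only: the current amplitude is a hypothetical placeholder rather than a value from Table 2, and the exact charge density limit applied during fitting is not stated here.

```python
import math

# Hypothetical example: the current amplitude is a placeholder, not a value from Table 2.
current_ua = 1000.0          # stimulation current (microamps), assumed for illustration
phase_width_us = 500.0       # phase width given in the Methods (microseconds)
electrode_diameter_mm = 1.0  # platinum disc diameter given in the Methods

charge_per_phase_uc = current_ua * phase_width_us * 1e-6           # uA * us * 1e-6 = microcoulombs
disc_area_cm2 = math.pi * (electrode_diameter_mm / 10.0 / 2.0) ** 2 # area of one disc (cm^2)

density_single = charge_per_phase_uc / disc_area_cm2               # uC/cm^2 on one electrode
density_shorted_pair = charge_per_phase_uc / (2 * disc_area_cm2)    # shorting doubles the area

print(f"single electrode: {density_single:.0f} uC/cm^2")
print(f"shorted pair:     {density_shorted_pair:.0f} uC/cm^2")
```

Operating two adjacent electrodes as a shorted pair halves the charge density for the same injected charge, which is how a per-electrode limit can be respected while still delivering enough charge to evoke a phosphene.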
Finger-Based Mapping Task
Data for a finger-based mapping task were obtained 3 to 6 weeks after device switch-on (Table 2). The subject was seated in front of a large easel at arm's length. A tactile marker was fixed to the easel at the subject's shoulder height, and the viewing distance was measured from the subject's head to the tactile marker using a laser distance measure (S1: 44 cm; S2: 44 cm; S3: 48 cm). The subject was instructed to keep their head still for the duration of the task to maintain a constant viewing distance. For the purpose of confirming fixation stability only, eye position was monitored using an uncalibrated head-mounted video eye tracker (Arrington Research, Scottsdale, AZ). Note that this was a different system from the stereoscopic eye tracker used in the gaze-based mapping task, which was unavailable at the time of this experiment. Before each trial, the subject placed the index fingers of both hands on the tactile marker and fixated their gaze on their fingers using the sense of proprioception. Once the subject confirmed they were maintaining fixation, and this was verified by the researcher by observing the live eye tracker signal, stimulation was delivered to a single electrode or shorted electrode pair on a balanced-random schedule. Immediately after stimulation, the subject responded by moving the index finger of one hand across the easel to indicate the location of the center of the perceived and remembered phosphene while maintaining fixation on the central tactile marker. The researcher then marked the indicated location with a pen. The position of each indicated location relative to the fixation point (tactile marker) was measured in millimeters and converted to degrees of visual field using the previously measured viewing distance. 
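The conversion from a marked easel position to degrees of visual field follows directly from the measured viewing distance. A minimal sketch (the viewing distance is taken from the values reported above; the marked offsets are arbitrary examples):

```python
import math

def easel_offset_to_degrees(offset_mm: float, viewing_distance_mm: float) -> float:
    """Convert a marked offset on the easel (relative to the tactile fixation
    marker) into degrees of visual angle at the measured viewing distance."""
    return math.degrees(math.atan2(offset_mm, viewing_distance_mm))

# Example: a response marked 80 mm to the right of and 35 mm below the marker
# for S1 (viewing distance 44 cm = 440 mm).
horizontal_deg = easel_offset_to_degrees(80.0, 440.0)   # ~10.3 degrees to the right
vertical_deg = easel_offset_to_degrees(-35.0, 440.0)    # ~-4.5 degrees (below the marker)
print(horizontal_deg, vertical_deg)
```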
Subject S1 participated in an additional session of finger-based mapping, conducted at the same time-point as the gaze-based mapping (week 25) (Table 2). The pointed locations were recorded digitally on a touchscreen monitor instead of on the easel, but the methodology in this session was otherwise identical to the first session. Time constraints in the ongoing clinical trial precluded collection of touchscreen data for subjects S2 and S3. 
Gaze-Based Mapping Task
Data for the gaze-based mapping task were obtained 25 weeks after device switch-on for subjects S1 and S2 and 50 weeks after device switch-on for subject S3. The subject was seated and head fixed with their chin in a chinrest. Before each trial, the subject was instructed to fixate centrally, left of center, or right of center. This strategy ensured that a range of initial eye positions was represented in the data, to facilitate a comparison of movement of the implanted and nonimplanted eyes during the task. Trials starting with leftward and rightward fixation were performed for three of the six electrodes for S1 and S2 and for four of the six electrodes for S3, whereas trials with central fixation were performed for all electrodes. Once the subject confirmed they were maintaining fixation, and the researcher verified this by examining the live eye tracker output, stimulation was delivered to a single electrode (or shorted pair of electrodes) on a balanced-random schedule. After stimulation, the subjects were instructed to immediately make an eye movement to the perceived and remembered location of the center of the resultant phosphene, report "yes" to the researcher, and maintain their gaze until told to return to center fixation shortly thereafter. A calibration-free stereoscopic video eye tracker was used to monitor the position of both eyes simultaneously at 300 Hz throughout the task. The eye tracker system, developed at the Donders Institute for Brain Cognition and Behaviour (Nijmegen, the Netherlands) and previously described by Barsingerhorn et al.,33 was capable of measuring the orientation of the optical axis directly with less than 1° accuracy without requiring the subject to perform a calibration routine. Eye position was reported as the angle of the optical axis (azimuth and elevation) relative to a fixed vertical plane in front of the subject. No adjustment to account for the constant offset between the optical axis and the true visual axis (often referred to as the angle kappa; see Barsingerhorn et al.33) was necessary because the data analysis (described elsewhere in this article) was only concerned with relative changes in eye angle, not the absolute gaze point. The chin rest ensured that the subject's head position remained fixed; therefore, any change in the reported gaze angle could be attributed to eye movement alone. 
Saccades were detected using a velocity threshold of 20°/s. Saccades with amplitudes smaller than 2° were discarded. At least one saccade greater than 2° in amplitude occurred within 2 seconds after stimulus onset in every trial in which the subject reported seeing a percept. In each trial, the first saccade to occur after stimulus onset was identified, and the end point of the saccade was used to estimate the perceived and remembered phosphene location. Because large saccades may be followed quickly by a nonvolitional corrective saccade, the saccade offset was adjusted to enclose any subsequent saccades that were initiated up to 130 ms after the offset of the initial saccade.39 The perceived phosphene location relative to the fovea was then estimated as the eye position at the end point of the saccade minus the eye position at stimulus onset. Eye position data were inspected manually for every trial, and any trials in which the eye position signal was noisy or contained artefact (e.g., blink artefact) that interfered with the estimation of the phosphene location were excluded from further analysis. 
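The saccade-parsing steps described above (a 20°/s velocity threshold, rejection of saccades under 2°, enclosure of corrective saccades initiated within 130 ms of the first saccade's offset, and the response defined as the saccade end point minus eye position at stimulus onset) could be implemented roughly as follows. This is a simplified sketch with assumed variable names and sampling conventions, not the authors' analysis code.

```python
import numpy as np

def detect_saccades(t, pos, vel_thresh=20.0):
    """Return (start, end) sample indices of saccades.
    t: sample times (s); pos: N x 2 array of eye angles (azimuth, elevation) in deg."""
    speed = np.linalg.norm(np.gradient(pos, t, axis=0), axis=1)   # eye speed, deg/s
    above = np.concatenate(([False], speed > vel_thresh, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))            # rising/falling transition pairs
    return list(zip(edges[0::2], edges[1::2] - 1))                # inclusive end index

def estimate_phosphene_location(t, pos, stim_onset, min_amp=2.0, merge_gap=0.130):
    """Phosphene location relative to eye position at stimulus onset (deg),
    taken from the first post-stimulus saccade as described in the Methods."""
    onset_idx = np.searchsorted(t, stim_onset)
    saccades = [(a, b) for a, b in detect_saccades(t, pos)
                if a >= onset_idx                                  # only saccades after stimulus onset
                and np.linalg.norm(pos[b] - pos[a]) >= min_amp]    # discard saccades smaller than 2 deg
    if not saccades:
        return None                                                # no response detected in this trial
    first_start, first_end = saccades[0]
    initial_offset_time = t[first_end]
    # Enclose corrective saccades initiated within 130 ms of the first saccade's offset.
    for a, b in saccades[1:]:
        if t[a] - initial_offset_time <= merge_gap:
            first_end = b
        else:
            break
    return pos[first_end] - pos[onset_idx]
```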
Data Analysis
To test the hypothesis that resection of the lateral rectus muscle during the prosthesis implantation surgery may have affected the oculomotor mechanics of the implanted eye, the movements of the implanted and nonimplanted eyes were compared. For each subject, a linear total least-squares regression model40 was fitted to the change in eye angle of the implanted eye versus the nonimplanted eye during the saccadic movements identified in the gaze-based phosphene mapping task. Separate linear regression models were calculated for the horizontal and vertical components of the saccades. A bootstrapping of the regression residuals was used to estimate regression line confidence intervals without assuming uniform residuals. A strong correlation with a gradient of unity would be indicative of conjugate eye movement, whereas a weak correlation or nonunity gradient would indicate disconjugate movement. 
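A total least-squares (orthogonal) regression with a residual bootstrap for the gradient's confidence interval might look like the sketch below, with the implanted-eye saccade component treated as the dependent variable so that a gradient below unity indicates attenuation of the implanted eye. The resampling scheme shown is an assumption; the paper does not specify the authors' exact implementation.

```python
import numpy as np

def tls_gradient(x, y):
    """Gradient of the total least-squares (orthogonal) fit of y on x."""
    centered = np.column_stack([x - np.mean(x), y - np.mean(y)])
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                      # first principal direction of the point cloud
    return direction[1] / direction[0]

def tls_gradient_ci(x, y, n_boot=10_000, alpha=0.05, seed=None):
    """Bootstrap the orthogonal residuals to obtain a confidence interval
    for the total least-squares gradient (sketch only)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    b = tls_gradient(x, y)
    a = np.mean(y) - b * np.mean(x)

    norm = np.hypot(1.0, b)
    resid = (y - (a + b * x)) / norm                  # signed orthogonal residuals
    foot_x = (x + b * (y - a)) / (1.0 + b ** 2)       # projection of each point onto the fitted line
    foot_y = a + b * foot_x
    ux, uy = -b / norm, 1.0 / norm                    # unit normal to the fitted line

    boot = [tls_gradient(foot_x + r * ux, foot_y + r * uy)
            for r in (rng.choice(resid, size=resid.size, replace=True)
                      for _ in range(n_boot))]
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return b, (lo, hi)

# Usage (illustrative variable names): horizontal saccade components in degrees,
# nonimplanted eye on x, implanted eye on y.
# gradient, ci = tls_gradient_ci(dx_nonimplanted, dx_implanted)
```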
Phosphene maps were produced by averaging the indicated phosphene locations (measured relative to the fovea or other preferred fixation locus) for each electrode. Separate maps were produced for the finger-based task and the gaze-based task. Electrode locations relative to the fovea were measured from infrared fundus imaging in mm and transformed to degrees of visual field using the Drasdo and Fowler schematic eye (Fig. 1). Congruency between the phosphene maps and the expected locations of phosphenes based on the electrode layout was quantified by the mean distance from the estimated phosphene location (behavioral response) to the expected phosphene location (retinotopic electrode location). To verify that the initial eye position did not affect the estimated phosphene location in gaze-based mapping, the distance for each subject from the estimated phosphene location to the expected phosphene location was compared between trials in which the subject was instructed to fixate left, right, and center using a nonparametric Kruskal–Wallis test with initial fixation as the test factor. The distortion of the phosphene map relative to the electrode locations was characterized using Procrustes analysis, which computed the linear transformation (translation, scaling, and rotation) that, when applied to the electrode locations, minimized the mean squared error between the transformed electrode locations and the phosphene locations. We confirmed that each Procrustes result was a true observation using a bootstrap analysis, wherein behavioral responses were repeatedly assigned to a random permutation of the electrode locations before calculating the Procrustes transformation again (n = 10,000). Data from trials in which the subject was unable to differentiate a phosphene against spontaneous background activity were excluded from all analyses. 
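The Procrustes step can be reproduced with a standard similarity-transform fit (uniform scale, rotation, translation) followed by the permutation-based bootstrap described above. The sketch below assumes the electrode and mean response coordinates are already expressed in degrees of visual field (i.e., after the Drasdo and Fowler conversion) and that the test statistic is the fitted scale factor; it is not the authors' code.

```python
import numpy as np

def procrustes_similarity(electrodes, responses):
    """Fit scale, rotation, and translation mapping electrode locations onto
    mean behavioral responses (both K x 2 numpy arrays, degrees of visual field),
    minimizing the mean squared error. Reflections are not handled in this sketch."""
    E = electrodes - electrodes.mean(axis=0)
    R = responses - responses.mean(axis=0)
    U, S, Vt = np.linalg.svd(E.T @ R)
    rotation = U @ Vt                                   # 2 x 2, applied to row vectors: e @ rotation
    scale = S.sum() / (E ** 2).sum()
    translation = responses.mean(axis=0) - scale * electrodes.mean(axis=0) @ rotation
    fitted = scale * electrodes @ rotation + translation
    error = np.mean(np.sum((fitted - responses) ** 2, axis=1))
    return scale, rotation, translation, error

def scale_permutation_test(electrodes, responses, n_perm=10_000, seed=None):
    """Compare the observed scale factor against scale factors obtained after
    randomly permuting the electrode-to-response assignment (n = 10,000 in the paper)."""
    rng = np.random.default_rng(seed)
    observed = procrustes_similarity(electrodes, responses)[0]
    null = np.array([procrustes_similarity(electrodes[rng.permutation(len(electrodes))],
                                           responses)[0]
                     for _ in range(n_perm)])
    p_value = np.mean(null >= observed)                 # one-sided on the scale factor; an assumption
    return observed, p_value
```

The rotation angle reported in Table 4 can be recovered from the off-diagonal and diagonal elements of the fitted rotation matrix, and the translation is expressed in degrees of visual field.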
Results
Attenuated Movement of Implanted Eye
Figure 2 plots the change in gaze angle of the implanted eye versus the nonimplanted eye during periods of saccadic eye movement. Each point represents the eye movement during the saccade made in response to electrical stimulation in a single trial in the gaze-based phosphene mapping task. For all three subjects, the horizontal component of movement was smaller in the implanted eye than in the nonimplanted eye, indicated by the 95% confidence interval of the gradient being less than unity (S1: 0.75 ± 0.11; S2: 0.64 ± 0.13; and S3: 0.81 ± 0.04). The vertical component of eye movement was similar in both eyes for S1 (gradient = 1.08 ± 0.06) and S2 (0.96 ± 0.07), but slightly smaller in the implanted eye for S3 (0.91 ± 0.04). 
Figure 2.
 
Comparison of saccade amplitudes for the implanted eye versus the nonimplanted eye. Each point represents the horizontal (blue) or vertical (red) change in gaze angle during the saccade made in response to electrode stimulation in a single trial of the gaze-based mapping task. For horizontal eye movement (blue), positive values indicate rightwards movement. For vertical eye movement (red), positive values indicate upwards movement. A linear total least-squares regression model was fitted for each dataset (solid lines), followed by a bootstrapping of the regression residuals. The resulting 95% confidence interval of the gradient is displayed in the top left of each panel. Gradients of less than one indicate that movement of the implanted eye was attenuated with respect to the nonimplanted eye.
Phosphene Maps
Maps of phosphene locations derived from behavioral responses are compared with the retinotopic electrode locations in Figure 3. The number of trials per phosphene included in the dataset is given in the lower corner of each panel in Figure 3, and for gaze-based mapping the additional number after the stroke indicates the total number of trials performed including those that were discarded owing to artefact in the eye tracker signal. The total number of trials varies between subjects owing to time constraints and subject fatigue, ranging from 7 to 25 trials per phosphene for gaze-based mapping and 2 to 10 trials per phosphene for finger-based mapping. The similarity between the indicated phosphene locations and the electrode locations was quantified for each map by the mean distance between the estimated phosphene location (behavioral response) and the retinotopic electrode location, shown in Figure 4, with smaller distances indicating that phosphene locations more closely matched the electrode locations. 
Figure 3.
 
Comparison of retinotopic electrode locations to the estimated phosphene locations in the finger-based and gaze-based phosphene mapping tasks. Colored circles indicate expected phosphene locations, determined by the retinotopic placement of the electrode according to the Drasdo and Fowler schematic eye.37,38 Crosses with error bars indicate estimated perceived phosphene location relative to the fovea as measured by the behavioral response (mean ± SD). Text in the lower corner of each panel indicates the number of trials included in the dataset per phosphene, and for gaze-based mapping the number after the stroke indicates the total number of trials performed including those discarded owing to artefact in the eye tracker signal. For any given trial of the gaze-based mapping task, data from one or both eyes were discarded if it contained excessive artefact, resulting in uneven numbers of trials between the nonimplanted (A, E, and H) and implanted eyes (B, F, and I) in some cases. Note that no data is available for S2 nonimplanted eye for phosphene F (light blue), because the pupil of the nonimplanted eye fell out of eye-tracker range during large leftwards movements. (C) S1 performed a touchscreen finger-based mapping task at the same time-point as gaze-based mapping. (D, G, and J) Participants also performed a finger-based mapping task at an earlier time-point.
Figure 4.
 
Mean distance (± SD) from the estimated phosphene location (behavioral response) to the retinotopic electrode location for each subject and task. Results from an additional touchscreen finger-based mapping session (orange) are included for S1.
For gaze-based mapping, phosphene locations were derived from the nonimplanted eye movement because the preceding analyses found that the movement of the implanted eye was possibly attenuated (Fig. 2). The mean distance from the estimated phosphene location to the expected phosphene location was significantly different between subjects (mean for S1 = 8.15°; S2 = 3.58°; S3 = 23.4°; P < 0.001), but there was no effect of initial eye position (all P > 0.05). The initial horizontal gaze angle was significantly different depending on the instruction (“look left,” “look right,” or “look forward”) for all three subjects, confirming that they were able to make eye movements on instruction (Table 3). 
Table 3.
 
Summary Statistics for the Initial Horizontal Gaze Angle in the Gaze-Based Mapping Task
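The check that initial eye position did not bias the gaze-based estimates can be reproduced with a Kruskal–Wallis test on per-trial phosphene-to-electrode distances grouped by the instructed fixation, as described in the Methods. A minimal sketch with placeholder data (the distances below are randomly generated for illustration, not measured values):

```python
import numpy as np
from scipy.stats import kruskal

# Placeholder per-trial distances (deg) from the gaze-estimated phosphene
# location to the retinotopic electrode location, grouped by instructed fixation.
rng = np.random.default_rng(0)
dist_left = rng.gamma(4.0, 2.0, size=12)
dist_center = rng.gamma(4.0, 2.0, size=24)
dist_right = rng.gamma(4.0, 2.0, size=12)

stat, p_value = kruskal(dist_left, dist_center, dist_right)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3f}")
```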
Table 4 summarizes the results of the Procrustes analysis, characterizing the distortion (scaling, translation, and rotation) of the phosphene map relative to the electrode locations. Scale factors were not significantly different from the bootstrap distribution for responses derived from S3’s gaze-based mapping (P > 0.05). For the remaining data, the bootstrap analyses confirmed that the reported phosphene locations were directly related to the physical electrode layout and not from a random distribution. 
Table 4.
 
Characterization of the Distortion of the Phosphene Maps Relative to the Retinotopic Electrode Locations Using Procrustes Analysis
For S1, the gaze-based and finger-based phosphene maps all had a fair correspondence to the electrode locations (Figs. 3A–D). There was no significant difference in phosphene-electrode distances between the gaze-based map and the touchscreen finger-based map obtained in the same test session (Welch's analysis of variance for unequal variances; P = 0.55). All phosphene maps for S1 were compressed relative to the electrode layout (Table 4). This effect was most pronounced in the first finger-based phosphene map, which had a scale factor of 0.37 and was also vertically translated relative to the electrode locations by 11.7°. 
For S2, the gaze-based and finger-based phosphene locations both corresponded well with the retinotopic electrode locations (Figs. 3E–G). There was minimal bias (translation) between the phosphene locations and electrode locations for any of the phosphene maps for S2, most notably for the gaze-based map (scale factor = 0.99; translation ≤ 0.2°) (Table 4). There was some compression of the phosphene maps relative to the electrode locations, and this was most prominent in the finger-based phosphene map (scale factor 0.67) (Table 4). 
For S3, the finger-derived phosphene map resembled a compressed rendering of the electrode layout (scale factor 0.83) (Table 4) with the exception of phosphenes E (green) and F (light blue), which were displaced relative to their associated electrodes (Fig. 3J). Note that phosphenes E and F were excluded when computing the optimal linear transformation from electrode locations to phosphene locations. In gaze-based phosphene mapping for S3, all eye movements were towards a similar region of the visual field (Figs. 3H, I). Scale factors near zero for the gaze-based phosphene maps were not significantly different from a random bootstrap distribution (P > 0.05). This indicates that a linear mapping of electrode locations to phosphene locations was not possible; the phosphene locations had no correspondence to retinotopic electrode location, and the optimal Procrustes solution was to collapse all the electrode locations to a single point located at the centroid of the phosphene locations. 
Rotational distortion of the measured phosphene locations relative to the electrode locations was generally small (<10°), and might be explained by torsional rotation of the eye. One exception to this is the first finger-based map for S1 (rotation = −29.4°), but this rotation was not present in subsequent finger-based and gaze-based maps for this subject. A large rotational distortion was also reported for the gaze-based maps for S3; however, in this case Procrustes analysis failed to find a linear mapping of electrode locations to phosphene locations, so this result should be disregarded. 
Saccade Latency
Histograms of the latency between stimulus onset and saccade onset in the gaze-based mapping task are presented in Figure 5. Mean saccade latency (± standard deviation) was 559 ± 145 ms for S1, 421 ± 239 ms for S2, and 687 ± 279 ms for S3. On average, saccades were initiated after stimulus offset (500 ms) for S1 and S3, whereas saccades for S2 were rarely initiated after stimulus offset. To investigate possible differences in responses between eye movements made during stimulation and those made after stimulation, we conducted a Kruskal–Wallis test per subject on response error with saccade latency (before offset, after offset) as a fixed factor. There was no effect of latency (all P > 0.05). The mean errors for the two latency conditions were S1 = 8.21 ± 4.75°, S2 = 3.68 ± 1.78°, and S3 = 25.0 ± 11.1° for saccades before stimulus offset, and S1 = 8.09 ± 4.21°, S2 = 3.00 ± 0.76°, and S3 = 22.9 ± 11.1° for saccades after stimulus offset (Supplementary Figure S2). 
Figure 5.
 
Histograms showing the distribution of saccade latency in the gaze-based phosphene mapping task for each subject, S1 to S3. Saccade latency is measured from stimulus onset to saccade onset. The mean latency (µ) and standard deviation (σ) are indicated at the top-right corner of each panel.
Discussion
Gaze-Based Versus Finger-Based Mapping
The present study compared phosphene maps derived from eye movements (gaze-based mapping) with maps derived from electrode locations in three suprachoroidal retinal implant recipients. Procrustes analysis was used to quantify distortion in the measurements; for two subjects (S1, S2), the fitted scale and translation indicated that the gaze-based maps correlated well with the retinotopic electrode layout. S2 was exceptional, with a 0.99 scale factor and less than 0.2° of translational distortion in the fitted model. Data for subject S1 were validated further against a conventional finger-based map collected on the same date with the same stimulation parameters. This comparison demonstrated that the gaze-based method yielded statistically similar location results. Taken together, these results provide some evidence that gaze-based mapping could be used in place of finger-based mapping in retinal implant recipients, allowing estimation of the size and position of the prosthetic visual field. 
Previous studies comparing gaze-based and finger-based phosphene mapping have been limited to simulated prosthetic vision and have directly compared the behavioral response to a ground-truth location of the high-contrast, punctate, simulated phosphene within the simulator display. In these studies, gaze-based phosphene maps more closely matched the true phosphene locations compared with finger-based phosphene maps (Kaskhedikar GP, et al. IOVS 2015;56(7):ARVO E-Abstract 4315; Weinreb S, et al. IOVS 2020;61(7):ARVO E-Abstract 4274). For electrically evoked phosphenes, the true location is more difficult to determine because the phosphene only exists in the subject's perception. Instead, we have compared the behavioral response to retinotopic electrode locations under the assumption that retinotopy is observed. In reality, phosphene locations may diverge from retinotopy owing to a number of practical considerations. First, the retinal remodeling associated with degenerative retinal disease25 may distort the mapping of retinal space to perceptual space. Second, phosphenes can be large, irregularly shaped, indistinct, or consist of multiple bodies, owing to current spread and the geography of the surviving target neurons. The incidental stimulation of retinal axon fibers also causes phosphenes to be elongated in the direction of the axon's trajectory.24 In the present study, all three subjects expressed that it could be difficult to judge the centroid of some phosphenes because they had an irregular or indistinct form. Nevertheless, our finding that gaze-based mapping can produce viable phosphene maps aligns with previous studies in simulated prosthetic vision. Finger-based mapping may remain necessary for subjects who have difficulty with eye movement. 
Based on previous simulated prosthetic vision studies, we had expected gaze-based mapping to produce more accurate estimates of phosphene locations compared with finger-based mapping by eliminating open-loop pointing bias (Kaskhedikar GP, et al. IOVS 2015;56(7):ARVO E-Abstract 4315; Weinreb S, et al. IOVS 2020;61(7):ARVO E-Abstract 4274). We also expected gaze-based mapping to be more intuitive to the subject, because suprachoroidal retinal implant recipients have previously described their phosphenes as existing "within the eye," leading to difficulty conceptualizing them as existing outside of the eye during drawing and size estimation tasks.23 Contrary to our expectation, the gaze-based and finger-based maps obtained in the same session were substantially similar for subject S1. It is possible that any difference in accuracy between the two methods at this time-point was small compared with the uncertainty of phosphene locations. In contrast, the earlier finger-based map for the same subject (obtained 5 weeks after device switch-on) was compressed and overlapping. This finding may indicate that a requisite learning process (with respect to the spatial relationship between phosphenes and open-loop pointing) occurred between the two time-points. 
Gaze-based mapping may offer lower task complexity compared with finger-based mapping because the subject is not required to fixate on a tactile marker before stimulus onset, and we confirmed that any initial eye position is acceptable. Moreover, we confirmed that saccades initiated before stimulus offset yielded similar phosphene maps to saccades initiated after stimulus offset. In this study, the researcher manually triggered each stimulus, but conceivably stimuli could be triggered automatically or even controlled by the subject via a button-press. This would improve efficiency and reduce the need for supervision. Additionally, the subject could press a button to timestamp the end of their eye movement, eliminating the need for saccade detection. 
A significant limitation of our comparison of gaze-based and finger-based mapping is the interval between data collection using the two methods, and the differing stimulation levels and durations used for the two methods; hence, we have only performed a direct comparison between the same-session data for subject S1. Changes in stimulation level and duration are not expected to affect the genesis of phosphene locations but may affect the size and shape, and hence the perceived center, of a phosphene.23 Additionally, the effect of eye position on phosphene location was not explicitly controlled in the finger-based mapping task. The subjects were instructed to maintain fixation on the tactile nub during finger pointing, as for previous studies,23,27,28 but some degree of eye movement is likely to have occurred, which may have introduced additional variation to the indicated phosphene locations. We also did not investigate the use of relative mapping to refine the results, as demonstrated in previous studies.15,20 
The estimated phosphene locations most closely matched the electrode locations in S2, followed by S1 and then S3. Variations in the fidelity of phosphene locations to electrode locations for different individuals are consistent with previous reports in retinal implants, and fidelity may be further decreased for the paired electrodes used in this study. A study in two intrascleral implant recipients found that phosphenes appeared in the expected quadrant of the visual field, but the topographical correspondence between actual phosphene and expected phosphene locations was not always conserved.27 In our first-generation study in three suprachoroidal retinal implant recipients, retinotopy was preserved in two subjects with some degree of distortion, whereas for the third subject (who, notably, had a parafoveal array placement) all phosphenes had the same appearance regardless of which electrode was stimulated.23 A study in six Argus II recipients reported that the location of a phosphene appeared in the expected quadrant of the visual field in four subjects but not in the remaining two.28 Finally, a different study in two Argus II recipients reported that the general arrangement was preserved but the distances between phosphenes were considerably amplified relative to the retinotopic electrode locations.29 
Phosphene Locations Versus Functional Vision
Our previous reports for this cohort revealed a disparity in functional vision outcomes between subjects. Performance was worse for S3 compared with S1 and S2 in all screen-based assessments (target localization, motion discrimination, and spatial discrimination) and in functional vision assessments (modified door task, tabletop search, and obstacle avoidance).6 In a study on motion discrimination, we concluded that S3 had little or no retinotopic discrimination and instead depended on head scanning cues to determine direction of motion. In contrast, S1 and S2 were able to use retinotopic cues and could perform the task without head scanning.26 
The phosphene maps in the present study demonstrate that phosphenes for S1 and S2 were generally spatially distinct and appeared in approximately the expected location. For S3, the finger-based phosphene locations (6 weeks after switch-on) approximately matched the expected locations for four out of six phosphenes, but the gaze-based phosphene locations (50 weeks after switch-on) did not correspond with the expected locations at all. This finding could indicate simply that S3 could not perform the gaze-based mapping task. Alternatively, the response of the retina to stimulation may have changed over the 44 weeks between the two tasks. 
From week 43 after surgery onward, S3 began describing phosphenes as "7-shaped." This description was given to phosphenes produced by a number of different electrodes. A "7" shape is approximately consistent with the layout of electrodes that were found to reliably produce phosphenes during device fitting, that is, the shape of the retinal tissue that was responsive to stimulation. It seems as if the same population, or overlapping populations, of neurons may have been stimulated by many different electrodes, as observed in one subject in our previous clinical trial of a 24-channel suprachoroidal retinal implant.23 Unfortunately, no finger-based mapping data are available for S3 beyond 6 weeks after switch-on to provide a definitive answer. However, phosphenes that are largely spatially indiscriminable, as suggested by the gaze-based phosphene map (week 50), are consistent with the generally poorer functional vision outcomes for S3, in particular their apparent lack of retinotopic discrimination highlighted during motion discrimination.26 In this case, remapping the vision processing to reflect the measured phosphene locations would be unlikely to improve functional vision because the phosphenes are not distinct spatially. 
Before surgery, our vitreoretinal surgeon ranked the severity of the degeneration as more advanced in S3 compared with S1 and S2. This factor would be consistent with fewer surviving retinal neurons and more progressed retinal degeneration associated with cone–rod dystrophy (versus rod–cone in subjects S1 and S2) (Table 1).25 This finding reinforces the notion that the integrity of the remodeled inner retina is a key predictor of spatial discrimination. It is worth stressing that S3 did have positive outcomes; performance on all functional vision measures (except the spatial discrimination task) was better with device on versus off, and activities of daily living were improved with the device on versus off.5,6 This outcome demonstrates that positive outcomes with a retinal prosthesis are possible even when spatial discrimination of phosphenes is limited.41 
Postsurgical Oculomotor Behavior
The implanted eye moved relatively less than the nonimplanted eye during periods of saccadic movement in all three subjects. Mechanical tugging of the trans-scleral lead wire,42 resection of lateral rectus muscle during the implantation surgery,43 and fibrosis forming around the extraocular section of the lead wire,44 may have damped the oculomotor response of the implanted eye, resulting in less movement from the same motor command. In light of this, we consider the nonimplanted eye movement to be the best representation of the intended saccade, and therefore the most relevant for gaze-based phosphene mapping. Additionally, if eye trackers are implemented into retinal prostheses for naturalistic control of gaze,4547 they should target the nonimplanted eye, as this would more closely reflect the intended eye movement. 
Latencies of saccades in the gaze-based mapping task averaged 559 ms for S1, 421 ms for S2, and 687 ms for S3. A typical range of latencies for normally sighted adults in a pro-saccade task is 200 to 250 ms,48,49 which is considerably shorter than the latencies observed in this study. Visual factors such as target size and contrast can influence saccade latency. High saccade latency may also reflect a latency between stimulation onset and perception, or the cognitive load involved in localizing electrically evoked phosphenes, because saccade latency increases with cognitive load.50,51 Other factors may include altered oculomotor behavior associated with profound blindness, and potential mechanical effects after surgery. 
Eye movements affect the perceived locations of phosphenes, and eye movements that occur during phosphene presentation cause a corresponding movement of the phosphene.16,19,32,47 In the gaze-based mapping task, eye movement often commenced before stimulus offset (Fig. 5). In these cases, the phosphene was presumably still visible and moved with the eye. Despite this, the gaze-based phosphene locations for S1 and S2 approximately match the expected phosphene locations and the finger-based phosphene locations, and we found no significant difference in phosphene locations derived from saccades initiated before versus after stimulus offset. Previous finger-based phosphene mapping studies have reported that perceived phosphene locations were predominantly dependent on eye position at stimulus onset, rather than the eye position at response time, suggesting that the task required memorization of the spatial location at stimulation onset.29 However, in contrast with the present study, the Caspi et al. study29 did not analyze eye position during stimulation (only after stimulus offset) and so provides no information comparing the effects of eye position at stimulus onset versus in the middle of stimulation. Taken as a whole, we found that deviant eye positions were inherently compensated for in the gaze-based mapping task, whereas they may be considered a potential confound in a finger-based mapping task. 
Conclusions
This study demonstrated a gaze-based phosphene mapping method in three suprachoroidal retinal implant recipients and quantified the spatial relationship to the retinotopic layout of the implanted electrodes. Derived phosphene maps for two subjects correlated well with the retinotopic electrode layout. A third subject could not produce a coherent phosphene map using gaze-based mapping, but it is unclear if this represents an inability to perform the task or if the task revealed the phosphenes were spatially indistinct. A worse correspondence between perceived phosphene locations and retinotopic electrode locations was linked with worse functional vision outcomes (reported in previously published studies).26 We also noted oculomotor abnormalities in the implanted eye for all subjects, predominantly in the horizontal plane, which may be a result of lateral rectus muscle resection during surgery. The results of the present study provide further evidence that gaze-based systems can produce verifiable phosphene maps and may be less affected by behavioral confounds (such as open-loop pointing bias) observed in conventional finger-based mapping. 
Acknowledgments
The Bionics Institute and the Centre for Eye Research Australia acknowledge the support they receive from the Victorian Government through its Operational Infrastructure Support Program. The Bionics Institute gratefully acknowledges financial support from the estate of the late Brian Entwisle. 
Disclosure: S.A. Titchener, Bionic Vision Technologies Pty Ltd (F); J. Goossens, None; J. Kvansakul, None; D.A.X. Nayagam, Bionic Vision Technologies Pty Ltd (F); Bionic Vision Technologies Pty Ltd (P); M. Kolic, Bionic Vision Technologies Pty Ltd (F); Bionic Vision Technologies Pty Ltd (R); E.K. Baglin, Bionic Vision Technologies Pty Ltd (F); Bionic Vision Technologies Pty Ltd (R); L.N. Ayton, None; C.J. Abbott, Bionic Vision Technologies Pty Ltd (R); Bionic Vision Technologies Pty Ltd (F); C.D. Luu, Bionic Vision Technologies Pty Ltd (F); N. Barnes, Bionic Vision Technologies Pty Ltd (F); Data61 (P); W.G. Kentler, None; M.N. Shivdasani, None; P.J. Allen, Bionic Vision Technologies Pty Ltd (F); Bionic Vision Technologies Pty Ltd (P); M.A. Petoe, Bionic Vision Technologies Pty Ltd (F); Bionic Vision Technologies Pty Ltd (R); Bionic Vision Technologies Pty Ltd (P) 
References
Ostad-Ahmadi Z, Daemi A, Modabberi M-R, Mostafaie A. Safety, effectiveness, and cost-effectiveness of Argus II in patients with retinitis pigmentosa: A systematic review. Int J Ophthalmol. 2021; 14(2): 310–316, doi:10.18240/ijo.2021.02.20. [CrossRef] [PubMed]
Bloch E, da Cruz L. The Argus II retinal prosthesis system. In: Prosthesis. London: IntechOpen; 2019.
Stingl K, Schippert R, Bartz-Schmidt KU, et al. Interim results of a multicenter trial with the new electronic subretinal implant Alpha AMS in 15 patients blind from inherited retinal degenerations. Front Neurosci. 2017; 11: 445. [CrossRef] [PubMed]
Stingl K, Bartz-Schmidt KU, Besch D, et al. Subretinal visual implant alpha IMS–clinical trial interim report. Vision Res. 2015; 111: 149–160. [CrossRef] [PubMed]
Karapanos L, Abbott CJ, Ayton LN, et al. Functional vision in the real-world environment with a 44-channel suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2021; 62(8): 3203.
Petoe MA, Titchener SA, Kolic M, et al. A second generation (44 channel) suprachoroidal retinal prosthesis: Interim clinical trial results. Transl Vis Sci Technol. 2021; 10(10): 12, doi:10.1167/tvst.10.10.12. [CrossRef] [PubMed]
Fujikado T, Kamei M, Sakaguchi H, et al. One-year outcome of 49-channel suprachoroidal–transretinal stimulation prosthesis in patients with advanced retinitis pigmentosa. Invest Ophthalmol Vis Sci. 2016; 57(14): 6147–6157. [CrossRef] [PubMed]
Palanker D, Le Mer Y, Mohand-Said S, Muqit M, Sahel JA. Photovoltaic restoration of central vision in atrophic age-related macular degeneration. Ophthalmology. 2020; 127(8): 1097–1104, doi:10.1016/j.ophtha.2020.02.024. [CrossRef] [PubMed]
Charters L. NR600 retinal prosthesis: Safe with promising visual results. Mod Retin Ophthalmol. Published online October 2020.
Fernández E, Normann RA. CORTIVIS approach for an intracortical visual prostheses. In: Gabel VP, ed. Artificial Vision: A Clinical Guide. New York: Springer International Publishing; 2017: 191–201, doi:10.1007/978-3-319-41876-6_15.
Troyk PR. The intracortical visual prosthesis project. In: Gabel VP, ed. Artificial Vision: A Clinical Guide. New York: Springer International Publishing; 2017: 203–214, doi:10.1007/978-3-319-41876-6_16.
Second Sight Medical Products Announces Two-Year Results of its Orion Study. Business Wire. Available at: https://www.businesswire.com/news/home/20210512005147/en/Second-Sight-Medical-Products-Announces-Two-Year-Results-of-its-Orion-Study.
Purves D, Augustine GJ, Fitzpatrick D, et al. eds. Neuroscience. 2nd edition. Sunderland (MA): Sinauer Associates; 2001. Available from: https://www.ncbi.nlm.nih.gov/books/NBK10799/.
Benson NC, Butt OH, Brainard DH, Aguirre GK. Correction of distortion in flattened representations of the cortical surface allows prediction of V1-V3 functional organization from anatomy. PLoS Comput Biol. 2014; 10(3): e1003538, doi:10.1371/journal.pcbi.1003538. [CrossRef] [PubMed]
Stronks HC, Dagnelie G. Phosphene mapping techniques for visual prostheses. In: Visual Prosthetics. New York: Springer; 2011: 367–383.
Brindley GS, Lewin WS. The sensations produced by electrical stimulation of the visual cortex. J Physiol. 1968; 196(2): 479. [CrossRef] [PubMed]
Dobelle WH, Mladejovsky MG. Phosphenes produced by electrical stimulation of human occipital cortex, and their application to the development of a prosthesis for the blind. J Physiol. 1974; 243(2): 553–576. [CrossRef] [PubMed]
Mladejovsky MG, Eddington DK, Evans JR, Dobelle WH. A computer-based brain stimulation system to investigate sensory prostheses for the blind and deaf. IEEE Trans Biomed Eng. 1976; BME-23(4): 286–296, doi:10.1109/TBME.1976.324587. [CrossRef]
Caspi A, Barry MP, Patel UK, et al. Eye movements and the perceived location of phosphenes generated by intracranial primary visual cortex stimulation in the blind. Brain Stimul. 2021; 14(4): 851–860, doi:10.1016/j.brs.2021.04.019. [CrossRef] [PubMed]
Oswalt D, Bosking W, Sun P, et al. Multi-electrode stimulation evokes consistent spatial patterns of phosphenes and improves phosphene mapping in blind subjects. Brain Stimul. 2021; 14(5): 1356–1372, doi:10.1016/j.brs.2021.08.024. [CrossRef] [PubMed]
Ahuja AK, Behrend MR. The Argus™ II retinal prosthesis: Factors affecting patient selection for implantation. Prog Retin Eye Res. 2013; 36: 1–23, doi:10.1016/j.preteyeres.2013.01.002. [CrossRef] [PubMed]
Shivdasani MN, Sinclair NC, Gillespie LN, et al. Identification of characters and localization of images using direct multiple-electrode stimulation with a suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2017; 58(10): 3962–3974. [CrossRef] [PubMed]
Sinclair NC, Shivdasani MN, Perera T, et al. The appearance of phosphenes elicited using a suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2016; 57(11): 4948–4961, doi:10.1167/iovs.15-18991. [CrossRef] [PubMed]
Beyeler M, Nanduri D, Weiland JD, Rokem A, Boynton GM, Fine I. A model of ganglion axon pathways accounts for percepts elicited by retinal implants. Sci Rep. 2019; 9(1): 9199. [CrossRef] [PubMed]
Marc RE, Jones BW, Watt CB, Strettoi E. Neural remodeling in retinal degeneration. Prog Retin Eye Res. 2003; 22(5): 607–655. [CrossRef] [PubMed]
Titchener SA, Kvansakul J, Shivdasani MN, et al. Oculomotor responses to dynamic stimuli in a 44-channel suprachoroidal retinal prosthesis. Transl Vis Sci Technol. 2020; 9(13): 31. [CrossRef] [PubMed]
Fujikado T, Kamei M, Sakaguchi H, et al. Testing of semichronically implanted retinal prosthesis by suprachoroidal-transretinal stimulation in patients with retinitis pigmentosa. Invest Ophthalmol Vis Sci. 2011; 52(7): 4726–4733, doi:10.1167/iovs.10-6836. [CrossRef] [PubMed]
Luo YHL, Zhong JJ, Clemo M, da Cruz L. Long-term repeatability and reproducibility of phosphene characteristics in chronically implanted Argus II retinal prosthesis subjects. Am J Ophthalmol. 2016; 170: 100–109. [CrossRef] [PubMed]
Caspi A, Roy A, Dorn JD, Greenberg RJ. Retinotopic to spatiotopic mapping in blind patients implanted with the Argus II retinal prosthesis. Invest Ophthalmol Vis Sci. 2017; 58(1): 119–127. [CrossRef] [PubMed]
Caspi A, Dorn J, Helder JB, Katyal KD, Roy A. Eye movements as a marker for visual prosthesis spatial mapping - a feasibility study using a blind patient implanted with the Argus II retinal prosthesis. In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. 2016; 2016: 5443–5446. Lake Buena Vista, Florida, doi:10.1109/EMBC.2016.7591958.
Enright JT. The non-visual impact of eye orientation on eye-hand coordination. Vision Res. 1995; 35(11): 1611–1618. [CrossRef] [PubMed]
Sabbah N, Authié CN, Sanda N, Mohand-Said S, Sahel J-A, Safran AB. Importance of eye position on spatial localization in blind subjects wearing an Argus II retinal prosthesis. Invest Ophthalmol Vis Sci. 2014; 55(12): 8259–8266. [CrossRef] [PubMed]
Barsingerhorn AD, Boonstra FN, Goossens J. Development and validation of a high-speed stereoscopic eyetracker. Behav Res Methods. 2018; 50(6): 2480–2497. [CrossRef] [PubMed]
Barsingerhorn AD, Boonstra FN, Goossens J. Saccade latencies during a preferential looking task and objective scoring of grating acuity in children with and without visual impairments. Acta Ophthalmol. 2019; 97(6): 616–625. [CrossRef] [PubMed]
Tanke N, Barsingerhorn AD, Boonstra FN, Goossens J. Visual fixations rather than saccades dominate the developmental eye movement test. Sci Rep. 2021; 11(1): 1–13. [CrossRef] [PubMed]
Abbott CJ, Nayagam DAX, Luu CD, et al. Safety studies for a 44-channel suprachoroidal retinal prosthesis: A chronic passive study. Invest Ophthalmol Vis Sci. 2018; 59(3): 1410–1424, doi:10.1167/iovs.17-23086. [CrossRef] [PubMed]
Drasdo N, Fowler CW. Non-linear projection of the retinal image in a wide-angle schematic eye. Br J Ophthalmol. 1974; 58(8): 709. [CrossRef] [PubMed]
Dacey DM, Petersen MR. Dendritic field size and morphology of midget and parasol ganglion cells of the human retina. Proc Natl Acad Sci USA. 1992; 89(20): 9666–9670. [CrossRef] [PubMed]
Leigh RJ, Zee DS. The Neurology of Eye Movements. New York: Oxford University Press; 2015.
Hall J. Linear Deming regression. MATLAB central file exchange. Published 2021. Accessed November 23, 2021. Available at: https://www.mathworks.com/matlabcentral/fileexchange/33484-linear-deming-regression.
Petoe MA, McCarthy CD, Shivdasani MN, et al. Determining the contribution of retinotopic discrimination to localization performance with a suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2017; 58(7): 3231–3239. [CrossRef] [PubMed]
Faber H, Ernemann U, Sachs H, et al. CT assessment of intraorbital cable movement of electronic subretinal prosthesis in three different surgical approaches. Transl Vis Sci Technol. 2021; 10(8): 16, doi:10.1167/tvst.10.8.16. [CrossRef] [PubMed]
Ayton LN, Blamey PJ, Guymer RH, et al. First-in-human trial of a novel suprachoroidal retinal prosthesis. PLoS One. 2014; 9(12): e115239. [CrossRef] [PubMed]
Villalobos J, Nayagam DAX, Allen PJ, et al. A wide-field suprachoroidal retinal prosthesis is stable and well tolerated following chronic implantation. Invest Ophthalmol Vis Sci. 2013; 54(5): 3751–3762, doi:10.1167/iovs.12-10843. [CrossRef] [PubMed]
Paraskevoudi N, Pezaris JS. Eye movement compensation and spatial updating in visual prosthetics: Mechanisms, limitations and future directions. Front Syst Neurosci. 2018; 12: 73. [CrossRef] [PubMed]
Titchener SA, Shivdasani MN, Fallon JB, Petoe MA. Gaze compensation as a technique for improving hand–eye coordination in prosthetic vision. Transl Vis Sci Technol. 2018; 7(1): 2, doi:10.1167/tvst.7.1.2. [CrossRef] [PubMed]
Caspi A, Roy A, Wuyyuru V, et al. Eye movement control in the Argus II retinal-prosthesis enables reduced head movement and better localization precision. Invest Ophthalmol Vis Sci. 2018; 59(2): 792–802. [CrossRef] [PubMed]
Purves D, Augustine G, Fitzpatrick D, Hall WC, Lamantia AS, Mcnamara JO. Eye movements and sensory motor integration. In: Neuroscience. Sunderland, MA: Sinauer Associates; 2001: 453–468.
Yang Q, Bucci MP, Kapoula Z. The latency of saccades, vergence, and combined eye movements in children and in adults. Invest Ophthalmol Vis Sci. 2002; 43(9): 2939–2949. [PubMed]
Stuyven E, Van der Goten K, Vandierendonck A, Claeys K, Crevits L. The effect of cognitive load on saccadic eye movements. Acta Psychol (Amst). 2000; 104(1): 69–85, doi:10.1016/S0001-6918(99)00054-2. [CrossRef] [PubMed]
Fadardi MS, Abel LA. The effect of cognitive load on saccadic characteristics. Invest Ophthalmol Vis Sci. 2012; 53(14): 4865.
Figure 1.
 
Near-infrared fundus images showing the 44-channel suprachoroidal retinal implant within the eye for each subject. The dashed blue line indicates the edge of the implant. Electrodes are visible as bright circles (some are hidden behind pigmentation). Concentric red circles indicate 10° eccentricities of visual field centered on the fovea according to the Drasdo and Fowler schematic eye.37,38 Green circles denote the subset of electrodes that were selected for stimulation in the phosphene mapping tasks. Green ovals encompassing two neighboring electrodes indicate they were operated as a shorted pair to adhere to per-electrode charge density limits.
Figure 2.
 
Comparison of saccade amplitudes for the implanted eye versus the nonimplanted eye. Each point represents the horizontal (blue) or vertical (red) change in gaze angle during the saccade made in response to electrode stimulation in a single trial of the gaze-based mapping task. For horizontal eye movement (blue), positive values indicate rightwards movement. For vertical eye movement (red), positive values indicate upwards movement. A linear total least-squares regression model was fitted for each dataset (solid lines), followed by a bootstrapping of the regression residuals. The resulting 95% confidence interval of the gradient is displayed in the top left of each panel. Gradients of less than one indicate that movement of the implanted eye was attenuated with respect to the nonimplanted eye.
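For readers wishing to reproduce this style of fit, the following minimal sketch implements Deming (total least-squares) regression with a simplified residual bootstrap for the 95% confidence interval of the gradient. It is written in Python with NumPy and assumes equal error variances in the two eyes; the original analysis used an existing MATLAB implementation of linear Deming regression, and its exact bootstrap scheme may differ from the one shown here.

import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming (total least-squares) regression of y on x.
    delta is the assumed ratio of error variances (1.0 = orthogonal fit)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2
                       + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

def bootstrap_gradient_ci(x, y, n_boot=2000, seed=0):
    """95% confidence interval for the gradient via a simplified residual
    bootstrap: resample vertical residuals, add them back to the fitted
    values, and refit."""
    rng = np.random.default_rng(seed)
    slope, intercept = deming_fit(x, y)
    fitted = intercept + slope * np.asarray(x, float)
    resid = np.asarray(y, float) - fitted
    boot_slopes = [deming_fit(x, fitted + rng.choice(resid, resid.size))[0]
                   for _ in range(n_boot)]
    return slope, np.percentile(boot_slopes, [2.5, 97.5])

In this sketch, calling bootstrap_gradient_ci(nonimplanted_amplitudes, implanted_amplitudes) would return the fitted gradient and its bootstrap confidence interval; gradients below one correspond to attenuated movement of the implanted eye.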
Figure 3.
 
Comparison of retinotopic electrode locations to the estimated phosphene locations in the finger-based and gaze-based phosphene mapping tasks. Colored circles indicate expected phosphene locations, determined by the retinotopic placement of the electrode according to the Drasdo and Fowler schematic eye.37,38 Crosses with error bars indicate the estimated perceived phosphene location relative to the fovea as measured by the behavioral response (mean ± SD). Text in the lower corner of each panel indicates the number of trials included in the dataset per phosphene; for gaze-based mapping, the number after the slash indicates the total number of trials performed, including those discarded owing to artefact in the eye-tracker signal. For any given trial of the gaze-based mapping task, data from one or both eyes were discarded if they contained excessive artefact, resulting in uneven numbers of trials between the nonimplanted (A, E, and H) and implanted (B, F, and I) eyes in some cases. Note that no data are available for the S2 nonimplanted eye for phosphene F (light blue), because the pupil of the nonimplanted eye fell out of eye-tracker range during large leftwards movements. (C) S1 performed a touchscreen finger-based mapping task at the same time point as gaze-based mapping. (D, G, and J) Participants also performed a finger-based mapping task at an earlier time point.
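As an illustration of how an expected phosphene location can be derived from an electrode's retinal position, the sketch below applies a simple linear retina-to-visual-field conversion. It assumes roughly 0.28 mm of retina per degree of visual angle near the posterior pole, which is only an approximation; the study used the full nonlinear Drasdo and Fowler schematic eye,37,38 and the sign conventions depend on which eye is imaged and on the orientation of the fundus image.

MM_PER_DEG = 0.28  # assumed approximate retinal scale (mm of retina per degree)

def expected_phosphene_location(dx_mm, dy_mm):
    """Convert an electrode's retinal offset from the fovea (mm, as seen in a
    fundus image) into an approximate visual-field location (degrees).
    The retinal image is inverted through the eye's optics, so an electrode
    superior to the fovea is expected to produce a phosphene in the inferior
    visual field, and likewise for the horizontal axis."""
    ecc_x_deg = dx_mm / MM_PER_DEG
    ecc_y_deg = dy_mm / MM_PER_DEG
    # Negate both axes to map a retinal location to a visual-field location.
    return -ecc_x_deg, -ecc_y_deg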
Figure 4.
 
Mean distance (± SD) from the estimated phosphene location (behavioral response) to the retinotopic electrode location for each subject and task. Results from an additional touchscreen finger-based mapping session (orange) are included for S1.
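The error metric plotted here can be computed directly from matched coordinate arrays; a minimal sketch in Python with NumPy (array names are illustrative) is:

import numpy as np

def mean_phosphene_error(estimated_xy, expected_xy):
    """Mean and SD of the Euclidean distance (degrees) between estimated
    phosphene locations and the corresponding retinotopic electrode
    locations. Both inputs are (N, 2) arrays matched row by row."""
    d = np.linalg.norm(np.asarray(estimated_xy, float)
                       - np.asarray(expected_xy, float), axis=1)
    return d.mean(), d.std(ddof=1)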
Figure 5.
 
Histograms showing the distribution of saccade latency in the gaze-based phosphene mapping task for each subject, S1 to S3. Saccade latency is measured from stimulus onset to saccade onset. The mean latency (µ) and standard deviation (σ) are indicated at the top-right corner of each panel.
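Saccade onset detection is not detailed in this figure; a common approach is a velocity-threshold criterion, sketched below in Python with NumPy. The 30 deg/s threshold and the data layout are assumptions for illustration and are not necessarily the criteria used in the study.

import numpy as np

def saccade_latency_s(t_s, gaze_deg, stim_onset_s, vel_thresh_deg_s=30.0):
    """Estimate saccade latency as the time from stimulus onset to the first
    sample at which gaze speed exceeds a velocity threshold.

    t_s: sample times (s); gaze_deg: (N, 2) gaze positions in degrees."""
    t = np.asarray(t_s, float)
    gaze = np.asarray(gaze_deg, float)
    # Instantaneous gaze speed (deg/s) by numerical differentiation.
    speed = np.linalg.norm(np.gradient(gaze, t, axis=0), axis=1)
    candidates = (t >= stim_onset_s) & (speed > vel_thresh_deg_s)
    if not candidates.any():
        return np.nan                   # no saccade detected on this trial
    onset_idx = np.argmax(candidates)   # index of the first threshold crossing
    return t[onset_idx] - stim_onset_s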
Table 1.
 
Participant Demographics
Table 2.
 
Stimulation Parameters for Each Electrode (or Shorted Pair of Electrodes) in the Finger Pointing and Eye Gaze Phosphene Mapping Tasks
Table 3.
 
Summary Statistics for the Initial Horizontal Gaze Angle in the Gaze-Based Mapping Task
Table 4.
 
Characterization of the Distortion of the Phosphene Maps Relative to the Retinotopic Electrode Locations Using Procrustes Analysis
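Procrustes analysis of this kind can be reproduced with a standard similarity superimposition (translation, rotation, and uniform scaling). The sketch below is a generic Python and NumPy implementation assuming row-matched coordinate arrays; the exact parameterization reported in the table may differ.

import numpy as np

def procrustes_distortion(electrode_xy, phosphene_xy):
    """Fit the phosphene map onto the retinotopic electrode map with a
    similarity transform (translation, rotation, uniform scale) and report
    the distortion parameters. Both inputs are (N, 2) arrays in degrees,
    matched row by row."""
    X = np.asarray(electrode_xy, float)
    Y = np.asarray(phosphene_xy, float)
    X0 = X - X.mean(axis=0)          # remove translation
    Y0 = Y - Y.mean(axis=0)
    # Optimal rotation (possibly a reflection) from the SVD of the
    # cross-covariance matrix, then the optimal uniform scaling.
    U, S, Vt = np.linalg.svd(Y0.T @ X0)
    R = U @ Vt
    scale = S.sum() / (Y0 ** 2).sum()
    Y_fit = scale * Y0 @ R
    disparity = ((X0 - Y_fit) ** 2).sum() / (X0 ** 2).sum()
    rotation_deg = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return {'scale': scale,
            'rotation_deg': rotation_deg,
            'disparity': disparity,
            'reflection': bool(np.linalg.det(R) < 0)}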