Our study is not without limitations. First, we have presented a somewhat biased subset of medical conditions that cause vision impairment, given the challenges we encountered when calibrating the eye tracker for participants with certain ocular conditions. Although we were able to include people with central or peripheral vision loss in particular, we were less able to include participants with conditions such as aniridia, coloboma, Marfan's syndrome–related corneal abnormalities, cataracts, and nystagmus. Future eye-tracking research in individuals with vision impairment should consider using emerging calibration-free eye trackers, which will first need to be trained on individuals with vision impairment.

Second, the results of our cluster analysis should be interpreted with caution because of the low number of participants (n = 11). Although we categorized participants based on their ocular condition and resulting vision impairment, specific details, such as secondary conditions and other manifestations, as well as the exact severity of impairment, were often not available. This lack of comprehensive medical data limits our understanding of how specific aspects of vision impairment, beyond broad categories, may influence gaze patterns during hitting.

Furthermore, the impact of serve depth was not considered in the analysis. Variation in serve depth could conceivably alter the likelihood of a saccade to the ball bounce or contact point occurring.16 Although we have no reason to believe that serve depth differed between participants, it is possible that some participants faced shallower or deeper serves, which could have influenced their gaze behavior and overall performance. Future research should take serve depth into account.

Finally, eye tracking can present significant challenges when used with people with vision impairment.44 In our study, we used a video-based mobile eye tracker because we collected data on court while participants hit actual tennis balls. An eye tracker of this nature required manual digitization of the video footage rather than the use of algorithms to automatically analyze gaze, as might be possible when performing a task on screen. The manual digitization does, of course, add a margin for error. Moreover, it required us to make assumptions about where gaze was directed when the system was calibrated, particularly for participants with central vision loss. In those cases, we assumed, based on previous experience, that the participants would be able to direct their central vision toward the calibration targets. It is possible that there was some error in doing so and that their fovea was not precisely directed toward the target, as would be expected in those with an intact fovea. Nonetheless, it is clear that the participants with central vision loss were largely tracking the ball with areas of the retina different from those used during calibration (i.e., the gaze angle differed from the ball angle; Fig. 8). Moreover, any calibration offset does not change the central finding that the area of the retina used to view the ball changed in four of the five participants with central vision loss.