Chapter 7: Restoring Vision to the Blind: Advancements in Vision Aids for the Visually Impaired
Author Notes
  • Correspondence: See Appendix 2 in the Supplementary Material. 
Translational Vision Science & Technology December 2014, Vol.3, 9. doi:https://doi.org/10.1167/tvst.3.7.9
Discussion Leaders: Frank Werblin and Daniel Palanker 
Scribe: Daniel Rathbun 
Session Participants: Serge Picaud, Eberhart Zrenner, John Pezaris, James Weiland, Bradley Greger, Hugo Marx, Stephen Van Hooser, Joe Rizzo, and Dirk Trauner 
Introduction
Visual impairment is a significant limitation of visual capability, resulting from disease or trauma, that cannot be corrected by conventional means such as refractive correction or medication. Ocular disorders that can lead to visual impairment include retinal degeneration, albinism, cataracts, glaucoma, corneal disorders, diabetic retinopathy, congenital disorders, infection, and macular problems. Visual impairment caused by brain and nerve disorders is usually termed cortical visual impairment. Projections based on the 2010 United States census indicate that 13 million Americans aged 40 and older will have a visual impairment or be blind by the year 2050 (Anon., 2012). 
According to the World Health Organization (Arditi & Rosenthal, 1998), best-corrected vision in the better eye is classified as follows: 
  • 1. 20/30 to 20/60 – near-normal vision or mild vision loss;
  • 2. 20/70 to 20/160 – moderate visual impairment, or moderate low vision;
  • 3. 20/200 to 20/400 – severe visual impairment, or severe low vision;
  • 4. 20/500 to 20/1000 – profound visual impairment, or profound low vision;
  • 5. Below 20/1000 – near-total visual impairment, or near-total blindness; and
  • 6. No light perception – total visual impairment, or total blindness.
There are also levels of visual impairment based on loss of the visual field. In the United States, any person with best-corrected visual acuity (BCVA) below 20/200 or visual field smaller than 20° in the better-seeing eye is considered legally blind (Medicare Vision Rehabilitation Services Act of 2003 HR 1902 IH, 2003). 
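As a hedged illustration, the WHO ranges and the US legal-blindness criterion above can be captured in a small classifier. The function names, the category labels, and the handling of gaps between the listed ranges (e.g., 20/65) are assumptions for this sketch, not part of the source.

```python
def classify_acuity(denominator, light_perception=True):
    """Map a 20/denominator best-corrected acuity (better eye) to an
    impairment category per the WHO ranges listed above.

    Gaps between the published ranges (e.g., 20/65) are assigned to the
    nearest lower category here; that choice is an assumption.
    """
    if not light_perception:
        return "total blindness"
    if denominator <= 60:
        return "mild vision loss"
    if denominator <= 160:
        return "moderate low vision"
    if denominator <= 400:
        return "severe low vision"
    if denominator <= 1000:
        return "profound low vision"
    return "near-total blindness"


def is_legally_blind(denominator, field_degrees):
    """US criterion as worded above: BCVA below 20/200, or a visual
    field smaller than 20 degrees, in the better-seeing eye."""
    return denominator > 200 or field_degrees < 20
```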
The possibilities for the development of visual aids for the low vision and blind communities have expanded dramatically with the development of smartphone technologies and image recognition algorithms, and hold even bigger promise with the rapid advancement of video goggles. The proliferation of lightweight tablet computers with high-resolution displays, high-speed processors, and low power consumption, cameras of high resolution and small size, and the connectivity of these devices to ubiquitous networks offer broad new horizons for the development of sophisticated devices that augment and compensate for visual impairment. 
This review is divided into two main areas: (1) devices currently available and proposed to aid the low vision and blind community, and (2) algorithms helpful for visual aids, visual augmentation, and restoration of sight to the blind. 
Devices
Optical and Electronic Magnifiers
A multiplicity of optical and electronic devices for the visually impaired is already available on the market, and more are being developed to help patients with mobility (beyond the traditional guide dog and white cane), with reading, and with other daily functions. The most common visual aids are optical and electronic magnifiers, shown in Figures 7.1 and 7.2. Modern displays with high resolution, wide dynamic range, and good contrast, together with software for contrast enhancement and reversal (white text on black background), allow much more comfortable reading at the desk than a simple magnifying glass does. However, these simple devices fail to offer help with navigation and object recognition at home or in the supermarket. 
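The contrast processing just described can be sketched in a few lines. This is a minimal illustration on a list-of-lists grayscale image, assuming linear contrast stretching, polarity reversal, and nearest-neighbor zoom; it is not the software of any particular commercial magnifier.

```python
def stretch_contrast(img, lo=0, hi=255):
    """Linearly map the image's min..max brightness range onto lo..hi."""
    flat = [p for row in img for p in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:  # flat image: nothing to stretch
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[round(lo + (p - mn) * scale) for p in row] for row in img]


def reverse_polarity(img):
    """Invert grayscale so dark text on white becomes white on black."""
    return [[255 - p for p in row] for row in img]


def magnify(img, factor):
    """Nearest-neighbor digital zoom by an integer factor."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out
```

A real electronic magnifier would chain these per frame: magnify, stretch contrast, then reverse polarity for reading.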
Figure 7.1.
 
Optical magnifier - the most popular prescription for the low vision patient. It is very useful for the static task of reading but fails for more mobile tasks such as navigation, object recognition, and “cooking dinner at the stove.”
Figure 7.2.
 
A digital camera acquires the image, which is then digitally magnified and presented on electronic display. With the widespread availability of electronic books and other media, comfortable reading on a large display becomes even more natural.
An implantable telescope (Fig. 7.3) is now available at many major low-vision clinics in the United States. However, so far such devices have not gained popularity, due to the severe reduction in visual field and the irreversibility of implantation. 
Figure 7.3.
 
A small telescope is inserted in place of the conventional intra ocular lens. It provides magnification for the patient's central vision, but it reduces the visual field.
Mobile Digital Devices
A different class of mobile visual aids is based on video goggles, in which an image captured by a head-mounted camera is displayed on a near-the-eye display, such as the eSight or Jordy goggles shown in Figure 7.4. Modern cameras offer electronic zoom, autofocus, and adaptation to ambient lighting in a small package at moderate cost. However, so far these products have failed in the market due to their narrow visual field and the cumbersome adjustment of parameters, such as contrast or brightness, with a set of knobs; their cost is also perceived as not offering sufficient value for the money (Culham, Chabra, & Rubin, 2004; Culham, Chabra, & Rubin, 2009). 
Figure 7.4.
 
Top: Video goggles marketed by eSight. Bottom: A mobile visual aid called the “Jordy” marketed by Enhanced Vision. Both models suffer from a small field of view and cumbersome knobs to adjust parameters.
Resolution, contrast, and visual field of video goggles keep improving, with the Oculus Rift already providing stereoscopic vision at 1920 × 1080 resolution over >110° of visual angle. However, the social awkwardness of bulky headwear is another barrier to market acceptance. As with hearing aids, patients do not want to advertise their disability by wearing a signpost on their heads. The goggles also cut off socially essential eye contact: even the low-vision patient would like to be able to look other people "in the eyes." A solution to some of these limitations is being developed by Lumus Inc. (see Lumus website). A thin (1.6 mm) semitransparent display allows the user to see the world through the glass and to be seen from outside. The small size, light weight, and ergonomic design of these video goggles (Fig. 7.5) make them appear similar to regular optical glasses, minimizing the social awkwardness of the electronic eyewear. High-resolution images (1280 × 720 pixels) over a visual field of up to 40°, with a contrast ratio of 250:1, offer comfortable viewing of the displayed information. This information could include magnified and enhanced versions of text or objects, path guidance, face recognition, and other aspects of augmented reality. 
Figure 7.5.
 
Lumus video goggles with a semi-transparent display which allows overlaying digital images over the visual scene - a representation called “augmented reality.” (Photo courtesy of Lumus Inc.)
Which patients would benefit the most from electronic goggles? Patients with tunnel vision (advanced stages of retinitis pigmentosa and glaucoma) could benefit from a zoomed-out view, widening their visual angle. Age-related macular degeneration patients with reduced central vision could benefit from magnification and enhanced contrast of the image, especially if presented to the preferred retinal locus (PRL). 
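The zoomed-out view for tunnel vision can be illustrated with a naive decimation sketch that squeezes a wide scene into a narrow residual field. This nearest-pixel version is only a toy; a real system would low-pass filter before decimating to avoid aliasing.

```python
def minify(img, factor):
    """Crude zoom-out: keep every factor-th pixel in each dimension,
    so a wide visual scene fits into a narrow residual field."""
    return [row[::factor] for row in img[::factor]]
```

For the complementary AMD case, the same pipeline would instead magnify and contrast-enhance the image before presenting it to the preferred retinal locus.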
Video Goggles for Restoration of Sight to the Blind
Similar video goggles could be used for optogenetic (Busskamp et al., 2010; Lagali et al., 2008), photopharmacological (Polosukhina et al., 2012; Tochitsky et al., 2014), or photovoltaic (Mandel et al., 2013; Mathieson et al., 2012) restoration of sight to the blind. These approaches introduce additional challenges: the video goggles should provide very bright pulsed illumination at specific wavelengths (blue or yellow) to activate channel- or halorhodopsin in optogenetic approaches or azobenzene-based photoswitches in photopharmacological approaches, or near-infrared light for photovoltaic implants. Direct activation of retinal ganglion cells (RGCs) with pulse trains mimicking natural firing patterns in optogenetic or photopharmacological approaches will also require direct control of the pixels in the digital light processing (DLP) or liquid-crystal display (LCD) array, a feature that will require custom electronic controllers. In addition, calculation and delivery of "natural retinal code"-like trains of pulses for direct RGC activation will require eye tracking to monitor movements of the visual scene on the retina. Eye tracking also allows more advanced image processing, including radial stretch and local magnification at the fovea (Asher, Segal, Baccus, Yaroslavsky, & Palanker, 2007), as described in the section Algorithmic Development below. 
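As a toy illustration of the per-pixel control and eye-tracking compensation such goggles would need, the sketch below maps frame brightness to a pulse count per pixel per frame and shifts the frame by a measured eye displacement. The linear rate code and zero padding are assumptions of this sketch, not the actual retinal encoding or any device's controller logic.

```python
def frame_to_pulse_counts(frame, max_pulses=20):
    """Assumed linear rate code: map 0..255 brightness to
    0..max_pulses stimulation pulses per pixel per frame."""
    return [[round(p / 255 * max_pulses) for p in row] for row in frame]


def compensate_gaze(frame, dx):
    """Shift frame columns by dx pixels to keep the stimulus aligned
    with the retina after a tracked horizontal eye movement.
    Vacated columns are zero-padded (an assumption of this sketch)."""
    w = len(frame[0])
    if dx >= 0:
        return [[0] * dx + row[:w - dx] for row in frame]
    return [row[-dx:] + [0] * (-dx) for row in frame]
```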
Alternative Sensory Substitution
An alternative approach to helping visually impaired patients is sensory substitution. One technique, called "Brainport" (Arnoldussen & Fletcher, 2012), includes an array of vibrating pixels that represent patterns of the visual scene on the tongue, as illustrated in Figure 7.6. 
Figure 7.6.
 
The “Brainport” generates a tactile display as an array of vibrating “pixels” placed upon the tongue. Patients have shown remarkable prowess using this device to perform sporting activities, for example. (Photo courtesy of Wicab.)
Another alternative, called "EyeMusic," encodes images into sequences of sounds representing a scan of the visual scene (Striem-Amit & Amedi, 2014). After training, patients equipped with a camera and earphones learn to use this system for orientation in a room, letter recognition, and other visual tasks, as illustrated in Figure 7.7. 
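The scan-to-sound idea can be sketched as column-wise sonification: the image is scanned left to right, and each column becomes a chord whose tones mark bright pixels, with row position mapped to pitch. The frequency range, threshold, and mapping below are illustrative assumptions, not the actual EyeMusic encoding.

```python
def column_to_tones(column, f_low=220.0, f_high=880.0, threshold=128):
    """Return tone frequencies (Hz) for bright pixels in one column;
    top rows map to high pitch, bottom rows to low pitch."""
    n = len(column)
    tones = []
    for i, p in enumerate(column):
        if p >= threshold:
            frac = i / (n - 1) if n > 1 else 0.0  # 0 at top, 1 at bottom
            tones.append(f_high - frac * (f_high - f_low))
    return tones


def scan_image(img):
    """Left-to-right scan: one chord (list of frequencies) per column."""
    h, w = len(img), len(img[0])
    return [column_to_tones([img[r][c] for r in range(h)])
            for c in range(w)]
```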
Figure 7.7.
 
Images captured by the video camera are converted into the sequences of sounds representing brightness or color of various parts of the scene. (Photo courtesy of Amir Amedi Lab.)
Alternatively, image-to-voice conversion could be based on image recognition and voice guidance. For example, the Orcam system, which includes a head-mounted camera and a computer (Fig. 7.8), can recognize the text at which it is being pointed. The user can point to a street sign, newspaper text, items on a supermarket shelf, an approaching bus, and other targets. 
Figure 7.8.
 
The Orcam device. A small camera mounted to the temple of the glasses acquires the visual scene and “speaks” the text at which the patient points. In this example, the patient is pointing at the sign that reads MASARYK St., and the Orcam device speaks the word.
Algorithmic Development
A very important component of visual augmentation and enhancement with digital displays in general, and video goggles in particular, is software that performs not only simple tasks, such as edge enhancement and thresholding, but also more advanced functions, such as image recognition and simplification. This should allow easily recognizable symbolic or cartoon representations of objects: "Platonic" ideals rather than real objects. One common example is character recognition followed by rendering of clean, sharp fonts in place of the fuzzy text in the actual scene. Other objects could include simplified contours of doors, tables, chairs, guide lines on sidewalks, faces, and so on. 
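The two "simple tasks" named here can be sketched minimally on a list-of-lists grayscale image. The crude forward-difference gradient below stands in for a proper edge filter (e.g., Sobel) purely for illustration.

```python
def threshold(img, t=128):
    """Binarize: pixels at or above t become white, others black."""
    return [[255 if p >= t else 0 for p in row] for row in img]


def edge_magnitude(img):
    """Crude edge enhancement: |horizontal diff| + |vertical diff|,
    computed with forward differences (last row/column left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h - 1):
        for c in range(w - 1):
            gx = img[r][c + 1] - img[r][c]
            gy = img[r + 1][c] - img[r][c]
            out[r][c] = min(255, abs(gx) + abs(gy))
    return out
```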
A recent advance is the encoding of depth into the image, as in Google's Project Tango (Fig. 7.9). This technology maps the distances to objects in the visual field and encodes them as false color on top of the actual contours of the objects in the scene. Such a technique could provide additional guidance about the depth and spatial relationships of objects that might be hard for the visually impaired to perceive. 
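The depth-to-false-color step can be sketched as a simple colormap over measured distances. The near-red, far-blue ramp and the working range below are illustrative choices, not Project Tango's actual rendering.

```python
def depth_to_rgb(depth_m, near=0.5, far=5.0):
    """Map a distance in meters to an (r, g, b) false color:
    near objects red, far objects blue, clamped to [near, far]."""
    frac = (depth_m - near) / (far - near)
    frac = max(0.0, min(1.0, frac))
    return (int(255 * (1 - frac)), 0, int(255 * frac))
```

Overlaying these colors on the edge map of the scene would yield contours tinted by distance, as described above.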
Figure 7.9.
 
Google “Project Tango” is a smartphone-sized device that maps the immediate environment in 3D, and displays the color coded depth. This could be very useful for the low vision patient as a way of orienting for both indoor and outdoor environments.
Figure 7.10.
 
A. Foveal pit and parafoveal area of the human retina, with diagrammatic illustration of the radial spread of the connections between the photoreceptors in the fovea to bipolar and ganglion cells in the capillary-free zone. B. Visual scene on the photoreceptor plane, with a red cross indicating the fixation point for a particular direction of gaze. C. Same information remapped according to positions of the bipolar cells displaced from the fovea. Black disk corresponds to the central 2 degrees of the visual field with absent inner retinal neurons.
Additional help with navigation is offered by GPS, accelerometers and gyroscopes (inertial navigation systems), and detailed maps, which allow comfortable orientation both indoors and outdoors. Modern user interfaces are also becoming much more intuitive and multifunctional than the knobs and switches of previous devices. Devices can be controlled by gestures, voice, touch screen, and other nonintrusive actions. 
Similar algorithms for image simplification and enhanced sparsity (such as representation by contours) should help with optical approaches to restoration of sight. In addition, sequential rather than simultaneous activation of pixels in the prosthetic approach may help reduce cross-talk between neighboring pixels, and thereby increase the contrast of the image. Selective activation of different retinal cell types with these techniques (achieved with selective expression of transgenes in optogenetics, selective binding of photoswitches in photopharmacology, or the specific location of electrodes and use of stimulation waveforms optimized for specific cell layers in electronic prostheses) may further help improve proper interpretation of the stimulation patterns by the brain. For example, stimulation of the inner nuclear layer performed at a sufficiently high frame rate may allow for flicker fusion (Lorach et al., 2014). Alternatively, direct activation of specific types of RGCs may include encoding of the projected visual scene into bursts of pulses corresponding to the patterns of natural activity that RGCs would produce in response to images projected onto the healthy retina. Because image location on the retina is affected by eye movements, the latter type of activation will require precise eye tracking. 
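Sequential activation can be illustrated by splitting each frame into interleaved subframes so that no two adjacent pixels fire in the same instant. The 2 × 2 checkerboard schedule below is an assumed pattern for illustration, not the timing scheme of any actual prosthesis.

```python
def interleave_subframes(frame):
    """Split one frame into 4 subframes by pixel parity (row % 2,
    col % 2); presented in sequence, adjacent pixels never activate
    simultaneously, reducing electrical cross-talk."""
    h, w = len(frame), len(frame[0])
    subframes = []
    for pr in (0, 1):
        for pc in (0, 1):
            sub = [[frame[r][c] if (r % 2, c % 2) == (pr, pc) else 0
                    for c in range(w)] for r in range(h)]
            subframes.append(sub)
    return subframes
```

Summed over a frame period, the four subframes reconstruct the original stimulus pattern.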
Eye tracking also enables remapping of retinal images to properly account for the radial spread of cells near the fovea (Asher et al., 2007) in restoration of central vision, as illustrated in Figure 7.10. Similarly, it can be used for dynamic magnification of the parts of the image corresponding to the PRL, creating the effect of a magnifying glass that follows the eye's direction of gaze, as illustrated in Figure 7.11. 
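The "magnifying glass following the gaze" can be sketched as a crop-and-zoom around the tracked fixation point. Window size and zoom factor are illustrative parameters of this sketch.

```python
def gaze_magnify(img, gaze_r, gaze_c, half=1, factor=2):
    """Crop a (2*half+1)-pixel window around the tracked gaze point
    (gaze_r, gaze_c), clipped to the image, and zoom it by an integer
    factor with nearest-neighbor scaling."""
    h, w = len(img), len(img[0])
    r0, r1 = max(0, gaze_r - half), min(h, gaze_r + half + 1)
    c0, c1 = max(0, gaze_c - half), min(w, gaze_c + half + 1)
    window = [row[c0:c1] for row in img[r0:r1]]
    out = []
    for row in window:
        wide = [p for p in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out
```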
Figure 7.11.
 
Image captured by the camera, processed for magnification, edge enhancement and contrast enhancement and projected from the video goggles onto the parafoveal region of the retina. Additional smart processing for dynamic magnification in the fovea is enabled by eye tracking.
Recommendations for Future Research
  • 1. Further advancement of semitransparent video goggles with low weight and ergonomic design should minimize the social awkwardness of electronic eyewear. Higher resolution (beyond extended video graphics array [XVGA]), a wider visual field (>40°), and high brightness and contrast should allow comfortable viewing of the displayed information overlaid on the natural scene;
  • 2. Advanced image processing, and especially image recognition, is another very promising direction for future research. This includes text recognition with conversion into audio or into a magnified and sharpened image, face recognition, and simplification of object representation by retention of essential features and removal of less important details;
  • 3. Advancements in the field of three-dimensional cameras will help encode depth information and provide additional warnings about obstacles. The integration of maps (internal and external) with GPS, gyroscopes, and other guidance devices will help improve orientation on streets and inside buildings;
  • 4. Progress in eye-tracking technology, and its miniaturization, will allow comfortable integration with video goggles, which will, in turn, enable advanced image processing related to the direction of gaze; and
  • 5. Similar algorithms for image simplification and enhanced sparsity should help with optical approaches to restoration of sight. A better understanding of the retinal and cortical pathways that process artificial vision will help optimize the algorithms of image presentation in these vision restoration approaches.
This chapter is part of the Restoring Vision to the Blind report by the Lasker/IRRF Initiative for Innovation in Vision Science. The full report, Restoring Vision to the Blind, including a complete list of contributors, is available in the Supplementary Material. 
References
Anon. (2012). Organization and Institution News: Visual impairment and blindness increase in over 40 population this past decade. Optometry and Vision Science 89 (8) 1239. Retrieved from http://journals.lww.com/optvissci/Fulltext/2012/08000/In_The_News_New_Products.22.aspx
Arditi A.& Rosenthal B. (1998). Developing an objective definition of visual impairment. In Proceedings in Vision ‘96: Proceedings of the international low vision conference (pp. 331– 334). Madrid, Spain: Medicare.
Arnoldussen A.& Fletcher D.C. (2012). Visual perception for the blind: The BrainPort Vision Device. Retinal Physician 9 32– 34.
Asher A. Segal W.A. Baccus S.A. Yaroslavsky L.P.& Palanker D.V. (2007). Image processing for a high-resolution optoelectronic retinal prosthesis. IEEE Transactions on Bio-Medical Engineering 54 993– 1004. [CrossRef] [PubMed]
Busskamp V. Duebel J. Balya D. Fradot M. Viney T.J. Siegert S.… Groner A.C. (2010). Genetic reactivation of cone photoreceptors restores visual responses in retinitis pigmentosa. Science 329 413– 417. [CrossRef] [PubMed]
Culham L.E. Chabra A.& Rubin G.S. (2009). Users' subjective evaluation of electronic vision enhancement systems. Ophthalmic & Physiological Optics: The Journal of the British College of Ophthalmic Opticians (Optometrists) 29 138– 149. PubMed PMID: 19236583. [CrossRef] [PubMed]
Culham L.E. Chabra A.& Rubin G.S. (2004). Clinical performance of electronic, head-mounted, low-vision devices. Ophthalmic & Physiological Optics: The Journal of the British College of Ophthalmic Opticians (Optometrists) 24 281– 290. PubMed PMID: 15228505. [CrossRef] [PubMed]
Lagali P.S. Balya D. Awatramani G.B. Munch T.A. Kim D.S. Busskamp V.… Cepko C.L. (2008). Light-activated channels targeted to ON bipolar cells restore visual function in retinal degeneration. Nature Neuroscience 11 667– 675. [CrossRef] [PubMed]
Lorach H. Goetz G. Mandel Y. Lei X. Kamins T.I. Mathieson K.… Huie P. (2014). Performance of photovoltaic arrays in-vivo and characteristics of prosthetic vision in animals with retinal degeneration [published online ahead of print September 26, 2014]. Vision Research. doi: 10.1016/j.visres.2014.09.007.
Mandel Y. Goetz G. Lavinsky D. Huie P. Mathieson K. Wang L.… Kamins T. (2013). Cortical responses elicited by photovoltaic subretinal prostheses exhibit similarities to visually evoked potentials. Nature Communications, 4 1980.
Mathieson K. Loudin J. Goetz G. Huie P. Wang L. Kamins T.I.… Galambos L. (2012). Photovoltaic retinal prosthesis with high pixel density. Nature Photonics 6 391– 397. [CrossRef] [PubMed]
Medicare Vision Rehabilitation Services Act of 2003 HR 1902 IH. 2003.
Polosukhina A. Litt J. Tochitsky I. Nemargut J. Sychev Y. De Kouchkovsky I.… Huang T. (2012). Photochemical restoration of visual responses in blind mice. Neuron 75 271– 282. [CrossRef] [PubMed]
Striem-Amit E.& Amedi A. (2014). Visual cortex extrastriate body-selective area activation in congenitally blind people “seeing” by using sounds. Current Biology 24 687– 692. [CrossRef] [PubMed]
Tochitsky I. Polosukhina A. Degtyar V.E. Gallerani N. Smith C.M. Friedman A.… Van Gelder R.N. (2014). Restoring visual function to blind mice with a photoswitch that exploits electrophysiological remodeling of retinal ganglion cells. Neuron 81 800– 813. [CrossRef] [PubMed]
Supplementary Material