Open Access
Articles  |   August 2020
Structured Laser Light Improves Tripping Hazard Recognition for People with Visual Impairments
Author Affiliations & Notes
  • Michael Stahl
    Schepens Eye Research Institute of Mass. Eye and Ear, Harvard Medical School, Boston, MA, USA
    Northeastern University, Department of Bioengineering, Boston, MA, USA
  • Eli Peli
    Schepens Eye Research Institute of Mass. Eye and Ear, Harvard Medical School, Boston, MA, USA
  • Correspondence: Eli Peli, Schepens Eye Research Institute of Massachusetts Eye and Ear, 20 Staniford Street, Boston, MA 02114, USA. e-mail: [email protected] 
Translational Vision Science & Technology August 2020, Vol.9, 6. doi:https://doi.org/10.1167/tvst.9.9.6
Abstract

Purpose: Using a geometrically derived model and a virtual curb simulator, we quantify the degree to which a wearable device that projects a laser line onto tripping hazards in a pedestrian's path improves visual recognition for people with visual impairments (VI). We confirm this with subjects' performance on computer simulations of low contrast curbs.

Methods: We derive geometric expressions quantifying the visual cue users perceive when a single laser line is projected from their hip onto a curb. We show how the efficacy of this cue changes with the angle of the laser line relative to the subject's walking trajectory. We confirm this result with data from three subjects with VI in a simulated curb recognition task in which subjects classified computer images as an “Ascending,” “Flat,” or “Descending” curb.

Results: The derived model predicts that human recognition performance depends strongly on the laser line angle and the subject data confirms this (r2 = 0.86, P < 0.001). The laser line cue improved subject accuracy from a chance level of 33% to 95% for a simulated, one-inch, low-contrast curb at a distance of five feet.

Conclusions: Recognition of curbs in low light can be improved by augmenting the scene with a single laser line projected from a user's hip, if the angle of laser line is appropriately selected.

Translational Relevance: A majority of people with VI rely on their impaired residual vision for mobility, rather than a mobility aid, resulting in increased injury for this population. Enhancing residual vision could promote safety, increase independence, and reduce medical costs.

Introduction
Eighty-five percent of people classified as visually impaired (VI) have some residual vision.1 This group faces challenges such as stairs, curbs, and other tripping hazards when walking in public spaces. Competent use of the long cane would serve as a low-cost, reliable method of addressing many of these challenges, but a majority of people with functional vision decline to use the cane for several reasons.2,3 These reasons can be divided into two categories: social reasons and stigmas (not addressed here, but discussions can be found elsewhere4–6) and functional reasons. The three main functional reasons are: (1) the person feels that they have enough residual vision for independent mobility without the use of a cane,2,3 (2) the person is unable or unwilling to complete the training required to use the long cane effectively,5,7 and (3) the long cane occupies one of the user's free hands, which can be inconvenient or obtrusive. Finally, some do not use the long cane because they require a support cane; those with VI who need a support cane due to other mobility impairments are not considered in this study. Although the support cane is not a vision aid, it does offer some protection against tripping hazards. 
The rejection of the long cane as a mobility aid contributes to those with impaired vision experiencing more injuries from falls than the general public.8–11 This causes two major public health problems: (1) the medical costs of treating falls (an estimated 4% of fall-related injuries treated at hospitals result from impaired vision12), and (2) a fear of traveling in unfamiliar places that restricts independence, impacting professional and social well-being.13–15 Over the past several decades, electronic travel aids (ETAs) have been developed aiming to address these issues.16,17 These devices detect features in the environment using various sensing modalities (e.g., computer vision, sonar, or laser time-of-flight) and alert the user with tactile or auditory signals. 
Despite this major focus of research and development, people with impaired vision do not buy and use these ETAs.18 This continues to be the case even though advances in electronics have made these devices smaller, cheaper, and easier to maintain. We suggest with this project that the reason for the continued disuse of ETAs by those with residual functional vision is the limitations of sensory substitution. Most of the ETAs developed over the past four decades have used sensory substitution: transforming visual information into tactile or auditory signals. For the totally or functionally blind, this approach has been impactful, leading to major quality-of-life improvements. But sensory substitution has not been accepted by people with residual functional vision.19 The current project combines auditory sensory substitution with a novel visual sensory enhancement strategy that can aid the majority of people with VI who prefer using their residual vision for mobility. 
Enhancing residual vision to aid mobility is already common in vision rehabilitation. Bioptic telescopes20 help people see distant details and field expanding prisms21 help many become aware of lateral hazards outside their restricted visual field, such as other pedestrians on a collision course. Tripping hazards have not yet been adequately addressed with a sensory enhancement strategy. 
A Method to Augment Tripping Hazard Visibility
Here we introduce a sensory enhancement strategy that specifically makes tripping hazards more recognizable. The present study focuses on curbs, which are common in urban environments and are not addressed by many of the previously proposed ETAs. The present approach borrows many of its underlying concepts from structured light technology, particularly laser line stripers,22,23 which are commonly used in autonomous robots24 and vehicle sensors.25 
Target Population
Deficits in acuity, contrast sensitivity, and depth perception have been associated with increased fall risk in multiple studies.26 Other studies show that the most significant visual risk factor for falls is visual field loss,27 especially loss in the inferior field.28 Finally, poor dark adaptation can put people at increased risk when they move from a bright to dark environment.29 Some common visual impairments that result in these deficiencies are retinitis pigmentosa, diabetic retinopathy, and glaucoma.30 More generally, the approach in this study is intended to help people with impairments roughly equivalent to legal blindness (visual acuity of 20/200 or worse), who do not regularly use a long cane. 
Overview of Design
The prototype for the proposed electronic travel aid (Fig. 1) has three main components. An infrared laser worn on the left hip projects a pattern, invisible to humans, onto the user's walking surface. This pattern is captured by the camera on the right hip and analyzed to determine whether there is a tripping hazard. This analysis is conducted by the processor in the user's backpack using established methods.22,23 If a hazard is detected, a bright, visible laser line is projected onto it from the right hip (from point LP). This article derives a geometric model describing how subject performance on curb recognition (ascending, descending, or flat surface) depends on the laser line angle relative to the walking trajectory. We then use data from three subjects tasked with recognizing computer-generated images of curbs to confirm this result. 
Figure 1.
 
Overview of our electronic travel aid design. (a) The components that are worn on the user's hips: The LP (laser projector) houses the two green laser line projectors (only one laser line is shown in all three illustrations) and the IR camera used in detecting hazards. The component IR houses the two infrared laser line projectors used to mark the hazards for the IR camera. The central processor of the ETA is worn in a backpack. World coordinates are marked at the user's feet, whereas the cyclopean eye (CP) is a point centered between the user's eyes, at x = y = 0. Also in (a) is an illustration of how a green laser might appear when projected on flat ground. This would not occur under normal operation since there is no hazard but is helpful for comparison with the other two panels. The angle the laser line makes with the x-axis is denoted by ϴLaser. (b) Illustration of a green laser line projected onto a descending curb that appears as a set of two unconnected line segments. (c) A green laser line projected onto an ascending curb that appears as a set of three contiguous line segments.
Structured Light Sensor
Although it is not addressed in this article, it is of interest to note that the hazard detection process operates on similar principles to those described in Figures 1 and 2. If the detected infrared laser pattern is a straight, unbroken line, the system decides no hazard is present, and therefore the visible laser line is not activated. If, however, the detected pattern deviates from a straight line beyond a threshold, the system declares that a hazard is present and activates a visible laser to enhance recognition. An auditory alert also sounds to inform users to attend to the walking path. 
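For illustration, the decision rule described above can be sketched in a few lines of Python; the point coordinates and the deviation threshold below are hypothetical placeholders rather than values from our prototype or from the cited laser-striper methods.22,23

```python
import numpy as np

def hazard_present(points_xy, threshold=5.0):
    """Fit a straight line to detected IR laser points (image coordinates) and
    report a hazard if any point deviates from the fit by more than `threshold`
    pixels. The threshold here is a made-up placeholder value."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    slope, intercept = np.polyfit(x, y, 1)            # least-squares line fit
    residuals = np.abs(y - (slope * x + intercept))   # per-point deviation
    return residuals.max() > threshold

# A straight laser trace passes; a trace with an offset (e.g., a curb) triggers.
flat = np.column_stack([np.arange(20.0), 2.0 * np.arange(20.0) + 1.0])
broken = flat.copy()
broken[10:, 1] += 30.0                                # simulate a displaced segment
print(hazard_present(flat), hazard_present(broken))   # False True
```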
Figure 2.
 
Diagram of the perspective signals created by a laser line projected from a user's right hip for a seven-inch descending curb and observed at the cyclopean eye (all other parameter values are taken from Table 1). (a) Signals for two values of ϴLaser are shown (thin solid line for 75° and thick dotted line for 105°). Note that ϴLaser does not affect the locations of P2 and P3, so the gap between these points is the same for both cases. ϴLaser does affect the orientations of the line segments P1P2 and P3P4. (b) These same patterns rotated about P2 such that the line segment P1P2 is aligned with the vertical axis for both cases. Viewed this way, the gap between P2 and P3 can be decomposed into two components, the parallel perspective signal (Δ∥) and the perpendicular perspective signal (Δ⊥, too small to mark at this scale). The ϴLaser = 75° case has a much larger Δ⊥ component than ϴLaser = 105°, and we show below that this variable drives performance for curb recognition.
The Need to Maximize Detection Sensitivity
Urban environments are defined by streets and sidewalks. The transition between these is typically a single step or “curb” of four to eight inches. These and other miscellaneous ground-level changes or objects obstructing walking paths make tripping hazards a ubiquitous concern. The Federal Highway Administration (FHWA) defines a “tripping hazard” as any level change exceeding 13 mm (0.50 inch).31 This is a very small change, especially when one considers studies32–34 showing that people with severe visual impairment struggle to recognize even seven-inch curbs, especially descending curbs. The ETA we describe here aims to close as much of this gap as possible. 
We approach this by studying how a user's ability to recognize an ascending and descending curb is affected by our proposed laser-line enhancement. For this, we select a curb that lacks a change in reflectance or color at the edges, a worst-case yet common visual scenario. A laser line projected from a user's hip that spans such a curb would have four key points, as illustrated in Figures 1b and 1c. We define the start of the laser line segment, P1, as the closest point to the user. The point where the line intersects the curb closest to the user is defined as P2. Beyond the tripping hazard is another flat ground plane. The nearest laser line point visible to the user on this second ground plane is P3. Finally, the laser line segment terminates at P4, the point farthest away from the user. A descending curb (Fig. 1b) exhibits a discontinuity (or gap) between P2 and P3. An ascending curb (Fig. 1c), on the other hand, exhibits a connecting line segment between P2 and P3. These distinctive patterns allow the user to discriminate between these two common hazards and make safer foot placements. 
Three factors influence the magnitude of the perspective signal, Δ. The first is the baseline distance between the cyclopean eye (CP) and the laser projector position (LP) (see Fig. 1a). As this distance is increased, the perspective signal is increased (i.e., for a descending curb, the gap between P2 and P3 increases), allowing for smaller curbs to remain visible. Maximizing the baseline distance was, therefore, a design goal for our wearable device. Placing the projector on the hip created a baseline distance with a larger vertical component than lateral component making this system (laser projector and human observer) an unusual structured light design (most structured light systems have a larger lateral component). This study investigates the impact of this unusual vertical component on human curb recognition performance. 
The second factor that affects the magnitude of Δ is the height of the curb or “step-height” (SH), where SH < 0 denotes a descending curb and SH > 0 denotes an ascending curb. A larger absolute step-height, |SH|, causes a larger perspective signal. Subjects who can detect a curb at |SH| can also detect a curb at larger values of |SH| since Δ would be greater. 
The third factor that affects Δ⊥ is the angle of the laser line, ϴLaser. We define ϴLaser as the angle the laser line makes with the positive x-axis on the ground plane (Fig. 1a). Figure 2 shows how the perspective signal changes on a descending curb for two different values of ϴLaser: Figure 2a shows ϴLaser = 75° and ϴLaser = 105°. These simulations show that ϴLaser does not affect the locations of P2 and P3, and indeed the gap between these points is the same for any laser line angle. ϴLaser does, however, affect the orientations of the line segments P1P2 and P3P4, which can impact recognition performance. Below we derive theoretically, and then verify with human data, how ϴLaser impacts a user's ability to recognize ascending and descending curbs. 
Figure 2b shows these same patterns rotated such that the line segment P1P2 is aligned with the vertical axis in both cases. Viewed this way, the gap between P2 and P3 can be decomposed into two components: the parallel perspective signal (Δ∥), which is aligned with the vertical axis, and the perpendicular perspective signal (Δ⊥), which is aligned with the horizontal axis. We show that Δ⊥ drives performance in recognizing both ascending and descending curbs, and therefore a design that maximizes Δ⊥ maximizes a user's sensitivity for detecting tripping hazards. 
Figure 2 shows that Δ⊥ is larger when ϴLaser equals 75° than when it equals 105°, and Figure 3 illustrates why. The user's point-of-view (CP), the laser projector position (LP), and a “target point” in the user's path (e.g., P2) form the epipolar plane (depicted in transparent gray) for this unique structured light system. When the laser line coincides with this epipolar plane, patterns similar to the ϴLaser = 105° case in Figure 2 are observed (that is, Δ⊥ is small), whereas when the laser line runs perpendicular to this plane, patterns similar to the ϴLaser = 75° case in Figure 2 are observed (Δ⊥ is large). The epipolar plane is a critical parameter for this wearable aid. The user's cyclopean eye location (CP) and their immediate walking path (P2) are fixed. Thus the only practical way we can affect the epipolar plane is by changing the location of the laser projector (LP). The hip location illustrated in Figures 1a and 3 is our selection because it satisfies the following criteria: 
  • 1. Less Conspicuous: Compared with other sites on the body, such as the head, shoulder, or chest, the hip has the least impact on daily activity and social interaction.
  • 2. Stability: Wearing the laser projector on the hip places it on a fairly stable section of the body during walking.
  • 3. Adequate Baseline Distance: The magnitude of the perspective signal increases as the baseline distance between CP and LP increases. In an early exploration of this device, we found that users were unable to detect the break between P2 and P3 for a descending curb if the baseline distance was small (i.e., green laser projecting from the shoulder instead of the hip). The hip location, by virtue of its larger baseline distance from CP, allows a user to recognize smaller curbs.
Figure 3.
 
The epipolar plane (shown in transparent gray) in this structured light system is the plane that contains the cyclopean vision sensor (CP), laser projector (LP), and a target (chosen to be P2, the intersection of the laser line with leading edge of the curb).
We derive a geometric model of curb recognition performance as a function of ϴLaser. The model shows that projecting the laser line along, or close to, the epipolar plane may substantially reduce the efficacy of this technique. 
A Geometric Model for Perpendicular Perspective Signal (Δ⊥)
Here we define the key parameters of our model and derive the formula describing how Δ⊥ depends on them. Full formal derivations can be found in the Appendices. 
The laser line pattern has the form of a line segment between P1 and P2 on the user's ground plane, a distinctive pattern between P2 and P3 that depends on the nature of the tripping hazard, and a line segment between P3 and P4 on the ground plane beyond the hazard. The results here apply only for cases where the line segments P1P2 and P3P4 are long enough to be visible to the user, so we assume the hazard does not abut P1 or P4. 
We set the point-of-view of the user as the “cyclopean eye position” (CP), a point midway between the user's two eyes. Specifically:  
\begin{equation}C_P^W = {\left[ {\begin{array}{*{20}{c}} {{x_E},}&{{y_E},}&{{z_E}} \end{array}} \right]^T} = {\left[ {\begin{array}{*{20}{c}} {0,}&{0,}&{{z_E}} \end{array}} \right]^T}\end{equation}
(1)
where xE = yE = 0, because we have set the origin of World Coordinates (denoted with a superscript W) to be between the user's feet, the x-axis going to the user's right, and the y-axis pointing along the user's path as shown in Figure 1a. The position of the laser projector, LP, on the user's hip is related to CP by a baseline distance described in spherical coordinates as:  
\begin{equation}L_P^W = C_P^W + r\left[ {\begin{array}{@{}*{1}{c}@{}} {\cos ({\theta _{ELEV}})\cos ({\theta _{AZ}})}\\ {\cos ({\theta _{ELEV}})\sin ({\theta _{AZ}})}\\ {\sin ({\theta _{ELEV}})} \end{array}} \right]\end{equation}
(2)
where r is the baseline distance between CP and LP, ϴELEV is the angle the vector CPLP makes with the xy-plane, and ϴAZ is the angle this vector makes with the xz-plane (see Appendix A: Derivation of World Coordinates for an illustration). In the analysis below, we will assume ϴAZ = 0, putting both the laser projector and the cyclopean eye position on the xz-plane. This setting simplifies the expressions we shall derive. Simulation results in the Discussion show that the derived results hold for small, non-zero values of ϴAZ
Next, we define the nearest point where the laser line intersects the tripping hazard (P2) as located along the y-axis at a distance of yT (“T” denotes “target”) from the user. Specifically:  
\begin{equation}P_2^W = {\left[ {\begin{array}{*{20}{c}} {0,}&{{y_T},}&0 \end{array}} \right]^T}\end{equation}
(3)
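The coordinate definitions of Equations (1) to (3) translate directly into code. The following Python sketch constructs CP, LP, and P2 from the model parameters; the numeric values in the example are illustrative stand-ins, not the Table 1 values.

```python
import numpy as np

def world_points(z_E, r, theta_elev, theta_az, y_T):
    """C_P, L_P, and P2 in world coordinates (Equations 1-3); angles in radians."""
    C_P = np.array([0.0, 0.0, z_E])                                  # Equation (1)
    L_P = C_P + r * np.array([np.cos(theta_elev) * np.cos(theta_az),
                              np.cos(theta_elev) * np.sin(theta_az),
                              np.sin(theta_elev)])                   # Equation (2)
    P2 = np.array([0.0, y_T, 0.0])                                   # Equation (3)
    return C_P, L_P, P2

# Illustrative values in meters and radians (theta_elev is negative because the
# hip-worn projector sits below the cyclopean eye); these are not the Table 1 values.
C_P, L_P, P2 = world_points(z_E=1.7, r=0.72, theta_elev=np.deg2rad(-78),
                            theta_az=0.0, y_T=1.5)
```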
 
Deriving an expression for Δ⊥ using these parameters requires four steps, all of which are detailed in the Appendices: 
  • 1. Derive expressions for P1, P3, and P4 in World Coordinates (Appendix A).
  • 2. Convert these world coordinate expressions into a coordinate system centered at the user's point-of-view (Appendix B).
  • 3. Convert these to Image Coordinates using a perspective transformation (Appendix C).
  • 4. Rotate the image to align the line segment P1P2 with the y-axis as illustrated in Figure 2b (Appendix C).
These geometric derivations in the appendices result in an expression for the perpendicular perspective signal given in Equation (4).  
\begin{eqnarray} {\Delta _ \bot } = - \frac{{{S_H}f\left( {y_T^2 + z_E^2} \right)\left( {{z_E}r\sin {\theta _{Laser}}\cos {\theta _{AZ}}\cos {\theta _{ELEV}} - {y_T}r\cos {\theta _{Laser}}\sin {\theta _{ELEV}}} \right)}}{{{y_T}\left( {\left( {{z_E} + r\sin {\theta _{ELEV}}} \right)\left( {y_T^2 + z_E^2 - {S_H}{z_E} + r\sin {\theta _{ELEV}}} \right) + \Sigma } \right)\sqrt {y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2} }}\end{eqnarray}
(4)
where \(\Sigma = \left\{ {\begin{array}{@{}*{1}{c}@{}} { - {S_H}y_T^2}\\ 0 \end{array}}\right. \begin{array}{@{}*{1}{c}@{}} \textit{for descending curb}\\ \textit{for ascending curb} \end{array}\)
In Equation (4), f is the effective focal length of the user's eye (in simulations we assume f to be 17 mm). Note that Δ⊥ correctly reduces to zero as the step-height (SH) or the baseline distance (r) goes to zero. Equation (4) also shows that Δ⊥ depends strongly on ϴLaser. We would therefore expect human performance to exhibit a corresponding dependence on ϴLaser. 
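For readers who wish to explore Equation (4) numerically, the following Python sketch transcribes the expression as printed and sweeps ϴLaser over the range used in Phase 2. The parameter values are illustrative placeholders rather than the Table 1 values.

```python
import numpy as np

def delta_perp(S_H, y_T, z_E, r, theta_laser, theta_elev, theta_az=0.0, f=0.017):
    """Perpendicular perspective signal, transcribed from Equation (4) as printed.
    S_H < 0 for a descending curb and S_H > 0 for an ascending curb; lengths in
    meters, angles in radians, and f = 17 mm effective focal length."""
    sigma = -S_H * y_T**2 if S_H < 0 else 0.0        # the two cases of Sigma
    num = (S_H * f * (y_T**2 + z_E**2)
           * (z_E * r * np.sin(theta_laser) * np.cos(theta_az) * np.cos(theta_elev)
              - y_T * r * np.cos(theta_laser) * np.sin(theta_elev)))
    den = (y_T * ((z_E + r * np.sin(theta_elev))
                  * (y_T**2 + z_E**2 - S_H * z_E + r * np.sin(theta_elev)) + sigma)
           * np.sqrt(y_T**2 * np.cos(theta_laser)**2 + z_E**2))
    return -num / den

# Sweep the laser line angle (20-160 deg, as in Phase 2) for a one-inch
# descending curb, using illustrative parameter values.
angles = np.deg2rad(np.arange(20, 161, 10))
signal = [delta_perp(S_H=-0.0254, y_T=1.5, z_E=1.7, r=0.72,
                     theta_laser=a, theta_elev=np.deg2rad(-78)) for a in angles]
```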
Methodology
Equation (4) predicts that the derived parameter Δ⊥ is strongly dependent on laser line angle. To confirm this result, we created a simulated environment using MATLAB (MathWorks, Natick, MA, USA) in which subjects viewed computer-generated patterns of a simulated laser line striper on a curb. The on-screen simulation was calibrated such that the laser line appeared to project from the subject's right hip onto a target located approximately five feet in front of the subject. Depending on the trial, the target was an ascending curb, a descending curb, or a flat surface. The simulated values for subject eye-height, waist-width, and visual system focal length are shown in Table 1. 
Table 1.
 
Parameter Values Assumed for Simulations Approximately Corresponding to a Person of Six-Foot Height (Used for Calculations When Not Otherwise Specified)
Photographs
To match the simulated curb textures to a real public space with many low-contrast tripping hazards, photographs of low-contrast cement curbs at Boston's City Hall Plaza were acquired as a guide for generating the simulated laser line scenes. A 500-mW green laser (532 nm wavelength) was passed through a Powell35 laser line generator lens (Edmund Optics, Barrington, NJ, USA) with a 60° fan angle to project a laser line on the curb. A ruler was placed in the scene to facilitate camera calibration, planar rectification, and scaling. These images were used to create models of laser lines on ascending and descending concrete curbs of variable step-height. The step-height, SH, in these simulations was set by the experimenter. Examples of these simulated images are shown in Figure 4. 
Figure 4.
 
Examples of stimuli presented to subjects in a 3AFC task for the conditions of ϴLaser = 60° and |SH| = six inches: (a) a descending curb, (b) a flat surface, and (c) an ascending curb. For illustration, the background is removed for the left half of each frame to allow the reader to see the contour of the simulated surface. In the experiment, however, background noise that matched the mean and variance of the walking surface was added, as shown on the right side of each frame, so that only the laser cue (and not the contour cues) could be used by the subject for the judgment. The transition between platform texture and background was invisible to the three subjects with VI.
Three subjects participated in this IRB-approved study. Visual acuity (VA) was tested using the ETDRS 2000 Letter Chart at 4 meters. Visual fields (VF) were measured with Goldmann perimetry (isopter V4e). The first subject (S1, retinopathy of prematurity, age 31) had VA of 20/200 and VF extent of 90° horizontal and 70° vertical in the only functioning eye, and the second subject (S2, optic nerve atrophy, age 37) had VA of 20/500 and VF extent of 35° horizontal and 35° vertical in the only functioning eye. The third subject (S3, normal vision, age 36) wore cataract-simulating glasses,36,37 resulting in acuity of 20/40 with normal VF extents. 
Subjects were positioned using a chin rest 12 inches from the monitor, and the angular size of the image was calibrated such that the virtual curb appeared to be five feet away from the subject. The laser line patterns for these stimuli had an approximate visual angle of 25°, which allowed all three subjects to view them without scanning. Subjects were asked to classify the virtual images as either a (1) descending curb, (2) flat surface, or (3) ascending curb. Subjects took as much time as they needed to respond. 
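The viewing-distance calibration can be illustrated with a short worked example: an object is drawn at the size that makes it subtend, at the 12-inch viewing distance, the same visual angle it would subtend at the simulated five-foot distance. The 30-inch surface width used below is a hypothetical value chosen only for the example.

```python
import math

def on_screen_size(object_size_in, simulated_dist_in, viewing_dist_in):
    """Size (inches) at which to draw an object so that, from `viewing_dist_in`,
    it subtends the same visual angle as the real object at `simulated_dist_in`."""
    visual_angle = 2 * math.atan(object_size_in / (2 * simulated_dist_in))
    return 2 * viewing_dist_in * math.tan(visual_angle / 2), math.degrees(visual_angle)

# Hypothetical 30-inch-wide patch of walking surface simulated at 5 ft (60 in),
# viewed on a monitor from 12 inches:
size_in, angle_deg = on_screen_size(30.0, 60.0, 12.0)
print(f"{size_in:.1f} in on screen, {angle_deg:.1f} deg visual angle")  # 6.0 in, 28.1 deg
```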
To prevent the subject from using intertrial cues, the image was shifted randomly (horizontally and vertically) by a small amount between trials. In this way, the subject was unable to use the on-screen location of the laser line on one trial to aid in the judgment for the next trial. 
The simulated lighting was “flat,” very similar to the “Rear Lighting” condition in similar studies.32–34 Thus there was no change in shading or reflectance between the three planes of the curb. This lighting condition was found to be the worst case in the previous studies.32–34 Without the laser line, there would be no way of visually distinguishing between an ascending curb, descending curb, or flat surface, as the surfaces had no differences in shading and were indistinguishable from the background for the VI subjects. Thus a subject's percent correct for a “No-Laser” condition was taken to be chance level (33% for this three-alternative task). Such conditions occur either in complete darkness or when the curb surfaces have no difference in color, texture, or lighting. This condition is similar to the many curbs in Boston's City Hall Plaza under overcast, early evening light. 
The goal was to find subject recognition performance as a function of both curb height (|SH|) and laser line angle (ϴLaser). To do this, we implemented the psychophysical method of constant stimuli. For example, for trials where |SH| = 2 inches and ϴLaser = 130°, the subject was presented 20 descending curbs of two inches, 20 flat surfaces, and 20 ascending curbs of two inches in random order, for a total of 60 trials. Each of these images had a simulated laser line at ϴLaser = 130°, and all 60 trials resulted in a single data point providing the subject's performance (percent correct recognition) for the condition (|SH|, ϴLaser) = (2 inches, 130°). 
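The trial structure of one such condition can be sketched as follows; the function name and random seed are illustrative, and only the 20-repetitions-per-category count is taken from the example above.

```python
import random

def make_trial_list(step_height_in, theta_laser_deg, reps=20, seed=0):
    """One randomized block for a single (|SH|, theta_Laser) condition:
    `reps` descending curbs, `reps` flat surfaces, and `reps` ascending curbs."""
    trials = ([("descending", -step_height_in, theta_laser_deg)] * reps
              + [("flat", 0.0, theta_laser_deg)] * reps
              + [("ascending", +step_height_in, theta_laser_deg)] * reps)
    random.Random(seed).shuffle(trials)
    return trials

block = make_trial_list(step_height_in=2.0, theta_laser_deg=130)
print(len(block))  # 60 trials for (|SH|, theta_Laser) = (2 inches, 130 deg)
```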
For each (|SH|, ϴLaser) condition, subject responses were tabulated in confusion matrices similar to previous studies,32–34 where correct responses fall along the diagonal cells of a matrix and incorrect responses fall in the off-diagonal cells (examples are shown in Table 2). Although various aspects of performance can be read from these tables, it is useful to compute a single metric that quantifies performance. Cohen's kappa (κ) statistic38 was used, which rewards responses along the diagonal of the confusion matrix (percent correct, pc) and subtracts chance performance (pe):  
\begin{equation}\kappa = 1 - \frac{{1 - {p_c}}}{{1 - {p_e}}}\end{equation}
(5)
 
Table 2.
 
Confusion Matrices for the ϴLaser Values Resulting in the Worst Performance for Each Subject
This metric equals 1 for perfect performance, when all entries are along the diagonal of the confusion matrix (pc = 1), and equals 0 when the percent correct equals chance performance (pc = pe). For a three-alternative forced-choice paradigm, pe = 1/3. 
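Computing Equation (5) from a confusion matrix is straightforward, as in this sketch; the example matrix is invented for illustration and does not reproduce any subject's data.

```python
import numpy as np

def cohens_kappa(confusion, p_e=1/3):
    """Equation (5): kappa = 1 - (1 - p_c) / (1 - p_e), where p_c is the
    proportion of responses on the diagonal and p_e is chance (1/3 for 3AFC)."""
    confusion = np.asarray(confusion, dtype=float)
    p_c = np.trace(confusion) / confusion.sum()
    return 1 - (1 - p_c) / (1 - p_e)

# Invented counts: rows = responses, columns = presented stimuli
# (Descending, Flat, Ascending), 20 presentations per column.
example = [[19, 1, 0],
           [1, 18, 6],
           [0, 1, 14]]
print(round(cohens_kappa(example), 2))  # 0.78 for this made-up matrix
```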
This study had two phases. In Phase 1, the subject's |SH| threshold was found for a ϴLaser angle substantially different from the epipolar plane angle shown in Figure 3. In Phase 2, the step-height was held constant at this value while ϴLaser was varied, causing the laser line to vary relative to the epipolar plane. For both phases, the subjects’ task was a three-alternative forced choice between “Descending Curb,” “Flat Path,” or “Ascending Curb.” Responses were tabulated in a confusion matrix, and the threshold for Phase 1 was defined as the condition (curb height) producing the lowest κ value among those greater than 0.9. 
Specifically, in Phase 1, the laser line angle was held constant at ϴLaser = 60° while |SH| was presented in the order |SH| = 4, 3, 2, 1, and 0.5 inches. One of these was selected as that subject's threshold curb height. In Phase 2, |SH| was held fixed at that subject's individual threshold and ϴLaser varied from 20° to 160° in steps of 10°. 
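The Phase 1 threshold rule (the smallest |SH| whose κ exceeds 0.9, equivalently the lowest passing κ when κ falls with decreasing |SH|) can be expressed compactly; the κ values in the example are invented.

```python
def phase1_threshold(kappa_by_height, criterion=0.9):
    """Smallest |SH| (inches) whose kappa exceeds the criterion."""
    passing = [h for h, k in kappa_by_height.items() if k > criterion]
    return min(passing) if passing else None

# Invented Phase 1 results for one subject: |SH| in inches -> kappa.
print(phase1_threshold({4: 0.98, 3: 0.97, 2: 0.95, 1: 0.62, 0.5: 0.30}))  # 2
```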
Results
Phase 1 Results
In Phase 1, subject responses were tabulated in confusion matrices, and the corresponding κ values are shown in Figure 5. The subjects’ |SH| thresholds were found to be: 
  • S1: |SH| = 2 inches
  • S2: |SH| = 2 inches
  • S3: |SH| = 1 inch
Figure 5.
 
Performance (Cohen's κ) as a function of step-height |SH| for the three subjects. The threshold for each subject was chosen to be the smallest |SH| with κ > 0.9. The horizontal dashed line marks the κ = 0.9 level.
These thresholds are substantially lower than the seven-inch curbs used in prior studies that measured curb recognition performance without laser line enhancement for subjects with moderate to severe visual impairment.32–34 
Phase 2 Results
Simulation Results
Examples of how Δ⊥ from Equation (4) varies with ϴLaser, using values of the other parameters that correspond to a person whose height is six feet (see Table 1), are shown in Figure 6a. Because the functions for the ascending and descending cases are so similar (in fact, their linear correlation is r2 = 1), we use the descending expression for Δ⊥ in subsequent analysis, as this is the least visible hazard.32–34 This assumption is revisited in the Discussion. 
Figure 6.
 
(a) Model predictions: magnitude of Δ⊥ on the retinal plane as a function of ϴLaser for ascending and descending curbs (Equation [4]) using the representative values from Table 1. These curves are nearly identical and are lowest near ϴLaser = 100°. (b) Performance of the three subjects as a function of laser line angle. (c) Mean of the three subjects from (b) compared to the model of performance found by fitting the results of Equation (4) using Equation (6) with the single constant set to 0.927. The Pearson linear correlation between the subjects’ mean and predicted performance is r2 = 0.86, P < 0.001.
Human Subject Results
|SH| was fixed at each subject's threshold and ϴLaser was varied. The measured human performance is shown in Figure 6b, and these data correlate with our derivation of Δ⊥ in Figure 6a (r2 for each subject: S1 = 0.76, S2 = 0.75, and S3 = 0.68). 
Because, for a three-alternative task like this, κ (Equation [5]) can range from −0.5 to 1 as Δ⊥ varies from low to high (with κ = 0 being chance-level performance and κ < 0 being worse than chance), we fit these data with a single-parameter sigmoidal function that captures the asymptotic behavior of κ:  
\begin{equation} {\textit{Fit To Cohen's}}\ \kappa = \frac{1} {{2/3 + \exp (c \cdot {\Delta _ \bot })}} - \frac{1}{2}\end{equation}
(6)
where c is a constant, found here to be 0.927. Figure 6c shows the mean κ for all three subjects as a function of ϴLaser along with the model fit from Equation (6). The Pearson correlation between measured human performance and the model fit as a function of laser line angle is r2 = 0.86 (P < 0.001). Both derived and observed κ agree on the location of minimum performance near 100°. They also reveal a wide range of angles where performance for these very small curbs is near perfect (>95% correct). 
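A minimal sketch of fitting the single constant c of Equation (6) with SciPy is shown below; the (Δ⊥, κ) pairs are placeholders standing in for the Figure 6 data, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def kappa_model(delta_perp, c):
    """Equation (6): a one-parameter sigmoid spanning kappa values in [-1/2, 1]."""
    return 1.0 / (2.0 / 3.0 + np.exp(c * delta_perp)) - 0.5

# Placeholder (delta_perp, kappa) pairs standing in for the Figure 6 data.
delta = np.array([-6.0, -4.0, -2.0, -0.5, -0.1, 0.0])
kappa = np.array([0.98, 0.95, 0.85, 0.40, 0.15, 0.05])
(c_fit,), _ = curve_fit(kappa_model, delta, kappa, p0=[1.0])
print(round(c_fit, 3))  # the single fitted constant c
```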
Although the κ statistic provides a good overall measure of performance, it is instructive to review the specific confusion matrices for the laser line conditions that cause low κ. Table 2 shows the confusion matrices for the three subjects on the laser line angle condition that caused their worst performances. The columns in each matrix are the presented stimuli, and the rows are the subject responses (each column in all matrices sums to 100%). Thus the first column shows the percentage of time a subject responded “Descending,” “Flat,” or “Ascending” when a descending curb image was presented. The diagonal cells in each matrix (in italics) are the percent correct responses while the off-diagonal cells show the pattern of confusions. 
Discussion
Effect of Epipolar Plane
As hypothesized, we found a strong dependence of recognition performance on ϴLaser, with a clear minimum near ϴLaser = 100°. Good performance (κ > 0.7) was achieved at most angles outside the range 70° < ϴLaser < 130°. The epipolar plane illustrated in Figure 3 forms a line where it intersects the ground plane; when the values used to generate the simulations (Table 1) are used to specify the parameters, this resolves to a line on the z = 0 plane that makes an angle of 101.3° with the x-axis in Figure 1a. This matches the observed minimum performance angle in Figure 6. 
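The epipolar-plane angle quoted above can be reproduced numerically by intersecting the plane through CP, LP, and P2 with the ground plane z = 0 and measuring the angle of the resulting line against the x-axis. The coordinates below are illustrative, so the printed angle approximates, but does not exactly equal, the 101.3° obtained with the Table 1 values.

```python
import numpy as np

def epipolar_ground_angle(C_P, L_P, P2):
    """Angle (degrees, in [0, 180)) between the x-axis and the line where
    the epipolar plane through C_P, L_P, and P2 meets the ground plane z = 0."""
    n = np.cross(L_P - C_P, P2 - C_P)            # normal of the epipolar plane
    direction = np.cross(n, [0.0, 0.0, 1.0])     # intersection-line direction
    return np.degrees(np.arctan2(direction[1], direction[0])) % 180.0

C_P = np.array([0.0, 0.0, 1.7])    # illustrative eye height (m), not Table 1
L_P = np.array([0.15, 0.0, 1.0])   # illustrative hip position on the x-z plane
P2 = np.array([0.0, 1.5, 0.0])     # target point on the walking path
print(round(epipolar_ground_angle(C_P, L_P, P2), 1))  # ~104 deg for these values
```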
Asymmetric Effects of Epipolar Plane
When the ϴLaser value was sufficiently different from the epipolar plane angle, subject performance was good (κ > 0.7), and there was no difference between the ascending and descending curb cases (both were roughly 100% correct). Table 2, however, shows an asymmetry between these cases when the ϴLaser value approaches the epipolar plane angle. All three subjects were better at recognizing descending curbs than ascending curbs. For subjects S1 and S3, confusion between “Ascending” and “Descending” was far less common than confusion between “Ascending” and “Flat” (S2 exhibited this as well, but the effect was much smaller). This asymmetry can be understood in terms of the epipolar geometry presented in this paper. When the laser line is approximately parallel with the epipolar plane, its perspective signal on an ascending curb is nearly a straight line. It is therefore easily confused with the straight-line pattern of a flat surface. 
Although the subjects performed better on the descending curb condition than the ascending curb condition, their performance at these angles was still diminished compared with the strong performance at other laser line angles (overall, they dropped from 95% correct to 71% correct for the descending curb condition). Figure 2 illustrates that the size of the discontinuity does not change with ϴLaser, so it is worth discussing why subjects’ performance does change. The reason can be understood by considering a class of visual stimuli known as Vernier (sometimes called Nonius) lines.39,40 When the gap in Figure 2 has a large Δ⊥ component, it more closely resembles this Vernier case than when the same-sized gap has a larger Δ∥ component. Human observers are orders of magnitude more sensitive to Vernier shifts because they can use the full length of the line segments P1P2 and P3P4 to aid in the judgment. These same studies of Vernier acuity have also demonstrated that this increased acuity holds even when noise is added to the image, suggesting that many classes of visual impairment would be able to use this high Vernier acuity despite the impairment. 
The Smallest Detectable Curbs
The goal of this project is to help people with low vision recognize small tripping hazards. When the laser line angle is off of the epipolar plane, hazards as small as two inches are detectable. Previous studies32–34 have shown that people with visual impairments struggle to recognize larger curbs (seven inches), so this study suggests that laser line augmentation can increase a person's sensitivity. Those previous studies used a physical setup rather than simulated images and demonstrated that their subjects used motion parallax to improve curb recognition performance. Because subjects in this study could not use this cue with the two-dimensional images, it can be argued that the simulated task used here was more difficult than a physical, real-world task, and recognition performance could improve when laser line augmentation is applied to real-world tripping hazards. Note that for this application, parallax would be induced by moving the eye (CP) relative to the laser projector (LP), as opposed to simply moving CP, as is typically the case. 
Selection of ϴLaser
The epipolar plane in our prototype is not fixed, because of head movement, body sway, and equipment shifts, so it is important to select a ϴLaser that lies within a wide region of good performance, as its actual value is likely to vary. Furthermore, we recognize that in practice a tripping hazard's primary edges will be at random orientations relative to the user's walking trajectory. Imagine, for instance, the user encountering a curb whose edges run parallel to our selected ϴLaser. In this extreme situation, the model described here with the points (P1, P2, P3, and P4) would not apply because the laser line would not span the curb. For this reason, we have designed our prototype with two mutually perpendicular laser lines, so that the system can activate the laser line that has the higher perspective signal. The hazard detection IR system will use a similar pair of IR laser lines and thus will know which of the two green lasers should be activated. We have selected ϴLaser = 60° and ϴLaser = 150° as the two laser line angles. Inspection of Figure 6 shows that these provide strong cues to users. In this way, no matter the orientation of the curb, at least one of the device's laser lines can effectively aid recognition. Note that the two lines are not activated simultaneously; the system selects the laser angle that is most informative to the user. Whichever angle is activated, the poor-performance confusions presented in Table 2 are avoided, and good recognition performance is expected. 
Assumptions and Limitations of this Study
Although we report results from only three subjects in this study, these data are strongly supported by geometric derivations that apply established computer vision concepts. The fact that the “sensor” is a human observer rather than a camera does not change the underlying signal sensitivity. The curves derived in Figure 6a clearly indicate which laser line angles will have poor recognition performance, and this curve does not depend on human performance. The human subject data, therefore, serve primarily to confirm that the mathematical analysis is free of errors. Taken together, this strongly suggests that the result of Figure 6 will generalize to other subjects and circumstances. 
Various assumptions were made to simplify the mathematical expressions. We argue that the strong dependence of subject curb recognition performance on ϴLaser will hold even if these assumptions are relaxed. First, in our derivation of Δ⊥, we assumed that subject fixation was on P2. Although this was used in the derivation of Equation (4), we did not instruct our subjects to fixate on P2, and indeed the image position was shifted randomly between trials to prevent such consistent fixation. Despite this, the model still predicts human performance, suggesting that this simplifying assumption was a reasonable one. 
Second, the assumption that ϴAZ = 0 served to cancel many terms, but in practice small, nonzero values of ϴAZ will occur. We can get a sense of how this affects the results by using the model derived in Equation (4) and simulating other small values of ϴAZ. Changing ϴAZ to −30° and +30° has little impact on the model prediction; the correlations with the ϴAZ = 0° case were r2 = 0.981 and r2 = 0.999, respectively, suggesting that ϴAZ = 0° was also a fair assumption to make. 
Finally, two of the subjects in this study had only one seeing eye, and so the location of the cyclopean eye used in Equation (4) should more accurately be positioned at the subject's seeing eye. This would produce three slightly different performance models for the three different subjects. We implemented this to determine whether it made an impact, placing the cyclopean eye 3.175 cm to the left for S1, 3.175 cm to the right for S2, and keeping it centered for S3. This had almost no impact on the model fit to human performance (out to three decimal places), suggesting that the simpler model with the centered cyclopean eye is sufficient for all three cases. 
Target Population and ETA Acceptance
The heterogeneous population of people with VI has different needs when it comes to negotiating tripping hazards. Those with decreased contrast sensitivity will benefit directly from the laser light illumination of low-contrast surfaces. Those with diminished acuity, depth perception, or both will benefit from the Vernier nature of the signal. Finally, visual field loss is a risk factor for falls because these patients are unable to consistently attend to their walking path. Such users would benefit from the audible alert that directs their attention downward when a hazard is detected. The laser light should help them locate the edge of the curb or other hazard by first locating the light and then following it to the visible edge. 
For each of these visual impairments, this approach is unlikely to work in bright daylight conditions where sunlight will wash out the laser light, greatly reducing its contrast. Thus this approach is recommended primarily for low light conditions. 
ETAs for people with VI have demonstrated numerous benefits in controlled experiments yet have failed to achieve wide acceptance.18 There is no evidence yet that the proposed device will make the impact that other devices have not, but there are two reasons for optimism. The first is that this type of sensory enhancement has not previously been available to people with VI. Head-mounted displays (HMDs) have provided forms of sensory enhancement/augmentation, but the necessity of goggles over the eyes presents additional social and visual challenges that our proposed device avoids. The second reason for optimism is that previous ETAs have not focused on the ground plane as this device does. The long cane, the most impactful mobility aid thus far, primarily senses the ground plane, yet ETAs have not, often stipulating that they were to be used only with a long cane. When walking, the primary focus of vision is on the ground plane.41,42 The proposed device has the potential to liberate a pedestrian's residual vision from this constant attention to the ground plane, freeing it to search for landmarks and other collision hazards. It is hoped that these innovations will increase acceptance of ETAs among people with VI. 
Conclusion
We proposed a technique that allows people with VI to use and augment their residual vision, particularly in low light, to avoid tripping hazards and therefore avoid falls. The Vernier nature of the laser pattern allows users with poor acuity to recognize smaller curbs than they otherwise could. Additionally, this study shows that although the epipolar plane has the potential to interfere with the efficacy of laser line enhancement, a pair of orthogonal laser lines avoids this pitfall. In all, these data suggest a promising concept to increase safety for people with visual impairments. 
Acknowledgments
The authors thank Claire Jeon for aiding in image acquisition and data collection. 
Supported by NIH Grants F31 EY023929 and core grant P30EY003790. MS has also been awarded funds from the Stiftelsen Promobilia Foundation (Stockholm, Sweden) and a grant of supplies from Edmund Optics (Barrington, NJ). 
Disclosure: M. Stahl, None; E. Peli, None 
References
1. World Health Organization. World Health Statistics. Geneva, Switzerland: World Health Organization; 2012: 27.
2. Corn AL, Erin JN. Foundations of Low Vision: Clinical and Functional Perspectives. New York: AFB Press; 2010.
3. JVIB News Service. Demographics update: use of white “long” canes. J Vis Impair Blind. 1994; 88: 4–5.
4. Worth N. Visual impairment in the city: young people's social strategies for independent mobility. Urban Stud. 2013; 50: 574–586.
5. Hersh M. Cane use and late onset visual impairment. Technol Disabil. 2015; 27: 103–116.
6. Deshen S, Deshen H. On social aspects of the usage of guide-dogs and long-canes. Sociol Rev. 1989; 39: 89–103.
7. Thaler L. Echolocation may have real-life advantages for blind people: an analysis of survey data. Front Physiol. 2013; 4: 1–8.
8. Ivers RQ, Cumming RG, Mitchell P, Attebo K. Visual impairment and falls in older adults: the Blue Mountains Eye Study. J Am Geriatr Soc. 1998; 46: 58–64.
9. Klein BE, Klein R, Lee KE, Cruickshanks KJ. Performance-based and self-assessed measures of visual function as related to history of falls, hip fractures, and measured gait time. The Beaver Dam Eye Study. Ophthalmology. 1998; 105: 160–164.
10. Lord SR. Visual risk factors for falls in older people. Age Ageing. 2006; 35: 42–45.
11. de Boer MR, et al. Different aspects of visual impairment as risk factors for falls and fractures in older men and women. J Bone Miner Res. 2004; 19: 1539–1547.
12. Scuffham PA, Legood R, Wilson ECF, Kennedy-Martin T. The incidence and cost of injurious falls associated with visual impairment in the UK. Vis Impair Res. 2009; 4: 1–14.
13. Gray P, Todd J. Mobility and Reading Habits of the Blind. Richmond, UK: Her Majesty's Stationery Office; 1968: 386.
14. Clark-Carter D. The visually handicapped in the city of Nottingham 1981: a survey of their disabilities, mobility, employment and daily living skills. Blind Mobility Research Unit, Department of Psychology, University of Nottingham; 1983.
15. Marston J, Golledge R. The hidden demand for participation in activities and travel by persons who are visually impaired. J Vis Impair Blind. 2003; 97: 475–488.
16. Dakopoulos D, Bourbakis NG. Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Trans Syst Man Cybern C Appl Rev. 2010; 40: 25–35.
17. Roentgen U, Gelderblom G. Inventory of electronic mobility aids for persons with visual impairments: a literature review. J Vis Impair Blind. 2008; 102: 702–724.
18. Gori M, Cappagli G, Tonelli A, Baud-Bovy G, Finocchietti S. Devices for visually impaired people: high technological devices with low user acceptance and no adaptability for children. Neurosci Biobehav Rev. 2016; 69: 79–88.
19. Elli GV, Benetti S, Collignon O. Is there a future for sensory substitution outside academic laboratories? Multisens Res. 2014; 27: 271–291.
20. Doherty AL, Peli E, Luo G. Hazard detection with a monocular bioptic telescope. Ophthalmic Physiol Opt. 2015; 35: 530–539.
21. Peli E. Field expansion for homonymous hemianopia by optically induced peripheral exotropia. Optom Vis Sci. 2000; 77: 453–464.
22. Mertz C, Kozar J, Miller JR, Thorpe C. Eye-safe laser line striper for outside use. IEEE. 2002; 2: 507–512.
23. Ilstrup D, Manduchi R. Active triangulation in the outdoors: a photometric analysis. Fifth Int Symp 3D. 2010.
24. Zhang X, Rad AB, Wong YK. Sensor fusion of monocular cameras and laser rangefinders for line-based simultaneous localization and mapping (SLAM) tasks in autonomous mobile robots. Sensors. 2012; 12: 429–452.
25. Aufrère R, et al. Perception for collision avoidance and autonomous driving. Mechatronics. 2003; 13: 1149–1161.
26. Harwood RH. Visual problems and falls. Age Ageing. 2001; 30(Suppl 4): 13–18.
27. Freeman EE, Muñoz B, Rubin G, West SK. Visual field loss increases the risk of falls in older adults: the Salisbury Eye Evaluation. Invest Ophthalmol Vis Sci. 2007; 48: 4445–4450.
28. Black AA, Wood JM, Lovie-Kitchin JE. Inferior field loss increases rate of falls in older adults with glaucoma. Optom Vis Sci. 2011; 88: 1275–1282.
29. McMurdo MET, Gaskell A. Dark adaptation and falls in the elderly. Gerontology. 1991; 37: 221–224.
30. Wiener W, Welsh R, Blasch B. Foundations of Orientation and Mobility. Arlington County, VA: American Foundation for the Blind; 2010.
31. Rouphail NM, Allen DP. Recommended Procedures for Chapter 13, “Pedestrians,” of the Highway Capacity Manual. Report No. FHWA-RD-98-107. 1998: 1–56.
32. Bochsler TM, Legge GE, Gage R, Kallie CS. Recognition of ramps and steps by people with low vision. Invest Ophthalmol Vis Sci. 2013; 54: 288–294.
33. Legge GE, Yu D, Kallie CS, Bochsler TM, Gage R. Visual accessibility of ramps and steps. J Vis. 2010; 10: 1–19.
34. Bochsler TM, Legge GE, Kallie CS, Gage R. Seeing steps and ramps with simulated low acuity: impact of texture and locomotion. Optom Vis Sci. 2012; 89: E1299–E1307.
35. Powell I. Design of a laser beam line expander. Appl Opt. 1987; 26: 3705–3709.
36. Perez GM, Archer SM, Artal P. Optical characterization of Bangerter foils. Invest Ophthalmol Vis Sci. 2010; 51: 609–613.
37. Odell NV, Leske DA, Hatt SR, Adams WE, Holmes JM. The effect of Bangerter filters on optotype acuity, Vernier acuity, and contrast sensitivity. J AAPOS. 2008; 12: 555–559.
38. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005; 37: 360–363.
39. Westheimer G. Hyperacuity. In: Encyclopedia of Neuroscience. 2010; 5: 45–50.
40. Essock EA, Williams RA, Enoch JM, Raphael S. The effects of image degradation by cataract on Vernier acuity. Invest Ophthalmol Vis Sci. 1984; 25: 1043–1050.
41. Miyasike-daSilva V, Allard F, McIlroy WE. Where do we look when we walk on stairs? Gaze behaviour on stairs, transitions, and handrails. Exp Brain Res. 2011; 209: 73–83.
42. Marigold DS, Patla AE. Visual information from the lower visual field is important for walking across multi-surface terrain. Exp Brain Res. 2008; 188: 23–31.
43. Szeliski R. Computer Vision: Algorithms and Applications. Berlin: Springer Science & Business Media; 2011.
Appendix A: Derivation of World Coordinates
Here we derive expressions for the three main world coordinate points (denoted with superscript W) of Figure 1 in terms of defined parameters, namely cyclopean eye position (\(C_P^W\)), laser projector position (\(L_P^W\)), the distance to the target (yT), and the laser line angle with respect to the positive x-axis, ϴLaser. 
In Equation (1), we defined cyclopean eye position to be:  
\begin{eqnarray}C_P^W = {\left[ {\begin{array}{*{20}{c}} {{x_E},}&{{y_E},}&{{z_E}} \end{array}} \right]^T} = {\left[ {\begin{array}{*{20}{c}} {0,}&{0,}&{{z_E}} \end{array}} \right]^T},\end{eqnarray}
(A1)
where we note that for our chosen world coordinate system in Figure 1 xE = yE = 0. The laser projector position is defined in Equation (2) as:  
\begin{eqnarray}L_P^W = C_P^W + r\left[ {\begin{array}{@{}*{1}{c}@{}} {\cos ({\theta _{ELEV}})\cos ({\theta _{AZ}})}\\ {\cos ({\theta _{ELEV}})\sin ({\theta _{AZ}})}\\ {\sin ({\theta _{ELEV}})} \end{array}} \right],\end{eqnarray}
(A2)
where r, ϴAZ, and ϴELEV represent spherical coordinates as illustrated in Figure A.1
Figure A.1.
 
Spherical coordinate relationship between the cyclopean eye (CP) and the laser projector position (LP). As demonstrated in the Discussion, we can safely assume ϴAZ = 0, which keeps the y-component of \(L_P^W\) equal to zero.
The point \(P_2^W\) is defined as being on the z = 0 plane along the y-axis at a distance of yT from the user:  
\begin{eqnarray}P_2^W = {\left( {\begin{array}{*{20}{c}} {0,}&{{y_T},}&0 \end{array}} \right)^T}.\end{eqnarray}
(A3)
 
Now we derive the locations of the other points of the projected laser line, \(P_1^W\), \(P_3^W\), and \(P_4^W\), by first considering where these points would appear if no curb were present (i.e., a flat path such as Fig. 1a). In this special case, \(P_2^W\) would be the midpoint of the laser line. The two endpoints, \(P_1^W\) and \(P_4^W\), would each be located at a distance of 0.5·LLaser from \(P_2^W\) in a direction defined by the laser line angle, ϴLaser. We set \(P_2^W\) as the midpoint of this line because it is convenient, but it is not necessary; it matters only that the resulting line segments be visible to the user. Because \(P_1^W\) lies on the z = 0 plane whether or not a curb is present, it is defined in all cases by:  
\begin{eqnarray}P_1^W = P_2^W - \frac{1}{2}{L_{Laser}}\left[ {\begin{array}{@{}*{1}{c}@{}} {\cos {\theta _{Laser}}}\\ {\sin {\theta _{Laser}}}\\ 0 \end{array}} \right].\end{eqnarray}
(A4)
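The expressions above are straightforward to evaluate numerically. The following is a minimal Python/NumPy sketch of Equations (A1) through (A4); the specific parameter values are illustrative assumptions for this sketch only, not necessarily those listed in Table 1.

import numpy as np

# Illustrative parameter values (assumptions for this sketch only).
z_E = 1.55                        # cyclopean eye height above the ground (m)
r = 0.80                          # eye-to-hip (laser projector) distance (m)
theta_elev = np.deg2rad(-70.0)    # elevation of L_P relative to C_P (below horizontal)
theta_az = 0.0                    # azimuth; the Discussion argues this is approximately 0
y_T = 1.52                        # distance to the target along y (m)
theta_laser = np.deg2rad(75.0)    # laser line angle with respect to the +x axis
L_laser = 0.60                    # length of the laser line on flat ground (m)

# Equation (A1): cyclopean eye at x = y = 0.
C_P = np.array([0.0, 0.0, z_E])

# Equation (A2): laser projector position, offset from the eye in spherical coordinates.
L_P = C_P + r * np.array([np.cos(theta_elev) * np.cos(theta_az),
                          np.cos(theta_elev) * np.sin(theta_az),
                          np.sin(theta_elev)])

# Equation (A3): P2 lies on the ground plane, straight ahead at distance y_T.
P2 = np.array([0.0, y_T, 0.0])

# Equation (A4): P1, the near endpoint of the laser line, half a line-length from P2.
P1 = P2 - 0.5 * L_laser * np.array([np.cos(theta_laser),
                                    np.sin(theta_laser),
                                    0.0])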
 
The point \(P_3^W\) is derived differently for descending and ascending curbs. 
Descending Case
First we note that a line in 3D space is described by two points, for example Q0 and Q1. We denote this line as \(\overline {{Q_0}{Q_1}} \). Any point along this line, Q2, can be found by:  
\begin{eqnarray}{Q_2} = {Q_0} + \lambda \left( {{Q_1} - {Q_0}} \right),\end{eqnarray}
(A7)
where λ is a real number. If λ = 0, then Q2 = Q0. If λ = 1, then Q2 = Q1. By varying λ, therefore, Q2 can be made to lie anywhere along the \(\overline {{Q_0}{Q_1}} \) line. In the case of a descending curb, we observe that \(P_3^W\) will lie along the 3D line described by the laser projector location, LP, and \(P_2^W\) as follows:  
\begin{eqnarray}P_3^W = {L_P} + \lambda \left( {P_2^W - {L_P}} \right).\end{eqnarray}
(A8)
 
The locations of LP and \(P_2^W\) are known, and because we know by definition that the z-component of \(P_3^W\) is \({S_H}\) (the step height), we can derive an explicit expression for λ by solving the z-component equation for λ:  
\begin{eqnarray}\begin{array}{@{}l@{}} {S_H} = {z_P} + \lambda \left( {0 - {z_P}} \right)\\ \lambda = \frac{{{z_P} \,-\, {S_H}}}{{{z_P}}} = \frac{{z_C} \,+\, r\sin ({\theta _{ELEV}}) - {S_H}}{{z_C} \,+\, r\sin ({\theta _{ELEV}})}. \end{array}\end{eqnarray}
(A9)
 
After substituting this value for λ into Equation (A8), \(P_3^W\) becomes:  
\begin{eqnarray}P_3^W = {L_P} + \left( {\frac{{{z_P} - {S_H}}}{{{z_P}}}} \right)\left( {P_{{2_{}}}^W - {L_P}} \right).\end{eqnarray}
(A10)
 
Let |X, |Y, and |Z denote the x, y, and z components of a vector, respectively. Table A1 shows the derivation of each component of \(P_3^W\) for a descending curb. 
Table A1.
 
Derivation of the X, Y, and Z Components of \(P_3^W\)
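The descending-case computation reduces to evaluating a point on a line and can be sketched directly from Equations (A7) through (A10). The following Python/NumPy sketch reuses the variable names defined above; treating S_H as a signed height (negative for a surface below the ground plane) is an assumption of this sketch.

import numpy as np

def point_on_line(Q0, Q1, lam):
    # Equation (A7): parametric point on the line through Q0 and Q1.
    return Q0 + lam * (Q1 - Q0)

def p3_descending(L_P, P2, S_H):
    # Equations (A8)-(A10): choose lambda so that the z-component equals S_H.
    z_P = L_P[2]                  # height of the laser projector
    lam = (z_P - S_H) / z_P       # Equation (A9)
    return point_on_line(L_P, P2, lam)

# Example: P3 for a curb whose lower surface sits 0.178 m (about 7 in) below z = 0.
# P3 = p3_descending(L_P, P2, S_H=-0.178)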
Ascending Case
The ascending case must be handled differently because the ascending \(P_3^W\) location does not lie on the 3D line \(\overline {{L_P}P_2^W} \), as it does in the descending case. First we define the plane containing the laser fan beam. The equation of a plane is given by:  
\begin{eqnarray}a \cdot x + b \cdot y + c \cdot z + d = 0.\end{eqnarray}
(A11)
 
When a plane is expressed in this way, its normal vector is given by:  
\begin{eqnarray}\overline n = \left( {\begin{array}{*{20}{c}} {a,}&{b,}&c \end{array}} \right).\end{eqnarray}
(A12)
 
We know that three points on the fan-beam plane are: LP, \(P_1^W\), and \(P_2^W\). We can derive \(\overline n \) by finding the cross product:  
\begin{eqnarray}\overline n = \left( {P_1^W - {L_P}} \right) \times \left( {P_2^W - {L_P}} \right).\end{eqnarray}
(A13)
 
Each parameter in the plane equation (a, b, c, and d) is solved for separately in Table A2. 
Table A2.
 
Derivation of Fan-Beam Plane
Once the laser fan-beam plane is known, solving for \(P_3^W\) is done by substituting the known y coordinate (yT) and z coordinate (SH) of \(P_3^W\) and solving for the only unknown coordinate, x: 
\begin{eqnarray*}a \cdot x + b \cdot y + c \cdot z + d = 0,\end{eqnarray*}
 
\begin{eqnarray}x = \frac{{ - b \cdot y - c \cdot z - d}}{a} = \frac{{ - b \cdot {y_T} - c \cdot {S_H} - d}}{a},\;\qquad\end{eqnarray}
(A16)
 
\begin{eqnarray*}x = \frac{{\frac{1}{2}{L_{Laser}}\cos {\theta _{Laser}}\left( {\left( {{z_C} + r\sin ({\theta _{ELEV}})} \right) \cdot {y_T} + \left( {{y_T} + r\tan ({\theta _{Laser}})\cos ({\theta _{ELEV}})} \right) \cdot {S_H} - {y_T}\left( {{z_C} + r\sin ({\theta _{ELEV}})} \right)} \right)}}{{\frac{1}{2}{L_{Laser}}\sin {\theta _{Laser}}\left( {{z_C} + r\sin ({\theta _{ELEV}})} \right)}},\end{eqnarray*}
\begin{eqnarray}x = \frac{{\cot {\theta _{Laser}}\left( {\left( {{z_C} + r\sin ({\theta _{ELEV}})} \right) \cdot {y_T} - {y_T}\left( {{z_C} + r\sin ({\theta _{ELEV}})} \right) + \left( {{y_T} + r\tan ({\theta _{Laser}})\cos ({\theta _{ELEV}})} \right) \cdot {S_H}} \right)}}{{{z_C} + r\sin ({\theta _{ELEV}})}},\end{eqnarray}
(A17)
 
\begin{eqnarray*}x = \frac{{\cot {\theta _{Laser}}{S_H}\left( {{y_T} + r\tan ({\theta _{Laser}})\cos ({\theta _{ELEV}})} \right)}}{{{z_C} + r\sin ({\theta _{ELEV}})}},\end{eqnarray*}
 
\begin{eqnarray}x = {S_H}\frac{{{y_T}\cot {\theta _{Laser}} + r\cos ({\theta _{ELEV}})}}{{{z_C} + r\sin ({\theta _{ELEV}})}}.\end{eqnarray}
(A18)
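The ascending-case computation can likewise be sketched numerically from Equations (A11) through (A16); the closed form of Equation (A18) is what results after substituting the expressions for LP, \(P_1^W\), and \(P_2^W\). The Python/NumPy sketch below again reuses the names defined in the earlier sketches.

import numpy as np

def p3_ascending(L_P, P1, P2, y_T, S_H):
    # Equation (A13): normal of the fan-beam plane from two in-plane vectors.
    n = np.cross(P1 - L_P, P2 - L_P)
    a, b, c = n
    # d follows from requiring L_P to satisfy a*x + b*y + c*z + d = 0 (Equation A11).
    d = -np.dot(n, L_P)
    # Equation (A16): substitute the known y (y_T) and z (S_H) of P3 and solve for x.
    x = (-b * y_T - c * S_H - d) / a
    return np.array([x, y_T, S_H])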
 
Appendix B: Derivation of World to Cyclopean Eye Transformation
To facilitate coordinate transformations, we express our coordinates as homogeneous coordinates as follows:  
\begin{eqnarray}\left[ {\begin{array}{@{}*{1}{c}@{}} x\\ y\\ z \end{array}} \right] \to \left[ {\begin{array}{@{}*{1}{c}@{}} x\\ y\\ z\\ 1 \end{array}} \right]\end{eqnarray}
(A19)
 
For a review on converting between equivalent coordinate representations, see Szeliski.43 Using this representation, coordinate transformations such as world-coordinates to cyclopean-eye-coordinates or the later rotation of image coordinates are accomplished by a matrix multiplication:  
\begin{eqnarray}\left[ {\begin{array}{@{}*{1}{c}@{}} {{x_2}}\\ {{y_2}}\\ {{z_2}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} {{m_{1,1}}}&{{m_{1,2}}}&{{m_{1,3}}}&{{m_{1,4}}}\\ {{m_{2,1}}}&{{m_{2,2}}}&{{m_{2,3}}}&{{m_{2,4}}}\\ {{m_{3,1}}}&{{m_{3,2}}}&{{m_{3,3}}}&{{m_{3,4}}}\\ 0&0&0&1 \end{array}} \right]\left[ {\begin{array}{@{}*{1}{c}@{}} {{x_1}}\\ {{y_1}}\\ {{z_1}}\\ 1 \end{array}} \right],\qquad\end{eqnarray}
(A20)
where [x1, y1, z1]T denotes the point location in the first coordinate system and [x2, y2, z2]T its corresponding point in the second coordinate system. Specifically, to convert from world coordinates (W) to cyclopean eye coordinates (C), we must identify four points in world coordinates that correspond to four points in cyclopean-eye coordinates. Using these points, we can compute the transformation matrix for all points. 
The first correspondence point will be the cyclopean eye position which we will define as the origin in eye coordinate space:  
\begin{eqnarray}C_P^W = \left[ {\begin{array}{*{20}{c}} {{x_E},}&{{y_E},}&{{z_E}} \end{array}} \right] \to O_{}^C = \left[ {\begin{array}{*{20}{c}} {0,}&{0,}&0 \end{array}} \right]\end{eqnarray}
(A21)
 
Next, we define the eye coordinate's positive z-axis by assuming the user is looking directly at \(P_2^W\) (this assumption is addressed in the Discussion). Mathematically, the gaze direction is the unit vector \(\overline {u_Z^W} = \frac{{P_2^W - C_P^W}}{{\| {P_2^W - C_P^W} \|}}\), where the \(\| {\overline v } \|\) operator computes the magnitude of vector \(\overline v \) ensuring that \(\overline {u_Z^W} \) is a unit vector. In this specific case:  
\begin{eqnarray}\begin{array}{@{}l@{}} \overline {u_Z^W} = \displaystyle\frac{{P_2^W - C_P^W}}{{\left\| {P_2^W - C_P^W} \right\|}} = \displaystyle\frac{{\left( {\begin{array}{*{20}{c}} {0,}&{{y_T},}&0 \end{array}} \right) - \left( {\begin{array}{*{20}{c}} {0,}&{0,}&{{z_E}} \end{array}} \right)}}{{\left\| {\left( {\begin{array}{*{20}{c}} {0,}&{{y_T},}&0 \end{array}} \right) - \left( {\begin{array}{*{20}{c}} {0,}&{0,}&{{z_E}} \end{array}} \right)} \right\|}}\\\\[-8pt] u_Z^W = \frac{{\left( {\begin{array}{*{20}{c}} {0,}&{{y_T},}&{ - {z_E}} \end{array}} \right)}}{{\sqrt {y_T^2 + z_E^2} }} \end{array},\qquad\end{eqnarray}
(A22)
 
Thus, we have a second correspondence point:  
\begin{eqnarray}\overline {u_z^W} = \left( {\begin{array}{*{20}{c}} {0,}&{0,}&1 \end{array}} \right) \to \overline {u_z^C} = \frac{{\left( {\begin{array}{*{20}{c}} {0,}&{{y_T},}&{ - {z_E}} \end{array}} \right)}}{{\sqrt {y_T^2 + z_E^2} }}.\end{eqnarray}
(A23)
 
The positive x-axis remains the same in both coordinates. That is:  
\begin{eqnarray}\overline {u_X^W} = \left( {\begin{array}{*{20}{c}} {1,}&{0,}&0 \end{array}} \right) \to \overline {u_X^C} = \left( {\begin{array}{*{20}{c}} {1,}&{0,}&0 \end{array}} \right).\end{eqnarray}
(A24)
 
And the positive y-axis unit vector is found using the cross product:  
\begin{eqnarray}\begin{array}{@{}l@{}} \overline {u_Y^W} = \overline {u_Z^W} \times \overline {u_X^W} = \displaystyle\frac{{\left( {\begin{array}{*{20}{c}} {0,}&{{y_T},}&{ - {z_E}} \end{array}} \right)}}{{\sqrt {y_T^2 \,+\, z_E^2} }} \times \left( {\begin{array}{*{20}{c}} {1,}&{0,}&0 \end{array}} \right)\\ \overline {u_Y^W} = \left( {\begin{array}{*{20}{c}} {0,}&{\displaystyle\frac{{{z_E}}}{{\sqrt {y_T^2 \,+\, z_E^2} }},}&{\displaystyle\frac{{{y_T}}}{{\sqrt {y_T^2 \,+\, z_E^2} }}} \end{array}} \right) \end{array}\end{eqnarray}
(A25)
 
The origin along with the three unit vectors in both world and cyclopean eye coordinates produce the four points we need (PW and PC, respectively) to compute the transition matrix from the world to cyclopean eye coordinates:  
\begin{eqnarray}{{\rm P}^W} = {\left( {\begin{array}{@{}*{2}{c}@{}{c}@{}} {C_P^W} &1\\ \\[-8pt] {C_P^W + \overline {u_X^W} } &1\\\\[-8pt] {C_P^W + \overline {u_Y^W} }&1\\\\[-8pt] {C_P^W + \overline {u_Z^W} }&1 \end{array}} \right)^T} \to {{\rm P}^C} = {\left( {\begin{array}{@{}*{4}{c}@{}{c}@{}{c}@{}{c}@{}} 0&0&0&1\\ 1&0&0&1\\ 0&1&0&1\\ 0&0&1&1 \end{array}} \right)^T}.\qquad\end{eqnarray}
(A26)
 
Once PW and PC are defined, the transition matrix, MWorld → Eye, is found by:  
\begin{eqnarray}\begin{array}{@{}l@{}} {M_{World\to Eye}} = {{\rm P}^C}inv( {{{\rm P}^W}})\\\\[-8pt] \left[ {\begin{array}{@{}*{4}{c}@{}} {\displaystyle\frac{{\sqrt {y_T^2 + z_C^2} }}{{{y_T}}}}&0&0&0\\\\[-8pt] 0&{\displaystyle\frac{{{z_C}}}{{{y_T}}}}&1&{ - {z_C}}\\\\[-8pt] 0&{\displaystyle\frac{{{y_T}}}{{\sqrt {y_T^2 + z_C^2} }}}&{\displaystyle\frac{{{z_C}}}{{\sqrt {y_T^2 + z_C^2} }}}&{\displaystyle\frac{{z_C^2}}{{\sqrt {y_T^2 + z_C^2} }}}\\\\[-8pt] 0&0&0&1 \end{array}} \right] \end{array}.\qquad\end{eqnarray}
(A27)
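Numerically, MWorld → Eye need not be written out in closed form; it can be recovered directly from the four correspondence points, as in the following Python/NumPy sketch (the gaze-at-\(P_2^W\) assumption of Equation (A22) is built in).

import numpy as np

def world_to_eye_matrix(C_P, P2):
    # Equation (A22): the eye z-axis is the gaze direction toward P2, in world coordinates.
    u_z = (P2 - C_P) / np.linalg.norm(P2 - C_P)
    # Equation (A24): the eye x-axis coincides with the world x-axis.
    u_x = np.array([1.0, 0.0, 0.0])
    # Equation (A25): the eye y-axis from the cross product.
    u_y = np.cross(u_z, u_x)

    def h(p):
        # Equation (A19): append 1 to form homogeneous coordinates.
        return np.append(p, 1.0)

    # Equation (A26): correspondence points as columns of P^W and P^C.
    P_W = np.column_stack([h(C_P), h(C_P + u_x), h(C_P + u_y), h(C_P + u_z)])
    P_C = np.column_stack([h(np.zeros(3)), h([1, 0, 0]), h([0, 1, 0]), h([0, 0, 1])])
    # Equation (A27): M maps world homogeneous coordinates to eye coordinates.
    return P_C @ np.linalg.inv(P_W)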
 
Applying this matrix to \(P_1^W\), \(P_2^W\), and \(P_3^W\) yields:  
\begin{eqnarray}\left[ {\begin{array}{@{}*{3}{c}@{}{c}@{}{c}@{}} {P_1^C}&{P_2^C}&{P_3^C}\\\\[-8pt] 1&1&1 \end{array}}\right] = {M_{World \to Eye}}\left[ {\begin{array}{@{}*{3}{c}@{}{c}@{}{c}@{}} {P_1^W} &{P_2^W}&{P_3^W}\\\\[-8pt] 1&1&1 \end{array}}\right].\quad\end{eqnarray}
(A28)
 
Solving this for \(P_1^C\), \(P_2^C\), and \(P_3^C\):  
\begin{eqnarray}P_1^C = - \displaystyle\frac{{{L_{Laser}}}}{2}\left[ {\begin{array}{@{}*{1}{c}@{}} {\cos {\theta _{Laser}}}\\\\[-8pt] {\displaystyle\frac{{{z_C}\sin {\theta _{Laser}}}}{{\sqrt {y_T^2 + z_C^2} }}}\\\\[-8pt] {\displaystyle\frac{{\frac{{{y_T}\sin {\theta _{Laser}}}}{{\sqrt {y_T^2 + z_C^2} }} + \frac{2}{{{L_{Laser}}}}}}{{y_T^2 + z_C^2}}} \end{array}} \right].\end{eqnarray}
(A29)
 
\begin{eqnarray}P_2^C = \left[ {\begin{array}{*{20}{c}} 0&0&{\sqrt {y_T^2 + z_C^2} } \end{array}} \right]. \end{eqnarray}
(A30)
 
Descending Step
 
\begin{eqnarray}P_3^C = {\left[ {\begin{array}{@{}*{1}{c}@{}} {\displaystyle\frac{{{S_H}\left( {{z_C} + r\cos {\theta _{ELEV}}} \right)}}{{{z_C} + r\sin {\theta _{ELEV}}}}}\\\\[-8pt] {\displaystyle\frac{{{S_H}\left( {{y_C}{z_C} + {y_T}r\sin {\theta _{ELEV}}} \right)}}{{\sqrt {y_T^2 + z_C^2} \left( {{z_C} + r\sin {\theta _{ELEV}}} \right)}}}\\\\[-8pt] {\displaystyle\frac{{\left( {y_T^2{z_C} - {S_H}z_C^2 - {S_H}y_T^2 + z_C^3 + y_T^2r\sin {\theta _{ELEV}} + z_C^2r\sin {\theta _{ELEV}} - {S_H}{z_C}r\sin {\theta _{ELEV}}} \right)}}{{{{\left( {{z_C} + r\sin {\theta _{ELEV}}} \right)}^{ - 1}}\left( {z_C^2 + {r^2}\sin {\theta _{ELEV}}} \right)\sqrt {y_T^2 + z_C^2} }}} \end{array}} \right]^T}\end{eqnarray}
(A31)
 
Ascending Step
 
\begin{eqnarray}P_3^C = {\left[ {\begin{array}{@{}*{1}{c}@{}} {\displaystyle\frac{{{S_H}\left( {{y_T} - r\sin {\theta _{AZ}}\cos {\theta _{ELEV}} + r\cos {\theta _{ELEV}}\tan {\theta _{Laser}}} \right)}}{{\tan {\theta _{Laser}}\left( {{z_C} + r\sin {\theta _{ELEV}}} \right)}}}\\\\[-8pt] {\displaystyle\frac{{{S_H}{y_T}}}{{\sqrt {y_T^2 + z_C^2} }}}\\\\[-8pt] {\displaystyle\frac{{\left( {y_T^2 + z_C^2 - {S_H}{z_C}} \right)}}{{\sqrt {y_T^2 + z_C^2} }}} \end{array}} \right]^T}.\quad\end{eqnarray}
(A32)
 
Note that \(P_3^C \to P_2^C = \left[0\quad 0 \quad {\sqrt {y_T^2 + z_C^2} }\right]\) as SH → 0 for both the ascending and descending cases. 
Appendix C: Derivation of Image Coordinates (Perspective Transformation)
We have two image coordinate systems in this article. The first, which we label I1, is a direct perspective transformation from eye coordinates given by:  
\begin{eqnarray}\begin{array}{*{20}{c}} {P_X^{I1} = f\displaystyle\frac{{P_X^C}}{{P_Z^C}}}&{P_Y^{I1} = f\displaystyle\frac{{P_Y^C}}{{P_Z^C}}}&{P_Z^{I1} = f} \end{array}.\end{eqnarray}
(A33)
where f is the focal length of the imaging system, in this case the human eye. Again, see Szeliski43 for a review. This results in the following expressions for \(P_1^{I1}\) and \(P_2^{I1}\):  
\begin{eqnarray}P_1^{I1} = f\left[ {\begin{array}{@{}*{1}{c}@{}} { - \displaystyle\frac{{{L_{Laser}}\cos {\theta _{Laser}}\sqrt {y_T^2 + z_E^2} }}{{2y_T^2 - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2}}}\\\\[-8pt] { - \displaystyle\frac{{{L_{Laser}}{z_E}\sin {\theta _{Laser}}}}{{2y_T^2 - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2}}} \end{array}} \right],\end{eqnarray}
(A34)
 
\begin{eqnarray}P_2^{I1} = \left[ {\begin{array}{*{20}{c}} {0,}&0 \end{array}} \right].\end{eqnarray}
(A35)
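As a sketch, the perspective transformation of Equation (A33) is a pinhole projection: each eye-coordinate point is scaled by the focal length and divided by its depth. In the Python/NumPy sketch below, the focal length f is left as a free parameter.

import numpy as np

def project_to_I1(P_C, f=1.0):
    # Equation (A33): divide x and y by the depth (z) and scale by the focal length f.
    return f * np.array([P_C[0] / P_C[2], P_C[1] / P_C[2]])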
 
As discussed in the Introduction, we disregard the expression for the ascending \(P_3^{I1}\) and focus on the descending expression, which we compute to be:  
\begin{eqnarray}\!\! P_3^{I1} \,{=}\, f\!\!\left[\!\! {\begin{array}{@{}*{1}{c}@{}} {\displaystyle\frac{{{S_H}r\cos {\theta _{ELEV}}\sqrt {y_T^2 + z_E^2} }}{{y_T^2{z_E} \,{-}\, {S_H}z_E^2 \,{-}\, {S_H}y_T^2 \,{+}\, z_E^3 \,{+}\, r\sin {\theta _{ELEV}}\!\!\left( {y_T^2 \,{+}\, z_E^2 \,{-}\, {S_H}{z_E}} \right)\!}}}\\\\[-8pt] { - \displaystyle\frac{{{L_{Laser}}{z_E}\sin {\theta _{Laser}}}}{{y_T^2{z_E} - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2}}} \end{array}}\!\! \right]\!\!.\quad\;\end{eqnarray}
(A36)
 
The second image coordinate system is a rotation of I1 such that the laser line segment \(\overline {{P_1}{P_2}} \) is collinear with the vertical axis (as shown in Fig. 2b). We label this second image coordinate system I2. Here we derive the matrix used to rotate the image from the I1 to the I2 coordinate system, reducing the computation of Δ to taking the x-component of \(P_3^{I2}\). As above, this entails finding points that correspond in the two coordinate systems. Because this is only a two-dimensional conversion, only three points are needed. This transformation is a rotation only, so the origin (O) remains the same in both coordinate systems:  
\begin{eqnarray}P_2^{I1} = {O^{I1}} = \left[ {\begin{array}{*{20}{c}} {0,}&0 \end{array}} \right] \to P_2^{I2} = {O^{I2}} = \left[ {\begin{array}{*{20}{c}} {0,}&0 \end{array}} \right].\qquad\end{eqnarray}
(A37)
 
To align the laser line with the y-axis, we want to place the point \(P_1^{I1}\) along the y-axis. Above we found:  
\begin{eqnarray}P_1^{I1} = f\left[ {\begin{array}{@{}*{1}{c}@{}} { - \displaystyle\frac{{{L_{Laser}}\cos {\theta _{Laser}}\sqrt {y_T^2 + z_E^2} }}{{2y_T^2 - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2}}}\\\\[-8pt] { - \displaystyle\frac{{{L_{Laser}}{z_E}\sin {\theta _{Laser}}}}{{2y_T^2 - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2}}} \end{array}} \right].\qquad\end{eqnarray}
(A38)
 
The normalized unit vector is:  
\begin{eqnarray}\begin{array}{@{}l@{}} \overline {u_Y^{I1}} = \displaystyle\frac{{P_1^{I1}}}{{\left\| {P_1^{I1}} \right\|}} = \displaystyle\frac{{P_1^{I1}}}{{\left( {{L_{Laser}}f\displaystyle\frac{{y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2}}{{{{\left( {2y_T^2 - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2} \right)}^2}}}} \right)}}\\\\[-8pt] \overline {u_Y^{I1}} = \displaystyle\frac{{P_1^{I1}\left( {2y_T^2 - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2} \right)}}{{{L_{Laser}}f\left( {y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2} \right)}} \end{array}\qquad\end{eqnarray}
(A39)
 
To get the orthogonal \(u_X^{I1}\) unit vector we take advantage of the fact that for any two-dimensional vector \([ {\begin{array}{*{20}{c}} {a,}&b \end{array}} ]\) an orthogonal vector can be formed by \( [ {\begin{array}{*{20}{c}} { - b,}&a \end{array}} ]\). Thus we create a point \(P_X^{I1}\):  
\begin{eqnarray}P_X^{I1} = \left[ {\begin{array}{*{20}{c}} { - P_1^{I1}(2)}&{P_1^{I1}(1)} \end{array}} \right].\end{eqnarray}
(A40)
 
And so:  
\begin{eqnarray}\overline {u_X^{I1}} = \displaystyle\frac{{P_X^{I1}\left( {2y_T^2 - {L_{Laser}}\sin {\theta _{Laser}}{y_T} + 2z_E^2} \right)}}{{{L_{Laser}}f\left( {y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2} \right)}}.\qquad\end{eqnarray}
(A41)
 
The origin along with the two unit vectors produce the three points we need to compute the rotation matrix from I1 to I2 coordinates:  
\begin{eqnarray}{{\rm P}^{I1}} = {\left[ {\begin{array}{@{}*{2}{c}@{}} {\begin{array}{@{}*{2}{c}@{}} {\left[ {\begin{array}{*{20}{c}} {0,}&0 \end{array}} \right]}\\\\[-8pt] {\overline {u_X^{I1}} }\\\\[-8pt] {\overline {u_Y^{I1}} } \end{array}}&{\begin{array}{@{}*{1}{c}@{}} 1\\ 1\\ 1 \end{array}} \end{array}} \right]^T} \to {{\rm P}^{I2}} = {\left[ {\begin{array}{@{}*{3}{c}@{}} 0&0&1\\ 1&0&1\\ 0&1&1 \end{array}} \right]^T}.\end{eqnarray}
(A42)
 
The rotation matrix, \({M_{I1\, \to\, I2}}\), is then given by:  
\begin{eqnarray} &&\!\!\!\!\!{M_{I1 \to I2}} = {{\rm P}^{I2}}inv( {{{\rm P}^{I1}}} ) \nonumber\\ &&\!\!\!\!\!= \left(\!\! {\begin{array}{@{}*{3}{c}@{}} {\displaystyle\frac{{ - {z_E}\sin {\theta _{LASER}}}}{{\sqrt {y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2} }}}&{\displaystyle\frac{{\cos {\theta _{Laser}}\sqrt {y_T^2 + z_E^2} }}{{\sqrt {y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2} }}}&0\\\\[-8pt] {\displaystyle\frac{{\cos {\theta _{Laser}}\sqrt {y_T^2 + z_E^2} }}{{\sqrt {y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2} }}}&{\displaystyle\frac{{{z_C}\sin {\theta _{LASER}}}}{{\sqrt {y_T^2{{\cos }^2}{\theta _{Laser}} + z_E^2} }}}&0\\\\[-8pt] 0&0&1 \end{array}}\! \right)\!.\qquad\end{eqnarray}
(A43)
 
To summarize, to find the parameter Δ in Equation (4) (a numerical sketch of these steps follows this list): 
1. Convert the descending world coordinates expression for \(P_3^W\) into its eye coordinate counterpart, \(P_3^C\), using:  
\begin{eqnarray}P_3^C = {M_{World \to Eye}}P_3^W.\end{eqnarray}
(A44)
2. Use the perspective transformation to convert \(P_3^C\) into its corresponding 2D image coordinate, \(P_3^{I1}\).
3. Use the derived rotation matrix to convert \(P_3^{I1}\) to \(P_3^{I2}\):  
\begin{eqnarray}P_3^{I2} = {M_{I1 \to I2}}P_3^{I1}.\end{eqnarray}
(A45)
4. Δ is the x-component of the point \(P_3^{I2}\).
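The following Python/NumPy sketch strings the four steps together for a descending curb, reusing the helper functions from the earlier sketches (world_to_eye_matrix, p3_descending, and project_to_I1). It constructs the rotation of Equations (A37) through (A43) numerically from the image of P1 rather than from the closed form; it is an illustration of the procedure under those assumptions, not the authors' implementation.

import numpy as np

def rotation_I1_to_I2(P1_I1):
    # Equations (A39)-(A41): unit vector toward P1's image and its orthogonal complement.
    u_y = P1_I1 / np.linalg.norm(P1_I1)
    u_x = np.array([-u_y[1], u_y[0]])          # Equation (A40)
    # Rows of the rotation matrix are the I2 axes expressed in I1 coordinates.
    return np.array([u_x, u_y])

def perspective_signal_delta(C_P, L_P, P1, P2, S_H, f=1.0):
    M = world_to_eye_matrix(C_P, P2)                       # Step 1 (Equation A44)
    P3_W = p3_descending(L_P, P2, S_H)                     # descending-curb P3
    P1_C = (M @ np.append(P1, 1.0))[:3]
    P3_C = (M @ np.append(P3_W, 1.0))[:3]
    P1_I1 = project_to_I1(P1_C, f)                         # Step 2 (Equation A33)
    P3_I1 = project_to_I1(P3_C, f)
    P3_I2 = rotation_I1_to_I2(P1_I1) @ P3_I1               # Step 3 (Equation A45)
    return P3_I2[0]                                        # Step 4: x-component is delta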
Figure 1.
 
Overview of our electronic travel aid design. (a) The components that are worn on the user's hips: The LP (laser projector) houses the two green laser line projectors (only one laser line is shown in all three illustrations) and the IR camera used in detecting hazards. The component IR houses the two infrared laser line projectors used to mark the hazards for the IR camera. The central processor of the ETA is worn in a backpack. World coordinates are marked at the user's feet, whereas the cyclopean eye (CP) is a point centered between the user's eyes at x = y = 0. Also in (a) is an illustration of how a green laser might appear when projected on flat ground. This would not occur under normal operation, since there is no hazard, but it is helpful for comparison with the other two panels. The angle the laser line makes with the x-axis is denoted by ϴLaser. (b) Illustration of a green laser line projected onto a descending curb that appears as a set of two unconnected line segments. (c) A green laser line projected onto an ascending curb that appears as a set of three contiguous line segments.
Figure 2.
 
Diagram of the perspective signals created by a laser line projected from a user's right hip for a seven-inch descending curb and observed at the cyclopean eye (all other parameter values are taken from Table 1). (a) Signals for two values of ϴLaser are shown (thin solid line for 75° and thick dotted line for 105°). Note that ϴLaser does not affect the locations of P2 and P3 and so the gap between these points is the same for both cases. ϴLaser does affect the orientations of the line segments P1P2 and P3P4. (b) These same patterns rotated about P2 such that the line segment P1P2 is aligned with the vertical axis for both cases. Viewed this way, the gap between P2 and P3 can be decomposed into two components, parallel perspective signal (Δ) and perpendicular perspective signal (too small to mark). The ϴLaser = 75° case has a much larger Δ component than ϴLaser = 105°, and we show below that this variable drives the performance for curb recognition.
Figure 3.
 
The epipolar plane (shown in transparent gray) in this structured light system is the plane that contains the cyclopean vision sensor (CP), laser projector (LP), and a target (chosen to be P2, the intersection of the laser line with leading edge of the curb).
Figure 4.
 
Examples of stimuli presented to subjects in a 3AFC task for the conditions of ϴLaser = 60° and |SH| = six inches: (a) a descending curb, (b) a flat surface, and (c) an ascending curb. For illustration, the background is removed for the left half of each frame to allow the reader to see the contour of the simulated surface. In the experiment, however, background noise that matched the mean and variance of the walking surface was added, as shown on the right side of each frame, so that only the laser cue, and not the contour cues, could be used by the subject for the judgment. The transition between platform texture and background was invisible to the three subjects with VI.
Figure 5.
 
Performance (Cohen's K) as a function of step-height |SH| for the three subjects. The threshold for each subject was chosen to be the smallest |SH| with a K > 0.9. The horizontal dashed line marks the K = 0.9 level.
Figure 6.
 
(a) Model predictions: magnitude of Δ on the retinal plane as a function of ϴLaser for ascending and descending curbs (Equation [4]) using the representative values from Table 1. The two curves are nearly identical and are lowest near ϴLaser = 100°. (b) Performance of three subjects as a function of laser line angle. (c) Mean of the three subjects from (b) compared to the model of performance found by fitting the results of Equation (4) using Equation (6) with the single constant set to 0.927. The Pearson linear correlation between the subjects' mean and predicted performance is r2 = 0.86, P < 0.001.
Table 1.
 
Parameter Values Assumed for Simulations Approximately Corresponding to a Person of Six-Foot Height (Used for Calculations When Not Otherwise Specified)
Table 2.
 
Confusion Matrices for the ϴLaser Values Resulting in the Worst Performance for Each Subject