In a head-up display (HUD), reflected information is superimposed on a background scene: the user observes a transparent image through a reflecting element (a combiner) placed in front of them. HUDs have been used in aircraft for many years, and they entered practical use in cars in the 1990s.1 In recent years, the automotive HUD market has been growing. An advantage of HUDs is that users can glance at the displayed information with minimal movement of their gaze point. Generally, for aircraft HUDs the focus point is set to infinity, whereas for automotive applications the focus point is typically 2–3 m away (near the front bumper). For several years, however, there has been demand for the ability to set the display image at any depth position along the driver's line of sight.
There are two reasons for this difference in focus point between aircraft and automotive HUDs. First, when driving a car, one must be able to see the background at various distances (i.e., from close range to infinity). Second, it is necessary to reduce the sense of incongruity for drivers when they are driving around a bend.1 When binocular parallax is used to set the focus point, the parallax image and the observer's gaze point must be made to correspond during HUD image generation.2 This type of processing, however, is complex and difficult to achieve.
We have therefore been conducting research and development on a hyperrealistic display, in which the user faces a non-planar display with a wide visual field. With this display we found that monocular vision reinforces space perception: subjective scores for the stereoscopic sense were 0.5–1.3 points higher than for binocular vision (see Figure 1).3,4 We have used these monocular depth-perception results to develop a new monocular HUD concept, which we call the windshield-refracted augmented reality projector (WARP). In this system (see Figure 2), the display image can be set to any depth position, which represents an improvement over conventional approaches.
Figure 1. Photograph of a user operating our monocular head dome projector display (with a field of view of more than 160°). Inset shows the subjective evaluation scores given by users, comparing monocular vision with binocular vision (a score of 0 indicates the same evaluation for monocular and binocular vision).
Figure 2. Schematic illustration of the monocular head-up display (HUD) concept known as the windshield-refracted augmented reality projector (WARP).
Our monocular HUD approach is also expected to offer the advantage of increased perception speed. With a typical binocular HUD, the HUD image becomes doubled and appears blurred when a distant point is viewed. This double vision is caused by binocular parallax (see Figure 3). With a monocular HUD, however, the projected image can be seen clearly, irrespective of the distance of the fixation point. Our WARP system thus allows minimal accommodation time and maximal perception speed.
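The size of this parallax-induced double image can be estimated with simple geometry. The sketch below is our own illustration, not part of the WARP system; the interpupillary distance and the 2.5 m virtual-image distance are assumed typical values. It computes the angular offset between the two eyes' views of the HUD image as the driver fixates at increasing distances:

```python
IPD = 0.065           # assumed interpupillary distance in meters (typical adult value)
HUD_DISTANCE = 2.5    # assumed virtual-image distance of a conventional automotive HUD (m)

def disparity_mrad(fixation_m: float, image_m: float = HUD_DISTANCE) -> float:
    """Angular disparity (milliradians) between the two eyes' views of the
    HUD image when the observer fixates at fixation_m meters (small-angle
    approximation: disparity = IPD * |1/image_m - 1/fixation_m|)."""
    return abs(IPD * (1.0 / image_m - 1.0 / fixation_m)) * 1000.0

for d in (2.5, 10.0, 50.0, float("inf")):
    print(f"fixation at {d} m -> {disparity_mrad(d):.1f} mrad of double image")
```

The disparity is zero when the fixation distance matches the image plane and grows toward roughly 26 mrad (for these assumed values) as the driver fixates at infinity, which is why the binocular HUD image in Figure 3 splits into two. A monocular view removes this term entirely.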
Figure 3. Schematic view of binocular and monocular HUD images, illustrating the viewing condition (left) and the perceived vision (right). For the binocular case, the HUD was observed with both the left (dashed line) and right (solid line) eyes, which causes the double image. For the monocular case, the HUD was observed with only one eye.
An important function of our WARP system is the control of the perceived depth position of the display image. We have thus developed a control method for monocular conditions. Our results indicate that, using a static display image with static perspective depth cues, the perceived depth position can be controlled up to a distance of 60 m with 30% error. In contrast, when a dynamic display image (with dynamic depth cues) is used, the perceived depth position can be controlled up to a distance of 120 m (also with 30% error).5,6
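A static perspective depth cue of this kind can be sketched as follows. Under a size-constancy assumption, a marker drawn on the fixed virtual-image plane appears to lie at a farther depth if it is scaled down to subtend the same visual angle as a real object at that depth. All sizes and distances here are illustrative assumptions, not parameters of the actual WARP implementation:

```python
REAL_WIDTH_M = 3.5     # assumed real-world width the marker represents (e.g., a traffic lane)
IMAGE_PLANE_M = 2.5    # assumed fixed virtual-image distance of the display (m)

def drawn_width_m(target_depth_m: float) -> float:
    """Width to draw on the image plane so the marker subtends the same
    visual angle as a REAL_WIDTH_M-wide object at target_depth_m
    (small-angle size-constancy cue: width scales as 1/depth)."""
    return REAL_WIDTH_M * IMAGE_PLANE_M / target_depth_m

for depth in (2.5, 10.0, 30.0, 60.0):
    print(f"target depth {depth} m -> draw {drawn_width_m(depth):.3f} m wide")
```

A dynamic cue would additionally animate this size (and the marker's position in the scene) frame by frame as the target depth changes, which is consistent with the larger controllable range reported for dynamic depth cues.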
These previous results were obtained under conditions with no obstacles on the road ahead. In many cases, however, other vehicles are ahead of the driver, and the depth perception effects of our WARP system when the HUD image overlaps a preceding car or other obstacle had not previously been investigated. Our objective in this study was thus to develop a control method for the perceived depth position when there is a preceding obstacle. Results of the novel control method we developed for this purpose are illustrated in Figure 4.
Figure 4. Perceived depth position under real space conditions with a preceding obstacle. The red dashed line indicates a reasonable result that is almost identical to the result obtained with the ‘turn dynamic perspective’ method.
We have also conducted a long-duration fatigue test to demonstrate the safety performance of WARP. In this test, users watched two movies consisting of background images without captions and superimposed HUD images with captions. Each movie lasted two hours, with a one-hour interval in between. To our surprise, the results indicate that eye fatigue from using WARP was statistically lower than from using a conventional binocular HUD system (see Figure 5). We have therefore shown that monocular observation reduces long-term eye fatigue. We believe this phenomenon can be explained by monocular vision involving a moderate amount of eye exercise.6
Figure 5. Results from long-period eye fatigue tests for WARP and a binocular HUD system.
In summary, we have proposed and developed a novel monocular projection-type head-up display known as WARP. With a prototype of this system we have achieved free control of the perceived depth position and high visibility. In our future work we will improve our WARP prototype by adding a monocular eye-tracking method. We will also evaluate the level of human attention with WARP, including a comparison of monocular and binocular vision.
Haruhiko Okumura, Takashi Sasaki, Aira Hotta
Toshiba Corporate Research and Development Center
Haruhiko Okumura obtained his BSc, MSc, and PhD in electrical engineering from Waseda University, Japan, in 1981, 1983, and 1995, respectively. In 1998 he joined the Toshiba Corporate Research and Development Center, where he has been engaged in developing image-pickup and video compression technologies. He is now working on the research and development of see-through and wearable displays.
1. S. Okabayashi, Visual Optics of Head-Up Displays (HUDs) in Automotive Applications, p. 129, Gordon and Breach Publishers, 1996.
2. K. Nakamura, J. Inada, M. Kakizaki, T. Fujikawa, S. Kasiwada, H. Ando, N. Kawahara, Windshield display for safe and comfortable driving, Tech. Paper 2005-01-1603, SAE, 2005.
3. H. Okumura, T. Sasaki, A. Hotta, N. Okada, Monocular hyper-reality display, Proc. Eurodisplay 09, p. P24, 2009.
4. S. Nagata, Visual sensitivities to cues for depth perception (in Japanese), J. Inst. Televis. Eng. Japan 31, p. 649-655, 1977.
5. T. Sasaki, A. Hotta, A. Moriya, T. Murata, H. Okumura, K. Horiuchi, N. Okada, M. Ogawa, O. Nagahara, Hyper-realistic display for automotive application, Soc. Inf. Display Int'l Symp. Digest Tech. Papers 41, p. 953-956, 2010.
6. H. Okumura, T. Sasaki, A. Hotta, K. Shinohara, Monocular AR display for automobile navigation and safety driving, Soc. Inf. Display Int'l Symp. Digest Tech. Papers 46, p. 64, 2015. doi:10.1002/sdtp.10541