Polarized aerial imaging by retroreflection for 2.5D floating image displays
With the advent of low-cost 3D motion sensors in consumer markets (e.g., Microsoft Kinect and Leap Motion), various floating displays have been proposed that let users directly view and touch 3D objects in mid-air without wearing special devices such as 3D glasses or data gloves.1, 2 However, most of these displays have limited viewing angles and distort the background. Recently, a solution to these issues was proposed: aerial imaging by retroreflection (AIRR), in which reflective sheeting is used to produce floating images.3 Despite these advances, AIRR's low light efficiency still limits its application, and the question remains how to effectively combine floating images with familiar 2D displays to enable interaction above 2D surfaces. To improve the light efficiency of AIRR, we recently proposed a polarized AIRR (pAIRR) system that uses an LCD backlight recycling technique.4 Here, we show that pAIRR can be used to create 2.5D displays that present visible floating images above tabletop projection screens.5, 6
Our pAIRR design offers several improvements over AIRR: see Figure 1(a) and (c). Instead of conventional half mirrors, we use reflective polarizers (i.e., 3M Dual Brightness Enhancement Film), which are normally used in LCD backlights to recycle light that would otherwise be lost to absorption, improving power efficiency. Rather than conventional microbead-array retroreflectors (which are only 25% active and lose light to absorption and scattering), we use corner-cube-array retroreflectors, which have 100% active retroreflective prisms and retroreflect wide-angle incident light by total internal reflection. We also place a quarter-wave retarder film on top of the retroreflector to recycle retroreflected s-polarized light. As a result, our method increases the net light efficiency of AIRR from 7.7% to 19.3% and makes floating images visible under standard room lighting conditions: see Figure 1(b) and (d).
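The efficiency gain can be illustrated with a back-of-the-envelope light budget: the net efficiency is the product of per-stage efficiencies along the optical path. The stage values below are hypothetical round numbers chosen only to illustrate the trend (a half mirror costs 50% per pass, a microbead retroreflector is only 25% active, while a reflective polarizer with a quarter-wave retarder recovers most of the returning light); only the final 7.7% and 19.3% figures are reported measurements.

```python
# Back-of-the-envelope light-budget comparison of AIRR vs. pAIRR.
# All per-stage efficiencies below are illustrative assumptions,
# NOT measured values from the system described in the text.

def net_efficiency(stages):
    """Net efficiency is the product of per-stage efficiencies."""
    eff = 1.0
    for e in stages.values():
        eff *= e
    return eff

# Conventional AIRR: light crosses a half mirror twice (50% each pass)
# and hits a microbead retroreflector that is only ~25% active.
airr = {
    "half_mirror_pass_1": 0.50,
    "microbead_retroreflector": 0.25,
    "half_mirror_pass_2": 0.50,
}

# pAIRR: the reflective polarizer transmits ~50% of unpolarized light on
# the first pass, but the quarter-wave retarder rotates the polarization
# of the retroreflected light so that most of it is reflected toward the
# viewer; the corner-cube retroreflector is 100% active.
pairr = {
    "reflective_polarizer_pass_1": 0.50,
    "corner_cube_retroreflector": 0.65,
    "quarter_wave_retarder_roundtrip": 0.90,
    "reflective_polarizer_reflection": 0.70,
}

print(f"AIRR  net efficiency: {net_efficiency(airr):.1%}")
print(f"pAIRR net efficiency: {net_efficiency(pairr):.1%}")
```

With these assumed values the pAIRR path delivers roughly 2.5 times the light of the AIRR path, matching the direction (though not the exact figures) of the reported 7.7% to 19.3% improvement.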
To present a floating display over a tabletop projection screen, we employ a time-division multiplexing method using polymer network liquid crystal (PNLC) panels, which switch between transparent and opaque states within 1 ms when their input voltages are modulated (a technology currently used in 3D shutter glasses). We set one PNLC panel (PNLC1) at the floating-display position and the other (PNLC2) on top of the reflective polarizer: see Figure 1(c). Using an NVIDIA 3D projector, we take the two sync signals from a pair of 3D shutter glasses and amplify each to drive PNLC1 and PNLC2, respectively (see Figure 2). By projecting a floating image on PNLC1 and a tabletop image on PNLC2 time-sequentially, we can virtually superimpose a floating display on the tabletop screen.
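The alternation logic of this time-division scheme can be sketched as a frame-synchronized loop. The panel model and the show_subframe helper below are hypothetical stand-ins for the amplified shutter-glass sync signals and the projector output, which the text describes only at the hardware level.

```python
# Minimal sketch of PNLC time-division multiplexing, assuming a simple
# frame-synchronized loop. PNLCPanel and show_subframe are hypothetical
# stand-ins for the amplified sync signals and projector hardware.

from dataclasses import dataclass

@dataclass
class PNLCPanel:
    name: str
    opaque: bool = False  # transparent by default

    def set_state(self, opaque: bool):
        # A real PNLC panel switches within ~1 ms when the drive
        # voltage is modulated; here we just record the state.
        self.opaque = opaque

def show_subframe(target, other, image):
    """Make one panel an opaque projection screen, leave the other
    transparent, and project the sub-frame image onto the opaque one."""
    target.set_state(opaque=True)   # acts as a projection screen
    other.set_state(opaque=False)   # lets light pass through
    return (target.name, image)

pnlc1 = PNLCPanel("PNLC1")  # at the floating-display position
pnlc2 = PNLCPanel("PNLC2")  # on top of the reflective polarizer

# Alternate sub-frames: even frames put the floating image on PNLC1,
# odd frames put the tabletop image on PNLC2, fast enough that the
# viewer fuses them into one superimposed scene.
frames = []
for frame in range(4):
    if frame % 2 == 0:
        frames.append(show_subframe(pnlc1, pnlc2, "floating image"))
    else:
        frames.append(show_subframe(pnlc2, pnlc1, "tabletop image"))

print(frames)
```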
We developed two interactive applications to demonstrate this prototype system: a magic book and a 2.5D miniature world (see Figure 3). A Leap Motion Controller senses the user's hand movements, and floating lightning effects are produced between the user's fingers and the magic book displayed on the tabletop screen: see Figure 3(a). The user can turn pages with a touch gesture to access other floating effects. The 2.5D miniature world application uses monocular depth cues to enhance the virtual space between the floating display and the tabletop screen. In the example shown in Figure 3(b), a floating dragon attacks a village shown on the tabletop screen. The floating sensation is enhanced by adding the dragon's shadow to the tabletop village scene. These 3D spatial sensations can be enhanced further through other monocular depth cues (e.g., relative size, occlusion, and perspective).
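The shadow cue reduces to a simple geometric projection: given the floating object's height above the tabletop and a light direction, similar triangles give the point where the shadow lands on the tabletop plane. The function below is a hedged illustration of that geometry under an assumed directional light; it is not code from the applications described above.

```python
# Sketch of the shadow depth cue: project a floating object's position
# onto the tabletop plane (z = 0) along an assumed directional light.
# Purely illustrative; the applications' actual rendering is not
# described in code form in the text.

def shadow_offset(object_xy, height, light_dir):
    """Return the (x, y) point where the shadow lands on the tabletop.

    object_xy : (x, y) of the floating object above the table
    height    : distance of the object above the tabletop plane
    light_dir : (dx, dy, dz) direction light travels, with dz < 0
    """
    dx, dy, dz = light_dir
    if dz >= 0:
        raise ValueError("light must travel downward (dz < 0)")
    t = height / -dz  # ray parameter at which z reaches 0
    x, y = object_xy
    return (x + t * dx, y + t * dy)

# A dragon floating 0.2 m above the village, lit obliquely: the farther
# the shadow is displaced from the point directly below the dragon, the
# higher the dragon appears to float.
print(shadow_offset((0.0, 0.0), 0.2, (1.0, 0.5, -1.0)))
```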
In summary, we have developed pAIRR, a new optical technology, and shown that it can produce mid-air floating images over tabletop projection screens, enabling users to interact with digital objects beyond the 2D surface. Our future work includes applying pAIRR to volumetric 3D displays. Current volumetric display technologies (e.g., multi-layered 3D displays, swept-screen multiplanar volumetric displays, and 3D LED displays) do not allow users to interact with the 3D content directly (i.e., with bare hands). Using pAIRR, we can potentially re-image volumetric light sources in mid-air and make 3D images directly accessible. This technology could lead to applications in medical training with hands-on 3D simulation of procedures, direct 3D modeling and rapid prototyping, and tabletop games.
The authors appreciate the support of the Microsoft Applied Sciences Group. This research was partially supported by the Japan Science and Technology Agency CREST funding program.
Yutaka Tokuda is a researcher at Utsunomiya University. He received a BS (Honors) degree in applied physics from Purdue University and an MS degree in mechano-informatics from the University of Tokyo, where he is currently completing his PhD. His research interests include 3D displays and human-computer interaction.
Hirotsugu Yamamoto received BE, ME, and PhD degrees from the University of Tokyo. He then joined Tokushima University, and is currently an associate professor at Utsunomiya University. His recent work has included AIRR and novel imaging systems based on information photonics.