Floating 3D images by 2D image scanning
Optical imaging and scanning act together to display a volumetric image suspended in midair with natural depth perception and very large viewing angle.
Three-dimensional displays are a promising next-generation visual interface. Although natural depth perception is the most important issue for such a display, additional features are expected, such as floating images with a very wide viewing angle. Images that float in the air offer the possibility of interactive operation, either directly using fingers or via 3D positioning devices. A very large viewing angle (ideally 360°, a surrounding viewing angle) would enable a group of people working together to stand around the display and consider the same image: see Figure 1.
Most 3D display techniques for televisions and movie theaters require troublesome eyeglasses, and conventional autostereoscopic displays based on parallax barriers or lenticular sheets usually have restricted viewing positions. Although holography is a true 3D display technique that can satisfy all the criteria for stereoscopic vision, realizing a practical holographic display is difficult with existing technology. Several techniques for forming 3D images with a ‘surround’ viewing zone have been developed, including a volumetric display with a rotating screen,1 a cylindrical parallax barrier display,2 and a 360° scanning display with a rotating mirror.3 However, none of these forms a floating image.
My colleagues and I are developing two types of autostereoscopic displays that form a floating 3D image on the basis of volume scanning with a moving optical real image. One is a volumetric display, which can form a volumetric 3D image composed of light points as voxels by stacking cross-sectional images of a 3D object.4–6 These cross-sectional images are formed in the air by high-speed scanning and modulation of a 2D real image. The other is an autostereoscopic display, which can form a floating full-parallax stereoscopic image viewable from the surrounding area.7 The proposed technique is based on integral imaging, 360° scanning, and imaging with a concave mirror.
Figure 2 schematically shows the principles of a basic volumetric display, in which a 2D display is placed obliquely in an optical imaging system with an optical scanner (a mirror scanner in this case). An inclined image formed by the optical imaging system moves with the optical scanner in the lateral direction perpendicular to the optical axis. Cross-sectional images of a 3D object appear to float in midair as the viewer perceives the source of the light to be where it has converged in midair. High-speed scanning and the afterimage effect (in which the brain continues to perceive recent images for a while after they disappear) enable observers to recognize a 3D image from a stack of whole cross-sectional images.
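The timing requirement implied by the afterimage effect can be sketched numerically: the 2D display must redraw the entire stack of cross-sections within the persistence window, so its frame rate is the slice count multiplied by the volumetric refresh rate. The function and numbers below are illustrative assumptions, not the specifications of the actual system.

```python
# Hypothetical sketch: the 2D frame rate needed for flicker-free volume
# scanning equals (slices per volume) x (volumetric refresh rate).
# All figures here are illustrative assumptions.

def required_2d_frame_rate(slices_per_volume: int, volume_refresh_hz: float) -> float:
    """Frame rate the 2D display must sustain so a full stack of
    cross-sections is redrawn within the afterimage (persistence) window."""
    return slices_per_volume * volume_refresh_hz

# e.g. 100 cross-sections redrawn 30 times per second
# requires 3000 frames per second from the 2D modulator.
print(required_2d_frame_rate(100, 30.0))  # 3000.0
```

This back-of-the-envelope constraint is why a high-speed spatial light modulator is needed for the 2D image source.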
We have built on this idea by developing a volumetric display5 using a micromirror imaging device called a dihedral corner reflector array (DCRA), which is a transmissive optical imaging element composed of many dihedral corner reflectors (an array of minute flat mirrors arranged as rectangular roofs), as shown in Figure 3. Light that is reflected twice in each corner reflector and transmitted through a DCRA travels along a path that is plane-symmetric to that of the incident light: see the inset image in Figure 3 (top right). This transformation of light gives the DCRA a plane-symmetric imaging function. Imaging with a DCRA is theoretically free from optical distortion. Moreover, a DCRA can form an image close to the device. We used a digital micromirror device (DMD) as a high-speed spatial light modulator. A real image formed by the DCRA is moved with a galvanometric mirror scanner. To form a volumetric 3D image, cross-sectional images of an object are sequentially projected from the DMD, each corresponding to the instantaneous position of the scanned real image.
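The DCRA's plane-symmetric imaging can be expressed as a simple coordinate mapping: a point source on one side of the element plane is imaged to the mirror point on the other side. The coordinate convention below (element plane at z = 0) is an illustrative assumption.

```python
# Hypothetical sketch of the DCRA's plane-symmetric imaging: a point
# source at (x, y, z), with the element plane taken as z = 0, is imaged
# to the mirror point (x, y, -z). Coordinates are illustrative.

from typing import Tuple

def dcra_image_point(source: Tuple[float, float, float]) -> Tuple[float, float, float]:
    """Return the real-image position of a point source, plane-symmetric
    about the DCRA element plane z = 0 (distortion-free by construction)."""
    x, y, z = source
    return (x, y, -z)

# A source 5 cm behind the element images 5 cm in front of it, in midair.
print(dcra_image_point((1.0, 2.0, -5.0)))  # (1.0, 2.0, 5.0)
```

Because the mapping is a pure reflection rather than a lens equation, there is no focal length and no distortion term, which is why the image can also form arbitrarily close to the device.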
To expand the viewing angle to 360°, we proposed a combination of integral imaging, scanning with a rotating mirror, and aerial imaging with a concave mirror. Figure 4 schematically shows a typical system of our proposed display method.7 An image generated by the integral imaging system is transferred to the scanning system. A real image, which is an autostereoscopic image based on integral imaging, is formed around the center of curvature of the concave mirror. When the inclined mirror rotates, the direction of the real image and of the light propagation rotates accordingly. The integral image is changed in accordance with the scanning angle, so the system forms a 3D image that presents different views depending on the viewing angle.
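The angle-dependent view update can be sketched as a lookup: given a set of precomputed integral images, one per viewing direction, the instantaneous mirror angle selects which image to display. The view count and the uniform angular spacing below are illustrative assumptions.

```python
# Hypothetical sketch: with a rotating-mirror scan over 360 degrees and a
# bank of precomputed integral images (one per viewing direction), the
# image shown at any instant is selected from the current scan angle.
# The number of views is an illustrative assumption.

def view_index(scan_angle_deg: float, num_views: int) -> int:
    """Map the instantaneous scan angle to the integral-image view to display,
    assuming views are spaced uniformly around the full 360-degree circle."""
    step = 360.0 / num_views
    return int(scan_angle_deg % 360.0 // step)

# With 120 views (3-degree spacing), a scan angle of 91 degrees selects view 30.
print(view_index(91.0, 120))  # 30
```

Synchronizing this lookup with the mirror rotation is what turns a single integral-imaging source into a surround-viewable image.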
In summary, we have shown it is possible to generate a true 3D image in the air that is composed of light points arranged over three dimensions, and a floating 3D image viewable from the surroundings, on the basis of 3D volume scanning with a 2D image formed in the air. Improving image characteristics such as size, resolution, distortion, color reproduction, and refresh rate is important for increasing the range of possible applications. For future work, we want to construct a novel visual interface that enables interactive manipulation of a floating 3D image with natural depth perception by introducing a 3D pointing technique using hands or a device.
Osaka City University
Daisuke Miyazaki is an associate professor. His current research interests include 3D displays, optical 3D measurement, imaging technology, information photonics, and medical applications of optical measurement.