A real-time 3D system using electronic holography
Generating holograms from captured images and reconstructing them in real time is a step toward live, full-motion 3D scenes.
Electronic holography—generating holograms using electro-optical apparatus—could enable the ultimate interactive 3D display system.1 The technique shows promise for 3D television, virtual workspaces for telework, and teleconferencing, all of which require moving 3D images without the need for 3D glasses. However, realizing a real-time electronic holography system requires processing a large amount of captured 3D data to generate the holograms. To resolve this issue, we captured 3D objects with integral photography (IP)—a 3D imaging technique—instead of a conventional depth camera (a system that captures a color image with a digital camera and a depth 'map' with IR light), and we used graphics processing units (GPUs) for parallel processing.2 In addition, we adapted the IP optical setup so that a fast Fourier transform (FFT) algorithm could be used for real-time hologram calculation.
Figure 1 shows a schematic overview of our system for real-time image capture and reconstruction. The setup consists of three component blocks—capture, calculation, and display—which are shown in Figures 2–4.2 The capture block records a 3D image with IP, using a system that comprises a lens array, a field lens, a spatial filter, and a 4K (3840×2160 pixels) camera; the 3D image is recorded as an IP image consisting of many elemental images. Unlike other holographic techniques, capturing 3D images by IP can be done under natural light conditions.
In the calculation block, the IP images are converted to holograms in real time using multiple GPUs, and the images are enlarged four times to match the 8K (7680×4320 pixels) spatial light modulator (SLM) used to display the hologram. In the display block, the 3D image is reconstructed using our optical system for electronic holography. The hologram generated by the calculation block is displayed on 8K LCD panels, and the full-color 3D image is reconstructed by irradiating the hologram with reference light.
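The IP-to-hologram conversion lends itself to parallel processing because each elemental image can be handled independently. As a toy illustration—assuming an optical setup in which each elemental image maps (up to constant factors) to its own Fourier-transformed patch of the hologram plane, which is our simplification, not the authors' actual pipeline—a minimal NumPy sketch might look like this:

```python
import numpy as np

def ip_to_hologram(elemental_images):
    """Map each IP elemental image to a patch of the hologram plane
    via a 2D FFT and tile the patches into one hologram.

    Illustrative sketch only: the function name, the independent
    per-patch treatment, and the omission of optical constants are
    simplifying assumptions, not NICT's actual implementation.
    """
    rows, cols, h, w = elemental_images.shape
    hologram = np.zeros((rows * h, cols * w), dtype=np.complex128)
    for r in range(rows):
        for c in range(cols):
            # FFT of one elemental image -> one hologram patch
            patch = np.fft.fftshift(np.fft.fft2(elemental_images[r, c]))
            hologram[r * h:(r + 1) * h, c * w:(c + 1) * w] = patch
    return hologram

# Demo: a 2x2 grid of 64x64 elemental images
demo = np.random.rand(2, 2, 64, 64)
holo = ip_to_hologram(demo)
print(holo.shape)  # (128, 128)
```

Because every patch is computed independently, the two loops parallelize trivially, which is what makes a multi-GPU implementation attractive for real-time operation.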
We realized a full-color 3D image using a system of three LCDs, and we expanded the viewing zone by displaying images sequentially on one LCD panel (the 'time division' method).
Figure 5 shows parts of video images reconstructed by this system. The bottom left of each picture shows the actual objects, photographed with a separate camera. The capture area measures approximately 20cm diagonally, and the distance from the lens array to the background is 30cm. The reconstructed image measures 4cm diagonally, with a depth of 6cm and a viewing zone angle of 5°.
We are currently working on a method to automatically correct chromatic aberration,3 which occurs because light passes through the elemental lenses and the field lens before the IP image is captured. Another issue we are addressing is the maximum viewing zone angle of the electronic holography display, which is currently limited to approximately 15°.4 We aim to widen the angle during capture and reconstruction by using a lens array with a shorter focal length than the current one.5
One critical aspect of electronic holography is the small display size relative to that of other 3D systems. We therefore sought to increase the image size by placing an optical system containing lens arrays and other components in front of multiple SLMs.6 We colored the image using time-division multiplexing with red, green, and blue laser light sources. Figure 6 shows an external view of our experimental setup, and Figure 7 shows a display of hologram data converted from images captured by the camera system in Figure 2. These pictures were focused on the panda, shown in Figure 7(a), and on the angel to its left (positioned in front of the panda), shown in Figure 7(b). From these results, we confirmed that we had accurately reproduced depth and surface texture in the 3D images.
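Time-division color multiplexing can be pictured as a simple frame schedule: within each full-color frame, the display cycles through the red, green, and blue holograms while the matching laser illuminates it, fast enough that the eye fuses the three sub-frames into one color image. The frame rate and function below are illustrative assumptions, not the authors' parameters:

```python
def frame_schedule(fps=60, colors=("red", "green", "blue")):
    """Split one full-color frame into equal per-color sub-frames.

    Illustrative only: a real system must also synchronize the SLM
    refresh with the laser switching, which this sketch omits.
    """
    sub_frame = 1.0 / (fps * len(colors))  # seconds per color slot
    return [(color, sub_frame) for color in colors]

# At 60 full-color frames/s, each color gets ~5.56 ms per frame
for color, dt in frame_schedule():
    print(color, round(dt * 1000, 2), "ms")
```

The same scheduling idea underlies the viewing-zone expansion described earlier, with spatial sub-holograms taking the place of color channels.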
In future work, we will seek to develop a real-time 3D system for large-scale electronic holography. Moreover, we will try to improve the field of view in the IP, and aim to realize a real-time 3D system with a larger display and viewing zone.
National Institute of Information and Communications Technology (NICT)
Yasuyuki Ichihashi received a PhD in engineering from Chiba University in 2010. His current research interests include 3D scene capturing, processing, and electronic holography reconstruction systems.
Ryutaro Oi is a senior researcher. He received his PhD from the University of Tokyo. He joined the Japan Broadcasting Corporation (NHK) Science and Technology Research Laboratories as a visiting researcher in 2004, and joined NICT in 2006. His current research interests include capture and reconstruction for electronic holography.
Takanori Senoh is a senior researcher. He received his DEng from the University of Tokyo in 2007. His current research interests include 3D imaging and electronic holography systems.
Hisayuki Sasaki is a research expert. He received his MSc in engineering from the University of Tsukuba in 2001 before joining NHK. In 2006 he joined the NHK Science and Technology Research Laboratories. His current research interests include IP and electronic holography.
Koki Wakunami is a researcher. He received his PhD in engineering from the Tokyo Institute of Technology in 2013. His current research interests include computational holography and electronic holography systems.
Kenji Yamamoto is a director at NICT. He received his DEng from Nagoya University in 2007, when he joined NICT. His current research interests include electronic holography, multiview images, and IP.