Reproducing deep 3D images from captured light fields

A new method of high-resolution computational holography has been developed and experimentally demonstrated.

17 December 2014
Masahiro Yamaguchi

For the next generation of 3D displays, realistic 3D scenes that provide the same depth cues as human vision are expected. For this purpose, a 3D display that reproduces real or virtual optical images is required, and so-called light field displays are a promising option for these applications.1–3 In geometrical optics, a light field is described as a set of dense light rays, and a variety of ray-based light field displays have been developed on this basis. There are, however, two fundamental limitations of the geometrical optics approach that make it difficult to create high-resolution images of deep 3D scenes with ray-based 3D displays.4–6 First, when wave optics is considered, the effect of diffraction must be taken into account (see Figure 1). In a light field display, the width of a light ray is determined by the aperture on the display surface, but diffraction at that aperture also broadens the ray. The smaller the aperture, the greater the diffraction effect, and the more an image located far from the display plane becomes blurred. Second, the resolution of light field displays is affected by the sampling of the light rays, which is sparse for images far from the display plane, since the rays are usually sampled on the display plane.
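To make this trade-off concrete, the short sketch below (not taken from the article) models the blur of a point displayed at distance z from the display plane as the geometric ray width, set by the aperture a, plus the first-order diffraction spread λz/a, so the blur cannot fall much below 2√(λz) no matter how the aperture is chosen. The wavelength, depth, and aperture values are illustrative assumptions.

```python
# Rough spot-size model for a ray-based display: a point rendered at
# distance z from the display plane is blurred by the geometric ray
# width (the aperture a) plus the first-order diffraction spread
# lambda * z / a. Simple scalar estimate; all values are assumptions.

import numpy as np

wavelength = 532e-9                                   # green light [m] (assumed)
z = 0.10                                              # image depth from display plane [m] (assumed)
apertures = np.array([10e-6, 50e-6, 100e-6, 500e-6])  # display apertures [m] (assumed)

spot = apertures + wavelength * z / apertures         # approximate blur width [m]
best = np.sqrt(wavelength * z)                        # aperture that minimizes the blur

for a, s in zip(apertures, spot):
    print(f"aperture {a*1e6:6.1f} um -> blur ~ {s*1e6:7.1f} um")
print(f"minimum blur ~ {2*best*1e6:.0f} um at aperture ~ {best*1e6:.0f} um")
```

Under these assumed values the blur never drops much below a few hundred micrometers for an image 10cm away from the display plane, which is why deep scenes appear soft on ray-based displays regardless of how finely the display surface is pixelated.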


Figure 1. The resolution of an image far from the display plane is degraded through diffraction and ray sampling in ray-based 3D displays.

In contrast, holographic 3D displays are based on wave optics and can produce high-resolution images even when the image is located far from the display plane.6,7 These displays are therefore suitable for creating deep 3D scenes. To achieve realistic displays, however, it is also necessary to render the 3D images: although advanced rendering techniques from conventional computer graphics can be used for ray-based displays, wave-based rendering is required for computational holography. It is also necessary to realistically generate holographic images of real scenes. Some techniques that use ray-based rendering methods8 are based on the principle of the holographic stereogram, but these involve the same limitations as ray-based 3D displays, i.e., the resolution of a deep scene is degraded and the main advantage of holography is lost.

We previously proposed a method for computing holograms from ray information without losing resolution in deep 3D scenes.6 In this method, we define a virtual plane, the ray-sampling (RS) plane, near an object, and sample the light rays from the object on this plane. We then convert the set of light rays that pass through the RS plane into the wavefront on the RS plane using fast Fourier transforms. Because the hologram plane and the RS plane are generally separated (especially in the case of a deep 3D scene), we calculate the wave propagation from the RS plane to the hologram plane using a discrete Fresnel transform. Finally, we obtain the hologram pattern from the interference of this wavefront with a reference wave.
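The sketch below (Python/NumPy) shows one simplified way these steps could be chained together; it is an illustration under stated assumptions, not the authors' implementation. The 4D ray field L[y, x, v, u], the random diffuser phase attached to each ray, the sampling pitch, the propagation distance, and the transfer-function form used for the discrete Fresnel propagation are all assumptions.

```python
# Sketch of ray-sampling-plane hologram computation. A 4D ray field
# L[y, x, v, u] stores, for every sampling point (y, x) on the RS plane,
# the ray intensities over direction (v, u). Each directional slice is
# converted to a local wavefront patch by an inverse FFT (rays over
# angle act as the local spatial-frequency spectrum), the patches are
# tiled into the RS-plane wavefront, which is Fresnel-propagated to the
# hologram plane and interfered with a plane reference wave.
# All shapes, pitches, and distances are illustrative assumptions.

import numpy as np

wavelength = 532e-9   # [m], assumed
pitch = 2e-6          # sampling pitch on the RS and hologram planes [m], assumed
z = 0.2               # RS plane -> hologram plane distance [m], assumed

def rays_to_wavefront(ray_field, rng=np.random.default_rng(0)):
    """Convert sampled ray intensities L[y, x, v, u] to a complex wavefront."""
    ny, nx, nv, nu = ray_field.shape
    amp = np.sqrt(ray_field)
    # random phase per ray to emulate a diffusely reflecting object surface
    phase = np.exp(1j * 2 * np.pi * rng.random(ray_field.shape))
    # inverse FFT over the direction axes gives the local wavefront patch
    patches = np.fft.ifft2(np.fft.ifftshift(amp * phase, axes=(2, 3)), axes=(2, 3))
    # tile the (nv, nu) patches into a single (ny*nv, nx*nu) wavefront
    return patches.transpose(0, 2, 1, 3).reshape(ny * nv, nx * nu)

def fresnel_propagate(field, z, wavelength, pitch):
    """Fresnel propagation over distance z (transfer-function form)."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny, d=pitch)
    fx = np.fft.fftfreq(nx, d=pitch)
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    H = np.exp(1j * 2 * np.pi * z / wavelength) \
        * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy ray field: 32 x 32 RS-plane sampling points, 16 x 16 directions each
rays = np.random.default_rng(1).random((32, 32, 16, 16))
object_wave = fresnel_propagate(rays_to_wavefront(rays), z, wavelength, pitch)

# Interference with an on-axis plane reference wave gives the hologram pattern
reference = np.ones_like(object_wave)
hologram = np.abs(object_wave + reference) ** 2
```

At the hologram sizes reported later in the article (16,384 × 16,384 samples), the same FFT-based steps must run on far larger arrays, which is where the computational cost discussed in the conclusion comes from.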

In our computational hologram method, we can use ray-based rendering techniques while the long-distance wave propagation is simulated with wave optics. This results in high-resolution reproduction, even for images far from the hologram plane. When the depth range of the objects is large, we can still reproduce the deep scene by defining multiple RS planes, one near each object.9 We have also developed a method for reducing speckle noise in our technique.10

We previously used an integral imaging approach for the computational holography of a real scene with the RS plane.11 To capture an image with a wider field of view, however, a large lens array is required. We have therefore applied a scanning camera system to capture a full-parallax 3D image, as shown in Figure 2.12 In this system, a vertical camera array is scanned horizontally, and the views between the cameras in the vertical direction are interpolated using depth image-based rendering. In this way we generate photorealistic 3D images with high-density light rays. We then calculate the light rays on the RS plane, convert them to a wavefront, and produce a hologram with our computational technique. Although this system can only be applied to static scenes, it is compact and capable of capturing high-resolution images.
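The article does not detail the interpolation step, but the core of depth image-based rendering is a per-pixel warp: each pixel of a captured view is shifted along the camera baseline by a disparity proportional to focal length × baseline / depth. The sketch below is a bare-bones illustration of such a vertical warp for rectified, parallel cameras; hole filling, occlusion handling, and blending of the two nearest real views, which a practical system needs, are omitted, and every parameter value is an assumption.

```python
# Minimal depth image-based rendering (DIBR) sketch: forward-warp one
# captured view to a virtual camera displaced vertically by `baseline_m`,
# using the per-pixel disparity focal_px * baseline_m / depth.
# Rectified, parallel cameras assumed; all values are illustrative.

import numpy as np

def warp_vertical(image, depth, focal_px, baseline_m):
    """Forward-warp `image` to a camera shifted vertically by baseline_m."""
    h, w = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)
    disparity = focal_px * baseline_m / depth      # vertical shift in pixels
    ys, xs = np.indices((h, w))
    yt = np.round(ys - disparity).astype(int)      # target rows (sign depends on geometry)
    valid = (yt >= 0) & (yt < h)
    # simple Python loop with a z-buffer for clarity; vectorize for real use
    for y, x, ty in zip(ys[valid], xs[valid], yt[valid]):
        if depth[y, x] < zbuf[ty, x]:              # keep the nearest surface
            zbuf[ty, x] = depth[y, x]
            out[ty, x] = image[y, x]
    return out

# Toy example: a 480 x 640 random view and a smooth depth map
rng = np.random.default_rng(0)
view = rng.random((480, 640, 3))
depth = 1.0 + np.linspace(0, 1, 480)[:, None] * np.ones((480, 640))
virtual = warp_vertical(view, depth, focal_px=800.0, baseline_m=0.02)
```

In practice the views warped from the two neighbouring cameras would be blended and any remaining holes inpainted before the interpolated images are used to compute the light rays on the RS plane.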


Figure 2. Method for calculating a hologram using a ray-sampling (RS) plane. A real scene is captured by scanning a vertical camera array.

During our experiments, we horizontally scanned a vertical array of seven cameras for about 40 seconds, capturing 577 × 7 images, each with 480 × 640 pixels. We then interpolated the vertical views and computed the holograms based on the geometry shown in Figure 3. We defined an RS plane near the object and calculated the Fresnel diffraction from the RS plane to each hologram (i.e., left and right). Each hologram measured 34.4 × 34.4mm and contained 16,384 × 16,384 pixels. The images reconstructed from both holograms are shown in Figure 4. Although the image size was small and speckle noise affected the reconstruction, we were able to observe a high-resolution 3D image. Another example of a reconstructed image is shown in Figure 5.
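A few figures follow directly from the reported hologram parameters (34.4 × 34.4mm, 16,384 × 16,384 pixels); the 532nm wavelength used for the viewing-angle estimate is an assumption, since the article does not state the laser wavelength.

```python
# Back-of-the-envelope figures for the holograms reported above
# (34.4 x 34.4 mm, 16384 x 16384 pixels); 532 nm is an assumed wavelength.

import numpy as np

side_m = 34.4e-3
n = 16384
wavelength = 532e-9

pitch = side_m / n                                            # ~2.1 um sampling pitch
theta_max = np.degrees(np.arcsin(wavelength / (2 * pitch)))   # half diffraction angle
mem_gb = n * n * 8 / 1e9                                      # one complex64 field in memory

print(f"pixel pitch   : {pitch*1e6:.2f} um")
print(f"viewing angle : ~{2*theta_max:.1f} deg (at 532 nm)")
print(f"field memory  : ~{mem_gb:.1f} GB (complex64)")
```

Roughly 2GB is needed just to hold one complex field of this size before any FFTs are performed, which illustrates why the hologram and RS-plane sizes are the main practical limitation noted in the conclusion.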


Figure 3. Geometry for calculating holograms for the left and right eyes.

Figure 4. (a) Example of an interpolated image used in tests of the computational holography method. Reconstructed images from the (b) left and (c) right holograms are also shown.

Figure 5. A reconstructed image from the hologram of a human portrait.

We have developed and successfully demonstrated a method of computational holography that is suitable for the high-resolution display of deep 3D scenes. Because the required computation grows rapidly with image size, the sizes of the hologram and the RS plane currently limit our technique, and the next step in our work will therefore be to reduce its computational cost. We expect that, by increasing the scale of the RS plane, our method will become applicable to a variety of scenes, such as life-sized 3D human portraits.


Masahiro Yamaguchi
Tokyo Institute of Technology
Tokyo, Japan

Masahiro Yamaguchi has been a professor in the Global Scientific Information and Computing Center since 2011. From 1996 to 2011 he was an associate professor at the Imaging Science and Engineering Laboratory. His research interests include color and multispectral imaging, holography, and pathology image analysis.


References:
1. F. Okano, J. Arai, K. Mitani, M. Okui, Real-time integral imaging based on extremely high resolution video system, Proc. IEEE 94, p. 490-501, 2006.
2. A. Jones, I. McDowall, H. Yamada, M. Bolas, P. Debevec, Rendering for an interactive 360° light field display, Trans. Graphics 26, p. 40, 2007. doi:10.1145/1276377.1276427
3. G. Wetzstein, D. Lanman, M. Hirsch, R. Raskar, Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting, Trans. Graphics 31, p. 80, 2012. doi:10.1145/2185520.2185576
4. T. Okoshi, Three-Dimensional Imaging Techniques, Academic Press, New York, 1976.
5. B. Lee, S.-W. Min, B. Javidi, Theoretical analysis for three-dimensional integral imaging systems with double devices, Appl. Opt. 41, p. 4856-4865, 2002.
6. K. Wakunami, M. Yamaguchi, Calculation for computer generated hologram using ray-sampling plane, Opt. Express 19, p. 9086-9101, 2011.
7. E. N. Leith, J. Upatnieks, Wavefront reconstruction with diffused illumination and three-dimensional objects, J. Opt. Soc. Am. 54, p. 1295-1301, 1964.
8. T. Mishina, M. Okui, F. Okano, Calculation of holograms from elemental images captured by integral photography, Appl. Opt. 45, p. 4026-4036, 2006.
9. K. Wakunami, H. Yamashita, M. Yamaguchi, Occlusion culling for computer generated hologram based on ray-wavefront conversion, Opt. Express 21, p. 21811-21822, 2013.
10. T. Utsugi, M. Yamaguchi, Speckle-suppression in hologram calculation using ray-sampling plane, Opt. Express 22, p. 17193-17206, 2014.
11. K. Wakunami, M. Yamaguchi, B. Javidi, High-resolution three-dimensional holographic display using dense ray sampling from integral imaging, Opt. Lett. 37, p. 5103-5105, 2012.
12. M. Yamaguchi, K. Wakunami, M. Inaniwa, Computer generated hologram from full-parallax 3D image data captured by scanning vertical camera array, Chinese Opt. Lett. 12, p. 060018, 2014.