Reproducing deep 3D images from captured light fields
A new method of high-resolution computational holography has been developed and experimentally demonstrated.
For the next generation of 3D displays, realistic 3D scenes, with the same depth cues as natural human vision, are expected to be produced. For this purpose, a 3D display that reproduces real or virtual optical images is required. So-called light field displays are a promising option for these applications.1–3 According to geometrical optics, a light field is produced by a set of dense light rays, and a variety of ray-based light field displays have been developed on this principle. There are, however, several fundamental limitations of the geometrical optics approach that make it difficult to create high-resolution images of deep 3D scenes with ray-based 3D displays.4–6 First, when wave optics are considered, diffraction must be taken into account (see Figure 1). In light field displays, a light ray's width is determined by the aperture on the display surface, but diffraction at that aperture also spreads the ray: the smaller the aperture, the greater the diffraction effect, and the more blurred the image reproduced away from the display plane becomes. Second, the resolution of light field displays is limited by the sampling of the light rays, which is sparse for images far from the display plane (light rays are usually sampled on the display plane).
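The aperture trade-off described above can be made concrete with a back-of-the-envelope estimate. In this illustrative sketch (the wavelength, depth, and aperture values are assumptions, not figures from the article), the width of a "ray" after propagating a distance z from an aperture of size a is approximated as the geometric width plus the diffraction spread, roughly a + λz/a:

```python
# Illustrative estimate of ray blur in a ray-based light field display.
# Shrinking the aperture eventually *increases* the blur, which is the
# diffraction limitation described in the text. All values are assumptions.

wavelength = 532e-9   # assumed green wavelength, m
z = 0.10              # assumed image depth from the display plane, m

def ray_width(aperture, wavelength=wavelength, z=z):
    """Approximate ray width at depth z: geometric term + diffraction term."""
    return aperture + wavelength * z / aperture

for a in (200e-6, 100e-6, 50e-6, 20e-6):
    print(f"aperture {a*1e6:5.0f} um -> ray width {ray_width(a)*1e6:7.1f} um")
```

With these numbers the blur is minimized near a = sqrt(λz) ≈ 230 µm; apertures much smaller than that make the reproduced point wider, not sharper.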
In contrast, holographic 3D displays are based on wave optics and can produce high-resolution images even when the image is located far from the display plane.6,7 These displays are therefore suitable for creating deep 3D scenes. To achieve realistic displays, however, the 3D images must also be rendered realistically. Although advanced rendering techniques from conventional computer graphics can be used for ray-based displays, wave-based rendering is required for computational holography. It must also be possible to generate holographic images of a real scene. Some techniques that use ray-based rendering methods8 are based on the principle of the holographic stereogram, but these suffer the same limitations as ray-based 3D displays: the resolution of deep scenes is degraded and the main advantage of holography is lost.
We previously proposed a method for computing holograms from ray information without losing resolution in the deep 3D scene.1 In this method, we define a virtual plane near an object, the ray-sampling (RS) plane, on which the light rays from the object are sampled. We then convert the set of light rays that pass through the RS plane into a wavefront on the RS plane using fast Fourier transforms. Because the hologram and RS planes are located at different depths (especially in the case of a deep 3D scene display), we calculate the wave propagation from the RS plane to the hologram plane using a discrete Fresnel transform. Finally, we obtain the hologram pattern from the interference of this wavefront with a reference wave.
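The pipeline above can be sketched numerically. The following is a minimal, self-contained illustration, not the authors' implementation: the light field is stand-in random data, all sizes and distances are assumed, and the RS-plane-to-hologram propagation uses the angular-spectrum method as a stand-in for the discrete Fresnel transform named in the text.

```python
import numpy as np

# Minimal sketch of the ray-sampling (RS) plane pipeline.
# All parameters below are illustrative assumptions.
wavelength = 532e-9          # assumed wavelength (m)
pitch = 8e-6                 # assumed pixel pitch on the RS/hologram planes (m)
seg = 16                     # angular ray samples per RS sampling point
n_seg = 32                   # RS sampling points per side
N = seg * n_seg              # full wavefront resolution (512 x 512 here)
z = 0.05                     # assumed RS-plane-to-hologram distance (m)

rng = np.random.default_rng(0)

# 1) Sampled light rays on the RS plane: for each sampling point, an
#    angular distribution of ray amplitudes (stand-in random data here).
rays = rng.random((n_seg, n_seg, seg, seg))

# 2) Rays -> wavefront: an inverse FFT of each angular distribution gives
#    the local wavefront segment at that RS point (random phases emulate
#    a diffusely reflecting object).
phases = np.exp(2j * np.pi * rng.random(rays.shape))
segments = np.fft.ifft2(np.fft.ifftshift(rays * phases, axes=(2, 3)),
                        axes=(2, 3))
wavefront = segments.transpose(0, 2, 1, 3).reshape(N, N)

# 3) Propagate from the RS plane to the hologram plane (angular-spectrum
#    method, standing in for the discrete Fresnel transform).
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(1j * 2 * np.pi * z *
           np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2)))
field = np.fft.ifft2(np.fft.fft2(wavefront) * H)

# 4) Interference with a tilted plane reference wave gives the hologram.
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
reference = np.exp(1j * 2 * np.pi * np.sin(np.deg2rad(1.0)) * X / wavelength)
hologram = np.abs(field + reference) ** 2

print(hologram.shape)        # (512, 512)
```

The key point the sketch captures is the division of labor: rendering happens in the ray domain (step 1), while the long propagation to the hologram plane is computed with wave optics (step 3), so resolution is not lost for deep scenes.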
In our computational hologram method, we can use ray-based rendering techniques while the long-distance wave propagation is simulated with wave optics. The result is high-resolution reproduction, even for images far from the hologram plane. When the depth range of the objects is large, we can reproduce the deep scene by defining multiple RS planes, one near each object.9 We have also developed a method for reducing speckle noise in our technique.10
We previously used an integral imaging approach to capture a real scene for computational holography with the RS plane.11 To capture an image with a wider field of view, however, a large lens array is required. We have therefore applied a scanning camera system to capture a full-parallax 3D image, as shown in Figure 2.12 In this system, a vertical camera array scans horizontally, and views between the cameras in the vertical direction are interpolated using depth image-based rendering. We can then generate photorealistic 3D images with high-density light rays. We calculate the light rays on the RS plane, convert them to a wavefront, and produce a hologram with our computational technique. Although this system can only be applied to static scenes, it is compact and captures high-resolution images.
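The article does not detail the depth image-based rendering step, so the following is a hypothetical sketch of the basic idea: forward-warping one camera image toward a virtual viewpoint between two vertically adjacent cameras, using a per-pixel disparity derived from depth. The focal length `f` and camera spacing `baseline` are assumed values, not parameters of the actual system.

```python
import numpy as np

def synthesize_view(img, depth, alpha, f=500.0, baseline=0.01):
    """Forward-warp `img` toward a virtual camera a fraction `alpha` of the
    baseline below the source camera. `f` (pixels) and `baseline` (m) are
    assumed geometry, not values from the article."""
    h, w = depth.shape
    # Per-pixel vertical disparity: closer pixels shift more.
    disparity = np.round(alpha * f * baseline / depth).astype(int)
    out = np.zeros_like(img)
    zbuf = np.full((h, w), np.inf)
    rows = np.arange(h)[:, None] + disparity        # destination row per pixel
    valid = (rows >= 0) & (rows < h)
    for r, c in zip(*np.nonzero(valid)):
        tr = rows[r, c]
        if depth[r, c] < zbuf[tr, c]:               # z-buffer: keep nearest surface
            zbuf[tr, c] = depth[r, c]
            out[tr, c] = img[r, c]
    return out

# Demo: a constant depth map reduces the warp to a pure vertical shift.
img = np.arange(100, dtype=float).reshape(10, 10)
depth = np.full((10, 10), 0.5)
shifted = synthesize_view(img, depth, alpha=0.5)
```

A practical interpolator also has to fill the holes that forward warping leaves at disocclusions, typically from the second camera's view; that step is omitted here for brevity.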
In our experiments, we horizontally scanned a vertical array of seven cameras for about 40 seconds, capturing 577 × 7 images, each with 480 × 640 pixels. We then interpolated the vertical views and computed the holograms based on the geometry shown in Figure 3. We defined an RS plane near the object so that we could calculate the Fresnel diffraction from the RS plane to each hologram (i.e., left and right). The holograms measured 34.4 × 34.4 mm and had 16,384 × 16,384 pixels. The reconstructed images from both holograms are shown in Figure 4. Although the image size was small and speckle noise affected the reconstruction, we observed a high-resolution 3D image. Another example of a reconstructed image is shown in Figure 5.
We have developed and experimentally demonstrated a method of computational holography that is suited to the high-resolution display of deep 3D scenes. The current limitations of our technique are the sizes of the hologram and the RS plane, since larger images require substantially more computation. The next step in our work will therefore be to reduce the computational cost. We expect that, by increasing the scale of the RS plane, our method will become applicable to a variety of scenes, such as life-sized 3D human portraits.
Masahiro Yamaguchi has been a professor in the Global Scientific Information and Computing Center since 2011. From 1996 to 2011 he was an associate professor at the Imaging Science and Engineering Laboratory. His research interests include color and multispectral imaging, holography, and pathology image analysis.