
Electronic Imaging & Signal Processing

Realistic 3D deep scene display by computational holography

Light-ray to wavefront interconversion enables high-resolution displays.
22 April 2011, SPIE Newsroom. DOI: 10.1117/2.1201103.003658

Holography is a superior medium for high-quality 3D displays because it reproduces the depth cues of human vision. Its critical advantage is the ability to reproduce deep 3D scenes, which is difficult with other methods. An electronic holographic display requires methods for holographic fringe calculation that reproduce high-quality 3D images, together with high-resolution display devices and high-performance computing. Computational holography simulates wave propagation, and the quality of its reproduced images currently lags behind that of conventional computer graphics.

There are two common approaches to hologram computation. The wavefront-based method synthesizes spherical waves emitted from point sources on the object surface.1 Because it simulates the physical propagation of the wavefront, it produces high-resolution images, even of deep 3D scenes. However, handling occlusion and surface reflection, both vital for realistic 3D display, is challenging. Light-ray recording and reproduction is an alternative approach2 in which advanced graphics techniques, such as ray tracing and image-based rendering, generate the hologram fringe. However, the spatial resolution of reconstructed images located far from the display plane decreases because of light-ray sampling and the diffraction limit. We propose a hologram-computation method that combines the advantages of both approaches.
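To make the wavefront-based approach concrete, the following is a minimal 1D sketch (our own illustration, not the authors' implementation; the wavelength, reference angle, and sampling pitch are assumed values). Each hologram sample sums spherical waves from point sources on the object, and the fringe is the intensity of their interference with a tilted plane reference wave:

```python
import cmath
import math

def point_source_fringe(points, xs, wavelength=633e-9, ref_angle=0.02):
    """Wavefront-based fringe on a 1D hologram line: superpose spherical
    waves from point sources (px, pz, amplitude) at each sample x, then
    record the intensity of interference with a plane reference wave."""
    k = 2 * math.pi / wavelength
    fringe = []
    for x in xs:
        obj = 0j
        for px, pz, amp in points:
            r = math.hypot(x - px, pz)              # point-to-sample distance
            obj += amp * cmath.exp(1j * k * r) / r  # spherical wave term
        ref = cmath.exp(1j * k * x * math.sin(ref_angle))  # tilted reference
        fringe.append(abs(obj + ref) ** 2)          # recorded intensity
    return fringe

# One point source 200 mm behind a hologram line sampled at 10 um pitch.
xs = [i * 1e-5 for i in range(-64, 64)]
fringe = point_source_fringe([(0.0, 0.2, 1.0)], xs)
```

The per-sample sum over all point sources is what makes this method faithful to wave optics, and also what makes it expensive for dense object models.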

Consider a rectangular window in 3D space, as shown in Figure 1. An ideal 3D display reproduces all light rays that pass through the window, i.e., the light field. In our proposed method,3 the light field is sampled within this window, designated the ray-sampling (RS) plane, which is defined near the object to avoid a decrease in resolution. The sampled light field is converted to a wavefront by taking the Fourier transform of the angular distribution of light-ray intensity. Wavefront propagation from the RS plane to the final hologram is calculated by a Fresnel transform, and the hologram fringe is obtained by interference with a virtual reference wave. Figure 2 presents simulated reconstructions by a ray-based method and by ours. The object is two-dimensional (2D) and located 200mm behind the hologram plane. Our approach reproduces a high-resolution image, whereas the ray-based image is severely blurred.
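The pipeline above can be sketched numerically in 1D (a hedged illustration, not the authors' code: the DFT convention, random phase, pitch, wavelength, and reference angle are all assumptions). Sampled ray intensities at one RS-plane point become a local wavefront patch via a Fourier transform, the patch propagates to the hologram plane by a direct Fresnel sum, and interference with a plane reference wave yields the fringe:

```python
import cmath
import math
import random

WL = 633e-9                  # assumed wavelength (m)
K = 2 * math.pi / WL

def rays_to_wavefront(intensities):
    """Convert a sampled angular ray-intensity distribution at one RS-plane
    point into a wavefront patch: attach a random phase to each ray
    amplitude, then take a discrete Fourier transform."""
    n = len(intensities)
    amps = [math.sqrt(i) * cmath.exp(2j * math.pi * random.random())
            for i in intensities]
    return [sum(amps[m] * cmath.exp(-2j * math.pi * m * u / n)
                for m in range(n))
            for u in range(n)]

def fresnel(field, pitch, z):
    """Direct (paraxial) Fresnel propagation of a 1D field over distance z,
    as a convolution with the quadratic-phase Fresnel kernel."""
    n = len(field)
    out = []
    for i in range(n):
        acc = 0j
        for j in range(n):
            dx = (i - j) * pitch
            acc += field[j] * cmath.exp(1j * K * dx * dx / (2 * z))
        out.append(acc)
    return out

def hologram_fringe(field, pitch, ref_angle=0.02):
    """Interfere the propagated wavefront with a tilted plane reference."""
    return [abs(u + cmath.exp(1j * K * i * pitch * math.sin(ref_angle))) ** 2
            for i, u in enumerate(field)]

rays = [1.0] * 16                                  # uniform angular intensity
patch = rays_to_wavefront(rays)                    # wavefront on the RS plane
holo = hologram_fringe(fresnel(patch, 1e-5, 0.2), 1e-5)
```

In practice an FFT and a band-limited Fresnel kernel would replace the direct sums here; the sketch only shows the order of operations: rays, Fourier transform, Fresnel transform, interference.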


Figure 1. Hologram calculation schematic using a ray-sampling (RS) plane. Light-ray angular distribution (top) is collected at each sampled point. A fast Fourier transform (FFT) is applied to ray information after random phase modulation, yielding the wavefront of a small area on the RS plane (bottom). A Fresnel transform is subsequently applied to derive the wavefront on the hologram plane.

Figure 2. Top: simulation result. Reconstructed image by (a) a ray-based hologram and (b) our method. The hologram is 4096×4096 pixels, and the number of ray-sampling points is 128×128. Bottom: schematic of our proposed method, incorporating mutual occlusion processing. R2W and W2R represent ray-to-wavefront and wavefront-to-ray transformations, respectively.

Occlusion processing is a crucial issue in hologram calculation.4 Self-occlusion, in which surfaces are hidden by other surfaces of the same object, can be handled during rendering. Mutual occlusion occurs between different objects and can be handled by defining an RS plane for each object at its depth, as shown in Figure 2. Wavefront propagation from the first RS plane to the second is first calculated by a Fresnel transform to obtain the wavefront on the second RS plane. This wavefront is converted to ray information by an inverse Fourier transform, because occlusion processing is easily performed in the light-ray domain. The ray information after occlusion processing is converted back to a wavefront, and a final Fresnel transform yields the wavefront on the hologram plane. Figure 3 shows reconstructed images of a hologram calculated by our method. Graphics rendering handles surface reflection, and both self- and mutual occlusions are properly corrected.
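The W2R/R2W round trip and the ray-domain masking step can be sketched as follows (our own illustration under a hypothetical 1D DFT convention; the functions and mask are not from the paper). Because the forward and inverse transforms are exact inverses, the occlusion mask is the only change the round trip applies:

```python
import cmath
import math

def w2r(wavefront):
    """Wavefront-to-ray (W2R): inverse DFT yields the complex angular
    amplitude distribution of the light rays."""
    n = len(wavefront)
    return [sum(wavefront[u] * cmath.exp(2j * math.pi * m * u / n)
                for u in range(n)) / n
            for m in range(n)]

def r2w(rays):
    """Ray-to-wavefront (R2W): forward DFT of the angular amplitudes."""
    n = len(rays)
    return [sum(rays[m] * cmath.exp(-2j * math.pi * m * u / n)
                for m in range(n))
            for u in range(n)]

def occlude(back_rays, front_rays, blocked):
    """Mutual occlusion in the ray domain: wherever the foreground object
    blocks a ray direction, its ray replaces the background ray."""
    return [f if b else r for r, f, b in zip(back_rays, front_rays, blocked)]

# Mask one ray direction of a background wavefront with a dark foreground.
field = [1 + 0j, 0.5j, -1 + 0j, 0.25 + 0.25j]
masked = r2w(occlude(w2r(field), [0j] * 4, [False, True, False, False]))
```

This is the appeal of the hybrid method: geometric operations such as masking are trivial on rays, while propagation between planes stays in the wavefront domain.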


Figure 3. Optical reconstruction of a calculated hologram by our method. (a) The object is located 20mm behind the hologram plane. (b) Two objects are located 300mm and 200mm behind the hologram plane. An RS plane is defined near each object.

In summary, we convert light rays to wavefronts and apply advanced computer-graphics rendering techniques to hologram computation. This improves occlusion processing and surface shading for high-resolution display of deep 3D scenes. More complex scenes, such as objects around an areal plane, as well as speckle-noise reduction, are subjects of future research.


Masahiro Yamaguchi
Global Scientific Information and Computing Center, Tokyo Institute of Technology
Tokyo, Japan

Masahiro Yamaguchi received his BS, MEng, and PhD degrees from Tokyo Institute of Technology in 1987, 1989, and 1994, respectively. He was an associate professor in the Imaging Science and Engineering Laboratory from 1996 to 2011 and is currently a professor.

Koki Wakunami, Hiroaki Yamashita
Department of Information Processing, Tokyo Institute of Technology
Yokohama, Japan

Koki Wakunami is a doctoral student. His research focuses on computer-generated holography for 3D displays.

Hiroaki Yamashita is a master's student. His research focuses on hologram computation for 3D displays.


References:
1. P. St-Hilaire, S. A. Benton, M. E. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa, J. S. Underkoffler, Electronic display system for computational holography, Proc. SPIE 1212, pp. 174-182, 1990. doi:10.1117/12.17980
2. P. W. McOwan, W. J. Hossack, R. E. Burge, Three-dimensional stereoscopic display using ray traced computer generated holograms, Opt. Commun. 82, pp. 6-11, 1991. doi:10.1016/0030-4018(91)90181-C
3. K. Wakunami, M. Yamaguchi, Calculation of computer-generated hologram for 3D display using light-ray sampling plane, Proc. SPIE 7619, pp. 76190A, 2010. doi:10.1117/12.843149
4. K. Matsushima, S. Nakahara, Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method, Appl. Opt. 48, pp. H54-H63, 2009. doi:10.1364/AO.48.000H54