
Proceedings Paper

Image-based computational holography for deep 3D scene display
Author(s): Masahiro Yamaguchi

Paper Abstract

Holographic technology is considered highly promising for future 3D electronic displays. The advantage of a holographic display is its capability of reproducing all the depth cues for 3D perception, which is especially significant for deep 3D scenes. This paper introduces a technique for computing holograms that can reproduce deep 3D scenes. Conventional methods for computational holography are mostly based on wave propagation from a point cloud. Those methods provide accurate simulation of wave propagation from 3D objects, but generating realistic images is not straightforward: hidden-surface removal, surface glossiness, translucency, and similar effects are difficult to handle. Another approach to hologram computation is based on the holographic stereogram, or ray-based 3D display. In this case the images are rendered using conventional computer-graphics techniques, which can generate very realistic and natural images. However, because this approach relies on ray-based image generation, the resolution of a deep scene deteriorates with the distance of the image from the screen. We have proposed a method for hologram computation that utilizes ray-based rendering without this loss of image resolution. In the method, a virtual ray-sampling (RS) plane is defined near the object, and the set of light rays is converted to a wavefront on the RS plane using the Fourier transform. The propagation from the RS plane to the hologram plane is calculated by wave-propagation theory, such as the Fresnel diffraction formula. It is thus possible to generate realistic 3D images by exploiting existing ray-based rendering techniques, such as ray culling, shading, and texture mapping, while the simulation of wavefront propagation enables high-resolution image reproduction even for deep 3D scenes.
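The two numerical steps described above — converting a sampled ray set to a wavefront on the RS plane by Fourier transform, then propagating that wavefront to the hologram plane by the Fresnel diffraction formula — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the single-patch data layout, and the random-phase assignment are assumptions made for the sketch.

```python
import numpy as np

def rays_to_wavefront(ray_amplitudes):
    """Convert sampled ray amplitudes at one RS-plane sample point into a
    complex wavefront patch via an inverse FFT. Hypothetical layout:
    ray_amplitudes[u, v] is the amplitude of the ray with direction
    index (u, v). Random phases are attached so that energy spreads over
    the patch (an assumption of this sketch)."""
    phase = np.exp(2j * np.pi * np.random.rand(*ray_amplitudes.shape))
    return np.fft.ifft2(np.fft.ifftshift(ray_amplitudes * phase))

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z [m] with the paraxial
    Fresnel transfer function, evaluated in the frequency domain."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)            # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel (paraxial) transfer function; |H| = 1, so energy is conserved
    H = np.exp(1j * 2 * np.pi * z / wavelength) \
        * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit modulus, propagation in this sketch conserves the total field energy, which is a quick sanity check for any implementation of this step.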
In the proposed method, the RS plane is defined near the object for high-resolution image reproduction; if there are multiple objects in a deep 3D scene, an RS plane should be defined for each object. In this case, it is necessary to handle the mutual occlusion between objects located at different depths. For this purpose, we have also developed a method for occlusion culling between objects registered to different RS planes. In this method, the wavefront from a background object is converted to light rays, and the occlusion culling is implemented in ray space. The occlusion between objects is therefore processed accurately without excessive computational cost. We also analyze the number of RS planes required to reproduce a very deep 3D scene covering the region from near the hologram plane to infinity. Theoretical analysis shows that with 23 layers of RS planes, it is possible to display 3D images from the hologram plane to infinite distance at a resolution satisfactory for human vision. Experimental results with a hard-copy hologram demonstrate the effectiveness of the proposed technique for computational holography.
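The ray-space occlusion culling described above can be illustrated for a single RS-plane sampling block: the background wavefront is converted to its ray (angular) spectrum by FFT, ray directions blocked by the foreground object are replaced with the foreground's rays, and the merged spectrum is converted back to a wavefront. This is a hedged sketch; the per-block layout, the names `cull_block`, `fg_rays`, and `fg_hit_mask`, and the use of a boolean hit mask are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cull_block(bg_block, fg_rays, fg_hit_mask):
    """Occlusion culling for one RS-plane sampling block.

    bg_block    : complex wavefront over one block (N x N samples)
    fg_rays     : complex ray amplitudes of the foreground object,
                  same shape as bg_block (hypothetical input)
    fg_hit_mask : boolean array, True where a ray direction is blocked
                  by, and therefore replaced with, the foreground object
    """
    bg_rays = np.fft.fftshift(np.fft.fft2(bg_block))   # wavefront -> rays
    merged = np.where(fg_hit_mask, fg_rays, bg_rays)   # cull and substitute
    return np.fft.ifft2(np.fft.ifftshift(merged))      # rays -> wavefront
```

Working in ray space keeps the culling a per-direction masking operation on each block, which is why the mutual occlusion can be processed without a full wave-optics mutual-visibility computation; when no ray is blocked, the block round-trips through the FFT unchanged.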

Paper Details

Date Published: 31 December 2013
PDF: 5 pages
Proc. SPIE 9042, 2013 International Conference on Optical Instruments and Technology: Optical Systems and Modern Optoelectronic Instruments, 90420A (31 December 2013); doi: 10.1117/12.2049180
Author Affiliations:
Masahiro Yamaguchi, Tokyo Institute of Technology (Japan)

Published in SPIE Proceedings Vol. 9042:
2013 International Conference on Optical Instruments and Technology: Optical Systems and Modern Optoelectronic Instruments
Yongtian Wang; Xiaocong Yuan; Yunlong Sheng; Kimio Tatsuno, Editor(s)

© SPIE.