
Illumination & Displays

Large-depth 3D integral imaging system using an active sensor

A technique for capturing true 3D objects in a real-world environment and displaying them faithfully shows promise for applications such as 3DTV.
11 December 2007, SPIE Newsroom. DOI: 10.1117/2.1200711.0892

A wide variety of methods for realizing 3D imaging and displays have been developed to date. Among them, so-called integral imaging, proposed by Lippmann in 1908,1 has been an active area of research as a next-generation 3D display technique. Its advantages include full parallax (both horizontal and vertical), a continuous viewing angle, and full color. In integral imaging, objects are recorded through a collection of miniature lenses (lenslets) as a 2D elemental image array (EIA). Three-dimensional images are then reconstructed by displaying the EIA through a similar lenslet array, which integrates the recorded rays back into space. In general, obtaining large-depth integral imaging (LDII) without distortion requires setting the gap between the display panel and the lenslet array equal to the focal length of each lenslet.

LDII provides excellent depth in both the real (in front of the display panel) and virtual (behind it) image fields. However, the reconstructed 3D images suffer from very low resolution. Moreover, it is extremely difficult to display true 3D objects owing to limitations of the conventional capture (pickup) scheme. Solving this problem is an essential prerequisite for practical applications of 3D integral imaging.

Here, I describe a novel approach that combines real-time pickup of true 3D objects via an active sensor with reconstruction using LDII for faithful display of real and virtual images. Figure 1 shows a schematic diagram of the proposed system. It consists of four steps: pickup of 3D objects, color-image sectioning, EIA generation, and LDII display. The recently developed active sensors used for pickup measure the time of flight of light across the field of view. These devices directly generate conventional 2D video2 and provide both a color image and a per-pixel depth map. The quality of the images depends on the performance of the sensor.
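The depth values of such a sensor come from the round-trip travel time of light. As a minimal illustration of the principle only (not the sensor's actual signal processing, which typically uses modulated light and phase measurement):

```python
# Speed of light in m/s.
C = 299_792_458.0

def tof_depth_m(round_trip_time_s: float) -> float:
    """Depth from the round-trip time of a light pulse.

    The pulse travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return C * round_trip_time_s / 2.0
```

For an object about 3.5m away, as in the experiment described below, the round trip takes only about 23ns, which is why such sensors require very fast timing electronics.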

Figure 1. Schematic diagram of the proposed pickup method. LDII: Large-depth integral imaging.

Following pickup of the 3D objects, the images captured in Figure 1(a) are sectioned by color as shown in Figure 1(b). The process entails first quantizing the depth map into discrete levels so as to group pixels of similar depth. The sectioned images are then extracted from the original color image according to these depth groups. The next step generates an EIA for displaying the 3D images in LDII. Figure 1(c) shows the computational model of the EIA using sectioned images located at a uniform interval Δz along the z-axis. The EIA is calculated in the pickup plane, employing a formula based on ray optics,3 before being displayed by the LDII system (see Figure 2). When the EIA, shown on a projector, is imaged at a distance f from the lenslet array, the result is large-depth 3D images in both real and virtual space.
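The depth-sectioning step can be sketched as follows. This is a minimal illustration, not the author's implementation; the function name, the uniform binning of the depth range, and the NumPy representation are all assumptions:

```python
import numpy as np

def section_by_depth(color, depth, n_slices):
    """Split a color image into depth slices.

    color: (H, W, 3) array; depth: (H, W) array of the same scene.
    Pixels whose depth falls in bin i are copied into slice image i;
    all other pixels stay zero, so the slices partition the image.
    Returns the slice images and the center depth of each bin.
    """
    edges = np.linspace(depth.min(), depth.max() + 1e-9, n_slices + 1)
    slices = []
    for i in range(n_slices):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        img = np.zeros_like(color)
        img[mask] = color[mask]          # keep only pixels in this depth bin
        slices.append(img)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return slices, centers
```

Each slice can then be treated as a planar object at its bin's center depth; one way to compute the EIA from there is to trace, for every elemental-image pixel, the ray through its pinhole to the slice planes and sample the nearest non-empty one.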

Figure 2. (a) The LDII system. (b) Experimental setup. (c) Reconstructed images.

Preliminary experiments were carried out to show the usefulness of the proposed scheme. The 3D object is composed of three character patterns, ‘A’, ‘B’, and ‘C’, located at 330, 350, and 370cm, respectively, from the Z-cam sensor: see Figure 1(a). Each character is approximately 8.5×8.5cm. We captured the color images and depth map using the active sensor, which provides 740×468-pixel output, and generated a 50×40 EIA for LDII display. Each elemental image comprises 30×30 pixels mapped through a single pinhole. The optical setup for displaying the EIA in LDII, using a projector, is shown in Figure 2(a). The lenslet array comprises 50×50 lenslets, each with a focal length of 3mm and a diameter of 1.08mm. The reconstructed images are approximately 12mm across. ‘A’ was observed at z=−30mm, ‘B’ at z=0mm, and ‘C’ at z=30mm. Figure 2(c) shows the experimental results of the reconstructed 3D images. The measured viewing angle was approximately 17 degrees.
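The measured 17 degrees is close to what the lenslet geometry predicts. As a quick check (the formula is a common integral-imaging approximation, not taken from this article):

```python
import math

def viewing_angle_deg(pitch_mm: float, gap_mm: float) -> float:
    """Approximate viewing angle of an integral-imaging display.

    Each lenslet only passes rays within the cone its aperture
    subtends at the gap distance: psi = 2 * atan(p / (2g)).
    """
    return math.degrees(2.0 * math.atan(pitch_mm / (2.0 * gap_mm)))

# Lenslet diameter 1.08mm as the pitch, gap equal to the 3mm focal
# length (the LDII condition): predicts roughly 20 degrees.
print(f"{viewing_angle_deg(1.08, 3.0):.1f} deg")
```

That the measured value falls somewhat below the geometric ideal is typical in practice.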

In summary, this work demonstrates that an EIA obtained from an active sensor can be used to display both real and virtual 3D images in an LDII system. A next step is to study large-scale implementation of the approach. Ultimately, the method will facilitate the practical use of real-time integral imaging systems for 3DTV and 3D movies.

Dong-Hak Shin
Dongseo University
Pusan, Korea

Dong-Hak Shin obtained his BS (1996), MS (1998), and PhD (2001) in telecommunications and information engineering from Pukyong National University, Korea. He is currently a research professor in the Department of Visual Contents at Dongseo University. His research interests include 3D displays, optical information processing, and optical data storage.