

Electronic Imaging & Signal Processing

Three-dimensional information capture and manipulation using integral imaging

Capturing a spatio-angular distribution of rays using a lens array enables 3D visualization and hologram generation.
10 January 2011, SPIE Newsroom. DOI: 10.1117/2.1201012.003332

Acquiring 3D object information usually requires well-controlled illumination. Such active illumination, however, makes it hard to capture 3D information from large scenes with backgrounds. Passive capture, which uses only the light naturally coming from an object, without any active coherent or patterned illumination, is attractive because it can be applied to objects under a wide variety of conditions. Many current passive-capture approaches use multiple cameras to record different perspectives of an object, but the many cameras make the overall system bulky and less practical. To realize passive acquisition in a compact configuration, we have been working on a new approach based on integral imaging, using a single image sensor combined with a lens array.

Integral imaging was invented by Lippmann a century ago.1 Although it was originally a display technique, it has recently been recognized as a versatile 3D-capture technique as well. The fundamental idea is to capture the spatio-angular distribution of the light rays from the object using a lens array.2,3 Each lens in the array forms an image of the object, called an ‘elemental image,’ which represents the angular distribution of the light rays at the principal point of that lens (see Figure 1). With an array of such lenses, the spatio-angular distribution of light rays is sampled in the form of an elemental-image array. Once the elemental-image array has been captured, the 3D object information can be extracted from it. Integral imaging requires only one camera to capture the elemental-image array, so the overall system can be made very compact.
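To make the sampling geometry concrete, the following sketch (with hypothetical pitch and gap values, not taken from the article) models each lens as a pinhole at its principal point and computes where a single 3D point lands in each elemental image. The constant shift between adjacent elemental images is the parallax that encodes depth.

```python
# Hypothetical parameters (not from the article):
PITCH = 1.0  # mm, lens pitch
GAP = 3.0    # mm, distance from the lens array to the sensor

def elemental_position(point_xz, lens_index):
    """Project a point (x, z) in front of the array through the principal
    point of lens `lens_index` onto the sensor plane behind it.
    Returns the sensor offset relative to that lens's optical axis."""
    x, z = point_xz
    lens_x = lens_index * PITCH          # principal point of this lens
    # A ray from the point through the principal point continues to the
    # sensor; similar triangles give the local offset on the sensor.
    return -(x - lens_x) * GAP / z

# A point 30 mm in front of the array, 2 mm off-axis, seen by 5 lenses:
offsets = [elemental_position((2.0, 30.0), i) for i in range(-2, 3)]
# Adjacent elemental images see the point shifted by PITCH * GAP / z,
# so the disparity between neighbouring views directly encodes depth z.
```

Inverting that last relation, z = PITCH * GAP / disparity, is the basis of depth extraction from an elemental-image array.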

Figure 1. Concept of the spatio-angular ray distribution and its capture using integral imaging.

We have developed compact 3D-information-capturing and processing techniques based on integral imaging. Arbitrary-view image generation, shown in Figure 2(a), is one example.4 Capturing a number of object perspectives is crucial for multiview autostereoscopic 3D displays. We have developed an algorithm to synthesize a high-resolution perspective view at an arbitrary angle from an array of low-resolution elemental images.

Figure 2. 3D information processing using integral imaging for (a) arbitrary-view synthesis, (b) 3D microscopy, and (c) 3D hologram synthesis.

The basic idea is to combine a coarse estimate of the object's 3D structure with texture patching. The geometric 3D structure is estimated on the lens-array grid by performing correspondence matching for a small set of regularly spaced pixels in the elemental-image array. This estimated structure is projected onto the desired-view image plane to form a skeleton image, and the textures are then patched in from the corresponding parts of the elemental images. Our method can generate images with both perspective- and orthographic-projection geometry, which makes it versatile for the various types of autostereoscopic displays that require different projection geometries.
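As a toy illustration of the skeleton step (the pinhole model and all numbers here are our own assumptions, not the paper's implementation), the sparse 3D points recovered by correspondence matching can be reprojected into a desired virtual view; texture patching would then fill in the image between these projected skeleton points.

```python
import numpy as np

def project_to_view(points_xyz, view_origin, focal):
    """Perspective-project sparse skeleton points (as estimated from
    correspondence matching across elemental images) into a virtual
    pinhole camera at `view_origin` with focal length `focal`.
    Returns (u, v) image coordinates, one row per point."""
    p = np.asarray(points_xyz, float) - np.asarray(view_origin, float)
    u = focal * p[:, 0] / p[:, 2]
    v = focal * p[:, 1] / p[:, 2]
    return np.stack([u, v], axis=1)

# Two skeleton points at the same lateral position but different depths:
pts = [(10.0, 0.0, 100.0), (10.0, 0.0, 200.0)]
uv = project_to_view(pts, view_origin=(0.0, 0.0, 0.0), focal=50.0)
# The nearer point projects farther off-axis: perspective parallax.
```

For an orthographic view, the division by depth is simply dropped, which is why the same skeleton supports both projection geometries.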

We have also developed 3D microscopy based on integral imaging: see Figure 2(b).5 A microlens array is inserted near the intermediate image plane of a conventional microscope to capture the elemental-image array of the magnified specimen. By extracting pixels at the same local position in each elemental image, or by back-projecting the elemental images into object space, various views and depth slices of the specimen can be reconstructed. The resolution, however, is fundamentally limited by the sampling density of the spatio-angular ray distribution. To overcome this limit, we developed a lens-array-shifting technique. Capturing multiple sets of elemental images while shifting the lens array with a piezo-actuator in sub-lens-pitch steps increases the sampling density, enhancing the reconstruction resolution accordingly. We have demonstrated a resolution enhancement by a factor of five.
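The pixel-extraction step can be sketched in a few lines. Assuming the elemental images are stacked into a 4-D array (an arrangement we adopt here purely for illustration), picking the pixel at one fixed local position under every lens collects rays that share a common direction, i.e. one orthographic view:

```python
import numpy as np

def extract_orthographic_view(elemental_array, local_uv):
    """elemental_array is indexed [lens_row, lens_col, pix_row, pix_col].
    Selecting the same local pixel (u, v) under every lens gathers one
    ray of a fixed direction per lens; together they form an
    orthographic view with one pixel per lens."""
    u, v = local_uv
    return elemental_array[:, :, u, v]

# Toy elemental-image array: 4x4 lenses, 3x3 pixels behind each lens.
rng = np.random.default_rng(0)
ea = rng.random((4, 4, 3, 3))
view = extract_orthographic_view(ea, (1, 2))  # a 4x4 orthographic view
```

Sweeping (u, v) over all local positions yields the full set of directional views; the lens-array shifting described above densifies exactly this per-lens sampling grid, which is why it raises the reconstruction resolution.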

We have further extended our approach to synthesize a hologram of a 3D object from its elemental-image array: see Figure 2(c).6 The elemental-image array, captured under ordinary incoherent illumination, is processed to generate view images with orthographic-projection geometry by simply extracting pixels at regular spacing. Since the projection lines of an orthographic-view image are parallel, the image can be regarded as a bundle of parallel rays, which corresponds to a single point in a Fourier hologram. By integrating the orthographic-view images with appropriate phase terms, we can synthesize the Fourier hologram of the 3D object.
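A minimal 1-D sketch of that integration step follows; the wavelength, geometry, and amplitude-from-intensity assumption are ours, not the paper's exact formulation. Each orthographic view is treated as a parallel-ray bundle, a linear phase ramp matching its direction is applied, and the sum collapses the bundle into one complex sample of the Fourier hologram.

```python
import numpy as np

K = 2 * np.pi / 0.5e-3  # wavenumber for a hypothetical 500 nm, in 1/mm

def hologram_sample(view, xs, theta, k=K):
    """One Fourier-hologram sample from one orthographic view (1-D toy).
    Amplitude is taken as sqrt(intensity); the phase ramp k*sin(theta)*x
    encodes the view's common ray direction before the bundle is
    integrated into a single complex value."""
    amp = np.sqrt(np.asarray(view, float))
    phase = k * np.sin(theta) * np.asarray(xs, float)
    return np.sum(amp * np.exp(1j * phase))

xs = np.linspace(-1.0, 1.0, 64)   # sample positions across the view, mm
flat_view = np.ones_like(xs)      # uniform toy view
h_on_axis = hologram_sample(flat_view, xs, theta=0.0)
# On axis the phase ramp vanishes, so all 64 unit amplitudes add in phase.
```

Repeating this for every view direction fills the Fourier-hologram plane point by point; the result can then be propagated with standard Fourier-optics methods.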

Compact, passive 3D imaging is critical for a range of applications, and integral imaging is one of the most prominent approaches. So far, we have successfully demonstrated that the technique can be applied to arbitrary-view image generation for 3D displays, 3D visualization of microscopic specimens, and incoherent synthesis of 3D holograms. We will extend our research to find optimum sampling strategies for the spatio-angular ray distribution and to realize them using variable-lens arrays.

This work was partly supported by a grant from the Korean Ministry of Education, Science, and Technology (MEST), specifically by the Regional Core Research Program/Chungbuk BIT Research-Oriented University Consortium, and partly by a National Research Foundation of Korea grant, funded by the Korean government (2010-0028094).

Jae-Hyeung Park, Nam Kim
Chungbuk National University
Cheongju, Republic of Korea

Jae-Hyeung Park received his PhD in 2005 from Seoul National University (Republic of Korea). From 2005 to 2007, he worked for Samsung Electronics. He joined the faculty of the School of Electrical and Computer Engineering at Chungbuk National University in 2007, where he is an assistant professor.

Nam Kim received his PhD in electronic engineering from Yonsei University (Republic of Korea) in 1988. Since 1989, he has been a professor in the School of Electrical and Computer Engineering at Chungbuk National University.