3D integral imaging monitors with fully programmable display parameters

An algorithm that resolves structural differences between a capture setup and a display monitor provides realistic images without the need for special goggles.
02 June 2011
Manuel Martínez-Corral, Hector Navarro, Genaro Saavedra, Raúl Martínez-Cuenca and Bahram Javidi

Stereoscopic and auto-stereoscopic monitors usually produce visual fatigue in viewers because of the convergence-accommodation conflict (the discrepancy between the actual focal distance and the perceived depth). An attractive alternative to these technologies is integral photography (integral imaging, InI), initially proposed by Lippmann in 1908,1 and reintroduced approximately two decades ago thanks to the rapid development of electronic matrix sensors and displays. Lippmann's concept is that one can record the 3D image of an object by acquiring many 2D elemental images of it from different positions. This is readily achieved by using a microlens array (MLA) as the camera lens.

When the elemental images are projected onto a 2D display placed in front of an MLA, the different perspectives are integrated as a 3D image. Every pixel of the display generates a conical ray bundle as its light passes through the array. The intersection of many such ray bundles produces a local concentration of light density that permits reconstruction of the object. The resulting scene is perceived as 3D by the observer regardless of his or her position relative to the MLA. Since an InI monitor truly reconstructs the 3D scene, the observation is produced without special goggles, with full parallax, and with no visual fatigue.2

An important challenge in projecting integral images on a monitor is the structural difference between the capture setup and the display monitor. To address this challenge, we have developed an algorithm that we call smart pseudoscopic-to-orthoscopic conversion (SPOC). It permits the calculation of new sets of synthetic elemental images (SEIs) that are fully adapted to the characteristics of the display monitor. Specifically, this global pixel-mapping algorithm permits one to select the display MLA parameters (such as pitch, focal length, and size), the depth position and size of the reconstructed images, and even the MLA geometry.3

The algorithm results from the cascaded application of three processes: simulated display, virtual capture, and homogeneous scaling. First, the array of captured elemental images is used as input to a simulated display. Next, a virtual capture of that display is performed through an array of pinholes, whose position, pitch, gap, and number of pixels can be assigned freely and are chosen to match the monitor characteristics. Finally, homogeneous scaling adapts the size of the SEIs to the InI monitor.
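To make the pixel-mapping idea concrete, the following minimal sketch performs a simplified, one-dimensional SPOC-style remapping in Python. It is an illustration only: the pinhole model for both arrays, the nearest-ray resampling rule, the sign conventions, and the parameter names are assumptions of this sketch, not the published implementation (for the full 2D algorithm, see reference 3).

    # Simplified 1D sketch of a SPOC-style global pixel mapping (illustration only).
    import numpy as np

    def spoc_remap_1d(captured, p_cap, g_cap, p_dis, g_dis, n_lens_dis, n_pix_dis, d):
        """Map captured elemental images to synthetic ones adapted to a new display.

        captured              : array (n_lens_cap, n_pix_cap) of 1D elemental images
        p_cap, g_cap          : pitch and gap of the capture microlens array
        p_dis, g_dis          : pitch and gap chosen for the display pinhole array
        n_lens_dis, n_pix_dis : number of synthetic elemental images and pixels in each
        d                     : distance between the display pinhole array and the capture array
        """
        n_lens_cap, n_pix_cap = captured.shape
        synthetic = np.zeros((n_lens_dis, n_pix_dis), dtype=captured.dtype)
        for j in range(n_lens_dis):                          # each synthetic pinhole
            x_pin = (j - (n_lens_dis - 1) / 2) * p_dis       # pinhole center position
            for v in range(n_pix_dis):                       # each pixel behind pinhole j
                x_pix = x_pin + (v - (n_pix_dis - 1) / 2) * (p_dis / n_pix_dis)
                slope = (x_pin - x_pix) / g_dis              # ray direction after the pinhole
                x_hit = x_pin + slope * d                    # intersection with the capture array
                i = int(round(x_hit / p_cap + (n_lens_cap - 1) / 2))  # nearest capture lens
                if not 0 <= i < n_lens_cap:
                    continue                                 # ray misses the capture array
                # Pixel of elemental image i whose chief ray best matches this slope
                # (sign convention chosen for illustration).
                u = int(round((n_pix_cap - 1) / 2 + slope * g_cap / (p_cap / n_pix_cap)))
                if 0 <= u < n_pix_cap:
                    synthetic[j, v] = captured[i, u]
        return synthetic

In the experiment described below, the input would be the 31×21 set of captured elemental images (with the mapping extended to the full 2D geometry) and the output a set of SEIs matching the parameters of the MP4 display.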

To demonstrate the algorithm's utility, we generated an array of SEIs ready for display on a monitor whose parameters differ greatly from those used in the capture. For the elemental images, we prepared a 3D scene over a black background (see Figure 1) and acquired the elemental images with a single digital camera that was mechanically shifted between exposures. Figure 2 shows the recorded integral image, composed of 31×21 elemental images with 51×51 pixels each.


Figure 1. Schematic of the experimental setup used for capturing an integral image of a 3D scene.

Figure 2. Experimentally obtained integral image. This image is the input for the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm.
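For reference, an integral-image mosaic like the one in Figure 2 can be assembled from the individual camera shots with a few lines of code. This is a sketch only: the file names, ordering, and RGB format of the frames are assumptions, not part of the original acquisition pipeline.

    # Tile 31x21 elemental images (51x51 pixels each) into one integral-image mosaic.
    import numpy as np
    from PIL import Image

    N_H, N_V, EI = 31, 21, 51                      # grid size and pixels per elemental image
    mosaic = np.zeros((N_V * EI, N_H * EI, 3), dtype=np.uint8)
    for row in range(N_V):
        for col in range(N_H):
            # Hypothetical file naming: one RGB frame per camera position.
            frame = Image.open(f"ei_{row:02d}_{col:02d}.png").resize((EI, EI))
            mosaic[row * EI:(row + 1) * EI, col * EI:(col + 1) * EI] = np.asarray(frame)[..., :3]
    Image.fromarray(mosaic).save("integral_image.png")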

Since our aim was to produce a synthetic integral image for display on a commercial MP4 device, comprising a matrix display with 900×600 pixels of 79.0 μm each, we calculated a matrix of 75×50 elemental images with 12×12 pixels each. Figure 3 presents the calculated integral image, prepared for displaying a floating orthoscopic 3D image. We next displayed the SEIs on the matrix display of the MP4 device and placed a microlens array in perfect alignment with the elemental images. To avoid facet braiding, we ensured that the distance between the screen and the microlenses was equal to their focal length.4 As shown in Figure 4 (and more clearly in a video5), the display produced an orthoscopic floating reconstruction of the 3D scene that is observed with full parallax.
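As a quick consistency check of these numbers (a sketch that assumes each microlens covers exactly one 12×12-pixel synthetic elemental image), the display parameters fix both the number of SEIs and the implied lens pitch:

    # Consistency check of the synthetic-elemental-image layout for the MP4 display.
    display_px = (900, 600)     # matrix-display resolution (horizontal, vertical)
    pixel_size_um = 79.0        # pixel pitch of the MP4 display, in micrometers
    sei_px = 12                 # pixels per synthetic elemental image, per side

    n_sei = (display_px[0] // sei_px, display_px[1] // sei_px)
    lens_pitch_mm = sei_px * pixel_size_um / 1000.0

    print(n_sei)                # (75, 50) synthetic elemental images
    print(lens_pitch_mm)        # 0.948 mm lens pitch assumed by this layout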


Figure 3. Collection of 75×50 synthetic elemental images calculated with the SPOC algorithm, ready to produce a real (floating) orthoscopic image.

Figure 4. Reconstruction of the orthoscopic floating 3D image through an MP4 device.

In summary, we have presented an algorithm for producing realistic 3D images that can be observed without the need for special goggles. Critical to our approach was the SPOC algorithm, which enabled us to overcome the structural differences between the capture setup (an array of digital cameras) and the display monitor (e.g., a commercial MP4 device). In the future, we will apply our algorithm for elemental-image calculation to a wide range of 3D monitors, such as cellular phones, tablets, and large-screen billboards.


Manuel Martínez-Corral, Hector Navarro, Genaro Saavedra
Department of Optics
University of Valencia
Burjassot, Spain
Hector Navarro, Raúl Martínez-Cuenca
Jaume I University
Castellón, Spain
Hector Navarro, Bahram Javidi
University of Connecticut
Storrs, CT

References:
1. M. G. Lippmann, Épreuves réversibles donnant la sensation du relief, J. Phys. 7, pp. 821-825, 1908.
2. http://www.uv.es/imaging3/lineas/InI.htm 3D Imaging and Display Lab article on 3D display by integral imaging, with illustrative pictures and movies. Accessed 14 May 2011.
3. H. Navarro, R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, B. Javidi, 3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC), Opt. Express 18, pp. 25573-25583, 2010. doi:10.1364/OE.18.025573
4. H. Navarro, R. Martínez-Cuenca, A. Molina-Martín, M. Martínez-Corral, G. Saavedra, B. Javidi, Method to remedy image degradations due to facet braiding in 3D integral imaging monitors, J. Disp. Technol. 6, pp. 404-411, 2010. doi:10.1109/JDT.2010.2052347
5. http://spie.org/documents/newsroom/videos/3647/mediana.gif The video shows a movie constructed with different perspectives obtained as the observer moved from left to right while looking at the MP4. Credit: Hector Navarro, University of Valencia.