Restoring lost resolution of plenoptic images

Optical resolution lost through the use of microlenses can be recovered with deconvolution.
27 March 2015
José Manuel Rodríguez-Ramos, Juan Manuel Trujillo Sevilla and Luis Fernando Rodríguez-Ramos

Although the concept of light field measurement was first proposed in 1908,1 substantial work on applications of the technique has emerged during the past decade. The basic objective in light field measurement is to register the 2D light distribution of a scene (the standard approach of conventional cameras) as well as the direction, or angle of arrival, of the light rays at the sensor. Several approaches have been proposed to capture this information in a simple and compact way (e.g., with a multi-stereovision camera). With these techniques, several points of view can be obtained by physically moving the camera, by using a static array of cameras, or by generating all viewpoints from a measured 'focal stack' (i.e., the set of defocused images). Moving the camera or measuring the focal stack, however, results in acquisition times that are too slow compared with scene changes. In addition, an array of cameras is both expensive and non-portable.

An additional approach, known as the plenoptic design, involves placing a microlens array in the optical path to capture the plenoptic function (the light field).2 Once the light field has been measured, post-processing allows the generation of interesting and useful results. These include a posteriori refocusing of the captured image, an all-in-focus image of the whole scene, depth maps, and stereo 3D imaging. In particular, 3D integral imaging is a promising option for future 3D displays because, unlike 3D stereo imaging, it does not cause fatigue or discomfort for the viewer.
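
As an illustration of this kind of post-processing, the following minimal sketch performs synthetic refocusing by the standard shift-and-sum method: each angular view is shifted in proportion to its angular coordinate, and the shifted views are averaged. The Python/NumPy code is a hedged example; the 4D light field layout, the array names, and the integer-pixel shifts are simplifying assumptions rather than a description of any particular camera.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-sum.

    light_field: array of shape (U, V, Y, X) -- angular (u, v) by
                 spatial (y, x) samples, as extracted from behind a
                 microlens array (this layout is an assumption).
    alpha: refocusing parameter; 1.0 reproduces the captured plane,
           other values move the synthetic focal plane.
    """
    U, V, Y, X = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            # Each view is shifted in proportion to its angular offset;
            # integer shifts keep the sketch simple (real code interpolates).
            dy = int(round((1 - 1 / alpha) * (u - cu)))
            dx = int(round((1 - 1 / alpha) * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```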

Our team at the University of La Laguna has pioneered and developed the use of plenoptic sensors for the measurement of optical aberrations produced by atmospheric turbulence.3 In our approach, we use information from the light field to obtain a tomographic distribution of the wavefront phase. This distribution is associated with the changes in refractive index that light encounters as it propagates through a heterogeneous or turbulent medium. Through tomographic measurement of the wavefront phase from a single shot, it is possible to extend the corrected field of view beyond the optical axis and thus observe extended objects (e.g., the Sun, the Moon, planets, and galaxies) with high spatial resolution. Unlike conventional wavefront sensors (e.g., the Shack-Hartmann or pyramid sensor), our technique does not limit atmospheric turbulence correction to the optical axis. As such, we have extended the use of light fields from the measurement of mere intensity distributions to more elaborate phase mapping.
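
Although the tomographic reconstruction itself is considerably more elaborate, the raw slope information that feeds it can be illustrated simply: the light distribution behind each microlens is displaced in proportion to the local wavefront slope. Below is a minimal centroiding sketch in Python/NumPy; the uniform microlens pitch, perfect sensor alignment, and pixel-unit slopes are simplifying assumptions.

```python
import numpy as np

def local_slopes(raw, pitch, focal_length):
    """Estimate local wavefront slopes from a raw microlens image.

    raw: 2D sensor image; each pitch-by-pitch block of pixels sits
         behind one microlens (uniform pitch is an assumption).
    Returns per-microlens (sy, sx) slopes: the centroid displacement
    from the block centre divided by the microlens focal length
    (pixel-size calibration is omitted for brevity).
    """
    ny, nx = raw.shape[0] // pitch, raw.shape[1] // pitch
    sy = np.zeros((ny, nx))
    sx = np.zeros((ny, nx))
    yy, xx = np.mgrid[0:pitch, 0:pitch]
    c = (pitch - 1) / 2.0
    for i in range(ny):
        for j in range(nx):
            block = raw[i*pitch:(i+1)*pitch, j*pitch:(j+1)*pitch]
            w = block.sum()
            if w > 0:
                # Centroid shift behind this microlens -> local slope.
                sy[i, j] = (np.sum(yy * block) / w - c) / focal_length
                sx[i, j] = (np.sum(xx * block) / w - c) / focal_length
    return sy, sx
```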

The plenoptic design for light field measurement is now commercially available through Lytro.4 For several reasons, however, this instrument has had only very limited success. Despite its fairly compact physical design, a large amount of post-processing (super-resolution) is required to recover the optical resolution that is expected from a particular imaging lens size. The resolution can be improved with the use of much smaller microlenses, but computation of a depth map is then required to recover the original optical resolution. This process makes the system response rather slow. Furthermore, the quality of the depth map directly influences the quality of the image restoration and is strongly affected by the optical quality of the microlenses.
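
The depth-map computation referred to above is, at heart, a disparity estimation between views extracted from the light field. As a hedged illustration (not Lytro's actual algorithm), here is a toy block-matching sketch in Python/NumPy, where the window size, search range, and float-valued input images are assumptions.

```python
import numpy as np

def disparity_map(left, right, max_d=8, win=7):
    """Toy block matching between two sub-aperture views (float images).

    For each pixel, pick the horizontal shift (0..max_d) that minimises
    the sum of absolute differences over a win-by-win window; disparity
    is proportional to inverse depth for a calibrated plenoptic camera.
    """
    h, w = left.shape
    half = win // 2
    best = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=int)
    for d in range(max_d + 1):
        shifted = np.roll(right, d, axis=1)
        sad = np.abs(left - shifted)
        # Window aggregation via shifted sums; edges wrap around, which a
        # real implementation would handle explicitly.
        cost = np.zeros_like(sad)
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                cost += np.roll(sad, (dy, dx), axis=(0, 1))
        mask = cost < best
        best[mask] = cost[mask]
        disp[mask] = d
    return disp
```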

A recently developed deconvolution method5 can be used to recover the full optical resolution. With this approach, the system response to a point source located at any coordinate of the scene's volume is calculated, and a restoration much closer to the theoretical maximum can thus be achieved. An even greater amount of processing is required, however, which calls for specialized parallel processing hardware. If the behavior of every microlens could be assumed to be approximately the same, the time required for the deconvolution-related processing could be small enough for commercial platforms (e.g., tablets and mobile phones). Unfortunately, microlens behavior is not sufficiently uniform to make this assumption valid. New microlens manufacturing processes, along with improved optical and electronic hardware components and algorithm developments, are therefore required before our plenoptic technology can be incorporated into conventional cameras, tablets, and mobile phones.
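
The cited method computes a spatially varying system response, which is not reproduced here. As a simplified stand-in that shows the basic deconvolution machinery, the classic Richardson-Lucy iteration with a single shift-invariant point spread function (PSF) can be sketched as follows; the uniform PSF is the key simplifying assumption.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def richardson_lucy(image, psf, n_iter=30):
    """Richardson-Lucy deconvolution with a shift-invariant PSF.

    A simplified stand-in: the cited plenoptic method uses a spatially
    varying system response, which is far more expensive to apply.
    """
    # Embed the normalised kernel in a full-size array and centre it on
    # the origin so that FFT-based convolution applies no extra shift.
    pad = np.zeros(image.shape, dtype=float)
    k0, k1 = psf.shape
    pad[:k0, :k1] = psf / psf.sum()
    pad = np.roll(pad, (-(k0 // 2), -(k1 // 2)), axis=(0, 1))
    otf = fft2(pad)
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.real(ifft2(fft2(estimate) * otf))
        ratio = image / np.maximum(blurred, 1e-12)
        # Correlation with the PSF (conjugate OTF) completes the update.
        estimate *= np.real(ifft2(fft2(ratio) * np.conj(otf)))
    return estimate
```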

We have also shown that deconvolution associated with a plenoptic wavefront sensor can be used to recover blurred images obtained through turbulence.6 As illustrated in Figure 1, we have demonstrated the viability of restoring an image blurred by the random changes in refractive index that are associated with atmospheric turbulence. We have also used laboratory experiments to confirm the results of our simulations. In these experiments we used a display of 256 × 256 pixels, an atmospheric wavefront plate, and a plenoptic camera (see Figure 2). In this approach, image restoration is conducted at only a single object space plane (the plane where the images are displayed before they are degraded). A real scene contains many such planes, and the deconvolution algorithm must therefore be repeated for each of them. This requirement demands high processing capability, but because the algorithm is parallelizable, we believe that it can be met in real time with existing graphics processors.
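
Because the restoration applies plane by plane, extending it to a real scene amounts to repeating the same deconvolution across a stack of object planes, a loop whose iterations are independent and therefore map naturally onto parallel hardware. A minimal sketch, reusing the richardson_lucy function above and assuming one PSF per plane (a hypothetical interface, not the published one):

```python
def restore_volume(image, psfs, n_iter=30):
    """Per-plane restoration: psfs[k] is the assumed system response
    for object plane k. The planes are independent, so this loop is
    embarrassingly parallel and maps directly onto GPU execution."""
    return [richardson_lucy(image, psf, n_iter) for psf in psfs]
```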


Figure 1. Restoration of a blurred image (simulated data) through deconvolution associated with a plenoptic wavefront sensor. Top left: Original object. Top right: Degraded (blurred) image. Bottom left: Plenoptic acquisition. Bottom right: Restored image. This simulation shows that it is possible to recover an image (obtained with a 512 × 512-pixel sensor) with only 32 × 32 microlenses.

Figure 2. Left: Optical arrangement of components used to experimentally confirm the simulations illustrated in Figure 1. Right: Restoration of the imaged object. The original resolution of the object is completely restored, except for the interstitial regions between the microlenses.

We have successfully demonstrated that a deconvolution approach can be used to restore the lost resolution of images that are obtained with plenoptic sensors. Although our technique requires a large amount of processing power, this can be provided by existing graphics processors. We now plan to implement a CUDA (compute unified device architecture) system to work in real time. We will also use square-shaped microlens arrays for plenoptic acquisition to address the loss of information from the interstitial regions between microlenses.
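
As an illustrative sketch of how such FFT-heavy restoration maps onto a graphics processor from Python, the example below uses the CuPy library (an assumption made here for illustration, not necessarily the toolchain referred to above), with a single-pass Wiener filter standing in for the iterative deconvolution.

```python
import cupy as cp  # NumPy-compatible arrays executed on a CUDA GPU

def wiener_gpu(image, psf, k=1e-3):
    """Single-pass Wiener deconvolution on the GPU.

    A non-iterative stand-in used only to illustrate the port: cp.*
    mirrors np.*, so the per-plane restoration loop parallelises with
    no algorithmic change. k is the assumed noise-regularisation term.
    """
    img = cp.asarray(image, dtype=cp.float64)
    pad = cp.zeros(img.shape, dtype=cp.float64)
    k0, k1 = psf.shape
    pad[:k0, :k1] = cp.asarray(psf / psf.sum())
    pad = cp.roll(pad, (-(k0 // 2), -(k1 // 2)), axis=(0, 1))
    otf = cp.fft.fft2(pad)
    est = cp.fft.ifft2(cp.fft.fft2(img) * cp.conj(otf) /
                       (cp.abs(otf) ** 2 + k))
    return cp.asnumpy(cp.real(est))  # copy the result back to the host
```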


José Manuel Rodríguez-Ramos, Juan Manuel Trujillo Sevilla
University of La Laguna (ULL)
La Laguna, Spain

José Manuel Rodríguez-Ramos is an associate professor in the Department of Electronic Engineering. He received his BS degree in astrophysics in 1990 and his PhD in physics from ULL in 1997. He was previously a research and postdoctoral fellow at the Institute of Astrophysics of the Canary Islands.

Juan Manuel Trujillo Sevilla is a PhD candidate in the Department of Industrial Engineering and has a Master's degree in biomedical engineering from the Polytechnic University of Madrid. His main research interests are light fields and wavefront phase tomography.

Luis Fernando Rodríguez-Ramos
Institute of Astrophysics of the Canary Islands
La Laguna, Spain

Luis F. Rodríguez-Ramos has been the head of the Electronics Department since 1991. He has been involved with the management of several astronomy projects, holds a perception system patent, and has led the development of new approaches to the measurement and compensation of atmospheric turbulence. He also leads the electronics group that is developing the controls for the HARMONI instrument on the European Extremely Large Telescope.


References:
1. G. Lippmann, Épreuves réversibles. Photographies intégrales, Comptes Rendus Acad. Sci. 146, p. 446-451, 1908.
2. E. H. Adelson, J. R. Bergen, The plenoptic function and the elements of early vision, Computational Models of Visual Processing, p. 3-20, MIT Press, 1991.
3. J. M. Rodríguez-Ramos, R. González, J. G. Marichal-Hernández, Wavefront aberration and distance measurement phase camera, Int'l Patent Appl. WO2007082975, 2007.
4. https://www.lytro.com/ Lytro website. Accessed 27 February 2015.
5. S. A. Shroff, K. Berkner, Image formation analysis and high resolution image reconstruction for plenoptic imaging systems, Appl. Opt. 52, p. D22-D31, 2013.
6. J. M. Trujillo-Sevilla, L. F. Rodríguez-Ramos, I. Montilla, J. M. Rodríguez-Ramos, High resolution imaging and wavefront aberration correction in plenoptic systems, Opt. Lett. 39, p. 5030-5033, 2014.