3D imaging and wavefront sensing with a plenoptic objective
Plenoptic cameras, which use microlens arrays to capture 'four-dimensional' light-field information, have recently been developed to provide a passive means of 3D characterization of realistic scenes. A conventional camera uses a convergent lens to concentrate the rays originating from each point in the scene onto a single pixel at the sensor. Focusing and defocusing are the price one pays to collect more light. A plenoptic camera instead uses a microlens array to separate a fraction of those incident rays into bundles that originate from the same position but arrive from different angular directions. At recording time, trade-offs must be made to observe the angular and positional structure of the light simultaneously.
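The angular/positional separation can be illustrated with a minimal sketch. It assumes an idealized sensor in which each microlens covers exactly an n x n block of pixels perfectly aligned with the pixel grid (real devices need the calibration discussed below); the function name and array layout are illustrative, not part of any published implementation.

```python
import numpy as np

def subaperture_views(raw, n):
    """Split a raw plenoptic image into n*n perspective views.

    raw has shape (H*n, W*n), where each microlens covers an n x n
    pixel block. Pixel (u, v) under microlens (y, x) samples the ray
    through position (y, x) arriving from angular direction (u, v).
    Returns an array of shape (n, n, H, W): one (H, W) view per
    angular direction.
    """
    H, W = raw.shape[0] // n, raw.shape[1] // n
    blocks = raw.reshape(H, n, W, n)
    return blocks.transpose(1, 3, 0, 2)  # reorder to (u, v, y, x)

# Toy example: 3 x 3 pixels behind each of 4 x 4 microlenses.
raw = np.arange(12 * 12, dtype=float).reshape(12, 12)
views = subaperture_views(raw, 3)
print(views.shape)  # (3, 3, 4, 4)
```

Each of the n*n slices is a low-resolution image of the scene seen from a slightly different viewpoint, which is why multiview-stereo techniques apply directly.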
To date, the most common treatment of plenoptic images has been to apply multiview-stereo computer-vision techniques, exploiting the fact that there are as many points of view as there are pixels behind each microlens. An alternative and efficient procedure, also useful for adaptive-optics applications in astrophysics, employs plenoptic cameras as tomographic wavefront-phase sensors. The dual treatment we propose is in line with the work of a number of researchers who use computer-vision tools to solve wave-optics problems, and vice versa.
Within a given plenoptic frame, enough information is available to recreate a posteriori a volume of images covering a range of focusing distances. This is known as a focal stack. Moreover, pairs of refocused images can be generated for stereo 3D applications (see Figure 1). By measuring the degree of focusing required for each ray that crosses the focal stack, it is possible to estimate distances to objects. Algorithms have been developed to tackle all of these processes at video-acquisition rate, and it is now possible to build a 3D camera based on these methods, even one suitable for feeding a 3D display that requires no special glasses. Because of speed considerations, parallel hardware is usually employed in the form of graphics-processing units (GPUs) and field-programmable gate arrays (FPGAs).
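The focal-stack and depth-estimation steps can be sketched as follows. This is a simplified shift-and-add refocusing with a local-gradient focus measure, assuming the perspective views are already arranged in an array of shape (n, n, H, W); the real-time GPU/FPGA algorithms mentioned above are far more elaborate, and the integer-shift and variance-style sharpness choices here are assumptions for illustration.

```python
import numpy as np

def refocus(views, shift):
    """Synthetically refocus by shift-and-add.

    views: shape (n, n, H, W), one perspective view per angular
    direction. shift: integer pixel displacement per unit of angular
    offset; each shift value selects one plane of the focal stack.
    Scene points lying at the corresponding depth align and add
    coherently; points at other depths blur out.
    """
    n = views.shape[0]
    c = n // 2
    acc = np.zeros(views.shape[2:])
    for u in range(n):
        for v in range(n):
            acc += np.roll(views[u, v],
                           (shift * (u - c), shift * (v - c)), axis=(0, 1))
    return acc / n**2

def depth_from_focus(views, shifts):
    """Per-pixel depth index: the focal-stack slice where the image is
    sharpest (squared gradient magnitude as a simple focus measure)."""
    stack = np.stack([refocus(views, s) for s in shifts])
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharpness = gy**2 + gx**2
    return np.argmax(sharpness, axis=0)  # index into `shifts`, per pixel
```

Generating two refocused images from laterally offset subsets of the views yields the stereo pairs mentioned above.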
Of course, plenoptic methods have their drawbacks. The valid range of refocusing distances is constrained by the number of pixels devoted to collecting angular information. In addition, the placement of the planes of focus within the resulting images is highly nonlinear in depth and best suited to filming at close distances, a regime that is difficult for stereo techniques. If we increase the number of angular pixels, fewer pixels remain for positional resolution. This sacrifice cannot be fully overcome, but its effect can be reduced using superresolution methods. Additionally, when the microlens array is placed close to the sensor, a certain mutual tilt is hard to avoid, which is why a calibration step is mandatory before any further processing.
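The angular/positional trade-off is simple arithmetic. The sensor size below is a hypothetical example, not a CAFADIS specification:

```python
# Hypothetical sensor: numbers are illustrative only.
sensor_px = 4096  # pixels per side

for n_angular in (2, 4, 8, 16):
    n_views = n_angular**2            # perspective views available
    spatial = sensor_px // n_angular  # positional samples per side
    print(f"{n_angular:2d} angular px/side -> {n_views:3d} views, "
          f"{spatial} x {spatial} positional samples")
```

Doubling the angular sampling quadruples the number of views but halves the positional resolution in each direction, which is the loss that superresolution methods partially recover.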
We developed and patented CAFADIS,1,2 a plenoptic camera that offers several advantages over previous methods, including a new calibration method for plenoptic frames that can be computed in real time, a superresolution algorithm, and a construction design for plenoptic, interchangeable objectives. Achieving a wide field of view requires focal ratios on the order of f/1.4 to f/2.8, which forces the microlens array to sit just micrometers from the sensor. To avoid this limitation, we designed and implemented a plenoptic, interchangeable objective that converts any conventional, single-body, single-lens camera into a 3D device (see online video3).
In certain fields a plenoptic sensor can be used for both depth extraction and tomographic wavefront sensing. Earth's atmosphere degrades telescope images because of refractive-index changes associated with turbulence. Correcting for these changes requires the high-speed processing supplied by GPUs and FPGAs. Artificial sodium laser-guide stars (exciting atmospheric emission at an altitude of approximately 90km) must be used to obtain the system's reference wavefront phase and optical-transfer function, but they are affected by defocus because of their finite distance to the telescope. Using a plenoptic camera as the wavefront sensor allows correction of this defocus and tomographic recovery of the wavefront phase.4
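The sensing principle can be sketched in a few lines. Each microlens subimage behaves like a Shack-Hartmann spot: a local wavefront tilt displaces its centroid, so the centroid offset is proportional to the local slope. This is a simplified, assumed illustration of slope measurement only, not the tomographic phase reconstruction of reference 4.

```python
import numpy as np

def local_slopes(raw, n):
    """Estimate local wavefront slopes from a plenoptic frame.

    raw has shape (H*n, W*n); each microlens covers an n x n block.
    Returns two (H, W) maps: the intensity-weighted centroid offset
    of every subimage along y and x, in pixels. The offset is
    proportional to the local wavefront slope at that microlens.
    """
    H, W = raw.shape[0] // n, raw.shape[1] // n
    sub = raw.reshape(H, n, W, n).transpose(0, 2, 1, 3)  # (y, x, n, n)
    coords = np.arange(n) - (n - 1) / 2.0  # pixel offsets from center
    total = sub.sum(axis=(2, 3))
    # Weighted centroid along each axis of every subimage.
    cy = (sub.sum(axis=3) * coords).sum(axis=2) / total
    cx = (sub.sum(axis=2) * coords).sum(axis=2) / total
    return cy, cx
```

A flat wavefront yields zero offsets everywhere; integrating the slope maps (the step omitted here) recovers the phase.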
In summary, plenoptic cameras can measure both the amplitude and the phase of the electromagnetic field. They also couple nicely with increasing sensor capabilities (in terms of the number of pixels). Where display technologies cannot exploit this increase, a plenoptic objective such as our proposed CAFADIS camera puts the extra pixels to use, achieving 3D sensing with 2D cameras. A remarkable characteristic of plenoptic cameras is that most of the required changes involve processing rather than construction. We aim to further improve, and even commercialize, the CAFADIS camera in the near future.
José Rodríguez-Ramos received his BS in astrophysics in 1990 and his PhD in physics in 1997, both from ULL (Canary Islands). He subsequently worked as a research fellow and postdoctoral fellow at the Instituto de Astrofísica de Canarias. He is currently an assistant professor of electronic engineering.