Plenoptic cameras are an emerging technology whose images capture both the intensity and the direction of rays striking a given point in the image plane. Traditional cameras integrate over all directions and simply record the net intensity of the rays striking a point in the plane. Consequently, the resulting image is static, and properties such as best focus, depth of focus, and perspective are dictated by the aperture and focus settings at the time of capture. For plenoptic cameras, the additional directional information enables multiple images to be generated from a single snapshot. Images with different planes in focus, different depths of focus, and slight shifts in perspective can all be produced from a single exposure. For instance, Figure 1 shows sample images generated from a single plenoptic camera image. In Figure 1 (top), the foreground is in focus. In Figure 1 (bottom), the background is in focus. This technology enables photographers to modify or create a desired effect even after the image has been captured. Our work examines the image formation process in plenoptic cameras to determine trade-offs in sampling for different configurations.
Figure 1. Images created from a single snapshot of a plenoptic camera. Top: The plane of the book is in focus, while the building is out of focus. Bottom: The plane of focus has been shifted to the building, and the foreground is out of focus.
Figure 2 shows the setup of a conventional plenoptic system.1 A camera lens images the scene onto a lenslet array. The lenslets in turn image the exit pupil of the camera lens onto a sensor. The lenslets sample the position on the object, while the camera sensor records the angle from which the rays arrive at the image plane. Commercial systems based on this principle have been demonstrated (e.g., Lytro, Mountain View, CA, and Raytrix, Kiel, Germany). A variation on this layout, called the focused plenoptic camera or plenoptic 2.0, has also been proposed in which the lenslet array and sensor are shifted back so that the lenslets relay the image onto the camera sensor.2 Both systems give up spatial resolution to record the directional information. Our goal is to generalize the plenoptic system and to understand the trade-offs between spatial and angular resolution.3
Figure 2. A traditional plenoptic camera projects an image of the scene onto a lenslet array. Each lenslet corresponds to a single point in the scene, while the sensor pixels behind it encode the directionality of the rays from that point. The rear focal point F′, as well as the entrance pupil, E, and exit pupil, E′, of the main camera lens are shown.
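The trade-off between spatial and angular resolution can be made concrete with simple counting: each lenslet yields one spatial sample, and the pixels behind it record the angular samples. A minimal sketch with assumed, illustrative numbers (not the specifications of any commercial system):

```python
# Sketch of the spatial/angular resolution trade-off in a traditional
# plenoptic camera. All numbers below are illustrative assumptions.

sensor_pixels_x = 4000       # sensor width in pixels (assumed)
sensor_pixels_y = 3000       # sensor height in pixels (assumed)
pixels_per_lenslet = 10      # pixels spanned by one lenslet (assumed)

# Each lenslet contributes one spatial sample; the grid of pixels
# behind it samples the angular distribution over the exit pupil.
spatial_x = sensor_pixels_x // pixels_per_lenslet
spatial_y = sensor_pixels_y // pixels_per_lenslet
angular_samples = pixels_per_lenslet ** 2

print(f"Spatial resolution: {spatial_x} x {spatial_y} lenslets")
print(f"Angular samples per lenslet: {angular_samples}")
print(f"Total ray samples: {spatial_x * spatial_y * angular_samples}")
```

With these numbers, a 12-megapixel sensor delivers only a 400 x 300 spatial image, which is the resolution cost paid for the 100 angular samples per point.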
We initially assume a well-corrected camera lens so that the only aberration introduced into the image is defocus caused by variations in object distance for a 3D scene. We then consider the light field formed at the plane of the lenslet and relate it to the properties of the camera lens and the scene itself. Finally, armed with the light field, we examine the sampling effects of the finite-sized lenslets and the pixels of the camera sensor.
In aberration analysis of optical systems, transverse ray error is often used as a metric of system performance. Formulas relating the wavefront error in the exit pupil of an optical system to the transverse ray error are widely used. Transverse ray error describes the deviation of the position where a ray strikes the image plane relative to the ideal position. A spot diagram, for example, traces rays through various positions in the pupil and creates a scatter plot of the deviations of these rays from a non-aberrated ray. Here, we seek to reverse this analysis by considering a single point in the image plane and determining which field points contribute to that image position. This knowledge provides a description of both the directionality and position of the rays in the image plane.
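The connection between pupil position and transverse ray error is easiest to see for pure defocus: a ray leaving pupil coordinate (xp, yp) and aimed at the ideal image point lands on a defocused plane at a transverse position proportional to its pupil coordinate, so the spot diagram is a scaled copy of the pupil. A minimal geometric sketch, with assumed dimensions and a simple on-axis thin-lens model rather than a full ray trace:

```python
import numpy as np

rng = np.random.default_rng(0)

pupil_radius = 10.0   # entrance pupil radius, mm (assumed)
z_img = 100.0         # pupil-to-ideal-image-plane distance, mm (assumed)
dz = 2.0              # defocus: detector 2 mm behind best focus (assumed)

# Sample ray intersections uniformly over the pupil disk.
n = 2000
r = pupil_radius * np.sqrt(rng.uniform(size=n))
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
xp, yp = r * np.cos(theta), r * np.sin(theta)

# A ray from (xp, yp, 0) toward the on-axis image point (0, 0, z_img)
# has transverse position (xp, yp) * (1 - z / z_img); evaluating at the
# defocused plane z = z_img + dz gives the transverse ray error.
ex = -(dz / z_img) * xp
ey = -(dz / z_img) * yp

# For pure defocus the spot is the pupil scaled by |dz| / z_img.
blur_radius = np.hypot(ex, ey).max()
print(f"geometric blur radius: {blur_radius:.3f} mm "
      f"(bound: {abs(dz) / z_img * pupil_radius:.3f} mm)")
```

The scatter of (ex, ey) is the spot diagram; because the error is linear in pupil coordinate, defocus blurs the point into a uniform disk whose radius grows linearly with the defocus distance.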
When only defocus is considered, the ray striking a given point on the image plane can only come from a region of object space bounded by a double oblique cone. One base of the double cone is defined by the entrance pupil of the camera lens. The apex of the double cone resides at the point conjugate to the image point, and the other base extends to minus infinity. A point source at the apex of the double cone contributes to the image point from all angles defined by the exit pupil. A point source within the bounds of the double cone contributes to the image point, but only from one specific direction. Sources outside the double cone do not contribute to the image point. Thus, for a given object we can determine the image-plane irradiance distribution and incident ray directionality.
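Under this picture, testing whether a point source lies inside the double oblique cone reduces to a line-disk intersection: the source contributes to the image point exactly when the line through it and the cone's apex crosses the entrance pupil plane inside the aperture. A minimal sketch (the coordinate setup and the `contributes` helper are illustrative assumptions, not the paper's formalism):

```python
def contributes(p, apex, pupil_radius):
    """Test whether point source `p` lies in the double oblique cone
    of object-space rays reaching a given image point.

    The entrance pupil is a disk of radius `pupil_radius` centered on
    the z-axis in the plane z = 0; `apex` is the point conjugate to
    the image point (the cone's apex), in object space (z < 0).
    Both `p` and `apex` are (x, y, z) tuples.
    """
    px, py, pz = p
    ax, ay, az = apex
    if (px, py, pz) == (ax, ay, az):
        return True            # the apex itself fills the whole pupil
    if pz == az:
        return False           # line through p and apex misses z = 0
    # Intersect the line through p and the apex with the pupil plane.
    t = -az / (pz - az)
    x0 = ax + t * (px - ax)
    y0 = ay + t * (py - ay)
    return x0 ** 2 + y0 ** 2 <= pupil_radius ** 2


apex = (0.0, 0.0, -100.0)   # conjugate to the image point (assumed)
a = 10.0                    # entrance pupil radius (assumed)

print(contributes((0.0, 0.0, -100.0), apex, a))  # apex: True
print(contributes((4.0, 0.0, -50.0), apex, a))   # inside near cone: True
print(contributes((50.0, 0.0, -50.0), apex, a))  # outside cone: False
print(contributes((0.0, 0.0, -200.0), apex, a))  # beyond the apex: True
```

The last case reflects the cone's second base extending to minus infinity: points beyond the apex on an admissible line still send a ray through the apex region and the pupil.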
Our analysis enables a mapping of a given 3D object to the image plane, as well as knowledge of the direction from which the rays approach the image point. With this knowledge, the effects of sampling the image with a lenslet array can be determined. These results give insight into the trade-offs of various plenoptic configurations. Future work will include analysis of aberrations beyond defocus on plenoptic systems.
University of Arizona
Jim Schwiegerling is a professor of optical sciences and ophthalmology and vision science. His research interests include computational photography, ophthalmic instrumentation, metrology, and lens design.
1. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, P. Hanrahan, Light field photography with a hand-held plenoptic camera, Tech. Rep. CSTR 2005-02, Stanford University, CA, 2005.
2. A. Lumsdaine, T. Georgiev, The focused plenoptic camera, Proc. IEEE Int'l Conf. Computat. Photogr., 2009.
3. J. Schwiegerling, J. S. Tyo, Relating transverse ray error and light fields in plenoptic camera images, Proc. SPIE 8842, 2013. (Invited paper.)