Computational methods for aberration correction in simple lens imaging
The complexity of camera optics has increased greatly in recent decades. The lenses of modern single-lens reflex (SLR) cameras may contain a dozen or more individual elements, which together optimize the light efficiency of the system while minimizing its inherent imperfections. Geometric distortions, chromatic aberrations, and spherical aberrations are prevalent in simple lens systems and cause blurring and loss of detail. Unfortunately, the complex optical designs that suppress these aberrations come at significant cost and weight.
Instead of developing ever more complex optics, we propose1 an alternative approach: use much simpler optics of the type employed for hundreds of years,2 and correct the ensuing aberrations computationally. Although computational aberration correction has a long history,3–7 existing methods are most effective at removing the residual aberrations of already well-corrected optical systems. Combining large-aperture simple lens optics with modern high-resolution image sensors can produce very large wavelength-dependent blur kernels (i.e., point spread functions, or PSFs: the response of an imaging system to a point source), with disk-shaped supports 50–100 pixels in diameter (see Figure 1). Such large PSFs destroy high-frequency image information, which cannot be recovered using existing methods.
The fundamental insight of our work is that the chromatic part of the lens aberration (which occurs when colors are not focused to the same convergence point) can be used to our advantage, since the wavelength dependence of the blur means that different spatial frequencies are preserved in different color channels. Moreover, combining information from different color channels and enforcing consistency of edges (and similar image features) across these channels allows our method to work for large kernel sizes.
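To illustrate this idea in one dimension, the following sketch (our own illustration, not from the paper) uses box blurs of two widths as stand-ins for the per-channel disk PSFs. A spatial frequency that falls on a null of the wider kernel's modulation transfer function (MTF) is still passed by the narrower kernel, and so survives in that color channel:

```python
import numpy as np

def mtf(width, n=256):
    """Magnitude of the transfer function of a 1D box blur of the given
    width: a simplified stand-in for one color channel's (2D, disk-shaped)
    PSF."""
    k = np.zeros(n)
    k[:width] = 1.0 / width          # normalized box kernel
    return np.abs(np.fft.fft(k))

narrow, wide = mtf(8), mtf(16)       # two channels, different blur widths
# Both channels pass low frequencies almost untouched, but frequency bin 16
# falls on a null of the wide kernel while the narrow kernel still passes it
# (MTF of roughly 0.64), so that detail remains recoverable from one channel.
```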
We calibrated a camera with a simple lens by observing a calibration target based on a broadband random noise pattern. Since the blur kernels of simple lenses are expected to change smoothly over the image plane, we assume the PSF to be constant over small image tiles (e.g., 100×100 pixels on a 14-megapixel sensor). The PSF for each tile is recovered using an algorithm that solves a deconvolution problem, where a sharp reference image is obtained by stopping down the aperture and taking another image of the calibration target. Combining these two images (a blurred image and a sharp reference image) yields the PSFs of the optical system, which enable the subsequent removal of aberrations from images taken with the same camera configuration.
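As a rough sketch of this per-tile PSF estimation, the code below (hypothetical names, with a simple Tikhonov-regularized Fourier-domain solve standing in for the paper's actual regularized solver) recovers a kernel from a blurred/sharp tile pair:

```python
import numpy as np

def estimate_psf(blurred_tile, sharp_tile, psf_size=31, reg=1e-3):
    """Estimate a per-tile PSF k from a blurred/sharp calibration pair by
    minimizing ||sharp * k - blurred||^2 + reg * ||k||^2 (circular
    convolution), solved in closed form in the Fourier domain."""
    B = np.fft.fft2(blurred_tile)
    S = np.fft.fft2(sharp_tile)
    K = np.conj(S) * B / (np.abs(S) ** 2 + reg)
    k = np.fft.fftshift(np.real(np.fft.ifft2(K)))  # move kernel to center
    # crop to the expected support and renormalize to unit energy
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    h = psf_size // 2
    k = np.clip(k[cy - h:cy + h + 1, cx - h:cx + h + 1], 0.0, None)
    return k / (k.sum() + 1e-12)
```

The broadband noise target matters here: its flat power spectrum keeps |S|² large at all frequencies, so the per-frequency division stays well conditioned.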
The deblurring problem is expressed as a linear least squares problem, in which the data term relates the observed image linearly to the unknown sharp image. Added to this are non-linear regularization terms that encode prior assumptions about the image, or ‘image priors.’ One generic image prior that has been used frequently in the past few years is the assumption of sparsity in image gradients and second derivatives.8 We enforce this assumption by seeking solutions that minimize either the ℓ1 or the Huber norm of the first and second derivatives of the image.
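A minimal sketch of such a regularized deconvolution, assuming (for brevity) a Huber penalty on first derivatives only and plain gradient descent; the actual system also penalizes second derivatives and uses a much faster solver:

```python
import numpy as np

def grad(img):
    """Forward differences with circular boundary conditions."""
    return np.roll(img, -1, axis=1) - img, np.roll(img, -1, axis=0) - img

def deblur_huber(b, k_fft, lam=0.01, delta=0.01, step=0.1, iters=300):
    """Minimize ||k*x - b||^2 + lam * Huber(grad x) by gradient descent.

    k_fft is the FFT of the (padded, origin-centered) blur kernel.
    Illustrative only: the full method adds second-derivative and
    cross-channel terms and uses a primal-dual solver instead.
    """
    x = b.copy()
    for _ in range(iters):
        # gradient of the quadratic data term
        resid = np.real(np.fft.ifft2(k_fft * np.fft.fft2(x))) - b
        g = 2 * np.real(np.fft.ifft2(np.conj(k_fft) * np.fft.fft2(resid)))
        # gradient of the Huber regularizer (clip = Huber derivative)
        gx, gy = grad(x)
        hx = np.clip(gx / delta, -1.0, 1.0)
        hy = np.clip(gy / delta, -1.0, 1.0)
        g += lam * ((np.roll(hx, 1, axis=1) - hx) + (np.roll(hy, 1, axis=0) - hy))
        x = x - step * g
    return x
```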
To this generic prior, we add a new cross-channel prior that enforces consistency between color channels by assuming sparsity of hue changes across the image. This assumption can be expressed as optimizing for solutions in which the image gradients of the individual color channels are consistent (see Figure 2). From an algorithmic point of view, the combined deconvolution problem can be expressed as a single convex optimization problem that can be solved efficiently using recently developed numerical methods.9 Reconstruction is performed per image tile and can be implemented in Fourier space for computational efficiency. With this approach, we are able to achieve excellent image quality, even for severe aberrations (see Figure 3).
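The cross-channel idea can be sketched as the following penalty (an illustrative form; the paper's exact weighting and normalization differ): for each channel pair, it takes the ℓ1 norm of gradient cross terms that vanish wherever the two channels differ only by a scale factor, i.e., wherever hue is locally constant:

```python
import numpy as np

def cross_channel_penalty(img):
    """Sum over channel pairs (c, d) of the l1 norm of
    grad(I_c) * I_d - grad(I_d) * I_c, which is zero wherever the channels
    are locally proportional (no hue change)."""
    def grads(ch):
        return np.roll(ch, -1, axis=1) - ch, np.roll(ch, -1, axis=0) - ch
    total = 0.0
    channels = img.shape[2]
    for c in range(channels):
        for d in range(c + 1, channels):
            gxc, gyc = grads(img[..., c])
            gxd, gyd = grads(img[..., d])
            total += np.abs(gxc * img[..., d] - gxd * img[..., c]).sum()
            total += np.abs(gyc * img[..., d] - gyd * img[..., c]).sum()
    return total
```

A pure luminance edge (all channels scaled copies of one pattern) incurs zero penalty, while a hue edge is penalized; minimizing such a term during deconvolution drives edges to align across channels.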
Despite these encouraging results, the method suffers from several shortcomings. For example, PSF calibration is so far performed for a single focus setting and target distance. It is, however, technically possible to calibrate for multiple planes and automatically or manually select the appropriate kernel to use, either for the whole image or for each image region. In the latter case, one would obtain an image with an extended depth of field.8 Unfortunately, the hours of work required for this calibration process would limit many practical applications. Given the simplicity of the optical system, we believe that calibration can be simplified using analytical PSF models that are controlled by just a few parameters.
In summary, we have developed a technique that computationally removes the aberrations inherent in simple optical systems. In future work, we intend to combine the methods derived here with traditional lens design to arrive at computational optics systems with better tradeoffs between cost, weight, complexity, and image quality.
Felix Heide received his BSc and MSc from the University of Siegen and has been a PhD candidate in Wolfgang Heidrich's group at the University of British Columbia since 2012. His research interests center on computational photography, optimization, and displays.
Matthias B. Hullin is a professor of digital material appearance and held a postdoctoral position at the University of British Columbia, where this work was done. He obtained a PhD in computer science from Saarland University in 2010, and his dissertation won the Otto Hahn Medal of the Max Planck Society.
Wolfgang Heidrich is the director of the Visual Computing Center at KAUST and has been serving in this position since early 2014. He is also a professor at the University of British Columbia, where this research was done.