Computational imaging for 3D micrographs with 10-fold depth-of-field enhancement
The state-of-the-art microscopes found in today's scientific and industrial facilities benefit from a century of scientific innovation, underpinned by cutting-edge technologies such as ultrasensitive cameras and intense, agile light sources. The fundamental approach to imaging, however, is unchanged from the principles used in the 17th century (e.g., by Galileo, Hooke, and Leeuwenhoek) in the construction of instruments that eventually became the compound microscope. The focus today remains on engineering optical elements that optimally focus light from the sample to form a sharp image at a single plane. This traditional approach inherently defines a specific sample plane (the one imaged onto the camera), so objects not located exactly in that plane are out of focus. In high-resolution microscopy, the depth of field (DOF) over which a sharp image is recorded is typically a micrometer or less, which offers the benefit of optical sectioning (i.e., the ability to produce clear images of individual focal planes within a thick sample). Samples that exceed the DOF of a microscope are the norm, however, which means it is necessary to refocus the image throughout the sample's depth to build a clear picture.
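The micrometer-scale DOF quoted above follows from a common textbook approximation, DOF ≈ n·λ/NA², where n is the refractive index of the immersion medium, λ the wavelength, and NA the numerical aperture. A minimal sketch (the function name and parameter values are illustrative):

```python
def depth_of_field_um(wavelength_um: float, na: float, n_medium: float = 1.0) -> float:
    """Rough diffraction-limited depth of field, DOF ~ n * lambda / NA^2,
    in micrometers. An order-of-magnitude estimate, not an exact formula."""
    return n_medium * wavelength_um / na**2

# A dry 0.9-NA objective with green light (0.55 um) yields a
# sub-micrometer depth of field, consistent with the text.
dof = depth_of_field_um(0.55, 0.9)
```

Higher-NA objectives shrink this further, which is why thick samples so routinely exceed the DOF.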
In the first computational solution to this dilemma (developed by Gerd Häusler), a single image was recorded as the sample was swept through the plane of best focus, and a time-consuming coherent optical processor was then used to recover a sharp image.1 In a modern approach commonly used by microscopists, a ‘Z-stack’ of up to 100 images is recorded and combined computationally into a single sharp image with an extended DOF. Other, more sophisticated techniques have been developed for high-resolution 3D microscopy, such as light-sheet fluorescence microscopy, confocal/multiphoton microscopy, and localization super-resolution microscopy (although the latter two are not strictly 3D techniques in themselves). Because of their scanning nature, however, none of these techniques can be used for snapshot or video-rate imaging.
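The Z-stack combination step can be sketched in a few lines: for each pixel, keep the value from the slice where the image is locally sharpest. This is a minimal version of standard focus stacking (the Laplacian focus measure and periodic boundary handling are simplifications; production tools also smooth the selection map and blend across slice boundaries):

```python
import numpy as np

def focus_stack(z_stack):
    """Merge a Z-stack of shape (N, H, W) into one extended-DOF image by
    picking, per pixel, the slice with the strongest local contrast."""
    stack = np.asarray(z_stack, dtype=float)
    # Absolute discrete Laplacian of each slice as a sharpness measure
    # (np.roll gives periodic boundaries, which keeps the sketch short).
    lap = np.abs(
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)  # (H, W): index of the sharpest slice
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```

Because every slice must be captured sequentially, even this simple pipeline inherits the scanning bottleneck described above.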
We have developed a new approach, called complementary kernel matching (CKM),2 that extends the DOF by a factor of 10 in a single snapshot, thus enabling time-resolved or video-rate microscopy. Furthermore, CKM simultaneously provides 3D ranging of the sample. Examples of extended-DOF 3D micrographs obtained with CKM are compared with conventional microscopy images in Figure 1. We also compare a CKM reconstruction (from a single snapshot) with an image derived from a Z-stack in Figure 2.


CKM, a computational imaging technique, involves optical encoding of the captured image and digital decoding to reconstruct a sharp output image. We achieve the optical encoding by placing a phase plate at the aperture of the microscope objective and capturing two distinct images with complementary information. In one implementation of CKM (see Figure 3), a microscope is equipped with the CKM-encoding element (i.e., a phase plate) and a CKM-splitting element (a beam splitter and mirrors that replicate the image), which directs the two encoded images onto a single camera to realize snapshot operation.3, 4
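The effect of a cubic phase plate on the imaging kernel can be explored numerically: place a cubic phase profile across the pupil and take the squared Fourier transform to obtain the point spread function. The sketch below is illustrative only (square pupil, 1D-separable cubic profile, arbitrary phase strength), not the specific CKM design:

```python
import numpy as np

def cubic_pupil_psf(alpha, n=256, pad=4):
    """PSF of a cubic phase mask phi(u, v) = alpha * (u^3 + v^3) over a
    square unit pupil, computed as |FT(pupil)|^2 with zero-padding for
    finer image-plane sampling. Parameters are illustrative."""
    u = np.linspace(-1, 1, n)
    uu, vv = np.meshgrid(u, u)
    pupil = np.exp(1j * alpha * (uu**3 + vv**3))
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(pad * n, pad * n)))
    psf = np.abs(field)**2
    return psf / psf.sum()  # normalize to unit energy
```

Unlike a conventional diffraction-limited spot, the resulting PSF is strongly asymmetric, with its energy spread into tails along two axes; it is this engineered kernel that the digital decoding step must undo.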

Our use of a phase plate with a cubic profile provides the desired DOF extension. It also ensures that all spatial frequencies up to the optical cut-off are transmitted simultaneously, i.e., the modulation-transfer function (MTF) has no nulls in the passband, which makes image deconvolution by MTF inversion possible. In addition, the shape of the point spread function (PSF) of a cubic-phase wavefront is preserved over a larger depth range than that of a conventional diffraction-limited spot, which allows us to extend the DOF. Furthermore, the PSF undergoes a lateral shift that varies quadratically with defocus.5 This shift is relatively innocuous for a single image and has therefore been largely ignored in the past.6 With CKM, however, we exploit this translation to infer defocus and thereby enable high-quality image recovery and range estimation: the differential shift between the two images uniquely determines the depth map of the scene and enables optimal image recovery with 3D ranging. It is this ability to range simultaneously over an extended DOF that yields high-quality images, free of the artifacts that have previously plagued extended-DOF computational imaging.7
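The ranging step rests on recovering the shift between the two encoded images and inverting the shift-versus-defocus law. A minimal stand-in, assuming the two images are locally related by a pure translation and that the shift follows a quadratic law with a hypothetical calibration constant `beta` (the published CKM reconstruction is more sophisticated than this sketch):

```python
import numpy as np

def differential_shift(img_a, img_b):
    """Estimate the integer-pixel translation taking img_a to img_b via
    FFT-based circular cross-correlation."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    corr = np.fft.ifft2(np.conj(fa) * fb).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped correlation indices to signed shifts.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

def defocus_from_shift(shift_px, beta):
    """Invert an assumed quadratic shift law, shift = beta * defocus**2.
    beta is a hypothetical calibration constant, and the square-root
    sign ambiguity is ignored in this sketch."""
    return (abs(shift_px) / beta) ** 0.5
```

Applied patch-wise across the two encoded images, a shift estimate of this kind yields a per-patch defocus value, i.e., a coarse depth map of the scene.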
In summary, we have developed an imaging technique that captures, in a single snapshot, high-quality range-resolved images with a DOF that is an order of magnitude greater than that of a conventional microscope. We have also demonstrated this technique in several imaging modalities, including fluorescence, bright-field transmission, and bright-field reflection. Our future work will focus on adapting CKM to a range of applications that require its unique properties, including particle image velocimetry, super-resolution microscopy, industrial inspection without moving parts, tracking of particles or small objects (such as molecules), and 3D machine vision for high-resolution applications.