
Electronic Imaging & Signal Processing

Enabling a computer to do the job of a lens

A new kind of diffractive phase grating permits computational imaging of polychromatic distant objects in situations where focal optics are not convenient.
4 September 2013, SPIE Newsroom. DOI: 10.1117/2.1201309.005108

If we think of a camera as a sensor that allows us to create a picture of a distant object, then the device must record how much light comes from different incident angles. So far, nearly all cameras use focusing optics (lenses or curved mirrors) to map light incident from distinct angles onto distinct points on a photosensitive medium. Intermediate-scale, visible-light focusing optics are a mature technology. However, the same is not true for extremely large or small lenses and curved mirrors, or focusing optics working in other bands of the electromagnetic spectrum, which are difficult to manufacture. Taking pictures without focusing optics would enable us to sense more kinds of scenes.

Computational imaging shows that images can be synthesized through a merger of optics and computation. The optics produce nothing resembling an image, yet distill information about the scene that a computer can interpret. Some computational imaging techniques that have attracted recent interest include plenoptic cameras [1], multi-aperture photography [2], and cubic phase plates [3]. All three use at least one ray-optical focusing element.

While a postdoctoral researcher at Cornell University, I helped build the first computational imager that uses no ray-optical element: the planar Fourier capture array (PFCA, see Figure 1) [4]. To build the device, we patterned pairs of optical gratings above an array of photodiodes. These grating pairs pass light only from particular incident angles, in the same way a pair of picket fences will pass light only at angles where the gaps in the fences align. Thus, each photodiode becomes sensitive to one component of the 2D Fourier transform of the faraway scene. It is then possible to compute what the scene must have looked like from the PFCA's readings. Although PFCAs are extremely small, their resolution and spectral bandwidth are limited.
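The PFCA's Fourier-domain capture can be sketched numerically. The snippet below is a minimal, idealized model, not the device's actual signal chain: each grating pair is treated as reporting one 2D Fourier component of the scene, available only up to a cutoff frequency standing in for the Nyquist limit of the finest grating, and the scene is recovered by an inverse transform. The array size and cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((16, 16))            # hypothetical faraway scene (intensity)

# Idealized PFCA: each grating pair reports one 2D Fourier component of the
# scene, but only up to a maximum spatial frequency set by the finest grating.
F = np.fft.fftshift(np.fft.fft2(scene))
freqs = np.fft.fftshift(np.fft.fftfreq(16))
U, V = np.meshgrid(freqs, freqs, indexing="ij")
measured = np.where(np.hypot(U, V) <= 0.25, F, 0)   # band-limited readings

# Computational reconstruction: the inverse transform of the measured
# components yields a low-pass version of the scene, with resolution capped
# by the highest spatial frequency observed in the array.
recon = np.fft.ifft2(np.fft.ifftshift(measured)).real
```

Because the zero-frequency (DC) component lies inside the measured band, the reconstruction preserves the scene's mean brightness while losing fine detail beyond the cutoff.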


Figure 1. Light micrograph of two planar Fourier capture arrays, each measuring 570μm across, made from photodiodes with patterned gratings at a variety of orientations and spacings. Each grating is sensitive to one component of the 2D Fourier transform of the faraway scene, which can be reconstructed from sensor outputs up to the Nyquist limit imposed by the highest spatial frequency observed in the array.

At Rambus, I helped develop a new class of diffractive optical element, the phase anti-symmetric grating [5], which allows us to build lensless computational imagers that overcome many of the limitations of PFCAs. Phase anti-symmetric gratings diffract broadband light into linear regions of a photosensor array. Further, these gratings contain linear regions called curtains where there is minimal intensity in the near-field diffraction pattern regardless of wavelength or manufacturing depth. Moreover, unlike PFCAs, where one grating structure relates information exclusively about one component of the 2D Fourier transform, an array of densely packed photodiodes under a phase anti-symmetric grating can relate information about all spatial frequencies transverse to the orientation of the grating.

While a single phase anti-symmetric grating can relate information only about a 1D slice of the 2D Fourier spectrum of the faraway scene, images in general are 2D, containing information at every spatial orientation. To relate spatial information about arbitrary scenes, we need to build phase anti-symmetric gratings of every orientation [6]. It is possible to arrange grating structures that are locally approximately phase anti-symmetric, yet globally comprise every orientation. We are particularly interested in spiral arrangements of phase anti-symmetric gratings (see Figure 2), which sweep out not only a full range of orientations, but also include a range of grating widths that yield optimal information for a corresponding range of incident wavelengths. Simulations show that phase anti-symmetric gratings promise much better image quality in less area than PFCAs.
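The reconstruction step described above can be posed generically as inverting a calibrated linear forward model: the photodiode readings are a linear function of the scene, even though they do not resemble it. The sketch below assumes that framing. The transfer matrix `A` is a random placeholder, not the true diffraction response of a spiral grating, and the regularization weight `lam` is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64    # scene pixels (flattened); size is illustrative
m = 96    # photodiode readings; more readings than unknowns

# Placeholder transfer matrix: in a real imager, A would be calibrated from
# the grating's near-field diffraction patterns. A random matrix mimics the
# "readings do not resemble the scene" property for this sketch.
A = rng.standard_normal((m, n))
scene = rng.random(n)
readings = A @ scene + 0.01 * rng.standard_normal(m)   # noisy sensor outputs

# Tikhonov-regularized least squares:
#   x_hat = argmin_x ||A x - y||^2 + lam * ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ readings)

rel_err = np.linalg.norm(x_hat - scene) / np.linalg.norm(scene)
```

With more readings than unknowns and modest noise, the regularized solve recovers the scene closely; the same machinery applies whatever the (calibrated) optics are, which is what lets computation stand in for a lens.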


Figure 2. Reconstructing a scene with a phase anti-symmetric computational camera. A 100μm-wide spiral arrangement of phase anti-symmetric gratings (a) maps light from faraway scenes (b) to patterns on a photodiode array (c) that do not resemble the faraway scene. However, the scene can (d) be computationally reconstructed. (e) Lensless cameras based on phase anti-symmetric gratings outperform much larger planar Fourier capture arrays.

Our work has revealed a new way to make sensors that, in conjunction with adequate computation, qualify as cameras. Tiny, cheap, lightweight optics can share the burden of image formation with computation. Next, we are interested in developing a lensless thermographic camera, since thermographic lenses are particularly difficult to produce. More generally, by removing the need for a focusing element, we broaden the range of what a camera can be.

I would like to thank David G. Stork for his help in developing optical sensors employing phase anti-symmetric gratings.


Patrick R. Gill
Rambus Labs
Sunnyvale, CA

Patrick R. Gill was a national champion of both mathematics and physics contests in Canada prior to conducting his doctoral work at the University of California at Berkeley and postdoctoral research at Cornell University. He joined the Computational Sensing and Imaging group at Rambus Labs in 2012.


References:
1. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, P. Hanrahan, Light field photography with a hand-held plenoptic camera, Comp. Sci. Tech. Report 2(11), 2005.
2. P. Green, W. Sun, W. Matusik, F. Durand, Multi-aperture photography, ACM Trans. Graph. 26(3), p. 68, 2007.
3. S. Tucker, W. T. Cathey, E. Dowski Jr., Extended depth of field and aberration control for inexpensive digital microscope systems, Opt. Express 4(11), pp. 467-474, 1999.
4. P. R. Gill, C. Lee, D.-G. Lee, A. Wang, A. Molnar, A microscale camera using direct Fourier-domain scene capture, Opt. Lett. 36(15), pp. 2949-2951, 2011.
5. P. R. Gill, Odd-symmetry phase gratings produce optical nulls uniquely insensitive to wavelength and depth, Opt. Lett. 38(12), pp. 2074-2076, 2013.
6. P. R. Gill, D. G. Stork, Lensless ultra-miniature imagers using odd-symmetry spiral phase gratings, Comput. Opt. Sens. Imag., 2013.