Lenses are the very symbol of science, from telescope to microscope. But they also have drawbacks, such as high cost, large size (much of which is essentially empty space), and diffraction-limited resolution. They are very useful when nothing is known about the scene. However, if we already have a priori knowledge, imaging may waste precious information contained in the incident wavefront.1,2 Thus, there may be instances when enough is known that imaging becomes a choice, not a necessity.
The balance may tip to nonimaging point location when we know that the scene is composed of point sources at various positions. Each point has three parameters: horizontal and vertical angles (θH and θV) and intensity (I), or what purists call 'irradiance.' Ideally, three measurements per point would suffice, but more yield a better signal-to-noise ratio.
We have focused on understanding how such a system would work. With no image to measure, there must be some way to encrypt the wavefront's direction of arrival so that computer decryption will yield the desired point parameters. Fortunately, detectors are very-low-cost elements with a profound angle-of-arrival sensitivity.3 The latter is based purely on geometry, which dictates that if a flat square is held in a collimated beam of light, the amount of light it intercepts is a function of the tilt angle. The maximum flux is recorded when the illumination is normal to the square, and no signal is detected when the rays and the square are coaligned. Between these two extremes, the detected flux is the maximum value multiplied by the cosine of the tilt angle. We plan to use this phenomenon, the obliquity effect, to encode the incoming wavefront's direction of arrival.
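The obliquity effect is easy to sketch numerically. The snippet below is a minimal illustration of the cosine law only, not our laboratory model; the function name `detector_signal` and its parameters are hypothetical.

```python
import numpy as np

def detector_signal(i_max, tilt_deg):
    """Flux intercepted by a flat detector tilted away from normal incidence.

    i_max is the signal at normal incidence; the obliquity (cosine) law
    scales it by cos(tilt), and a facet tilted past 90 degrees sees nothing.
    Illustrative sketch only.
    """
    tilt = np.radians(tilt_deg)
    return np.maximum(0.0, i_max * np.cos(tilt))

print(detector_signal(1.0, 0.0))   # full signal at normal incidence
print(detector_signal(1.0, 60.0))  # roughly half the signal at 60 degrees
print(detector_signal(1.0, 90.0))  # effectively zero at grazing incidence
```

Note that the tilt angle alone does not identify the source direction from a single reading; a given signal level is consistent with a whole cone of arrival angles, which is why the spatial patterns discussed next are needed.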
Figure 1. There are two feasible ways of using angle-of-incidence sensitivity to produce a unique detector pattern for each angle of arrival: curve the detector array (A) or the wavefront (B). The cylindrical surface configuration is insensitive to one of the two angles but very sensitive to the other. Two of these would cover 4π steradians.
To produce a distinctive spatial pattern, we must curve either the detector array or the wavefront.4 That observation leads to two generic types of sensors. One example of each is shown in Figure 1, although many other valid geometries exist; all pair one curved component with one flat one. In each case, a change in wavefront direction produces a new spatial pattern: the encryption just proposed. Yet retrieval of the encrypted parameters is not the ultimate goal. Rather, we seek the number of points and the θH, θV, and I of each. That requires a suitable decryption method, such as Bayesian inversion or maximum likelihood.
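To illustrate the encryption step, consider a hypothetical five-facet stand-in for the curved detector array of Figure 1(A): each facet obeys the cosine law with respect to its own normal, so every direction of arrival stamps a distinct signal pattern across the array. The geometry and names here are assumptions made for the sketch, not our actual device.

```python
import numpy as np

def pattern(theta_deg, intensity=1.0, facet_normals_deg=(-60, -30, 0, 30, 60)):
    """Cosine-law signals on detector facets whose normals point at the
    given angles (degrees). Facets facing away from the source see nothing.
    Hypothetical five-facet geometry, a crude stand-in for Figure 1(A).
    """
    theta = np.radians(theta_deg)
    normals = np.radians(np.asarray(facet_normals_deg, dtype=float))
    return intensity * np.clip(np.cos(theta - normals), 0.0, None)

# Each direction of arrival stamps its own pattern on the array:
print(pattern(0.0))   # symmetric pattern, center facet fully lit
print(pattern(20.0))  # pattern shifts toward the facets facing the source
```

Because the mapping from (θ, I) to the pattern is distinct for each direction within the field of view, a computer can in principle invert it, which is the decryption problem discussed next.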
We are building and characterizing computer and laboratory models for comparison purposes. To gain a basic understanding, we have computer modeled simple point-measuring devices and then used maximum-likelihood methods to see if they could handle multiple noisy points gracefully.5 Table 1 compares imaging and nonimaging point locators. Nonimaging point locators have several potential advantages, including compactness, wider field of view, and a broader wavelength range. In addition to modeling and building such systems, we are also studying other questions. We are trying to determine which mathematical approach—maximum likelihood or Bayesian inversion—is best for information retrieval. We are also determining whether the imaging or nonimaging system will give the best resolution. While a lens is a matched filter for a point source, Shannon's coding theorem suggests that the nonimaging system provides better encryption.
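To suggest how maximum-likelihood decryption might recover θ and I from noisy detector signals, here is a toy grid search for a hypothetical five-facet cosine-law sensor, assuming independent Gaussian detector noise (under which maximizing the likelihood reduces to minimizing squared residuals). The facet geometry, search grids, and noise level are illustrative assumptions only, not the models of references 3-5.

```python
import numpy as np

rng = np.random.default_rng(0)
# Normals of five hypothetical detector facets (radians)
FACETS = np.radians([-60.0, -30.0, 0.0, 30.0, 60.0])

def forward(theta, intensity):
    """Noise-free cosine-law signals for a point source at angle theta (rad)."""
    return intensity * np.clip(np.cos(theta - FACETS), 0.0, None)

def ml_estimate(measured):
    """Grid-search maximum-likelihood estimate of (theta_deg, intensity).

    With independent Gaussian detector noise, maximizing the likelihood
    is equivalent to minimizing the sum of squared residuals.
    """
    best, best_err = None, np.inf
    for theta in np.radians(np.linspace(-45.0, 45.0, 361)):
        for intensity in np.linspace(0.5, 1.5, 101):
            err = np.sum((measured - forward(theta, intensity)) ** 2)
            if err < best_err:
                best, best_err = (np.degrees(theta), intensity), err
    return best

truth = forward(np.radians(10.0), 1.2)
noisy = truth + rng.normal(0.0, 0.01, size=truth.shape)
theta_hat, i_hat = ml_estimate(noisy)
print(theta_hat, i_hat)  # lands close to the true values, 10.0 and 1.2
```

A brute-force grid is used here only for transparency; gradient-based optimization or a Bayesian posterior over (θ, I) would be the natural refinements for multiple points.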
Table 1. Nonimaging angle detectors have significant advantages over their imaging counterparts.

| Property | Imaging system | Nonimaging system |
|---|---|---|
| Volume, weight, etc. | Sometimes large | Can be tiny |
| Cost | Often huge | Much less |
| Field of view | Up to approximately π sr | As much as you like, up to 4π sr |
| Resolution-limiting factor | Diffraction | Signal-to-noise ratio |
| Wavelength domain | Limited by lens materials | Radio to x-rays are practical |
In collaboration with Leonid Yaroslavsky and some of his students, we have published three articles in this area.3–5 The field of nonimaging sensors is not new. Much of the work is discussed in, or was inspired by, our past work.4 A number of our colleagues have also worked in a closely related field, computational imaging.1
In certain scenarios, nonimaging point locators could provide a significant advantage over traditional lenses (see Figure 2). These devices are less expensive, less bulky, and their resolution is not limited by diffraction. We have just completed computer models and a simple experimental model to address this question. Our next step is to characterize performance under many different conditions, and in many different wavelength domains, for both types of sensor.
Figure 2. Advantages of nonimaging systems, when applicable.
The work described here was supported in part by the Air Force Research Laboratory at Wright Laboratory, Ohio, under their Minority Leaders Program. I deeply appreciate the wonderful assistance from two people (Leonid Yaroslavsky and Ramarao Inguva) who both view image processing as something compatible with physics and not just ad hoc.
H. John Caulfield
John Caulfield is widely published and honored for his work in many fields. He has edited SPIE's flagship journal, Optical Engineering, and received more honors and prizes than anyone else in SPIE history, including the Dennis Gabor Award and the SPIE Gold Medal. He also served on the SPIE board for 15 years. He has chaired 12 conferences, produced two critical reviews and three hardcover books, and assembled three milestone books.