

Under Cover

Flash laser radar system sees obscured targets.

From oemagazine April 2005
31 April 2005, SPIE Newsroom. DOI: 10.1117/2.5200504.0005

It has long been accepted that a sensor capable of producing a full 3-D image of a target would simplify the task of target acquisition and recognition, and several programs have been undertaken to develop just such a sensor. Producing a useful image requires range measurements at many individual pixels. The most common approach is an incoherent direct time-of-flight (TOF) design. To image a target located a distance d away, the simplest implementation uses a short-pulse (3 to 5 ns) laser to generate an illumination pulse; a timing circuit records the time of this event. The beam traverses a distance equal to 2d and arrives at a photodetector in time t, expressed as

t = 2d/c

where c is the speed of light. A time interval of 1 ns, for example, represents a 300-mm round-trip flight or an absolute range of 150 mm.
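As a quick sanity check on these numbers, the relation can be evaluated directly. This is a minimal Python sketch; the function name is ours, not part of the system described here.

```python
# Range from round-trip time-of-flight: t = 2d/c rearranged to d = c*t/2.
C = 299_792_458.0  # speed of light in m/s (the text rounds this to 3e8)

def tof_to_range(t_seconds: float) -> float:
    """Return absolute range d for a measured round-trip time t."""
    return C * t_seconds / 2.0

# A 1-ns interval corresponds to a ~300-mm round trip, or ~150 mm of range.
one_ns_range = tof_to_range(1e-9)
```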

Such architectures suffer from some technical difficulties, however. The system makes the measurements one pixel at a time, necessitating extremely stable, accurate (and expensive) optical systems, as well as sophisticated computer programs, to maintain pixel-to-pixel alignment.

A sensor that could acquire all pixel measurements simultaneously would greatly relax the optical and computational requirements. It is a good idea in theory, but early attempts at such a sensor were limited to only a few pixels before the complexity of the wiring and supporting electronics became unmanageable.

An ideal system would capture all of the scene information in a single shot or laser flash. This 3-D flash ladar approach would freeze each pixel in relation to the others, reducing the need to preprocess the image to correct pixel registration. The approach would reduce pointing speed, accuracy, and agility requirements to only those necessary to track the target, as opposed to what would be needed for pixel-to-pixel alignment. Because the entire scene would be illuminated, the per-pulse energy requirements would be similar to those for a range-gated approach. Only one pulse would be needed for each frame, however, so the average laser power required would be comparable to that of a single-pixel scanned ladar approach. This could be a factor of at least 10 less than that needed by a comparable range-gated system.

Vision to Reality

In the Advanced Obscuration Penetrating Laser Radar Program at the Electro-Optic Sensor Technology Division of the Air Force Research Laboratory Sensors Directorate (AFRL/SNJ; Wright-Patterson AFB, OH), we are developing an approach that will enable the capture of a complete 3-D ladar image (angle-angle-range (θ, φ, r)) with a single pulse. During this project, members produced a read-out integrated circuit (ROIC) that provides individual measuring circuits (referred to as unit cells) for the individual detectors in a detector array. Each unit cell contains the processing circuitry to capture the return-pulse signal from the detector at its pixel, and operates independently of the other unit cells in the array.

Project members designed this ROIC to be bump-bonded directly to the backside of a detector array (see figure 1). The resultant hybrid sensor contains independent range-finder circuitry for each pixel, a configuration that also eliminates the wiring problems associated with earlier attempts. The chip records the TOF for each pixel for subsequent serial read-out. With this approach, range resolution is determined by the laser pulse width and electronics bandwidth, independent of the image framing rate.

Figure 1: The hybrid 3-D flash sensor consists of a read-out integrated circuit bump-bonded to a detector array chip.

The original version of this concept was a 32 x 32-pixel device with a silicon photodiode array bonded to the processor chip (Advanced Scientific Concepts; Santa Barbara, CA); this approach limited the useful wavelengths of the sensor to 1 µm and below. The second generation is a 128 x 128-pixel device that takes advantage of advances in integrated-circuit processing to fabricate the unit cells on a 100-µm pitch and add functions to the imager.

Design Details

The hybrid design is a very general configuration compatible with a variety of detectors, such as silicon, indium-gallium-arsenide (InGaAs), or mercury-cadmium-telluride (HgCdTe) positive-intrinsic-negative (PIN) photodiode or avalanche photodiode arrays. The system that generated the images discussed here incorporates a laser operating at an eye-safe wavelength of 1.54 µm and an InGaAs PIN photodiode detector array (Sensors Unlimited; Princeton, NJ). The detectors consist of 20-µm elements, spaced to match the 100-µm pitch of the ROIC. This yields a fill factor of approximately 4% for the detectors.
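The quoted ~4% fill factor follows directly from the element size and pitch. The back-of-the-envelope arithmetic, with variable names of our choosing:

```python
# Detector fill factor: active element area over total unit-cell area.
element_width = 20e-6   # 20-µm detector element
pixel_pitch = 100e-6    # 100-µm ROIC unit-cell pitch

fill_factor = (element_width / pixel_pitch) ** 2  # ~0.04, i.e. ~4%
```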

The ROIC read-out is the critical component of the hybrid. Each pixel contains circuitry that independently counts time from the emission of a laser pulse to the capture of the reflected pulse at that pixel. In addition, the pixel circuitry captures temporal information about the returned pulse. The new design incorporates both a digital clock, which determines range to the target through TOF measurements, and a return-pulse shape sampler. One advantage of the return-pulse sampler is that it resolves secondary peaks like those produced by vehicles hidden behind camouflage, which makes it possible to image the shape of those obscured targets.
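To illustrate how sampling the return-pulse shape can separate a camouflage echo from the vehicle behind it, here is a stand-alone simulation. The waveform model, pulse positions, and peak-finding logic are illustrative assumptions on our part, not the actual unit-cell design.

```python
import math

def gaussian(t: float, t0: float, width: float, amp: float) -> float:
    """Idealized laser return pulse centered at time t0."""
    return amp * math.exp(-((t - t0) / width) ** 2)

DT = 2.5e-9   # 400-MHz sample interval, as in the first 128 x 128 hybrid
N = 20        # 20 slices captured per return

# Hypothetical two-surface return: a weak early echo from camouflage netting
# followed by a stronger, later echo from the vehicle hidden behind it.
samples = [gaussian(i * DT, 10e-9, 2e-9, 0.4) +
           gaussian(i * DT, 30e-9, 2e-9, 1.0) for i in range(N)]

# Local maxima in the sampled waveform mark the two reflecting surfaces.
peaks = [i for i in range(1, N - 1)
         if samples[i] > samples[i - 1] and samples[i] >= samples[i + 1]]
# Two peaks 8 slices apart correspond to surfaces ~3 m apart in range.
```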

In the first version of the 128 x 128 hybrid, the pulse sampler operated at a frequency of 400 MHz, capturing 20 slices of the return pulse at 2.5-ns intervals for a total sample period of 50 ns. The laser, a flashlamp-pumped CFR400 with an optical parametric oscillator (Big Sky Laser Technologies Inc.; Bozeman, MT), produced a 1.5-µm pulse with a width of about 5 ns at a per-pulse energy of 70 mJ.
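These sampler parameters fix the range geometry of each frame. The arithmetic below (plain Python, rounded speed of light, our variable names) shows the implied depth resolution per slice and the total range window:

```python
C = 3.0e8                 # speed of light, m/s (rounded)
F_SAMPLE = 400e6          # pulse-sampler clock frequency, Hz
N_SLICES = 20             # slices captured per return

dt = 1.0 / F_SAMPLE                        # 2.5 ns between slices
sample_period = N_SLICES * dt              # 50 ns total sample period
range_per_slice = C * dt / 2.0             # 0.375 m of range depth per slice
range_window = N_SLICES * range_per_slice  # 7.5 m of total range coverage
```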

Seeing Is Believing

Figure 2. A series of data slices reflects the increasing illumination of the target at one sample step (a), three sample steps (b; 75 cm further), 10 sample steps (c; 3.75 m), and 18 sample steps (d; 6.75 m). In the final frame, the pulse has actually moved past the truck.

Using this sensor, AFRL/SNJM collected a series of images representing successive slices captured from a single return at one-, three-, 10-, and 18-sample step intervals (see figure 2). In the first frame (upper left corner), the laser pulse has just started to illuminate the front corner of a truck and the ground before it. In the next frame (upper right), the pulse has traveled approximately 75 cm farther toward the truck and illuminated different parts of the vehicle. In the third frame (lower left), the pulse has moved 3.75 m. In the final frame (lower right), the pulse has moved 6.75 m, so that the leading edge has actually moved past the truck; the trailing edge of the pulse continues to illuminate portions of the vehicle.

To demonstrate the potential utility of the sensor in detecting objects that might be partially occluded by foliage, camouflage, or other obscurants, we placed a test subject and a stool before a wall and lowered a Venetian blind between the subject and the sensor (see figure 3). In a conventional flash photo of the scene, the subject is completely obscured. A data frame taken with the hybrid imager at slice one (37.5 cm) shows the blinds as the predominant object. In slice six (2.25 m), the stool starts to become discernible while the blinds are still reflecting some of the laser tail. Finally, in slice 10 (3.75 m), the person comes into view.

Figure 3. In a system test, we placed a subject and a stool (a) behind a Venetian blind (b). Sequential data slices show the blind (c; sample step one, 37.5 cm), the stool (d; sample step six, 2.25 m), and the person (e; sample step 10, 3.75 m).
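Assuming the 37.5-cm-per-slice depth implied by the 2.5-ns sample interval, the slice indices quoted in figures 2 and 3 map to range as follows (a hypothetical helper of ours, not part of the sensor software):

```python
RANGE_PER_SLICE = 0.375  # m of range per 2.5-ns sample step (c * dt / 2)

def slice_to_range(n: int) -> float:
    """Range depth corresponding to sample-step index n."""
    return n * RANGE_PER_SLICE

# slice 1 -> 0.375 m, slice 6 -> 2.25 m, slice 10 -> 3.75 m, slice 18 -> 6.75 m
```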

As we gain more experience with this sensor, we will continue to make improvements in the ROIC, enhancing it to increase dynamic range and reduce the unit-cell size. Team members are working to produce a database that can be used to develop algorithms to display images and extract objects and other information from the data. oe

Richard Richmond
Richard Richmond is laser radar technology team leader for the Electro-Optics Combat ID Technology Branch, Sensors Directorate, AFRL/SNJM, WPAFB OH.