
Defense & Security

LADAR puts the puzzle together

The Jigsaw airborne laser radar will identify obscured military targets.

From oemagazine April 2003
April 2003, SPIE Newsroom. DOI: 10.1117/2.5200304.0002

One of the enduring problems for the military is finding and identifying targets concealed by foliage or camouflage. Since the early 1980s, researchers have been exploring the advantages of active laser imaging for this task and have made several attempts to develop systems. The technology, however, has always been too immature. The lasers were too large and heavy and required high power. The detectors lacked the bandwidth and large array formats for reasonable fields of view. Computers were too slow to enable onboard processing. Today, the missing technology pieces are coming together in the Jigsaw program.

A joint effort of the U.S. Army's Future Combat Systems and the Defense Advanced Research Projects Agency (DARPA; Arlington, VA), the Jigsaw program aims to demonstrate a 3-D laser radar (ladar) that can fly on an unmanned aerial vehicle (UAV) and perform rapid, reliable, and confident identification of targets day or night, through foliage or camouflage.

The traditional approach to imaging objects hidden by foliage relies on long-wavelength synthetic aperture radar (SAR) to penetrate the vegetation with minimal attenuation. Foliage-penetrating SAR has proven successful at detecting potential targets at long ranges, day or night, and in all weather.1 Because the wavelength is typically several meters, however, the resolution is limited, and the false-alarm rate for target detection can be high. More importantly, foliage-penetrating SAR is not well suited for high-confidence identification of obscured targets, which requires very-high-resolution imagery.

the concept

Imagine walking through a forest on a sunny day. As the sunlight strikes the canopy of leaves and branches, much of the light is reflected. A small amount of light, however, pokes through gaps in the leaves and branches and creates a sparse pattern of spots called sunflecks on the forest floor. As the sun traverses the sky, the sunflecks move across the ground and randomly change as new gaps in the canopy emerge and others close to block the sunlight.

Figure 1. Jigsaw ladar will fly over a target to collect images from multiple angles (a) and feed them to a processor (b), which will produce a composite (c).

Jigsaw takes advantage of this same effect (see figure 1). The sensor uses a short-pulse laser to illuminate the forest canopy above a potential target. While most of the laser energy is reflected or scattered by the foliage, a small amount passes through the interstices in the canopy to reach the ground.

The radar captures the time of flight of the returned pulses and encodes them as a function of range to form a 3-D image of the scene called a frame. Because of the occlusion, it is unlikely a single 3-D frame of data will provide enough information to identify the target. Instead, the Jigsaw sensor will combine many frames of data from multiple aspects to create a composite, high-resolution 3-D image of the target beneath the foliage.
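The time-of-flight principle above reduces to simple arithmetic: a pulse travels to a surface and back, so the range is half the round-trip time multiplied by the speed of light. The sketch below is illustrative only; the function name and values are ours, not from the Jigsaw design.

```python
# Illustrative sketch of direct-detection ladar ranging:
# range = c * (round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(round_trip_seconds):
    """Range (m) to a reflecting surface from a round-trip time of flight."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of range,
# which is why short pulses resolve canopy from ground.
print(tof_to_range(10e-9))
```

Each returned pulse in a frame is converted this way, giving a range value per pixel; stacking many such frames from different aspects builds the composite image the article describes.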

Because Jigsaw collects range imagery, as opposed to intensity, a human operator can spatially manipulate the image for improved target recognition. In addition, 3-D imagery improves the performance of automatic-target-recognition algorithms, which often rely on projections of a 3-D target model onto the 2-D plane of the passive sensor.2

The Jigsaw design offers improved performance compared to conventional imagers. Given sufficiently high spatial resolution and contrast, passive visible imagers can identify unobscured targets. Unlike passive sensors, which collect only spatial intensity information, the Jigsaw ladar is designed to collect data in three dimensions. Thermal IR imagers may offer nighttime imaging capabilities, but they require larger optics and specialized cooling for optimum performance.3 Laser-based active imaging provides high-resolution day and night imaging and can help eliminate the problem of diurnal contrast reversal common to thermal imagers. The tradeoff for Jigsaw's high performance is that it requires a large, heavy, high-power laser and is limited to short ranges by the two-way atmospheric attenuation of the laser energy.

the challenges

For identification of targets under foliage or camouflage, high-resolution spatial (intensity) imagery is insufficient because many target pixels will be obscured, making it difficult to distinguish the target from the canopy. To resolve a forest canopy from a target on the ground with ladar requires a short pulse of laser energy on the order of 10 ns or less. A variety of airborne laser terrain-mapping systems use this technique to collect measurements of obscured forest floor.

Separating a target from overlaid camouflage requires pulse widths of less than 2 ns and bandwidths of greater than 500 MHz for the time-of-flight approach used by Jigsaw. Resolving such closely spaced returns requires a high-bandwidth detector and readout electronics. For confident identification of a target, the Jigsaw sensors must provide high spatial resolution and fast imaging rates. To limit the amount of scanning, the design requires a detector array.
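The pulse-width and bandwidth figures quoted above follow from the same time-of-flight arithmetic: two surfaces are separable when their range gap exceeds c·τ/2 for pulse width τ, and resolving a pulse of width τ demands a bandwidth on the order of 1/τ. This short check (our own, not from the program) reproduces the numbers in the text.

```python
# Illustrative check of the pulse-width and bandwidth requirements.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(pulse_width_s):
    """Minimum separable range gap (m) between two surfaces: c * tau / 2."""
    return C * pulse_width_s / 2.0

def required_bandwidth(pulse_width_s):
    """Approximate detector/readout bandwidth (Hz) to resolve the pulse: ~1/tau."""
    return 1.0 / pulse_width_s

print(range_resolution(2e-9))    # a 2 ns pulse separates surfaces ~0.3 m apart
print(required_bandwidth(2e-9))  # and calls for ~500 MHz of bandwidth
```

A 2 ns pulse thus resolves camouflage netting draped roughly 30 cm above a target, consistent with the greater-than-500-MHz bandwidth requirement stated above.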

Finding an appropriate detector is a challenge. Conventional CCD detectors behave as light buckets, which fill with electrons depending upon the rate of the incoming photons and the frame rate of the CCD. Driving the detector too fast can introduce greater noise in the output. Moreover, slowing the frame rate of the CCD in low-light conditions allows more time for photons from the scene to be collected. For the short-pulse ladars developed for Jigsaw, CCD detectors do not provide sufficient bandwidth or sensitivity to resolve closely spaced returns.

Although single-element detectors, such as avalanche photodiodes and positive-intrinsic-negative (PIN) diodes, can achieve the desired bandwidth, the associated readout electronics must also be able to capture the pulse with high resolution. The bandwidth of the readout electronics is proportional to the area on the chip available for the circuit design. For optimal performance in detector arrays, the readout electronics typically lie adjacent to or behind each pixel. If the electronics are placed adjacent to the pixel, the pixel pitch must increase along with the size of the entire detector. This presents not only yield issues in manufacturing but also optical-design challenges. Bonding the detector on top of the readout electronics, on the other hand, restricts the area for circuits to the pitch of the detector elements, greatly limiting the number of circuits, and hence the bandwidth. Jigsaw program members are developing state-of-the-art focal-plane arrays to overcome this constraint.

Another challenge for the Jigsaw team is the development of an efficient and compact laser transmitter with a short pulse and high repetition rate. Eye safety is also a major concern; for the sensor to be effective on the battlefield, the designers must minimize the potential eye hazard.

The system challenge is to develop a payload weighing only a few pounds and powered by a battery that can fly on a UAV. In addition, the payload must be able to rapidly perform the coordinate transformation, registration, combination, and compression of the 3-D image to fit within the bandwidth of the UAV's communications downlink.

the demonstration

Figure 2. Laser pulses passing through gaps in dense foliage reflect off the test targets and return to the sensors (a to c), which combine them to reveal an image of a Humvee and a Chevy Blazer (d).

We demonstrated the feasibility of the Jigsaw concept using a ladar testbed developed by the U.S. Army Night Vision Electronics Systems Directorate. We studied two targets placed behind a tree line of dense foliage. The ladar collected high-resolution 3-D images from a variety of aspect angles (see figure 2). As expected, individual frames did not provide sufficient information to enable target identification; however, with the help of registration algorithms developed by Sarnoff Corp. (Princeton, NJ), the composite 3-D image clearly revealed the two targets. Although the ground test was a major success, the sensor required nearly two minutes to scan a single 3-D image from a fixed location. The Jigsaw sensor will have to collect 10 or more images in only a few seconds from an airborne platform.
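The compositing step in this demonstration amounts to transforming each frame's 3-D points into a shared ground frame and accumulating them. The sketch below shows only that geometric core, under the assumption that each frame's sensor pose is already known (e.g. from GPS/INS); the actual Sarnoff registration algorithms, which refine these poses, are not reproduced here, and all names are ours.

```python
import numpy as np

def compose_frames(frames, poses):
    """Merge per-aspect 3-D frames into one composite point cloud.

    frames: list of (N_i, 3) arrays of points in each sensor's coordinates.
    poses:  list of (R, t) pairs, R a 3x3 rotation and t a 3-vector,
            mapping sensor coordinates into a shared ground frame.
            (Assumed known here; a real system refines them by registration.)
    """
    clouds = [pts @ R.T + t for pts, (R, t) in zip(frames, poses)]
    return np.vstack(clouds)

# Two toy frames of the same two points, seen from poses offset by 1 m in x:
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
composite = compose_frames(
    [pts, pts],
    [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([1.0, 0.0, 0.0]))],
)
print(composite.shape)
```

Points that the canopy occludes from one aspect survive from another, which is why the union of many sparse frames reveals the whole target.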

Figure 3. Platform motion, obscuration, and trajectory stand out as the issues that most affect the ability of Jigsaw to identify a truck under a range of trees.

The modeling and simulation effort was a challenge with many aspects (see figure 3). The most important considerations were the degree of target obscuration, the trajectory of the drone, and the motion of the UAV platform. Using high-fidelity tree models provided by Areté Associates, Dynetics Inc. (Ft. Walton Beach, FL) developed 3-D test scenes and modeled the performance of candidate ladar sensors on a variety of UAVs. The results of the simulations for targets under dense obscuration using realistic UAV platform motion were highly promising and agreed with the data obtained with the Army Directorate's sensor.

Multiple contractor teams developed designs for the Jigsaw ladars. The goal of the program was to build a prototype payload to fly on a helicopter with the ability to collect frames, rapidly form a composite 3-D image, and confidently identify the target using data from only a single pass. While the shape and size of the prototype were important, its performance was essential. In May 2002, we selected two contractor teams to build prototypes.

One version, built by Irvine Sensors Corp. (Costa Mesa, CA) and Northrop Grumman Corp. (Apopka, FL), incorporates a 1.06-µm laser and an 8 x 128 element indium-gallium-arsenide PIN diode detector bonded to a stacked set of readout chips. Irvine Sensors took a novel approach to the focal-plane design, bonding a column of pixels from a detector array to the end of another readout chip. The area for the readout electronics is no wider than a pixel but takes advantage of the full length of the chip. This long, narrow channel provides significantly more circuit area, allowing more circuits and hence a much higher-bandwidth design. The companies have built the ladar, and ground testing is underway. The sensor will begin airborne flight tests in the near future.

Another sensor, designed by Harris Corp. (Melbourne, FL) and the Massachusetts Institute of Technology's Lincoln Laboratory (MITLL; Lexington, MA), uses a 532-nm microchip laser and a 32 x 32 array of avalanche photodiodes operated in photon-counting mode. Because the detector triggers on single photons, the laser power required for each pulse is significantly reduced. In addition, to maximize the energy returned to the active area of each detector element, the transmitter contains a holographic diffractive optic that produces a 2-D array of spots aligned with each pixel in the sparse array. The laser operates at a very high pulse repetition rate to capture several single-photon detections for each range return. MITLL has developed a technique called range coincidence processing that estimates the range position for each return and generates a range profile of the target. To eliminate scanning mirrors, the lab developed counter-rotating wedge optics, known as Risley prisms, to scan a target area with a rosette-like pattern. This ladar prototype successfully completed its first checkout flight on a helicopter in December 2002.
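The idea behind range coincidence processing can be illustrated with a simple histogram: single-photon detections from noise land at random ranges, while detections from a real surface repeat across pulses, so range bins whose counts clear a coincidence threshold mark true returns. This is our own minimal sketch of that idea, not MITLL's algorithm; the function, parameters, and thresholds are illustrative assumptions.

```python
import numpy as np

def coincidence_ranges(detections_m, bin_m=0.15, min_count=3):
    """Estimate surface ranges from repeated single-photon detections.

    detections_m: range measurements (m) accumulated over many pulses
                  at one pixel. Noise fires at random ranges; a real
                  surface produces detections that cluster in one bin.
    Returns the centers of range bins whose counts reach min_count.
    """
    detections_m = np.asarray(detections_m, dtype=float)
    if detections_m.size == 0:
        return np.array([])
    edges = np.arange(detections_m.min(), detections_m.max() + bin_m, bin_m)
    counts, edges = np.histogram(detections_m, bins=edges)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[counts >= min_count]

# Four detections cluster near 10 m (a surface); one stray at 30 m (noise):
print(coincidence_ranges([10.0, 10.05, 9.98, 10.02, 30.0]))
```

Because each pulse needs to contribute only a photon or two, the laser energy per pulse can stay low, which is what makes the photon-counting approach attractive for a compact, eye-safer transmitter.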

The Jigsaw program is on track to demonstrate the first-ever capability to rapidly and reliably perform target identification day or night, through foliage or camouflage. Future applications of the Jigsaw technology include the development of real-time, 3-D video imaging to track moving targets under a canopy, maintain perimeter security, and designate targets. By adding multiple laser wavelengths, it will be possible to perform active multispectral imaging, with improved radiometric performance and fewer shadows. oe


1. The DARPA Counter Camouflage Concealment and Deception program has built foliage-penetrating SAR systems and continues to develop advanced hardware and software.

2. The DARPA Exploitation of 3-D Data (E3D) program is exploring these advantages.

3. DARPA has developed microbolometer imagers, which do not require specialized cooling.

Robert Hauge
Robert Hauge is the Jigsaw program manager for the Information Exploitation Office at DARPA, Arlington, VA.