Long-range 3D imaging lidar
Lidar (light detection and ranging) is a commonly used remote sensing technique in which a laser is used to measure the distance to an illuminated target. For military purposes, 3D imaging lidar can be used for target identification, tracking, and surveillance in air-to-ground and ground-to-ground engagements. Advanced development of 3D imaging technologies is now focused on providing cost-effective, integrated system solutions that offer performance, size, and weight advantages.
Single-photon counting using Geiger-mode (GM) detection is a particularly promising candidate technology for improving 3D imaging lidar. A 3D lidar concept that uses a 32 × 32 GM indium phosphide/indium gallium arsenide (InP/InGaAs) semiconductor array was originally developed at the Massachusetts Institute of Technology's Lincoln Laboratory,1 and subsequently at Princeton Lightwave2 and Spectrolab,3 primarily for short-range (<200m) applications. This array can now be incorporated into a new system for long-range lidar measurements.
We have developed a lidar system at Selex, shown in Figure 1(a), that is able to record range (or depth) profiles to a precision of <4cm.4 It is also able to penetrate clutter, such as foliage and tree canopies, distributed along its range. Although the data is recorded from a single vantage point (necessary for extreme-range engagements), the complete shape of the canopy and the partially occluded terrain topography behind the clutter can be measured with high accuracy. This work forms part of our wider development of GM time-of-flight sensing to improve 3D lidar, hyperspectral, and polarimetric imaging concepts for airborne targeting and surveillance.5–7

We have optimized the GM array in our system for a wavelength of 1.5μm. We use a mode-locked erbium-doped fiber amplifier (EDFA) laser that is operated at 100kHz (with ∼4μJ, ∼800ps pulses). This laser is synchronized to, and limits, the frame rate of the instrument. The laser beam is transmitted through a small beam-expanding telescope in a bistatic transceiver design. We record the data output from the array with commercial data-logging software provided by Princeton Lightwave, and we process it with proprietary data analysis software that we have developed.
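As a simple illustration of the underlying time-of-flight arithmetic, the Python sketch below converts per-pixel photon timing bins from a 32 × 32 array into range. The timing bin width (TDC_BIN), the frame format, and the tof_to_range helper are assumptions chosen for this example; they do not describe the commercial or proprietary software mentioned above.

import numpy as np

C = 2.998e8          # speed of light (m/s)
TDC_BIN = 250e-12    # assumed timing bin width (s); ~250ps gives ~3.7cm range bins

def tof_to_range(tof_bins):
    """Convert time-of-flight bin indices into one-way range (m)."""
    return 0.5 * C * (tof_bins * TDC_BIN)

# One hypothetical frame from a 32 x 32 Geiger-mode array; -1 marks pixels
# that recorded no photon event in this frame.
frame = np.full((32, 32), -1, dtype=int)
frame[10, 12] = 242_000          # assumed bin index for a return near 9km

ranges = np.where(frame >= 0, tof_to_range(frame), np.nan)
print(ranges[10, 12])            # ~9,069m for the assumed bin width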
To assess the performance of our lidar system, we conducted a variety of tests on targets at distances between about 6 and 9km. In one test, we chose a site that included distributed clutter from a tree canopy (at a range of 9.086km) against sloping terrain: see Figure 1(b). Figure 1(c) shows that the narrow field of view of our GM lidar contains the target tree. From the 3D point cloud that we measured, we can produce a full side view of the terrain behind the tree (see Figure 2). Using the range profile along the line of sight (LOS), we can determine the absolute depth profile. We calculate that the slope of the ground behind the tree is ∼24.6%, the tree is 8m tall, and the depth of the canopy (along the beam LOS) is also 8m. We use color-coded range maps (see Figure 3) to illustrate depth penetration along the LOS.
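The terrain statistics quoted above follow from straightforward geometry on the point cloud. The sketch below shows one way such numbers could be derived, assuming hypothetical ground_xyz and canopy_xyz arrays of classified terrain and canopy points; it illustrates the calculation rather than our analysis software.

import numpy as np

def ground_slope_percent(ground_xyz):
    """Least-squares slope (%) of terrain returns along the beam direction."""
    x, z = ground_xyz[:, 0], ground_xyz[:, 2]
    return 100.0 * np.polyfit(x, z, 1)[0]      # gradient dz/dx as a percentage

def tree_height(canopy_xyz, ground_xyz):
    """Height of the highest canopy return above the fitted ground line."""
    slope, intercept = np.polyfit(ground_xyz[:, 0], ground_xyz[:, 2], 1)
    top = canopy_xyz[canopy_xyz[:, 2].argmax()]
    return top[2] - (slope * top[0] + intercept)

# Hypothetical classified points: ground rising 24.6cm per metre, plus a tree crown.
x = np.linspace(0.0, 40.0, 200)
ground = np.c_[x, np.zeros_like(x), 0.246 * x]
canopy = np.array([[20.0, 0.0, 0.246 * 20.0 + 8.0]])
print(ground_slope_percent(ground))   # ~24.6
print(tree_height(canopy, ground))    # ~8.0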
Our results show that GM lidar can clearly be applied to military remote sensing operations. In these situations, fiber lasers operating at low pulse energy and high repetition rate offer advantages over solid-state lasers that operate at high pulse energy and low repetition rate. In addition, fiber lasers are cheaper and much smaller (about the size of a mobile phone) than the solid-state alternatives. The fiber lasers also operate at low average power (∼0.4W), compared with ∼2W for analog active imaging at long range. With our most recent innovation, we use proprietary encoding schemes to obtain absolute range and depth profiles.8 At a 100kHz repetition rate the unambiguous range is only ∼1.5km, so several pulses are in flight to a target at 9km; the encoding resolves this ambiguity, removing the requirement for gated arrays and taking advantage of the high frame rate that is available.
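The details of our encoding schemes are proprietary, but the general idea of recovering absolute range with an encoded pulse train can be sketched generically. In the example below, a pseudo-random subset of the 100kHz pulse slots is fired and photon arrival times are correlated against candidate time-of-flight delays; the code length, bin width, and jitter values are arbitrary assumptions made for illustration only.

import numpy as np

C = 2.998e8      # speed of light (m/s)
PRF = 100e3      # pulse slots per second; 10us spacing -> ~1.5km unambiguous range
T_SLOT = 1.0 / PRF
N_SLOTS = 127    # length of the pseudo-random on/off code

rng = np.random.default_rng(0)
code = rng.integers(0, 2, N_SLOTS)                  # which slots actually fire
fire_times = np.arange(N_SLOTS)[code == 1] * T_SLOT

def simulate_arrivals(range_m, jitter_s=1e-9):
    """Photon arrival times (s) from a single target, one per fired pulse."""
    tof = 2.0 * range_m / C
    return fire_times + tof + rng.normal(0.0, jitter_s, fire_times.size)

def recover_absolute_range(arrivals, bin_s=10e-9, max_range_m=15e3):
    """Score every candidate time of flight and return the best-matching range."""
    delays = np.arange(0.0, 2.0 * max_range_m / C, bin_s)
    scores = [
        (np.abs(arrivals[:, None] - (fire_times[None, :] + d)) < bin_s).sum()
        for d in delays
    ]
    return 0.5 * C * delays[int(np.argmax(scores))]

arrivals = simulate_arrivals(9086.0)
print(recover_absolute_range(arrivals))   # ~9086m, well beyond the 1.5km ambiguity

Because the firing pattern is aperiodic, only the true delay lines up with every fired pulse; delays offset by multiples of the 10μs slot spacing match only about half of them, so the correlation peak is unambiguous.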
In summary, we have designed a new GM 3D imaging lidar system that can operate at long ranges and penetrate obstructions that clutter the LOS. We are currently developing signal encoding and processing protocols that will allow rapid 3D imaging and laser range finding at data rates of 10–100Mb/s. Operating the instrument at very high frame rates will allow near real-time measurements, so automated post-processing techniques can be used to correct for atmospheric-turbulence-induced blurring of images. Our results have shown the potential of our system both for military applications and for future space-borne Earth observation,9 where GM arrays negate the need for multiple-beam lidar designs.
The author acknowledges useful discussions with colleagues at Princeton Lightwave and at Micro Photon Devices.
Robert Lamb is the chief technical officer for electro-optics at Selex ES. He directs the applied research program for electro-optics, which covers laser designators, countermeasures, and future airborne targeting and tracking sensors. He is an honorary professor at Heriot-Watt University in Edinburgh.