Lidar (light detection and ranging) is a widely used remote sensing technique in which a laser measures the distance to an illuminated target. For military purposes, 3D imaging lidar can be used for target identification, tracking, and surveillance in air-to-ground and ground-to-ground engagements. Advanced development of 3D imaging technologies now focuses on cost-effective, integrated system solutions that offer performance, size, and weight advantages.
Single-photon counting using Geiger-mode (GM) detection is a particularly promising candidate technology for improving 3D imaging lidar. A 3D lidar concept that uses a 32 × 32 GM indium phosphide/indium gallium arsenide (InP/InGaAs) semiconductor array was originally developed at the Massachusetts Institute of Technology's Lincoln Laboratory,1 and subsequently at Princeton Lightwave2 and Spectrolab,3 primarily for short-range (<200m) applications. This array can now be incorporated into a new system for long-range lidar measurements.
We have developed a lidar system at Selex—see Figure 1(a)—that is able to record range (or depth) profiles to a precision of <4cm.4 It is also able to penetrate through clutter—foliage and tree canopies—that is distributed along its range. Although the data is recorded from a single vantage point (necessary for extreme range engagements), the complete shape of the canopy and the partially occluded terrain topography behind the clutter can be measured with high accuracy. This work represents advanced GM time-of-flight sensing developments we conducted to improve 3D lidar, hyperspectral, and polarimetric imaging concepts for airborne targeting and surveillance.5–7
Figure 1. (a) The Selex Geiger-mode (GM) lidar (light detection and ranging) system. (b) A line-of-sight (LOS) view obtained during a test of the lidar system. The target tree is at a distance of about 9km. (c) Image from the lidar telescope, showing the tree within the GM camera's field of view (white box).
We have optimized the GM array in our system for a wavelength of 1.5μm. We use a mode-locked erbium-doped fiber amplifier (EDFA) laser operated at 100kHz (with ∼4μJ, ∼800ps pulses). The laser is synchronized to the detector array and thus sets, and limits, the instrument's frame rate. The beam is emitted through a small beam-expanding telescope in a bistatic transceiver design. We record the array output with commercial data-logging software provided by Princeton Lightwave and with additional proprietary data analysis software that we have developed.
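As a rough check on these numbers (a back-of-envelope sketch, not the system's actual processing), the quoted pulse energy and repetition rate fix the laser's average power, and the pulse period fixes the conventional unambiguous range of a uniform pulse train:

```python
# Back-of-envelope laser timing figures from the quoted parameters
# (100 kHz repetition rate, ~4 uJ pulses). Illustrative only.
C = 299_792_458.0  # speed of light, m/s

rep_rate = 100e3       # pulse repetition rate, Hz
pulse_energy = 4e-6    # pulse energy, J

avg_power = pulse_energy * rep_rate      # average optical power, W
unambiguous_range = C / (2 * rep_rate)   # max range before pulse returns overlap, m

print(f"average power: {avg_power:.1f} W")          # ~0.4 W
print(f"unambiguous range: {unambiguous_range:.0f} m")  # ~1.5 km
```

Note that the ∼1.5km unambiguous range of a uniform 100kHz pulse train is far shorter than the ∼9km test ranges described below, which is why the encoding schemes discussed later are needed to recover absolute range.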
To assess the performance of our lidar system, we conducted a variety of tests on targets at distances between about 6 and 9km. In one test, we chose a site that included distributed clutter from a tree canopy (at a range of 9.086km) against a sloping terrain: see Figure 1(b). Figure 1(c) shows that the narrow field of view of our GM lidar contains the target tree. From the 3D point cloud that we measured, we can produce a full side view of the terrain behind the tree (see Figure 2). Using the range profile along the line-of-sight (LOS), we can determine the absolute depth profile. We calculate that the slope of the ground behind the tree is ∼24.6%, the tree is 8m tall, and the depth of the canopy (along the beam LOS) is also 8m. We use color-coded range maps (see Figure 3) to illustrate depth penetration along the LOS.
Figure 2. Full side-view topographic profile of the terrain behind the target tree (see Figure 1). The red line traces the slope of the ground, and the point marked 'A' is close to the top of the hill. The tree height and depth (width of the canopy) are clearly seen in this side view.
Figure 3. Lidar depth penetration through the canopy and along the LOS. White gaps in the canopy are due to the threshold setting of the array, which can be adjusted by the user. Points are color-coded so that dark blue shows the foreground and more distant points are shown as light blue to red. Orange-red points within the canopy shape lie beyond the tree and are due to reflections from the background slope.
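The depth penetration shown in Figure 3 follows from time-of-flight photon counting: each detected photon's round-trip time t is converted to one-way range via R = ct/2, and a histogram of ranges separates the canopy front from the ground behind it. The sketch below illustrates this with synthetic timestamps; the two target distances echo the figures quoted above, but the photon counts and timing jitter are illustrative assumptions, not measured system values:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_to_range(t):
    """One-way range (m) from round-trip time-of-flight (s): R = c*t/2."""
    return C * np.asarray(t) / 2.0

# Synthetic photon timestamps: most returns from the canopy front (~9086 m),
# some leaking through to the sloping ground ~8 m further along the LOS.
rng = np.random.default_rng(1)
canopy = 2 * 9086.0 / C + rng.normal(0.0, 0.1e-9, 300)  # ~0.1 ns timing jitter
ground = 2 * 9094.0 / C + rng.normal(0.0, 0.1e-9, 100)
ranges = tof_to_range(np.concatenate([canopy, ground]))

# Two clusters ~8 m apart emerge in the range histogram, as in Figure 3.
counts, edges = np.histogram(ranges, bins=np.arange(9080.0, 9100.0, 0.5))
```

With ∼0.1ns timing jitter the per-photon range spread is only ∼1.5cm, which is consistent with the <4cm range precision quoted above.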
Our results show that GM lidar can clearly be applied to military remote sensing operations. In these situations, fiber lasers operating at low pulse energy and high repetition rate offer advantages over solid-state lasers operating at high pulse energy and low repetition rate. Fiber lasers are also cheaper and much smaller (about the size of a mobile phone) than the solid-state alternatives, and they operate at low average power (∼0.4W), compared with ∼2W for analog active imaging at long range. With our most recent innovation, we use proprietary encoding schemes to obtain absolute range and depth profiles.8 This removes the requirement for gated arrays and takes advantage of the high frame rate that is available.
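The principle behind such encoding schemes (described in the pseudo-random photon-counting work cited above) can be illustrated with a toy example: if pulses are emitted in a pseudo-random on/off pattern rather than a uniform train, cross-correlating the detected returns with the transmitted pattern recovers the true delay even when it exceeds the uniform-train unambiguous range. The code below is a simplified, hypothetical sketch; the code length, bin width, and noiseless echo are assumptions for illustration, not the proprietary scheme itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pseudo-random on/off pulse pattern (one entry per timing bin).
n_bins = 4096
code = rng.integers(0, 2, n_bins)

# Simulated echo: the same pattern returning after a delay that would be
# ambiguous for a uniform pulse train.
true_delay = 2500  # timing bins
returns = np.roll(code, true_delay)

# Circular cross-correlation (via FFT) peaks at the true delay.
corr = np.real(np.fft.ifft(np.fft.fft(returns) * np.conj(np.fft.fft(code))))
estimated_delay = int(np.argmax(corr))
print(estimated_delay)  # 2500
```

Because the pseudo-random pattern has a sharp autocorrelation peak, the delay (and hence the absolute range) is recovered unambiguously without gating the array.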
In summary, we have designed a new GM 3D imaging lidar system that can operate at long ranges and penetrate through obstructions that clutter the LOS. We are currently developing signal encoding and processing protocols that will allow rapid 3D imaging and laser range finding at data rates of more than 10–100Mb/s. Operating the instrument at very high frame rates will allow near real-time measurements, so automated post-processing techniques can be used to correct for atmospheric turbulence-induced blurring of images. Our results demonstrate the potential of our system for military applications and for future space-borne Earth observation,9 where GM arrays negate the need for multiple-beam lidar designs.
The author acknowledges useful discussions with colleagues at Princeton Lightwave and at Micro Photon Devices.
Edinburgh, United Kingdom
Robert Lamb is the chief technical officer for electro-optics at Selex ES. He directs the applied research program for electro-optics, which covers laser designators, countermeasures, and future airborne targeting and tracking sensors. He is an honorary professor at Heriot-Watt University in Edinburgh.
1. R. M. Marino, W. R. Davis Jr., Jigsaw: a foliage-penetrating 3D imaging laser radar system, Lincoln Lab. J. 15, p. 23-36, 2005.
4. R. A. Lamb, P. A. Hiskett, G. J. Wong, K. J. Gordon, A. M. Pawlikoska, R. Pilkington, P. Sinclair, Advanced 3D imaging lidar concepts for long range sensing, Proc. SPIE 9114, 2014. Paper accepted at the SPIE Conf. on Advanced Photon Counting Techniques VIII in Baltimore, MD, 7-8 May 2014.
5. R. A. Lamb, A technology review of time-of-flight photon counting for advanced remote sensing, Proc. SPIE 7681, p. 768107, 2010. doi:10.1117/12.852138
6. P. A. Hiskett, R. A. Lamb, Design considerations for high-altitude altimetry and lidar systems incorporating single-photon avalanche diode detectors, Proc. SPIE 8033, p. 80330F, 2011. doi:10.1117/12.883871
7. R. A. Lamb, P. Hiskett, Pseudo-random single photon counting: absolute rangefinding without aliasing, Saudi Int'l Electron. Commun. Photonics Conf., 2011. doi:10.1109/SIECPC.2011.5876950
8. P. A. Hiskett, A. McCarthy, R. Lamb, G. S. Buller, A photon-counting time-of-flight ranging technique developed for the avoidance of range ambiguity at gigahertz clock rates, Proc. SPIE 7320, p. 732002, 2009. doi:10.1117/12.818455
9. A. Wallace, C. Nichol, I. Woodhouse, Recovery of forest canopy parameters by inversion of multispectral lidar data, Remote Sens. 4, p. 509-531, 2012.