
Defense & Security

Unattended ground sensors stop and analyze the roses

From OE Reports Number 196 - April 2000
April 2000, SPIE Newsroom. DOI: 10.1117/2.6200004.0004

By using a combination of small, limited-capacity sensors, unattended ground sensor systems could gather information about what's going on in a battlefield, a customs inspection station, or a police stakeout, among other uses. Unattended ground sensors could even make a good home security system.


The CyberATV, designed and built by researchers at Carnegie Mellon Univ., is an unmanned all-terrain vehicle equipped with a variety of sensors and communications equipment.

The main components of this kind of system are low-power, miniature devices with a sensor, battery, and radio. Edward M. Carapezza, program manager for Tactical and Unattended Sensors at the Defense Advanced Research Projects Agency (DARPA) and cochair of the Unattended Ground Sensor Technologies and Applications II conference at AeroSense '00, said the sensors can be magnetic, chemical, seismic, acoustic, or any of a number of other types, including visible or infrared imagers.

The advantage of such sensors -- if they can be made to work together intelligently -- is that they can achieve performance similar to that of a more sophisticated sensor at a fraction of the cost. A system that uses many simple sensors rather than a single sophisticated one can also be made more robust by designing it so that performance degrades gracefully as sensors fail. The trade-off is that the amount of data increases with multiple sensors and, for most applications, data processing needs to occur in near real time.
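The robustness argument can be made concrete with a little probability. As a rough sketch (not a model of any specific DARPA system), suppose each cheap sensor makes a correct detection with probability p, and the node takes a majority vote over however many sensors are still alive:

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a majority vote of n independent sensors,
    each correct with probability p, yields the right answer."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Nine 70%-accurate sensors collectively beat a single, pricier
# 85%-accurate sensor -- and as sensors die, accuracy falls off
# gradually rather than all at once. (p = 0.7 is illustrative.)
for alive in (9, 7, 5, 3):
    print(alive, round(majority_accuracy(alive, 0.7), 3))
```

Nine sensors at 70% yield roughly 90% ensemble accuracy, sliding gently toward 78% as the pool shrinks to three -- graceful degradation rather than a single point of failure.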

In addition to developing the sensors, researchers must develop a number of computing algorithms. "You need computationally efficient algorithms," Carapezza said, "because you don't have a CRAY computer to run them." Nevertheless, the individual sensors may need to perform a variety of tasks, including detection, classification, and localization or tracking of a target.

In addition, the sensors need to manage communication tasks. A typical system may include a host of sensors that send information to nodes in a hierarchical system, or simply share information in a distributed-processing system. A typical node may manage a number of sensors, have a larger battery, and contain both greater processing power and a radio that can transmit farther than the individual sensors. Node processing tasks include data fusion -- combining the data from individual sensors to determine both kinematic information (the speed, range, and position of a target) and attributes of the target. "Is it a person or a car?" Carapezza said. "If it's a vehicle, is it light or heavy? You can think of it as a giant classification tree."

A group at Carnegie Mellon Univ. (Pittsburgh, PA) is one of many that are working on developing unattended ground sensors.1 Graduate student Chris Diehl is part of a DARPA-funded team developing the CyberATV, an all-terrain vehicle fitted with several sensors (see figure). Carapezza said Diehl is developing methods for classifying images captured by the stereo camera system mounted on the mobile sensor platform. For example, the system attempts to determine whether the object is a person, vehicle, etc., and what its attributes are.

Diehl said, "In my research where we are trying to identify the moving object in the scene, the more views of the target we have, the better. With more images, we have a high probability of obtaining a discriminating view (which allows the object to be identified)."
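Diehl's "more views are better" intuition has a simple probabilistic reading. If any single view happens to be discriminating with probability p, and views are treated as roughly independent (both assumptions are mine, for illustration), then the chance that n views contain at least one discriminating view is 1 - (1 - p)^n:

```python
# Probability of seeing at least one discriminating view among n views,
# assuming each view is discriminating with probability p, independently.
# p = 0.3 is an illustrative value, not a measured one.
p = 0.3
for n in (1, 5, 10):
    print(n, round(1 - (1 - p)**n, 3))
```

Even a modest per-view probability compounds quickly: at p = 0.3, five views give better than an 80% chance of an identifying view, and ten views better than 97%.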

Most traditional target recognition work assumes that only one image is available, and uses computationally intensive algorithms in an attempt to determine whether, for example, a forward-looking infrared (FLIR) image shows a tank. "Our contention is that the additional data may actually simplify matters..." Diehl said.

In 1999, the researchers reported their method. First the system finds a moving object while rejecting artifacts of the camera panning and tilting, as well as artifacts due to the ATV's movement. Then, the target is roughly identified by a classifier. Using a method called differential learning, the system is trained to assign the most likely class label for each example. The confidence with which the label is assigned is measured by the classification figure of merit, which is a function of the difference between the classifier outputs associated with the correct class and the largest other class.
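The classification figure of merit is, at its core, a margin: how far the output for the correct class exceeds the largest output among the competing classes. The sketch below squashes that margin through a sigmoid to get a confidence in (0, 1); the exact functional form and scaling in the Diehl et al. work may differ, so treat this as an illustration of the idea rather than their implementation:

```python
import math

def cfm(outputs, true_class, alpha=1.0):
    """Confidence that 'true_class' was assigned correctly: a sigmoid
    of the margin between the true class's output and the largest
    output among the other classes. alpha is an assumed scale factor."""
    margin = outputs[true_class] - max(
        v for k, v in outputs.items() if k != true_class)
    return 1.0 / (1.0 + math.exp(-alpha * margin))  # > 0.5 iff correct

scores = {"person": 0.7, "light vehicle": 0.2, "heavy vehicle": 0.1}
print(round(cfm(scores, "person"), 3))
```

A confidence above 0.5 means the classifier ranked the true class first; values near 0.5 flag ambiguous examples, which is exactly what a training procedure like differential learning wants to focus on.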

Further work from Carnegie Mellon and other groups developing unattended ground sensors will be presented at AeroSense '00.

Reference:

1. C. P. Diehl, M. Saptharishi, J. B. Hampshire II, and P. K. Khosla, "Collaborative surveillance using both fixed and mobile unattended ground sensor platforms," Proc. SPIE 3713, 1999.


Yvonne Carts-Powell
Yvonne Carts-Powell, based in Boston, writes about optoelectronics and the Internet.