How Smart is Your Automatic Target Recognizer?

This SPIE Tutorial Text excerpt provides an inside view of the automatic target recognition (ATR) field from the perspective of an engineer working in the field for 40 years.

21 February 2017, SPIE Newsroom. DOI: 10.1117/2.2201702.08

This excerpt is from the SPIE Press book Automatic Target Recognition.

The human brain has about 100 billion neurons, most with one thousand to ten thousand synaptic connections to other neurons. However, it does not follow that if we build a computer with equivalent processing power and connectivity, it would match human functionality as an emergent property. We do not yet know how the brain does the vast majority of what it does. The long-term goal of neuromorphic engineering is to devise artificial systems patterned after both the design and functionality of biological neural systems (not necessarily human).

Much of the human brain is used for scene understanding, object detection, recognition, tracking, multisensor fusion, and motor control. So, in a sense, its function is similar to that of an Automatic Target Recognizer (ATR). An ATR can be viewed as a substitute, or at least a workload reducer, for the warfighter's brain. For the purpose of this discussion, we will consider the neuromorphic ATR to be a black box. Engineers aren't too concerned if the black box perfectly mimics biology. As ATR engineers, we just want the black box to transform its inputs to the required outputs. We want the black box to meet certain key performance requirements involving size, weight, power, cost, latency, mean time between failure, and logistics trail. The black box must demonstrate capabilities that are needed in combat. It must become fully operational in a military environment, having passed a difficult operational test and evaluation process. That is, it must be more than "just research." It should also be more rugged and reliable than a comparable commercial product.

The human brain has evolved solely for survival of the species. It is "designed" to work as part of a system, which includes various sensors, vestibular (IMU) data, a positioning system, and the articulated parts and processes that it controls. The brain learns over a lifetime in both supervised and unsupervised modes. It never stops changing its wiring. It is hard to view the brain as disembodied from the rest of the system. Visual perception works with a pair of eyes, feeding retinal code (not video) to the brain. The eyes are always in motion. Human vision often functions as part of a multisensory fusion system. Except for relatively recent passive pastimes, such as reading and watching TV, visual perception's main function is to initiate and guide motor control. A person walks to an unknown object to get a closer look, might first push it to see how it reacts, then might pick it up, touch it, smell it, shake it, and might even take a bite. Humans are highly social animals, with actions taken in collaboration with or in reaction to those of other persons. There are good reasons to study biological systems. Multisensor fusion, networked processing, and robotic self-controlled platforms are of interest to civilian and military system designers. Biological systems provide models known to work. They help spark the engineer's imagination.

When we ask, "How smart is your ATR?" we mean, "How well can this machine perform some of the tasks performed by a well-trained human pilot, soldier, sailor, Marine, or analyst?" If we arrive at the point where a machine can do all of the tasks of humans, then what are the implications? For example, is a smart ATR more humane than a dumb landmine? Table 1 compares the levels of intelligence of different types of artificial intelligence.

 Table 1:  Levels of artificial intelligence. (SWaP is size, weight, and power.)
Rather than precisely defining what constitutes a smart ATR, the new book Automatic Target Recognition provides a list of capabilities that an intelligent ATR should possess. This Turing-like test is narrowly focused on ATR. If this approach is deemed reasonable, a test and evaluation (T&E) organization will have to transform the eleven questions into an actual scorable test that can be taken competitively by ATRs considered for procurement. The test will have to be tailored to specific sensors, platforms, and missions. It will have to be run over sufficiently varied data to reach meaningful conclusions. The questions are as follows; see the book for details and for conclusions on where ATR design now stands.

  1. Does the ATR understand human culture?
  2. Can the ATR deduce the gist of a scene?
  3. Does the ATR understand physics?
  4. Can the ATR participate in a pre-mission briefing?
  5. Does the ATR possess deep conceptual understanding?
  6. Can the ATR adapt to the situation, learn on-the-fly and make analogies?
  7. Does the ATR understand the rules of engagement?
  8. Does the ATR understand the order of battle and force structure?
  9. Can the ATR control platform motion?
  10. Can the ATR fuse information from a wide variety of sources?
  11. Does the ATR possess metacognition?

Consider an ATR group founded in the 1960s. A 1% improvement (e.g., reduction in classification error) per year would have been a remarkable achievement. A continued 1% improvement per year for the next 100 years would result in an ATR surpassing human capabilities. But not so fast: as the ATR improves, the demands on it are likely to broaden. It will be expected to take over more functions now done by humans, possibly even controlling a robotic craft. Humans and robots will have to learn how to operate in proximity, to cooperate, and to collaborate. ATRs may learn from humans as apprentices and assistants. Human-robot teams will develop over the long run. Civil societies will require both humans and autonomous weapons to adhere strictly to the rules of engagement and the laws of armed conflict.
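The compounding arithmetic behind that 1%-per-year claim is easy to sketch. The starting error rate below is a made-up illustration, not a figure from the book; the point is only that a 1% relative reduction per year compounds geometrically rather than linearly:

```python
# Hypothetical illustration: if classification error falls by 1% of its
# current value each year, the error after n years is error_0 * 0.99**n.
initial_error = 0.30  # assumed starting classification error (illustrative)

for years in (10, 50, 100):
    error = initial_error * 0.99 ** years
    print(f"after {years:3d} years: error = {error:.4f}")
```

After a century, a 30% error rate would fall to roughly 11%, not to 0%: steady relative improvement shrinks the error but never eliminates it, which is one reason the bar for "surpassing human capabilities" keeps moving.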

It will be a long time before the ATR designer can answer all eleven questions in the affirmative. In the meantime, the percentage of affirmative answers can be used to answer the question: "How smart is your ATR?"
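As a back-of-the-envelope sketch (the question labels and scoring function are illustrative, not part of any real T&E suite), that percentage reduces to a simple checklist score over the eleven questions:

```python
# Hypothetical checklist scorer for the eleven capability questions.
# Labels paraphrase the questions in the text; they are illustrative only.
QUESTIONS = [
    "understands human culture",
    "deduces the gist of a scene",
    "understands physics",
    "participates in a pre-mission briefing",
    "possesses deep conceptual understanding",
    "adapts, learns on the fly, makes analogies",
    "understands the rules of engagement",
    "understands order of battle and force structure",
    "controls platform motion",
    "fuses information from varied sources",
    "possesses metacognition",
]

def smartness_score(affirmatives):
    """Percentage of the eleven questions answered in the affirmative."""
    answered = sum(q in affirmatives for q in QUESTIONS)
    return 100.0 * answered / len(QUESTIONS)

# Example: a hypothetical ATR that passes three of the eleven checks.
score = smartness_score({"understands physics",
                         "controls platform motion",
                         "fuses information from varied sources"})
print(f"{score:.1f}% affirmative")
```

A real scorable test would of course weight and instrument each question against specific sensors, platforms, and missions, as the text notes; the flat percentage is only the coarsest summary.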

Bruce J. Schachter is an engineer whose work has focused on automatic target recognition (ATR) for more than forty years. He was on the team that developed the first Automatic Target Recognizer, first at the University of Maryland and later at Northrop Grumman. Schachter has been program manager or principal investigator on a dozen ATR programs. His previous books are Pattern Models and the award-winning Computer Image Generation. The author can be contacted at Bruce.Jay.Schachter@gmail.com.