
Proceedings Paper

Problem set guidelines to facilitate ATR research, development, and performance assessments
Author(s): Lori A. Westerkamp; Thomas J. Wild; Donna Meredith; S. Andrew Morrison; John C. Mossing; Randy K. Avent; Annette Bergman; Arthur Bruckheim; David A. Castanon; Francis J. Corbett; Douglas Hugo; Robert A. Hummel; John M. Irvine; Bruce Merle; Louis Otto; Robert Reynolds; Charles Sadowski; Bruce J. Schachter; Katherine M. Simonson; Gene Smit; Clarence P. Walters

Paper Abstract

In November 2000, the Deputy Under Secretary of Defense for Science and Technology Sensor Systems (DUSD (S&T/SS)) chartered the ATR Working Group (ATRWG) to develop guidelines for sanctioned Problem Sets. Such Problem Sets are intended for the development and testing of ATR algorithms and contain comprehensive documentation of the data in them. A Problem Set provides a consistent basis for examining ATR performance and growth. Problem Sets will, in general, serve multiple purposes. First, they will enable informed decisions by government agencies sponsoring ATR development and transition. Problem Sets standardize the testing and evaluation process, resulting in consistent assessment of ATR performance. Second, they will measure and guide ATR development progress within this standardized framework. Finally, they quantify the state of the art for the community. Problem Sets provide clearly defined operating condition coverage. This encourages ATR developers to consider these critical challenges and allows evaluators to assess performance across them. Thus the widely distributed development and self-test portions, along with a disciplined methodology documented within the Problem Set, permit ATR developers to address critical issues and describe their accomplishments, while the sequestered portion permits government assessment of the state of the art and of transition readiness. This paper discusses the elements of an ATR Problem Set as a package of data and information that presents a standardized ATR challenge relevant to one or more scenarios. The package includes training and test data containing targets and clutter, truth information, required experiments, and a standardized analytical methodology to assess performance.
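To make the idea of a "standardized analytical methodology" concrete, the sketch below scores a set of ATR target declarations against ground truth to produce a probability of detection and a false-alarm count. This is an illustration only, not the ATRWG's methodology: the simple distance-gate association rule, the gate value, and all names are assumptions for the example.

```python
# Illustrative sketch (not from the paper): scoring ATR declarations
# against ground truth. The distance-gate association rule and the
# gate size are assumptions chosen for clarity.
from math import hypot

def score(declarations, truth, gate=5.0):
    """Greedily match each declared (x, y) location to an unmatched
    truth target within `gate`; return (Pd, false-alarm count)."""
    unmatched = list(truth)   # truth targets not yet claimed
    hits = 0
    false_alarms = 0
    for dx, dy in declarations:
        match = None
        for tx, ty in unmatched:
            if hypot(dx - tx, dy - ty) <= gate:
                match = (tx, ty)
                break
        if match is not None:
            unmatched.remove(match)  # each truth target counts once
            hits += 1
        else:
            false_alarms += 1        # declaration with no nearby truth
    pd = hits / len(truth) if truth else 0.0
    return pd, false_alarms
```

With two truth targets and two declarations, one near a target and one in clutter, the first declaration scores as a hit and the second as a false alarm, giving Pd = 0.5. A real Problem Set would fix the association rule, gate, and reporting format so that every evaluated algorithm is scored identically.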

Paper Details

Date Published: 25 July 2002
PDF: 6 pages
Proc. SPIE 4726, Automatic Target Recognition XII, (25 July 2002); doi: 10.1117/12.477039
Author Affiliations
Lori A. Westerkamp, Air Force Research Lab. (United States)
Thomas J. Wild, Air Force Research Lab. (United States)
Donna Meredith, U.S. Army Night Vision & Electronic Sensors Directorate (United States)
S. Andrew Morrison, Air Force Research Lab. and Jacobs Sverdrup Technology, Inc. (United States)
John C. Mossing, Air Force Research Lab. and Jacobs Sverdrup Technology, Inc. (United States)
Randy K. Avent, MIT Lincoln Lab. (United States)
Annette Bergman, Naval Air Warfare Ctr. (United States)
Arthur Bruckheim, Private Consultant (United States)
David A. Castanon, Boston Univ. (United States)
Francis J. Corbett, Textron Systems Corp. (United States)
Douglas Hugo, NIMA (United States)
Robert A. Hummel, DARPA (United States)
John M. Irvine, Science Applications International Corp. (United States)
Bruce Merle, Boeing Co. (United States)
Louis Otto, BAE Systems (United States)
Robert Reynolds, Science Applications International Corp. (United States)
Charles Sadowski, Veridian Inc. (United States)
Bruce J. Schachter, Northrop Grumman Corp. (United States)
Katherine M. Simonson, Sandia National Labs. (United States)
Gene Smit, Sandia National Labs. (United States)
Clarence P. Walters, U.S. Army Night Vision & Electronic Sensors Directorate (United States)

Published in SPIE Proceedings Vol. 4726:
Automatic Target Recognition XII
Firooz A. Sadjadi, Editor(s)

© SPIE.