
Defense & Security

Data-compression technique proves useful for automatic target detection

An improved classical vector-quantization algorithm enables robust computer-based object discovery in IR imagery.
18 August 2008, SPIE Newsroom. DOI: 10.1117/2.1200807.1214

Automatic target detection (ATD) constitutes a very challenging problem for many reasons. IR imagery is relatively high in irrelevant information and variability,1 including complex and unpredictable characteristics of 3D targets and scene clutter. The same target can vary wildly in appearance depending on lighting, aspect angle, atmospheric effects, and a host of other variables.2 ATD through passive-IR sensors suffers from limitations due to a lack of sufficient contrast between the targets and their background,3 compounded by the difficulty of trying to find these low-contrast targets in the presence of an IR background dominated by scene clutter.4

Classical detection and estimation theory provides optimum signal processors and their performance for very simple scenes. Extension of these theories to multiobject, multidimensional problems dealing with complex targets embedded in complicated clutter is in its earliest stages of development.5 The ATD developer also has little control over the sensor resolution, operating conditions, and parameters that are relevant when the technique is applied. In addition, a developer may face an adversary who is actively trying to defeat the algorithm.6

In an unpublished technical report from 1957, eventually published7 in 1982, Stuart Lloyd proposed a method to develop a quantizer for which the expected squared-error analog-to-digital quantization noise achieves a value close to the minimum.8 The algorithm describes an iterative process in which a given codebook is converted to a new and improved version and subsequently tested to see whether the improvement in measured distortion is small enough to warrant stopping further iterations.
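As a rough sketch of the loop Lloyd describes, the following hypothetical Python fragment refines a scalar codebook under squared-error distortion and stops when the relative improvement becomes small; the function name, tolerance, and iteration cap are illustrative assumptions, not values from Lloyd's report.

    import numpy as np

    def lloyd_scalar(samples, codebook, tol=1e-4, max_iter=100):
        # Iteratively refine a 1D codebook to reduce mean squared
        # quantization error, stopping when improvement is negligible.
        prev_distortion = np.inf
        for _ in range(max_iter):
            # Assign each training sample to its nearest code word.
            idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
            distortion = np.mean((samples - codebook[idx]) ** 2)
            # Test: is the improvement in measured distortion small enough?
            if prev_distortion - distortion < tol * prev_distortion:
                break
            prev_distortion = distortion
            # Improve: each code word becomes the centroid of its samples.
            for k in range(len(codebook)):
                members = samples[idx == k]
                if members.size:
                    codebook[k] = members.mean()
        return codebook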

Lloyd describes a scalar quantizer for code words of one dimension. Linde, Buzo, and Gray (LBG)8 were among others who extended the technique to N dimensions. The LBG algorithm is widely used for vector-quantizer (VQ) design and ‘training.’ The method requires an initial codebook, such as one composed of Gaussian random-pixel blocks (see Figure 1), as used in this article.
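A minimal sketch of LBG-style codebook training on flattened pixel blocks, initialized with Gaussian random code words as in Figure 1, might look like the following; the codebook size, tolerance, and other parameters are illustrative assumptions, since the article does not specify them.

    import numpy as np

    def train_lbg(blocks, num_codewords=256, tol=1e-4, max_iter=50, rng=None):
        # Train an N-dimensional VQ codebook; each row of `blocks` is a
        # flattened pixel block from the training imagery.
        if rng is None:
            rng = np.random.default_rng(0)
        # Initial codebook: Gaussian random-pixel blocks (cf. Figure 1).
        codebook = rng.normal(size=(num_codewords, blocks.shape[1]))
        prev_distortion = np.inf
        for _ in range(max_iter):
            # Nearest-neighbor assignment in N-dimensional block space.
            d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            idx = d2.argmin(axis=1)
            distortion = d2[np.arange(len(blocks)), idx].mean()
            if prev_distortion - distortion < tol * prev_distortion:
                break
            prev_distortion = distortion
            # Centroid update: each code word becomes the mean of its cell.
            for k in range(num_codewords):
                members = blocks[idx == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return codebook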


Figure 1. Initial codebook.

Figure 2. Clutter-trained codebook.

The codebook is trained using clutter images containing no targets, thus creating a clutter codebook. Figure 2 shows the codebook after clutter training. The idea is to encode and decode new images using the resulting clutter codebook and calculate the VQ error. We simply subtract the original from the VQ-reconstructed image to create a VQ-error distribution (see Figures 3, 4, and 5, respectively).
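This encode/decode/subtract step could be sketched as follows, assuming non-overlapping square blocks and the hypothetical codebook layout from the training sketch above; the block size and names are again illustrative.

    import numpy as np

    def vq_error_image(image, codebook, block=4):
        # Encode and decode an image with the clutter codebook, then
        # subtract the original from the reconstruction to form the
        # VQ-error image (cf. Figures 3-5).
        h = image.shape[0] - image.shape[0] % block
        w = image.shape[1] - image.shape[1] % block
        image = image[:h, :w].astype(float)
        recon = np.empty((h, w))
        for r in range(0, h, block):
            for c in range(0, w, block):
                patch = image[r:r + block, c:c + block].ravel()
                # Encode: index of the nearest code word. Decode: that code word.
                k = ((codebook - patch) ** 2).sum(axis=1).argmin()
                recon[r:r + block, c:c + block] = codebook[k].reshape(block, block)
        return recon - image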


Figure 3. Original image.

Figure 4. VQ-reconstructed image.

Figure 5. VQ-error image.

The error due solely to the compression will be approximately consistent across the image. In areas containing new objects (i.e., objects the codebook has not been trained to deal with), we see the consistent compression error as well as an increased ‘non-training error’ due to pixel blocks, representing the new objects, that are not included in the clutter codebook. After decoding, areas in the image with large overall error correlate with pixel blocks not in the codebook. The Kolmogorov–Smirnov (KS) distance is used to distinguish new objects from a reference clutter-error distribution.9
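One plausible way to implement this classification step is with SciPy's two-sample KS test, comparing each local window of the VQ-error image against the reference clutter-error distribution; the window size and decision threshold below are illustrative assumptions, not values reported by the author.

    import numpy as np
    from scipy.stats import ks_2samp

    def ks_detections(error_image, reference_errors, window=16, threshold=0.3):
        # Flag windows whose local VQ-error distribution is far, in KS
        # distance, from the reference clutter-error distribution.
        h, w = error_image.shape
        hits = []
        for r in range(0, h - window + 1, window):
            for c in range(0, w - window + 1, window):
                local = error_image[r:r + window, c:c + window].ravel()
                # KS statistic: maximum distance between the two empirical CDFs.
                stat = ks_2samp(local, reference_errors).statistic
                if stat > threshold:
                    hits.append((r, c, stat))
        return hits

Windows whose KS statistic exceeds the threshold would be flagged as candidate targets; in practice the threshold trades probability of detection against false-alarm rate.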

When the algorithm is applied to an image data set, the results show that the VQ-detection algorithm performs as well as the Army benchmark algorithm.9 It operates at an acceptable probability-of-detection level while maintaining a low false-alarm rate. It does well with low-contrast targets because it is not just looking for hot spots.

This work has shown that VQ is a viable solution to the ATD problem. The counter-training idea can work and is robust to changes in targets and environments. The KS test as a classifier is very good at separating the targets from the clutter. Providing a reference distribution is essential to keep confidence values consistent across the data set. Future work will include applying the algorithm to different data sets, aimed at gaining a better sense of its robustness.


Brian Wemett
VirtualScopics Inc.
Rochester, NY