
Remote Sensing

Is there a best hyperspectral detection algorithm?

Apparent superiority of sophisticated detection algorithms in test conditions does not necessarily imply the same in real-world hyperspectral imaging applications.
17 June 2009, SPIE Newsroom. DOI: 10.1117/2.1200906.1560

Hyperspectral imaging sensors1,2 measure the spectrum of each pixel in a 2D image in hundreds of very narrow spectral wavelength (color) bands, resulting in a 3D data cube (hypercube) with one spectral and two spatial dimensions (see Figure 1). This high-resolution spectral data can be used to detect and identify spatially resolved or unresolved objects on the basis of their spectral signatures. If each material had a unique spectrum, the solution of detection and identification problems would be straightforward. Unfortunately, variabilities in material composition and atmospheric propagation, in addition to sensor noise, introduce random spectral variations. Also, for pixels containing unresolved objects, the measured spectrum includes a mixture of object and background contributions. Thus, every detection algorithm has to overcome two major obstacles, i.e., spectral variability and background interference.
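The cube structure described above can be sketched in a few lines of NumPy. This is an illustrative example with synthetic numbers (dimensions and values are not from any real sensor): each pixel of the 2D image carries a p-dimensional spectrum, and for statistical processing the cube is usually flattened into a matrix with one row per pixel spectrum.

```python
import numpy as np

# Synthetic hypercube: two spatial dimensions, one spectral dimension.
rows, cols, p = 64, 64, 200          # 64x64 pixels, 200 spectral bands
rng = np.random.default_rng(0)
cube = rng.random((rows, cols, p))   # 3D data cube (hypercube)

# Each pixel is a p-dimensional spectrum vector x.
x = cube[10, 20, :]                  # spectrum of pixel (10, 20)
print(cube.shape)                    # (64, 64, 200)
print(x.shape)                       # (200,)

# Flatten to an N x p matrix (one row per pixel spectrum) for
# covariance estimation and detector evaluation.
X = cube.reshape(-1, p)
print(X.shape)                       # (4096, 200)
```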

Figure 1. Data-cube structure, spectral variability, and subpixel interference in hyperspectral imaging. Measured spectra corresponding to pixels with the same surface type exhibit an inherent variability that prevents characterization of homogeneous surface materials with unique spectral signatures. Radiance from all materials within a ground resolution cell is seen by the sensor as a single image pixel. Therefore, the result is a hyperspectral data cube of pure and mixed pixels, where a pure pixel contains a single surface material and a mixed pixel includes multiple materials.

A large number of hyperspectral detection algorithms have been developed and used in the past two decades.1–5 A partial list includes the classical, finite-target, and mixture-tuned matched filters; the Reed-Xiaoli (RX) anomaly detector; the orthogonal-subspace projector; the adaptive-cosine estimator; and subspace, kernel-matched subspace, and joint-subspace detectors. In addition, different methods for dimensionality reduction, background-clutter modeling, end-member selection, and radiance-versus-reflectance domain processing multiply the number of detection algorithms yet further. New algorithms, new variants of existing algorithms, and new implementations of existing methods appear all the time. Furthermore, a large number of papers have been published in attempts to establish the relative superiority of these algorithms. In this context, it is both time-consuming and difficult for designers of hyperspectral imaging systems to navigate the existing literature to choose a detector or to decide whether a certain level of performance can be expected.

The key to the understanding, design, and evaluation of detection algorithms is the use of sufficiently accurate and mathematically tractable models of spectral variability. Each spectrum in the hypercube can be interpreted as a vector (x) in a p-dimensional space, where p is the number of spectral channels. Statistical models of spectral variability consist of multivariate probability distributions. Subspace models are essentially linear vector spaces defined by q<p basis vectors. If these vectors are the spectra of pure constituents (end members), we have the well-known linear mixing model. However, estimating the number of end members and their spectra from a hyperspectral cube is difficult, so the use of the linear mixing model in practical detection algorithms is very limited. Most widely used detection algorithms assume that spectral variability can be modeled by a multivariate normal (Gaussian) distribution with mean vector μ and covariance matrix Σ. This model requires estimation of p+p(p+1)/2 parameters.
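The parameter count quoted above (p for the mean, p(p+1)/2 for the symmetric covariance) can be checked directly; the sketch below also shows the usual sample estimates of μ and Σ from a matrix of pixel spectra (synthetic data, illustrative names):

```python
import numpy as np

def gaussian_param_count(p):
    """Free parameters of a p-variate Gaussian: p for the mean vector
    plus p*(p+1)/2 for the symmetric covariance matrix."""
    return p + p * (p + 1) // 2

print(gaussian_param_count(200))     # 20300 parameters for 200 bands

# Sample estimates from an N x p matrix of pixel spectra:
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 200))     # 5000 synthetic pixel spectra
mu = X.mean(axis=0)                  # estimated mean vector
Sigma = np.cov(X, rowvar=False)      # estimated covariance matrix
print(mu.shape, Sigma.shape)         # (200,) (200, 200)
```

The count grows quadratically in the number of bands, which is why a 200-band sensor already needs thousands of background pixels for a well-conditioned covariance estimate.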

Natural backgrounds have multimodal distributions and can be better modeled by Gaussian mixture models. However, estimating their parameters is complicated, and the results are often inaccurate. For a given data cube, it seems preferable to use all data to estimate one covariance matrix, rather than split the data to estimate several matrices (bias-variance tradeoff).

We have chosen to focus on two robust and easy-to-use detection algorithms that model background variability using a mean vector μb and a covariance matrix Σb, and represent the target signature as a known spectrum, s. These algorithms are the classical matched filter and the adaptive-cosine estimator, defined by

MF(x) = s^T Σb^-1 (x − μb) / (s^T Σb^-1 s)

ACE(x) = [s^T Σb^-1 (x − μb)]^2 / {(s^T Σb^-1 s) [(x − μb)^T Σb^-1 (x − μb)]}

where the superscript T denotes the matrix transpose.
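The matched filter and adaptive-cosine estimator can be sketched directly from their standard definitions. The demo below uses synthetic Gaussian background pixels and an artificial target-bearing pixel (all names and numbers are illustrative, not from the article):

```python
import numpy as np

def matched_filter(x, s, mu_b, Sigma_b_inv):
    """Classical matched filter score, normalized so that a pixel
    equal to mu_b + s scores exactly 1."""
    d = x - mu_b
    return (s @ Sigma_b_inv @ d) / (s @ Sigma_b_inv @ s)

def ace(x, s, mu_b, Sigma_b_inv):
    """Adaptive-cosine estimator: squared cosine of the angle between
    the whitened pixel and the whitened target signature."""
    d = x - mu_b
    num = (s @ Sigma_b_inv @ d) ** 2
    den = (s @ Sigma_b_inv @ s) * (d @ Sigma_b_inv @ d)
    return num / den

# Synthetic demo: Gaussian background plus one target-bearing pixel.
rng = np.random.default_rng(2)
p = 50
s = rng.random(p)                        # library target signature
X_b = rng.normal(size=(2000, p))         # background pixels
mu_b = X_b.mean(axis=0)
Sigma_b_inv = np.linalg.inv(np.cov(X_b, rowvar=False))

x_bg = X_b[0]                            # an ordinary background pixel
x_tg = mu_b + 3.0 * s                    # pixel containing the target
print(ace(x_tg, s, mu_b, Sigma_b_inv))   # 1.0 (perfect signature match)
print(ace(x_tg, s, mu_b, Sigma_b_inv) > ace(x_bg, s, mu_b, Sigma_b_inv))
```

Note that ACE is invariant to the target abundance (scaling x − μb leaves the score unchanged), which is why it behaves like a cosine rather than an energy detector.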

A simple geometrical interpretation is provided in Figure 2. The mean and covariance of the background are estimated from the data cube, and the target signature is obtained from a spectral library. In our experience, even when the assumptions underlying the background and target signal models are violated, both algorithms (if properly implemented) compete favorably with any other detector. The key is to estimate Σb without using target-like pixels, and to compute its inverse using ‘dominant-mode rejection’ combined with ‘diagonal loading.’ The resulting algorithms are numerically stable and robust to target mismatch, target variability, and corruption of the background covariance by target spectra.5 Furthermore, the formula for the inverse covariance matrix provides a link between covariance- and subspace-based algorithms. Both detectors can be applied in either the reflectance or the radiance domain.
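One common way to combine dominant-mode rejection with diagonal loading, sketched below under our own illustrative choices (the mode count q and loading factor delta are assumptions, not values from the article), is to keep the dominant eigenmodes of the covariance exactly, replace the remaining eigenvalues by their average, and add a small diagonal load before inverting:

```python
import numpy as np

def dmr_loaded_inverse(Sigma, q, delta):
    """Regularized inverse covariance via dominant-mode rejection plus
    diagonal loading. Keeps the q dominant eigenmodes, floors the rest
    at their average eigenvalue, and adds a load delta before inverting.
    This is one common construction; implementations vary in detail."""
    evals, evecs = np.linalg.eigh(Sigma)       # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1] # descending order
    floor = evals[q:].mean()                   # average non-dominant mode
    evals_mod = np.concatenate([evals[:q],
                                np.full(len(evals) - q, floor)])
    evals_mod = evals_mod + delta              # diagonal loading
    # Reassemble V diag(1/lambda) V^T (columnwise division = V diag(1/l)).
    return (evecs / evals_mod) @ evecs.T

rng = np.random.default_rng(3)
A = rng.normal(size=(300, 60))
Sigma = np.cov(A, rowvar=False)
Sigma_inv = dmr_loaded_inverse(Sigma, q=10, delta=1e-3)
print(Sigma_inv.shape)                         # (60, 60)
```

The load delta keeps the inverse well conditioned when the covariance is estimated from few pixels, while the eigenvalue floor limits how strongly weak (noise-dominated) modes are amplified.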

Figure 2. Geometrical interpretation of detection algorithms. The background covariance matrix is first used to ‘whiten’ or ‘spherize’ the background distribution. The RX anomaly detector (AD), matched filter (MF) method, and adaptive-cosine estimator (ACE) are defined as shown by the spherical, linear, and conical hypersurfaces, respectively.

Practical applications of hyperspectral detection algorithms must consider many issues that are often overlooked. For example, selecting a threshold that maintains a constant false-alarm rate is challenging. Additional practical limitations, such as sensor calibration errors, sensor noise, atmospheric compensation, the small number of pixels relative to the number of spectral channels, background variability, and target mismatch, suggest that any small performance gains attained by more sophisticated detectors may be irrelevant in practice, where the goal is to provide the best performance for the greatest number of target-background combinations with the least amount of a priori knowledge required or assumed. Finally, because it is very difficult to acquire data with a variety of targets and a sufficient number of pixels per target, detection algorithms are often evaluated with simulated targets or with a single data set containing a limited number of target pixels. Although such evaluations are useful, they are not conclusive, and they cannot be used to demonstrate the superiority of one algorithm over another.5
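One simple, nonparametric approach to the threshold-selection problem mentioned above, sketched here as an illustration (the false-alarm rate and score distribution are synthetic), is to set the threshold at an empirical quantile of detector scores computed over presumed-background pixels:

```python
import numpy as np

def cfar_threshold(background_scores, far):
    """Empirical threshold achieving roughly the desired false-alarm
    rate on the background score distribution."""
    return np.quantile(background_scores, 1.0 - far)

rng = np.random.default_rng(4)
scores = rng.normal(size=100_000)     # detector outputs on background
t = cfar_threshold(scores, far=1e-3)
print(np.mean(scores > t))            # close to the requested 0.001
```

Because the tail of real background score distributions is rarely Gaussian, this empirical route is often more reliable than a threshold derived from the nominal model, although it requires enough background pixels to resolve the desired quantile.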

So, is there a best hyperspectral detection algorithm? Our main conclusion is that if we take into account important aspects of real-world hyperspectral imaging problems, proper use of simple detectors, like the matched filter and adaptive-cosine estimators, may provide acceptable performance for practically relevant applications. Are we certain that an undiscovered optimal detector does not exist? Probably not. However, even if such a detector were found, we may never have sufficient data to prove its superiority.

Dimitris G. Manolakis, Ronald Lockwood
Lincoln Laboratory
Massachusetts Institute of Technology
Lexington, MA

Dimitris Manolakis received his BS in physics and PhD in electrical engineering from the University of Athens, Greece. He is currently a member of technical staff. His research experience and interests include digital signal and array processing, adaptive filtering, pattern recognition, remote sensing, and radar systems.

Ronald Lockwood is the hyperspectral exploitation deputy program manager. His current work focuses on sensor performance and calibration for the Advanced Responsive Tactically Effective Military Imaging Spectrometer (ARTEMIS), the primary payload on the Tactical Satellite 3 (TacSat-3).

Thomas Cooley
Space Vehicles Directorate
Air Force Research Laboratory
Hanscom Air Force Base, MA

Thomas Cooley is the principal investigator of ARTEMIS (a hyperspectral sensor aimed at demonstrating the use of this rich data source for a wide range of hyperspectral applications) and program manager for the TacSat-3 satellite, to be launched in spring 2009. He holds a BS in electrical engineering from Rensselaer Polytechnic Institute, an MS in electrical engineering and applied physics from the California Institute of Technology, and a PhD in optical sciences from the University of Arizona. Since 1995, he has been supporting the Air Force Research Laboratory in the area of imaging spectroscopy from space as the technical lead for the Hyperspectral Exploitation Program.

John Jacobson
National Air and Space Intelligence Center
Wright-Patterson Air Force Base, OH

John Jacobson is a physicist working in the field of spectral data exploitation. He holds a BA in physics from Holy Cross College and an MS in nuclear physics from the Air Force Institute of Technology.