
Proceedings Paper

Adaptive model-based 3-D target detection--Part II: statistical behavior
Author(s): Mac L. Hartless; Hong Wang

Paper Abstract

Previous work in adaptive space-time processing has concentrated either on covariance estimation or on a region-segmentation approach in which a bank of filters is designed from previously collected data. The first method typically performs a generalized likelihood ratio test (GLRT), which for a typical 3-D target signature requires estimating an enormous number of covariance elements. The second relies on filter construction from previously collected data. Any covariance estimation technique incurs two major penalties: high computational complexity, and a lack of robustness in a changing environment because of the large number of covariance samples required for statistical stability. The region-segmentation approach is useful when the clutter being processed resembles that used for filter construction, but suffers potentially large losses when the data being operated on has different statistical properties from the data used to construct the filters. The method addressed in this study for mitigating the problems of space-time covariance estimation and of dependence on a bank of fixed filters is to assume a low-degree-of-freedom model for the space-time clutter characteristics. This allows the adaptive filter to be estimated over a much smaller region, so the detection algorithm can track the clutter characteristics of a changing environment more closely while minimizing losses in a stationary environment. This paper addresses the statistical behavior of the model-based algorithms, analyzed as a function of the number of filter tap weights and of the estimation-region size used for filter construction. Performance in a non-stationary environment is analyzed via Monte Carlo techniques on both simulated and recently collected longwave IR clutter.
The results indicate that the reduced degree of freedom model-based algorithms can provide significant performance improvement when the dimensions of the test vector are large and only a small amount of data is available for covariance estimation.
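The covariance-estimation baseline the abstract contrasts against is commonly realized as a sample-matrix-inversion adaptive filter, where the weight vector is the inverse sample covariance applied to a steering vector. The sketch below is illustrative only (the dimensions, steering vector, and white-clutter stand-in are assumptions, not taken from the paper); it shows why a large test-vector dimension N forces a large training-sample count K, the stability problem the reduced-degree-of-freedom model is meant to avoid:

```python
import numpy as np

rng = np.random.default_rng(0)

def smi_weights(training, steering):
    """Sample-matrix-inversion adaptive weights w = R_hat^{-1} s,
    where R_hat is the sample covariance of the training snapshots
    (rows of `training` are snapshots)."""
    K = training.shape[0]
    R_hat = training.T @ training / K     # N x N sample covariance
    return np.linalg.solve(R_hat, steering)

# Hypothetical sizes: N-element space-time test vector, K training snapshots.
# A common rule of thumb needs K on the order of 2N for ~3 dB average loss,
# so a large 3-D signature (large N) demands a large, hard-to-find
# stationary training region.
N, K = 8, 32
s = np.ones(N) / np.sqrt(N)               # illustrative steering vector
training = rng.standard_normal((K, N))    # white-noise clutter stand-in

w = smi_weights(training, s)
# Output SINR-like gain of the adapted filter against the training clutter.
R_hat = training.T @ training / K
gain = abs(w @ s) ** 2 / (w @ R_hat @ w)
```

A reduced-degree-of-freedom model in this spirit would replace the full N x N covariance estimate with a small number of filter tap weights, shrinking K accordingly.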

Paper Details

Date Published: 25 August 1992
PDF: 11 pages
Proc. SPIE 1698, Signal and Data Processing of Small Targets 1992, (25 August 1992); doi: 10.1117/12.139368
Author Affiliations:
Mac L. Hartless, GE Aerospace (United States)
Hong Wang, Syracuse Univ. (United States)

Published in SPIE Proceedings Vol. 1698:
Signal and Data Processing of Small Targets 1992
Oliver E. Drummond, Editor(s)

© SPIE.