Proceedings Volume 2759

Signal and Data Processing of Small Targets 1996

Oliver E. Drummond
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 31 May 1996
Contents: 7 Sessions, 54 Papers, 0 Presentations
Conference: Aerospace/Defense Sensing and Controls 1996
Volume Number: 2759

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Signal Processing
  • Weak-Target Detection
  • Signal and Data Processing
  • Weak-Target Detection
  • Signal and Data Processing
  • Signal Processing
  • Tracking: Association and Filtering
  • Signal and Data Processing
  • Signal and Track Processing
  • Signal and Data Processing
  • Tracking: Association and Filtering
  • Signal and Data Processing
  • Tracking: Association and Filtering
  • Multiple-Sensor, Multiple-Target Tracking
  • Data Processing
  • Weak-Target Detection
  • Signal and Track Processing
Signal Processing
Multiresolution detection of small objects using bootstrap methods and wavelets
Gary A. Hewer, Wei Kuo, Lawrence A. Peterson
A Daubechies' wavelet-based constant false alarm rate (CFAR) small-target detection algorithm is evaluated using measured and simulated infrared images. The wavelet-based detection algorithm is compared with the matched filter to establish relative performance curves. The adaptive CFAR detection statistics are derived from the lexicographically ordered image vectors using Efron's bootstrap method. The bootstrap employs repeated resampling to overcome the difficulties of modeling the post-transform detection statistics of the underlying clutter or fixed pattern noise. The performance of the detection algorithm is evaluated using a simulated Gaussian target with parametrically varying amplitude, size, and polarity, embedded in fixed pattern noise and in measured images chosen to stress the detection algorithms.
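The bootstrap-CFAR idea described above can be illustrated with a short sketch. The snippet below is only a hedged approximation of the approach, not the paper's implementation: it resamples clutter samples to set a constant-false-alarm threshold on single-level Daubechies wavelet detail coefficients. The wavelet choice ('db4'), the false-alarm rate, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of a bootstrap CFAR threshold applied to wavelet detail
# coefficients; parameter names and the choice of 'db4' are illustrative only.
import numpy as np
import pywt

def bootstrap_cfar_threshold(samples, pfa=1e-3, n_boot=500, rng=None):
    """Estimate a detection threshold for a target false-alarm rate by
    Efron-style bootstrap resampling of clutter samples."""
    rng = np.random.default_rng(rng)
    samples = np.asarray(samples).ravel()
    # For each bootstrap replicate, record the (1 - pfa) quantile.
    quantiles = [
        np.quantile(rng.choice(samples, size=samples.size, replace=True), 1.0 - pfa)
        for _ in range(n_boot)
    ]
    return float(np.mean(quantiles))

def wavelet_cfar_detect(image, pfa=1e-3):
    """Single-level wavelet decomposition followed by CFAR thresholding of the
    detail coefficients; returns a boolean detection map per subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image, 'db4')
    detections = {}
    for name, band in (('H', cH), ('V', cV), ('D', cD)):
        thr = bootstrap_cfar_threshold(np.abs(band), pfa=pfa)
        detections[name] = np.abs(band) > thr
    return detections

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    scene = rng.normal(0.0, 1.0, (128, 128))      # surrogate clutter
    scene[64, 64] += 12.0                          # bright point target
    det = wavelet_cfar_detect(scene, pfa=1e-4)
    print({k: int(v.sum()) for k, v in det.items()})
```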
Speckle noise reduction of airborne SAR images with symmetric Daubechies wavelets
Langis Gagnon, Fatima Drissi Smaili
We report the study of a multiresolution speckle reduction method for airborne synthetic aperture radar (SAR) images. The SAR image is first subband-coded using complex symmetric Daubechies wavelets, followed by a noise estimate on the three high-pass bands. An elliptic wavelet coefficient thresholding rule is then applied that preserves the global orientation of the complex wavelet coefficient distribution. Finally, a multiresolution synthesis (inverse wavelet transform) is performed as a last step while preserving small dim objects. A speckle index is computed to quantify the speckle reduction performance. We compare our results with those obtained using median and geometrical (Crimmins) filters.
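A rough sketch of the processing chain follows. It is only an analogue of the method summarized above: ordinary real Daubechies wavelets and simple soft thresholding stand in for the paper's complex symmetric Daubechies wavelets and elliptic thresholding rule, and the speckle index follows the usual mean(local std / local mean) definition.

```python
# Illustrative stand-in for the despeckling chain: decompose, threshold the
# high-pass bands, reconstruct, and score with a speckle index.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def speckle_index(img, win=7):
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return float(np.mean(std / np.maximum(mean, 1e-9)))

def despeckle(img, wavelet='db4', k=3.0):
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    bands = []
    for band in (cH, cV, cD):
        sigma = np.median(np.abs(band)) / 0.6745   # robust noise estimate
        bands.append(pywt.threshold(band, k * sigma, mode='soft'))
    return pywt.idwt2((cA, tuple(bands)), wavelet)

if __name__ == '__main__':
    rng = np.random.default_rng(1)
    clean = np.ones((128, 128))
    noisy = clean * rng.gamma(4.0, 0.25, clean.shape)   # multiplicative speckle
    print(speckle_index(noisy), speckle_index(despeckle(noisy)))
```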
Target capture and target ghosts
Steven P. Auerbach
Optimal detection methods for small targets rely on whitened matched filters, which convolve the measured data with the signal model and whiten the result with the noise covariance. In real-world implementations of such filters, the noise covariance must be estimated from the data, and the resulting covariance estimate may be corrupted by the presence of the target. The resulting loss in SNR is called 'target capture'. Target capture is often thought to be a problem only for bright targets. This presentation shows that target capture also arises for dim targets, leading to an SNR loss which is independent of target strength and depends on the averaging method used to estimate the noise covariance. This loss is due to a 'coherent beat' between the true noise and that portion of the estimated noise covariance due to the target. This beat leads to 'ghost targets', which diminish the target SNR by producing a negative target ghost at the target's position. A quantitative estimate of this effect will be given and shown to agree with numerical results. The effect of averaging on SNR is also discussed for data scenes with synthetic injected targets, in cases where the noise covariance is estimated using 'no target' data. For these cases, it is shown that the so-called 'optimal' filter, which uses the true noise covariance, is actually worse than a 'sub-optimal' filter which estimates the noise from the scene. This apparent contradiction is resolved by showing that the optimal filter is best if the same filter is used for many scenes, but is outperformed by a filter adapted to a specific scene.
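The basic capture mechanism can be shown numerically in a few lines. The following is a minimal sketch under simplified assumptions (white true noise, a Gaussian point-target template, arbitrary sizes and amplitudes), not the paper's analysis: an adaptive matched filter statistic is computed once with the true covariance and once with a covariance estimated from snapshots that still contain the target, which suppresses the target's own response.

```python
# Minimal numerical sketch of the target-capture effect on the statistic
# t = (s^T C^{-1} x)^2 / (s^T C^{-1} s).
import numpy as np

rng = np.random.default_rng(2)
n, n_snapshots, amp = 32, 200, 3.0
s = np.exp(-0.5 * ((np.arange(n) - n / 2) / 1.5) ** 2)   # point-target template
s /= np.linalg.norm(s)

snapshots = rng.normal(size=(n_snapshots, n)) + amp * s   # target present in the training data
C_true = np.eye(n)                                        # true noise covariance (white)
C_capt = snapshots.T @ snapshots / n_snapshots            # estimate corrupted by the target

def amf_statistic(C, x):
    w = np.linalg.solve(C, s)                             # whitened matched filter weights
    return (w @ x) ** 2 / (w @ s)

x_test = amp * s + rng.normal(size=n)                     # an independent test snapshot
print('statistic with true covariance     :', amf_statistic(C_true, x_test))
print('statistic with captured covariance :', amf_statistic(C_capt, x_test))
```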
Performance of an adaptive algorithm for suppressing intraframe jitter in a 3D signal processor
Cheuk L. Chan, Joseph B. Attili, Teodoro Azzarelli, et al.
An adaptive algorithm is proposed here for the suppression of intraframe jitter in images acquired from mechanically scanned electro-optic sensors. This artifact can arise in images obtained from sensors whose focal plane detectors are assembled in a staggered fashion to alleviate aliasing in the cross-scan direction. The artifact manifests itself in a local tearing of the image. The problem is exacerbated in high frequency clutter and may affect clutter cancellation in subsequent matched filtering steps. In this paper, the proposed algorithm is applied to actual jitter-containing sensor data taken from the Airborne Infrared Measurement System sensor. Performance is assessed based on a suite of injected targets as well as real targets in the scene.
Understanding and mitigation of false alarms as measured in LWIR clutter by the AIRMS sensor
Morton S. Farber, David C. Campion
A key issue for an IRST is the occurrence of false alarms, especially against a structured background. This paper reviews observations of false target detections and mitigation techniques on a variety of scenes obtained by the AIRMS sensor (Airborne IR Measurement System). A collection of scenes with cloud and terrain clutter backgrounds of various types and ranges were processed. The processing stream consisted of a combination of space-time filtering, including scene registration, in order to optimally remove clutter. For scenes with appreciable clutter background, the dominant source of false alarms is clutter leakage due to imperfect subtraction of the clutter. An approach to false alarm mitigation based on a first principles understanding of the causes of leakage was applied to the scenes with favorable results. The technique was able to reduce the impact of the false alarms. By comparing the technique to the conventional method of background normalization a further improvement was observed. The technique shows great potential for obtaining the low false alarm rates required for today's IRST scenarios.
Analysis of signal capture loss for fully adaptive matched filters
Paul Frank Singer, Doreen M. Sasaki
The matched filter is a common solution to the problem of detecting a known signal in noise. The matched filter is composed of the signal template to enhance the signal response and second order noise statistics to suppress the noise. The second order statistics of the noise are typically unknown. Fully adaptive implementations estimate these statistics from the noise present in the data to be filtered. If the signal is present, then it will be included in the estimate of the noise statistics used in the matched filter. Since these statistics are used by the matched filter to suppress noise, the signal will act to suppress itself; this is referred to as signal capture loss. In this paper an analytic model for signal capture loss is developed and experimentally verified. The use of the sample statistics to suppress the noise from which they are derived alters the noise rejection performance of the filter. Unlike the analysis of Reed et al., which considers the use of the sample covariance to filter data which is independent of the sample covariance, the case of filtering the same data which was used to calculate the sample covariance is explicitly analyzed. This form of noise suppression is called self-whitening. The effect of self-whitening upon the noise rejection performance of the filter is analyzed and the results are verified experimentally. Signal capture loss and self-whitening are competing effects in terms of the number of samples used to form the sample covariance matrix. The output SNR includes both of these effects and is used to measure filter performance as a function of the number of samples. The output SNR performance is obtained by combining the results for signal capture loss with the self-whitening results. To obtain the performance of a fully adaptive filter relative to the optimal matched filter designed with the true population covariance, the results derived in this paper are combined with those of Reed et al.
Weak-Target Detection
Analysis of advantages and disadvantages of scene registration as part of space-time processing
Cynthia C. Piotrowski, Morton S. Farber, Stuart J. Hemple, et al.
It is generally accepted that scene registration is an essential step in space-time detection processing. This paper isolates the effect of registration in order to evaluate its utility apart from its implications for filter construction or other implementation issues. Model calculations and simulation studies indicate that perfect registration can theoretically provide large gains in detection performance (up to approximately 9 dB) depending on clutter content and motion, but that the gains are quickly reduced in the presence of registration error. Registration error degrades performance in two ways: (1) it lowers the gain achievable by stacking, and (2) it results in increased clutter leakage. Registration of real data can never be perfect, of course, due to properties of physical edges in real data that the typical registration algorithm simply cannot handle, such as differential motion of background and foreground regions at a scene edge, and hiding and revealing of scene elements. Processing of several sequences of AIRMS data, with and without registration, using a three-dimensional (space-time) matched filter defined in the spectral domain, has shown that the overall gains for real data from a realizable registration algorithm are modest, usually less than 3 dB, for a sensor with relatively little jitter.
Adaptive multispectral detection of small targets using spatial and spectral convergence factor
Laurent Nolibe, Julien Borgnino, Marc Ducoulombier, et al.
The main purpose of infrared search and track (IRST) systems is to achieve optimal discrimination between true targets and background clutter (false alarms). In such single-band systems, two-dimensional least mean square (TDLMS) adaptive filters achieve good results in small target detection. However, detection performance is strongly dependent on the background correlation length: when the difference between the background and target correlation lengths is too small, detection performance decreases. The method presented in this paper is applied to a naval dual-band panoramic surveillance system for target detection at low elevation angles. It consists in adjusting the time-varying convergence factor of the TDLMS filter, not only by using spatial statistics, but also by integrating a local spectral parameter. The use of this information is based on the theoretical spectral radiance discrimination between targets and backgrounds in the LWIR and MWIR bands. When the local spectral parameter matches the spectral background response, the filter reactivity is optimal via the spatial convergence factor, whereas it is decreased in the presence of spectral target characteristics. In this way we achieve better target-to-clutter discrimination, independently of the background correlation length.
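A schematic sketch of a TDLMS background predictor with a spatially varying convergence factor is given below. It is an assumption-laden illustration, not the paper's filter: the dual-band spectral modulation is reduced to a user-supplied per-pixel factor, and the window size and step size are arbitrary choices.

```python
# Schematic TDLMS sketch: predict each pixel from a centred window of its
# neighbours (centre pixel excluded); the residual is the detection output.
import numpy as np

def tdlms_residual(img, support=5, mu0=0.05, spectral_factor=None):
    h, w = img.shape
    half = support // 2
    weights = np.zeros((support, support))
    residual = np.zeros_like(img, dtype=float)
    if spectral_factor is None:
        spectral_factor = np.ones_like(img, dtype=float)
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
            patch[half, half] = 0.0            # exclude the pixel being predicted
            pred = float(np.sum(weights * patch))
            err = float(img[r, c]) - pred
            residual[r, c] = err
            # Normalized step size, scaled down where the spectral factor
            # indicates target-like (rather than background-like) behaviour.
            mu = mu0 * spectral_factor[r, c] / (np.sum(patch * patch) + 1e-9)
            weights += mu * err * patch
    return residual
```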
Analysis of hyperspectral infrared and low-frequency SAR data for target classification
Scott G. Beaven, Xiaoli Yu, Lawrence E. Hoff, et al.
Multispectral and hyperspectral infrared (IR) sensors have been utilized in the detection of ground targets by exploiting differences in the statistical distribution of the spectral radiance between natural clutter and targets. Target classification by hyperspectral sensors such as the Spatially Modulated Imaging Fourier Transform Spectrometer (SMIFTS) sensor, a mid-wave infrared imager, depends on exploiting target phenomenology in the infrared. Determination of robust components from hyperspectral IR sensors that are useful for discriminating targets is a key issue in the classification of ground targets. Both synthetic aperture radars (SAR) and IR imagers have been utilized in the target detection and recognition processes. Improved target classification by sensor fusion depends on exploitation of target phenomenology from both of these sensors. Here we show the results of an investigation of the use of hyperspectral infrared and low-frequency SAR signatures for the purpose of target recognition. Features extracted from both sensors on similar targets are examined in terms of their usefulness in separating various classes of targets. Simple distance measures are computed to determine the potential for classifying targets based on a fusion of SAR and hyperspectral infrared data. These separability measures are applied to measurements of similar vehicle targets obtained from separate experiments involving the SMIFTS hyperspectral imager and the Stanford Research Institute SAR.
Experiments to support the development of techniques for hyperspectral mine detection
Edwin M. Winter, Michael J. Schlangen, Anu P. Bowman, et al.
Under the sponsorship of the DARPA Hyperspectral Mine Detection program, a series of both non-imaging and imaging experiments have been conducted to explore the physical basis of buried object detection in the visible through thermal infrared. Initially, non-imaging experiments were performed at several geographic locations. Potential spectral observables for detection of buried mines in the thermal portion of the infrared were found through these measurements. Following these measurements with point spectrometers, a series of hyperspectral imaging measurements was conducted during the summer of 1995 using the SMIFTS instrument from the University of Hawaii and the LIFTIRS instrument from Lawrence Livermore National Laboratory. The SMIFTS instrument (spatially modulated imaging Fourier transform spectrometer) acquires hyperspectral image cubes in the short-wave and mid-wave infrared and LIFTIRS (Livermore imaging Fourier transform infrared spectrometer) acquires hyperspectral image cubes in the long-wave infrared. Both instruments were optimized through calibration to maximize their signal to noise ratio and remove residual sensor pattern. The experiments were designed to both explore further the physics of disturbed soil detection in the infrared and acquire image data to support the development of detection algorithms. These experiments were supported by extensive ground truth, physical sampling and laboratory analysis. Promising detection observables have been found in the long-wave infrared portion of the spectrum. These spectral signatures have been seen in all geographical locations and are supported by geological theory. Data taken by the hyperspectral imaging sensors have been directly input to detection algorithms to demonstrate mine detection techniques. In this paper, both the non-imaging and imaging measurements made to date will be summarized.
Signal and Data Processing
Detection of a small target against bottom of water reservoir
This paper studies the image contrast and limiting visibility range of a small target observed against the bottom of a water reservoir under active or passive illumination. We use the small-angle approximation of radiative transfer theory to simulate analytically the multiply scattered radiation from the tracked target and from the water medium, taking into account the shadowing of a portion of the medium by the target. The contrast magnitude is shown to behave 'unusually' under certain conditions, growing as the target is submerged and even reaching a maximum at some depth. A physical explanation of this maximum is given and the conditions for such 'unusual' behavior of the contrast are evaluated. On the other hand, a target with a certain albedo can be invisible at all depths, from the water surface down to the bottom. The case of equal albedos of the target and the bottom is also considered; in this case the target is observable only when its depth is smaller than some threshold value. We provide an analytical estimate of this threshold, expressing it in terms of the optical characteristics of the water. In addition, the results of several case studies are presented for different optical characteristics of the water medium. The data obtained should be useful to experts in the development of optical vision systems for small targets and in the simulation of such systems by advanced algorithms for sensor signal and data processing.
Detection of moving pixel-sized target based on quasi-continuity-filter
Yan Xiong, Jiaxiong Peng, Mingyue Ding, et al.
A method based on a quasi-continuity filter for the detection of moving pixel-sized targets under low SNR is presented in this paper. First, each frame of the input sequence is binarized based on a maximum-error-probability rule. Then the quasi-continuity filter is designed to exploit the continuity of the target pixels across adjacent frames and the randomness of the noise pixels to filter out the noise.
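The continuity test lends itself to a compact sketch. The version below is a hedged guess at the flavor of such a filter rather than the authors' algorithm: the binarization threshold, neighbourhood size, and the requirement of hits in both adjacent frames are all illustrative assumptions.

```python
# Sketch of a quasi-continuity check: after per-frame binarization, a pixel is
# kept only if a hit also appears within a small spatial neighbourhood in each
# adjacent frame, which isolated noise pixels rarely satisfy.
import numpy as np
from scipy.ndimage import binary_dilation

def binarize(frame, k=3.0):
    return frame > frame.mean() + k * frame.std()

def quasi_continuity_filter(frames, neighborhood=3):
    """frames: sequence of 2D arrays. Returns per-frame masks of hits that are
    'continuous' with hits in the neighbouring frames."""
    masks = [binarize(f) for f in frames]
    struct = np.ones((neighborhood, neighborhood), dtype=bool)
    out = []
    for i, m in enumerate(masks):
        prev_ok = binary_dilation(masks[i - 1], struct) if i > 0 else np.ones_like(m)
        next_ok = binary_dilation(masks[i + 1], struct) if i < len(masks) - 1 else np.ones_like(m)
        out.append(m & prev_ok & next_ok)
    return out
```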
Radar target recognition with fractal technology
Lianping Deng
This paper presents an approach to radar target classification using fractal geometry techniques. The classification techniques used here exploit the geometric nature of the target and its effect on the backscattered signals. Target high range resolution radar signatures are transformed into fractals via fractal interpolation techniques, and their fractal dimensions are used to discriminate different targets. The results show that different targets have different fractal dimensions and thus can be discriminated according to their fractal dimensions. High range resolution, fully polarized radar backscattered signals of five aircraft at different aspects are used to test the algorithm. The classification results presented in this paper are promising. The experiments indicate that the fractal dimension feature used in this paper appears to be independent of amplitude and is therefore regarded as a promising new feature for radar target classification. This also opens up an entirely new feature space that needs to be explored further in the field of radar target classification.
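As a loose illustration of a fractal-dimension feature, the sketch below estimates a box-counting dimension of the graph of a high-range-resolution profile. This is a simpler stand-in for the fractal-interpolation approach named above; the box sizes, normalization, and sampling-based counting are arbitrary illustrative choices.

```python
# Illustrative box-counting dimension of a 1-D range profile's graph, usable
# as a classification feature in the spirit of the abstract above.
import numpy as np

def box_counting_dimension(profile, box_sizes=(2, 4, 8, 16, 32)):
    x = np.linspace(0.0, 1.0, profile.size)
    y = (profile - profile.min()) / (np.ptp(profile) + 1e-12)
    counts = []
    for n in box_sizes:
        eps = 1.0 / n
        boxes = set()
        for xi, yi in zip(x, y):
            boxes.add((int(xi / eps), int(yi / eps)))   # box hit by this sample
        counts.append(len(boxes))
    # Slope of log(count) vs log(1/eps) estimates the dimension.
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return float(slope)
```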
Small moving target indication based on linear-variant-coefficient-difference equation
Yan Xiong, Jiaxiong Peng, Mingyue Ding, et al.
In this paper the target movement model assumed by track-before-detect approaches is extended to that of a target with constant acceleration, and an algorithm based on a linear-variant-coefficient difference equation is proposed for moving target indication. Moreover, based on parametric models of the target and background, this paper presents an analysis of the algorithm's optimal SNR gain versus target and background characteristics, as well as the sensitivity of this gain to mismatch.
Influence of spacecraft's external atmosphere on the point-target-location performance of space optical telescope
Sergey V. Shultz, Peter Alexseevich Bakut, Yurij Petrovich Shumilov
From the viewpoint of target detection, the combined image of the particles in a spacecraft's own external atmosphere is equivalent to a background noise. The received (or observed) field is therefore a combination of target, background, and photodetector noise components. In this paper the statistical properties of the received field are defined. On the basis of this statistical field model, a point target detection algorithm is constructed. The relationships between the probability of correct detection and the defocusing parameter, under changes of background intensity, are calculated numerically.
Nonlinear parameter filtering on the interference with variable structure
Valery A. Cherdyntsev, Viktor M. Cozel
A method for the synthesis and analysis of small-signal processing in the presence of interference with variable structure is presented. The synthesis is based on describing the interference models by means of sets of probability densities. It is assumed that the change of interference structure is described by a Markov process. The main results of the synthesis and analysis of signal processing under various types of interference point to the necessity of adaptive nonlinear processing of the received signal and the variable interference. The adaptive nonlinear processing can attenuate an intensive interference.
Novel techniques for restoration of images of highly variable extended objects
Aleksandr N. Safronov, Andrew A. Pahomov
This study focuses on the development of new post-detection image processing methods meant for joint recovery of a temporal succession of atmospherically corrupted images, corresponding to an unknown variable object, and simultaneous estimation of an unknown blur function. The key to these techniques, called multiple object speckle interferometry (MOSI) and multiple object deconvolution (MODE), is the generalized projection onto convex sets (POCS) methodology exploiting such qualitative information as: linearity and shift-invariance of the whole optical channel, non-negativity and spatio-temporal boundedness of the object's brightness distribution, and statistical wide-sense temporal stationarity of the phase distortions combining both quasi-static (deterministic) aberrations and rapidly fluctuating (turbulence-induced) perturbations. The proposed techniques are referenceless: at modest SNR, they are able to offer high resolution without appealing to an auxiliary wavefront sensor, a natural or laser guide star, or adaptive optics. Among other things, detailed structural and/or statistical information about the net transfer function and the object itself to be imaged is not essential. The basic reasons for the convergence and uniqueness of the derived algorithms are briefly elucidated. Several applications of MOSI and MODE of interest to defense and observational astrophysics are numerically exemplified, including preliminary imagery results of the field trial on ground-based observation of the Space Shuttle Atlantis docked with the Mir Space Station (June 1995). The data-collection scheme implied within the described methods is shown to be general enough to accommodate a wide variety of viewing scenarios, ranging from terrestrial through aircraft (balloon)-borne to space-based observations affected by any type of phase distortions that are wide-sense stationary in time.
Weak-Target Detection
Image-quality degradation and retrieval errors introduced by registration and interpolation of multispectral digital images
Bradley G. Henderson, Christoph C. Borel, James P. Theiler, et al.
Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the modulation transfer function and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.
Partial removal of correlated noise in thermal imagery
Correlated noise occurs in many imaging systems such as scanners and push-broom imagers. The sources of correlated noise can be the detectors, pre-amplifiers, and sampling circuits. Correlated noise appears as streaking along the scan direction of a scanner or the along-track direction of a push-broom imager. We have developed algorithms to simulate correlated noise and a pre-filter to reduce the amount of streaking without destroying the scene content. The pre-filter in the Fourier domain consists of the product of two filters: one filter models the correlated noise spectrum, the other is a windowing function, e.g. a Gaussian or Hanning window with variable width, to block high frequency noise away from the origin of the Fourier transform of the image data. We have optimized the filter parameters for various scenes and find improvements in the RMS error between the original and the pre-filtered noisy image.
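A simplified stand-in for such a Fourier-domain destreaking pre-filter is sketched below. It does not reproduce the paper's product of a noise-spectrum model and a Gaussian/Hanning window; instead it simply attenuates the spectral ridge where row-wise streaking concentrates, while protecting the lowest cross-track frequencies. All widths are illustrative.

```python
# Suppress energy on the fx ~ 0 ridge (constant-per-row streaks) while keeping
# the DC term and the lowest cross-track frequencies untouched.
import numpy as np

def destreak(image, notch_sigma=1.5, keep_low=2):
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h            # cross-track cycle index
    fx = np.fft.fftfreq(w) * w            # along-scan cycle index
    FX, FY = np.meshgrid(fx, fy)          # shape (h, w)
    ridge = np.exp(-(FX ** 2) / (2.0 * notch_sigma ** 2))  # streak ridge near fx = 0
    ridge[np.abs(FY) < keep_low] = 0.0    # protect DC and the lowest cross-track terms
    filt = 1.0 - ridge                    # attenuate the ridge, pass everything else
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

if __name__ == '__main__':
    rng = np.random.default_rng(3)
    scene = rng.normal(0.0, 1.0, (256, 256))
    streaked = scene + rng.normal(0.0, 2.0, (256, 1))   # row-wise correlated offsets
    print(np.std(streaked - scene), np.std(destreak(streaked) - scene))
```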
Signal and Data Processing
Modeling of heavily tailed aliasing distribution for undersampled IRST systems
Robert A. Makl, Hector A. Quevedo
Common module and staring focal plane arrays used in IR search and track applications exhibit inherent undersampling in one or both spatial dimensions, producing signal and clutter aliasing. The aliasing is modeled as a stochastic noise process with uniformly distributed sample phasing. This paper transcends previous attempts at modeling aliasing by deriving the joint density function of the matched-filtered SNR normalized by the density function of the local noise estimate. The resulting probability of detection distribution has been compared with experimental results through simulation. Finally, the use of this probability density function to further enhance the performance of a multiple hypothesis tracker is discussed.
Online performance evaluation of disk-shadowing subsystem
Hai Jin, Dan Feng, Jiangling Zhang
Disk shadowing technology is the best way to enhance the availability and reliability of an I/O subsystem. The most important metric of the on-line performance of a disk shadowing subsystem is the CPU utilization. This paper first presents a way to calculate the CPU utilization of a system connected with shadowed disks using a statistical averaging method. From the simulation results of the CPU utilization of a system connected with a disk shadowing subsystem, we can see that in most cases the on-line performance of shadowed disks is better than that of a single disk; thus, disk shadowing technology can provide enhanced on-line performance combined with the highest availability and reliability.
Signal Processing
STAS: a code suite for space-time adaptive processing of IR images
Steven P. Auerbach, Lawrence E. Hauser, Frederick P. Boynton, et al.
This presentation describes STAS (space-time analysis stream), a suite of advanced signal processing codes for the detection of dim targets in 'look-down' IR clutter. STAS has been tested on several hundred thousand IR clutter scenes in the NAWC SkyBall Data Base and has been shown to be very robust. Code modules in the STAS detection stream perform the following functions: (1) calibration, bad-pixel identification and editing; (2) image registration, clutter estimation and clutter subtraction; (3) velocity stacking and matched-filtering. Other code modules in STAS allow synthesis and rework of IR images, to emulate data from arbitrary IR sensors with arbitrary trajectories over the clutter scene. The theory and performance of various STAS modules will be described, with emphasis on the registration method, fast methods for mapping from the focal plane to the ground, treatment of bad pixels, digital interpolator design/implementation, and matched filter design. Examples of processed IR images will be displayed.
Tracking: Association and Filtering
Nonlinear optimal semirecursive filtering
This paper describes a new hybrid approach to filtering, in which part of the filter is recursive while another part is non-recursive. The practical utility of this notion is to reduce computational complexity. In particular, if the non-recursive part of the filter is sufficiently small, then such a filter might be cost-effective to run in real time with computer technology available now or in the future.
Signal and Data Processing
Data fusion of multiple-sensors attribute information for target-identity estimation using a Dempster-Shafer evidential combination algorithm
Marc-Alain Simard, Jean Couture, Eloi Bosse
The research and development group at Loral Canada is in the second phase of the development of a data fusion demonstration model (DFDM) for a naval anti-air warfare platform, to be used as a workbench tool to perform exploratory research. The software has been designed to be implemented within the software environment of the Canadian Patrol Frigate (CPF). The second version of DFDM has the capability to fuse data from the following CPF sensors: surveillance radars, electronic support measures, identification friend or foe, the communication intercept operator, and a tactical data link. During the first phase, the project demonstrated the feasibility of fusing the sensor attribute information using a modified version of the Dempster-Shafer evidential combination algorithm. A significant enhancement has been the addition of pruning rules to reduce the set of identity propositions, which otherwise would be too large to comply with the DFDM real-time requirements. Another improvement has been the use of fuzzy logic to make possible the fusion of apparently incomplete attribute information coming from different sensors. This paper describes the main features of the evidential combination algorithm that we have implemented in the DFDM system. A benchmark scenario has been selected to quantitatively demonstrate the capability of the attribute fusion algorithm.
Signal and Track Processing
Resilient networked sensor-processing implementation
Glen Wada, J. Steven Hansen
The spatial infrared imaging telescope (SPIRIT) III sensor data processing requirement for the calibrated conversion of data to engineering units at a rate of 8 gigabytes of input data per day necessitated a distributed processing solution. As the sensor's five-band scanning radiometer and six- channel Fourier-transform spectrometer characteristics became fully understood, the processing requirements were enhanced. Hardware and schedule constraints compounded the need for a simple and resilient distributed implementation. Sensor data processing was implemented as a loosely coupled, fiber distributed data interface network of Silicon Graphics computers under the IRIX Operating Systems. The software was written in ANSI C and incorporated exception processing. Interprocessor communications and control were done both by the native capabilities of the network and Parallel Virtual Machine (PVM) software. The implementation was limited to four software components. The data reformatter component reduced the data coupling among sensor data processing components by providing self-contained data sets. The distributed processing control and graphical user interface components encased the PVM aspect of the implementation and lessened the concern of the sensor data processing component developers for the distributed model. A loosely coupled solution that dissociated the sensor data processing from the distributed processing environment, a simplified error processing scheme using exception processing, and a limited software configuration have proven resilient and compatible with the dynamics of sensor data processing.
Neural network point detection using a coning scan imager
Emily D. Claussen, Kim T. Constantikes
A very compact and inertially pointed imaging device can be constructed by combining the functions of a telescope and a gyroscope into a single assembly. However, the image resulting from this device is not easily processed owing to scan-induced geometric distortions. We have devised a method for adaptively processing the imager outputs to facilitate detection of bright points in a cluttered background. Pseudo-image neighborhoods are vectorized and have scan angle bits appended, allowing a neural net to learn the best matched filter for each scan configuration. We present the results of testing this filter using both synthetic and measured solar sea glint clutter.
Algorithms for calibration and point-source extraction for a LWIR space-based sensor
Dean S. Garlick, Mark E. Greenman, Mark F. Larsen, et al.
The Midcourse Space Experiment (MSX) satellite is scheduled for launch in early 1996. The Spatial Infrared Imaging Telescope (SPIRIT) III sensor, the primary instrument of MSX, covers the spectrum from the midwave infrared to the longwave infrared. The SPIRIT III instrument is cryogenically cooled and consists of an interferometer and a five-band scanning radiometer with a spatial resolution of 90 (mu) rad. This paper describes the unique algorithms and software implementation developed to support the SPIRIT III radiometer. The algorithms for converting raw radiometer counts to calibrated counts and then to engineering units are described. The standard process (raw counts to corrected counts) consists of dark offset correction, linearity correction, integration mode normalization, non-uniformity correction, field of regard non-uniformity correction, and bad pixel processing. The algorithm to convert corrected counts to point source engineering units consists of pixel position tagging (non-uniform grid), color coalignment, distortion correction, background subtraction, correction for spacecraft attitude, and position and amplitude determination. The algorithms implemented in the software must produce goniometric estimates to within 5 (mu) rad (0.05 pixel) and radiometric results to within 1 percent. The results of the algorithms are demonstrated in this paper.
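The raw-to-corrected-counts chain listed above can be pictured as a short pipeline. The sketch below is a generic, hedged illustration of that kind of chain (dark offset, linearity, gain non-uniformity, bad-pixel handling); all calibration coefficients and function names are placeholders, not SPIRIT III values or software.

```python
# Hedged sketch of a raw-to-corrected-counts calibration chain.
import numpy as np
from scipy.ndimage import median_filter

def correct_counts(raw, dark, lin_coeffs, gain, bad_mask):
    x = raw.astype(float) - dark                       # dark offset correction
    x = np.polyval(lin_coeffs, x)                      # polynomial linearity correction
    x = x / np.where(gain > 0, gain, 1.0)              # non-uniformity (gain) correction
    # Replace flagged bad pixels with a local median of their neighbours.
    filled = median_filter(x, size=3)
    return np.where(bad_mask, filled, x)
```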
Impact of the SPIRIT III sensor design on algorithms for background removal, object detection, and point-source extraction
Mark F. Larsen, Joseph J. Tansock Jr., Garth O. Sorenson, et al.
This paper describes background removal, point source detection, and position and irradiance extraction data processing algorithms that have been developed for the Spatial Infrared Imaging Telescope (SPIRIT) III design. The SPIRIT III sensor is the primary instrument on the Midcourse Space Experiment (MSX) satellite and is scheduled for launch in early 1996. The sensor consists of an off-axis reimaging telescope and, among other instruments, a six-band scanning radiometer that covers the spectrum from midwave infrared to longwave infrared. The radiometer has five arsenic-doped silicon (Si:As) focal plane detector arrays with 8 X 192 pixels. The angular separation between adjacent pixels is 90 (mu) rad. A single-axis scan mirror can operate at a constant 0.46 deg/sec scan rate to give programmable fields of regard of 1 X 0.75, 1 X 1.5, and 1 X 3 degrees, or can remain fixed. Scanned images are non-uniformly sampled because of non-linear scan mirror motion, array misalignment, optical distortion, detector readout ordering, and satellite rotation. In addition, three of the five arrays contain multiple cross-scan-aligned columns of pixels that give scanned images with spatially overlapping in-scan data. Algorithms for processing data sampled on a uniform grid, such as data obtained from a CCD array, are enhanced and applied to the SPIRIT III radiometer, where scanned images are non-uniformly sampled and have spatially overlapping data. The performance of these algorithms is evaluated with point source data acquired during ground measurements.
False track discrimination in a 3D signal/track processor
Joseph B. Attili, Robert W. Fries, Cheuk L. Chan, et al.
Long range detection and tracking of moving targets against clutter requires advanced signal and track processing techniques in order to exploit the ultimate capabilities of modern electro-optical sensors. These include three-dimensional filtering and multiple hypothesis tracking. Unfortunately, features present in real backgrounds can lead to false alarms which must be recognized in order to achieve a low false track rate. This paper describes one approach which was successful at mitigating clutter-induced false tracks while maintaining the low thresholds necessary for the detection of weak targets. This technique uses information derived in the signal processor describing the local background as additional discriminants in the track processor to identify false tracks caused by clutter leakage. We present an overview of the 3D signal/track processor, the false track mitigation methodology, and experimental results against real background data.
Bias phenomena study/compensation for tracking algorithms
Shan Cong, Lang Hong, Michael W. Logan, et al.
The bias phenomenon in multiple target tracking has been observed for some time. Beginning with a new view of tracking algorithm structure, this paper is devoted to a study of the bias resulting from miscorrelation in data association. The main result of this paper is a necessary condition for miscorrelation to cause bias. Relying on the main result, one new step is added to the tracking algorithm structure to compensate for the bias generated by miscorrelation. A case study of the bias phenomenon in global nearest neighbor tracking is carried out as an application of the ideas and results presented in this paper. Tracking examples are given as an illustration. A discussion of several problems related to our results is given at the end of this paper.
Signal and Data Processing
Gray-scale morphology for small object detection
In this paper, we present morphological processing using a median operation for small object detection. First, we perform a median morphological operation on the gray-scale image with structuring element A, which makes all scene regions of size equal to or larger than A's central area brighter (for bright objects) or darker (for dark objects) while leaving other regions approximately unchanged. Second, we perform a median morphological operation on the gray-scale image with a larger structuring element B, which makes all scene regions of size equal to or smaller than B's central area darker (for bright objects) or brighter (for dark objects) while leaving other regions approximately unchanged. Third, we calculate the absolute difference of the two outputs. All object regions between the smallest and largest sizes are enhanced and all background regions are weakened, so a simple threshold can extract all objects along with some smaller background regions. Finally, those background regions whose areas are smaller than structuring element A can be eliminated by region labeling. We find that if (1) in contrast to the background, the object regions exhibit a discontinuity with their neighboring regions, and (2) each object is concentrated in a relatively small region that can be considered a homogeneous compact region, our algorithm can achieve satisfactory detection performance.
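The three-step chain above maps onto a short sketch. The snippet below is an approximation under stated assumptions (square structuring elements, a global mean-plus-k-sigma threshold, and scipy's median and labeling routines standing in for the paper's operators); it is not the authors' implementation.

```python
# Sketch of a median-based morphological small-object detector.
import numpy as np
from scipy.ndimage import median_filter, label

def detect_small_objects(img, size_a=3, size_b=9, k=3.0, min_area=None):
    if min_area is None:
        min_area = size_a * size_a                 # drop regions smaller than A's area
    small_pass = median_filter(img, size=size_a)   # suppresses regions smaller than A
    large_pass = median_filter(img, size=size_b)   # suppresses regions smaller than B
    diff = np.abs(small_pass - large_pass)         # enhances objects between A and B
    mask = diff > diff.mean() + k * diff.std()     # simple global threshold
    labels, n = label(mask)
    for i in range(1, n + 1):                      # region labelling clean-up
        if np.sum(labels == i) < min_area:
            mask[labels == i] = False
    return mask
```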
Tracking: Association and Filtering
Automated search for moonlets orbiting about asteroid Eros
Patricia K. Murphy, Gene A. Heyler
The NEAR (near Earth asteroid rendezvous) spacecraft was launched on February 17, 1996 for subsequent orbit insertion about the asteroid Eros. During the approach and preliminary flyby prior to the insertion, a continuous series of images will be taken of a region within a few hundred kilometers of Eros. These images will be processed to detect the presence of small orbiting moonlets which could seriously affect the safety and orbit stability of the spacecraft. These moonlets would, of course, also be of great interest to astronomers and space physicists. This paper discusses the image processing and motion discrimination (track processing) techniques that will be applied to the time series of images and reviews the NEAR Eros encounter simulation used for the development of these algorithms. Image processing techniques include frame registration, background detrending, object detection, feature extraction for both point and extended sources, and preliminary object classification. Motion discrimination algorithms include pattern matching of observations, kinematic tracking of observations via proximity gating, removal of observations determined as stars, inertial velocity estimation of moving objects, and final moonlet classification. The simulation models a variable number of synthetic moonlets, Eros orbit and kinematics, an 80,000-plate shape model of the rotating Eros, spacecraft orbit and attitude, solar illumination, star catalog, and imaging sensor characteristics.
Signal and Data Processing
Classification of stationary movers with simplified MTI tracking
David B. Brown
Today's landscape is populated with many significant man-made structures which have moving parts but do not move themselves. This type of element is termed a stationary mover. A relevant example of this type of element is the rotating antenna of a ground-based search radar. As an airborne platform equipped with moving-target indication (MTI) radar sweeps the ground, the motion of the moving parts is detected and reported by position and range rate. Even though these elements appear to be moving, the reported location remains unchanged from sweep to sweep. The algorithm described in this paper successfully detects stationary movers by first finding MTI tracks which do not move over time. These tracks are then interrogated to determine if the range rate history is consistent with that of a rotating object. Those determined to be rotators are reported along with a likelihood of correct classification.
Tracking: Association and Filtering
Image tracking using a scale function-based nonlinear estimation algorithm
Craig S. Agate, Ronald A. Iltis
A refined version of a nonlinear estimation algorithm for tracking extended targets using imaging array data is presented. The algorithm is applied to a situation in which there is no closed-form functional representation for the image of the target. Based on the reduced sufficient statistic method of R Kulhavy, the algorithm recursively propagates, in a Bayes-closed sense, a set of sufficient statistics which approximate the true posterior density of the target parameter vector. The approximation is based on minimizing the Kullback-Leibler distance between the true posterior density and the approximating density. In previous work this density was a Gaussian mixture, while here scale functions are used to approximate the posterior density from which an approximate minimum variance estimate can be calculated. As the tracking progresses the posterior density is estimated on an increasingly finer scale. In order to reduce the number of scale functions, however, a pruning process is necessary. In this way, the number of scale functions approximating the density increases in areas for which the true density is significant while scale functions which approximate the density over regions where it is insignificant are ignored. Results are presented for simulations carried out in which the algorithm is applied to tracking an aircraft based on a sequence of synthetic images.
Heuristic task assignment algorithms applied to multisensor-multitarget tracking
Robert L. Popp, Krishna R. Pattipati, Richard R. Gassner
In this paper, we are concerned with the problem of assigning track tasks, with uncertain processing costs and negligible communication costs, across a set of homogeneous processors within a distributed computing system to minimize workload imbalances. Since the task processing cost is uncertain at the time of task assignment, we propose several fast heuristic solutions that are extensible, incur very little overhead, and typically react well to changes in the state of the workload. The primary differences between the task assignment algorithms proposed are: (i) the definition of a task assignment cost as a function of past, present, and predicted workload distribution, (ii) whether or not information sharing concerning the state of the workload occurs among processors, and (iii) if workload state information is shared, the reactiveness of the algorithm to such information (i.e., high-pass, moderate, low-pass information filtering). We show, in the context of a multisensor-multitarget tracking problem, that using the heuristic task assignment algorithms proposed can yield excellent results and offer great promise in practice.
Bayesian target selection after group pattern distortion
The following problem is considered: a group of point targets is observed via an imperfect sensor and one of the measurements is chosen. The measurement of each target's position is corrupted by an independent error, although every object is detected. Two processes then act to move and distort the group: one is a bulk effect that acts equally on all members of the group, while the other is independent for each target. The group is observed again by a (possibly different) imperfect sensor which may not detect every target. The problem is to construct the posterior distribution of the chosen target's position, given the two sets of measurements. Probability models of the sensors and of the pattern distortion processes are assumed to be available. A formal general solution has been obtained for this problem. For the special linear-Gaussian case this reduces to a closed-form analytic expression. To facilitate implementation, a hypothesis pruning technique is given. A simulation example illustrating performance is provided.
Optimal nonlinear filtering with the method of virtual measurements
The method of virtual measurements (MOVM) will be described for designing nonlinear filters. The new nonlinear filter theory generalizes the Kalman filter, and in some important applications, the performance of the new filter is vastly superior to the extended Kalman filter (EKF). Unlike the EKF, the new theory does not use linearization. The new design approach, MOVM, can be applied to exact nonlinear filters as well as nonlinear approximate filters.
IMAM algorithm for tracking maneuvering targets in clutter
Target tracking in clutter is difficult because there can be several contact-to-track associations for a given track update. The nearest neighbor approach is traditionally used but probabilistic methods, such as probabilistic data association (PDA), have since proved more capable. Tracks are also lost during maneuvers and the interacting multiple model (IMM) algorithm has been demonstrated to be effective at tracking maneuvering targets by responding to different target modes. By combining the IMM and PDA, the resulting algorithm responds to target maneuvers and is effective in clutter. The interacting multiple bias model (IMBM) algorithm is also an effective technique when tracking maneuvering targets but considers the target acceleration a system bias. The bias is estimated in an IMM algorithm framework and then used to compensate a constant velocity filter estimate. The integrated PDA filter will be incorporated into the IMBM algorithm and applied to tracking maneuvering targets in clutter. A performance comparison of IMM and IMBM techniques for tracking maneuvering targets in clutter will also be presented.
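Since both the IMM-PDA and IMBM approaches described above build on the IMM mixing step, a minimal sketch of that step may help fix ideas. It follows the standard IMM mixing equations and is not taken from this paper; all dimensions and variable names are illustrative.

```python
# Standard IMM mixing: model probabilities and a Markov transition matrix
# produce mixed initial estimates for each mode-matched filter.
import numpy as np

def imm_mix(mu, Pi, xs, Ps):
    """mu: model probabilities (r,), Pi: transition matrix (r, r),
    xs: list of state estimates, Ps: list of covariances."""
    r = len(mu)
    c = Pi.T @ mu                                  # predicted model probabilities
    mix = (Pi * mu[:, None]) / c[None, :]          # mixing weights mu_{i|j}
    xs0, Ps0 = [], []
    for j in range(r):
        xj = sum(mix[i, j] * xs[i] for i in range(r))
        Pj = sum(mix[i, j] * (Ps[i] + np.outer(xs[i] - xj, xs[i] - xj)) for i in range(r))
        xs0.append(xj)
        Ps0.append(Pj)
    return c, xs0, Ps0
```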
Multiple-Sensor, Multiple-Target Tracking
Search for optimal sensor management
Several sensor management schemes based on information theoretic metrics such as discrimination gain have been proposed, motivated by the generality of such schemes and their ability to accommodate mixed types of information such as kinematic and classification data. On the other hand, there are many methods for managing a single sensor to optimize detection. This paper compares the performance against low signal-to-noise ratio targets of a discrimination gain scheme with three such single-sensor detection schemes: the Wald test, an index policy that is optimal under certain circumstances, and an 'alert-confirm' scheme modeled on methods used in some existing radars. For the situation where the index policy is optimal, it outperforms discrimination gain by a slight margin. However, the index policy assumes that there is only one target present. It performs poorly when there are multiple targets, while discrimination gain and the Wald test continue to perform well. In addition, we show how discrimination gain can be extended to multisensor/multitarget detection and classification problems that are difficult for these other methods. One issue that arises with the use of discrimination gain as a metric is that it depends on both the current density and an a priori distribution. We examine the dependence of discrimination gain on this prior and find that while the discrimination depends on the prior, the gain is prior-independent.
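For readers unfamiliar with the metric, discrimination is the Kullback-Leibler divergence between two densities; a minimal sketch for discrete densities is given below. The function name and the small smoothing constant are illustrative, not from the paper.

```python
# Discrimination (Kullback-Leibler divergence) between two discrete densities,
# the quantity whose gain drives information-theoretic sensor management.
import numpy as np

def discrimination(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

print(discrimination([0.7, 0.3], [0.5, 0.5]))
```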
Neural network fusion capabilities for efficient implementation of tracking algorithms
Malur K. Sundareshan, Farid Amoozegar
The ability to efficiently fuse information of different forms to facilitate intelligent decision-making is one of the major capabilities of trained multilayer neural networks that is being recognized in recent times. While the development of innovative adaptive control algorithms for nonlinear dynamical plants which attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. In this paper we describe the capabilities and functionality of neural network algorithms for data fusion and the implementation of nonlinear tracking filters. For a discussion of details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. Such an approach results in an overall nonlinear tracking filter which has several advantages over the popular efforts at designing nonlinear estimation algorithms for tracking applications, the principal one being the reduction of mathematical and computational complexities. A system architecture that efficiently integrates the processing capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described in this paper.
Track fusion with feedback
Oliver E. Drummond
Track fusion is one of the algorithm architectures for tracking multiple targets with data from multiple sensors. In track fusion, for example, sensor-level tracks are combined to form global-level tracks that are based on data from all the sensors. These multiple-sensor, global-level tracks can then be fed back to the sensor-level trackers to reduce the data association errors and to improve the accuracy of the sensor-level tracks. The global tracks, however, are cross-correlated with the sensor-level tracks. This track-to-track cross-correlation should be taken into account in algorithm design. This cross-correlation must be considered when providing the global tracks to the sensor trackers as well as when providing the sensor tracks to the global tracker. With feedback, both the global tracks and the tracks from each sensor are based on prior data from not only the sensor itself but also the other sensors. This paper goes beyond a previous paper: alternative algorithm architectures are presented and compared qualitatively.
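To see why the cross-correlation matters, consider the standard fusion of two correlated track estimates (the Bar-Shalom/Campo form), sketched below. It reduces to the familiar uncorrelated formula only when the cross-covariance is zero. This is a generic textbook formula offered as background, not the specific architecture developed in the paper.

```python
# Track-to-track fusion of two estimates with cross-covariance P_ij.
import numpy as np

def fuse_tracks(x_i, P_i, x_j, P_j, P_ij=None):
    """Returns the fused state and covariance for two track estimates with
    covariances P_i, P_j and (optional) cross-covariance P_ij."""
    n = x_i.shape[0]
    P_ij = np.zeros((n, n)) if P_ij is None else P_ij
    S = P_i + P_j - P_ij - P_ij.T                     # innovation-like combined covariance
    K = np.linalg.solve(S.T, (P_i - P_ij).T).T        # (P_i - P_ij) @ inv(S)
    x_f = x_i + K @ (x_j - x_i)
    P_f = P_i - K @ (P_i - P_ij).T
    return x_f, P_f
```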
Sensor data fusion of radar, ESM, IFF, and data LINK of the Canadian Patrol Frigate and the data alignment issues
Jean Couture, Edouard Boily, Marc-Alain Simard
The research and development group at Loral Canada is now at the second phase of the development of a data fusion demonstration model (DFDM) for a naval anti-air warfare platform, to be used as a workbench tool to perform exploratory research. This project has emphatically addressed how the concepts related to fusion could be implemented within the Canadian Patrol Frigate (CPF) software environment. The project has been designed to read data passively on the CPF bus without any modification to the CPF software. This has brought to light important time alignment issues, since the CPF sensors and the CPF command and control system were not originally designed to support a track management function which fuses information. The fusion of data from non-organic sensors with the tactical Link-11 data has produced stimulating spatial alignment problems, which have been overcome by the use of a geodetic referencing coordinate system. Some benchmark scenarios have been selected to quantitatively demonstrate the capabilities of this fusion implementation. This paper describes the implementation design of DFDM (version 2) and summarizes the results obtained so far when fusing the simulated data of the scenarios.
Quantitative comparison of sensor fusion architectural approaches in an algorithm-level test bed
Jean Roy, Eloi Bosse, Nicolas Duclos-Hindie
This paper presents the results of a quantitative comparison of two architectural options in developing a multi-sensor data fusion system. One option is the centralized architecture: a single track file is maintained and updated using raw sensor measurements. The second option is the autonomous sensor fusion architecture: each sensor maintains its own track file. The sensor tracks are then transmitted to a central processor responsible for fusing this data to form a master track file. Various performance trade-offs will typically be required in the selection of the best multi-sensor data fusion architecture, since each approach has different benefits and disadvantages. The emphasis of this study is on measuring the quality of the fused tracks. The study was conducted with the CASE_ATTI (concept analysis and simulation environment for automatic target tracking and identification) testbed. This testbed provides the algorithm-level test and replacement capability required to conduct this kind of performance study.
Recursive solution to the sensor registration problem in a multiple-sensor-tracking scenario
Nassib Nabaa, Robert H. Bishop
This paper presents an on-line solution to the aircraft tracking problem with a network of spatially distributed sensor units that are imperfectly registered. We consider errors in the relative positions and alignments of the measurement units. The sensor errors (or uncertainties) are estimated by an extended Kalman filter, along with the track variables. This optimal solution is compared through Monte Carlo simulations to a suboptimal filter that neglects the sensor uncertainties. The measurements are taken by 2D search sensor units or 3D track sensor units. When using track sensor data, the registration errors substantially degrade the accuracy of the aircraft position estimates, and the optimal filter provides position estimates that are more accurate than those of the suboptimal filter. In the search sensor case, the registration errors are small compared to the search sensor measurement noise level and have a lower impact on the tracking performance.
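The state-augmentation idea behind such a filter can be sketched briefly. The snippet below is a simplified 2D illustration under assumed bias terms (a sensor position offset and an azimuth misalignment), not the paper's 3D formulation; the numerical Jacobian shows how an EKF would linearize the augmented measurement model.

```python
# Augmented-state sketch: target kinematics stacked with per-sensor
# registration biases, linearized as an EKF would.
import numpy as np

def measurement_model(aug_state, sensor_pos):
    """aug_state = [x, y, vx, vy, bx, by, b_theta]: target kinematics plus the
    sensor's position bias (bx, by) and azimuth misalignment b_theta."""
    x, y, _, _, bx, by, b_theta = aug_state
    dx = x - (sensor_pos[0] + bx)
    dy = y - (sensor_pos[1] + by)
    rng = np.hypot(dx, dy)
    az = np.arctan2(dy, dx) + b_theta
    return np.array([rng, az])

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps
    return J

x0 = np.array([1000.0, 2000.0, 50.0, -20.0, 0.0, 0.0, 0.0])
H = numerical_jacobian(lambda s: measurement_model(s, sensor_pos=(0.0, 0.0)), x0)
print(H.shape)   # measurement matrix for the augmented state
```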
Integration of radar measurement attributes in the multiple hypothesis tracker: results for track initiation
Emmanuel Cassassolles, Ludovic Martinet, Herve Sedano, et al.
The foremost difficulties in multiple target tracking are the problems of track initiation and report-to-track association when there are missing reports and a proliferation of false reports generated by clutter. Many solutions have been proposed over the years to solve these problems using the location information of the received measurements. This work presents an extension of one of those solutions in order to utilize newly available measurement features. The tracking method we use is the multiple hypothesis tracker (MHT), whose efficiency previous works have demonstrated, especially for track initiation. The intrinsic properties and formulation of the MHT make it easy to take additional report information into account. The measurement attributes we propose to exploit are (1) Doppler velocity, (2) likelihood, and (3) local false report density. In order to evaluate the contribution of this information, the results obtained for solving track initiation problems on simulated and real radar data are compared to those given by a basic version of the MHT.
Data Processing
Optimal measurement scheduling for track accuracy control for cued target acquisition
Richard C. Chen, W. Dale Blair
The problem of optimal measurement scheduling is considered for continuous-time linear systems with discrete measurements. More specifically, the problem of achieving a prescribed estimation accuracy at some given future time using a minimum amount of measurement resources is studied. The measurement scheduling problem posed requires the minimization of a linear function subject to nonlinear inequality constraints. This constrained minimization problem (i.e., nonlinear programming problem) can be solved numerically, and this is illustrated with simple examples. The application of optimal measurement scheduling to the problem of remote cueing of an interceptor missile for target acquisition is considered. For the remote cueing problem, the Doppler shift between the missile and target, a nonlinear function of the target and missile states, must be estimated and provided to the missile with a specified accuracy. A method for estimating the Doppler shift as well as a method for determining the accuracy of this estimate are presented. Numerical solutions for examples of measurement scheduling problems are also presented.
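A toy version of the scheduling problem, posed here for a scalar random-walk model: minimize the total measurement resource (a linear objective) subject to a nonlinear constraint on the final estimation variance. The model and numbers are hypothetical; the paper treats continuous-time linear systems with discrete measurements.

```python
import numpy as np
from scipy.optimize import minimize

q, r = 1.0, 4.0        # process noise per step, measurement noise variance
p0, p_req = 25.0, 2.0  # initial variance and required final variance
n_steps = 10

def final_variance(u):
    """Propagate a scalar variance through n_steps, spending a fraction u[k]
    of the measurement resource (in information form) at each step."""
    p = p0
    for uk in u:
        p_pred = p + q                       # time update
        p = 1.0 / (1.0 / p_pred + uk / r)    # measurement update scaled by uk
    return p

res = minimize(
    fun=lambda u: float(np.sum(u)),                    # total resource used (linear)
    x0=np.full(n_steps, 0.5),
    bounds=[(0.0, 1.0)] * n_steps,
    constraints=[{"type": "ineq",
                  "fun": lambda u: p_req - final_variance(u)}],  # nonlinear accuracy constraint
    method="SLSQP",
)
schedule = res.x   # relaxed measurement schedule meeting the accuracy requirement
```

In the cueing application, the accuracy requirement would be placed on the estimated Doppler shift handed over to the missile rather than on a position variance.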
Application of multiple-hypothesis tracking to agile beam radar tracking
Robert F. Popoli, Samuel S. Blackman, M. T. Busch
This paper describes methods that have been developed for using multiple hypothesis tracking (MHT) with an agile beam radar in the presence of range gate pull-off (RGPO) electronic countermeasures (ECM). The paper shows how the agile beam radar allocation logic can be extended to include uncertainty in target position due to data association uncertainty. It also shows how the MHT track score can be modified to reflect target offset from the commanded radar antenna position and how measured SNR is included in the track score. Results from the second Benchmark tracking study are presented. These results show MHT-based allocation to be highly efficient. The results also show that the system satisfies stringent track maintenance requirements in the presence of RGPO and coincident target maneuvers.
Retrodiction for Bayesian multiple-hypothesis/multiple-target tracking in densely cluttered environment
Sensor data processing in a dense target/dense clutter environment is inevitably confronted with data association conflicts, which correspond to the multiple hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed-interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. Within a Bayesian framework, the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. Using a simulated example with two closely spaced targets, relatively low detection probabilities, and rather high false return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.
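Retrodiction builds on fixed-interval smoothing; a minimal Rauch-Tung-Striebel backward pass over the stored filter outputs of a single hypothesis is sketched below. In an MHT setting each retained hypothesis would be smoothed over its own association history; that bookkeeping, and the hypothesis weighting, are omitted here.

```python
import numpy as np

def rts_smoother(xs_f, Ps_f, xs_p, Ps_p, F):
    """Rauch-Tung-Striebel fixed-interval smoother for one hypothesis.
    xs_f, Ps_f: filtered states and covariances at times 0..n-1.
    xs_p, Ps_p: one-step predictions (xs_p[k], Ps_p[k] predict time k from k-1).
    F: constant state transition matrix."""
    n = len(xs_f)
    xs_s, Ps_s = list(xs_f), list(Ps_f)
    for k in range(n - 2, -1, -1):
        G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])          # smoothing gain
        xs_s[k] = xs_f[k] + G @ (xs_s[k + 1] - xs_p[k + 1])
        Ps_s[k] = Ps_f[k] + G @ (Ps_s[k + 1] - Ps_p[k + 1]) @ G.T
    return xs_s, Ps_s
```

The accepted time delay of the abstract corresponds to how far back the smoothed (retrodicted) estimates are reported rather than the latest filtered ones.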
Application of multiple-hypothesis tracking to shipboard IRST tracking
Samuel S. Blackman, Robert J. Dempster, G. K. Tucker, et al.
This paper describes the use of multiple hypothesis tracking (MHT) for the IRST Shipboard Self Defense application. This application features a highly variable clutter background, such as that produced by sun glint, and maneuvering targets. The paper presents a technique for including features, such as measured SNR, in the track score. Performance results are presented for the case of a simulated missile target inserted into an ocean background. The paper presents computer timing and sizing results to show that recently developed algorithm efficiencies and computational capabilities make real-time MHT tracker operation feasible in the near future. A comparative study of track maintenance shows the significant potential performance improvement of MHT when compared with other data association methods.
Comparison of IMMPDA and IMM-assignment algorithms on real air traffic surveillance data
Thiagalingam Kirubarajan, Murali Yeddanapudi, Yaakov Bar-Shalom, et al.
In this paper a comparative performance analysis of the interacting multiple model (IMM) estimation algorithm combined with the probabilistic data association filter (PDAF) and the IMM-assignment algorithm for multisensor, multitarget tracking with real air traffic surveillance data is presented. The measurement database from two FAA sensors contains detections of about 75 targets in a wide variety of motion modes. Procedures for track formation/maintenance and data association with IMMPDAF are given. Global performance measures in terms of likelihood ratio and prediction errors are presented. Also, a benchmark track with maneuvers is used to compare the performances of these algorithms for individual tracks.
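For reference, the interaction (mixing) step common to both IMM variants can be sketched as below; this is the standard IMM mixing step only, not the specific IMMPDAF or IMM-assignment implementations evaluated in the paper.

```python
import numpy as np

def imm_mix(mu, Pi, xs, Ps):
    """IMM interaction (mixing) step.
    mu: (r,) model probabilities at the previous scan.
    Pi: (r, r) Markov transition matrix, Pi[i, j] = P(model j now | model i before).
    xs, Ps: per-model state estimates and covariances (lists of arrays)."""
    r = len(mu)
    c = Pi.T @ mu                                   # predicted model probabilities
    mu_ij = (Pi * mu[:, None]) / c[None, :]         # mixing probabilities mu_{i|j}
    xs0, Ps0 = [], []
    for j in range(r):
        x0 = sum(mu_ij[i, j] * xs[i] for i in range(r))
        P0 = sum(mu_ij[i, j] * (Ps[i] + np.outer(xs[i] - x0, xs[i] - x0))
                 for i in range(r))
        xs0.append(x0)
        Ps0.append(P0)
    return xs0, Ps0, c
```

Each mode-matched filter is then re-initialized with its mixed state and covariance before processing the next scan, with PDAF or 2D assignment handling the measurement-origin uncertainty.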
Tracking multiple unresolved Rayleigh targets with a monopulse radar
W. Dale Blair, Maite Brandt-Pearce
While the problem of tracking multiple targets has been studied extensively in recent years, the issue of finite sensor resolution has been ignored in almost all of the studies. In a typical study, the targets are detected in the presence of false alarms and clutter with a given probability of detection, and the target measurements are modeled as the true values plus independent Gaussian errors. However, when two targets are closely spaced with regard to the resolution of the sensor, the measurements from the two targets are often merged or the errors in the measurements are correlated. This issue of tracking unresolved or non-isolated targets is particularly important in monopulse radar systems, because the target direction-of-arrival measurements can be severely corrupted when the measurements of two targets are not fully resolved in angle, range, or radial velocity. The tracking of unresolved targets with monopulse radar is addressed with respect to the detection of target multiplicity, the estimation and tracking of the target amplitude, and the measurement update process.
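As a toy illustration (not the paper's measurement model) of why unresolved targets corrupt monopulse angle measurements, the snippet below approximates the indicated direction of arrival as the instantaneous power-weighted mix of two Rayleigh-fluctuating targets' angles and looks at its mean and spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def merged_monopulse_angle(theta1, theta2, snr1, snr2, n_pulses=1000):
    """Toy approximation of the DOA indicated when two Rayleigh-fluctuating targets
    are unresolved in one beam/range cell: the indicated angle is taken as the
    instantaneous power-weighted mix of the two true angles."""
    a1 = (rng.normal(size=n_pulses) + 1j * rng.normal(size=n_pulses)) * np.sqrt(snr1 / 2)
    a2 = (rng.normal(size=n_pulses) + 1j * rng.normal(size=n_pulses)) * np.sqrt(snr2 / 2)
    w1, w2 = np.abs(a1) ** 2, np.abs(a2) ** 2
    indicated = (w1 * theta1 + w2 * theta2) / (w1 + w2)
    return indicated.mean(), indicated.std()

mean_dir, spread = merged_monopulse_angle(0.0, 0.3, snr1=20.0, snr2=10.0)
# The indicated angle wanders between the two true angles from pulse to pulse,
# so the reported measurement is neither target's true direction of arrival.
```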
New implementation of the SME filter approach to multiple-target tracking
R. Louis Bellaire, Edward W. Kamen
The symmetric measurement equation (SME) approach to multiple target tracking is markedly different from multiple target trackers based on probabilistic data association. The key idea in the SME approach is to transform the original measurement data in such a way that the pairing of measurements and tracks becomes embedded in a nonlinear state estimation problem. In previous articles, a single extended Kalman filter (EKF) was used to derive estimates of each target's position and velocity. Simulation trials have shown that the EKF implementation sometimes produces large estimation errors in the neighborhood of crossing targets. One of the causes of this unsatisfactory performance is the numerical instability of the EKF. Due to the recent development of a new iterated filter (NIF) based on the Levenberg-Marquardt algorithm, a better implementation of the SME approach is now possible. Improvements resulting from this recent work are demonstrated through Monte Carlo simulations comparing the performance of the EKF implementation of the SME, the NIF implementation of the SME, the joint probabilistic data association filter, and the associated filter (a benchmark).
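The SME transformation itself is easy to illustrate for two targets in one dimension: replace the unordered measurement pair by its elementary symmetric functions, which are invariant to the pairing, and hand the resulting nonlinear measurement to an EKF or iterated filter. The sketch below shows only that transformation and its Jacobian; it is not the full filter of the paper.

```python
import numpy as np

def sme_transform(y):
    """Map unordered 1-D measurements [y1, y2] to symmetric functions (sum, product),
    which do not depend on how measurements are paired with tracks."""
    return np.array([y[0] + y[1], y[0] * y[1]])

def sme_jacobian(p):
    """Jacobian of the symmetric measurement with respect to the two target
    positions p = [p1, p2], as needed by an extended (or iterated) Kalman filter."""
    return np.array([[1.0, 1.0],
                     [p[1], p[0]]])

# The same pair of detections in either order gives the same SME measurement.
z_a = sme_transform(np.array([4.0, 9.0]))
z_b = sme_transform(np.array([9.0, 4.0]))
assert np.allclose(z_a, z_b)
```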
Weak-Target Detection
Estimating the degrees of freedom for a common CFAR detector
Paul Frank Singer, Doreen M. Sasaki
The heavy-tailed false alarm density function of a common CFAR detector was previously derived. That density function was shown to be well approximated by a t-distribution with a reduced number of degrees of freedom. The number of degrees of freedom used in the approximate probability density function must be estimated from the data at the output of the clutter filter. Three estimators for the number of degrees of freedom have been developed, and their relative advantages and disadvantages are discussed. The most practical one is presented and its performance is analyzed. Experimental results on synthetic and real data are provided. The synthetic data are used to empirically test the bias of the estimator and to qualitatively evaluate its efficiency. The effectiveness of this estimator has been quantitatively demonstrated on ocean scenes with glint. The interest in knowing the false alarm density goes beyond setting a CFAR threshold. This estimator is incorporated into an IRST signal processing and tracking algorithm suite containing a constant threshold. The density function, together with the estimated number of degrees of freedom, is used to adaptively estimate the probability of false alarm. Regions with a small number of degrees of freedom have a higher false alarm probability, and consequently the tracker is more conservative in initiating tracks there. The tracker uses the adaptive PFA to improve the logic which initiates, confirms, and deletes tracks.
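The abstract does not specify the three estimators; one simple possibility is a moment-based estimate using the excess kurtosis of the clutter-filter output (for a t-distribution with nu > 4, excess kurtosis is 6/(nu - 4)), after which an adaptive PFA follows from the t tail probability. The sketch below illustrates that assumption only and is not the paper's estimator.

```python
import numpy as np
from scipy import stats

def estimate_dof(residuals):
    """Moment-based estimate of the degrees of freedom of a Student-t fit:
    for nu > 4 the excess kurtosis of a t-distribution is 6 / (nu - 4)."""
    k = stats.kurtosis(residuals, fisher=True)   # excess kurtosis
    return 4.0 + 6.0 / k if k > 0 else np.inf    # heavier tails => smaller nu

def adaptive_pfa(threshold, nu, scale=1.0):
    """Probability that a t-distributed clutter-filter output exceeds a fixed threshold."""
    return stats.t.sf(threshold / scale, df=nu)

# Heavy-tailed residuals yield a small nu and hence a larger estimated PFA, which a
# tracker can use to be more conservative about initiating tracks in that region.
samples = stats.t.rvs(df=6, size=5000, random_state=1)
nu_hat = estimate_dof(samples)
pfa = adaptive_pfa(threshold=4.0, nu=nu_hat)
```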
Signal and Track Processing
Probabilistic strongest neighbor filter for tracking in clutter
X. Rong Li, Xiaorong Zhi
A simple and commonly used method for dealing with measurement origin uncertainty when tracking in clutter is the so-called Strongest Neighbor Filter (SNF). It uses the measurement with the strongest intensity (amplitude) in the neighborhood of the predicted target measurement location, known as the 'strongest neighbor' measurement, as if it were the true one. Its performance is significantly better than that of the Nearest Neighbor Filter (NNF) but usually worse than that of the Probabilistic Data Association Filter (PDAF), while its computational complexity is the lowest of the three filters. The SNF is, however, not consistent, in the sense that its actual tracking errors are well above its online calculated error standard deviations. Based on theoretical results obtained recently for the SNF in clutter tracking, a probabilistic strongest neighbor filter is presented here. This new filter is consistent and is substantially superior to the PDAF in both performance and computation. The proposed filter is obtained by modifying the standard SNF to account for the probability that the strongest neighbor is not target-originated, which is accomplished by using probabilistic weights.
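A sketch of the idea, assuming a linear measurement model: pick the strongest-amplitude report inside the validation gate, then weight the update by the probability that it is target-originated. Here that probability is a placeholder parameter p_target (the paper derives it from the theoretical SNF results), and the covariance form below is only a PDAF-style stand-in, not the authors' equations.

```python
import numpy as np

def psnf_update(x_pred, P_pred, H, R, reports, amplitudes, gate=9.21, p_target=0.8):
    """Probabilistic strongest-neighbor update (sketch).
    reports: (n, m) array of gatable measurements; amplitudes: (n,) array of intensities.
    gate=9.21 is the 99% chi-square gate for a 2-D measurement."""
    S = H @ P_pred @ H.T + R
    Sinv = np.linalg.inv(S)
    nu = reports - (H @ x_pred)                        # innovation of each report
    d2 = np.einsum("ij,jk,ik->i", nu, Sinv, nu)        # Mahalanobis distances
    valid = np.where(d2 <= gate)[0]
    if valid.size == 0:
        return x_pred, P_pred                          # no gated report: keep the prediction
    k = valid[np.argmax(amplitudes[valid])]            # strongest neighbor in the gate
    K = P_pred @ H.T @ Sinv
    x = x_pred + p_target * (K @ nu[k])                # update weighted by origin probability
    P = (P_pred - p_target * K @ S @ K.T
         + p_target * (1 - p_target) * np.outer(K @ nu[k], K @ nu[k]))  # inflation term
    return x, P
```

The inflation term keeps the calculated covariance honest about the chance that the strongest neighbor was clutter, which is what restores consistency relative to the standard SNF.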