Proceedings Volume 3370

Algorithms for Synthetic Aperture Radar Imagery V

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 15 September 1998
Contents: 13 Sessions, 61 Papers, 0 Presentations
Conference: Aerospace/Defense Sensing and Controls 1998
Volume Number: 3370

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Image Formation
  • Image Registration and Fusion
  • Image Quality Assessment
  • Feature Extraction
  • FOPEN Detection
  • SAR Detection
  • Signature Analysis
  • Template-based ATR
  • Feature-based ATR
  • Neural Network ATR
  • ATR Performance Evaluation: End-to-End
  • ATR Performance Evaluation: HRR ATR
  • Performance Prediction
Image Formation
Imaging of static and dynamic objects by pulsed superscanning locator (tomograph) with resolution higher than by the Rayleigh criterion
Vera Moiseevna Ginzburg
Results of theoretical and experimental research to develop a locator with antennas performing beam scanning during both the emission and reception of pulses are presented. The reflected signals are received within discrete `visibility' layers formed due to beam scanning during reception. The locator is shown to have a number of advantages over the conventional locator. The known distribution of visibility layers in space allows one to create adaptive systems ensuring reception of the required information with minimum energy expense, to attain `superresolution' of objects (at distances smaller than required by the Rayleigh criterion), and to improve the noise immunity of the locator. It is demonstrated that a `quasi-holographic' data-processing system can be developed, similar to the synthesized antenna aperture system proposed in the 1950s by Emmett Leith. An ultrasonic version of the `superscanning' locator is described. The experimental results fully confirm the theoretical predictions.
Range stacking: an interpolation-free SAR reconstruction algorithm
Mehrdad Soumekh
A method for digital image formation in Synthetic Aperture Radar (SAR) systems is presented. The proposed approach is based on the wavefront reconstruction theory for SAR imaging systems. However, this is achieved without image formation in the spatial frequency domain of the target function which requires interpolation. The proposed method forms the target function at individual range points within the radar range swath; this is referred to as range stacking. The range stacking reconstruction method is applicable in stripmap and spotlight (broadside and squint) SAR systems. Results using a wide-beamwidth FOliage PENetrating (FOPEN) SAR database are provided, and the effect of beamwidth filtering on the signature of moving targets in the imaging scene is shown.
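As a rough, hedged sketch of the range-stacking idea (broadside stripmap case, with the phase history assumed already Fourier transformed along the aperture into `S_f_ku` of shape (num_freqs, num_ku)), the reconstruction can be written without any spatial-frequency interpolation; the grids, geometry, and variable names below are assumptions, not the paper's implementation:

```python
import numpy as np

def range_stack_image(S_f_ku, freqs, ku, ranges, c=3e8):
    """Hedged sketch of range-stacking reconstruction (broadside stripmap case).
    S_f_ku: phase history already FFT'd along the aperture, shape (num_freqs, num_ku).
    For each range y, coherently sum over fast-time frequency with the matched
    phase exp(j*ky*y), then inverse-FFT over the azimuth wavenumber ku; no Stolt
    interpolation in the spatial-frequency domain is required."""
    k = 2.0 * np.pi * freqs / c                                   # fast-time wavenumbers
    ky = np.sqrt(np.maximum((2.0 * k)[:, None] ** 2 - ku[None, :] ** 2, 0.0))
    image = np.zeros((len(ranges), len(ku)), dtype=complex)
    for i, y in enumerate(ranges):
        stacked = (S_f_ku * np.exp(1j * ky * y)).sum(axis=0)      # sum over frequency
        image[i] = np.fft.ifft(stacked)                           # ku -> cross-range
    return image
```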
IFSAR phase unwrapping in the presence of Dirichlet boundary conditions
George W. Rogers, Arthur W. Mansfield, Houra Rais, et al.
Phase unwrapping is one of the key computational elements in digital elevation model generation from interferometric SAR. In this paper we present a reformulation of the weighted least squares phase unwrapping approach that incorporates Dirichlet boundary conditions. The application of this formulation to the incorporation of control points into the solution, as well as to unwrapping the interferogram in stages, is discussed. The ability of the weighted least squares approach to fully unwrap an interferogram can be very dependent on the weight matrix used. This has led us to develop an adaptive approach to updating the weight matrix to be used in conjunction with our weighted least squares approach. Examples along with preliminary results based on ERS data are presented.
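For orientation, the unweighted least-squares baseline that the paper's weighted, Dirichlet-constrained formulation extends can be sketched with the classic DCT-based Poisson solver; the sketch below is only that baseline and does not reproduce the weighting, the control-point boundary conditions, or the adaptive weight updates.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def unwrap_ls(psi):
    """Unweighted least-squares phase unwrapping of a wrapped interferogram psi
    (2D array, radians) via the DCT-based discrete Poisson solver."""
    M, N = psi.shape
    dx = np.zeros((M, N)); dy = np.zeros((M, N))
    dx[:, :-1] = wrap(np.diff(psi, axis=1))              # wrapped phase gradients
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    rho = np.zeros((M, N))                               # divergence of the gradient field
    rho[:, 0] += dx[:, 0]; rho[:, 1:] += dx[:, 1:] - dx[:, :-1]
    rho[0, :] += dy[0, :]; rho[1:, :] += dy[1:, :] - dy[:-1, :]
    rho_hat = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                                    # DC term is arbitrary; avoid 0/0
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, norm='ortho')
```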
Application of leakage energy minimization to wideband SAR data
Brian Hendee Smith
The frequency domain support of wideband synthetic aperture radar data is nonrectangular and exhibits sidelobe artifacts in noncardinal directions. Leakage Energy Minimization (LEM) is an analytic image domain apodization scheme that has been successfully applied to SAR imagery. LEM uses a spatially varying finite impulse response filter, the coefficients of which are selected from a maximum likelihood criterion. LEM can be applied to wideband SAR data, which has an irregularly shaped frequency domain support. The algorithm successfully reduces the sidelobe artifacts without loss of resolution.
Multichannel imaging for wideband wide-angle polarimetric synthetic aperture radar
In this paper we introduce multi-channel techniques to compensate for effects of antenna shading and crosstalk in wideband, wide-angle full polarization radar imaging. We model the system as a 2D integral operator that includes the transmit pulse function, receive and transmit antenna transfer functions, and response from scattering objects. Existing imaging algorithms provide an approximate inversion of this integral operator, without compensation for the effect of antenna transfer functions. Thus, standard processing results in image quality diminished by the inherent variation of the antenna response--in magnitude, phase and polarization--across a large band of frequencies and wide range of aspect angles. We propose three inversion techniques for this integral operator, to improve polarization purity and to achieve localized point spread functions. The first technique uses a local approximation to the system model and proposes a conceptually simple method for the inversion. The other two techniques propose inversion methods for the exact system model in different transform domains. The result is imagery with improved polarization purity and a more localized point spread function.
New autofocus technique for wideband wide-angle synthetic aperture radar
David Kirk, Paul Maloney
A small difference between the true range to the scatterer and that estimated from the radar time delay results in defocusing of Synthetic Aperture Radar (SAR) imagery in both cross-range and range directions. The amount of defocusing is a function of the integration angle and range resolution. SARs with large integration angles and high range resolution are particularly sensitive to the range error. In the UltraWide Band P-3 SAR, for example, there is an error in the knowledge of the absolute range on the order of 100 m, which results in very poorly focused images. The current processing procedure is to form multiple SAR images, each using a different range estimate, where the range estimates span the expected variation in the range error. The SAR image with the best image quality is used. This is clearly a very time consuming approach and not suitable for a real-time system. This paper describes a new technique called Autofocus Estimation of Range Error which employs a conventional autofocus algorithm to estimate the phase error in a poorly focused image and then converts this into an estimate of the absolute range. The image is then reprocessed with the improved absolute range estimate. This paper discusses how this technique is implemented and demonstrates the improvement in image quality that can be achieved with this technique as opposed to using conventional autofocus techniques.
Application of fast back-projection techniques for some inverse problems of synthetic aperture radar
Stefan Nilsson, Lars Erik Andersson
In certain radar imaging applications one encounters the problem of reconstructing a reflectivity function from information about its averages over circles with center on a straight line. A robust inversion method is a filtered backprojection method, similar to the one used in medical tomography. We will present a fast algorithm for this backprojection operator. Numerical examples are given.
SAR imaging and detection of moving targets
Robert C. DiPietro, Richard P. Perry, Ronald L. Fante
This paper presents a method of forming synthetic aperture radar (SAR) images of moving targets without using any specific knowledge of the target motion. The new method uses a unique processing kernel that involves a 1D interpolation of the deramped phase history which we call Keystone formatting. This preprocessing simultaneously eliminates the effects of linear range migration for all moving targets regardless of their unknown velocity. Step two of the moving target imaging technique involves a 2D focusing of the movers to remove residual quadratic phase errors. The third and last step removes cubic and higher order defocusing terms. This imaging technique is demonstrated using SAR data collected as part of DARPA's Moving Target Exploitation program.
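A bare-bones version of the Keystone interpolation step is sketched below under assumed variable names (phase history organized as fast-time frequency by slow-time pulse); it omits the paper's subsequent quadratic and higher-order focusing steps:

```python
import numpy as np

def keystone_format(phase_history, freqs, fc, slow_time):
    """Keystone reformatting sketch. phase_history: complex array of shape
    (num_freqs, num_pulses); freqs: fast-time frequencies relative to the
    carrier fc; slow_time: uniform pulse times. For each frequency f the data
    are resampled at t = fc/(fc+f)*tau, which removes linear range migration
    for all movers regardless of their (unknown) velocity."""
    out = np.empty_like(phase_history)
    for m, f in enumerate(freqs):
        query = slow_time * fc / (fc + f)        # where the keystone grid samples the data
        out[m] = (np.interp(query, slow_time, phase_history[m].real) +
                  1j * np.interp(query, slow_time, phase_history[m].imag))
    return out
```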
Refocus of constant-velocity moving targets in synthetic aperture radar imagery
Charles V. Jakowatz Jr., Daniel E. Wahl, Paul H. Eichel
The detection and refocus of moving targets in SAR imagery is of interest in a number of applications. In this paper we address the problem of refocussing a blurred signature that has by some means been identified as a moving target. We assume that the target vehicle velocity is constant, i.e., the motion is in a straight line with constant speed. The refocus is accomplished by application of a 2D phase function to the phase history data obtained via Fourier transformation of an image chip that contains the blurred moving target data. By considering separately the phase effects of the range and cross-range components of the target velocity vector, we show how the appropriate phase correction term can be derived as a two-parameter function. We then show a procedure for estimating the two parameters, so that the blurred signature can be automatically refocused. The algorithm utilizes optimization of an image domain contrast metric. We present results of refocusing moving targets in real SAR imagery by this method.
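A toy version of the parameter search might look like the following, with an assumed quadratic/cross-term phase parameterization and a simple standard-deviation-over-mean contrast metric; the paper derives its exact two-parameter phase function from the range and cross-range velocity components, which this sketch does not reproduce:

```python
import numpy as np

def refocus_by_contrast(chip, alphas, betas):
    """Grid search over a two-parameter phase correction applied in the phase
    history (2D FFT) domain of a complex image chip; the candidate maximizing
    an image-contrast metric is returned. Parameterization and metric are
    illustrative assumptions."""
    M, N = chip.shape
    ph = np.fft.fftshift(np.fft.fft2(chip))                 # chip -> phase-history domain
    kx = np.linspace(-0.5, 0.5, M)[:, None]
    ky = np.linspace(-0.5, 0.5, N)[None, :]
    best_params, best_contrast = None, -np.inf
    for a in alphas:
        for b in betas:
            corr = np.exp(-1j * np.pi * (a * kx ** 2 + b * kx * ky))
            img = np.abs(np.fft.ifft2(np.fft.ifftshift(ph * corr)))
            contrast = img.std() / img.mean()               # simple sharpness measure
            if contrast > best_contrast:
                best_params, best_contrast = (a, b), contrast
    return best_params, best_contrast
```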
Image Registration and Fusion
IFSAR reductions from ERS-1/2 tandem data
Paul L. Poehler, Arthur W. Mansfield, George W. Rogers, et al.
Recent advances in the areas of phase history processing, interferometric synthetic aperture radar processing algorithms, and the use of photogrammetric techniques have made it possible to generate extremely accurate DEMs from Synthetic Aperture Radar images. Recent improvements by the authors in the phase unwrapping and interferogram conditioning steps are described which make it possible to obtain good elevation accuracy from noisy interferograms resulting from temporal decorrelation due to foliage or extreme terrain. Results are shown of data reductions from separate passes of the ERS-1/2 Tandem System over Ft. Irwin, California, and Aschaffenburg, Germany.
Structure-driven SAR image registration
Jeremy S. De Bonet, Alan Chao
We present a fully automatic method for the alignment of SAR images that is capable of precise and robust alignment. A multiresolution SAR image matching metric is first used to automatically determine tie-points, which are then used to perform coarse-to-fine resolution image alignment. A formalism is developed for the automatic determination of tie-point regions that contain sufficiently distinctive structure to provide strong constraints on alignment. The coarse-to-fine procedure for the refinement of the alignment estimate both improves computational efficiency and yields robust and consistent image alignment.
Moving-target detection and automatic target recognition via signal subspace fusion of images
Mehrdad Soumekh
This paper addresses the problem of fusing the information content of two uncalibrated sensors. This problem arises in registering images of a scene when it is viewed via two different sensory systems, or detecting change in a scene when it is viewed at two different time points by a sensory system, or via two different sensory systems or observation channels. We are concerned with sensory systems which have not only a relative shift, scaling and rotational calibration error, but also an unknown point spread function (that is time-varying for a single sensor, or different for two sensors). By modeling one image in terms of an unknown linear combination of the other image, its powers and their spatially-transformed (shift, rotation and scaling) versions, a signal subspace processing method is developed for fusing uncalibrated sensors. The proposed method is shown to be applicable in Moving Target Detection using monopulse Synthetic Aperture Radar with uncalibrated radars, and registration of SAR images of a target obtained via two different radars or at different coordinates by the same radar for Automatic Target Recognition. Results with realistic FOPEN SAR data will be provided.
Site-model-based exploitation of SAR data
William Phillips, Rama Chellappa
This paper presents algorithms designed to exploit multi-pass SAR imagery from the Tactical Endurance SAR sensor used on the Predator MAE-UAV. The multi-pass exploitation begins by using several images from the data set to build a detailed site model. The site modeling step is based on a multiresolution segmentation algorithm for the individual images, followed by height estimation, registration, and merging the individually labeled images into a common ground plane site model. After constructing the site model, target detection is performed on additional images of the site. The site model is then used to eliminate false alarms and detect changes in vehicle locations. After explaining the operation of our current exploitation system, we briefly address the improvements offered by segmenting SAR data formed using modern spectral estimation.
Image Quality Assessment
Determining a confidence factor for automatic target recognition based on image sequence quality
For the Automatic Target Recognition (ATR) algorithm, the quality of the input image sequence can be a major determining factor as to the ATR algorithm's ability to recognize an object. Based on quality, an image can be easy to recognize, barely recognizable or even mangled beyond recognition. If a determination of the image quality can be made prior to entering the ATR algorithm, then a confidence factor can be applied to the probability of recognition. This confidence factor can be used to rate sensors; to improve quality through selectively preprocessing image sequences prior to applying ATR; or to limit the problem space by determining which image sequences need not be processed by the ATR algorithm. It could even determine when human intervention is needed. To get a flavor for the scope of the image quality problem, this paper reviews analog and digital forms of image degradation. It looks at traditional quality metric approaches such as peak signal-to-noise ratio. It examines a newer metric based on human vision data, a metric introduced by the Institute for Telecommunication Sciences. These objective quality metrics can be used as confidence factors primarily in ATR systems that use image sequences degraded due to transmission systems. However, to determine the quality metric, a transmission system needs the original input image sequence and the degraded output image sequence. This paper suggests a more general approach to determining quality using analysis of spatial and temporal vectors where the original input sequence is not explicitly given. This novel approach would be useful where there is no transmission system but where the ATR system is part of the sensor, on-board a mobile platform. The results of this work are demonstrated on a few standard image sequences.
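For concreteness, the kind of reference-based metric the paper starts from (and then argues against needing, since the original sequence is not always available) is easy to state; a peak signal-to-noise ratio between an original and a degraded frame is simply:

```python
import numpy as np

def psnr(original, degraded, peak=255.0):
    """Peak signal-to-noise ratio between an original and a degraded frame, in dB.
    Assumes 8-bit imagery unless another peak value is given."""
    mse = np.mean((original.astype(float) - degraded.astype(float)) ** 2)
    if mse == 0:
        return np.inf                      # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```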
Nonparametric data modeling in SAR image quality assessment
Johnathan D. Michel, Qin Cai, Keith C. Drake
With the growing size of target databases and the large number of example images required for target recognition system development, a key requirement in managing ATR system development is the automatic and accurate assessment of target imagery. We define assessment in terms of image similarity of the target subimage to a truth target image set. The goal in this work is to create a system that automates the assessment of images and improves the accuracy of the image database assessment process. Our approach to the database assessment problem combines an image feature based approach with a statistical data modeling approach. This two-fold process provides a generic framework for approaching the problem regardless of imaging modality. The image assessment process must handle a range of both high-level and low-level tasks, e.g., identifying Regions of Interest, segmenting the target, and computing feature-based image metrics and statistical distances between images. This work describes the design and work in progress on the implementation of such a system.
Feature Extraction
Geometric invariance for synthetic aperture radar (SAR) sensors
Synthetic Aperture Radar (SAR) sensors have many advantages over electro-optic sensors (EO) for target recognition applications, such as range-independent resolution and superior poor weather performance. However, the relative unavailability of SAR data to the basic research community has retarded analysis of the fundamental invariant properties of SAR sensors relative to the extensive invariant literature for EO, and in particular photographic sensors. This paper develops the basic geometric imaging transformation associated with SAR from first principles, and then gives an existence proof for several geometric scatter configurations which give rise to SAR image invariants.
Target/shadow segmentation and aspect estimation in synthetic aperture radar imagery
This paper discusses algorithms that are useful for the classification of targets in SAR imagery. Two algorithms are presented for segmenting a target region from background clutter: one based on constant false alarm rate detection, and the other a histogram-based technique. The histogram-based technique is extended to extract shadow regions associated with a target. A method is then presented for estimating the orientation of segmented targets. These algorithms are applied to SAR imagery from the Lincoln Lab ADTS and MSTAR datasets. The aspect estimate is shown to be superior to estimates obtained from the direction of the axis of least inertia.
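A minimal two-parameter CFAR detector of the kind the first segmentation algorithm builds on is sketched below; the window sizes and threshold are illustrative assumptions, and the histogram-based segmentation and aspect-estimation steps are not shown:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_param_cfar(img, win=41, guard=21, k=3.0):
    """Two-parameter CFAR sketch: estimate the local clutter mean/std in a
    boxcar annulus (outer window minus guard region) and flag pixels more than
    k standard deviations above the local mean."""
    n_out, n_in = win ** 2, guard ** 2
    sum_out = uniform_filter(img, win) * n_out          # local sums via mean filters
    sum_in = uniform_filter(img, guard) * n_in
    sumsq_out = uniform_filter(img ** 2, win) * n_out
    sumsq_in = uniform_filter(img ** 2, guard) * n_in
    n = n_out - n_in
    mu = (sum_out - sum_in) / n
    var = (sumsq_out - sumsq_in) / n - mu ** 2
    sigma = np.sqrt(np.maximum(var, 1e-12))
    return (img - mu) / sigma > k                       # boolean detection map
```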
Method using multiple models to superresolve SAR imagery
Frank M. Candocia, Jose C. Principe
This paper introduces a methodology for the superresolution of synthetic aperture radar (SAR) images using multiple target and clutter models. The system has two major components: a mechanism that selects the appropriate model for superresolution and a bank of model estimators to accomplish the superresolution. The typical point scatterer model is incorporated into this technique as well as a model for clutter. Other models can be naturally incorporated. This methodology is flexible in that it can utilize many of the well-known modern spectral estimation techniques. The ability to more accurately model targets using models other than the point scatterer as well as the importance of including models for clutter into a superresolution paradigm is addressed. These issues are shown to be relevant to the automatic target recognition/detection problem. We present a comparison of our technique with other SAR imaging methods and discuss the relative benefits afforded by such an approach.
Radar image analysis utilizing junctive image metamorphosis
Peter G. Krueger, Sally B. Gouge, Jim O. Gouge
A feasibility study was initiated to investigate the ability of algorithms developed for medical sonogram image analysis to be trained for extraction of cartographic information from synthetic aperture radar imagery. BioComputer Research Inc. has applied proprietary `junctive image metamorphosis' algorithms to cancer cell recognition and identification in ultrasound prostate images. These algorithms have been shown to support automatic radar image feature detection and identification. Training set images were used to develop determinants for representative point, line and area features, which were used on test images to identify and localize the features of interest. The software is computationally conservative, operating on a PC platform in real time. The algorithms are robust and can be trained for feature recognition on any digital imagery, not just imagery formed from reflected energy such as sonograms and radar images. Applications include land mass characterization, feature identification, target recognition, and change detection.
Pose estimation in SAR using an information theoretic criterion
Jose C. Principe, Dongxin Xu, John W. Fisher III
In this paper we formulate pose estimation statistically and show that pose can be estimated from a low dimensional feature space obtained by maximizing the mutual information between the aspect angle and the output of a nonlinear mapper. We use the Havrda-Charvat definition of entropy to implement a nonparametric estimator based on the Parzen window method. Results on the MSTAR data set are presented and show the performance of the methodology.
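For reference, the Parzen-window estimate of the order-2 Havrda-Charvat entropy underlying this approach can be written compactly; the one-dimensional sketch below shows only the entropy estimator, not the mutual-information maximization over the nonlinear mapper:

```python
import numpy as np

def quadratic_entropy(samples, sigma):
    """Parzen estimate of the order-2 Havrda-Charvat entropy H2 = 1 - int p(x)^2 dx.
    The integral of p^2 is estimated by the pairwise 'information potential'
    (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2), with G a Gaussian kernel."""
    x = np.asarray(samples, dtype=float)
    diff = x[:, None] - x[None, :]
    G = np.exp(-diff ** 2 / (4.0 * sigma ** 2)) / np.sqrt(4.0 * np.pi * sigma ** 2)
    return 1.0 - G.mean()
```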
Optimal algorithm statistical synthesis of spatial-temporal signal processing in multichannel combined scatterometric systems of remote sensing
Valerii Konstantinovich Volosyuk, Andrey V. Sokolnikov, Valeriy A. Onishchuk
The increasing complexity of surface remote sensing problems addressed with synthetic aperture radar (SAR), which demand higher resolution and accuracy as well as multiparametric measurements, creates the need for multichannel SAR systems that can receive signals in different frequency bands, on different polarizations, from different directions, and so on. In this manuscript, algorithms for such combined processing are examined. These algorithms were synthesized by solving optimization problems. They include the classic operations of aperture synthesis, adaptive signal whitening, and the calculation of the parameters and statistical characteristics of the Earth covers.
Superresolution SAR image formation via parametric spectral estimation methods
Zhaoqiang Bi, Jian Li, Zheng-She Liu
This paper considers super resolution synthetic aperture radar (SAR) image formation via sophisticated parametric spectral estimation algorithms. Parametric spectral estimation methods are devised based on parametric data models and are used to estimate the model parameters. Since SAR images rather than model parameters are often more appreciated in SAR applications, we use the parameter estimates obtained with the parametric methods to simulate data matrices of large dimensions and then use the fast Fourier transform (FFT) methods on them to generate SAR images with super resolution. Experimental examples using the MSTAR and ERIM data illustrate that robust spectral estimation algorithms can generate SAR images of higher resolution than the conventional FFT methods and enhance the dominant target features.
Comparison of various enhanced radar imaging techniques
Inder Jiti Gupta, Avinash Gandhe
Recently, many techniques have been proposed to enhance the quality of radar images obtained using SAR and/or ISAR. These techniques include spatially variant apodization (SVA), adaptive sidelobe reduction (ASR), the Capon method, amplitude and phase estimation of sinusoids (APES) and data extrapolation. SVA is a special case of ASR; whereas the APES algorithm is similar to the Capon method except that it provides a better amplitude estimate. In this paper, the ASR technique, the APES algorithm and data extrapolation are used to generate radar images of two experimental targets and an airborne target. It is shown that although for ideal situations (point targets) the APES algorithm provides the best radar images (reduced sidelobe level and sharp main lobe), its performance degrades quickly for real world targets. The ASR algorithm gives radar images with low sidelobes but at the cost of some loss of information about the target. Also, there is not much improvement in radar image resolution. Data extrapolation, on the other hand, improves image resolution. In this case one can reduce the sidelobes by using non-uniform weights. Any loss in the radar image resolution due to non-uniform weights can be compensated by further extrapolating the scattered field data.
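Of the techniques compared, SVA is the simplest to state. The sketch below applies the classic one-dimensional SVA rule along the last axis of a Nyquist-sampled complex image, separately on the real and imaginary parts; it is meant only to make the apodization idea concrete, and ASR, APES, and the extrapolation method are not reproduced:

```python
import numpy as np

def sva_1d(x):
    """One-dimensional spatially variant apodization (SVA) along the last axis,
    applied separately to the real and imaginary parts; assumes one complex
    sample per resolution cell."""
    def apply(g):
        out = g.copy()
        s = g[..., :-2] + g[..., 2:]                        # left + right neighbours
        center = g[..., 1:-1]
        safe = np.where(s == 0.0, 1.0, s)
        w = np.where(s == 0.0, 0.0, -center / safe)         # unconstrained cosine-on-pedestal weight
        out[..., 1:-1] = np.where(w <= 0.0, center,
                          np.where(w >= 0.5, center + 0.5 * s, 0.0))
        return out
    return apply(x.real) + 1j * apply(x.imag)
```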
Feature selection in a machine learning system for texture classification
Sung Wook Baik, Jerzy Bala
The goals of this research as presented in this paper are (1) to explore the application of a machine learning method to problems of computer vision through the development of an integrated system of computer vision and machine learning, and (2) to select good features in the texture classification domain by evaluating texture features with classification results obtained by the integrated system. The key idea of the feature evaluation is to consider the effects of an increasing number of features in the integrated system. The research is concerned with the development of texture feature extraction methods for texture classification and the integration of machine learning and computer vision systems. The feature extraction methods include Markov random fields, co-occurrence matrices, moments, convolutions, and neighboring operations. A total of 25 texture features are extracted by these methods for the experiments. The machine learning method used in this research is the AQ15 method for inductive concept learning from examples. The integrated system consists of image preprocessing, feature extraction, classification (AQ15), and feature selection modules. The paper presents (1) which features are good choices, determined by evaluating the performance of the classification system as the number of features increases, and (2) whether the coefficients (features) of the Markov random fields are suitable for texture classification, by comparing them with the already selected features.
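As an example of one family of features involved, a grey-level co-occurrence matrix and a few of its classic statistics can be computed as below (single horizontal displacement only; the Markov random field, moment, convolution, and neighboring features of the full 25-feature set are not shown):

```python
import numpy as np

def glcm_features(img, levels=16):
    """Grey-level co-occurrence features for the displacement (dx=1, dy=0):
    quantize to 'levels' grey levels, accumulate the co-occurrence matrix, and
    return contrast, energy, and homogeneity."""
    q = np.floor(levels * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(int)
    q = np.clip(q, 0, levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()     # horizontally adjacent pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1.0)
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity
```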
FOPEN Detection
Distortion-invariant filters for foliage-penetration (FOPEN) synthetic aperture radar
David P. Casasent, Westley Cox
New distortion-invariant filters are considered for object detection and clutter rejection in ultra-wideband synthetic aperture radar (SAR) imagery. Because of the foliage penetration (FOPEN) ability of this SAR sensor, the data is attractive for automatic target recognition. We detail the first use of 2D distortion-invariant filters for object detection in FOPEN data. Since FOPEN imagery of a particular target is dependent upon the foliage obscuring the object, we use filters designed using targets in an open area and test them on objects in foliage. Initial results indicate attractive distortion-invariant detection and low false alarm rates.
Subband prescreening of foliage-penetrating SAR imagery
Timothy R. Miller, Lee C. Potter
In this paper we present the results of an empirical study investigating subband prescreener detection. The prescreener is used with ultra-wideband foliage penetrating synthetic aperture radar imagery. Our results demonstrate that, for the selected set of computationally simple features, lower resolution imagery can be used at the early detection stages. We also present initial multiband detection results. These results indicate that a combination of lower resolution subbands can be used in a fast prescreening algorithm without appreciable performance loss when compared to the fullband detector.
Ultrawideband radar target discrimination utilizing an advanced feature set
Lam H. Nguyen, Ravinder Kapoor, David C. Wong, et al.
The Army Research Laboratory, as part of its mission-funded applied research program, has been evaluating the utility of a low-frequency, ultra wideband imaging radar to detect tactical vehicles concealed by foliage. Measurement programs conducted at Aberdeen Proving Grounds and elsewhere have yielded a significant and unique database of extremely wideband and (in some cases) fully polarimetric data. Prior work has concentrated on developing computationally efficient methods to quickly canvass large quantities of data to identify likely target occurrences--often called `prescreening.' This paper reviews recent findings from our phenomenology/detection efforts. Included is a reformulated prescreener that has been trained and tested against a significantly larger data set than was used in the prior work. Also discussed are initial efforts aimed at the discrimination of targets from the difficult clutter remaining after prescreening. Performance assessments are included that detail detection rates versus false alarm levels.
Low-complexity multidiscriminant FOPEN target screener
We present a low-complexity model-based FOPEN target detection algorithm and discuss its potential application as a target screener within an end-to-end FOPEN SAR automatic target detection system. The algorithm uses multiple discriminants extracted over a local sliding window followed by a multivariate discrimination rule to perform target screening at the pixel level. We present detection performance results obtained against FOPEN SAR imagery and show that the multidiscriminant approach achieves better detection performance than a model-template matched-filter detection algorithm.
Multiresolution signature-based SAR target detection
Mark R. McClure, Priya Bharadwaj, Lawrence Carin
A full-wave electromagnetic-scattering model is utilized to effect a land-mine detector via a multiresolution template-matching-like algorithm. Detection is performed on fully polarimetric ultra-wideband (50 - 1200 MHz) synthetic aperture radar (SAR) imagery. Multiresolution template matching is effected via discrete-wavelet transform of the SAR imagery and the parametric target-signatures (templates). Detector results are presented in the form of receiver operating characteristics.
SAR Detection
Multiscale algorithms for joint detection and compression in SAR imagery
John D. Gorman, Rajesh Sharma
We present a novel content-adaptive multiresolution SAR image formation processing algorithm that incorporates dynamic, on-line detection algorithms into the image formation process. The idea is to vary image resolution locally depending on scene content, focusing the SAR imagery to fine resolution only in regions where the scene reflectivity varies rapidly, while forming the rest of the image at coarser resolution or with reduced fidelity. Our `decision-directed' SAR image formation algorithm may have applications in systems where on-board processing or datalink constraints limit the area coverage rate or resolution. We present examples of this multiresolution SAR processing on SAR imagery and show that compression rates on the order of 70:1 or more (i.e., 0.45 bits/pixel starting from 32 bits/complex sample, 16 bits/I, 16 bits/Q) can be obtained while still preserving coherent target signatures and with minor degradation in perceptual image quality.
Sequential hypothesis testing for dynamic SAR resource allocation
Nikola S. Subotic, Brian J. Thelen, David L. Wiseman
In this paper we describe a novel method of automatic target detection applied directly to the synthetic aperture radar (SAR) phase history. Our algorithm is based on a sequential likelihood ratio test (Wald test). The time dynamic behavior of the SAR phase history is modeled as a 2D autoregressive process. The sequential test attempts to dynamically ascertain the presence or absence of a target while the SAR phase history data is being collected. A target/no-target decision can then be made during the collection aperture. System resources such as collection aperture and image formation processing can be dynamically reallocated depending on scene content. In contrast, image-based detection methods wait until the entire aperture has been collected and an image formed before an algorithm is applied. We will show that significant savings in collection aperture can be obtained using this detection structure, which may increase system search rates.
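The sequential decision rule at the heart of the approach is Wald's SPRT; a generic sketch is shown below, where `llr_fn` stands in for the log-likelihood ratio computed from the paper's 2D autoregressive phase-history model (not reproduced here):

```python
import numpy as np

def sprt(samples, llr_fn, alpha=1e-3, beta=1e-2):
    """Wald sequential probability ratio test sketch: accumulate the
    log-likelihood ratio sample by sample and stop as soon as it crosses either
    threshold. alpha/beta are the desired false-alarm and miss probabilities."""
    upper = np.log((1 - beta) / alpha)     # accept H1 (target present)
    lower = np.log(beta / (1 - alpha))     # accept H0 (no target)
    llr = 0.0
    for n, s in enumerate(samples, start=1):
        llr += llr_fn(s)
        if llr >= upper:
            return 'target', n
        if llr <= lower:
            return 'no target', n
    return 'undecided', len(samples)       # aperture exhausted without a decision
```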
Use of model order for detecting potential target locations in SAR images
Ravi Kothari, David Ensley
The Region of Interest (ROI) detection stage of an Automatic Target Recognition system serves the crucial role of identifying candidate regions which may contain potential targets. The large variability in clutter (noise or countermeasures which provide target-like characteristics) complicates the task of developing accurate ROI determination algorithms. Presented in this paper is a new paradigm for ROI determination based on the premise that disjoint local approximation of the regions of a SAR image can provide discriminatory information for clutter identification. Specifically, regions containing targets are more likely to require complex approximators (i.e., ones with more free parameters, of a higher model order). We show preliminary simulation results with two different approximators (sigmoidal multi-layered neural networks with lateral connections, and radial basis function neural networks with a model selection criterion), both of which attempt to produce a smooth approximation of disjoint local patches of the SAR image with as few parameters as possible. Those patches of the image which require a higher model order are then labeled as ROIs. Our preliminary results show that sigmoidal networks provide a more consistent estimate of the model order than their radial basis function counterparts.
Signature Analysis
Use of the mean-square-error matching metric in a model-based automatic target recognition system
Stephen A. Stanhope, Eric R. Keydel, Wayne D. Williams, et al.
We examine the use of mean squared error matching metrics in support of model-based automatic target recognition under the Moving and Stationary Target Acquisition and Recognition (MSTAR) program. The utility of this type of matching metric is first examined in terms of target discriminability on a 5-class problem, using live signature data collected under the MSTAR program and candidate target signature features predicted from the MSTAR signature feature prediction (MSTAR Predict) module. Analysis is extended to include the exploitation of advanced model-based candidate target signature feature prediction capabilities of MSTAR Predict, made possible by the use of probability distribution functions to characterize target return phenomenology. These capabilities include the elimination of on-pose scintillation effects from predicted target signature features and the inclusion of target pose uncertainty and intra-class target variability into predicted target signature features. Results demonstrating the performance advantages supported by these capabilities are presented.
Variability study of Ka-band HRR polarimetric signatures on 11 T-72 tanks
William E. Nixon, H. J. Neilson, G. N. Szatkowski, et al.
In an effort to effectively understand signature verification requirements through the variability of a structure's RCS characteristics, the U.S. Army National Ground Intelligence Center (NGIC), with technical support from STL, originated a signature project plan to obtain MMW signatures from multiple similar tanks. In implementing this plan, NGIC/STL directed and sponsored turntable measurements performed by the U.S. Army Research Laboratory Sensors and Electromagnetic Resource Directorate on eleven T-72 tanks using an HRR full-polarimetric Ka-band radar. The physical condition and configuration of these vehicles were documented by careful inspection and then photographed during the acquisition sequence at 45-degree azimuth intervals. The turntable signature of one vehicle was acquired eight times over the three-day signature acquisition period to establish measurement variability on any single target. At several intervals between target measurements, the turntable signature of a 30 m2 trihedral was also acquired as a calibration reference for the signature library. Through an RCS goodness-of-fit correlation and ISAR comparison study, the signature-to-signature variability was evaluated for the eighteen HRR turntable measurements of the T-72 tanks. This signature data is available from NGIC on request for Government Agencies and Government Contractors with an established need-to-know.
Intraclass variability in ATR systems
Raj K. Bhatnagar, Ronald L. Dilsavor, Mark Minardi, et al.
In this paper we describe the results of our investigation into the intra-class variability of a vehicle class (T-72 Tanks) from the perspective of an Automatic Target Recognition system. We examine the performance of synthesized vehicle models for ATR systems and demonstrate that these models fall within the bounds of the vehicle class set by the intra-class variability of the vehicle. We then demonstrate the relevance of the mean-square-error between an image chip and a template as a useful measure of distance between the two vehicles. We also show that it is possible to constitute a superior class representative and classifier by combining chips from two different vehicles while constructing the templates.
MSE template size analysis for MSTAR data
Michael Lee Bryant, Steven W. Worrell, Anson C. Dixon
Analysis of statistical pattern recognition algorithms is typically performed using stationary, gaussian noise to simplify the analysis. An example is the excellent paper titled, `Effects of Sample Size in Classifier Design', which was written by Keinosuke Fukunaga and Raymond Hayes and published in the August 1989 issue of IEEE Transactions on Pattern Analysis and Machine Intelligence. One of the main conclusions of this paper is that more training samples will improve the estimation of classifier design parameters and classifier performance. This conclusion is valid when the observed signatures are stationary. However, when the observed signatures are non-stationary, as is the case for the synthetic aperture radar data collected for the Moving and Stationary Target Acquisition and Recognition program, more samples can actually corrupt the design parameter estimation process and lead to degraded performance. This fact has been known for some time, which explains the standard practice of designing templates at various pose angles. However, no theory currently exists to determine the optimum number of signatures to use in the template design process. This paper presents some initial work to determine the optimum number of samples to use.
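The baseline pose-binned template design the abstract refers to is the simple averaging step sketched below; the open question the paper studies, the optimum number of signatures per template, is exactly what this sketch does not answer:

```python
import numpy as np

def build_templates(chips, classes, aspects, bin_deg=10):
    """Average the normalized signatures that fall in each (class, aspect-bin)
    cell. chips: list of 2D arrays; classes: list of labels; aspects: list of
    azimuth angles in degrees. Bin width is an illustrative assumption."""
    sums, counts = {}, {}
    for chip, cls, az in zip(chips, classes, aspects):
        key = (cls, int(az // bin_deg))
        c = (chip - chip.mean()) / (chip.std() + 1e-12)     # normalize each signature
        sums[key] = sums.get(key, 0) + c
        counts[key] = counts.get(key, 0) + 1
    return {k: v / counts[k] for k, v in sums.items()}
```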
Template-based ATR
Effect of signal-to-clutter ratio on template-based ATR
Lance M. Kaplan, Romain Murenzi, Edward Asika, et al.
In this work, we evaluate the robustness of template matching schemes for automatic target recognition (ATR) against the effects of clutter layover. The results of our experiments characterize the performance of template-matching ATR in various image transform domains as a function of the signal-to-clutter ratio (SCR). The purpose of these transforms is to enhance the target features in a chip while suppressing features representative of background clutter or simple noise. The ATR experiments were performed for synthetic aperture radar imagery using target chips in the public-domain MSTAR database. The transforms include pointwise nonlinearities such as the logarithm and power operations. The templates are generated using the training portion of the MSTAR database at the nominal SCR. Many different ATR parameterizations are considered for each transform domain, where templates are built to represent different ranges of aspect angles in uniform angular bins of 5, 10, 15, 30, and 45 degree increments. The different ATRs were evaluated using the testing portion of the database, where synthetic clutter was added to lower the SCR.
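A minimal MSE template-matching classifier of the kind evaluated here can be sketched as follows; `templates` is assumed to be a dictionary mapping (class, aspect bin) to a template of the same size as the chip, and the transform-domain preprocessing studied in the paper is omitted:

```python
import numpy as np

def mse_classify(chip, templates):
    """Score a normalized chip against every stored template and return the
    (class, aspect-bin) key of the minimum mean-squared-error match."""
    c = (chip - chip.mean()) / (chip.std() + 1e-12)
    best_key, best_mse = None, np.inf
    for key, t in templates.items():
        t_n = (t - t.mean()) / (t.std() + 1e-12)
        mse = np.mean((c - t_n) ** 2)
        if mse < best_mse:
            best_key, best_mse = key, mse
    return best_key, best_mse
```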
HEATR project: ATR algorithm parallelization
Catherine E. Deardorf
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.
Baseline performance analysis of the LSD/DOA ATR against MSTAR data
David Cyganski, James C. Kilian, Debra Fraser
Large computational complexity arises in model-based ATR systems because an object's image is typically a function of several degrees of freedom, such as target class, pose, articulation, configuration and sensor geometry. Most model-based ATR systems treat this dependency by incorporating an exhaustive search through a library of image views. This approach, however, requires enormous storage and extensive search processing. Some ATR systems reduce the size of the library by forming composite averaged images at the expense of reducing the captured pose specific information, usually resulting in a decrease in performance. The Linear Signal Decomposition/Direction of Arrival (LSD/DOA) system, on the other hand, forms a reduced-size, essential-information object data set which implicitly incorporates target and sensor variation specific data. This reduces ATR processing by providing a low computational-cost indexing function with little loss of discrimination and pose estimation performance. The LSD/DOA system consists of two independent components: a computationally expensive off-line component which forms the object representation and a computationally inexpensive on-line object recognition component. The size of the stored data set may also be adjusted, providing a means to trade off complexity versus performance. The focus of this paper will be the performance of the LSD/DOA ATR against the MSTAR (public) data set.
Eigen-MINACE SAR detection filters with improved capacity
Rajesh Shenoy, David P. Casasent
Distortion-invariant correlation filters are used to detect and recognize distorted objects in scenes. They are used in a correlator and are thus shift-invariant. We describe a new way to design distortion-invariant correlation filters that ensures good generalization (same performance on training and test sets) and improved capacity (fewer filters that recognize distorted versions of multiple classes of objects). The traditional way of designing correlation filters uses different types of frequency domain preprocessing and linear combination of training images. We show that these different approaches can be implemented in a framework using linear combination of eigen-images of preprocessed training data. Using eigen-domain data is shown to produce filters that generalize better and have larger capacity. We show results on SAR data with multiple classes of objects using eigen-MINACE filters.
Evaluation of MACH and DCCF correlation filters for SAR ATR using the MSTAR public database
The MACH and DCCF correlation filter algorithms are evaluated using the publicly released MSTAR data base. These algorithms can be used as a matching engine for automatic target recognition in SAR imagery. In practice, the required filters can be synthesized using model based signature predictions. In addition, the MACH and DCCF algorithms are optimized to be robust to variations (distortions) in the target's signature. Unlike Matched Filtering or other exhaustive template based methods, the proposed approach requires very few filters. The paper describes the theory of the algorithm, key practical advantages and details of test results on the public MSTAR data base.
Feature-based ATR
Optimal target recognition method using accumulated evidence
Daniel J. Pack, Louis A. Tamburino, Kirk Sturtz
In this paper we present a new model-based feature matching method for an object recognition system. The actual matching takes place on a 2D image space by comparing a projected image of a 3D model with a sensor-extracted image of an actual target. The proposed method can be used with images generated by a wide variety of both camera and radar sensors, but we focus our attention on camera images with some discussions on synthetic aperture radar images. The effectiveness of the method is demonstrated only using point features. An extension to include region features should require some but not major revisions to the main structure of the proposed method. The method contains three phases to complete the target recognition process. The inputs to the method are a model projected image, a sensor-extracted image, an estimated current pose of the sensor with respect to a reference coordinate frame, and the Jacobian function associated with the estimated current sensor pose which relates 3D target features with 2D image features. The first stage uses geometric information of the target model to limit the number of possible corresponding feature sets, the second stage generates a set of possible sensor pose changes by solving a set of optimization problems, and the final stage finds the `best' change of sensor pose out of all possible ones. This change of sensor pose is added to the current sensor pose to form a new sensor location and orientation. The revised pose can then be used to reproject the model features and subsequently compute a compatibility measure between the model-projected and sensor-extracted images: this quantifies the reliability of the desired target recognition. In this paper we describe each of the three steps of the method and provide experimental results to demonstrate its validity.
Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: search technology for a robust ATR
Joseph R. Diemunsch, John Wissinger
The DARPA/Air Force Research Laboratory Moving and Stationary Target Acquisition and Recognition (MSTAR) program is developing a state-of-the-art model-based vision approach to Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). The model-based approach requires using off-line developed target models in an on-line hypothesize-and-test manner to compare predicted target signatures with image data and output target reports. Central to this model-based ATR is the PEMS (Predict-Extract-Match-Search) subsystem. The Search module is critical to PEMS by providing intelligent control to traverse the hypothesis feature space. A major MSTAR goal is to demonstrate robust ATR for variations in targets, including partially hidden targets. This paper will provide an update on the technology being developed under MSTAR and the status of this model-based ATR research, specifically concentrating on the Search module.
Recognizing articulated objects and object articulation in SAR images
Bir Bhanu, Grinnell Jones III, Joon S. Ahn
The focus of this paper is recognizing articulated objects and the pose of the articulated parts in SAR images. Using SAR scattering center locations as features, the invariance with articulation (i.e. turret rotation for the T72, T80 and M1a tanks, missile erect vs. down for the SCUD launcher) is shown as a function of object azimuth. Similar data is shown for configuration differences in the MSTAR (Public) Targets. The UCR model-based recognition engine (which uses non-articulated models to recognize articulated, occluded and non-standard configuration objects) is described and target identification performance results are given as confusion matrices and ROC curves for six inch and one foot resolution XPATCH images and the one foot resolution MSTAR data. Separate body and turret models are developed that are independent of the relative positions between the body and the turret. These models are used with a subsequent matching technique to refine the pose of the body and determine the pose of the turret. An expression of the probability that a random match will occur is derived and this function is used to set thresholds to minimize the probability of a random match for the recognition system. Results for identification, body pose and turret pose are presented as a function of percent occlusion for articulated XPATCH data and results are given for identification and body pose for articulated MSTAR data.
Neural Network ATR
Feature-based RNN target recognition
Hakan Bakircioglu, Erol Gelenbe
Detection and recognition of target signatures in sensory data obtained by synthetic aperture radar (SAR), forward-looking infrared, or laser radar have received considerable attention in the literature. In this paper, we propose a feature-based target classification methodology to detect and classify targets in cluttered SAR images. It makes use of selective signature data from the sensory data, together with a neural network technique based on a set of trained networks using the Random Neural Network (RNN) model (Gelenbe 89, 90, 91, 93), trained to act as a matched filter. We propose and investigate radial features of target shapes that are invariant to rotation, translation, and scale to characterize target and clutter signatures. These features are then used to train a set of learning RNNs which can be used to detect targets within clutter with high accuracy and to classify the targets or man-made objects from natural clutter. Experimental data from SAR imagery is used to illustrate and validate the proposed method and to calculate Receiver Operating Characteristics which illustrate the performance of the proposed algorithm.
Efficient end-to-end feature-based system for SAR ATR
Quoc Henry Pham, Timothy Myers Brosnan, Mark J. T. Smith, et al.
In this paper we discuss an end-to-end system for SAR automatic target recognition (ATR), giving particular emphasis to the discrimination and classification stages. The ATR system employs a three sequential stage approach to reduce complexity: a detection stage, a discrimination stage, and a classification stage. Details of the detection stage were presented previously. The target discrimination and classification methods, which we present here, involve extracting rotationally and translationally invariant features from the Radon transform of target chips. The methods are applied in both isolated and complete end-to-end systems on the TESAR baseline SAR database distributed by the U.S. Army Research Laboratory and in isolation using the public MSTAR database. The performance results on these SAR datasets are presented.
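One hedged way to obtain rotation- and translation-tolerant features from the Radon transform (not necessarily the authors' feature set) is sketched below using a brute-force projection loop:

```python
import numpy as np
from scipy.ndimage import rotate

def radon_features(chip, n_angles=32):
    """Rotation/translation-tolerant feature sketch based on a brute-force Radon
    transform: a rotation of the chip (approximately) circularly shifts the
    angle axis of the sinogram and a translation shifts each projection, so 2D
    FFT magnitudes of the sinogram suppress both dependencies."""
    projections = []
    for ang in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        rotated = rotate(chip, ang, reshape=False, order=1)
        projections.append(rotated.sum(axis=0))          # line-integral projection
    sinogram = np.stack(projections, axis=1)             # shape (bins, angles)
    spectrum = np.abs(np.fft.fft2(sinogram))
    return spectrum[: sinogram.shape[0] // 2, : n_angles // 2].ravel()
```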
Detection and classification of MSTAR objects via morphological shared-weight neural networks
Nipon Theera-Umpon, Mohamed A. Khabou, Paul D. Gader, et al.
In this paper we describe the application of morphological shared-weight neural networks to the problems of classification and detection of vehicles in synthetic aperture radar (SAR). Classification experiments were carried out with SAR images of T72 tanks and armored personnel carriers. A correct classification rate of more than 98% was achieved on a testing data set. Detection experiments were carried out with T72 tanks embedded in SAR images of clutter scenes. A near perfect detection rate and a low false alarm rate were achieved. The data used in the experiments was the standard training and testing MSTAR data set collected by Sandia National Laboratory.
Design for HMM-based SAR ATR
Dane P. Kottke, Paul D. Fiore, Kathy L. Brown, et al.
This paper describes progress on the Automatic Target Recognition (ATR) system for Synthetic Aperture Radar (SAR) imagery. The system is based upon a feature extraction, data ordering, and statistical modeling paradigm. Feature extraction is performed by applying image segmentation to convert the SAR imagery into one of four pixel classes. A description of a real-time image segmentation design is given. The segmented imagery is re-ordered from a 2D spatial representation to a sequential representation through the use of multiple Radon Transforms. Finally, the re-ordered data is classified by target type by applying Hidden Markov Model decoding techniques. Performance results on the MSTAR public targets database are provided.
ATR Performance Evaluation: End-to-End
Evaluation of SAR ATR algorithm performance sensitivity to MSTAR extended operating conditions
Testing a SAR Automatic Target Recognition (ATR) algorithm at or very near its training conditions often yields near perfect results, as we commonly see in the literature. This paper describes a series of experiments near and not so near to ATR algorithm training conditions. Experiments are set up to isolate individual Extended Operating Conditions (EOCs) and performance is reported at these points. Additional experiments are set up to isolate specific combinations of EOCs and the SAR ATR algorithm's performance is measured here also. The experiments presented here are a by-product of a DARPA/AFRL Moving and Stationary Target Acquisition and Recognition (MSTAR) program evaluation conducted in November of 1997. Although the tests conducted here are in the domain of EOCs, these tests do not encompass the `real world' (i.e., what you might see on the battlefield) problem. In addition to performance results, this paper describes an evaluation methodology including the Extended Operating Condition concept, as well as data, algorithms, and figures of merit. In summary, this paper highlights the sensitivity that a baseline Mean Squared Error ATR algorithm has to various operating conditions both near and varying degrees away from the training conditions.
Standard SAR ATR evaluation experiments using the MSTAR public release data set
Timothy D. Ross, Steven W. Worrell, Vincent J. Velten, et al.
The recent public release of high resolution Synthetic Aperture Radar (SAR) data collected by the DARPA/AFRL Moving and Stationary Target Acquisition and Recognition (MSTAR) program has provided a unique opportunity to promote and assess progress in SAR ATR algorithm development. This paper will suggest general principles to follow and report on a specific ATR performance experiment using these principles and this data. The principles and experiments are motivated by AFRL experience with the evaluation of the MSTAR ATR.
ATR Performance Evaluation: HRR ATR
Stochastic models and performance bounds for pose estimation using high-resolution radar data
Joseph A. O'Sullivan, Steven P. Jacobs, Vikas Kedia
Models for radar data have been pursued for many years. The classical work of Swerling and Marcum, and Gaussian and Rician models in general, have been most common. In contrast to these statistical models, there have been tremendous efforts expended to develop signature prediction code designed to predict radar returns from faceted objects. Ongoing research attempts to merge these efforts to yield good statistical models for radar data that are based in part on the outputs of signature prediction codes. Some of the issues are explored using simulated radar data from the University Research Initiative Synthetic Dataset. A general description of the class of Gaussian models for high resolution radar range profiles is given. These models include the possibility of having range profiles for different orientations that are correlated. The performance using these models for target orientation estimation and target recognition is described. A framework for analyzing the improvement in performance for using high resolution radar range profiles from multiple radar sensors, multiple polarizations, and multiple elevations is presented.
1D HRR data analysis and ATR assessment
Robert L. Williams, David C. Gross, John J. Westerkamp, et al.
High range resolution (HRR) radar is important for its all-weather, day/night, long standoff capability. Additionally, it is an excellent sensor for identifying moving ground targets because it produces high resolution target signatures and because targets can be separated from ground clutter using Doppler processing. Ongoing research under the System Oriented HRR Automatic Recognition Program has led to an increased understanding of the HRR data, the target separability, and a baseline assessment of target recognition algorithms using template based approaches.
Performance Prediction
Upper bound calculations of ATR performance for ladar sensors
Vince E. Diehl, Geoffrey T. Benedict-Hall, Chris Heydemann
The use of robust and representative synthetic imagery data to test and evaluate automatic target recognition (ATR) systems has long been desired but generally considered beyond the current state of the art. The use of synthetic data is investigated here to calculate upper bounds on potential ATR system performance. This paper presents the use of synthetically generated imagery templates as a means of developing upper bounds on ATR performance for laser radar based seekers. The approach employs a synthetic scene generation capability and integrates it with error models that represent decrements in performance due to resolution, noise, and geometric distortion resulting from the sensing process. This paper describes the modeling approach taken and presents preliminary results. The model is currently undergoing testing against real imagery and is being used to select test sets to more effectively evaluate ATRs.
Nonparametric error estimation techniques applied to MSTAR data sets
Raman K. Mehra, Melvyn Huff, Ravi B. Ravichandran, et al.
The development of ATR performance characterization tools is very important for the design, evaluation, and optimization of ATR systems. One possible approach for characterizing ATR performance is to develop measures of the degree of separability of the different target classes based on the available multi-dimensional image measurements. One such measure is the Bayes error, which is the minimum probability of misclassification. Bayes error estimates have previously been obtained using Parzen window techniques on real aperture, high range resolution radar data sets and on simulated synthetic aperture radar (SAR) images. This report extends these results to real MSTAR SAR data. Our results show that the Parzen window technique is a good method for estimating the Bayes error for such large-dimensional data sets. However, in order to apply non-parametric error estimation techniques, feature reduction is needed. A discussion of the relationship between feature reduction and non-parametric estimation is included in this paper. The results of multimodal Parzen estimation on MSTAR images are also described. The tools used to produce the Bayes error estimates have been modified to produce Neyman-Pearson criterion estimates as well. Receiver Operating Characteristic curves are presented to illustrate non-parametric Neyman-Pearson error estimation on MSTAR images.
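The sketch below shows the basic plug-in construction behind a Parzen-window Bayes error estimate: estimate each class density with Gaussian kernels and count the misclassifications of the resulting decision rule on held-out samples. The kernel width, feature dimension, and toy two-class data are assumptions for illustration, consistent with the paper's point that feature reduction precedes non-parametric estimation.

```python
# Minimal Parzen-window Bayes-error estimate on low-dimensional features.
import numpy as np

def parzen_density(x, samples, h):
    """Gaussian-kernel Parzen estimate of p(x) from training samples."""
    d = samples.shape[1]
    diff = (samples - x) / h
    k = np.exp(-0.5 * np.sum(diff ** 2, axis=1)) / ((2 * np.pi) ** (d / 2) * h ** d)
    return k.mean()

def bayes_error_estimate(train_a, train_b, test_a, test_b, h=0.5):
    """Empirical error rate of the plug-in Bayes rule, assuming equal priors."""
    errors = 0
    for x in test_a:
        errors += parzen_density(x, train_a, h) < parzen_density(x, train_b, h)
    for x in test_b:
        errors += parzen_density(x, train_b, h) < parzen_density(x, train_a, h)
    return errors / (len(test_a) + len(test_b))

# toy two-class problem in 3 reduced features (synthetic stand-in data)
rng = np.random.default_rng(3)
a = rng.standard_normal((400, 3))
b = rng.standard_normal((400, 3)) + 1.5
print("estimated Bayes error:", bayes_error_estimate(a[:200], b[:200], a[200:], b[200:]))
```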
Theoretical and complexity issues for feature set evaluation using boundary methods
William E. Pierson Jr., Batuhan Ulug, Stanley C. Ahalt, et al.
Boundary Methods (BMs) are a collection of tools used for distribution analysis. This paper explores the theoretical and complexity issues associated with using BMs for Feature Set Evaluation (FSE). First, we show the theoretical relationship between the Overlap Sum, the BM measure of class separability, and the Bayes error. This relationship demonstrates the utility of using BMs for FSE. Next, we investigate the complexity issues associated with using BMs for FSE and compare them with those of other techniques used for FSE.
Information measures for object recognition
Matthew L. Cooper, Michael I. Miller
We have been studying information-theoretic measures, entropy and mutual information, as performance bounds on the information gain given a standard suite of sensors. Object pose is described by a single angle of rotation using a Lie group parameterization; observations are simulated using CAD models for the targets of interest and simulators such as the PRISM infrared simulator. Variability in the data due to the sensor by which the scene is remotely observed is statistically characterized via the data likelihood function. Taking a Bayesian approach, the inference is based on the posterior density, constructed as the product of the data likelihood and the prior density for target pose. Given observations from multiple sensors, data fusion is automatic in the posterior density. Here, we consider the mutual information between the target pose and the remote observation as a performance measure in the pose estimation context. We have quantitatively examined the dependence of FLIR information gain on target thermodynamic state, the relative information gain of the FLIR and video sensors, and the additional information gain due to sensor fusion. Furthermore, we have applied Kullback-Leibler distance measures to quantify information loss due to thermodynamic signature mismatch.
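The mutual-information measure discussed above can be approximated by Monte Carlo as the prior pose entropy minus the average posterior pose entropy. The sketch below does this for a toy Gaussian observation model; the pose grid, signature vectors, and noise level stand in for the CAD/PRISM-derived signatures used in the paper.

```python
# Monte Carlo estimate of I(pose; observation) for a toy Gaussian sensor model.
import numpy as np

rng = np.random.default_rng(4)
n_poses, dim, sigma = 36, 16, 0.5
signatures = rng.random((n_poses, dim))          # one synthetic mean signature per pose

def log_lik(obs, mean, sigma):
    return -0.5 * np.sum((obs - mean) ** 2) / sigma**2 - dim * np.log(sigma)

def mutual_information(n_samples=2000):
    """I(pose; obs) = H(pose) - E[H(pose | obs)], estimated by sampling."""
    h_prior = np.log(n_poses)                    # uniform prior over pose
    h_post = 0.0
    for _ in range(n_samples):
        pose = rng.integers(n_poses)
        obs = signatures[pose] + sigma * rng.standard_normal(dim)
        logp = np.array([log_lik(obs, s, sigma) for s in signatures])
        post = np.exp(logp - logp.max())
        post /= post.sum()                       # Bayes posterior over pose
        h_post += -np.sum(post * np.log(post + 1e-300))
    return h_prior - h_post / n_samples          # nats

print("estimated I(pose; obs) =", mutual_information(), "nats")
```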
Target detection theory for stripmap SAR using physics-based multiresolution signatures
Chen-Pang Yeang, Jeffrey H. Shapiro
A first-principles target detection theory is developed for stripmap operation of a synthetic aperture radar (SAR). The intermediate-frequency signal model consists of the return from a single-component target embedded in the clutter return from a random rough-surface reflector plus white Gaussian receiver noise. Target-return models are developed from electromagnetic theory for the following cases: specular reflector, dihedral reflector, and dielectric volume. Traditional stripmap SAR processing is assumed, using matched filters in both range and cross-range directions, but processing durations for these filters are chosen to optimize Neyman-Pearson detection performance by exploiting the multiresolution signatures of these targets. An optimum, whitening-filter SAR processor is also studied, and its detection performance is compared with that of the preceding multiresolution receiver.
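The elementary building block of this detection theory is a matched-filter statistic compared against a Neyman-Pearson threshold set by the desired false-alarm probability. The hedged sketch below estimates such a threshold and the resulting detection probability for a synthetic signature in white Gaussian noise; it does not reproduce the paper's clutter or multiresolution models.

```python
# Matched-filter detection with a Neyman-Pearson threshold (synthetic example).
import numpy as np

rng = np.random.default_rng(5)
n = 256
signature = np.sin(2 * np.pi * np.arange(n) / 32)        # assumed known target return
noise_sigma = 1.0

def matched_filter_stat(x, s):
    return np.dot(x, s) / np.linalg.norm(s)

# set the threshold for a desired false-alarm probability from noise-only trials
noise_stats = np.array([matched_filter_stat(noise_sigma * rng.standard_normal(n), signature)
                        for _ in range(20000)])
pfa_target = 1e-2
threshold = np.quantile(noise_stats, 1.0 - pfa_target)

# estimate detection probability with the target present
hits = sum(matched_filter_stat(signature + noise_sigma * rng.standard_normal(n), signature)
           > threshold for _ in range(5000))
print("threshold %.2f, estimated Pd %.3f at Pfa %.2g" % (threshold, hits / 5000, pfa_target))
```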
Performance modeling of feature-based classification in SAR imagery
Michael Boshra, Bir Bhanu
We present a novel method for modeling the performance of a vote-based approach for target classification in SAR imagery. In this approach, the geometric locations of the scattering centers are used to represent 2D model views of a 3D target for a specific sensor under a given viewing condition (azimuth, depression and squint angles). Performance of such an approach is modeled in the presence of data uncertainty, occlusion, and clutter. The proposed method captures the structural similarity between model views, which plays an important role in determining the classification performance. In particular, performance would improve if the model views are dissimilar and vice versa. The method consists of the following steps. In the first step, given a bound on data uncertainty, model similarity is determined by finding feature correspondence in the space of relative translations between each pair of model views. In the second step, statistical analysis is carried out in the vote, occlusion and clutter space, in order to determine the probability of misclassifying each model view. In the third step, the misclassification probability is averaged for all model views to estimate the probability-of-correct-identification (PCI) plot as a function of occlusion and clutter rates. Validity of the method is demonstrated by comparing predicted PCI plots with ones that are obtained experimentally. Results are presented using both XPATCH and MSTAR SAR data.
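To make the vote-based matching concrete, the sketch below counts, for each stored model view, the number of test scatterers that fall within an uncertainty bound of some model scatterer, after simulating occlusion and clutter. The bound, occlusion rate, and clutter counts are arbitrary choices for illustration only, not the paper's analytical performance model.

```python
# Illustrative vote-based matching over scattering-centre locations.
import numpy as np

rng = np.random.default_rng(6)

def votes(test_pts, model_pts, eps=1.5):
    """Number of test scatterers within eps pixels of some model scatterer."""
    d = np.linalg.norm(test_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return int(np.sum(d.min(axis=1) <= eps))

# two model views, each a set of 2D scattering-centre locations (synthetic)
models = {"view_A": rng.uniform(0, 64, (20, 2)), "view_B": rng.uniform(0, 64, (20, 2))}

# simulate a test view from view_A with uncertainty, occlusion, and clutter
true = models["view_A"]
kept = true[rng.random(len(true)) > 0.3]                 # roughly 30% occlusion
noisy = kept + 0.5 * rng.standard_normal(kept.shape)     # location uncertainty
clutter = rng.uniform(0, 64, (5, 2))                     # spurious clutter scatterers
test = np.vstack([noisy, clutter])

scores = {k: votes(test, m) for k, m in models.items()}
print(scores, "->", max(scores, key=scores.get))
```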
MSTAR target classification using Bayesian pattern theory
Raman K. Mehra, Ravi B. Ravichandran, Anuj Srivastava
In the work described herein, Bayesian Pattern Theory is used to formulate the overall ATR problem as the optimization of a single objective function over the parameters to be estimated. Thus, all image understanding operations are realized naturally, automatically, and consistently as byproducts of a large-scale stochastic optimization process. The work begins with a derivation of the Bayesian cost function, obtained from a posterior probability distribution on the space of pose parameters, and solves the optimization problem with respect to this posterior. Two noise models were considered in the derivation of the cost function: the first is the commonly used Gaussian model, and the second, considering that a SAR image is complex, is a Rician model. In order to test the robustness of the algorithm with respect to target types and adverse background conditions, four cases were constructed: in Case (1), Gaussian noise was used and a Gaussian noise model was used in classification; in Case (2), Rician noise was used and a Gaussian noise model was used in classification; in Case (3), Rician noise was used and a Rician noise model was used in classification; and in Case (4), MSTAR clutter was used. For each case, we compute the probability of detection as a function of SNR. We obtained very good results for Case (1); however, the results at very low SNR may be unrealistic because the Gaussian noise assumptions are not accurate. As expected, the results for Case (2) were poor while the results for Case (3) were good. Compared to Case (1), the Case (3) results are more reliable because the Rician noise model is representative. The results for Case (4) were also good. These results were also independently confirmed by Bayes error analysis.
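The difference between the two noise models can be seen in a per-pixel log-likelihood comparison: a Gaussian term versus the Rician density appropriate for magnitude SAR data. The sketch below scores a synthetic chip against two templates under each model; the templates, noise level, and chip sizes are invented for illustration and do not reproduce the paper's full pose-optimization framework.

```python
# Gaussian vs. Rician per-pixel log-likelihoods for a magnitude SAR chip.
import numpy as np
from scipy.special import i0e   # exponentially scaled modified Bessel I0

def gaussian_loglik(x, template, sigma):
    return -0.5 * np.sum((x - template) ** 2) / sigma**2

def rician_loglik(x, template, sigma):
    """Sum of per-pixel Rician log-densities for magnitude data x >= 0."""
    z = x * template / sigma**2
    log_i0 = np.log(i0e(z)) + z                    # numerically stable log I0
    return np.sum(np.log(x / sigma**2) - (x**2 + template**2) / (2 * sigma**2) + log_i0)

rng = np.random.default_rng(7)
sigma = 0.3
templates = {"T72": rng.random((32, 32)) + 0.5, "BMP2": rng.random((32, 32)) + 0.5}

# synthesize a Rician-distributed magnitude image around the T72 template
nu = templates["T72"]
chip = np.abs(nu + sigma * rng.standard_normal(nu.shape)
              + 1j * sigma * rng.standard_normal(nu.shape))

for name, loglik in (("Gaussian", gaussian_loglik), ("Rician", rician_loglik)):
    scores = {k: loglik(chip, t, sigma) for k, t in templates.items()}
    print(name, "model declares:", max(scores, key=scores.get))
```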
Image Registration and Fusion
Analyst-trainable fusion algorithms for surveillance applications
Richard L. Delanoy, Richard T. Lacoss
The Toolkit for Image Mining (TIM) is a prototype software environment that enables users with no knowledge of image processing and machine learning to interactively create image search and image analysis tools. TIM is being used as a testbed for the study of issues related to data fusion and on-the-fly training in the context of target recognition in DARPA's Semi-Automated IMINT Processing system. Experiments done in this environment suggest that on-the-fly training is technically feasible, need not pose an extra burden on image analysts, and can increase the flexibility and adaptability of surveillance algorithms.
Template-based ATR
Automatic target recognition using Eigen templates
Arnab Kumar Shaw, Vijay Bhatnagar
This paper presents ATR results with High Range Resolution (HRR) profiles used for classification. It is shown that effective HRR-ATR performance can be achieved if the templates are formed via Singular Value Decomposition (SVD) of detected HRR profiles. It is demonstrated theoretically that, in the mean-squared sense, the eigenvectors represent the optimal feature set. SVD analysis of a large class of XPATCH and MSTAR HRR data clearly indicates that a significant proportion (> 90%) of target energy is accounted for by the eigenvectors of the range correlation matrix corresponding to only the largest singular value. The SVD also decouples the range and angle basis spaces. Furthermore, it is shown that significant clutter reduction can be achieved if the HRR data are reconstructed using only the significant eigenvectors. ATR results with eigen-templates are compared with those based on mean-templates. Results are included for both XPATCH and MSTAR data using linear least-squares and matched-filter based classifiers.
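The eigen-template idea can be sketched as follows: stack the detected HRR profiles over an aspect window into a matrix, take its SVD, and keep the dominant left singular vector as the class template; a matched-filter score then compares a test profile against each template. The synthetic profiles below are stand-ins for the XPATCH/MSTAR data, and the single-template-per-class setup is a simplification for illustration.

```python
# Eigen-template construction via SVD and matched-filter classification.
import numpy as np

rng = np.random.default_rng(8)
n_bins, n_angles = 64, 40

def eigen_template(profiles):
    """profiles: (n_bins, n_angles) matrix of magnitude HRR profiles."""
    U, s, Vt = np.linalg.svd(profiles, full_matrices=False)
    return U[:, 0]                       # dominant range-space basis vector

def matched_filter_score(profile, template):
    p = profile / np.linalg.norm(profile)
    return abs(np.dot(p, template))      # template is already unit-norm from the SVD

# two synthetic targets, each with profiles varying slowly over aspect angle
base = {c: rng.random(n_bins) for c in ("T72", "BMP2")}
train = {c: np.stack([b + 0.1 * rng.standard_normal(n_bins) for _ in range(n_angles)], axis=1)
         for c, b in base.items()}
templates = {c: eigen_template(m) for c, m in train.items()}

test = base["BMP2"] + 0.1 * rng.standard_normal(n_bins)
scores = {c: matched_filter_score(test, t) for c, t in templates.items()}
print(scores, "->", max(scores, key=scores.get))
```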