The 3D complex damped exponential Cramer-Rao estimation bound and algorithms
Author(s):
Matthew Pepin
The desire for insightful and automated segmentation or decomposition of 3D Synthetic Aperture Radar (SAR) imagery, and of other 3D collections of complex detected electromagnetic wave field data, leads to decomposition models with a basis such as the 3D complex exponential. This paper presents the Cramer-Rao Bound (CRB) for estimation of a sum of 3D complex damped exponentials, including their complex amplitudes and complex frequencies in three dimensions. In examples, synthetic 3D rectangular wave field and SAR data are decomposed into 3D damped exponentials to the accuracy of the CRB. As in the 1D and 2D cases, the 3D case for a single exponential in rectangular coordinates can be directly related to the 3D Fourier Transform (FT) and its estimation accuracy. The use of SAR data presents several additional complexities. Linear flight path SAR is the most appropriate for complex damped exponentials, but is, by the nature of SAR, also approximate. Other SAR modalities such as circular, other curvilinear, or otherwise non-uniform flight paths are prevalent but present a different multidimensional Impulse Response Function (IPR) and are expected to deviate from the accuracy of the CRB derived here. Additionally, sampling, interpolation, and corrections to an idealized flight path are minimized or ignored in creating the synthetic SAR data sets and in applying the 3D complex damped exponential decomposition to the data. The parameters of the 3D complex damped exponentials will be estimated by several algorithms and their accuracy will be compared with the 3D complex damped exponential Cramer-Rao bound.
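For context, a common form of such a signal model (an assumed parameterization for illustration, not necessarily the exact one used in the paper) is a sum of K 3D complex damped exponentials observed in noise on a rectangular grid:

$$ y(n_1,n_2,n_3) \;=\; \sum_{k=1}^{K} a_k e^{j\phi_k} \prod_{d=1}^{3} e^{(\alpha_{k,d} + j\omega_{k,d})\, n_d} \;+\; w(n_1,n_2,n_3), $$

where $a_k e^{j\phi_k}$ is the complex amplitude of the $k$-th component, $\alpha_{k,d}$ and $\omega_{k,d}$ are its damping and angular frequency along dimension $d$ (together the complex frequency), and $w$ is complex white Gaussian noise. Under such a model, the CRB follows from inverting the Fisher information matrix computed with respect to the real parameters $\{a_k, \phi_k, \alpha_{k,d}, \omega_{k,d}\}$.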
Effects of region of interest selection on phase history based SAR moving target autofocus
Author(s):
David A. Garren
Recent work has revealed that an adaptation of phase gradient autofocus (PGA) techniques can be used to refocus the signature smears induced by moving targets in synthetic aperture radar (SAR) images. In this approach, the residual range migration errors induced by target motion are estimated and corrected within the phase history data (PHD) domain. The quality of the target images generated using this PHD-based autofocus methodology varies according to the region of interest (ROI) selected as input to the processing. The present analysis investigates the variation in the quality of the refocused target images with regard to ROI selection.
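For readers unfamiliar with PGA, the sketch below shows a standard image-domain PGA iteration of the kind the PHD-based method adapts; the array layout, window width, and FFT conventions here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pga_iteration(img):
    """One textbook phase gradient autofocus (PGA) iteration on a complex SAR image.

    img: 2-D complex array, rows = range bins, columns = azimuth (cross-range).
    Returns the image corrected by the estimated azimuth phase error.
    """
    n_rng, n_az = img.shape

    # 1. Circularly shift the brightest scatterer on each range line to the center column.
    centered = np.empty_like(img)
    for r in range(n_rng):
        peak = np.argmax(np.abs(img[r]))
        centered[r] = np.roll(img[r], n_az // 2 - peak)

    # 2. Window around the center (fixed width here) and go back to the azimuth signal domain.
    win = np.zeros(n_az)
    half = n_az // 8
    win[n_az // 2 - half : n_az // 2 + half] = 1.0
    g = np.fft.ifft(centered * win, axis=1)

    # 3. Estimate the phase-error gradient, weighted across range lines.
    dg = np.diff(g, axis=1)
    num = np.sum(np.imag(np.conj(g[:, :-1]) * dg), axis=0)
    den = np.sum(np.abs(g[:, :-1]) ** 2, axis=0)
    grad = num / np.maximum(den, 1e-12)

    # 4. Integrate the gradient and remove the linear trend (which only shifts the image).
    phi = np.concatenate(([0.0], np.cumsum(grad)))
    phi -= np.polyval(np.polyfit(np.arange(n_az), phi, 1), np.arange(n_az))

    # 5. Apply the conjugate phase correction in the azimuth signal domain.
    spec = np.fft.ifft(img, axis=1) * np.exp(-1j * phi)
    return np.fft.fft(spec, axis=1)
```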
Multi-sensor synthetic data generation for performance characterization
Author(s):
Christopher Paulson;
Adam Nolan;
Lori Westerkamp;
Edmund Zelnio
This paper introduces an innovative framework for the development of multi-sensor datasets for target recognition. This framework goes beyond the paradigm of generating synthetic data to augment algorithm training; it employs carefully generated training and test data to characterize algorithm performance over any desired operating conditions, culminating in the ability to generate algorithm performance models for use in fusion, sensor resource management, and mission simulation. The current system instantiates the full path, from operating conditions to synthetic data to results, for synthetic aperture radar. Fully integrated electro-optic and laser radar paths, to be completed in 2019, will comprise a complete multi-sensor testbed for performance prediction. Future work will add sensor modes as well as automated decision and feature fusion for target identification.
Realistic SAR data augmentation using machine learning techniques
Author(s):
Benjamin Lewis;
Omar DeGuchy;
Joseph Sebastian;
John Kaminski
While many aspects of the image recognition problem have been largely solved by presenting large datasets to convolutional neural networks, there is still much work to do when data is sparse. For synthetic aperture radar (SAR), there is a lack of data that stems both from the cost of collecting data and from the small size of the community that collects and uses such data. In this case, electromagnetic simulation is an effective stopgap measure, but its effectiveness at mirroring reality is upper bounded both by the quality of the electromagnetic prediction code and by the fidelity of the target's digital model. In practice, we find that classification models trained on synthetic data generalize poorly to measured data. In this work, we investigate three machine learning networks, with the goal of using each network to bridge the gap between measured and synthetic data. We experiment with two types of generative adversarial networks as well as a modification of a convolutional autoencoder. Each network tackles a different aspect of the disparity between measured and synthetic data, namely: generating new, realistic, labeled data; translating data between the measured and synthetic domains; and joining the manifolds of the two domains into an intermediate representation. Classification results using widely employed neural network classifiers are presented for each experiment; these results suggest that such data manipulation improves classification generalization for measured data.
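As a minimal illustration of the third idea (mapping both domains into a shared intermediate representation), the sketch below shows a small convolutional autoencoder in PyTorch; the chip size, layer widths, and training details are placeholder assumptions, not the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

class ChipAutoencoder(nn.Module):
    """Minimal convolutional autoencoder sketch for 64x64 single-channel image chips.

    Chips from both the measured and synthetic domains are encoded into a shared
    latent space and decoded back, so the two domains meet in an intermediate
    representation.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid()  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training-step sketch: reconstruct chips from a mixed measured/synthetic batch.
model = ChipAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
batch = torch.rand(8, 1, 64, 64)      # stand-in for mixed measured/synthetic chips
optimizer.zero_grad()
loss = loss_fn(model(batch), batch)
loss.backward()
optimizer.step()
```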
Semi-random deep neural networks for near real-time target classification
Author(s):
Humza Syed;
Ryan Bryla;
Uttam Kumar Majumder;
Dhireesha Kudithipudi
In recent years, deep neural networks have shown great advances in image processing tasks. For modern datasets, these networks require long training times due to backpropagation, a high amount of computational resources for weight updates, and memory-intensive weight storage. Exploiting randomness during the training of deep neural networks can mitigate these concerns by reducing the computational costs without sacrificing network performance. However, a fully randomized network has limitations for real-time target classification, as it leads to poor performance. Therefore, we are motivated to use semi-random deep neural networks to exploit random fixed weights. In this paper, we demonstrate that semi-random deep neural networks can achieve near real-time training with accuracies comparable to conventional deep neural network models. We find that these networks are enhanced by the use of skip connections and train rapidly at the cost of dense memory usage. With greater memory resources available, these networks can train on larger datasets at a fraction of the training time costs. These semi-random deep neural network architectures open up an avenue for further research into utilizing random fixed weights in neural networks.
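A minimal sketch of the random-fixed-weight idea, assuming a PyTorch setting: a convolution whose randomly initialized weights are frozen (so backpropagation never updates them), combined with a small trainable mixing layer and an identity skip connection. The layer shapes are illustrative, not the architecture studied in the paper.

```python
import torch
import torch.nn as nn

class SemiRandomBlock(nn.Module):
    """Block mixing a frozen random convolution, a trainable 1x1 convolution,
    and an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.random_conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        for p in self.random_conv.parameters():
            p.requires_grad = False            # random weights stay fixed; no gradient cost
        self.trainable = nn.Conv2d(channels, channels, 1)  # small learned mixing layer
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.trainable(self.random_conv(x)) + x)  # skip connection

# Usage sketch: only the trainable parameters are handed to the optimizer.
block = SemiRandomBlock(16)
optimizer = torch.optim.Adam([p for p in block.parameters() if p.requires_grad])
out = block(torch.rand(2, 16, 32, 32))
```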
SAR object classification implementation for embedded platforms
Author(s):
Chris Capraro;
Uttam Majumder;
Josh Siddall;
Eric K. Davis;
Dan Brown;
Chris Cicotta
This research details a new approach to optimizing neural network architectures for Synthetic Aperture Radar (SAR) object classification on neuromorphic (e.g., IBM's TrueNorth) and embedded platforms. We developed an algorithm to reduce the run-time and power consumption of Deep Neural Network (DNN) classifiers by reducing the DNN model size required for a given object classification task. Reducing the model size reduces the number of mathematical operations performed and the memory required, enabling computation on low size, weight, and power (SWaP) hardware. We provide our approach and results on relevant SAR data. Our entirely new approach starts with a very small multi-class convolutional neural network (CNN) and replaces the standard negative log likelihood loss function with a single-class log loss function. We then generate an ensemble of small models trained for an individual class by varying the training data using k-fold cross-validation and augmentation. This is done for each class, and the resulting ensembles classify objects by finding the maximum average probability across each ensemble of single-class classifiers. We demonstrate 91-99 percent classification accuracy on three different datasets with composite networks that require almost 10 times fewer mathematical operations than SqueezeNet (a reduced-parameter CNN with AlexNet performance).
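The decision rule described above can be summarized in a few lines; the array layout below is an assumption for illustration, not the authors' code.

```python
import numpy as np

def ensemble_decision(probs):
    """Average each class's ensemble of single-class probabilities, then declare
    the class with the largest average.

    probs: array of shape (n_classes, n_models, n_samples) holding per-model
    single-class probabilities.
    """
    avg = probs.mean(axis=1)        # (n_classes, n_samples): average within each ensemble
    return np.argmax(avg, axis=0)   # winning class per sample

# Toy usage: 3 classes, 5 models per class, 4 samples.
scores = np.random.rand(3, 5, 4)
print(ensemble_decision(scores))
```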
A deep learning approach to the Synthetic and Measured Paired and Labeled Experiment (SAMPLE) challenge problem
Author(s):
Theresa Scarnati;
Benjamin Lewis
Convolutional neural networks (CNNs) are tremendously successful at classifying objects in electro-optical images. However, with synthetic aperture radar (SAR) data, off-the-shelf classifiers are insufficient because limited measured SAR data are available and SAR images are not invariant to object manipulations. In this paper, we utilize the Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset to present an approach to the SAR measured and synthetic domain mismatch problem. We pre-process the synthetic and measured data using Variance-Based Joint Sparsity despeckling, quantization, and clutter transfer techniques. The t-SNE (t-distributed stochastic neighbor embedding) dimensionality reduction method is used to show that pre-processing the data in the proposed way brings the two-dimensional manifolds represented by the measured and synthetic data closer together. A DenseNet classification network is trained with unprocessed and processed data, showing that when no measured data are available for training, it is beneficial to pre-process SAR data with the proposed technique.
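A minimal sketch of the t-SNE comparison step, assuming scikit-learn and flattened image chips as stand-ins for the pre-processed measured and synthetic sets:

```python
import numpy as np
from sklearn.manifold import TSNE

# Flattened chips; placeholders for the despeckled/quantized measured and synthetic data.
measured = np.random.rand(200, 64 * 64)
synthetic = np.random.rand(200, 64 * 64)

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
    np.vstack([measured, synthetic])
)
# The first 200 rows embed the measured chips, the rest the synthetic chips; plotting
# the two groups shows how close the manifolds sit before and after pre-processing.
```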
A SAR dataset for ATR development: the Synthetic and Measured Paired Labeled Experiment (SAMPLE)
Author(s):
Benjamin Lewis;
Theresa Scarnati;
Elizabeth Sudkamp;
John Nehrbass;
Stephen Rosencrantz;
Edmund Zelnio
The publicly available Moving and Stationary Target Acquisition and Recognition (MSTAR) synthetic aperture radar (SAR) dataset has been a valuable tool in the development of SAR automatic target recognition (ATR) algorithms over the past two decades, leading to the achievement of excellent target classification results. However, because of the large number of possible sensor parameters, target configurations, and environmental conditions, the SAR operating condition (OC) space is vast. This makes collecting sufficient measured data to cover the entire OC space impossible. Thus, synthetic data must be generated to augment measured datasets. Studying the fidelity of synthetic data with respect to classification tasks is non-trivial. To that end, we introduce the Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset, which consists of SAR imagery from the MSTAR dataset and well-matched synthetic data. By matching target configurations and sensor parameters between the measured and synthetic data, the SAMPLE dataset is ideal for investigating the differences between measured and synthetic SAR imagery. In addition to the dataset, we propose four experimental designs challenging researchers to investigate the best ways to classify targets in measured SAR imagery given synthetic SAR training imagery.
Synthetic data accuracy sensitivity to CAD model accuracy using ATR-based metrics
Author(s):
Elizabeth R. Sudkamp;
John W. Nehrbass;
Eric Branch;
Michael Levy
In this paper we present a methodology for validating a 3D Computer Aided Design (CAD) model's accuracy for radar data synthesis. CAD models have been used to generate computer-simulated radio frequency (RF) data. One problem with existing CAD-based simulations is that there is no metric or tool to verify whether data produced from the CAD model can be classified correctly before and after modifications have been made. This paper presents a methodology to quantify the similarities and differences in data generated from CAD models before and after modifications, and presents this information through confusion matrices and a visualization technique. Results for three experiments involving CAD model modifications are presented.
Open set SAR target classification
Author(s):
Edmund Zelnio;
Anne Pavy
Deep learning has shown significant performance advantages in object recognition problems. In particular, convolutional neural networks (CNNs) have been a preferred approach when recognizing objects in imagery. In general, however, CNNs have been applied to closed set recognition problems, those where all the objects of interest are in both the training and test sets. This effort addresses target classification using synthetic aperture radar (SAR) as the imaging sensor. In addition, this effort investigates the open set classification problem, where targets in the test set are not in the training set. In this open set problem, the objective is to correctly classify test target types represented in the training set while rejecting those not in the training set as unknown. This open set problem is addressed using a hybrid approach of CNNs combined with a novel support vector machine (SVM) approach called SV-means.
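As a generic illustration of open-set rejection (a simple distance-threshold stand-in in CNN feature space, not the SV-means method introduced in the paper):

```python
import numpy as np

def open_set_decision(feature, class_centers, threshold):
    """Assign the nearest trained-class center in feature space, but reject as
    'unknown' when the distance exceeds a threshold."""
    dists = np.linalg.norm(class_centers - feature, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] < threshold else -1   # -1 denotes "unknown"

# Toy usage: 10 known target types with 128-D feature centers.
centers = np.random.rand(10, 128)
print(open_set_decision(np.random.rand(128), centers, threshold=2.0))
```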
A performance modeling framework for large scale synthetically derived performance estimates
Author(s):
G. Steven Goley;
Brian Thelen;
Ismael Xique;
Adam R. Nolan
As the Air Force pushes toward reliance on autonomous systems for navigation, situational awareness, threat analysis, and target engagement, there are several requisite technologies that must be developed. Key among these is the concept of 'trust' in the autonomous system to perform its task. This term, 'trust', has many application-specific definitions. We propose that properly calibrated algorithm confidence is essential to establishing trust. To achieve properly calibrated confidence, we present a framework for assessing algorithm performance and estimating the confidence of a classifier's declaration. This framework has applications to improved algorithm trust, fusion, and diagnostics. We present a metric for comparing the quality of performance modeling and examine three different implementations of performance models on a synthetic dataset over a variety of operating conditions.
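One common way to check whether a classifier's declared confidence is calibrated is a reliability curve; the sketch below, using scikit-learn, is illustrative only and is not the performance-modeling metric proposed in the paper.

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Stand-in declarations: predicted confidences and whether each declaration was correct.
confidence = np.random.rand(1000)
correct = (np.random.rand(1000) < confidence).astype(int)  # well-calibrated toy data

# Per-bin observed accuracy vs. mean declared confidence.
frac_correct, mean_conf = calibration_curve(correct, confidence, n_bins=10)
gap = np.mean(np.abs(frac_correct - mean_conf))  # simple unweighted calibration-gap summary
print(f"mean per-bin calibration gap ~ {gap:.3f}")
```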
Articulation study for SAR ATR baseline algorithm
Author(s):
Christopher Paulson;
Adam Nolan;
Steve Goley;
Stephen Nehrbass;
Edmund Zelnio
This study investigates how operating conditions (OCs) impact the performance of a synthetic aperture radar (SAR) automatic target recognition (ATR) algorithm. We characterize the performance of the algorithm as a function of OCs to understand the algorithm's strengths and weaknesses and to guide further development. This paper examines the classification stage of a template method called Quantized Grayscale Matching (QGM). To thoroughly investigate this problem, asymptotic prediction code is used to generate synthetic data for both training and testing to answer several questions. How does articulation impact the performance of the algorithm? How much training data is needed to handle the articulation of the targets? Certain targets may need more training data than others, but why? Which articulation states present the biggest challenge, and why? How can synthetic results be made to exhibit characteristics similar to measured results? These answers will help guide algorithm development and provide a framework for exploring other OCs.