Proceedings Volume 10647

Algorithms for Synthetic Aperture Radar Imagery XXV


Volume Details

Date Published: 29 May 2018
Contents: 3 Sessions, 19 Papers, 19 Presentations
Conference: SPIE Defense + Security 2018
Volume Number: 10647

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10647
  • Synthetic Data and Deep Learning
  • Advanced Image Formation, 3D Reconstruction, and Moving Target Exploitation
Front Matter: Volume 10647
Front Matter: Volume 10647
This PDF file contains the front matter associated with SPIE Proceedings Volume 10647, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Synthetic Data and Deep Learning
"AFacet": a geometry based format and visualizer to support SAR and multisensor signature generation
Stephen Rosencrantz, John Nehrbass, Ed Zelnio, et al.
When simulating multisensor signature data (including SAR, LIDAR, EO, IR, etc.), geometry data are required that accurately represent the target. Most vehicular targets can, in real life, exist in many possible configurations. Examples of these configurations might include a rotated turret, an open door, a missing roof rack, or a seat made of metal or wood. Previously we have used the Modelman (.mmp) format and tool to represent and manipulate our articulable models. Unfortunately, Modelman is now an unsupported tool, and its binary format is undocumented. Some work has been done to reverse engineer a reader in Matlab so that the format could continue to be useful. This work was tedious and resulted in an incomplete conversion. In addition, the resulting articulable models could not be altered and re-saved in the Modelman format. The AFacet (.afacet) articulable facet file format is a replacement for the binary Modelman (.mmp) file format. There is a one-time, straightforward path for conversion from Modelman to the AFacet format. It is a simple ASCII, comma-separated, self-documenting format that is easily readable (and in many cases usefully editable) by a human with any text editor, preventing future obsolescence. In addition, because the format is simple, it is relatively easy for even the most novice programmer to create a program to read and write AFacet files in any language without any special libraries. This paper presents the AFacet format, as well as a suite of tools for creating, articulating, manipulating, viewing, and converting the more than 370 models (at the time of writing) that have been converted to the AFacet format.
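The paper defines the full AFacet grammar; the sketch below only illustrates the abstract's claim that a simple comma-separated, self-documenting format can be read without special libraries. The record tags ('v' for vertex, 'f' for facet) and field layout here are hypothetical placeholders, not the actual AFacet specification.

```python
# Minimal sketch of a reader for a simple ASCII, comma-separated facet
# format. The record types below ('v' vertex, 'f' facet) are purely
# hypothetical stand-ins for the real AFacet records.
import csv

def read_facet_file(path):
    vertices, facets = [], []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0].startswith("#"):    # skip comments / blanks
                continue
            tag, *fields = [s.strip() for s in row]
            if tag == "v":                           # vertex record: x, y, z
                vertices.append(tuple(float(x) for x in fields[:3]))
            elif tag == "f":                         # facet record: 3 vertex indices
                facets.append(tuple(int(i) for i in fields[:3]))
    return vertices, facets
```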
High-performance computing strategies for SAR image experiments
Michael A. Saville, David F. Short, Jeremy Trammell, et al.
This article presents different strategies for generating very large sets of SAR phase history and imagery for target recognition studies using the open-use Raider Tracer simulation tool. Previous data domes, based on Visual D, produced numerous data sets for ground targets above a flat surface, but each target had a single orientation. Here, the experiment specifies different target types, each above a ground plane, but with arbitrary pose, yaw, and pitch. The customized data set poses challenges for load balancing and file input/output synchronization within a limited CPU-hour budget. Strategies are presented to complete each image within a minimal time, and to generate the complete experiment set within a desired time.
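The abstract does not spell out its scheduling strategies, but a minimal sketch of dynamic load balancing over (target, yaw, pitch) cases might look like the following; simulate_case is a hypothetical stand-in for one Raider Tracer run, and writing per-case output files is one common way to sidestep file I/O synchronization.

```python
# Hypothetical sketch of dynamic load balancing for a large batch of
# independent simulation cases. Raider Tracer itself is not shown.
from multiprocessing import Pool
from itertools import product

def simulate_case(case):
    target, yaw, pitch = case
    # ... run one phase-history simulation here and write its own
    #     output file (per-case files avoid I/O synchronization) ...
    return case

if __name__ == "__main__":
    targets = ["t72", "bmp2", "btr70"]
    yaws = range(0, 360, 5)
    pitches = [-5, 0, 5]
    cases = list(product(targets, yaws, pitches))
    with Pool() as pool:
        # chunksize=1 gives dynamic scheduling: each worker pulls the
        # next case as soon as it finishes, balancing uneven runtimes.
        for _done in pool.imap_unordered(simulate_case, cases, chunksize=1):
            pass
```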
Using synthetic SAR data to analyze ATR performance under various conditions (Conference Presentation)
Christopher R. Paulson, Adam R. Nolan, Edmund G. Zelnio
Conference Presentation for "Using synthetic SAR data to analyze ATR performance under various conditions"
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
Joshua Misko, Youngsoo Kim, Chenchen Qi, et al.
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, the required processing power is difficult to achieve even by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, Field Programmable Gate Array, and graphics processor cores, all constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
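As a rough illustration of the two compression techniques named above, a minimal NumPy sketch of magnitude pruning and uniform weight quantization follows; the sparsity level and bit width are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of magnitude pruning and linear weight quantization.
import numpy as np

def prune(w, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize(w, bits=8):
    """Uniform symmetric quantization onto a signed integer grid."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)
    return q, scale              # store int8 weights plus one fp scale factor

w = np.random.randn(256, 256).astype(np.float32)
w_sparse = prune(w)
q, scale = quantize(w_sparse)
w_restored = q.astype(np.float32) * scale   # dequantize at inference time
```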
Generative adversarial networks for SAR image realism
Benjamin Lewis, Jennifer Liu, Amy Wong
In a combat environment, synthetic aperture radar (SAR) is attractive for several reasons, including automatic target recognition (ATR). In order to effectively develop ATR algorithms, data from a wide variety of targets in different configurations are necessary. Naturally, collecting all these data is expensive and time-consuming. To mitigate the cost, simulated SAR data can be used to supplement real data, but at the price of degraded accuracy and performance. We investigate the use of generative adversarial networks (GANs), a recent development in the field of machine learning, to make simulated data more realistic and therefore better suited to developing ATR algorithms for real-world scenarios. This class of machine learning algorithms has been shown to perform well in translation between image domains, making it a promising method for improving the realism of simulated SAR data. We compare the use of two different GAN architectures to perform this task. Data from the publicly available MSTAR dataset are paired with simulated data of the same targets and used to train each GAN. The resulting images are evaluated for realism and the ability to retain target class. We show the results of these tests and make recommendations for using this technique to inexpensively augment SAR data sets.
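A minimal PyTorch sketch of paired image-to-image GAN training of the kind described follows; toy networks stand in for the two architectures the paper compares, and random tensors stand in for the paired simulated/MSTAR chips.

```python
# One training step of a paired (pix2pix-style) translation GAN:
# generator maps a simulated chip to a "realistic" chip, discriminator
# judges (input, output) pairs. Architectures are toy placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(                      # generator: simulated chip -> refined chip
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
D = nn.Sequential(                      # discriminator on (input, output) pairs
    nn.Conv2d(2, 32, 3, stride=2), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 31 * 31, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

sim = torch.randn(8, 1, 64, 64)         # simulated chips (placeholder data)
real = torch.randn(8, 1, 64, 64)        # paired measured chips (placeholder data)

fake = G(sim)
d_loss = bce(D(torch.cat([sim, real], 1)), torch.ones(8, 1)) + \
         bce(D(torch.cat([sim, fake.detach()], 1)), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(torch.cat([sim, fake], 1)), torch.ones(8, 1)) + \
         100 * nn.functional.l1_loss(fake, real)    # pix2pix-style L1 term
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```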
Blending synthetic and measured data using transfer learning for synthetic aperture radar (SAR) target classification
Convolutional neural networks (CNNs) are state-of-the-art techniques for image classification; however, CNNs require an extensive amount of training data to achieve high accuracy. This demand presents a challenge because the existing amount of measured synthetic aperture radar (SAR) data is typically limited to just a few examples and does not account for articulations, clutter, and other target or scene variability. Therefore, this research aimed to assess the feasibility of combining synthetic and measured SAR images to produce a classification network that is robust to operating conditions not present in measured data and that may adapt to new targets without necessarily training on measured SAR images. A network adapted from the CIFAR-10 LeNet architecture in the MATLAB Convolutional Neural Network toolbox (MatConvNet) was first trained on a database of multiple synthetic Moving and Stationary Target Acquisition and Recognition (MSTAR) targets. Once the network classified with almost perfect accuracy, the synthetic data were replaced with corresponding measured data. Only the first layer of filters was permitted to change, in order to create a translation layer between synthetic and measured data. The low error rate of this experiment demonstrates that diverse clutter and target types not represented in measured training data may be introduced in synthetic training data and later recognized in measured test data.
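A sketch of the "translation layer" idea in PyTorch (the paper itself used MatConvNet): freeze every layer except the first bank of filters, then fine-tune only that layer on measured data. The network below is a toy stand-in, not the CIFAR-10 LeNet variant used in the paper.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 10))

# ... pre-train `net` on synthetic MSTAR-like imagery here ...

for p in net.parameters():          # freeze the whole network ...
    p.requires_grad = False
for p in net[0].parameters():       # ... then unfreeze only the first layer,
    p.requires_grad = True          # the synthetic-to-measured adapter

trainable = [p for p in net.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=1e-3)
# ... fine-tune on measured chips: only first-layer filters change ...
```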
Deep learning model-based algorithm for SAR ATR
Robert D. Friedlander, Michael Levy, Elizabeth Sudkamp, et al.
Many computer-vision-related problems have successfully applied deep learning to improve the error rates with respect to classifying images. As opposed to optically based images, we have applied deep learning via a Siamese Neural Network (SNN) to classify synthetic aperture radar (SAR) images. This application of Automatic Target Recognition (ATR) utilizes an SNN made up of twin AlexNet-based Convolutional Neural Networks (CNNs). Using the processing power of GPUs, we trained the SNN with combinations of synthetic images on one twin and Moving and Stationary Target Acquisition and Recognition (MSTAR) measured images on the second twin. We trained the SNN with three target types (T-72, BMP2, and BTR-70) and used a representative synthetic model from each target to classify new SAR images. Even with a relatively small quantity of data (with respect to machine learning), we found that the SNN performed comparably to a CNN and converged faster. The results showed the T-72s to be the easiest to identify, whereas the network sometimes confused the BMP2s and the BTR-70s. In addition, we incorporated two additional targets (M1 and M35) into the validation set. With less training (for example, one additional epoch), the SNN did not produce the same results as if all five targets had been trained over all the epochs. Nevertheless, an SNN represents a novel and beneficial approach to SAR ATR.
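A minimal PyTorch sketch of the twin arrangement follows, assuming a toy embedding CNN in place of the AlexNet-based twins and a standard contrastive loss (the paper's exact loss is not stated in the abstract). Weight sharing between the twins is what makes the network "Siamese".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 64))

    def forward(self, a, b):
        return self.embed(a), self.embed(b)   # same weights applied to both twins

def contrastive(za, zb, same, margin=1.0):
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

model = Siamese()
synth = torch.randn(4, 1, 64, 64)   # synthetic chips (placeholder data)
meas = torch.randn(4, 1, 64, 64)    # measured chips (placeholder data)
same = torch.tensor([1., 0., 1., 0.])   # 1 = same target class, 0 = different
loss = contrastive(*model(synth, meas), same)
loss.backward()
```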
Development of CNNs for feature extraction
Nicole Eikmeier, Rachel Westerkamp, Ed Zelnio
There are significant challenges in applying deep learning technology to classifying targets. Chief among them, the limited amount of measured data makes classifying targets in synthetic aperture radar imagery very difficult. Our approach is to use CNNs to extract feature-level information. We explore both regression and classification of features, and achieve accurate results in estimating the target's azimuth angle while using testing and training sets that have no overlap in target types. We introduce dropout into the network architecture to capture confidence in our algorithmic output, with the future goal of confidence across multi-sensor feature-level classification.
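One common way to get confidence from dropout, consistent with what the abstract describes, is Monte Carlo dropout at inference time; the sketch below assumes a toy network regressing (sin, cos) of azimuth, a standard trick for handling the angle's periodicity (circular statistics near the wrap-around point are glossed over here).

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.LazyLinear(128), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, 2))            # predict (sin, cos) of azimuth angle

chip = torch.randn(1, 1, 64, 64)  # one SAR chip (placeholder data)
net.train()                       # leave dropout ON at inference time
with torch.no_grad():
    draws = torch.stack([net(chip) for _ in range(50)])
angles = torch.atan2(draws[..., 0], draws[..., 1])
mean_az = angles.mean()           # azimuth estimate
spread = angles.std()             # small spread -> confident estimate
```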
Using deep learning for SAR image optimization
Peter John-Baptiste, Edmund Zelnio, Graeme E. Smith
The objective of this project is to apply Convolutional Neural Networks (CNNs) to optimization-based Synthetic Aperture Radar (SAR) imaging to learn good parameter choices that enhance desirable features while suppressing undesirable properties of SAR images of tactical ground vehicles. Specifically, the combined CNN and imaging algorithm is designed to achieve a sharper shadow feature and to reduce the speckle noise in the SAR image. A convolutional neural network is a form of machine learning based on artificial deep neural networks, which are inspired by the biology of the brain. The significance of deep networks over previous work in artificial neural networks is the use of a large number of layers, as compared to the standard three-layer network. The CNN is trained in a supervised regression setting in which a human operator provides the truth, pairing desired parameter values with each image in the training set. These learned parameters are then predicted by the CNN for each test image, and the parameters are fed to an optimization-based image formation algorithm that is regularized by an edge-detection term based on polynomial annihilation. The optimization is then solved using an alternating minimization approach. Several experiments are run comparing different networks, different learning algorithms (Adagrad, Adadelta, and RMSProp), and different input normalization techniques (variable scaling and z-score). The human-optimized images and the learned CNN images were compared with subjective visual comparisons and with objective measures, including Mean Squared Error (MSE) and the correlation coefficient of both the predicted image and its inverted amplitude image. Using this approach, the Adagrad network with normalized input performed the best, with an MSE of 0.0013 and correlation coefficients of 0.9667 (un-inverted image) and 0.9772 (inverted image).
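A NumPy sketch of the imaging half of this pipeline, under simplifying assumptions: a first-difference operator stands in for the polynomial-annihilation edge-detection term, and a constant stands in for the CNN-predicted regularization weight. The alternating minimization here splits min_x (1/2)||Ax - y||^2 + lam*||Dx||_1 into a linear solve and a soft-threshold step.

```python
import numpy as np

def alt_min(A, y, lam, rho=1.0, iters=50):
    """Alternate x <- least-squares solve, z <- soft-threshold of Dx."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)          # difference operator ("edge" term)
    z = np.zeros(n)
    lhs = A.T @ A + rho * D.T @ D
    for _ in range(iters):
        x = np.linalg.solve(lhs, A.T @ y + rho * D.T @ z)          # x-update
        Dx = D @ x
        z = np.sign(Dx) * np.maximum(np.abs(Dx) - lam / rho, 0.0)  # z-update
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 64))
x_true = np.repeat([0.0, 2.0, 0.5], [20, 24, 20])   # piecewise-constant scene
y = A @ x_true + 0.05 * rng.standard_normal(80)
lam = 0.5                                   # stand-in for the CNN-predicted value
x_hat = alt_min(A, y, lam)
```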
Deep learning for waveform estimation in passive synthetic aperture radar imaging
We propose Deep Learning (DL) as a framework for performing simultaneous waveform estimation and image reconstruction in passive synthetic aperture radar (SAR). We interpret image reconstruction as a machine learning task for which a deep recurrent neural network (RNN) can be constructed by unfolding the iterations of a proximal gradient descent algorithm. We formulate the problem by representing the unknown waveform in a basis, and extend the recurrent auto-encoder architecture we proposed in earlier work [1-3] by modifying the parameterization of the RNN to estimate waveform coefficients instead of the unknown phase components in the forward model. Under a convex prior on the scene reflectivity, the constructed network serves as a convex optimizer for the image in forward propagation, and as a non-convex optimizer for the unknown waveform coefficients in backpropagation. With the auto-encoder architecture, the unknowns of the problem are estimated by operations only in the data domain, performed in an unsupervised manner. The highly non-convex problem of backpropagation is guided to a feasible solution over the parameter space by initializing the network with the known components of the SAR forward model. Moreover, prior information regarding the waveform can be incorporated during initialization. We validate the performance of our method with numerical simulations.
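A PyTorch sketch of the unrolling idea under toy assumptions: a fixed number of ISTA iterations written out as layers, a waveform coefficient vector as the learnable parameter inside a linear forward model A(w) = sum_k w_k * A_k, and an unsupervised data-domain loss, mirroring the auto-encoder structure described above. The true passive-SAR forward operator is replaced by random matrices.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, Ak, n_layers=10, lam=0.05):
        super().__init__()
        self.Ak = Ak                                    # waveform basis operators A_k
        self.w = nn.Parameter(torch.ones(Ak.shape[0]) / Ak.shape[0])
        self.n_layers, self.lam = n_layers, lam

    def forward(self, y):
        A = torch.einsum("k,kmn->mn", self.w, self.Ak)  # waveform-dependent model
        step = 1.0 / torch.linalg.matrix_norm(A, ord=2) ** 2
        x = torch.zeros(A.shape[1])
        for _ in range(self.n_layers):                  # one ISTA step per "layer"
            r = x - step * A.T @ (A @ x - y)            # gradient step on data fit
            x = torch.sign(r) * torch.clamp(r.abs() - step * self.lam, min=0)
        return x, A

Ak = torch.randn(4, 60, 40)                             # toy basis (placeholder)
net = UnrolledISTA(Ak)
y = torch.randn(60)                                     # measured data (placeholder)
x_hat, A = net(y)
loss = torch.sum((A @ x_hat - y) ** 2)                  # data-domain, unsupervised
loss.backward()                                         # gradients reach waveform w
```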
Advanced Image Formation, 3D Reconstruction, and Moving Target Exploitation
SAR processing UWB VHF data without a motion measurement system
Jan Torgrimsson, Patrik Dammert, Hans Hellsten, et al.
SAR processing usually requires very accurate navigation data to form a focused image: the track must be measured to within fractions of the centre wavelength. For high frequencies (e.g., X-band) this condition is too strict; even with a cutting-edge motion measurement system, autofocus is a necessity. For low frequencies (e.g., VHF-band) a differential GPS (DGPS) is often an adequate solution on its own. For this case, however, it is actually conceivable to rely on autofocus capability instead of a motion measurement system. This paper describes how to form a SAR image without support from navigation data, within the scope of factorized geometrical autofocus (FGA). The FGA algorithm is a base-2 fast factorized back-projection realization with six free geometry parameters (per sub-aperture pair). These are tuned step by step until a sharp image is obtained. This procedure can compensate for an erroneous geometry (from a focus perspective). The FGA algorithm has been applied successfully to an ultra-wideband (UWB) data set acquired at VHF-band by the CARABAS 3 system. The track is measured accurately by means of a DGPS; we, however, adopt and adjust a basic geometry model instead. A linear, equidistant flight path at fixed altitude is assumed and refined at several resolution levels. With this approach, we emulate a stand-alone processing chain without support from navigation data. The resulting FGA image is compared to a reference image and verified to be focused. This indicates that it is feasible to form a VHF-band SAR image without a motion measurement system.
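The essence of FGA, tuning geometry parameters until the image is sharp, can be illustrated on a toy 1-D problem. The sketch below sweeps a single quadratic phase coefficient in place of FGA's six geometry parameters per sub-aperture pair, and uses a standard intensity-squared sharpness metric; it is an illustration of the principle, not the factorized algorithm itself.

```python
import numpy as np

n = 256
u = np.linspace(-0.5, 0.5, n)                     # normalized aperture position
true_err = 40.0                                   # unknown quadratic phase error
signal = np.exp(2j * np.pi * 30 * u)              # one point scatterer
data = signal * np.exp(1j * true_err * u ** 2)    # defocused "phase history"

def sharpness(img):
    p = np.abs(img) ** 2
    return np.sum(p ** 2)                         # intensity-squared metric

best = max(np.linspace(0, 80, 401),               # sweep the geometry hypothesis
           key=lambda a: sharpness(np.fft.fft(data * np.exp(-1j * a * u ** 2))))
print(f"estimated phase coefficient: {best:.1f} (truth {true_err})")
```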
Sparsity-driven coupled imaging and autofocusing for interferometric SAR
Oğuzcan Zengin, Ahmed Shaharyar Khwaja, Müjdat Çetin
We propose a sparsity-driven method for coupled image formation and autofocusing based on multi-channel data collected in interferometric synthetic aperture radar (IfSAR). Relative phase between SAR images contains valuable information. For example, it can be used to estimate the height of the scene in SAR interferometry. However, this relative phase could be degraded when independent enhancement methods are used over SAR image pairs. Previously, Ramakrishnan et al. proposed a coupled multi-channel image enhancement technique, based on a dual descent method, which exhibits better performance in phase preservation compared to independent enhancement methods. Their work involves a coupled optimization formulation that uses a sparsity enforcing penalty term as well as a constraint tying the multichannel images together to preserve the cross-channel information. In addition to independent enhancement, the relative phase between the acquisitions can be degraded due to other factors as well, such as platform location uncertainties, leading to phase errors in the data and defocusing in the formed imagery. The performance of airborne SAR systems can be affected severely by such errors. We propose an optimization formulation that combines Ramakrishnan et al.’s coupled IfSAR enhancement method with the sparsity-driven autofocus (SDA) approach of Önhon and Çetin to alleviate the effects of phase errors due to motion errors in the context of IfSAR imaging. Our method solves the joint optimization problem with a Lagrangian optimization method iteratively. In our preliminary experimental analysis, we have obtained results of our method on synthetic SAR images and compared its performance to existing methods.
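A NumPy sketch of the single-channel autofocus ingredient, in the spirit of Önhon and Çetin's SDA: alternate a closed-form per-measurement phase update with a sparse image update. The coupling term that ties the two IfSAR channels together, which is central to the paper, is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 120, 80
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
f_true = np.zeros(n, complex); f_true[[10, 40, 66]] = [2, 1.5, 1]
phi_true = rng.uniform(-1, 1, m)                    # unknown per-sample phase errors
y = np.exp(1j * phi_true) * (A @ f_true)

f = np.zeros(n, complex)
step = 1.0 / np.linalg.norm(A, 2) ** 2              # safe gradient step size
lam = 0.05
for _ in range(200):
    Af = A @ f
    phi = np.angle(y * np.conj(Af))                 # closed-form phase update
    yc = np.exp(-1j * phi) * y                      # phase-corrected data
    g = f - step * A.conj().T @ (Af - yc)           # gradient step on data term
    f = g * np.maximum(1 - step * lam / (np.abs(g) + 1e-12), 0)  # complex shrink
```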
Leveraging 3D models for SAR-based navigation in GPS-denied environments
Zachary Reid, Josh N. Ash
In this paper, we consider the use of synthetic aperture radar (SAR) to provide absolute platform position information in scenarios where GPS signals may be degraded, jammed, or spoofed. Two algorithms are presented; both leverage known 3D ground structure in an area of interest, e.g. provided by LIDAR data, to provide georeferenced position information to airborne SAR platforms. The first approach is based on the wide-aperture layover properties of elevated reflectors, while the second is based on correlating backprojected imagery with digital elevation imagery. Building on 3D backprojection, localization solutions result from non-convex optimization problems based on image sharpness or correlation measures. Results using measured GOTCHA data demonstrate localization errors of only a few meters with initial uncertainty regions as large as 16 km².
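At its core, the correlation-based second approach finds the offset that best aligns backprojected imagery with DEM-predicted imagery; a toy sketch with random stand-in images follows. The paper's full pipeline involves 3D backprojection and a non-convex search over platform position, neither of which is shown.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(2)
reference = rng.standard_normal((64, 64))        # DEM/LIDAR-predicted imagery
scene = np.pad(reference, 16)[7:71, 12:76]       # same scene, unknown shift

c = correlate2d(scene - scene.mean(), reference - reference.mean(), mode="same")
dy, dx = np.unravel_index(np.argmax(c), c.shape)
offset = (dy - scene.shape[0] // 2, dx - scene.shape[1] // 2)
print("estimated shift:", offset)                # maps to a platform position fix
```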
Three-dimensional total least square Prony method for 3D synthetic aperture scatterer localization
Synthetic Aperture Radar (SAR) creates a 2-D (azimuth-range) image from radar pulses collected equally spaced along a linear flight path. One 3-D scenario collects these pulses at each collection point along the path from a linear (elevation) array orthogonal to the flight path. From this 3-D data set, images (to pixel accuracy) or array processing (to subpixel accuracy) allow strong scatterers to be located. Streamlined algorithms are needed for such practical image and volume reflectivity function formation. Sacchini, Steedly, and Moses (1993) [3] present a 2-D Total Least Squares (TLS) Prony method that robustly identifies 2-D scatterer locations in SAR images. In this method, scatterer coordinates are matched by fitting the data in each dimension, fitting the resultant amplitudes in the cross-dimension, and then matching the highest-energy pairs in both these sets. This matching can produce excellent results for TLS Prony and for other 1-D scatterer localization algorithms. The algorithm is extended here to supply 3-D scatterer locations for simulated 3-D SAR data. Previous results for 3-D data show good localization using 2-D TLS Prony on azimuth-elevation slices and interpolating the range location between slices. Thresholding of the highest-energy points, however, is required to find the actual locations of scatterers. Range accuracy is also limited due to the use of only the two closest range samples. Consistency of results differs for scatterers of different amplitudes. This paper produces results for a new 3-D TLS Prony method. Algorithm accuracy, bias, and robustness in different scenarios are examined.
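The 1-D building block of this family of methods can be sketched compactly: total-least-squares linear prediction via the SVD null vector, with pole angles giving scatterer positions along one dimension. The per-dimension fits are then matched across dimensions as described above; that matching step is not shown.

```python
import numpy as np

def tls_prony_poles(x, p):
    """Estimate p complex poles from 1-D data via TLS linear prediction."""
    N = len(x)
    H = np.array([x[k:k + p + 1] for k in range(N - p)])  # (N-p) x (p+1) Hankel
    _, _, Vh = np.linalg.svd(H)
    c = Vh[-1].conj()                    # null-space vector = prediction polynomial
    return np.roots(c[::-1])             # polynomial roots are the pole estimates

# two scatterers along one dimension; pole angles encode their positions
rng = np.random.default_rng(3)
n = np.arange(64)
clean = np.exp(2j * np.pi * 0.11 * n) + 0.7 * np.exp(2j * np.pi * 0.23 * n)
x = clean + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
freqs = np.angle(tls_prony_poles(x, 2)) / (2 * np.pi)   # recovers ~0.11 and ~0.23
```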
Sparse 4D TomoSAR imaging in the presence of non-linear deformation
Ahmed Shaharyar Khwaja, Müjdat Çetin
In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.
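A NumPy sketch of the dictionary construction described above, with illustrative geometry constants that are not taken from the paper: each atom combines an elevation term, a linear deformation term, and a seasonal sinusoid whose amplitude is the variable quantity the method estimates.

```python
import numpy as np

wavelength = 0.031                      # X-band, metres (illustrative)
baselines = np.linspace(-150, 150, 25)  # perpendicular baselines, metres
times = np.linspace(0, 2, 25)           # acquisition times, years
r0, theta = 6e5, np.deg2rad(30)         # range and incidence (illustrative)

def atom(s, v, a):
    """Steering vector for elevation s, linear rate v, sinusoid amplitude a."""
    phase = (4 * np.pi / wavelength) * (
        baselines * s / (r0 * np.sin(theta))    # elevation term
        + v * times                             # linear deformation term
        + a * np.sin(2 * np.pi * times))        # seasonal (1 cycle/year) term
    return np.exp(1j * phase)

elevations = np.linspace(-20, 20, 41)
A_fixed = np.stack([atom(s, 0.002, 0.0) for s in elevations], axis=1)
A_var = np.stack([atom(s, 0.002, 0.003) for s in elevations], axis=1)
# A_fixed uses the linear model only; A_var adds an estimated sinusoid amplitude.
```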
A subaperture based approach for SAR moving target imaging by low-rank and sparse decomposition
Mubashar Yasin, Ahmed Shaharyar Khwaja, Müjdat Çetin
In this paper, we propose a synthetic aperture radar (SAR) moving-target imaging approach that exploits the low-rank and sparse decomposition (LRSD) of subaperture data. The low-rank component consists of the static background, whereas the sparse component captures the moving targets. This allows the reconstruction of a full-resolution moving target image separate from the static background image after LRSD. Furthermore, it facilitates the applicability of sparsity-driven moving target imaging in low signal-to-clutter ratio (SCR) scenarios. We demonstrate the effectiveness of our approach with experiments on synthetic as well as real SAR data.
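A NumPy sketch of the decomposition step, using a standard inexact-ALM robust-PCA iteration on a matrix whose columns would be flattened subaperture images; the demo data are real-valued toys, and the paper's SAR-specific processing around this step is not shown.

```python
import numpy as np

def lrsd(Y, iters=100):
    """Split Y into low-rank L (static background) + sparse S (movers)."""
    lam = 1.0 / np.sqrt(max(Y.shape))
    mu = 0.25 * np.prod(Y.shape) / np.abs(Y).sum()
    L = np.zeros_like(Y); S = np.zeros_like(Y); Z = np.zeros_like(Y)
    for _ in range(iters):
        # L-update: singular value thresholding
        U, sig, Vh = np.linalg.svd(Y - S + Z / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1 / mu, 0)) @ Vh
        # S-update: entrywise soft thresholding
        R = Y - L + Z / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Z += mu * (Y - L - S)              # dual update
    return L, S

rng = np.random.default_rng(4)
background = np.outer(rng.standard_normal(100), rng.standard_normal(20))  # rank 1
movers = np.zeros((100, 20)); movers[40, ::2] = 5.0   # target changing position
L, S = lrsd(background + movers)
```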
Insights into the complicated SAR signature shapes induced by braking targets
This investigation considers the shapes of synthetic aperture radar (SAR) imagery signature smears caused by surface targets that perform braking maneuvers during the SAR collection time. It is known that such maneuvering-target signatures can have a wide variety of two-dimensional (2D) shapes, as opposed to the simpler parabolic signatures induced by constant-velocity targets. The current paper examines the theoretical properties of these 2D signature shapes for cases in which the specific parameters of the target braking maneuver temporal profile are varied, including the rate at which the target decreases speed, the total amount of speed change, and the speed transition time within the SAR collection interval. Furthermore, the investigation yields new insights into the complicated SAR signature shapes that are indicative of targets undergoing such braking maneuvers. This analysis reveals that the SAR signature for a given braking target is effectively a composite of three curved smear portions. The first portion is part of a parabola obtained from the constant-velocity target motion at the start of the SAR collection. The second portion is part of a different parabola, generated from the final constant-velocity segment during the SAR collection interval. The third curved portion forms a connection between the parts of the two parabolas that are due to the initial and final constant-velocity segments of the full target motion during the SAR measurement interval.
Multi-function radio frequency ISR in contested environments (Conference Presentation)
Steven Jaroszewski, Allan Corbeil, Jeffrey Corbeil
Wide band radar waveforms that employ spread spectrum techniques are being investigated and experimentally tested during this Phase 2 SBIR effort. Such waveforms offer a nearly ideal thumbtack shaped ambiguity response. In this paper, spread spectrum coding and matched filtering techniques are examined with the goal of enhancing the performance of simultaneous GMTI and VideoSAR mode operation in airborne radars. The spread spectrum coding techniques provide nearly orthogonal waveforms and offer significant promise for enhanced operation in contested environments by distributing the transmitted energy over a large instantaneous bandwidth. Recent results are shown for the preliminary design and evaluation of waveforms generated using an Arbitrary Waveform Generator during recent radar ground tests and laboratory experiments. Results from enhanced loop-back ground tests with the transmitter in the loop are examined along with recent airborne SAR/GMTI collections performed using the GREP Spiral II radar. An analysis of predicted performance for simultaneous SAR and GMTI operation is presented together with a discussion of near-term flight testing plans.
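The thumbtack and near-orthogonality claims are easy to illustrate numerically: the sketch below evaluates the (cross-)ambiguity surface of random binary phase codes, which are illustrative stand-ins for the waveforms generated in the tests described.

```python
import numpy as np

rng = np.random.default_rng(5)
code_a = rng.choice([-1.0, 1.0], 1024)        # binary pseudo-random waveform
code_b = rng.choice([-1.0, 1.0], 1024)        # a second, independent code

def xambig(u, w, doppler_bins=64):
    """Cross-ambiguity magnitude |X(delay, doppler)| via FFT over delay."""
    n = len(u)
    out = np.empty((doppler_bins, n))
    for i, fd in enumerate(np.linspace(-0.03, 0.03, doppler_bins)):
        shifted = w * np.exp(2j * np.pi * fd * np.arange(n))
        out[i] = np.abs(np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(shifted))))
    return out

auto = xambig(code_a, code_a)     # sharp central spike, low pedestal ("thumbtack")
cross = xambig(code_a, code_b)    # uniformly low: nearly orthogonal codes
print(auto.max() / cross.max())   # rough orthogonality margin
```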
Synthetic aperture radar quantized grayscale reference automatic target recognition algorithm
Christopher Paulson, Jervon Wilson, Travious Lewis
This paper presents a Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) baseline algorithm that uses quantized grayscale matching (QGM) [10]. QGM is a classifier that uses a template matching approach to identify the target; it has been thoroughly tested as part of the MSTAR program [10] and has been shown to be robust to fluctuations in absolute amplitude. A performance study is conducted to show how translation and obscuration/shadowing affect the performance of the QGM algorithm on synthetic SAR data at multiple resolutions. This QGM implementation, along with the synthetic data generation capability, allows researchers to test the susceptibility of QGM to different operating conditions and, in addition, provides a baseline algorithm for comparison.
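A NumPy sketch of the quantized-matching idea follows, assuming equal-population gray-level quantization, which is one simple way to obtain the robustness to absolute-amplitude fluctuation mentioned above; the fielded MSTAR QGM classifier includes normalization and pose handling not shown here.

```python
import numpy as np

def quantize_gray(img, levels=8):
    """Map an amplitude image onto equal-population gray levels."""
    edges = np.quantile(img, np.linspace(0, 1, levels + 1)[1:-1])
    return np.digitize(img, edges)

def qgm_score(chip, template, levels=8):
    qc = quantize_gray(chip, levels)
    qt = quantize_gray(template, levels)
    return np.mean(qc == qt)          # fraction of matching quantized pixels

rng = np.random.default_rng(6)
template = rng.rayleigh(size=(64, 64))                      # stored signature
chip = 3.0 * template + 0.1 * rng.rayleigh(size=(64, 64))   # rescaled + noise
print(qgm_score(chip, template))      # stays high despite the amplitude change
```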
Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction
Theresa Scarnati, Anne Gelb
In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method to form SAR images from the multiple observations. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images and images from the MSTAR T-72 database.
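A NumPy sketch of the variance-based weighting idea on a 1-D toy: across several looks, sparse-domain (edge) coefficients that are consistently large mark real features, while coefficients dominated by speckle vary relative to their mean. The weights below use a coefficient-of-variation-style statistic, an illustrative choice rather than the paper's exact formula, inside a weighted-l1 reconstruction solved by ADMM; the SAR phase-history operator is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n, J = 128, 8
truth = np.repeat([0.0, 1.0, 0.4], [40, 50, 38])               # piecewise scene
looks = truth[None, :] * rng.rayleigh(scale=1.0, size=(J, n))  # J speckled looks

D = np.diff(np.eye(n), axis=0)          # sparsifying (first-difference) transform
C = looks @ D.T                         # edge-domain coefficients, one row per look
mean, var = C.mean(axis=0), C.var(axis=0)
w = var / (mean ** 2 + var + 1e-12)     # ~1 for speckle, small for stable edges

# ADMM for the weighted-l1 problem: min_x 0.5||x - y||^2 + lam * sum_i w_i |(Dx)_i|
y = looks.mean(axis=0)
rho, lam = 1.0, 0.5
z = np.zeros(n - 1); u = np.zeros(n - 1)
lhs = np.eye(n) + rho * D.T @ D
for _ in range(200):
    x = np.linalg.solve(lhs, y + rho * D.T @ (z - u))     # x-update (least squares)
    Dx = D @ x
    z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam * w / rho, 0)  # shrink
    u += Dx - z                                           # dual update
```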