Sparse reconstruction for radar
Author(s):
Lee C Potter;
Philip Schniter;
Justin Ziniel
Imaging is not itself a system goal, but is rather a means to support inference tasks. For data processing with linearized signal models, we seek to report all high-probability
interpretations of the data and to report confidence labels in the form of posterior probabilities. A low-complexity recursive procedure is presented for Bayesian estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability mixing parameters
and an approximate minimum mean squared
error (MMSE) estimate of the parameter
vector. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate
the distinctions between MMSE estimation and maximum a posteriori probability (MAP) model selection.
The proposed tree-search algorithm provides exact ratios of posterior probabilities for a set of high-probability solutions to the sparse reconstruction problem. These relative probabilities reveal potential ambiguity among multiple candidate solutions arising from low signal-to-noise ratio and/or significant correlation among columns of the super-resolving regressor matrix.
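As a hedged illustration of the estimation setting described above (not the authors' recursive tree-search procedure), the sketch below combines a short list of candidate supports into posterior weights and an approximate MMSE estimate under a Gaussian prior on the active coefficients; the matrix A, data y, the variances, and the candidate supports are all illustrative assumptions.

```python
# Hedged sketch: combine a few candidate sparse supports into posterior weights
# and an approximate MMSE estimate. Not the paper's recursive tree search; A, y,
# sigma_x2, sigma_n2, and the candidate supports are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal

def mmse_over_supports(A, y, supports, sigma_x2, sigma_n2):
    """Weight each candidate support by its Gaussian evidence p(y | support),
    then mix the conditional-mean estimates with those weights."""
    n, p = A.shape
    log_ev, cond_means = [], []
    for s in supports:
        As = A[:, s]                                        # active columns
        Sy = sigma_x2 * As @ As.T + sigma_n2 * np.eye(n)    # marginal covariance of y
        log_ev.append(multivariate_normal.logpdf(y, mean=np.zeros(n), cov=Sy))
        x_s = sigma_x2 * As.T @ np.linalg.solve(Sy, y)      # conditional mean on support
        x_full = np.zeros(p)
        x_full[s] = x_s
        cond_means.append(x_full)
    log_ev = np.asarray(log_ev)
    w = np.exp(log_ev - log_ev.max())
    w /= w.sum()                                            # posterior model weights
    x_mmse = sum(wi * xi for wi, xi in zip(w, cond_means))  # approximate MMSE estimate
    return w, x_mmse
```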
Mono- and multistatic polarimetric sparse aperture 3D SAR imaging
Author(s):
Stuart DeGraaf;
Charles Twigg;
Louis Phillips
SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band)
frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric
coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By
exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth
extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN
(LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the
bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a
correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise
layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one
is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR
and LSCLEAN can reduce considerably the sidelobe and alias artifacts caused by these sparse elevation apertures.
Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture, provides effective, though sparse and unusual, 3-D k-space support.
Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic
backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static
simulations of a backhoe.
Joint space aspect reconstruction of wide-angle SAR exploiting sparsity
Author(s):
Ivana Stojanovic;
Mujdat Cetin;
William C. Karl
In this paper we present an algorithm for wide-angle synthetic aperture radar (SAR) image formation. Reconstruction
of wide-angle SAR holds the promise of higher resolution and better information about a scene, but it
also poses a number of challenges when compared to the traditional narrow-angle SAR. Most prominently, the
isotropic point scattering model is no longer valid. We present an algorithm capable of producing high resolution
reflectivity maps in both space and aspect, thus accounting for the anisotropic scattering behavior of targets. We
pose the problem as a non-parametric three-dimensional inversion problem, with two constraints: magnitudes
of the backscattered power are highly correlated across closely spaced look angles and the backscattered power
originates from a small set of point scatterers. This approach considers jointly all scatterers in the scene across all
azimuths, and exploits the sparsity of the underlying scattering field. We implement the algorithm and present reconstruction results on realistic data obtained from the XPatch Backhoe dataset.
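As a generic, hedged stand-in for the sparsity-exploiting inversion described above (not the authors' joint space-aspect algorithm), an l1-regularized linear inversion can be solved with iterative soft thresholding (ISTA); A, y, and lam below are placeholders for a linearized SAR operator, the measured data, and the regularization weight.

```python
# Hedged sketch: l1-regularized inversion via ISTA, a generic stand-in for a
# sparsity-exploiting reconstruction. A, y, and lam are illustrative placeholders.
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```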
Three-dimensional sparse-aperture moving-target imaging
Author(s):
Matthew Ferrara;
Julie Jackson;
Mark Stuff
If a target's motion can be determined, the problem of reconstructing a 3D target image becomes a sparse-aperture
imaging problem. That is, the data lies on a random trajectory in k-space, which constitutes a sparse
data collection that yields very low-resolution images if backprojection or other standard imaging techniques are
used. This paper investigates two moving-target imaging algorithms: the first is a greedy algorithm based on
the CLEAN technique, and the second is a version of Basis Pursuit Denoising. The two imaging algorithms are
compared for a realistic moving-target motion history applied to an Xpatch-generated backhoe data set.
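A minimal sketch of a CLEAN-like greedy step over a point-scatterer dictionary, in the spirit of (though not identical to) the first algorithm discussed above; the dictionary A, whose columns are assumed responses of candidate scatterer locations, and the phase-history vector y are illustrative.

```python
# Hedged sketch of a CLEAN-style greedy extraction over an assumed dictionary A
# of candidate point-scatterer responses; y is the measured data vector.
import numpy as np

def greedy_clean(A, y, n_points=10):
    """Iteratively pick the dictionary column best matching the residual,
    estimate its amplitude by least squares, and subtract its contribution."""
    residual = y.astype(complex)
    picks, amps = [], []
    for _ in range(n_points):
        corr = A.conj().T @ residual
        k = int(np.argmax(np.abs(corr)))            # strongest remaining scatterer
        a = corr[k] / np.vdot(A[:, k], A[:, k])     # least-squares amplitude
        residual = residual - a * A[:, k]
        picks.append(k)
        amps.append(a)
    return picks, amps, residual
```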
Multibaseline IFSAR for 3D target reconstruction
Author(s):
Emre Ertin;
Randolph L. Moses;
Lee C. Potter
We consider three-dimensional target reconstruction from SAR data collected on multiple complete circular apertures
at different elevation angles. The 3-D resolution of circular SAR systems is constrained by two factors: the
sparse sampling in elevation and the limited azimuthal persistence of the reflectors in the scene. Three dimensional
target reconstruction with multipass circular SAR data is further complicated by nonuniform elevation
spacing in real flight paths and non-constant elevation angle throughout the circular pass. In this paper we first
develop parametric spectral estimation methods that extend the standard IFSAR method of height estimation to
apertures at more than two elevation angles. Next, we show that linear interpolation of the phase history data
leads to unsatisfactory performance in 3-D reconstruction from nonuniformly sampled elevation passes. We then
present a new sparsity regularized interpolation algorithm to preprocess nonuniform elevation samples to create
a virtual uniform linear array geometry. We illustrate the performance of the proposed method using simulated
backscatter data.
Hyper-parameter selection in non-quadratic regularization-based radar image formation
Author(s):
Özge Batu;
Müjdat Çetin
We consider the problem of automatic parameter selection in regularization-based radar image formation techniques. It
has previously been shown that non-quadratic regularization produces feature-enhanced radar images; can yield
superresolution; is robust to uncertain or limited data; and can generate enhanced images in non-conventional data
collection scenarios such as sparse aperture imaging. However, this regularized imaging framework involves hyper-parameters whose choice is crucial because it directly affects the characteristics of the reconstruction. Hence
there is interest in developing methods for automatic parameter choice. We investigate Stein's unbiased risk estimator
(SURE) and generalized cross-validation (GCV) for automatic selection of hyper-parameters in regularized radar
imaging. We present experimental results based on the Air Force Research Laboratory (AFRL) "Backhoe Data Dome,"
to demonstrate and discuss the effectiveness of these methods.
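As a hedged illustration of the selection principle, generalized cross-validation is easiest to state for the quadratic (Tikhonov) special case, where the influence matrix is explicit; the paper itself addresses non-quadratic regularization, and A, y, and the lambda grid below are placeholders.

```python
# Hedged illustration of GCV-based hyper-parameter selection for the simpler
# quadratic (Tikhonov) case; A, y, and the lambda grid are assumed placeholders.
import numpy as np

def gcv_select(A, y, lambdas):
    """Return the lambda on the grid minimizing
    GCV(lam) = n * ||(I - H_lam) y||^2 / trace(I - H_lam)^2,
    where H_lam = A (A^T A + lam I)^(-1) A^T is the influence matrix."""
    n, p = A.shape
    scores = []
    for lam in lambdas:
        H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)   # influence matrix
        r = (np.eye(n) - H) @ y                                   # residual
        scores.append(n * (r @ r) / np.trace(np.eye(n) - H) ** 2)
    return lambdas[int(np.argmin(scores))], scores
```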
Fast CSAR algorithm
Author(s):
Jehanzeb Burki;
Christopher F. Barnes
Fourier-analysis-based focusing of synthetic aperture radar (SAR) data collected along a circular flight path
is a recent advancement in SAR signal processing. The fast CSAR algorithm uses the Householder transform to
obtain a ground plane circular SAR (CSAR) signal phase history from the slant plane CSAR phase history by
inverting the linear shift-varying system model, thereby circumventing the need for explicitly computing a pseudo-inverse.
The Householder transform has recently been shown to have improved error bounds and stability as an
underdetermined and ill-conditioned system solver, and it is computationally efficient.
An implementation of a fast backprojection image formation algorithm for spotlight-mode SAR
Author(s):
Daniel E. Wahl;
David A. Yocky;
Charles V. Jakowatz Jr.
In this paper we describe an algorithm for fast spotlight-mode synthetic aperture radar (SAR) image formation that
employs backprojection as the core, but is implemented such that its compute time is comparable to the often-used Polar
Format Algorithm (PFA). (Standard backprojection is so much slower than PFA that it is impractical to use in many
operational scenarios.) We demonstrate the feasibility of the algorithm on real SAR phase history data sets and show
some advantages in the SAR image formed by this technique.
Imaging that exploits spatial, temporal, and spectral aspects of far-field radar data
Author(s):
Margaret Cheney;
Brett Borden
We develop a linearized imaging theory that combines the spatial, temporal, and spectral aspects of scattered
waves. We consider the case of fixed sensors and a general distribution of objects, each undergoing linear
motion; thus the theory deals with imaging distributions in phase space. We derive a model for the data that is
appropriate for any waveform, and show how it specializes to familiar results when the targets are far from the
antennas and narrowband waveforms are used.
We develop a phase-space imaging formula that can be interpreted in terms of filtered backprojection or
matched filtering. For this imaging approach, we derive the corresponding point-spread function. We show
that special cases of the theory reduce to: a) Range-Doppler imaging, b) Inverse Synthetic Aperture Radar
(ISAR), c) Spotlight Synthetic Aperture Radar (SAR), d) Diffraction Tomography, and e) Tomography of Moving
Targets. We also show that the theory gives a new SAR imaging algorithm for waveforms with arbitrary ridge-like
ambiguity functions.
Distributed aperture imaging with multiple transmitters in complex environments
Author(s):
T. Varslot;
B. Yazici;
M. Cheney
We present a new image reconstruction method for distributed apertures operating in complex environments
with additive non-stationary noise. Our method is capable of exploiting information that we might have about:
multipath scattering in the environment; statistics of the objects to be imaged; statistics of the additive non-stationary
noise. The aperture elements are distributed spatially in an arbitrary fashion, and can be several
hundred wavelengths apart. Furthermore, our method facilitates multiple transmit apertures which operate
simultaneously, and is thus capable of handling a true multi-transmit-multi-receive scenario. We derive a set
of basis functions which is adapted to the given operating environment and sensor distribution. By selecting
an appropriate subset of these basis functions we obtain a subspace reconstruction which is optimal in the
sense of minimizing the mean-square error of the reconstructed image. Furthermore, as this subspace
determines which details will be visible in the reconstructed image, it provides a tool for evaluating the sensor
locations against the objects that we would like to see in the image. The implementation of our reconstruction
takes the form of a filter bank which is applied to the pulse-echo measurements. This processing can be performed
independently on the measurements obtained from each receiving element. Our approach is therefore well suited
for parallel implementation, and can be performed in a distributed manner in order to reduce the required
communication bandwidth between each receiver and the location where the results are merged into the final
image. We present numerical simulations that illustrate the capabilities of our method.
Subsidence measurement and DSM extraction of IFSAR data using anisotropic diffusion and wavelet denoising filters
Author(s):
Kenneth Sartor;
Josef De Vaughn Allen;
Emile Ganthier;
Mark Rahmes;
Gnana Bhaskar Tenali;
Samuel Kozaitis
The most commonly used smoothing algorithms for complex data processing are low pass filters. Unfortunately, an
undesired side effect of the aforementioned techniques is the blurring of scene discontinuities in the interferogram. For
Digital Surface Map (DSM) extraction and subsidence measurement, the smoothing of the scene discontinuities can
cause inaccuracy in the final product. Our goal is to perform spatially non-uniform smoothing to overcome the
aforementioned disadvantages. We achieve this by using an Anisotropic Non-Linear Diffuser (ANDI). In this paper we show the utility of ANDI filtering on simulated and actual Interferometric Synthetic Aperture Radar (IFSAR) data for preprocessing, subsidence measurement, and DSM extraction, overcoming the difficulties of typical filters. We also compare the results of the ANDI filter with a wavelet filter. Finally, we detail some results of the New Orleans IFSAR research project with the Canadian Space Agency, NASA, and USGS. The Harris LiteSite™ Urban 3D Modeling software is used to illustrate some of the results of our RADARSAT-1 processing.
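A hedged sketch of one common anisotropic non-linear diffusion scheme (Perona-Malik style) for edge-preserving smoothing of a real-valued image layer; the ANDI filter used in the paper may differ in its conductance model and in how it handles complex interferometric data, and kappa, dt, and n_iter are illustrative.

```python
# Hedged sketch of Perona-Malik-style anisotropic non-linear diffusion, a common
# edge-preserving smoother; kappa, dt, and n_iter are illustrative settings.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, dt=0.2):
    """Smooth within homogeneous regions while limiting diffusion across strong
    gradients, so scene discontinuities are preserved."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences toward the four neighbours
        dN = np.roll(u, -1, 0) - u
        dS = np.roll(u,  1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u,  1, 1) - u
        # conductance falls off with gradient magnitude (stops smoothing at edges)
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```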
Multipath simulation and removal from SAR imagery
Author(s):
Daniel B. André;
Robert D. Hill;
Christopher P. Moate
Current SAR imaging techniques assume that radar pulses are reflected from a scene by a single bounce event (reflection
from a sphere), or multiple bounces producing a fixed phase-centre (a trihedral). However, scattering is often more
complex; e.g. the pulse may reflect off the ground before interacting with a vehicle, leading to additional bright returns in
the image which are not located at the position of either bounce.
In this paper we use simulation to assess the effect of multipath on vehicle signatures and develop techniques for the
identification and removal of multipath returns from SAR imagery.
Through-the-wall polarimetric imaging
Author(s):
Fauzia Ahmad;
Moeness G Amin
Through-the-Wall Imaging is emerging as an affordable sensor technology supporting a variety of applications, such as
surveillance and reconnaissance, emergency rescue, and firefighting. Motivated by the desire to understand the
underlying phenomenology and performance bounds associated with imaging targets behind walls, several through-the-wall
imaging experiments were conducted at the Center for Advanced Communications (CAC), Villanova University.
These experiments aimed at supporting resolution, polarization, and localization of indoor targets and objects behind
walls, and provided valuable dual-polarized synthetic aperture data measurements of indoor scenes of different
complexity and population. In this paper, we present
full-polarization imaging results for a setting of calibrated reflectors behind a typical exterior-grade wall. These imaging results provide polarimetric scene characterization and are
shown to be in good agreement with the ground truth.
Autofocus for 3D imaging
Author(s):
Forest Lee-Elkin
Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass
radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We
propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes with
minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse
in elevation, which reduces the number of free variables and results in a system that is simultaneously solved
for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic
aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from
multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates
autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set"
X-Band radar data.
Recursive SAR imaging
Author(s):
Randolph L. Moses;
Joshua N. Ash
We investigate a recursive procedure for synthetic aperture imaging. We consider a concept in which a SAR
system persistently interrogates a scene, for example as it flies along or around that scene. In traditional SAR
imaging, the radar measurements are processed in blocks, by partitioning the data into a set of non-overlapping
or overlapping azimuth angles, then processing each block. We consider a recursive update approach, in which
the SAR image is continually updated, as a linear combination of a small number of previous images and a
term containing the current radar measurement. We investigate the crossrange sidelobes realized by such an
imaging approach. We show that a first-order autoregression of the image gives crossrange sidelobes similar to
a rectangular azimuth window, while a third-order autoregression gives sidelobes comparable to those obtained
from widely-used windows in block-processing image formation. The computational and memory requirements
of the recursive imaging approach are modest, on the order of M·N² where M is the recursion order (typically
≤ 3) and N² is the image size. We compare images obtained from the recursive and block processing techniques,
both for a synthetic scene and for X-band SAR measurements from the Gotcha data set.
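A conceptual, hedged sketch of the recursive update idea: the running image is a linear combination of a few previous images plus a term formed from the newest measurement. The backproject_pulse callable and the autoregression coefficients a are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an order-M autoregressive image update: each new image is a
# linear combination of the M previous images plus a term from the newest pulse.
# backproject_pulse and the coefficients a are illustrative assumptions.
import numpy as np
from collections import deque

def recursive_sar(pulses, backproject_pulse, a=(1.0,), image_shape=(256, 256)):
    """Maintain an order-M recursion over images, M = len(a); yields the
    updated image after each incoming pulse."""
    history = deque([np.zeros(image_shape, complex) for _ in a], maxlen=len(a))
    for p in pulses:
        # linear combination of previous images plus the current-measurement term
        new = sum(ai * Ii for ai, Ii in zip(a, history)) + backproject_pulse(p, image_shape)
        history.appendleft(new)
        yield new
```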
Beamforming as a foundation for spotlight-mode SAR image formation by backprojection
Author(s):
Charles V. Jakowatz Jr.;
Daniel E. Wahl;
David A. Yocky
In this paper we show that the technique for spotlight-mode SAR image formation generally known as "backprojection"
or "time-domain" is most easily derived and described in terms of the well-known methods of phased-array
beamforming. By contrast, backprojection has been typically developed via analogy to tomographic imaging, which
restricts this technique to the case of planar wavefronts. We demonstrate how the very simple notion of delay-and-sum
beamforming leads directly to the backprojection algorithm for SAR, including the case for curved wavefronts. We
further explain why backprojection offers a certain elegant simplicity for SAR imaging, and allows direct one-step
computation of several useful SAR products, including an orthographically correct image free of any geometric or
defocus effects from wavefront curvature and also free of the effects of terrain-elevation-induced defocus. (This product
requires as an input a pre-existing digital elevation map (DEM) of the scene to be imaged.) In addition, we
demonstrate why beamforming yields a mode-independent SAR image formation algorithm, i.e. one that can just as
easily accommodate strip-map or spotlight-mode phase histories collected on an arbitrary flight path.
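A minimal delay-and-sum backprojection sketch in the spirit of the discussion above, assuming range-compressed pulses rc[k, :] on a uniform fast-time range axis, per-pulse antenna positions pos[k], and a flattened pixel grid px; the variable names and the nearest-sample interpolation are illustrative simplifications.

```python
# Hedged sketch of delay-and-sum backprojection for spotlight SAR. Assumes
# range-compressed pulses rc (n_pulses x n_samples), antenna positions pos
# (n_pulses x 3), pixel coordinates px (n_pixels x 3), first-sample range r0,
# range-sample spacing dr, and carrier frequency fc. Illustrative only.
import numpy as np

def backproject(rc, pos, px, r0, dr, fc, c=3e8):
    """For each pulse: compute each pixel's range to the antenna, pull the
    matching range-compressed sample, remove the carrier phase, and sum."""
    img = np.zeros(len(px), dtype=complex)
    for k in range(rc.shape[0]):
        r = np.linalg.norm(px - pos[k], axis=1)                       # pixel-to-antenna range
        idx = np.clip(np.round((r - r0) / dr).astype(int), 0, rc.shape[1] - 1)
        img += rc[k, idx] * np.exp(1j * 4 * np.pi * fc * r / c)       # re-phase and sum
    return img
```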
Analyzing the effects of square versus non-square resolutions on automatic target recognition performance
Author(s):
Lee J. Montagnino;
Mary L. Cassabaum;
Shawn D. Halversen;
Christina L. Hebert;
Chad T. Rupp;
Matthew T. Young;
Neilson Ku
A multi-stage Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) system is analyzed across images
of various pixel areas achieved by both square and non-square resolution. Non-square resolution offers the ability to
achieve finer resolution in the range or cross-range direction with a corresponding degradation of resolution in the cross-range
or range direction, respectively. The algorithms examined include a standard 2-parameter Constant False Alarm
Rate (CFAR) detection stage, a discrimination stage, and a template-based classification stage. Performance for each
stage with respect to both pixel area and square versus non-square resolution is shown via cascaded Receiver Operating
Characteristic (ROC) curves. The results indicate that, for fixed pixel areas, non-square resolution imagery can achieve
statistically similar performance to square pixel resolution imagery in a multi-stage SAR ATR system.
An ATR challenge problem using HRR data
Author(s):
Bart Kahler;
John Querns;
Greg Arnold
This paper describes the automatic target recognition (ATR) challenge problem, which
includes source code for a baseline ATR algorithm, display utilities for the results, and a high
range resolution (HRR) data set consisting of 10 civilian vehicles. The Ku-band data in this data
set has been processed into 1-dimensional range profiles of vehicles in the open, moving in a
straight line. It is being released to the ATR community to facilitate the development of new and
improved HRR identification algorithms which can provide greater confidence and very high
identification performance. The intent of the baseline algorithm included with this challenge
problem is to provide an ATR performance comparison to newly developed algorithms. Single-look
identification performance results using the baseline algorithm and the data set are provided
as a starting point for algorithm developers. Both the algorithm and data set can support single-look and multi-look target identification.
Performance model for joint tracking and ATR with HRR radar
Author(s):
Shan Cong;
Lang Hong;
Erik Blasch
Joint tracking and ATR with HRR radar has been an important field of research in recent years. This paper addresses the issue of end-to-end performance modeling for an HRR-radar-based joint tracking and ATR system under various operating conditions. To this end, an ATR system with peak location and amplitude as features is considered. A complete set of models is developed to capture the statistics of all stages of processing, including the HRR signal, extracted features, Bayesian classifier, and tracker. In particular, we demonstrate that the effect of operating conditions on the features can be represented through a random variable with a log-normal distribution. The result is then extended to predicting system performance under specified operating conditions.
Although this paper is developed for one type of ATR and tracking system, the results indicate the performance trend of a general joint ATR and tracking system over operating conditions. They also provide guidance on how the empirical performance model of a general joint tracking and ATR system should be constructed.
Vehicle tracking for urban surveillance
Author(s):
William Roberts;
Leslie Watkins;
Dapeng Wu;
Jian Li
Tracking is widely used in a variety of computer vision applications, ranging from video surveillance to medical
imaging. The principal goal of tracking is to first identify regions of interest in a scene, and to then monitor the
movements or changes of the object through the image sequence. In this paper, we focus on unsupervised vehicle
tracking for low resolution aerial images taken from an urban area. Various optical effects have traditionally
made this tracking problem very challenging. Objects are often lost in tracking due to intensity changes that
result from shadowed or partially occluded regions of an image. Additionally, the presence of multiple vehicles
in a scene can lead to mistakes in tracking and significantly increased computation time. We propose a feature-based
tracking algorithm herein that will seek to mitigate these limitations. To first isolate vehicles in the initial
frame, we apply three-frame change detection to the registered images. Feature points are identified in the
labelled regions using the Harris corner criteria. To track a feature point from one frame to the next, we search
for the point around a predicted location, determined from the feature's previous motion, that minimizes the
sum-of-squared-differences value. Finally, during the course of the image sequence, our algorithm constantly
searches for new objects that might have entered the scene. We demonstrate the success of our tracking approach through experiments.
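A hedged sketch of the core matching step: search a small window around a feature point's predicted location for the position minimizing the sum-of-squared-differences against the previous frame's template; the patch and search radii, and the prediction itself, are illustrative assumptions.

```python
# Hedged sketch of SSD-based feature matching around a predicted location.
# pt and pred are (row, col) integer tuples; patch/search radii are illustrative,
# and pt is assumed to lie away from the image border.
import numpy as np

def ssd_match(prev_frame, next_frame, pt, pred, half_patch=7, half_search=10):
    """Return the location in next_frame, near the predicted point, whose patch
    best matches the patch around pt in prev_frame."""
    r, c = pt
    tmpl = prev_frame[r - half_patch:r + half_patch + 1,
                      c - half_patch:c + half_patch + 1].astype(float)
    best, best_ssd = pred, np.inf
    pr, pc = pred
    for dr in range(-half_search, half_search + 1):
        for dc in range(-half_search, half_search + 1):
            rr, cc = pr + dr, pc + dc
            cand = next_frame[rr - half_patch:rr + half_patch + 1,
                              cc - half_patch:cc + half_patch + 1].astype(float)
            if cand.shape != tmpl.shape:
                continue                          # skip windows clipped at the border
            ssd = np.sum((cand - tmpl) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (rr, cc), ssd
    return best, best_ssd
```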
A rotation-invariant transform for target detection in SAR images
Author(s):
Wenxing Ye;
Christopher Paulson;
Dapeng Oliver Wu;
Jian Li
Rotation of targets poses a great challenge for the design of an automatic image-based target detection system.
In this paper, we propose a target detection algorithm that is robust to rotation of targets. Our key idea
is to use rotation invariant features as the input for the classifier. For an image in Radon transform space,
namely R(b,θ), taking the magnitude of the 1-D Fourier transform along θ, we get |Fθ{R(b,θ)}|. It has been proved that the coefficients of this combined Radon and 1-D Fourier transform, |Fθ{R(b,θ)}|, are invariant to rotation of the image. These coefficients are used as the input to a maximum-margin classifier based on the I-RELIEF feature-weighting technique. Its objective is to maximize the margin between two classes and improve the robustness of
the classifier against uncertainties. For each pixel of a sample SAR image, a feature vector can be extracted from
a sub-image centered at that pixel. Our classifier then decides whether the pixel is a target or a non-target. This
produces a binary-valued image. We further improve the detection performance by connectivity analysis, image
differencing and diversity combining. We evaluate the performance of our proposed algorithm using the data set collected by the Swedish CARABAS-II system, and the experimental results show that our proposed algorithm
achieves superior performance over the benchmark algorithm.
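A hedged sketch of the rotation-invariant feature described above, using scikit-image's radon: take R(b, θ) of an image chip and the magnitude of a 1-D FFT along θ, which is (approximately) unchanged when the chip rotates; the chip size and angle grid are illustrative.

```python
# Hedged sketch of the combined Radon / 1-D Fourier magnitude feature: rotating
# the chip circularly shifts the theta axis of the sinogram, so the magnitude
# spectrum along theta is (approximately) rotation invariant. The angle grid is
# illustrative; requires scikit-image.
import numpy as np
from skimage.transform import radon

def rotation_invariant_features(chip, n_angles=180):
    thetas = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    R = radon(chip.astype(float), theta=thetas, circle=False)   # R(b, theta)
    F = np.fft.fft(R, axis=1)                                   # 1-D FFT along theta
    return np.abs(F).ravel()                                    # |F_theta{R(b, theta)}|
```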
Ripplet transform for feature extraction
Author(s):
Jun Xu;
Dapeng Wu
Efficient representation of images usually leads to improvements in storage efficiency, computational complexity
and performance of image processing algorithms. Efficient representation of images can be achieved by
transforms. However, conventional transforms such as the Fourier transform and the wavelet transform represent
discontinuities such as image edges inefficiently. To address this problem, we propose a new transform called the ripplet
transform. The ripplet transform is a higher dimensional generalization of the wavelet transform designed to
represent images or two-dimensional signals at different scales and different directions. The ripplet transform
is also a generalization of the curvelet transform. Specifically, the ripplet transform allows arbitrary support c
and degree d while the curvelet transform is just a special case of the ripplet transform (Type I) with c = 1 and
d = 2. Our experimental results show that the ripplet transform can provide efficient representation of images
that contain edges. The ripplet transform holds great potential for image denoising and image compression.
A target detection scheme for VHF SAR ground surveillance
Author(s):
Wenxing Ye;
Christopher Paulson;
Dapeng Oliver Wu;
Jian Li
Detection of targets concealed in foliage is a challenging problem and is critical for ground surveillance. To
detect foliage-concealed targets, we need to address two major challenges, namely, 1) how to remotely acquire
information that contains important features of foliage-concealed targets, and 2) how to distinguish targets from
background and clutter. Synthetic aperture radar operating in the low VHF band has shown very good penetration
capability in the forest environment, and hence the first problem can be satisfactorily addressed. The second
problem is the focus of this paper. Existing detection schemes can achieve good detection performance but at
the cost of high false alarm rate. To address the limitation of the existing schemes, in this paper, we develop
a target detection algorithm based on a supervised learning technique that maximizes the margin between two
classes, i.e., the target class and the non-target class. Specifically, our target detection algorithm consists of
1) image differencing, 2) maximum-margin classifier, and 3) diversity combining. The image differencing is
to enhance and highlight the targets so that the targets are more distinguishable from the background. The
maximum-margin classifier is based on a recently developed feature weighting technique called I-RELIEF; the
objective of the maximum-margin classifier is to achieve robustness against uncertainties and clutter. The
diversity combining utilizes multiple images to further improve the performance of detection, and hence it is a
type of multi-pass change detection. We evaluate the performance of our proposed detection algorithm, using
the SAR image data collected by the Swedish CARABAS-II system, which operates in the low VHF band around 20-90 MHz. The experimental results demonstrate the superior performance of our algorithm compared to the benchmark
algorithm associated with the CARABAS-II SAR image data. For example, for the same level of target detection
probability, our algorithm only produces 11 false alarms while the benchmark algorithm produces 86 false alarms.
Discrimination of civilian vehicles using wide-angle SAR
Author(s):
Kerry E. Dungan;
Lee C. Potter;
Jason Blackaby;
John Nehrbass
At high frequencies, synthetic aperture radar (SAR) imagery can be represented as a set of points corresponding
to scattering centers. Using a collection of sequential azimuths with a fixed aperture we build a cube of points for
each of seven civilian vehicles in the Gotcha public release data set (GPRD). We present a baseline study of the
ability to discriminate between the vehicles using strictly 2D geometric information of the scattering centers. The
comparison algorithm is independent of pose and translation, using a novel application of the partial Hausdorff distance (PHD) minimized through particle swarm optimization. Using the PHD has the added benefit of
reducing the effects of occlusions and clutter in comparing vehicles from pass to pass. We provide confusion
matrices for a variety of operating parameters including azimuth extent, various amplitude cutoffs, and various
parameters within PHD. Finally, we discuss extension of the approach to near-field imaging and to additional
point attributes, such as 3D location and polarimetric response.
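A minimal sketch of the directed partial Hausdorff distance between two 2-D point sets, the core of the comparison described above; the quantile parameter is illustrative, and the pose search (particle swarm over rotation and translation) is omitted.

```python
# Hedged sketch of the directed partial Hausdorff distance: the frac-quantile of
# nearest-neighbour distances rather than the maximum, which suppresses the
# influence of occluded or spurious points. The pose search is omitted here.
import numpy as np

def partial_hausdorff(A, B, frac=0.8):
    """Directed PHD from point set A (N x 2) to point set B (M x 2)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # pairwise distances
    nn = d.min(axis=1)                                          # nearest B point per A point
    k = max(int(np.ceil(frac * len(nn))) - 1, 0)                # rank of the quantile
    return np.sort(nn)[k]
```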