Proceedings Volume 7699

Algorithms for Synthetic Aperture Radar Imagery XVII


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 15 April 2010
Contents: 5 Sessions, 32 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2010
Volume Number: 7699

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7699
  • Advanced Image Formation I
  • Advanced Image Formation II
  • Advanced Motion Processing
  • Advanced Exploitation
Front Matter: Volume 7699
This PDF file contains the front matter associated with SPIE Proceedings Volume 7699, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Advanced Image Formation I
A beamforming algorithm for bistatic SAR image formation
Charles V. Jakowatz Jr., Daniel E. Wahl, David A. Yocky
Beamforming is a methodology for collection-mode-independent SAR image formation. It is essentially equivalent to backprojection. The authors have in previous papers developed this idea and discussed the advantages and disadvantages of the approach to monostatic SAR image formation vis-à-vis the more standard and time-tested polar formatting algorithm (PFA). In this paper we show that beamforming for bistatic SAR imaging leads again to a very simple image formation algorithm that requires a minimal number of lines of code and that allows the image to be directly formed onto a three-dimensional surface model, thus automatically creating an orthorectified image. The same disadvantage of beamforming applied to monostatic SAR imaging applies to the bistatic case, however, in that the execution time for the beamforming algorithm is quite long compared to that of PFA. Fast versions of beamforming do exist to help alleviate this issue. Results of image reconstructions from phase history data are presented.
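To illustrate how compact such a collection-mode-independent formulation can be, the sketch below backprojects range-compressed bistatic pulses onto an arbitrary 3-D pixel grid (e.g. a surface model). It is a minimal illustration under assumed array shapes and a simple phase convention, not the authors' implementation.

```python
# Minimal bistatic backprojection sketch (assumed interfaces, not the paper's code).
# Each range-compressed pulse is interpolated at the bistatic range of every output
# pixel and accumulated with a matched phase term, so the grid can be any 3-D surface.
import numpy as np

def bistatic_backproject(pulses, rng_axis, tx_pos, rx_pos, grid_xyz, fc, c=3e8):
    """pulses: (n_pulses, n_range) complex range-compressed profiles
       rng_axis: (n_range,) bistatic range samples [m] for those profiles
       tx_pos, rx_pos: (n_pulses, 3) transmitter/receiver positions per pulse
       grid_xyz: (n_pix, 3) output pixel locations (e.g. on a DEM surface)
       fc: center frequency [Hz]"""
    img = np.zeros(grid_xyz.shape[0], dtype=complex)
    for p in range(pulses.shape[0]):
        # bistatic range: transmitter-to-pixel plus pixel-to-receiver
        r_bi = (np.linalg.norm(grid_xyz - tx_pos[p], axis=1) +
                np.linalg.norm(grid_xyz - rx_pos[p], axis=1))
        # interpolate the range profile at each pixel's bistatic range
        samp = (np.interp(r_bi, rng_axis, pulses[p].real) +
                1j * np.interp(r_bi, rng_axis, pulses[p].imag))
        # remove the carrier phase associated with the bistatic delay
        img += samp * np.exp(1j * 2 * np.pi * fc * r_bi / c)
        # (fast beamforming variants approximate or factorize this per-pulse sum)
    return img
```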
Doppler synthetic aperture hitchhiker imaging
We consider passive airborne receivers that use backscattered signals from sources of opportunity transmitting fixed-frequency waveforms, a configuration we refer to as Doppler Synthetic Aperture Hitchhiker (DSAH). We present a novel image formation method for DSAH. Our method first correlates the windowed signal obtained from one receiver with the windowed, filtered, scaled, and translated version of the received signal from another receiver, and then uses microlocal analysis to reconstruct the scene radiance by weighted backprojection of the correlated signal. This imaging algorithm places the visible edges of the scene radiance at the correct locations and, under appropriate conditions, with the correct strength. We show that the resolution of the image is directly related to the length of the support of the windowing function and the frequency of the transmitted waveform. We present numerical experiments to demonstrate the performance of the proposed method.
Tutorial on Fourier space coverage for scattering experiments, with application to SAR
The Fourier Diffraction Theorem relates the data measured during electromagnetic, optical, or acoustic scattering experiments to the spatial Fourier transform of the object under test. The theorem is well known, but since it is based on integral equations and complicated mathematical expansions, the typical derivation may be difficult for the non-specialist. In this paper, the theorem is derived and presented using simple geometry, plus undergraduate-level physics and mathematics. For practitioners of synthetic aperture radar (SAR) imaging, the theorem is important to understand because it leads to a simple geometric and graphical understanding of image resolution and sampling requirements, and how they are affected by radar system parameters and experimental geometry. Also, the theorem can be used as a starting point for imaging algorithms and motion compensation methods. Several examples are given in this paper for realistic scenarios.
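As a small worked example of the geometric picture such a tutorial develops, the snippet below evaluates the standard far-field relations implied by the extent of Fourier-space coverage: slant-range resolution set by bandwidth, cross-range resolution set by the angular aperture. The radar parameters are illustrative assumptions only.

```python
# Back-of-the-envelope resolution from Fourier-space (k-space) coverage.
# Standard far-field relations: range resolution ~ c/(2B),
# cross-range resolution ~ lambda_c/(2*dtheta). Numbers are illustrative.
import numpy as np

c = 3e8                     # speed of light [m/s]
fc, B = 10e9, 600e6         # assumed X-band center frequency and bandwidth [Hz]
dtheta = np.deg2rad(3.0)    # assumed integration (aperture) angle [rad]

lam = c / fc
rho_range = c / (2 * B)          # set by radial extent of k-space support
rho_cross = lam / (2 * dtheta)   # set by angular extent of k-space support
print(f"range resolution       ~ {rho_range:.2f} m")
print(f"cross-range resolution ~ {rho_cross:.2f} m")
```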
Dual format algorithm for monostatic SAR
The polar format algorithm for monostatic synthetic aperture radar imaging is based on a linear approximation of the differential range to a scatterer, which leads to spatially-variant distortion and defocus in the resultant image. While approximate corrections may be applied to compensate for these effects, these corrections are ad-hoc in nature. Here, we introduce an alternative imaging algorithm called the Dual Format Algorithm (DFA) that provides better isolation of the defocus effects and reduces distortion. Quadratic phase errors are isolated along a single dimension by allowing image formation to an arbitrary grid instead of a Cartesian grid. This provides an opportunity for more efficient phase error corrections. We provide a description of the arbitrary image grid and we show the quadratic phase error correction derived from a second-order Taylor series approximation of the differential range. The algorithm is demonstrated with a point target simulation.
SAR image formation toolbox for MATLAB
While many synthetic aperture radar (SAR) image formation techniques exist, two of the most intuitive methods for implementation by SAR novices are the matched filter and backprojection algorithms. The matched filter and (non-optimized) backprojection algorithms are undeniably computationally complex. However, the backprojection algorithm may be successfully employed for many SAR research endeavors not involving considerably large data sets and not requiring time-critical image formation. Execution of both image reconstruction algorithms in MATLAB is explicitly addressed. In particular, a manipulation of the backprojection imaging equations is supplied to show how common MATLAB functions, ifft and interp1, may be used for straightforward SAR image formation. In addition, limits for scene size and pixel spacing are derived to aid in the selection of an appropriate imaging grid to avoid aliasing. Example SAR images generated through use of the backprojection algorithm are provided given four publicly available SAR datasets. Finally, MATLAB code for SAR image reconstruction using the matched filter and backprojection algorithms is provided.
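The recipe described above is built around MATLAB's ifft and interp1; the sketch below shows the equivalent manipulation for a single pulse in NumPy terms (np.fft.ifft and np.interp). The deramped-data convention, variable names, and upsampling factor are assumptions for illustration, not the toolbox code itself.

```python
# Sketch of the ifft + interpolation step of backprojection for one pulse.
# The full image is the sum of this contribution over all pulses.
import numpy as np

def backproject_pulse(phist, freqs, plat_xyz, grid_xyz, upsample=8):
    """phist: (n_freq,) one pulse of deramped phase history
       freqs: (n_freq,) uniformly spaced frequency samples [Hz]
       plat_xyz: (3,) platform position for this pulse
       grid_xyz: (n_pix, 3) image pixel locations (scene center at the origin)"""
    c = 3e8
    n = upsample * len(freqs)
    df = freqs[1] - freqs[0]
    # ifft turns the frequency samples into a range profile ...
    profile = np.fft.fftshift(np.fft.ifft(phist, n))
    # ... sampled at these differential ranges
    dr_axis = np.fft.fftshift(np.fft.fftfreq(n, d=df)) * c / 2
    # differential range of each pixel relative to the scene center
    dr = np.linalg.norm(grid_xyz - plat_xyz, axis=1) - np.linalg.norm(plat_xyz)
    samp = (np.interp(dr, dr_axis, profile.real) +
            1j * np.interp(dr, dr_axis, profile.imag))
    # restore the carrier phase at the minimum frequency (deramped convention)
    return samp * np.exp(1j * 4 * np.pi * freqs[0] * dr / c)
```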
An analytical expression for the three-dimensional response of a point scatterer for circular synthetic aperture radar
Three-dimensional (3-D) spotlight-mode synthetic aperture radar (SAR) images of point scatterers provide insight into the achievable effectiveness of exploitation algorithms given a variety of operating parameters such as elevation angle, azimuth or synthetic aperture extent, and frequency bandwidth. Circular SAR, using 360 degrees of azimuth, offers the benefit of persistent surveillance and the potential for 3-D image reconstruction improvement compared with limited aperture SAR, due in part to the increase in favorable viewing angles of unknown objects. The response of a point scatterer at the origin, or center of the imaging scene, is known and has been quantified for circular SAR in prior literature by a closed-form solution. The behavior of a point scatterer radially displaced from the origin has been previously characterized for circular SAR through implementation of backprojection image reconstructions. Here, we derive a closed-form expression for the response of an arbitrarily located point scatterer given a circular flight path. In addition, the behavior of the response of an off-center point target is compared to that of a point scatterer at the origin. Symmetries within the 3-D point spread functions (PSFs), or impulse response functions (IPRs), are also noted to provide knowledge of the minimum subset of SAR images required to fully characterize the response of a particular point scatterer. Understanding of simple scattering behavior can provide insight into the response of more complex targets, given that complicated targets may sometimes be modeled as an arrangement of geometrically simple scattering objects.
An analysis of 3D SAR from single pass nonlinear radar platform trajectories
An analysis of 3-D SAR image formation under the challenging condition of single-pass sampling in the elevation dimension is presented. The analysis is operationally relevant, as it is often not possible for a radar platform to collect radar data at sufficient grazing angles to satisfy the Nyquist sampling criterion. It is found that these sampling issues can partly be overcome through the use of non-linear radar platform trajectories. In conventional 2-D SAR imaging this approach can be viewed as detrimental, as the image depth of focus is reduced; for 3-D imaging, however, a reduced depth of focus has been found to be advantageous. This comes at the cost of unusual image point spread functions, with coarser resolution in the vertical dimension. It is possible to obtain a wide range of point spread functions as a function of collection parameters, including range, the form of the non-linear radar platform trajectories, and centre frequency. This work explores this parameter space to find advantageous radar collection geometries. The image point spread functions are difficult to characterise analytically and so a numerical approach is undertaken.
Autofocus for 3D imaging with multipass SAR
The emergence of 3D imaging from multipass radar collections motivates the need for 3D autofocus. While several effective methods exist to coherently align radar pulses for 2D image formation from a single elevation pass, further methods are needed to appropriately align radar collection surfaces from pass to pass. We propose one such method of 3D autofocus involving the optimization of a coherence factor metric for the dominant scatterers in an image scene. This method is demonstrated using a diffuse target from a multipass collection of circular SAR data.
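A minimal sketch of a coherence factor metric over passes is given below: it scores how coherently a set of dominant scatterers sums from pass to pass, with 1.0 indicating perfect alignment. The scatterer selection and the pass-to-pass correction being optimized are not shown, and the interface is an assumption rather than the authors' implementation.

```python
# Coherence factor across elevation passes for a set of dominant scatterers.
import numpy as np

def coherence_factor(samples):
    """samples: (n_pass, n_scatterer) complex values of dominant scatterers,
       one row per pass after applying a candidate pass-to-pass correction."""
    n_pass = samples.shape[0]
    num = np.abs(samples.sum(axis=0))**2               # coherent sum over passes
    den = n_pass * (np.abs(samples)**2).sum(axis=0)    # incoherent normalization
    return float((num / den).mean())                   # 1.0 = perfectly aligned passes
```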
Advanced Image Formation II
Superresolution inverse synthetic aperture radar (ISAR) imaging using compressive sampling
Suman K. Gunnala, Saibun Tjuatja
A method based on compressive sampling to achieve superresolution in ISAR imaging is presented. The superresolution ISAR imaging algorithm is implemented by enforcing the sparsity constraints via random compressive sampling of the measured data. Sparsity constraint ratio (SCR) is used as a design parameter. Mutual coherence is used as a quantitative measure to determine the optimal SCR. ISAR data for full angular sector as well as different partial angular sectors are utilized in this study. Results show that significant resolution enhancement is achieved around optimal SCR of 0.2.
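The snippet below sketches the mutual-coherence measure used above to select the sampling parameters: the largest normalized inner product between distinct columns of the (randomly subsampled) sensing matrix. The partial-Fourier example matrix and its sampling ratio are illustrative assumptions, not the paper's data.

```python
# Mutual coherence of a measurement matrix: the largest normalized inner
# product between distinct columns.
import numpy as np

def mutual_coherence(A):
    An = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(An.conj().T @ An)                        # Gram-matrix magnitudes
    np.fill_diagonal(G, 0.0)                            # ignore self-products
    return G.max()

# e.g. a randomly row-subsampled Fourier sensing matrix (illustrative only)
rng = np.random.default_rng(0)
F = np.fft.fft(np.eye(128)) / np.sqrt(128)
rows = rng.choice(128, size=32, replace=False)
print(mutual_coherence(F[rows, :]))
```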
Bayesian SAR imaging
We apply a maximum a posteriori (MAP) algorithm and a sparse learning via iterative minimization (SLIM) algorithm to synthetic aperture radar (SAR) imaging. Both MAP and SLIM are sparse signal recovery algorithms with excellent sidelobe suppression and high resolution properties. The former cyclically maximizes the a posteriori probability density function for a given sparsity promoting prior, while the latter cyclically minimizes a regularized least squares cost function. We show how MAP and SLIM can be adapted to the SAR imaging application and used to enhance the image quality. We evaluate the performance of MAP and SLIM using simulated complex-valued backscattered data from a backhoe vehicle. The numerical results show that both MAP and SLIM satisfactorily suppress the sidelobes and yield higher resolution than the conventional matched filter or delay-and-sum (DAS) approach. MAP and SLIM outperform the widely used compressive sampling matching pursuit (CoSaMP) algorithm, which requires the delicate choice of user parameters. Compared with the recently developed iterative adaptive approach (IAA), MAP and SLIM are computationally more efficient, especially with the help of the fast Fourier transform (FFT). Also, the a posteriori distribution given by the algorithms provides us with a basis for the analysis of the statistical properties of the SAR image pixels.
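For intuition about the cyclic minimization described above, here is a generic iteratively reweighted sparse-recovery sketch in the spirit of SLIM. It is not the authors' exact MAP or SLIM update; the exponent q, iteration count, and initialization are assumptions.

```python
# Generic SLIM-style iteratively reweighted sparse recovery (illustrative sketch).
import numpy as np

def slim_like(A, y, q=0.1, n_iter=30, eps=1e-8):
    """A: (m, n) steering/dictionary matrix, y: (m,) measurements.
       Returns a sparse complex reflectivity estimate x."""
    m, n = A.shape
    x = A.conj().T @ y / np.sum(np.abs(A)**2, axis=0)        # matched-filter init
    for _ in range(n_iter):
        p = np.abs(x)**(2 - q) + eps                         # sparsity-promoting weights
        eta = np.mean(np.abs(y - A @ x)**2)                  # noise-power estimate
        R = (A * p) @ A.conj().T + eta * np.eye(m)           # weighted data covariance
        x = p * (A.conj().T @ np.linalg.solve(R, y))         # cyclic update
    return x
```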
Experimental validation of a microwave tomographic approach for through-the-wall radar imaging
Experimental validation of a tomographic technique for radar imaging of 3-D scenes behind walls is presented. The imaging technique is based on a linear inverse scattering algorithm combined with a 2-D sliced approach, which ensures fast data processing and quick investigation of very large spatial regions. Further, we investigate the possibility of achieving 3-D reconstructions using a limited set of data with the objective of reduction in data acquisition time, while maintaining a reasonable image quality. Performance of the limited data schemes is evaluated using experimental data collected in a semi-controlled environment.
Contourlet domain hidden Markov tree based detection algorithm for DRDC through-wall SAR (TWSAR) system applications
Brigitte Chan
DRDC Ottawa is investigating high resolution synthetic aperture radar (SAR) techniques to perform 3-D imaging through walls in urban operations. Through-wall capabilities of interest include room mapping, imaging of in-wall structures, and detection of objects of interest. Such capabilities would greatly enhance situational awareness for military forces operating in the urban battle space. Current activities include hardware and software development and testing of an L-band through-wall SAR (TWSAR) system. Detection algorithms and automatic target recognition (ATR) systems are under investigation using experimental 2-D data. ATR may be more difficult in urban environments due to the high number of detectable objects and multi-path artifacts. Furthermore, penetrating through walls presents a formidable challenge, as wall effects can greatly interfere with image quality inside buildings. By classifying wall material, wall compensation algorithms can be applied to enhance the image. In this paper, we present results from our preliminary investigation on detecting internal and external wall structures and their features (including doors and windows as well as internal wall construction) from scenes acquired with a single-channel L-band TWSAR system. We evaluate the effectiveness of automatic detection based on the contourlet domain hidden Markov tree in the context of detecting wall edges and building features, while minimizing the amount of false edge detection. This work will form the basis of wall compensation algorithm development. The detection technique will also be used to detect objects of interest beyond walls once the SAR images have been wall compensated.
A videoSAR mode for the x-band wideband experimental airborne radar
A. Damini, B. Balaji, C. Parry, et al.
DRDC has been involved in the development of airborne SAR systems since the 1980s. The current system, designated XWEAR (X-band Wideband Experimental Airborne Radar), is an instrument for the collection of SAR, GMTI and maritime surveillance data at long ranges. VideoSAR is a land imaging mode in which the radar is operated in the spotlight mode for an extended period of time. Radar data is collected persistently on a target of interest while the aircraft is either flying by or circling it. The time span for a single circular data collection can be on the order of 30 minutes. The spotlight data is processed using synthetic apertures of up to 60 seconds in duration, where consecutive apertures can be contiguous or overlapped. The imagery is formed using a back-projection algorithm to a common Cartesian grid. The DRDC VideoSAR mode noncoherently sums the images, either cumulatively, or via a sliding window of, for example, 5 images, to generate an imagery stream presenting the target reflectivity as a function of viewing angle. The image summation results in significant speckle reduction which provides for increased image contrast. The contrast increases rapidly over the first few summed images and continues to increase, but at a lesser rate, as more images are summed. In the case of cumulative summation of the imagery, the shadows quickly become filled in. In the case of a sliding window, the summation introduces a form of persistence into the VideoSAR output analogous to the persistence of analog displays from early radars.
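A minimal sketch of the sliding-window noncoherent summation step is shown below, assuming a stack of already-registered complex frames on the common Cartesian grid; the window length of 5 frames mirrors the example in the text but is otherwise arbitrary, and the interface is an assumption.

```python
# Sliding-window noncoherent summation of registered VideoSAR frames
# (speckle reduction / persistence effect described above).
import numpy as np

def videosar_stream(frames, window=5):
    """frames: (n_frames, ny, nx) complex images on a common Cartesian grid.
       Returns noncoherently averaged magnitude frames."""
    mag = np.abs(frames)
    out = np.empty_like(mag)
    for k in range(mag.shape[0]):
        lo = max(0, k - window + 1)
        out[k] = mag[lo:k + 1].mean(axis=0)   # average the most recent `window` looks
    return out
```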
Synthetic aperture radar data visualization on the iPod Touch
Aaron Fouts, Rhonda Vickery, Uttam Majumder, et al.
A major area of focus for the Air Force is sensor performance in urban environments. Aircraft with multiple sensor modalities, such as Synthetic Aperture RADAR (SAR), Infrared (IR), and Electro-Optics (EO), are essential for intelligence, surveillance, and reconnaissance (ISR) of current and future urban battlefields. Although applications exist for visualization of these types of imagery, they usually require at least a laptop computer and internet connection. Field operatives need to be able to access georeferenced information about imagery as part of a Geographic Information System (GIS) on mobile devices. The iPod/iPhone has a 640x480 resolution multi-touch display, making it an excellent device for interacting with georeferenced imagery. We created an iPhone application that loads SAR imagery and allows the user to interact with it. The user multi-touch interface provides pan and zoom capabilities as well as options to change parameters relating to the query. We describe how operatives in the field can use this application to investigate SAR and GIS related problems on the iPhone mobile device, which otherwise would require a computer and Internet connection.
Advanced Motion Processing
SAR based adaptive GMTI
Duc Vu, Bin Guo, Luzhou Xu, et al.
We consider ground moving target indication (GMTI) and target velocity estimation based on multi-channel synthetic aperture radar (SAR) images. Via forming velocity versus cross-range images, we show that small moving targets can be detected even in the presence of strong stationary ground clutter. Moreover, the velocities of the moving targets can be estimated, and the misplaced moving targets can be placed back to their original locations based on the estimated velocities. Adaptive beamforming techniques, including Capon and the generalized likelihood ratio test (GLRT), are used to form velocity versus cross-range images for each range bin of interest. The velocity estimation ambiguities caused by the multi-channel array geometry are analyzed. We also demonstrate the effectiveness of our approaches using the Air Force Research Laboratory (AFRL) publicly-released Gotcha SAR based GMTI data set.
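The sketch below illustrates the adaptive Capon step for a single range bin: given multi-channel snapshots and a grid of candidate steering vectors over velocity, it forms the power estimate 1 / (a(v)^H R^-1 a(v)). The snapshot layout, steering-vector model, and diagonal loading level are assumptions for illustration, not the paper's exact processing.

```python
# Capon power spectrum over candidate target velocities for one range bin.
import numpy as np

def capon_velocity_profile(snapshots, steering):
    """snapshots: (n_chan, n_snap) multi-channel samples for one range bin
       steering: (n_chan, n_vel) steering vectors a(v) over a velocity grid
       Returns the Capon power estimate for each candidate velocity."""
    n_chan, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap               # sample covariance
    R += 1e-3 * np.trace(R).real / n_chan * np.eye(n_chan)    # diagonal loading
    Rinv = np.linalg.inv(R)
    denom = np.einsum('cv,cd,dv->v', steering.conj(), Rinv, steering)
    return 1.0 / denom.real                                   # 1 / (a^H R^-1 a)
```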
Detection/tracking of moving targets with synthetic aperture radars
Gregory E. Newstadt, Edmund Zelnio, Leroy Gorham, et al.
In this work, the problem of detecting and tracking targets with synthetic aperture radars is considered. A novel approach is presented in which prior knowledge of target motion is assumed to be known for small patches within the field of view. Probability densities are derived as priors on the moving target signature within backprojected SAR images, based on the work of Jao.1 Furthermore, detection and tracking algorithms are presented to take advantage of the derived prior densities. It was found that pure detection suffered from a high false alarm rate as the number of targets in the scene increased. Thus, tracking algorithms were implemented through a particle filter based on the Joint Multi-Target Probability Density (JMPD) particle filter2 and the unscented Kalman filter (UKF)3 that could be used in a track-before-detect scenario. It was found that the PF was superior to the UKF, and was able to track 5 targets at 0.1 second intervals with a tracking error of 0.20 ± 1.61m (95% confidence interval).
Analysis of motion disambiguation using multi-channel circular SAR
Ahmed R. Fasih, Carl W. Rossler, Joshua N. Ash, et al.
Combining moving target indication (MTI) radar with synthetic aperture radar (SAR) is of great interest to radar specialists, in terms of improving multiple-target tracking in large, urban scenes. A major obstacle to such a merger is the set of ambiguities induced by motion. Using statistical bounds, we quantify the improvement of moving target localization with multi-channel SAR over single-channel SAR and the more traditional MTI technique of displaced phase center array (DPCA) processing. We show that the potential for substantial improvements in localization performance is borne out by practical estimators based on sparse reconstruction algorithms, whose performance approaches the statistical bounds, even under clutter. We also outline a parallelization scheme for the nonquadratic regularized sparse reconstruction technique to utilize clusters for processing large datasets.
Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset
Dan E. Hack, Michael A. Saville
This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.
Estimation of vibration spectra including vibrating direction with synthetic aperture radar
In this paper, we develop a method for determining the vibration spectrum and vibrating direction of a vibrating object measured with Synthetic Aperture Radar. The methodology presented here is performed after the vibration history has been extracted from the SAR phase history by some other technique; then, our method is applied. The method is tested here with simulated data to verify its performance and to determine the conditions required for good vibration spectrum and direction estimates.
Analysis of focused dismount signatures
Thomas L. Lewis, Brian Rigling
The detection and characterization of dismount activity is of increasing interest, particularly using radar to allow for day/night operation from long range. Current RF dismount sensing concepts either employ short coherent intervals with fine range resolution or long coherent intervals with fine Doppler resolution. We propose use of both fine range resolution and long coherent intervals to achieve fine Doppler resolution. When dismounts are moving, this introduces the added complication of micro-range/Doppler signature drift through range-Doppler resolution cells. In this paper, we describe potential methods for focusing the signatures of moving dismounts, and then analyze the focused signature for potential features that might lead to the automatic classification of the dismounts into several categories.
Advanced Exploitation
A comparison of spatial sampling techniques enabling first principles modeling of a synthetic aperture RADAR imaging platform
Simulation of synthetic aperture radar (SAR) imagery may be approached in many different ways. One method treats a scene as a radar cross section (RCS) map and simply evaluates the radar equation, convolved with a system impulse response to generate simulated SAR imagery. Another approach treats a scene as a series of primitive geometric shapes, for which a closed form solution for the RCS exists (such as boxes, spheres and cylinders), and sums their contribution at the antenna level by again solving the radar equation. We present a ray-tracing approach to SAR image simulation that treats a scene as a series of arbitrarily shaped facetized objects, each facet potentially having a unique radio frequency optical property and time-varying location and orientation. A particle based approach, as compared to a wave based approach, presents a challenge for maintaining coherency of sampled scene points between pulses that allows the reconstruction of an exploitable image from the modeled complex phase history. We present a series of spatial sampling techniques and their relative success at producing accurate phase history data for simulations of spotlight, stripmap and SAR-GMTI collection scenarios.
Comparison of real and simulated SAR imagery of ships for use in ATR
N. Ødegaard, A. O. Knapskog, C. Cochin, et al.
Collecting real data to build a database for Automatic Target Recognition (ATR) in SAR imagery can be an overwhelming task. Simulated SAR images of targets are therefore desirable. To use simulations for ATR, one has to make sure they are good enough for discriminating among the different classes. This paper investigates the similarities between SAR images of ships simulated using a phenomenological SAR simulation tool and real data of the same targets collected with PicoSAR and TerraSAR-X. The study has been completed by FFI using the DGA MOCEM LT software. MOCEM generates a SAR image from a CAD model based on the major scattering mechanisms of the target in a matter of minutes. Simulations of several ships are compared to real data. The results obtained are highly dependent on the imaging geometry, as well as the CAD model complexity and the materials chosen for the target. Using normalized cross correlation, the simulation from the correct class always has the highest correlation with the real image when the scatterers are spatially distributed in the image. In other geometries, when the scatterers are more concentrated, the results were not satisfactory, and further testing using other materials, model complexities, and comparison metrics is necessary.
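For reference, the normalized cross correlation used as the comparison metric can be computed as in the sketch below, assuming the simulated and real chips are co-registered and of equal size; the interface is illustrative.

```python
# Normalized cross correlation between two co-registered image chips.
import numpy as np

def ncc(a, b):
    a = np.abs(a).ravel() - np.abs(a).mean()
    b = np.abs(b).ravel() - np.abs(b).mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```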
Civilian vehicle radar data domes
Kerry E. Dungan, Christian Austin, John Nehrbass, et al.
We present a set of simulated X-band scattering data for civilian vehicles. For ten facet models of civilian vehicles, a high-frequency electromagnetic simulation produced fully polarized, far-field, monostatic scattering for 360 degrees azimuth and elevation angles from 30 to 60 degrees. The 369 GB of phase history data is stored in a MATLAB file format. This paper describes the CVDomes data set along with example imagery using 2D backprojection, single pass 3D, and multi-pass 3D.
Classifying sets of attributed scattering centers using a hash coded database
We present a fast, scalable method to simultaneously register and classify vehicles in circular synthetic aperture radar imagery. The method is robust to clutter, occlusions, and partial matches. Images are represented as a set of attributed scattering centers that are mapped to local sets, which are invariant to rigid transformations. Similarity between local sets is measured using a method called pyramid match hashing, which applies a pyramid match kernel to compare sets and a Hamming distance to compare hash codes generated from those sets. By preprocessing a database into a Hamming space, we are able to quickly find the nearest neighbor of a query among a large number of records. To demonstrate the algorithm, we simulated X-band scattering from ten civilian vehicles placed throughout a large scene, varying elevation angles in the 35 to 59 degree range. We achieved better than 98 percent classification performance. We also classified seven vehicles in a 2006 public release data collection with 100% success.
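The fast database search described above reduces to nearest-neighbor lookup in Hamming space over precomputed hash codes, as in the sketch below; the pyramid-match hashing that produces the codes is not reproduced here, and the packed-bit layout is an assumption.

```python
# Nearest-neighbor lookup in Hamming space over precomputed hash codes.
import numpy as np

def hamming_nn(query_code, db_codes):
    """query_code: (n_bytes,) uint8 packed bits; db_codes: (n_db, n_bytes) uint8.
       Returns the index of the database record with the smallest Hamming distance."""
    xor = np.bitwise_xor(db_codes, query_code)
    dist = np.unpackbits(xor, axis=1).sum(axis=1)   # popcount per record
    return int(np.argmin(dist))
```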
Application of sparse dictionaries to SAR speckle reduction
Thomas R. Braun, John B. Greer
Synthetic Aperture Radar (SAR) provides day/night all weather imagery, and as such is being increasingly utilized for overhead reconnaissance. Additionally, the active, coherent nature of the system provides for analysis not readily achievable with electro-optical imagery. However, like all coherent systems, SAR imagery suffers degradation from speckle (a random interference pattern) which hinders interpretation. Herein, we investigate SAR denoising with a new method based on sparse reconstruction over learned dictionaries and show this approach performs better than the current state of the art speckle filters.
Target detection in SAR images using codifference and directional filters
Target detection in SAR images using region covariance (RC) and codifference methods is shown to be accurate despite the high computational cost. The proposed method uses directional filters in order to decrease the search space. As a result, the computational cost of the RC based algorithm significantly decreases. Images in the MSTAR SAR database are first classified into several categories using directional filters (DFs). Target and clutter image features are extracted using RC and codifference methods in each class. The RC and codifference matrix features are compared using the l1-norm distance metric. Support vector machines which are trained using these matrices are also used in decision making. Simulation results are presented.
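A minimal region covariance descriptor and l1 comparison are sketched below for orientation; the per-pixel feature set used here (intensity and gradients) is an assumption rather than the paper's exact choice, and the codifference variant is omitted.

```python
# Region covariance descriptor and an l1 comparison between two regions.
import numpy as np

def region_covariance(patch):
    """patch: (h, w) real image chip. Returns the covariance of per-pixel features."""
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([patch.astype(float).ravel(), gx.ravel(), gy.ravel(),
                      np.abs(gx).ravel() + np.abs(gy).ravel()])
    return np.cov(feats)                   # (4, 4) region covariance matrix

def l1_distance(C1, C2):
    return float(np.abs(C1 - C2).sum())    # elementwise l1 metric on descriptors
```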
A challenge problem for SAR change detection and data compression
This document describes a challenge problem whose scope is two-fold. The first aspect is to develop SAR CCD algorithms that are applicable for X-band SAR imagery collected in an urban environment. The second aspect relates to effective data compression of these complex SAR images, where quality SAR CCD is the metric of performance. A set of X-band SAR imagery is being provided to support this development. To focus research onto specific areas of interest to AFRL, a number of challenge problems are defined. The data provided is complex SAR imagery from an AFRL airborne X-band SAR sensor. Some key features of this data set are: 10 repeat passes, single phase center, and single polarization (HH). In the scene observed, there are multiple buildings, vehicles, and trees. Note that the imagery has been coherently aligned to a single reference.
FOPEN change detection experiments using a CARABAS public release data set
The detection of stationary targets under foliage is an extremely difficult problem. A viable solution to this problem involves using low-frequency FOPEN SAR in a change detection mode. The FOPEN SAR gathers a reference image of the area under surveillance before the targets have entered the area. At FOPEN frequencies the energy transmitted by the radar penetrates through the foliage and provides an image of the tree trunks and other stationary man-made objects (buildings, etc.). The SAR then gathers a test image of the area under surveillance; this test image is comprised of the tree trunks and other stationary man-made objects, plus the targets that have entered the scene. Comparing the test and reference images yields a change image of the area; the returns from tree trunks, buildings, and other man-made clutter are significantly cancelled, revealing the targets hidden under the foliage. This paper investigates some phenomenological aspects of FOPEN change detection using SAR imagery from the CARABAS-II VHF radar.
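A simple normalized amplitude change statistic between co-registered reference and test images is sketched below to make the comparison step concrete; the normalization is an assumption, and operational CARABAS change detection differs in detail.

```python
# Minimal amplitude change statistic between a registered reference/test pair.
import numpy as np

def change_image(test, ref, eps=1e-6):
    """test, ref: (ny, nx) co-registered magnitude images. Larger values flag
       pixels bright in the test image but weak in the reference (new targets)."""
    t, r = np.abs(test), np.abs(ref)
    return (t - r) / np.sqrt(0.5 * (t**2 + r**2) + eps)
```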
Classification of canonical scattering through sub-band analysis
The spectrum parted linked image test (SPLIT) algorithm was experimentally shown to estimate frequency-dependency of dominant scattering centers through sub-band analysis. Based on its demonstrated potential for classifying canonical scatterers, a theoretical model of the SPLIT algorithm is presented in this paper. Terms are defined, procedures are detailed, and a metric for total least squares model fitting is developed. In addition, the paper addresses multiple observations, measures of confidence, sidelobe interference and sensitivity to bandwidth and noise. Finally, it is described how the one-dimensional (1D) SPLIT algorithm can be extended for use with 2D and 3D imaging.
The effect of synthetic aperture radar image resolution on target discrimination
John E. McGowan, Steven C. Gustafson, Julie Ann Jackson, et al.
This paper details the effect of spatial resolution on target discrimination in Synthetic Aperture Radar (SAR) images. Multiple SAR image chips, containing targets and non-targets, are used to test a baseline Automatic Target Recognition (ATR) system with reduced spatial resolution obtained by lowering the pixel count or synthesizing a degraded image. The pixel count is lowered by averaging groups of adjoining pixels to form a new single value. The degraded image is synthesized by low-pass-filtering the image frequency space and then lowering the pixel count. To train a linear classifier, a two-parameter Constant False Alarm Rate (CFAR) detector is tested, and three different types of feature spaces are used: size, contrast, and texture. The results are scored using the Area Under the Receiver Operator Characteristic (AUROC) curve. The CFAR detector is shown to perform better at lower resolution. All three feature sets together performed well with the degradation of resolution; separately, the sets had different performances. The texture features performed best because they do not rely on the number of pixels on the target, while the size features performed the worst for the same reason. The contrast features yielded improved performance when the resolution was slightly reduced.
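The two-parameter CFAR prescreener compares each pixel against the mean and standard deviation of a surrounding background ring, as in the hedged sketch below; the window sizes and the threshold are illustrative assumptions, not the paper's settings.

```python
# Two-parameter CFAR prescreener sketch with a boxcar background ring.
import numpy as np
from scipy.ndimage import uniform_filter

def cfar_statistic(img, bg=41, guard=21):
    """img: (ny, nx) magnitude image. Returns (x - mu_bg) / sigma_bg per pixel,
       with the background estimated from a bg x bg window minus a guard area."""
    x = np.abs(img).astype(float)
    def ring_mean(a):
        big, small = uniform_filter(a, bg), uniform_filter(a, guard)
        n_big, n_small = bg**2, guard**2
        return (big * n_big - small * n_small) / (n_big - n_small)
    mu = ring_mean(x)
    var = np.maximum(ring_mean(x**2) - mu**2, 1e-12)
    return (x - mu) / np.sqrt(var)

# detections = cfar_statistic(image) > 5.0   # threshold sets the false-alarm rate
```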
Depth-based image registration
Bing Han, Christopher Paulson, Jiangping Wang, et al.
Image registration is a fundamental task in computer vision because it can significantly contribute to high-level computer vision and benefit numerous practical applications. Though many image registration techniques exist in the literature, significant research remains because several issues, such as the parallax problem, are still unsolved. Traditional image registration algorithms suffer from the parallax problem due to their underlying assumption that the scene can be regarded as approximately planar, which is not satisfied in the case of large depth variation in images with high-rise objects. With regard to the parallax problem, a new strategy is proposed that leverages depth information via 3D reconstruction. One novel idea is to recover the depth in the image region with high-rise objects to build an accurate transform function for image registration. Our method mitigates the parallax problem and achieves robust registration results, which is validated by our experiments. Our algorithm is attractive to numerous practical applications.