Proceedings Volume 10202

Automatic Target Recognition XXVII


Volume Details

Date Published: 7 June 2017
Contents: 6 Sessions, 25 Papers, 13 Presentations
Conference: SPIE Defense + Security 2017
Volume Number: 10202

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10202
  • Advanced Processing Methods for ATR I
  • Learning Methods in ATR
  • Advanced Systems for ATR
  • Advanced Processing Methods for ATR II
  • Advanced Signal Exploitation Methods
Front Matter: Volume 10202
Front Matter: Volume 10202
This PDF file contains the front matter associated with SPIE Proceedings Volume 10202, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Advanced Processing Methods for ATR I
Robust fusion-based processing for military polarimetric imaging systems
Duncan L. Hickman, Moira I. Smith, Kyung Su Kim, et al.
Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems, where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods is considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, polarimetric processing should be introduced in such a way as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.
Efficient generation of image chips for training deep learning algorithms
Sanghui Han, Alex Fafard, John Kerekes, et al.
Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real-world changes in lighting, location, or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles moving through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used with helicopters as targets, with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images with the target at the center and different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with the simulated images, especially when obtaining sufficient real data was particularly challenging.
Radiometric features for vehicle classification with infrared images
Seçkin Özsaraç, Gözde Bozdağı Akar
A vehicle classification system, which uses features based on radiometry, is developed for single-band infrared (IR) image sequences. In this context, the process is divided into three components: moving vehicle detection, radiance estimation, and classification. The major contribution of this paper lies in the use of radiance values as features, rather than the raw IR camera output, to improve the classification performance for the detected objects. The motivation is that each vehicle class has a discriminating radiance value that originates from the source temperature of the object, modified by the intrinsic characteristics of the radiating surface and the environment. As a consequence, significant performance gains are achieved through the use of radiance values in classification for the utilized measurement system.
Open set recognition of aircraft in aerial imagery using synthetic template models
Aleksander B. Bapst, Jonathan Tran, Mark W. Koch, et al.
Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
Learning Methods in ATR
Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions
Christine Kroll, Monika von der Werth, Holger Leuck, et al.
For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data that has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in deep convolutional neural networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and for efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability for real-time high-definition video exploitation.
Deep learning based multi-category object detection in aerial images
Lars W. Sommer, Tobias Schuchert, Jürgen Beyerer
Multi-category object detection in aerial images is an important task for many applications such as surveillance, tracking, or search and rescue. In recent years, deep learning approaches using features extracted by convolutional neural networks (CNN) have significantly improved detection accuracy on benchmark datasets compared to traditional approaches based on hand-crafted features, as used for object detection in aerial images. However, these approaches are not transferable one-to-one to aerial images, as the network architectures used have an insufficient feature map resolution for handling small instances. This results in poor localization accuracy or missed detections, since the network architectures were explored and optimized for datasets that differ considerably from aerial images, in particular in object size and the image fraction occupied by an object. In this work, we propose a deep neural network derived from the Faster R-CNN approach for multi-category object detection in aerial images. We show how the detection accuracy can be improved by replacing the network architecture with one especially designed for handling small object sizes. Furthermore, we investigate the impact of different parameters of the detection framework on the detection accuracy for small objects. Finally, we demonstrate the suitability of our network for object detection in aerial images by comparing it to traditional baseline approaches and deep learning based approaches on the publicly available DLR 3K Munich Vehicle Aerial Image Dataset, which comprises multiple object classes such as car, van, truck, bus, and camper.
Person re-identification across aerial and ground-based cameras by deep feature fusion
Arne Schumann, Jürgen Metzler
Person re-identification is the task of correctly matching visual appearances of the same person in image or video data while distinguishing appearances of different persons. The traditional setup for re-identification is a network of fixed cameras. However, in recent years mobile aerial cameras mounted on unmanned aerial vehicles (UAV) have become increasingly useful for security and surveillance tasks. Aerial data has many characteristics different from typical camera network data. Thus, re-identification approaches designed for a camera network scenario can be expected to suffer a drop in accuracy when applied to aerial data. In this work, we investigate the suitability of features, which were shown to give robust results for re-identification in camera networks, for the task of re-identifying persons between a camera network and a mobile aerial camera. Specifically, we apply hand-crafted region covariance features and features extracted by convolutional neural networks which were learned on separate data. We evaluate their suitability for this new and as yet unexplored scenario. We investigate common fusion methods to combine the hand-crafted and learned features and propose our own deep fusion approach which is already applied during training of the deep network. We evaluate features and fusion methods on our own dataset. The dataset consists of fourteen people moving through a scene recorded by four fixed ground-based cameras and one mobile camera mounted on a small UAV. We discuss strengths and weaknesses of the features in the new scenario and show that our fusion approach successfully leverages the strengths of each feature and outperforms all single features significantly.
Probabilistic SVM for open set automatic target recognition on high range resolution radar data
Jason D. Roos, Arnab K. Shaw
The Eigen-Template (ET) based closed-set feature extraction approach is extended to an open-set HRR-ATR framework to develop an Open Set Probabilistic Support Vector Machine (OSP-SVM) classifier. The proposed ET-OSP-SVM is shown to perform open set ATR on HRR data with 80% PCC for a 4-class MSTAR dataset.
Infrared image segmentation based on region of interest extraction with Gaussian mixture modeling
Infrared (IR) imaging has the capability to detect thermal characteristics of objects under low-light conditions. This paper addresses IR image segmentation with Gaussian mixture modeling. An IR image is segmented with the Expectation Maximization (EM) method, assuming the image histogram follows a Gaussian mixture distribution. Multi-level segmentation is applied to extract the region of interest (ROI). Each level of the multi-level segmentation is composed of k-means clustering, the EM algorithm, and a decision process. The foreground objects are individually segmented from the ROI windows. In the experiments, various methods are applied to an IR image capturing several humans at night.
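The per-level EM fit on pixel intensities described in this abstract can be sketched as a minimal 1-D Gaussian mixture EM in Python. This is an illustrative sketch, not the paper's code: the quantile-based initialization stands in for the k-means step, and all parameter choices are assumptions.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture to intensities x via EM."""
    # crude initialization: k quantiles as initial means (stand-in for k-means)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var() / k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        d = (x[:, None] - mu) ** 2
        p = pi * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * d).sum(axis=0) / n + 1e-9
    return pi, mu, var, r.argmax(axis=1)  # hard labels = segmentation
```

A decision process (as in the abstract) would then pick which component constitutes the foreground, e.g. the hottest one.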
Advanced Systems for ATR
Enhancement of thermal imagery using a low-cost high-resolution visual spectrum camera for scene understanding
Ryan E. Smith, Derek T. Anderson, Cindy L. Bethel, et al.
Thermal-infrared cameras are used for signal/image processing and computer vision in numerous military and civilian applications. However, the cost of high quality (e.g., low noise, accurate temperature measurement, etc.) and high resolution thermal sensors is often a limiting factor. On the other hand, high resolution visual spectrum cameras are readily available and typically inexpensive. Herein, we outline a way to upsample thermal imagery with respect to a high resolution visual spectrum camera using Markov random field theory. This paper also explores the tradeoffs and impact of upsampling, both qualitatively and quantitatively. Our preliminary results demonstrate the successful use of this approach for human detection and accurate propagation of thermal measurements in an image for more general tasks like scene understanding. A tradeoff analysis of the cost-to-performance as the resolution of the thermal camera decreases is provided.
Target recognition and phase acquisition by using incoherent digital holographic imaging
In this study, we propose Incoherent Digital Holographic Imaging (IDHI) for the recognition and phase acquisition of a dedicated target. Despite the recent development of a number of target recognition techniques such as LIDAR, these have had limited success in target discrimination, in part due to low resolution, low scanning speed, and limited computing power. The proposed system consists of an incoherent light source, such as an LED, a Michelson interferometer, and a digital CCD for the acquisition of four phase-shifting images. First, to compare relative coherence, we used a laser and an LED as sources, respectively. Through numerical reconstruction using the four-step phase-shifting method and the Fresnel diffraction method, we recovered the intensity and phase images of a USAF resolution target at a distance of about 1.0 m. In this experiment, we show a 1.2-times improvement in resolution compared to conventional imaging. Finally, to confirm the recognition of camouflaged targets having the same color as the background, we carried out holographic imaging tests in incoherent light. These results demonstrate the possibility of target detection and recognition using three-dimensional shape and size signatures and the numerical distance obtained from the phase information of the holographic image.
Key features for ATA / ATR database design in missile systems
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having robust detection and recognition algorithms is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the image database in the training and testing steps of the algorithm. This directly affects the recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easier way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type, and the territory. Designing an image database using a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is to design the database using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared to each other with regard to their pros and cons.
A data set for evaluating the performance of multi-class multi-object video tracking
Avishek Chakraborty, Victor Stamatescu, Sebastien C. Wong, et al.
One of the challenges in evaluating multi-object video detection, tracking, and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow the evaluation of both tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
Advanced Processing Methods for ATR II
Underwater image mosaicking and visual odometry
Firooz Sadjadi, Sekhar Tangirala, Scott Sorber
This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU), an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e., featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
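The abstract's recipe for recovering velocity from (i) the per-frame translation of the transform matrix, (ii) the height above the seabed, and (iii) the sensor resolution can be sketched as below. This is a hypothetical pinhole-camera approximation with a downward-looking camera; the function and parameter names are our own, not the paper's.

```python
import math

def ground_velocity(tx_px, ty_px, altitude_m, hfov_deg, image_width_px, fps):
    """Convert a frame-to-frame pixel translation (tx, ty) into ground-plane
    velocity, assuming a nadir-pointing camera at known altitude.
    The ground sampling distance (m/pixel) follows from altitude and FOV."""
    gsd = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2) / image_width_px
    vx = tx_px * gsd * fps   # m/s along image x
    vy = ty_px * gsd * fps   # m/s along image y
    return vx, vy, math.hypot(vx, vy)
```

In the paper's pipeline, (tx, ty) would come from the registration/mosaicking transform between consecutive enhanced frames.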
Motion-seeded object-based attention for dynamic visual imagery
This paper describes a novel system that finds and segments “objects of interest” from dynamic imagery (video). The system (1) processes each frame using an advanced motion algorithm that pulls out regions exhibiting anomalous motion, and (2) extracts the boundary of each object of interest using a biologically inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out in a very short time, and it can be used as a front end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, a significant improvement over detection using a baseline attention algorithm.
Fast Legendre moment computation for template matching
Normalized cross correlation (NCC) based template matching is insensitive to intensity changes and has many applications in image processing, object detection, video tracking, and pattern recognition. However, normalized cross correlation is computationally expensive, since it involves both correlation computation and normalization. In this paper, we propose a Legendre moment approach for fast normalized cross correlation and show that the computational cost of the proposed approach is independent of the template mask size, making it significantly faster than traditional mask-size-dependent approaches, especially for large mask templates. Legendre polynomials have been widely used in solving the Laplace equation in electrodynamics in spherical coordinate systems and in solving the Schrodinger equation in quantum mechanics. In this paper, we extend Legendre polynomials from physics to the computer vision and pattern recognition fields, and demonstrate that they can significantly reduce the computational cost of NCC-based template matching.
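For reference, a direct mask-size-dependent NCC implementation looks like the following sketch; its per-position cost grows with the template area, which is exactly what the proposed Legendre moment formulation avoids. Illustrative Python, not the paper's code.

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation of a template over an image (valid mode).
    Direct O(H*W*h*w) reference implementation; moment- or FFT-based
    methods exist precisely to avoid this per-window cost."""
    h, w = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i + h, j:j + w]
            wz = win - win.mean()           # zero-mean window
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            out[i, j] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out  # peak location = best match, peak value in [-1, 1]
```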
Automatic threshold selection for multi-class open set recognition
Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion: a candidate label is first estimated by the base classifier, and the true membership of the example to the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
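The feed-forward open set decision described in this abstract can be sketched in a few lines. Illustrative Python: the per-class threshold values are assumed to come from the iterative selection algorithm, which is not reproduced here.

```python
import numpy as np

UNKNOWN = -1  # label for rejected (open set) examples

def open_set_predict(posteriors, thresholds):
    """Feed-forward open set decision: take the base classifier's top class,
    then reject to UNKNOWN if its posterior falls below that class's
    threshold. `posteriors` is (N, C); `thresholds` has length C."""
    cand = posteriors.argmax(axis=1)                  # candidate label
    conf = posteriors[np.arange(len(cand)), cand]     # its posterior
    return np.where(conf >= thresholds[cand], cand, UNKNOWN)
```

Any base classifier producing calibrated class posteriors (the Platt-calibrated SVM of the earlier studies, or the alternatives this paper investigates) can be plugged in ahead of this rejection step.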
Automatic small target detection in synthetic infrared images
Ozan Yardımcı, İlkay Ulusoy
Automatic detection of targets from far distances is a very challenging problem. Background clutter and small target size are the main difficulties that must be overcome to reach a high detection performance at a low computational load. The pre-processing, detection, and post-processing approaches strongly affect the final results. In this study, various methods from the literature were first evaluated separately for each of these stages using simulated test scenarios. Then, a full detection system was constructed from the available solutions that performed best in terms of detection. However, although a precision rate of 100% was reached, the recall values stayed low, around 25-45%. Finally, a post-processing method was proposed that increased the recall value while keeping the precision at 100%. The proposed post-processing method, which is based on local operations, increased the recall value to 65-95% in all test scenarios.
Physics-based modeling tool development for spectral-sensing measurements under atmospheric attenuation
In an experimental setting where new sensing techniques are being developed and the source/medium/system parameters are in a constant state of change, a flexible radiometric prediction tool can be essential for experimental design and analysis. The Spectral Signature Sensing (SSS) analysis and visualization software is a user-friendly analytic tool designed for radiometric analysis and modeling of radiant optical energy from a source to a detection system. Transmission through the atmosphere is computed with MODTRAN, and the code features multiple source options and a flexible set of detector parameters. It also provides a Google Earth display function to visualize the simulation scenario. In this paper a summary is presented of the radiometric calculations applied in this modeling tool. The essential components and main features are briefly described, including the system-component inputs, other options such as saving and loading inputs, and the resulting spectral plots and radiometric output.
Advanced Signal Exploitation Methods
THz identification and Bayes modeling
THz identification is a developing technology. Sensing in the THz range offers potential for short-range radar sensing because THz waves penetrate an obscured atmosphere, such as fog, better than visible light. The lower scattering of THz compared to visible light also results in significantly better imaging than in the IR spectrum. A much higher contrast can be achieved in medical trans-illumination applications than with X-rays or visible light. The same qualities of THz radiation produce better tomographic images of hard surfaces, e.g., ceramics; this effect comes from the detection delay of reflected THz pulses. For special and commercial applications alike, industrial quality control of defects is facilitated at a lower cost. The effectiveness of THz wave measurements is increased with computational methods, one of which is Bayes modeling. Examples of this kind of mathematical modeling are considered.
Object shape extraction from cluttered bags
Nikolay Metodiev Sirakov
Passenger flow at US airports has increased in recent years. The larger number of passengers demands a lower number of false alarms and a higher accuracy of threat detection at the time of baggage screening. This paper presents an algorithm to detect and extract possible explosive containers in X-ray/CT bag images. The algorithm is composed of three main stages. The 1st stage makes the threat container stand out among the other objects in the bag image. The 2nd stage extracts the SURF features from the query and the bag images and matches the SURF feature vectors from the two images; the bag image points (pixels) at which the best matches are found define regions of interest (RoI), and different RoI in a bag are identified by separate clusters of points. At the 3rd stage of the algorithm, an enlarging active contour (AC) extracts the boundary of every RoI, with the starting point of every AC being the mass center of the corresponding cluster of SURF points. The theory is validated on a number of X-ray/CT images. A qualitative comparison with contemporary methods outlines the advantages and the contribution of the present algorithm.
Heterogeneous sharpness for cross-spectral face recognition
Matching images acquired in different electromagnetic bands remains a challenging problem. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images, known as cross-spectral face recognition. Among many unsolved issues is that of the quality disparity of the heterogeneous images. Images acquired in different spectral bands are of unequal image quality due to distinct imaging mechanisms, standoff distances, imaging environments, etc. To reduce the effect of quality disparity on recognition performance, one can manipulate images either to improve the quality of poor-quality images or to degrade the high-quality images to the quality level of their heterogeneous counterparts. To estimate the level of discrepancy in the quality of two heterogeneous images, a quality metric such as image sharpness is needed; it provides guidance on how much quality improvement or degradation is appropriate. In this work we consider sharpness as a relative measure of heterogeneous image quality. We propose a generalized definition of sharpness by first achieving image quality parity and then finding and building a relationship between the image quality of two heterogeneous images; the new sharpness metric is therefore named heterogeneous sharpness. Image quality parity is achieved by experimentally finding the optimal cross-spectral face recognition performance while the quality of the heterogeneous images is varied using a Gaussian smoothing function with different standard deviations. The relationship is established using two models: one involves a regression model and the other a neural network. To train, test, and validate the models, we use composite operators developed in our lab to extract features from heterogeneous face images and use the sharpness metric to evaluate the face image quality within each band. Images from three different spectral bands (visible light, near infrared, and short-wave infrared) are considered in this work. Both the error of the regression model and the validation error of the neural network are analyzed.
Detecting necessary and sufficient parts for assembling a functional weapon
Christian F. Hempelmann, Divya Solomon, Abdullah N. Arslan, et al.
Continuing our previous research on visually extracting and both visually and conceptually matching weapons, this study develops a method to determine whether a set of weapon parts visually extracted from images taken in different scenes can be assembled into a firing weapon. This new approach identifies potential weapons in the ontology by tracing detected necessary and sufficient parts through their meronymic relation to the whole weapon. A fast algorithm for identifying potential weapons that can be assembled from a given set of detected parts is presented.
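The meronymic necessary-parts check at the core of this idea can be illustrated with a toy sketch. This assumes a hypothetical flat ontology of part sets; the paper's ontology and matching algorithm are considerably richer.

```python
def can_assemble(detected_parts, weapon_ontology):
    """Return the weapons from the ontology whose necessary parts are all
    covered by the detected parts (a minimal meronymy check)."""
    detected = set(detected_parts)
    return [weapon for weapon, spec in weapon_ontology.items()
            if spec["necessary"] <= detected]  # subset test
```

A detected part set drawn from several scenes would be pooled before the check, matching the paper's cross-scene assembly question.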
Multi-ball and one-ball geolocation and location verification
D. J. Nelson, J. L. Townsend
We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track, since the pulse duration is only 25 milliseconds and the pulses may be transmitted only every six to ten seconds. Several fundamental problems are addressed, including demodulation of AIS/GMSK signals, verification of the emitter location, accurate frequency and delay estimation, and identification of pulse trains from the same emitter. In particular, we present several new correlation methods, including a cross-cross correlation that greatly improves correlation accuracy over conventional methods, and cross-TDOA and cross-FDOA functions that make it possible to estimate time and frequency delay without the need to compute a two-dimensional cross-ambiguity surface. By isolating pulses from the same emitter and accurately tracking the received signal frequency, we are able to accurately estimate the emitter location from the received Doppler characteristics.
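A plain correlation-peak delay estimator illustrates the conventional baseline that cross-cross correlation and cross-TDOA methods of this kind improve upon. Illustrative Python; the function name and convention are our own, and real TDOA processing would add interpolation for sub-sample accuracy.

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Estimate the time delay of y relative to x (in seconds) from the
    peak of their full cross-correlation, sampled at rate fs."""
    c = np.correlate(y, x, mode="full")   # lags -(N-1) .. (N-1)
    lag = c.argmax() - (len(x) - 1)       # convert index to signed lag
    return lag / fs
```

For an AIS pulse of 25 ms at a typical sampling rate, this whole-sample estimate is far too coarse for geolocation, which is one motivation for the refined correlation functions the paper develops.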