Proceedings Volume 2484

Signal Processing, Sensor Fusion, and Target Recognition IV

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 5 July 1995
Contents: 9 Sessions, 66 Papers, 0 Presentations
Conference: SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics
Volume Number: 2484

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Multisensor Fusion
  • Multitarget Tracking and Sensor Management I
  • Multitarget Tracking and Sensor Management II
  • Signal and Image Processing I
  • Signal and Image Processing II
  • Signal and Image Processing III
  • Model-Driven Automatic Target Recognition
  • New Techniques in Automatic Target Recognition I
  • New Techniques in Automatic Target Recognition II
Multisensor Fusion
Adaptive fusion processor
An adaptive learning fusion processor, capable of fusing a mix of information at the data, feature, and decision levels, acquired from multiple sources (sensors as well as feature extractors and/or decision processors), is presented. Four alternative approaches (a self-partitioning neural net, an adaptive fusion process, an evidential reasoning approach, and a concurrence-seeking approach) were initially evaluated from a conceptual viewpoint, followed by some limited simulation and testing. Based on this assessment, an adaptive fusion processor employing innovative advances on the nearest-neighbor concept was selected for detailed implementation and testing using real-world field data. Results show the benefits of fusion in terms of improved performance compared to that obtainable from the individual component information streams input to the fusion processor, and they clearly bring out the feasibility and effectiveness of the new multi-level fusion concepts.
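The abstract does not spell out the nearest-neighbor fusion rule, so the following is only a minimal sketch of the underlying idea: classify a fused (concatenated) feature vector by majority vote among its k nearest labeled exemplars. The feature values and labels here are invented for illustration.

```python
from collections import Counter

def knn_fuse(train, query, k=3):
    """Classify a fused feature vector by majority vote among its k
    nearest training exemplars (squared Euclidean distance).
    `train` is a list of (feature_vector, label) pairs, where each
    feature vector concatenates the features from all sensors."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, query)), label)
        for vec, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Each exemplar concatenates, e.g., a radar feature and an IR feature.
train = [((0.1, 0.2), "clutter"), ((0.2, 0.1), "clutter"),
         ((0.9, 0.8), "target"), ((0.8, 0.9), "target"),
         ((0.85, 0.95), "target")]
print(knn_fuse(train, (0.9, 0.9)))  # target
print(knn_fuse(train, (0.0, 0.1)))  # clutter
```

Because the vote operates on the concatenated vector, agreement across sensors pulls the query toward exemplars that match in every information stream at once, which is the sense in which fusion outperforms any single stream.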
Adaptive sensor fusion
A perceptual reasoning system that adaptively extracts, associates, and fuses information from multiple sources, at various levels of abstraction, is considered as the building block for the next generation of surveillance systems. A system architecture is presented which makes use of both centralized and distributed predetection fusion, combined with intelligent monitor and control coupling both on-platform and off-board track- and decision-level fusion results. The goal of this system is to create a "gestalt fused sensor system" whose information product is greater than the sum of the information products from the individual sensors and whose performance is superior to that of any individual sensor or sub-group of combined sensors. The application of this architectural concept to the law enforcement arena (e.g., drug interdiction), utilizing multiple spatially and temporally diverse surveillance platforms and/or information sources, is used to illustrate the benefits of the adaptive perceptual reasoning system concept.
Bayesian attribute fusion using off-board ID sources
Rajat K. Saha
This paper describes the problems associated with the implementation of Bayesian target attribute fusion algorithms when the track identification (ID) information is received at the fusion center over several types of communication links, and nothing is known about the types of sensors used to derive such information. A typical message format consists of the following: friend, foe, or neutral, etc.; fighter, bomber, etc.; and platform-specific target types (F-15, MIG-29, etc.). It is assumed that the track information received over the link is a result of multisensor ID fusion at the transmitting stations, which employ soft decision fusion and provide confidence levels or probabilities associated with the message. At the command center, these off-board target IDs must be fused in order to resolve conflicts among originating message sources and to increase confidence in the fused ID. If the message format contains N attributes, the Bayesian approach to M-ary decision theory requires 2^N definitions of type I and type II error probabilities and their respective priors for a probabilistically exact procedure, which could be computationally prohibitive. This problem can be circumvented by mapping the N-dimensional decision space into one of lower dimension and manipulating the thresholds involved in the likelihood ratio test in the reduced dimension. Such an approach ignores the requirement that the hypothesis space be mutually exclusive and exhaustive; otherwise, strong inconsistencies in the decisions result. In this paper, the errors involved in making these two assumptions are discussed, and the effect of dependence of the evidence obtained at each source is also explored.
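The basic Bayesian fusion step the paper builds on can be sketched as follows: under the (often violated) assumptions that sources report independently and that the type hypotheses are mutually exclusive and exhaustive, the fused ID is the normalized product of the per-source confidences with the prior. The target types and numbers below are illustrative only.

```python
def bayes_fuse(prior, reports):
    """Fuse off-board target-ID reports under an independence
    assumption: posterior proportional to prior x product of likelihoods.
    `prior` maps target type -> prior probability; each report maps
    target type -> confidence (likelihood) from one source."""
    post = dict(prior)
    for rep in reports:
        for t in post:
            post[t] *= rep[t]
    total = sum(post.values())          # normalize over the hypothesis space
    return {t: p / total for t, p in post.items()}

prior = {"F-15": 0.5, "MIG-29": 0.5}
reports = [{"F-15": 0.8, "MIG-29": 0.2},   # two sources agree on F-15
           {"F-15": 0.6, "MIG-29": 0.4}]
fused = bayes_fuse(prior, reports)
print(round(fused["F-15"], 3))  # 0.857
```

Note how agreement between sources raises the fused confidence above either individual report; a source whose evidence is correlated with another's would make this product rule overconfident, which is the dependence effect the paper explores.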
Multiple target detection and tracking with a bistatic radar using adaptive neural networks
Christopher G. Morlier, S. N. Balakrishnan
Within the past decade and a half, there has been renewed interest in two separate areas: bistatic or multistatic radar, and artificial neural networks (ANNs). Multistatic radar systems offer many advantages over monostatic systems, one of which is better detection of objects with a low radar cross-section. ANNs are very useful for large-scale processing or storage of data. In this paper, we study a combination of multistatic radar and ANNs for multiple target detection and tracking. For the detection phase, a basic bistatic radar geometry is used, with noise added to simulate a more realistic situation. To track the targets, a two-layer backpropagation ANN is used to process the data. At first, the network was used in two phases: a learning phase and then a recall phase. Although this provided good results near the training time, the network became easily confused when targets crossed. An adaptive feature has been added that allows the weights to be modified on line as new data become available, which means the network learns continuously. Numerical results taken from tests on both circular and linear target paths are presented in this study.
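The paper's tracker is a two-layer backpropagation network; as a simpler stand-in for its on-line weight adaptation, the sketch below updates a two-tap linear predictor with the normalized LMS rule as each new position arrives, so it "learns continuously" in the same sense. The sinusoidal path and step size are illustrative choices, not the paper's.

```python
import math

def nlms_step(w, x, target, mu=1.0):
    """One on-line (normalized LMS) update: predict from the last two
    positions, then nudge the weights toward the new measurement."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    err = target - pred
    norm = sum(xi * xi for xi in x) + 1e-9
    w = [wi + mu * err * xi / norm for wi, xi in zip(w, x)]
    return w, pred

# A smoothly maneuvering (sinusoidal) target path.
path = [math.sin(0.5 * k) for k in range(60)]
w, errs = [0.0, 0.0], []
for k in range(2, 60):
    w, pred = nlms_step(w, path[k-2:k], path[k])
    errs.append(abs(path[k] - pred))
print(errs[-1] < errs[0])  # True: prediction improves as weights adapt
```

The key property shared with the paper's adaptive network is that there is no frozen training phase: every new measurement immediately reshapes the weights, which is what keeps the predictor from becoming stale when target behavior changes.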
Methods of approximate agreement for multisensor fusion
Richard Ree Brooks, S. Sitharama Iyengar
Multisensor fusion is a method for improving sensor reliability. Because individual sensors are prone to errors and noise, it is advisable to fuse readings from many sensors. This allows several technologies to be used to measure the value of a variable. Unfortunately, it is a non-trivial task to glean the best interpretation from a large number of partially contradictory sensor readings. A number of methods exist for finding the best approximate match for this type of redundant, but possibly faulty, data. This paper states the approximate matching problem and its application to multisensor fusion. Existing algorithms and recent developments are explained along with their performance and assumptions. A new algorithm is presented which unifies previous research. Appropriate applications and potential bottlenecks are discussed.
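One classic approximate-agreement method in this family is interval-overlap fusion in the style of Marzullo: report the region consistent with the largest number of sensor intervals. This is a sketch of that idea under illustrative data, not of the authors' unifying algorithm.

```python
def fuse_intervals(intervals):
    """Return (depth, lo, hi): the region covered by the largest
    number of sensor intervals (a Marzullo-style endpoint sweep).
    Readings that exclude this region can be treated as faulty."""
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # interval opens
        events.append((hi, -1))   # interval closes
    events.sort(key=lambda e: (e[0], -e[1]))  # opens before closes at ties
    best = depth = 0
    best_lo = best_hi = None
    for x, delta in events:
        prev = depth
        depth += delta
        if depth > best:                      # deepest overlap so far begins
            best, best_lo, best_hi = depth, x, None
        elif prev == best and depth < best and best_hi is None:
            best_hi = x                       # deepest overlap ends
    return best, best_lo, best_hi

# Three consistent rangefinders and one faulty one.
print(fuse_intervals([(1.0, 3.0), (2.0, 4.0), (2.5, 3.5), (10.0, 11.0)]))
# (3, 2.5, 3.0)
```

The faulty reading (10.0, 11.0) is simply outvoted: the fused interval [2.5, 3.0] is supported by three of the four sensors, which is the fault-tolerance property that motivates approximate agreement.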
Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed
John S. Watson, Bradford D. Williams, Sunjay E. Talele, et al.
Target acquisition in a high-clutter environment, in all weather, at any time of day represents a much-needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division (WL/MNG), has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost-efficient system with these capabilities. Critical elements of any such seeker are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model; data analysis workstations, such as the TABILS Analysis and Management System (TAMS); and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical-interface-driven simulation in which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally, with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1, is presented in this paper using MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.
Unified nonparametric data fusion
In several recent papers we demonstrated that classical single-sensor, single-source statistics can be directly extended to the multisensor, multisource case. The basis for this generalization is the finite random set, together with a set of direct parallels between random-set and random-vector theories which allow familiar statistical techniques to be directly transferred to data fusion problems. We previously showed that parametric point estimation theory can be generalized in this way, resulting in fully integrated data fusion algorithms. However, parametric estimation is not appropriate when sensor noise distributions are poorly known. Also, since most data fusion algorithms are partially ad hoc constructions, it is difficult to determine the overall statistical behavior of such algorithms even if the statistics of the sensors are well understood. This paper shows how a standard nonparametric estimation technique, the projection kernel approach to estimating unknown probability distributions, can be extended directly to the data fusion realm.
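For contrast with the paper's projection-kernel estimator, the textbook fixed-width Gaussian kernel density estimate, the single-sensor, single-source technique being generalized, can be written in a few lines. The sample values are illustrative.

```python
import math

def kde(samples, x, h=0.5):
    """Fixed-width Gaussian kernel density estimate at point x:
    place a Gaussian bump of bandwidth h on each sample and average."""
    coeff = 1.0 / (h * math.sqrt(2.0 * math.pi))
    return sum(coeff * math.exp(-0.5 * ((x - s) / h) ** 2)
               for s in samples) / len(samples)

# Residuals pooled from two sensors; the density is highest near 0.
samples = [-0.2, 0.0, 0.1, 0.3, 2.0]
print(kde(samples, 0.0) > kde(samples, 5.0))  # True
```

No parametric noise model is assumed anywhere: the estimate is built entirely from the observed samples, which is exactly the property that makes nonparametric methods attractive when sensor noise distributions are poorly known.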
Multitarget Tracking and Sensor Management I
Centralized fusion multisensor/multitarget tracker based on multidimensional assignments for data association
S. Chaffee, Aubrey B. Poore, Nenad Rijavec, et al.
Large classes of data association problems in multiple hypothesis tracking applications, including sensor fusion, can be formulated as multidimensional assignment problems. Lagrangian relaxation methods have been shown to solve these problems to within the noise level in real time, especially for dense scenarios and for multiple scans of data from multiple sensors. This work presents a new class of algorithms that circumvent the difficulties of similar previous algorithms. The computational complexity of the new algorithms is shown, via some numerical examples, to be linear in the number of arcs.
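For intuition, the two-dimensional (single-scan, single-sensor) special case of the assignment problem can be solved by brute force; real trackers need Lagrangian relaxation precisely because the multidimensional generalization of the search below does not stay tractable. The cost matrix is invented for illustration.

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force 2-D assignment: pair each track with the report
    that minimizes total cost. (Multi-scan, multi-sensor association
    generalizes this to S dimensions, where enumeration is hopeless.)"""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

# cost[i][j] = distance between predicted track i and measurement j
cost = [[1.0, 9.0, 8.0],
        [9.0, 2.0, 9.0],
        [7.0, 9.0, 1.5]]
print(best_assignment(cost))  # ((0, 1, 2), 4.5)
```

Even this toy version makes the combinatorial structure visible: the n! candidate pairings are exactly the hypotheses a multiple hypothesis tracker must prune, and the cost entries are the negative log-likelihoods the relaxation methods operate on.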
Distributed multisensor multitarget tracking algorithm
Ying Zhang, Henry Leung, Titus K. Y. Lo, et al.
In this paper, an efficient distributed multi-sensor multi-target tracking algorithm is proposed. This distributed tracker consists of two main components: local sensor-level trackers and a track fuser. In the track fuser, track data from local sensors are first transformed to a common coordinate frame and synchronized using a linear Kalman filter. A sequential minimum normalized distance nearest-neighbor correlation and minimum mean-square error fusion algorithm, combined with majority decision-making logic, is presented to correlate and fuse tracks from different sensors. Simulated data under various tracking conditions are used to evaluate the feasibility and effectiveness of this new distributed tracker.
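A minimal sketch of the fuser's correlate-then-fuse step, under simplifying assumptions not in the paper (scalar 1-D tracks already transformed to a common frame, a single sensor pair, and inverse-variance weighting as the minimum mean-square error combiner):

```python
def fuse_tracks(tracks_a, tracks_b, gate=9.0):
    """Correlate tracks from two sensors by minimum normalized
    distance, then fuse correlated pairs by inverse-variance (MMSE)
    weighting. Each track is (position, variance) in a common frame."""
    fused, used = [], set()
    for xa, va in tracks_a:
        best_j, best_d = None, gate
        for j, (xb, vb) in enumerate(tracks_b):
            if j in used:
                continue
            d = (xa - xb) ** 2 / (va + vb)      # normalized distance
            if d < best_d:
                best_d, best_j = d, j
        if best_j is None:
            fused.append((xa, va))              # uncorrelated: keep as-is
        else:
            used.add(best_j)
            xb, vb = tracks_b[best_j]
            w = vb / (va + vb)                  # MMSE weight
            fused.append((w * xa + (1 - w) * xb, va * vb / (va + vb)))
    return fused

a = [(10.0, 1.0), (50.0, 2.0)]
b = [(10.4, 1.0), (80.0, 2.0)]
print(fuse_tracks(a, b))  # first pair fuses near 10.2 with reduced variance
```

Note that the fused variance is smaller than either input variance, which is the quantitative payoff of track-level fusion; a fuller fuser would also retain unmatched tracks from the second sensor.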
New method for finding multiple meaningful trajectories
Zhonghao Bao, Gerald M. Flachs, Jay B. Jordan
Mathematical foundations and algorithms for efficiently finding multiple meaningful trajectories (FMMT) in a sequence of digital images are presented. A meaningful trajectory is motion created by a sentient being or by a device under the control of a sentient being. It is smooth and predictable over short time intervals. A meaningful trajectory can suddenly appear or disappear in the image sequence. The development of the FMMT is based on these assumptions. A finite state machine in the FMMT is used to model the trajectories under conditions of occlusions and false targets. Each possible trajectory is associated with an initial state of a finite state machine. When two frames of data are available, a linear predictor is used to predict the locations of all possible trajectories. All trajectories within a certain error bound are moved to a monitoring trajectory state. When trajectories attain three consecutive good predictions, they are moved to a valid trajectory state and considered to be locked into a tracking mode. If an object is occluded while in the valid trajectory state, the predicted position is used to continue the track; however, the confidence in the trajectory is lowered. If the trajectory confidence falls below a lower limit, the trajectory is terminated. Results are presented that illustrate the FMMT applied to tracking multiple munitions fired from a missile in a sequence of images. Accurate trajectories are determined even in poor images where the probabilities of miss and false alarm are very high.
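The trajectory state machine described above can be sketched directly. The thresholds, confidence increments, and 1-D constant-velocity predictor are illustrative choices, not the paper's values.

```python
INITIAL, MONITORING, VALID = range(3)

class Trajectory:
    """Finite-state track: promoted to VALID after three consecutive
    good predictions, coasted on occlusion with reduced confidence,
    and terminated when confidence falls below a floor."""
    def __init__(self, p0, p1):
        self.prev, self.cur = p0, p1
        self.state, self.good, self.conf = INITIAL, 0, 1.0

    def predict(self):
        return 2 * self.cur - self.prev     # linear predictor from two frames

    def update(self, meas, err_bound=1.0, floor=0.4):
        pred = self.predict()
        if meas is not None and abs(meas - pred) <= err_bound:
            self.prev, self.cur = self.cur, meas
            self.good += 1
            self.conf = min(1.0, self.conf + 0.1)
            if self.state == INITIAL:
                self.state = MONITORING
            if self.good >= 3:
                self.state = VALID
        else:                                # occlusion or false target
            self.prev, self.cur = self.cur, pred   # coast on the prediction
            self.good = 0
            self.conf -= 0.25
            if self.conf < floor:
                self.state = INITIAL         # terminate the trajectory
        return self.state

t = Trajectory(0.0, 1.0)
for z in (2.1, 2.9, 4.0):                    # three good predictions
    t.update(z)
print(t.state == VALID)  # True
t.update(None)                               # one occlusion: keep coasting
print(t.state == VALID)  # True
```

Repeated occlusions keep draining confidence until the track drops back to the initial state, which is how the machine rejects false targets while surviving brief dropouts.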
Model for the detection and tracking of multiple targets
James A. Anderson, N. C. Mohanty, Pradeep Kumar Bhattacharya, et al.
The detection and tracking of multiple targets in the presence of clutter in infrared images involve sophisticated algorithms and signal processing that are important aspects of the development of massively parallel processing. The problem is difficult if the target intensity is low and the target is small compared to the clutter. Further, the target size and velocity are usually unknown. It is desirable to obtain optimum filtering to detect low-intensity targets and to evaluate the performance of the filter and tracking algorithms based on observed data. Computer algorithms for tracking the targets have been developed. However, an accurate clutter model is the key to precise target detection. Clutter is usually a non-stationary process, so it is necessary to estimate the power spectral density of the clutter in real time so that a multi-dimensional matched filter can be designed to detect the targets. This paper formulates the model and the filter design.
Genetic algorithms for learning and design of optimal fuzzy trackers
Wen-Ruey Hwang, Wiley E. Thompson
A methodology combining genetic algorithms (GAs) and fuzzy algorithms for the learning and design of optimal fuzzy trackers is presented. With the aid of genetic algorithms, optimal rules for fuzzy logic controllers and membership functions can be designed without a human operator's experience or a control engineer's knowledge. The approach presented here involves searching the decoded parameters of the membership functions and finding the optimal control rules based upon a fitness value defined in terms of a performance criterion. Two applications are presented: the first deals with a GA that adjusts the fuzzy tracker at run time on the basis of performance indices, and the second deals with a model reference adaptive algorithm based on a crisp model of the closed-loop system. The GA changes the parameters of the fuzzy tracker and the fuzzy membership functions in such a way that the closed-loop system behaves like the reference model.
Space-based sensor and target engagement planning for the Midcourse Space Experiment
C. Ray Mitchell, William Tom Prestwood
This paper addresses the problem of determining the launch time of a ballistic target to allow a space-based sensor such as the Midcourse Space Experiment (MSX) satellite to collect multisensor data. The Midcourse Space Experiment addresses many Ballistic Missile Defense Organization (BMDO) systems issues concerned with surveillance, acquisition, tracking, and target discrimination using infrared, visible, and ultraviolet passive sensors. Experiments with targets of interest to BMDO National and Theater Missile Defense (NMD/TMD) programs address issues including missile acquisition against the hard Earth, high-altitude plume signatures, deployment of multiple bodies and pen-aids from a post-boost vehicle, resolved-object track and discrimination, bulk filtering for debris, and re-entry viewing. Typical MSX data collection requirements for NMD and TMD experiments are presented along with the concept of a spacecraft feasible region. The NMD experiments use long- and intermediate-range targets, and the TMD experiments use shorter-range targets. The feasible region is introduced to describe the locus of satellite positions at target burnout that satisfy all data collection requirements over the resulting satellite/target encounter while not exceeding equipment constraints. Daily satellite drift and target launch windows are developed for typical MSX experiments being planned. The methodologies and techniques presented apply to any space-based sensor and ballistic target data collection scenario.
Threat object map handover
C. Dana Crowell, Mike Lash
This report summarizes the Threat Object Map (TOM) handover analysis that the ODA team has performed during the past year. The areas of study include evaluating data from the STORM 4 and STORM 6 missions to determine: (1) performance of a radar-to-optical-interceptor TOM handover, (2) sensitivities to data latency both above and within the atmosphere, (3) platform sensitivities to closely spaced objects, and (4) sensitivity to N objects handed up from the radar to M objects on the interceptor focal plane. This analysis is limited to metric-only TOM handover and does not include generalized TOM evaluation. The analysis uses the OMEGA and TOMAHOC codes. OMEGA models the radar noise. TOMAHOC (Threat Object Map and Handover Code) performs the metric handover; it contains bias-removal algorithms and a sparse Munkres algorithm.
Multitarget Tracking and Sensor Management II
Simulation study of a fuzzy-logic-based controller for laser cavities
The laser beam intensity in a given output plane depends on the beamwidth measurement. To satisfy the application requirements, the beamwidth can be suitably modified either by using external optics or by varying the resonator parameters. The latter method is most appropriate for solid-state lasers. In this paper, a fuzzy controller to modify and maintain a desired beamwidth for stable resonators is discussed. Simulation results indicate that a fuzzy-logic-based controller can be used to achieve a desired output beamwidth within the available range, and to maintain the beamwidth at a specified value within tolerable limits.
Laser-beam diagnostics using fuzzy logic
The measurement of laser beam quality is of prime importance for various applications. The M2 factor has been widely accepted as a standard for characterizing the quality of real laser beams. The inaccuracies present in the specifications of resonator elements, variations occurring due to various competing physical processes inside the lasing medium, and offsets in the cavity configuration make the beam quality deviate from the desired value. Since the beam quality can be improved by manipulating the cavity parameters, fine tuning of the mirror separation distance can offer considerable modification of the beam quality. In this paper, a fuzzy-logic-based controller to obtain and monitor desired laser beam characteristics for stable resonators is discussed. The simulation results indicate that the proposed fuzzy logic controller will dynamically adapt to real laser beams and can offer superior performance over conventional proportional-integral-derivative (PID) controllers. The principal advantage of the present approach is that it provides a versatile means of automatic control over the beam characteristics without relying on detailed mathematical modeling techniques.
Generalized model for fuzzy and neural network controllers
Syed Ali Akbar, Ramon Parra-Loera
A generalized model for the implementation and performance of fuzzy and neural network controllers is presented. This new method provides a structure for combining linguistic and numerical information into a common framework, which can be used to implement equivalent fuzzy or neural controllers. The method provides a unified way of implementing equivalent controllers from different sets of information, and it provides a fair basis for comparing two different controller strategies, since the same information is used for both controllers. This model also gives the designer the freedom to choose the most appropriate controller regardless of the type of information available, and it shows the best performance when either kind of information alone is incomplete. The method was applied to the truck control problem as a case study.
Signal and Image Processing I
Target recognition using cepstrum and inverse filtering
Sharon X. Wang, Carl D. Crane, Murali Rao, et al.
This research proposes an algorithm to recognize a target that may differ from a reference template in position, scale, and in-plane rotation. The proposed algorithm accomplishes the recognition in two steps: rotation and scale detection followed by translation detection. In the first step, the Fourier transformation of the cepstrum is used to achieve the translation invariance, and a polar-logarithmic coordinate system is employed to convert the scale and rotation changes into linear shifts. In the second step, the template is regenerated to the correct size and orientation using rotation and scale parameters obtained from step one. In both steps matching is accomplished using template inverse filtering, which generates a Dirac delta function that appears as a sharp peak. The distinguishing features of this system are three-fold. First, while detecting the existence of the target, it provides the translation, scale, and rotation parameters, which are often needed in many applications. Secondly, it employs the template inverse filters to increase the auto-correlation peaks and suppress the cross-correlation coefficients and noise. Thirdly, it utilizes the cepstrum instead of the spectrum to enhance the high frequency components, and therefore improves the recognition significantly for scale and rotation detection. Experimental examples demonstrate recognition results.
Optimized encoder design algorithm for joint compression and recognition
Jin-Woo Nahm, Mark J. T. Smith
Sensor data, such as SAR and FLIR images, are commonly transmitted from aircraft or satellites to airborne or ground stations for target detection and recognition processing. ATR algorithms are typically run at remote locations because they are very complex computationally, and require powerful computer resources. Rarely is unlimited channel bandwidth available for transmission. Thus one must also contend with delay-cost-quality tradeoff issues, which are often addressed by compressing data prior to transmission. Overall performance is largely restricted by the computational power of the on-board processor, since this limits the complexity and quality of the compression, which in turn affects the speed of transmission. Given some fixed level of computational power available for compression and transmission on board the aircraft, a useful technological improvement would be to have some level of on-board detection/recognition capability so that immediate action could be taken as appropriate. Toward this end, we introduce a method of joint compression and recognition for potential implementation on sensor-equipped aircraft. The algorithm is formulated to provide a level of immediate classification as a by-product of the compression, which in turn would provide the pilot with potential target information instantly.
Very low bit rate data compression using a quality measure based on target detection performance
Jin-Woo Nahm, Mark J. T. Smith
Compression of sensor data is important for transmission and storage of digital infrared and SAR images. For speed and economy, one would like to achieve the highest compression ratios possible while preserving the critical information in the images, i.e., target information. Conventional compression methods such as JPEG, subband coding, fractal coding methods, and the like are tailored to optimizing the reconstructed output to achieve the most subjectively pleasing images possible. Their goal is to make the reconstructed images look as close to the original as possible. In the defense industry ATR paradigm, this is not the relevant optimality criterion. Rather it is preservation of target detection and recognition performance, a concept which is somewhat new in the compression community. In this paper we report on a compression strategy based on subband coding and vector quantization that can achieve compression ratios in excess of 250 to 1, while maintaining high levels of detection/recognition accuracy.
Introduction to the recognition of patterns in compressed data: image template operations over block-, transform-, runlength-encoded, and vector-quantized data
The processing of compressed or encrypted imagery is a vital new area of research that can achieve computational efficiency and data security by processing fewer data, which may be obscurely encoded. In particular, we have derived numerous image processing algorithms that achieve computational speedups which approach the compression ratio (CR). In this paper, we extend our previous work in computation and pattern recognition over one-dimensional compressed data to include operations over multidimensional imagery. We discuss the processing of transform, block, and runlength encoded imagery, as well as the special case of vector-quantized (VQ) imagery. We note that certain cases of template matching over the range space of the block-encoding or VQ transform can yield a computational speedup that approaches the domain compression ratio (CRd). Defined as the ratio of the number of source data to the number of compressed data, CRd generally exceeds the customary compression ratio. Analyses emphasize computational complexity, information loss, and implementational feasibility.
New approach to array processing
Erol Emre
A qualitative and quantitative theory is developed which simultaneously unifies and extends MUSIC, MIN-NORM, ESPRIT, and PISARENKO type techniques. These techniques can be used for both spatial and temporal spectral decomposition of signals. In particular, the usual assumptions on the problem formulation are reduced. Our approach is a realization theoretic approach and substantially extends the previous results on multidimensional arrays of sensors. A theory is provided for analysis and design of (not necessarily linear and equally spaced) array structures to estimate the temporal frequency and the directions for coherent sources, such as in the case of multipath. Techniques are also developed to null signals in certain directions with certain frequencies, such as for multipath cancellation.
Bi-sensor channel identification method for multipath environments
Qu Jin, Kon Max Wong
The extraction of a signal at the receiver in a multipath environment remains largely an unsolved problem. This is especially true when there is no prior knowledge of the signal. A new algorithm is introduced in this paper to extract a signal that has undergone multipath transmission. Two separate sensors are used to receive two versions of the received signal, which are then utilized to estimate the channels. The inverses of these estimated channels are then used to extract the signal.
Regularization theory-based interpretation and modification of the minimum variance distortionless response beamformer for extended object imaging
Yuri V. Shkvarko
A new regularization theory-based approach to the development of a high-resolution spatial spectral analysis technique for extended object imaging is addressed. The technique exploits the idea of combining the modified minimum variance distortionless response beamforming algorithm with regularization methodology for radar/sonar remote sensing imaging, optimal or suboptimal in a fused regularization/experiment-design setting. The generic spatial power spectrum distribution estimation problem is conceptualized as an ill-posed inverse problem and reformulated in terms of a descriptive regularization problem. By matching the designed augmented cost function with prior information on the "degrees of freedom" of an array imaging experiment, the modified imaging technique is derived, and an iterative spatial power spectrum distribution estimation (image improvement) algorithm is developed for computational efficiency of implementation. Keywords: beamformer, extended object, spatial spectrum, experiment design, regularization, image restoration.
Efficient small-target detection algorithm
Guoyou Wang, Tianxu Zhang, Luogang Wei, et al.
Guided by the way humans discriminate a small object from a natural scene, namely the signature of discontinuity between the object and its neighboring regions, we develop an efficient algorithm for small-object detection based on template matching, using a dissimilarity measure called the average gray absolute difference maximum map (AGADMM). We infer the criterion for recognizing a small object from the properties of the AGADMM of a natural scene, which is a spatially independent and stable Gaussian random field; explain how the AGADMM improves the detection probability while keeping the false-alarm probability very low; analyze the complexity of computing the AGADMM; and justify its validity and efficiency. Experiments with visual images of natural scenes such as sky and sea surface have shown the great potential of the proposed method for distinguishing a small man-made object from natural scenes.
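A minimal version of the AGADMM idea, the neighborhood average absolute difference whose maximum flags a small discontinuous object, can be sketched as follows. The window size and the test image are illustrative, and this sketch omits the paper's statistical recognition criterion.

```python
def agad_map(img, r=1):
    """Average gray absolute difference: for each interior pixel, the
    mean absolute difference between the pixel and its surrounding
    neighbors. The map's maximum marks the most discontinuous point,
    a small-target candidate."""
    h, w = len(img), len(img[0])
    best, best_pos = -1.0, None
    for i in range(r, h - r):
        for j in range(r, w - r):
            diffs = [abs(img[i][j] - img[i + di][j + dj])
                     for di in range(-r, r + 1)
                     for dj in range(-r, r + 1)
                     if (di, dj) != (0, 0)]
            score = sum(diffs) / len(diffs)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Flat sky background with one bright small object at (2, 3).
img = [[10] * 6 for _ in range(5)]
img[2][3] = 200
print(agad_map(img))  # ((2, 3), 190.0)
```

On smooth clutter such as sky or sea surface, neighboring pixels differ little, so the map stays near zero everywhere except at a genuine point discontinuity, which is why the maximum is such a strong small-target cue.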
Signal and Image Processing II
High-fidelity simulations to predict the utility of space object multispectral imagery
B. Scott Hunt, Robert E. Introne, Brian J. Scamman
This study examined the efficacy of computer-driven modeling to derive and display the anticipated results of a hypothetical multispectral imaging telescope used to determine material decomposition/contamination of near-Earth-orbit space objects. Imagery was simulated based on a hypothetical multispectral telescope and a satellite CAD model. Image processing was performed on the simulations to determine whether material decomposition/contamination could be ascertained through observed material property variations. The study examined the effects of decreased spatial resolution and increased specularity to determine the limitations of the hypothetical telescope's imagery. Keywords: multispectral, material decomposition/contamination, space object, simulation, adaptive optics
Shift variant linear system modeling for multispectral scanners
Abolfazl M. Amini, George E. Ioup, Juliette W. Ioup
Multispectral scanner data are affected both by the spatial impulse response of the sensor and the spectral response of each channel. To achieve a realistic representation for the output data for a given scene spectral input, both of these effects must be incorporated into a forward model. Each channel can have a different spatial response and each has its characteristic spectral response. A forward model is built which includes the shift invariant spatial broadening of the input for the channels and the shift variant spectral response across channels. The model is applied to the calibrated airborne multispectral scanner as well as the airborne terrestrial applications sensor developed at NASA Stennis Space Center.
Evolving neural networks for video attitude and height sensor
Zhixiong Zhang, Kenneth J. Hintz
The development of an on-board video attitude and height sensor (VAHS), used to measure the height, roll, and pitch of an airborne vehicle at low altitude, is presented. The VAHS consists of a down-looking TV camera and two orthogonal sets of laser diodes (four laser diodes in total) producing a structured light pattern. Although the height, roll, and pitch can be determined by measuring the locations of the dots in the image, in practice it is very difficult to precisely align the laser diodes and the TV camera. Moreover, it is very hard to obtain accurate camera parameters because of the camera's various nonlinear distortions. An approach that uses layered neural networks (NNs) to map the locations of the dots in the image to the height, roll, and pitch of the airborne vehicle is presented here. Amorphous NNs have also been evolved by genetic algorithms (GAs), with mixed results. Some simulation results of these experiments are presented.
Robot algorithm evaluation by simulating sensor faults
Richard Ree Brooks, S. Sitharama Iyengar
Recently developed algorithms in automation theory are often difficult to compare correctly since systems must interact with a changing environment. All algorithms are therefore dependent on sensor inputs, which are notoriously subject to noise and errors. Proper comparison must be platform independent, but must also take sensor reliability problems into account. We have developed, and are using, a software simulator for comparative evaluation of robotics algorithms. The simulator uses an abstract sensor model which allows evaluation of the algorithms with various sensor-reliability parameter values. By applying equivalent algorithms to a large number of randomly generated scenarios, it is possible to make valid quantitative comparisons of average performance. This information is complementary to the asymptotic time-complexity measure, which is the most common tool for algorithm comparison. Information is gathered which allows comparison according to criteria chosen by the user, such as distance traveled, number of sensor scans taken, or even collisions with obstacles in the environment. A preliminary discussion of a system capable of quantitative comparison of several algorithms for robot navigation in unknown terrains is presented. This system is in the final stages of acceptance testing and promises to provide a testbed for future robot navigation research.
Discrimination requirements model
Robert F. Cuffel, Lisa A. Strugala, C. Pham, et al.
The development of a discrimination software testbed, the Discrimination Requirements Model (DRM), intended to support IR sensor requirements definition is described. The DRM employs the standard pattern-recognition paradigm, i.e., the reduction of a Monte Carlo database of noise-corrupted input object signatures into the corresponding target and non-target feature-vector sets. Classification of the feature vectors is performed using a varied threshold. The false-alarm (PFA) and leakage (PL) error probabilities are estimated via a leave-one-out procedure. The resultant PFA-versus-PL curve over user-selectable thresholds is used to evaluate discrimination performance for the test signature database. Degradation of input signature data strings is accomplished through a set of user-selectable sensor performance capabilities. The selectable feature subset includes statistical, curvilinear-fit, dynamical, and centralized moment-based parameters for single- and multiple-band optical systems, as well as various normalization options. The DRM accommodates dropouts and other realistic SNR effects. A centralized approach is employed for multiple-sensor data fusion for discrimination based on prior associated object tracks. Applications to sensor design and system performance projections are discussed.
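The threshold-sweep step that produces the PFA-versus-PL trade curve can be sketched as below; a scalar feature stands in for the DRM's feature vectors, and the data are invented. (The DRM's actual leave-one-out estimation over trained classifiers is more involved than this fixed-threshold test.)

```python
import numpy as np

def pfa_pl_curve(target_feats, clutter_feats, thresholds):
    """Sweep a decision threshold t over a scalar discrimination feature
    ('target' declared when feature > t) and estimate the false-alarm
    probability PFA (clutter passed) and the leakage probability PL
    (targets missed) at each setting."""
    curve = []
    for t in thresholds:
        pfa = float(np.mean(clutter_feats > t))
        pl = float(np.mean(target_feats <= t))
        curve.append((t, pfa, pl))
    return curve

targets = np.array([5.0, 6.0, 7.0, 8.0])   # feature values for target objects
clutter = np.array([0.0, 1.0, 2.0, 3.0])   # feature values for non-targets
curve = pfa_pl_curve(targets, clutter, thresholds=[2.0, 4.0, 6.0])
```

Plotting PFA against PL over the swept thresholds gives the operating curve used to compare sensor/feature configurations.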
Analysis of transputer processor networks for image processing
Vidya B. Manian, Ramon E. Vasquez
This paper presents a performance analysis of transputer T805 processor networks for implementing low-level image processing algorithms. The influence of communication on the performance of transputer networks is analyzed, and the implementation constraints on using transputer networks for image processing are discussed. The paper also presents the results of implementing a texture feature extraction algorithm, the spatial gray-level dependence method (SGLDM), on transputer networks; the results are studied with respect to communication. This algorithm is used for image texture analysis: it estimates the second-order joint conditional probabilities of transition from one gray level to another between two pixels that are at a specific distance and at a specific angle to the horizontal axis. Many statistical texture features can be derived from the estimated co-occurrence matrices. The transputer networks are configured as hypercubes, which provide embedded tree, ring, and mesh topologies. The algorithm is implemented on different transputer hypercube configurations with tree topologies mapped onto them. The communication overheads in parallel transputer networks have a major influence on the optimal number of processors that can be used in an application and on the maximum speedup that can be achieved. By resolving the communication issues, transputer-based real-time image processing applications can be developed.
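The SGLDM estimation step is a standard construction and can be sketched as follows (a serial numpy version; the displacement, gray-level count, and test texture are chosen for illustration, not taken from the paper):

```python
import numpy as np

def cooccurrence(img, dx, dy, levels):
    """SGLDM estimate: the joint probability that a pixel with gray
    level i has a neighbor at displacement (dx, dy) with gray level j."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()

def contrast(P):
    """One classical texture feature derived from the matrix."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

stripes = np.array([[0, 1, 0, 1],
                    [0, 1, 0, 1]])   # vertical stripe texture, 2 gray levels
P = cooccurrence(stripes, dx=1, dy=0, levels=2)
```

Parallelizing this on a transputer network amounts to partitioning the image across processors, accumulating local count matrices, and summing them, which is where the communication overheads studied in the paper arise.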
Signal and Image Processing III
Background filters for Midcourse Space Experiment (MSX): Spirit III theater midcourse scenarios and their impact to object detection and estimation performance
Kevin H. Giles, Jeffery L. King, William Tom Prestwood, et al.
This paper discusses post-flight ground processing algorithms for removal of background in the Midcourse Space Experiment (MSX) Spatial Infrared Imaging Telescope (SPIRIT III) scanning infrared radiometer data. The algorithms presented are linear and non-linear techniques for estimation and subtraction of expected backgrounds for theater missile defense scenarios. The impacts on object detection and amplitude estimation are addressed for each background filter as part of the filter trade studies. This effort is funded under the MSX Theater and Midcourse Cooperative Target Experiments task on the Systems Engineering and Technical Assistance Contract with the U.S. Army Space and Strategic Defense Command.
Space-based RF signal classification using adaptive wavelet features
Michael P. Caffrey, Scott D. Briles
RF signals are dispersed in frequency as they propagate through the ionosphere. For wide-band signals, this results in nonlinearly chirped-frequency transient signals in the VHF portion of the spectrum. This ionospheric dispersion provides a means of discriminating wide-band transients from other signals (e.g., continuous-wave carriers, burst communications, chirped-radar signals, etc.). The transient nature of these dispersed signals makes them candidates for wavelet feature selection. Rather than choosing a wavelet ad hoc, we adaptively compute an optimal mother wavelet via a neural network. Gaussian-weighted, linear frequency modulated (GLFM) wavelets are linearly combined by the network to generate our application-specific mother wavelet, which is optimized for its capacity to select features that discriminate between the dispersed signals and clutter (e.g., multiple continuous-wave carriers), not for its ability to represent the dispersed signal. The resulting mother wavelet is then used to extract features for a neural network classifier. The performance of the adaptive wavelet classifier is then compared to that of an FFT-based neural network classifier.
Locating multiple targets in complex images
Zhonghao Bao, Gerald M. Flachs, Jay B. Jordan
A new algorithm, the flying window locator algorithm (FWLA), is presented for locating multiple targets in complex images in which the illumination is not uniform, the targets are small, and the colors of the targets and their background are similar. The FWLA divides a complex image into subimages, called windows. To guarantee that a target of interest lies completely within one of the windows, neighboring windows are overlapped so that each overlapped area is larger than a target. Since a window defined in the FWLA is small, a scene component in a window is considered to have a similar shading or color. Hence, a window is composed of a mixture of component distributions stemming from the different background and target regions in the window. A gradient clustering algorithm (GCA) is developed to separate the mixture in each window and segment the window. After segmentation, a `hole' concept is used to find the targets of interest based upon a priori knowledge of the target size, shape, and color. The FWLA is a non-iterative clustering algorithm and uses only fixed-point numbers to analyze an image. Consequently, it is fast and computationally efficient. Results of applying the FWLA to two different computer vision problems are presented.
Comparison of transport protocols for high-speed data networks
Wanda B. Perkins, I. K. Dabipi, E. W. Hinds
Due to developments in large (gigabit) application programs, very large scale integration (VLSI) technology, high-speed transmission media, high-speed data networks, and related areas, researchers question the ability of most existing transport protocols to handle enormous amounts of data efficiently. Applications such as video conferencing, distributed processing, and real-time imaging require protocols and networks that are capable of providing large bandwidths and low delays. Transport and network mechanisms implemented in VLSI allow a substantial amount of protocol processing to be performed in hardware. Several performance criteria are placed on existing transport protocols to meet the needs of gigabit applications. This paper compares different transport protocols, such as Delta-t, Network Block Transfer (NETBLT), Transport Protocol Class 4 (TP4), Transmission Control Protocol (TCP), and Versatile Message Transaction Protocol (VMTP), in terms of the following transport services: connection management, data transfer, and flow and error control.
Communication issues in determining departmental local area networks requirements
I. K. Dabipi, A. Donaldson, James A. Anderson
One of the major issues when configuring networks is how the decision on what kind of network to install is made. The Electrical Engineering Department at Southern University has recently gone through this process. This paper addresses the related network design issues through a class-project simulation of the different network designs, detailing the simulation itself and the network connectivity issues. A comparison of the various designs is presented, and the reason for the choice of the final configuration is also given.
Signal analysis for distributed systems
James A. Anderson, N. C. Mohanty, A. S. King
Remote IR sensor measurements and the associated data reduction and analysis are important aspects of the development of massively parallel processing (MPP) skills. The problem of data fusion in a central decision center is very important in view of the deployment of distributed multiple sensors for communication, surveillance, and battle management. Because of limited transmission capacity, the sensors are required to transmit their decisions (with or without information bits) instead of the raw data. Moreover, the performance of a sensor depends on its operating conditions, such as weather in the case of an infrared (IR) sensor. Several sensors can be used to increase recognition and classification of targets in general. If complementary sensors are used, then robust recognition can be achieved. An example of complementary sensors is the pairing of IR and millimeter-wave (MMW) sensors: the performance of an IR sensor (which has high resolution and day/night capability) degrades in inclement weather, whereas the performance of an MMW sensor (which suffers from low resolution) is not affected by these conditions. The basic goal of such a multiple-sensor distributed system is to improve system performance, such as reliability, speed, coverage area, multiple-target tracking, and system response across the various bands/channels for the various target or object features. This paper describes the processing of such information and suitable configurations to maximize applications.
Model-Driven Automatic Target Recognition
PTBS segmentation scheme for synthetic aperture radar
Noah S. Friedland, Brian J. Rothwell
The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a `most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.
Automatic target recognition via classical detection theory
Douglas R. Morgan
Classical Bayesian detection and decision theory applies to arbitrary problems with underlying probabilistic models. When the models describe uncertainties in target type, pose, geometry, surround, scattering phenomena, sensor behavior, and feature extraction, then classical theory directly yields detailed model-based automatic target recognition (ATR) techniques. This paper reviews options and considerations arising under a general Bayesian framework for model- based ATR, including approaches to the major problems of acquiring probabilistic models and of carrying out the indicated Bayesian computations.
Issues in SAR model-based target recognition
Dan E. Dudgeon, W. Eric L. Grimson, Robert R. Tenney
Synthetic aperture radar (SAR) is an important tool for wide-area surveillance since it can provide all-weather, day/night coverage. The surveillance of large areas implies a large number of SAR images to be analyzed per unit time. Target detection and recognition algorithms can potentially ease this workload by focusing the analysts' attention on important parts of the collected imagery. Automatic target detection and recognition are challenging because, in SAR imagery, the target signatures can vary significantly with viewing angle. The clutter backgrounds against which targets may be placed can also vary drastically, from open fields to urban streets. Furthermore, because SAR data is collected and processed coherently, target signatures and clutter backgrounds are corrupted with a speckle noise component. Model-based target recognition represents a spectrum of approaches to the problem of detecting and identifying targets of interest in large volumes of data. The basic paradigm consists of detecting and extracting features that are used to make initial hypotheses about target identities and states. Based on those working hypotheses, target signatures are predicted and compared to image-derived data. If the comparison is good, the target is `recognized.' If not, the working hypothesis is refined and used to improve the predicted signature. If, at some point, it is concluded that no predicted signature adequately represents the data, then the object in question is declared `unknown.' In this paper we highlight several of the important issues in developing model-based target recognition algorithms for SAR imagery. We discuss signature representation, hypothesis generation, feature prediction, and evidential reasoning. Our goal is to highlight these issues and any controversies surrounding them, rather than discuss a particular approach to developing a model-based recognition system.
Feature transform for ATR image decomposition
Davi Geiger, Robert A. Hummel, Barney Baldwin, et al.
We have developed an approach to image decomposition for ATR applications called the `feature transform.' There are two aspects to the feature transform: (1) a collection of rich, sophisticated feature extraction routines, and (2) the orchestration of a hierarchical decomposition of the scene into an image description based on the features. We have expanded the approach in two directions, one considering local features and the other considering global features. For local features, we have developed for (1) corner, T-junction, edge, line, end-stopping, and blob detectors, with a unified approach used for all of these detectors. For (2), we make use of the theory of matching pursuits and extend it to robust measures, using results involving Lp norms, in order to build an iterative procedure in which local features are removed from the image successively, in a hierarchical manner. For global features, we have considered for (1) global shape features or modal features, i.e., features representing the various modes of the models to be detected; for (2), a multiscale strategy is used to move from the principal modes to secondary ones. The common aspect of both directions, local and global feature detection, is that the resulting transformations decompose the scene into a collection of features, in much the same way that a discrete Fourier transform decomposes an image into a sum of sinusoidal bar patterns. With the feature transform, however, the decomposition uses redundant basis functions that are related to spatially localized features or modal features that support the recognition process.
Robust 3D part extraction from range images with deformable superquadric models
Yong-Lin Hu, William G. Wee
The extraction of 3-D geometric primitives is an important issue in model-based computer vision, and the reliability of primitive extraction is vital for further object recognition processing. In this paper, we develop a robust 3-D part extraction system. Deformable superquadrics are selected as the 3-D part primitives, and a robust superquadric extraction method is developed. First, we introduce a novel adaptive weighted partial-data minimization algorithm that can robustly extract superquadrics from data containing both Gaussian and random noise. The convergence and the efficiency of the algorithm are discussed. Fuzzy logic techniques are introduced to further improve the algorithm so that it can handle input containing multiple objects. Finally, a range image processing system is developed based on the robust superquadric extraction method. This system can efficiently extract 3-D parts from range images. Test results using both synthetic and real data are presented.
Evolution of convolution kernels for feature extraction
Shawn C. Masters, Kenneth J. Hintz
A fundamental difficulty in image processing is the determination of a suitable set of features which can be used to segment images or can be combined by an appropriate method for the identification and classification of targets. Many features have been and are being used which are `reasonable' to the target recognition researcher, but there is no assurance that other features which can extract more information from an image do not exist. This paper investigates the use of genetic algorithms (GAs) to evolve convolution kernels which produce features that can be used for image segmentation. Any linear transform can be implemented as a convolution kernel. Using supervised learning and a fitness function which maximizes the interclass distance and minimizes the intraclass variance, a genetic algorithm is used to evolve a sub-image convolution kernel. The genome which represents the convolution kernel is converted from a 2-D form into a 1-D form using an approach similar to a space-filling curve. The fitness of the genome for each kernel is measured by its classification performance against ground-truth data, and then biased by the size of the kernel so that the smallest-kernel solution can be found.
New Techniques in Automatic Target Recognition I
Automatic target recognition using correlation filters
The approach to target recognition using correlation filters is reviewed in this paper. The fundamentals have been well investigated for over a decade. Recent advances in both processor technology as well as algorithms have brought the application of correlation filters closer to reality.
Neural network approach for high-resolution target classification
Yi-Chuan Lu, Kuo-Chu Chang
We propose an improved version of the SOFM/LVQ classifier currently used in an ATR system for SAR imagery. This classifier was originally designed to construct a small number of templates to represent a set of targets with different orientations. The classifier accepts an input target, computes the distances between the input and the representative templates, and then assigns the input to the target class with the shortest distance. In this paper, we focus on how to identify and reject data from targets outside the given data set, such as man-made clutter. To reject clutter, we propose two discrimination functions: distance and entropy measures. With the distance discriminator, we obtain very good classification performance when all data are from the given target sets. However, the simple distance measure produces poor classification results when unknown targets such as natural or man-made clutter are present and when each target is represented by a small number of templates. We correct this deficiency by incorporating an entropy measure into the original classifier. With this entropy discriminator, our system rejects a majority of the false alarms while maintaining a high correct classification rate with relatively few templates per target. Although this system was tested on real ISAR data and showed very good performance, the data were obtained from a `turntable' experiment with a fixed depression angle and known target location. One future research direction is to test this algorithm with real `field' SAR data and study the robustness of the system.
Spectral correlation of wideband target resonances
Automatic target recognition (ATR) processing of foliage-penetrating (FOPEN) synthetic-aperture radar (SAR) imagery requires very high bandwidth occupancies to achieve sufficient range resolution for the ATR task. The U.S. Army Research Laboratory (ARL) ultra-wideband (UWB) FOPEN SAR -- with greater than 95 percent bandwidth occupancy -- provides a suitable testbed for evaluation of resonance-based ATR approaches. Current resonance-extraction techniques (e.g., SEM) typically perform poorly in the presence of noise and are often computationally intensive. The `spectral correlation method,' recently developed at ARL, uses linear transforms -- such as Fourier and wavelet transforms -- to resolve resonant components; these transforms are generally quite fast and have straightforward implementations. Creating a synthetic version of the ringdown and projecting it onto the desired transform basis provides a set of expected spectral coefficients (the `spectral template'). The spectral template is correlated with the spectral coefficients acquired from the projection of the focused image data onto the same basis function set; the correlation coefficient is then passed through a simple threshold detector. This yields a fast, efficient scheme for recognition of target resonance effects in UWB imagery. Recent advances in this area include a reduction in false-alarm rate by two orders of magnitude, a reduction in processing time by three orders of magnitude, and recognition of a tactical target.
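The template-correlation-and-threshold step can be sketched as follows, with a Fourier basis standing in for whichever fast linear transform is chosen; the synthetic ringdown, interference, and clutter signals are invented for the example and are not ARL data.

```python
import numpy as np

def spectral_correlation(signal, template_coeffs):
    """Normalized correlation between a signal's transform-domain
    magnitudes and a precomputed spectral template."""
    c = np.abs(np.fft.rfft(signal))
    a = (c - c.mean()) / c.std()
    b = (template_coeffs - template_coeffs.mean()) / template_coeffs.std()
    return float((a * b).mean())

t = np.arange(256) / 256.0
ringdown = np.exp(-5 * t) * np.sin(2 * np.pi * 20 * t)  # synthetic target resonance
template = np.abs(np.fft.rfft(ringdown))                # expected coefficients

probe = ringdown + 0.05 * np.sin(2 * np.pi * 3 * t)     # resonance plus interference
clutter = np.sin(2 * np.pi * 60 * t)                    # carrier-like clutter
rho_hit = spectral_correlation(probe, template)
rho_miss = spectral_correlation(clutter, template)
```

A fixed threshold on the correlation coefficient then separates resonant returns (high rho) from clutter (low rho), which is what makes the scheme fast: one transform, one inner product, one comparison per pixel or range line.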
Improved ATR performance evaluation via mode seeking
Improved performance evaluation results for complex data sets, in the Bayes error estimation sense, are shown by first decomposing the data sets into approximately normal modes before input to the error estimator. More specifically, the utility of a particular nonparametric Bayes error estimator, the Parzen error estimator, is generalized to multimodal data sets by preprocessing the data through a mode seeker before input to the error estimator. The utility of the mode seeker and the Parzen error estimator for data analysis and performance evaluation is demonstrated on a field-collected radar data set.
Feature-based classification of SAR data using RBF networks
Batuhan Ulug, Jun Zhao, Stanley C. Ahalt
We describe the application of radial basis function (RBF) classifiers to feature-based automatic target recognition (FBATR) using synthetic aperture radar (SAR) data. FBATR systems are attractive because of their promise for robust, computationally efficient, scalable ATR systems. We compare the performance of RBF classifiers, multilayer perceptron (MLP) networks, and a nearest-neighbor (1-NN) classifier using a synthetic SAR database. Using this database, this preliminary study attempts to establish how classification performance deteriorates when the measured data are perturbed with additive white Gaussian noise (AWGN) prior to feature extraction. Our experimental results indicate that the RBF network performs better and is more robust to this type of noise than the other feature-based classifiers we considered. Consequently, we conclude that RBF classifiers are strong candidates for FBATR systems.
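A minimal RBF classifier can be sketched as below. This uses fixed Gaussian centers and least-squares output weights, which is a common simple training scheme but not necessarily the one used in the paper; the 2-D toy data are invented.

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian radial-basis activations of input rows X w.r.t. centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def train_rbf(X, y_onehot, centers, width):
    """Least-squares output weights for fixed centers."""
    Phi = rbf_features(X, centers, width)
    W, *_ = np.linalg.lstsq(Phi, y_onehot, rcond=None)
    return W

def predict(X, centers, width, W):
    """Class label = argmax over the linear output layer."""
    return np.argmax(rbf_features(X, centers, width) @ W, axis=1)

# Two separable 2-D classes; the training samples double as RBF centers.
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.eye(2)[[0, 0, 1, 1]]
W = train_rbf(X, y, centers=X, width=1.0)
pred = predict(np.array([[0.1, 0.0], [3.0, 3.1]]), X, 1.0, W)
```

The localized Gaussian responses are one intuition for the noise robustness reported above: a perturbed feature vector still activates the same nearby centers.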
Geometrical formulation of radar signal complexity and target recognition performance capabilities
In this paper, we discuss a preliminary investigation of the basic limitations for the performance of target recognition systems. In particular, we focus on the analysis of real-aperture-radar systems, and address the intrinsic separability of sets of backscatter waveforms representing distinct target classes.
Multilevel detection method for multispectral and hyperspectral images
Aleksandar Zavaljevski, Atam P. Dhawan, David J. Kelch, et al.
A novel multi-level detection (MLD) method for detecting small targets within multispectral images, which takes into account both the spectral and spatial characteristics of the data, is proposed. In the first level of processing, misclassification is minimized by applying a minimum-distance statistical classifier in conjunction with a spectral library of known class signatures. In the second level, the neighborhood of each unclassified pixel is analyzed to detect candidate classes for use as endmembers in a spectral unmixing model. The fractions of neighborhood and target signatures for the unclassified pixels are determined by means of a linear least-squares method. The third processing level determines the size and location of detected targets with a cluster-analysis methodology: target size and location are estimated from the sum and the weighted vector mean, respectively, of the mixing fractions of the neighboring pixels. The MLD method was successfully applied to both synthetic and AVIRIS hyperspectral imagery data sets.
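The second-level unmixing step is ordinary linear least squares and can be sketched as follows; the two 4-band endmember signatures below are invented for the example, not taken from a spectral library.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares mixing fractions f solving pixel ~= endmembers @ f,
    where the endmember columns are the candidate class signatures
    found in the pixel's neighborhood."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return f

# Hypothetical 4-band endmember signatures (columns): background, target.
E = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # 70% background, 30% target
fractions = unmix(pixel, E)
```

The third level then aggregates these per-pixel fractions: their sum over a neighborhood estimates target size, and their weighted vector mean estimates target location.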
New Techniques in Automatic Target Recognition II
Wiener filter: synthetic discriminant function for target identification
Christopher R. Chatwin, Ruikang K. Wang, Rupert C. D. Young
The Wiener filter, which has been used extensively for image restoration and signal processing, is employed for robust optical pattern recognition and classification. The Wiener filter is formulated to incorporate the in-class image and the out-of-class noise image into a single-step filter construction. It is compared with the classical matched filter (CMF) and the phase-only filter (POF), demonstrating superior discrimination capability. The Wiener filter is incorporated into a synthetic discriminant function (SDF); correlation results show that it is tolerant to image distortion. With a 30-degree out-of-plane rotation between training-set images, the Wiener filter-SDF achieves a 100% success rate in discriminating one class of images from another. The CMF-SDF and POF-SDF fail to achieve 100% discrimination even at rotation increments of 15 degrees.
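The three filter constructions can be compared in a short frequency-plane sketch. The Wiener-type form `conj(S)/(|S|^2 + Pn)` folds an assumed out-of-class/noise power `Pn` into a single step; the reference pattern, shift, and `Pn` value below are invented for the example and this is not the paper's SDF construction, only the underlying single-filter correlators.

```python
import numpy as np

def correlation_plane(img, ref, mode="wiener", noise_psd=None):
    """Frequency-plane correlation of img against reference ref using a
    filter built from the reference FFT S: classical matched filter
    conj(S), phase-only filter conj(S)/|S|, or a Wiener-type filter
    conj(S)/(|S|^2 + Pn)."""
    F = np.fft.fft2(img)
    S = np.fft.fft2(ref)
    if mode == "cmf":
        H = np.conj(S)
    elif mode == "pof":
        H = np.conj(S) / (np.abs(S) + 1e-12)
    else:
        Pn = noise_psd if noise_psd is not None else 1e-2 * np.abs(S).max() ** 2
        H = np.conj(S) / (np.abs(S) ** 2 + Pn)
    return np.real(np.fft.ifft2(F * H))

ref = np.zeros((32, 32))
ref[12:20, 12:20] = 1.0                                # training image
scene = np.roll(np.roll(ref, 5, axis=0), 3, axis=1)    # target shifted by (5, 3)
plane = correlation_plane(scene, ref, mode="wiener")
peak = np.unravel_index(np.argmax(plane), plane.shape)
```

The correlation peak lands at the target's circular shift, and the filter choice governs how sharp that peak is relative to the sidelobes when distortion or out-of-class patterns are present.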
Automatic target recognition system using wavelet transform and cluster analysis
Anitha Panapakkam, S. N. Balakrishnan
The wavelet transform has received much attention in research and is widely applied to image coding, fractal analysis, speech synthesis, texture discrimination, etc. We have developed an automatic target recognition (ATR) system employing wavelet transforms to capture target signatures. Detection and segmentation stages efficiently differentiate the targets from the background and write them as separate subimages. Segmented targets are then subjected to wavelet decomposition. The feature vector that characterizes each target is a set of energy values calculated from the wavelet-decomposed target images. The classification of the targets into several categories is performed using hierarchical cluster analysis.
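Subband-energy feature extraction can be sketched with a one-level Haar decomposition; the Haar choice and the toy images are illustrative assumptions, since the abstract does not name the wavelet used.

```python
import numpy as np

def haar_energies(img):
    """One-level 2-D Haar decomposition of an even-sized image; the
    feature vector is the energy of each subband (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return np.array([(b ** 2).sum() for b in (ll, lh, hl, hh)])

flat = np.full((8, 8), 4.0)          # smooth patch: energy only in LL
stripes = np.zeros((8, 8))
stripes[:, 1::2] = 4.0               # vertical stripes: energy also in LH
f_flat = haar_energies(flat)
f_stripes = haar_energies(stripes)
```

Targets with different signatures distribute their energy differently across the subbands, which is what makes the energy vector a usable clustering feature.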
Wavelet feature performance analysis for distortion-invariant target detection
Wavelet feature performance for the detection and recognition of targets in noisy images is investigated. Training patterns with different noise contents are first employed to build a statistical model for the dissimilarity between the reference target and noisy inputs. This model is then analyzed with a Daubechies wavelet filter with extremal phase and vanishing moments. Simulation results show the potential of wavelet features to be used in the decision-making subsystem to yield high discrimination between target and non-target.
Pixel-registered image fusion
One of the highest potential uses of image fusion is that of recognition of critical targets. The continuing image fusion question then is how to make optimal use of the often disparate forms of encountered image detail during fusion. Toward this end, many techniques have been advanced for fusion to a single viewable image. Fewer techniques have been suggested toward fusion with the goal of directly improving target detection or recognition. Based upon emerging trends in pixel accurate registration of images, we show the theoretical foundations required to optimally fuse target imagery for recognition. Results obtained can be applied to both the cases of automatic target recognition and image analysis.
Preprocessing for data fusion using fractal multiresolution analysis
Jingyun Li, Patrick C. Yip, Eloi Bosse
In this paper, we introduce a preprocessing method for data fusion based on multiresolution analysis using fractal functions. The motivation for choosing this method is that many natural signals belong to the 1/f family, and an important class of fractal signals is also of the 1/f type. Because of their self-affinity and dilation properties, a finite set of fractal interpolation functions (FIFs) is chosen for the multiresolution analysis. It is seen that the FIFs generate a nested set of subspaces equivalent to the set of wavelet subspaces. Through multiresolution analysis, it is possible to reduce the effect of high-frequency noise while keeping the useful information at low frequency; furthermore, the approach has a localization effect. Following the characteristics of the FIFs, the decomposition and reconstruction obtained from the multiresolution analysis can be implemented by cascaded filter banks, so the computational complexity is also reduced. This method may provide a good way of preprocessing data for fusion.
Target recognition by maximizing heterogeneity of signal samples collected for discrimination with respect to an observed signal
This paper deals with the problem of identifying targets or signals in noise, which is, mathematically, the problem of classifying an observed target data sample as coming from one of several populations. Some information about the alternative population distributions has been obtained from signal data samples collected for discrimination. Each sample is declared to be a realization of a specific stochastic process; by this step, each sample is attached to exactly one of a set of possible signals with distinct characteristics. We deal with the case in which the alternative population distributions are multivariate normal with different mean vectors and covariance matrices, with all parameters assumed unknown; the univariate case is also considered. It is shown how certain tests of homogeneity or normality of several data samples can be used to transform a set of signal data samples into a statistic that measures either distance from homogeneity or distance from normality of these samples. This statistic is then used to construct a sample-based discriminant rule that either maximizes distance from homogeneity or minimizes distance from normality with respect to an observed signal. These discriminant rules yield new target recognition procedures that are relatively simple to carry out and can be applied, for example, to bird recognition by radar in order to prevent collisions between aircraft and birds. The procedures proposed herein are recommended in situations involving small data samples. An illustrative numerical example is given.
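For the univariate case the abstract mentions, the baseline against which such sample-based rules are measured is the classical likelihood discriminant: fit a normal distribution to each class's training sample (mean and variance both unknown) and assign an observation to the class with the highest fitted likelihood. The sketch below shows that baseline only, under assumed class labels; it is not the homogeneity- or normality-distance rule the paper constructs.

```python
import math

def fit(sample):
    """Estimate the mean and (unbiased) variance of a class from its sample."""
    n = len(sample)
    mu = sum(sample) / n
    var = sum((x - mu) ** 2 for x in sample) / (n - 1)
    return mu, var

def log_likelihood(x, mu, var):
    """Log-density of a univariate normal N(mu, var) evaluated at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def classify(obs, samples_by_class):
    """Assign obs to the class whose fitted normal gives the highest likelihood."""
    fits = {c: fit(s) for c, s in samples_by_class.items()}
    return max(fits, key=lambda c: log_likelihood(obs, *fits[c]))
```

Because both the means and the variances differ between classes, the resulting decision boundary is quadratic rather than a simple midpoint threshold.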
Signal and Image Processing I
Multiple camera data fusion for accurate pose and scale recovery
Tieniu Tan
This paper concerns the use of multiple cameras for accurate and noise-robust recovery of pose and scale parameters. Of particular interest is the recovery of pose and scale of vehicles in traffic scenes which, under normal conditions, are constrained to lie on the ground-plane. Several closed-form algorithms are described. The algorithms directly exploit the ground-plane constraint, and are applicable to an arbitrary number of image-to-model line matches. The importance of using multiple cameras is illustrated and the fusion of data from multiple cameras is shown to be simple and straightforward. The algorithms are tested extensively with both synthetic and real outdoor traffic images. They are found to be robust and perform satisfactorily with real images.
Improvements in optical telescope performance through adaptive optics: results from the National Solar Observatory prototype adaptive optics system
Steve P. Doinidis, Wiley E. Thompson
Earth-based telescopes are limited in their performance primarily by aberrations introduced to the light by the atmosphere. This problem can be compensated for by introducing an adaptive optics (AO) system, and at the National Solar Observatory at Sunspot, New Mexico, work has been underway to develop a real-time (100-300 Hz) AO system to work in concert with a number of major instruments at the Vacuum Tower Telescope. A prototype system sharing many components of the final system has been assembled and tested, and preliminary tests suggest that it may be suitable for actual use as a correcting element in situations where small, low-order optical aberrations are present. The prototype system and its performance are presented here.
New Techniques in Automatic Target Recognition II
Multiple-class identification algorithm using genetic neural networks
Rustom Mamlook, Wiley E. Thompson
A multiple-class identification algorithm using genetic neural networks is presented. The algorithm is fast because it uses a feedforward neural network, and it achieves unsupervised learning through a Kohonen network with Z-axis normalization. The weights are initialized by genetic optimization to escape local minima. Performance is evaluated using a confusion-matrix method. The algorithm does not require the number of classes to be known a priori, and it also provides a threshold selection method. An example illustrates the application of the algorithm and evaluates its performance.
Multisensor Fusion
Sensor fusion and nonlinear prediction for anomalous event detection
Jose N.V. Hernandez, Kurt R. Moore, Richard C. Elphic
We consider the problem of using the information from two time series, each characterizing a different physical quantity, to predict the future state of the system and, based on that information, to detect and classify anomalous events. We stress the application of principal components analysis (PCA) to analyze and combine data from the different sensors. We construct both linear and nonlinear predictors. In particular, for linear prediction we use the least-mean-square (LMS) algorithm and for nonlinear prediction we use both back-propagation (BP) networks and fuzzy predictors (FP). As an application, we consider the prediction of gamma counts from past values of electron and gamma counts recorded by the instruments of a high altitude satellite.
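Of the predictors the abstract lists, the linear LMS predictor is simple enough to sketch in full. The version below predicts the next value of one channel from the most recent samples of both channels, which matches the two-time-series setting described above; the filter order and step size are assumed values, and the PCA preprocessing step is omitted.

```python
def lms_predict(x, y, order=3, mu=0.05):
    """Adaptive least-mean-square predictor: estimate y[t] from the previous
    `order` samples of both channels x and y. Returns the one-step-ahead
    predictions and the final weight vector."""
    w = [0.0] * (2 * order)
    preds = []
    for t in range(order, len(y)):
        u = x[t - order:t] + y[t - order:t]      # joint regressor vector
        yhat = sum(wi * ui for wi, ui in zip(w, u))
        e = y[t] - yhat                          # prediction error
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]   # LMS weight update
        preds.append(yhat)
    return preds, w
```

Anomalous events then show up as prediction errors that are large relative to the error level the filter has settled into, which is the detection principle the abstract describes.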
New Techniques in Automatic Target Recognition II
ATCURE: a heterogeneous high-performance architecture image recognition computer
R. Michael Hord, Jeremy A. Salinger
ATCURE is a real-time, open, high-performance computer architecture optimized for automatically analyzing imagery for such applications as target cueing, medical diagnosis, and character recognition. ATCURE's tightly coupled heterogeneous architecture achieves high performance and affordability. This paper discusses ATCURE in the context of the evolution of computer architectures, and shows that heterogeneous high-performance architecture (HHPA) computers, an emerging category of parallel processors characterized by superior cost performance of which ATCURE is an example, are well suited for a wide range of image recognition applications.
Signal and Image Processing I
Hand recognition by wavelet transform and neural network
Wei Wang, Zhonghao Bao, Qiang Meng, et al.
A new approach to human hand recognition is presented. It combines concepts from image segmentation, contour representation, wavelet transforms, and neural networks; with this approach, people are distinguished by their hands. After a person's hand contour is obtained, each finger of the hand is located and separated based on its points of sharp curvature. The two-dimensional (2-D) finger contour is then mapped to a one-dimensional (1-D) functional representation of the boundary called a finger signature. The wavelet transform decomposes the finger-signature signal into lower resolutions, retaining the most significant features, and the energy at each stage of the decomposition is calculated to extract the features of each finger. A three-layer artificial neural network with back-propagation training is employed to measure the performance of the wavelet transform. A database consisting of five hand images from each of twenty-eight different people is used in the experiment: three of the images are used for training the neural network, and the other two for testing the algorithm. The results presented illustrate high-accuracy human recognition using this scheme.
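The contour-to-signature mapping described above can be illustrated with a common choice of boundary function: the distance of each contour point from the contour's centroid. The abstract does not specify which functional representation the authors use, so this particular signature is an assumption for illustration only.

```python
import math

def signature(contour):
    """Map a closed 2-D contour (a list of (x, y) points) to a 1-D signature:
    the distance of each boundary point from the contour centroid. Convex
    bumps such as fingertips appear as peaks in this function."""
    n = len(contour)
    cx = sum(p[0] for p in contour) / n
    cy = sum(p[1] for p in contour) / n
    return [math.hypot(x - cx, y - cy) for x, y in contour]
```

A wavelet decomposition of this 1-D signal, with the energy computed at each stage, would then yield the per-finger feature vector fed to the neural network.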
Pyramid framework for image reconstruction from nonimaged laser speckle
Wissam A. Rabadi, Harley R. Myler, Arthur Robert Weeks, et al.
A multiresolution approach for reconstructing an image from the magnitude of its Fourier transform has been developed and implemented using the concept of pyramid sampling. In this approach, several iterations of the error reduction algorithm are performed at each level of the pyramid using a coarse-to-fine strategy, resulting in improved convergence and reduced computational cost.
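The pyramid-sampling scaffold that underlies the coarse-to-fine strategy can be sketched directly; the error-reduction iterations run at each level are omitted here, and 2x2 block averaging is an assumed (though common) choice of downsampling filter.

```python
def build_pyramid(img, levels=3):
    """Build an image pyramid by repeated 2x2 averaging. The image is a list
    of rows of floats with power-of-two dimensions. The returned list runs
    from full resolution down to the coarsest level; a coarse-to-fine scheme
    iterates on the last entry first and propagates the result upward."""
    pyr = [img]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        pyr.append([[(prev[2 * r][2 * c] + prev[2 * r][2 * c + 1]
                      + prev[2 * r + 1][2 * c] + prev[2 * r + 1][2 * c + 1]) / 4.0
                     for c in range(w)] for r in range(h)])
    return pyr
```

Running a few error-reduction iterations at the coarsest level first gives the finer levels a good starting estimate, which is where the improved convergence comes from.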
Multitarget Tracking and Sensor Management I
New class of Lagrangian relaxation-based algorithms for fast data association in multiple hypothesis tracking applications
Aubrey B. Poore, Alexander J. Robertson III, Peter J. Shea
Large classes of data association problems in multiple hypothesis tracking applications, including sensor fusion, can be formulated as multidimensional assignment problems. Lagrangian relaxation methods have been shown to solve these problems to the noise level in the problem in real time, especially for dense scenarios and for multiple scans of data from multiple sensors. This work presents a new class of algorithms that circumvent the difficulties of similar previous algorithms. The computational complexity of the new algorithms is shown via some numerical examples to be linear in the number of arcs.
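The core subproblem in this family of methods is worth making concrete: Lagrangian relaxation of an S-dimensional assignment problem repeatedly dualizes constraints until a 2-D assignment problem remains, which is then solved exactly. The sketch below solves that 2-D subproblem by brute-force enumeration, which is only viable for tiny instances; production trackers use specialized solvers (e.g., auction or JVC algorithms), and nothing here reproduces the paper's new algorithm class.

```python
from itertools import permutations

def assign_2d(cost):
    """Exactly solve a small n x n 2-D assignment problem by enumeration:
    find the permutation p minimizing sum_i cost[i][p[i]]. Returns the
    assignment and its total cost. O(n!) -- illustration only."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))
```

In a relaxation scheme, the multipliers adjusting `cost` are updated from the gap between this subproblem's optimum and a feasible solution recovered from it.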
Multisensor Fusion
Design and experimental validation of a robust-CFAR distributed multifrequency radar data fusion system
Stelios C.A. Thomopoulos, Nickens N. Okello
A robust constant false alarm rate (CFAR) distributed detection system that operates in heavy clutter with unknown distribution is presented. The system is designed to provide CFARness under clutter power fluctuations and robustness under unknown clutter and noise distributions. The system is also designed to operate successfully under unbalanced power distributions among sensors, and exhibits fault-tolerance in the presence of sensor power fluctuations. The test statistic at each sensor is a robust (in terms of signal-to-noise ratio distribution across sensors) CFAR t-statistic. In addition to the primary binary decisions, confidence levels are generated with each decision and used in the fusion logic to robustify the fusion performance and eliminate weaknesses of the Boolean fusion logic. The test statistic and the fusion logic are analyzed theoretically for Weibull and lognormal clutter. The theoretical performance is compared against Monte-Carlo simulations that verify that the system exhibits the desired characteristics of CFARness, robustness, insensitivity to power fluctuations, and fault-tolerance. The system is tested with experimental target-in-clear and target-in-clutter data. The experimental performance agrees with the theoretically predicted behavior when the target is visible to all three radars. When the target is not visible to two out of the three radars, due to a possible undetected misalignment, the fusion performance is compromised. Robustification of the fusion performance against unpredictable and undetectable degradation of data quality in the majority of the sensors is then achieved using geometric filtering. Geometric filtering is accomplished by using the Hough transform and additional information in the fusion design about the shape of the target trajectory(ies).
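The CFAR principle itself, before the robust t-statistic and fusion logic are layered on, is that each cell's detection threshold adapts to the locally estimated clutter level so the false-alarm rate stays constant. The sketch below shows the textbook cell-averaging variant as a stand-in; the paper's per-sensor t-statistic, the guard/training window sizes, and the scale factor here are all assumptions for illustration.

```python
def ca_cfar(power, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR over a 1-D power profile: declare a detection at
    cell i when its power exceeds `scale` times the mean of the surrounding
    training cells (guard cells adjacent to i are excluded so the target
    itself does not inflate the clutter estimate). Returns detected indices."""
    n = len(power)
    hits = []
    for i in range(n):
        cells = [power[j]
                 for j in range(max(0, i - guard - train),
                                min(n, i + guard + train + 1))
                 if abs(j - i) > guard]
        if cells and power[i] > scale * (sum(cells) / len(cells)):
            hits.append(i)
    return hits
```

Because the threshold tracks the local clutter mean, a rise in clutter power raises the threshold with it, which is what keeps the false-alarm rate constant.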
Signal and Image Processing I
Practical transform coding of multispectral imagery
In this paper we present a robust and implementable compression algorithm for multispectral imagery with a selectable quality level within the near-lossless to visually lossy range. The three-dimensional terrain-adaptive transform-based algorithm involves a one-dimensional Karhunen-Loeve transform (KLT) followed by a two-dimensional discrete cosine transform (DCT). The images are spectrally decorrelated via the KLT to produce the eigenimages. The resulting spectrally decorrelated eigenimages are then compressed using the JPEG algorithm. The key feature of this approach is that it incorporates the best methods available to fully exploit the spectral and spatial correlation in the data. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral decorrelation transformation based upon variations in the local terrain. The spectral and spatial modularity of the algorithm architecture allows the JPEG coder to be replaced by a totally different coder (e.g., DPCM). However, the significant practical advantage of this approach is that it is leveraged on the standard and highly developed JPEG compression technology. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about a 5:1 compression ratio (CR) to visually lossy beginning at around 40:1 CR.
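The spatial half of the KLT-plus-DCT pipeline described above rests on the DCT-II, which can be written out compactly. The sketch below is the 1-D orthonormal DCT-II; applying it along the rows and then the columns of each block gives the 2-D DCT used by JPEG. The normalization convention is one common choice, not necessarily the one used in the paper's implementation, and the KLT stage (an eigendecomposition of the spectral covariance) is omitted.

```python
import math

def dct2_1d(x):
    """Orthonormal 1-D DCT-II of a block. In a KLT+DCT codec this is applied
    along rows and columns of each eigenimage after spectral decorrelation;
    energy compacts into the low-order coefficients, which is what makes
    subsequent quantization and entropy coding effective."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * s)
    return out
```

Because this normalization makes the transform orthonormal, it preserves energy (Parseval), so coefficient quantization error maps directly to reconstruction error, which is what lets the codec's quality parameter trade fidelity for compression ratio predictably.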