Proceedings Volume 1470

Data Structures and Target Classification


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 August 1991
Contents: 4 Sessions, 29 Papers, 0 Presentations
Conference: Orlando '91, 1991
Volume Number: 1470

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions:
  • Multisensor Fusion and Signal Processing
  • Data Structures in Distributed Environments
  • Computational Methods and Architectures
  • Automatic Target Recognition
  • Data Structures in Distributed Environments
Multisensor Fusion and Signal Processing
Fusion or confusion: knowledge or nonsense?
Peter L. Rothman, Richard V. Denton
The terms 'data fusion,' 'sensor fusion,' 'multi-sensor integration,' and 'multi-source integration' have been used widely in the technical literature to refer to a variety of techniques, technologies, systems, and applications which employ and/or combine data derived from multiple information sources. Applications of data fusion range from real-time fusion of sensor information for the navigation of mobile robots to the off-line fusion of both human and technical strategic intelligence data. The Department of Defense Critical Technologies Plan lists data fusion in the highest priority group of critical technologies, but just what is data fusion? The DoD Critical Technologies Plan states that data fusion involves 'the acquisition, integration, filtering, correlation, and synthesis of useful data from diverse sources for the purposes of situation/environment assessment, planning, detecting, verifying, diagnosing problems, aiding tactical and strategic decisions, and improving system performance and utility.' More simply stated, sensor fusion refers to the combination of data from multiple sources to provide enhanced information quality and availability over that which is available from any individual source alone. This paper presents a survey of the state-of-the-art in data fusion technologies, system components, and applications. A set of characteristics which can be utilized to classify data fusion systems is presented. Additionally, a unifying mathematical and conceptual framework within which to understand and organize fusion technologies is described. A discussion of often overlooked issues in the development of sensor fusion systems is also presented.
Survey of multisensor data fusion systems
Robert J. Linn, David L. Hall, James Llinas
Multisensor data fusion integrates data from multiple sensors (and types of sensors) to perform inferences which are more accurate and specific than those from processing single-sensor data. Levels of inference range from target detection and identification to higher level situation assessment and threat assessment. This paper provides a survey of more than 50 data fusion systems and summarizes their application, development environment, system status and key techniques. The techniques are mapped to a taxonomy previously developed by Hall and Linn (1990); these include positional fusion techniques, such as association and estimation, and identity fusion methods, including statistical methods, nonparametric methods, and cognitive techniques (e.g. templating, knowledge-based systems, and fuzzy reasoning). An assessment of the state of fusion system development is provided.
Adaptive selection of sensors based on individual performances in a multisensor environment
Ramon Parra, Wiley E. Thompson, Ajit P. Salvi
An important issue in the fusion of multisensor data in the context of scene interpretation and multitarget tracking is the ability to evaluate and characterize sensor performance and establish confidence factors for the individual sensors. This paper presents a methodology for adaptively determining sensor confidence factors based upon sensor performance as measured by the degree of consensus among the various sensors. The fusion process is based upon evidential reasoning and statistical clustering which utilize the sensor confidence factors. The sensor confidence factors are based upon sensor characteristics, environmental conditions, and sensor performance. The individual sensor performance is derived in terms of the fusion results and the degree of consensus between the individual sensor data and the fusion data. Experimental results are presented to illustrate the technique and to demonstrate the effectiveness of the methodology in scene interpretation.
Fusion of multiple-sensor imagery based on target motion characteristics
Thomas R. Tsao, John M. Libert
Fusion of multiple sensor imagery is an effective approach to clutter rejection in target detection and recognition. However, image registration at the pixel level and even at the feature level poses significant problems. We are developing neural network computational schemes that will permit fusion of multiple sensor information according to target motion characteristics. One such scheme implements the Law of Common Fate to differentiate moving targets from dynamic background clutter on the basis of homogeneous velocity; spatiotemporal frequency analysis is applied to time-varying sensor imagery to detect and locate individual moving objects. Another computational scheme applies Gabor filters and differential Gabor filters to calculate image flow and then employs a Lie group-based neural network to interpret the 2D image flow in terms of 3D motion, and to delineate regions of homogeneous 3D motion; the motion-keyed regions may be correlated among sensor types to associate multiattribute information with the individual targets in the scene and to exclude clutter.
Pseudo K-means approach to the multisensor multitarget tracking problem
Wiley E. Thompson, Ramon Parra, Chin-Wang Tao
This paper presents a methodology for multitarget tracking based on multisensor data in a cluttered environment. Two very important problems of multitarget tracking are the clustering of multisensor measurements and data association. A clustering algorithm is presented which is based upon a pseudo k-means algorithm. This algorithm does not require a priori knowledge of the number of clusters expected and is computationally efficient in that no iterations are required. A data association technique is presented which does not require a posteriori probabilities and utilizes only the basic augmented Kalman filter. Examples are presented to illustrate the effectiveness of the approach.
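The non-iterative, unknown-cluster-count idea described in the abstract can be sketched as a one-pass nearest-center rule. This is an illustrative reading only, not the authors' algorithm; the gating parameter `radius` is a hypothetical stand-in for whatever association threshold the paper uses.

```python
import math

def pseudo_kmeans(points, radius):
    """One-pass clustering sketch: assign each point to the nearest existing
    cluster center within `radius`, otherwise start a new cluster.
    No iterations and no a priori cluster count are required."""
    centers, members = [], []
    for p in points:
        best, best_d = None, radius
        for i, c in enumerate(centers):
            d = math.dist(p, c)
            if d <= best_d:
                best, best_d = i, d
        if best is None:
            centers.append(list(p))      # seed a new cluster at this point
            members.append([p])
        else:
            members[best].append(p)
            n = len(members[best])
            # incremental centroid update keeps the pass single-sweep
            centers[best] = [(c * (n - 1) + x) / n
                             for c, x in zip(centers[best], p)]
    return centers, members
```

Measurements arriving closer than `radius` to an existing center merge into it; anything farther away founds a new cluster, so the cluster count emerges from the data.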
Conversion of sensor data for real-time scene generation
Vibeke Libby, R. Keith Bardin
In order to perform real-time signal processing and data analysis of fused sensor data and at the same time optimize the use of existing hardware, the scene information most often has to be converted into a particular format. In general, this format conversion is viewed as part of the sensor fusion process, but in this paper it will be treated as a separate entity. In other words, the concentration is on building a representation of the environment which lends itself directly to real-time true three-dimensional processing of the environment with higher-level path planning in mind. An example scenario is using the data as input for terrain-following and terrain-avoidance algorithms, where the output from the sensor data processing is a world model that directly applies to intersect analysis and evaluation. The intersect processing is performed in a hardware unit called the TIGER (three-dimensional intersect & geometrical evaluator in real time). The TIGER is based on VHSIC (very-high-speed integrated-circuit) technology, and performs intersect calculations at rates of the order of millions of objects/sec using a particular three-dimensional object format. This hardware subsystem is designed to be useful for a wide range of airborne, underwater and space applications. In order to address a broad area of sensor types, the architecture is made generic, and has potential applications in solving selected sensor-fusion computational bottlenecks as well. Standard interfaces simplify subsystem coupling to a variety of host processor systems. The TIGER hardware has been built and tested extensively.
MITAS: multisensor imaging technology for airborne surveillance
John D. Thomas
MITAS, a unique and low-cost solution to the problem of collecting and processing multisensor imaging data for airborne surveillance operations, has been developed. MITAS results from integrating the established and proven real-time video processing, target tracking, and sensor management software of TAU with commercially available image exploitation and map processing software. The MITAS image analysis station (IAS) supports airborne day/night reconnaissance and surveillance missions involving low-altitude collection platforms employing a suite of sensors to perform reconnaissance functions against a variety of ground and sea targets. The system will detect, locate, and recognize threats likely to be encountered in support of counternarcotic operations and in low-intensity conflict areas. The IAS is capable of autonomous, near real-time target exploitation and has the appropriate communication links to remotely located IAS systems for more extended analysis of sensor data. The IAS supports the collection, fusion, and processing of three main imaging sensors: daylight imagery (DIS), forward looking infrared (FLIR), and infrared line scan (IRLS). The MITAS IAS provides support to all aspects of the airborne surveillance mission, including sensor control, real-time image enhancement, automatic target tracking, sensor fusion, freeze-frame capture, image exploitation, target data-base management, map processing, remote image transmission, and report generation.
Data Structures in Distributed Environments
Routing in distributed information systems
Zbigniew W. Ras
In our distributed information system, two information structures are maintained--the application database and the database used to route queries in the computer network. We introduce a class of routing tables and show their use in the search process within the computer network. Any site of a distributed information system which is unable to answer a query has to search for a site which can answer it. The information stored in routing tables has a strong impact on the speed of this search. It is proposed that each site learns new facts from each of its neighbors. These new facts are compiled into rules and used to build knowledge bases at all sites of a distributed information system. This extended distributed information system is called a distributed knowledge-based system. The main goal of this paper is to suggest a strategy for reconstruction of collapsed application databases and collapsed routing tables.
Flow-control mechanism for distributed systems
Jacek Maitan
A new approach to the rate-based flow control in store-and-forward networks is evaluated. Existing methods display oscillations in the presence of transport delays. The proposed scheme is based on the explicit use of an embedded dynamic model of a store-and-forward buffer in a controller's feedback loop. It is shown that the use of the model eliminates the oscillations caused by the transport delays. The paper presents simulation examples and assesses the applicability of the scheme in the new generation of high-speed photonic networks where transport delays must be considered.
Maximum likelihood estimation of differential delay and differential Doppler
Herbert Gary Greene, Jay MacMullan
In this paper, a maximum likelihood estimator for differential delay and differential Doppler is developed which is appropriate for narrowband, high-frequency signals. The estimator is designed to accommodate a relatively unconstrained noise environment and to operate without prior knowledge of the signal's spectral characteristics.
State estimation for distributed systems with sensing delay
Harold L. Alexander
Control of complex systems such as remote robotic vehicles requires combining data from many sensors where the data may often be delayed by sensory processing requirements. The number and variety of sensors make it desirable to distribute the computational burden of sensing and estimation among multiple processors. Classic Kalman filters do not lend themselves to distributed implementations or delayed measurement data. The alternative Kalman filter designs presented in this paper are adapted for delays in sensor data generation and for distribution of computation for sensing and estimation over a set of networked processors.
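The delayed-measurement adaptation described above can be illustrated with a minimal scalar filter that keeps a short state history, rewinds to the measurement's timestamp, fuses it there, and re-propagates to the present. This is a sketch of the general rewind-and-refilter idea under a random-walk process model, not the paper's distributed design; all names here are hypothetical.

```python
class DelayedKalman1D:
    """Minimal scalar Kalman filter that accepts a measurement of a past
    state by rewinding to the measurement time and re-running the filter
    forward. Process model: random walk with variance q per step."""

    def __init__(self, x0, p0, q, r):
        self.q, self.r = q, r          # process / measurement noise variances
        self.history = [(x0, p0)]      # (state, covariance) at each step

    def predict(self):
        x, p = self.history[-1]
        self.history.append((x, p + self.q))   # random-walk time update

    def update_at(self, k, z):
        """Fuse measurement z taken at past step k, then re-propagate."""
        x, p = self.history[k]
        K = p / (p + self.r)                   # Kalman gain at step k
        x, p = x + K * (z - x), (1 - K) * p
        self.history[k] = (x, p)
        for i in range(k + 1, len(self.history)):
            x, p = x, p + self.q               # redo the time updates
            self.history[i] = (x, p)

    @property
    def estimate(self):
        return self.history[-1][0]
```

A late measurement still pulls the current estimate toward the measured value, at the cost of storing and replaying the intervening steps.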
Step towards optimal topology of communication networks
Zbigniew Michalewicz
Genetic algorithms are adaptive algorithms which find solutions to problems by an evolutionary process based on natural selection. They can be used to find approximate solutions to optimization problems in cases where finding the precise optimum is prohibitively expensive, or where no algorithm is known. This paper discusses the use of (nonstandard) genetic algorithms for solving an optimization problem for a communication network. In the implementation of the system, a graph representation of a solution of the problem was used, as opposed to the representations based on bit strings (as is done in most work on genetic algorithms). This work is also a part of a larger project to create a new programming environment to support all kinds of optimization problems.
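The graph (rather than bit-string) representation mentioned in the abstract can be sketched with chromosomes that are edge sets of an undirected graph. This is a toy illustration of the representational idea only; the paper's actual operators, cost function, and parameters are not given in the abstract, so everything below is a hypothetical stand-in.

```python
import random

def ga_topology(nodes, cost, pop_size=20, gens=50, seed=1):
    """Toy genetic algorithm over edge-set chromosomes: each individual
    is a set of undirected edges on `nodes`. Lower `cost` is better."""
    rng = random.Random(seed)
    all_edges = [(i, j) for i in nodes for j in nodes if i < j]

    def random_individual():
        return {e for e in all_edges if rng.random() < 0.5}

    def mutate(ind):
        e = rng.choice(all_edges)      # toggle one edge on or off
        return ind ^ {e}

    def crossover(a, b):
        # keep edges shared by both parents; inherit the rest at random
        return (a & b) | {e for e in a ^ b if rng.random() < 0.5}

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]         # truncation selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=cost)
```

Because crossover and mutation act directly on edge sets, every offspring is automatically a valid graph, which is the main appeal of a problem-specific representation over bit strings.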
Fault-tolerant capacity-1 protocol for very fast local networks
Wlodek Dobosiewicz, Pawel Gburzynski
A substantial amount of attention has been paid recently to DQDB--a proposed bus architecture and MAC-level protocol for fast local and metropolitan area networks. The main advantage of this solution over previous concepts is in the fact that the performance of DQDB does not degrade with the increasing value of a--the ratio of the packet length to the propagation length of the bus expressed in bits. A large value of a characterizes networks that are either long geographically or very fast, or both. Thus, at the threshold of the forthcoming era of very high transmission rates and increasing demands for wide-area networks with the functionality of LANs, DQDB has been enthusiastically received by the networking community. DQDB's disadvantages can be stressed in the following two points: (1) The flexibility of the network is limited: each station must know the relative location on the bus of every other station. (2) The network is susceptible to faults: the failure of one of the extreme stations or disconnection of one bus segment makes it totally inoperable. In this paper, a capacity-1 network inspired by the DQDB concept which attempts to eliminate the above disadvantages of original DQDB is proposed. The solution is based on the UU-BUS topology, i.e., a network consisting of two separate, folded, unidirectional busses.
Computational Methods and Architectures
Scanning strategies for target detection
Izidor Gertner, Yehoshua Y. Zeevi
This paper presents a new method of image scanning based on the solution of a set of linear congruences, called generalized raster scan (GRS), which is a generalization of the classic raster scan. The main properties of the GRS and algorithms for its implementations are elaborated. The GRS can be regarded as a linear combination of raster and random scans. The new method is in particular instrumental in applications where on-line acquisition and hierarchical processing is of importance. As such, the new approach is suitable for target finding, clustering, and visual inspection. It is also suitable for image processing and/or transmission with progressive resolution.
Scaling of digital shapes with subpixel boundary estimation
Jack Koplowitz
Bilevel images consist of a grid of square cells, with side T, which are colored either black or white depending on whether the center of the cell lies in the black or white region of the pre-image. The digitized boundary can be interpreted as a 4-directional chain code. Subpixel accuracy is considered in determining the edge or boundary of the pre-image, and a practical algorithm is given for its implementation. In particular, small curving objects digitized on a coarse grid are considered. Experimental results show a surprisingly high degree of reconstruction, allowing for the upward scaling or enlargement of small objects from their digitized representation.
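The 4-directional chain code mentioned above encodes a boundary as a start pixel plus a sequence of unit moves. A minimal decoder, illustrative only, might look like:

```python
# 4-directional chain code: 0 = right, 1 = up, 2 = left, 3 = down
MOVES = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def decode_chain(start, code):
    """Recover boundary pixel coordinates from a 4-directional chain code."""
    pts = [start]
    x, y = start
    for c in code:
        dx, dy = MOVES[c]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts
```

A closed boundary is one whose code returns to the starting pixel; subpixel estimation then refines where the true edge crosses each coded step.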
Markov random fields on a SIMD machine for global region labelling
Gregory M. Budzban, John M. DeCatrel
The Markov random field (MRF) formulation allows independence over small pixel neighborhoods suitable for SIMD implementation. The equivalence between the Gibbs distribution over global configurations and MRF allows description of the problem as maximizing a probability or, equivalently, minimizing an energy function (EF). The EF is a convenient device for integrating 'votes' from disparate, preprocessed features--mean intensity, variance, moments, etc. Contributions from each feature are simply weighted and summed. The EF is flexible and can be easily modified to capture a priori beliefs about the distribution of the configuration space, and still remain theoretically sound. A unique formulation of the EF is given. Notably, a deterministic edge finder contributes to the EF. Weights are independently assigned to each feature's report (indicators). Simulated annealing is the theoretical mechanism which guarantees convergence in distribution to a global minimum. Because the number of iterations required by theory is prohibitively large, the authors depart from theory and implement a fast, heuristic 'cooling' schedule. A videotape of results on simulated FLIR imagery demonstrates real-time update over the entire image. Actual convergence is still too slow for real-time use (O(1 min.)), but the quality of results compares favorably with other region labeling schemes.
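The weighted-sum energy function described in the abstract can be sketched generically: a data term that sums weighted feature 'votes' per pixel, plus a smoothness term penalizing disagreeing neighbors. The feature callables, weights, and neighborhood function below are hypothetical placeholders, not the authors' formulation.

```python
def energy(labels, features, weights, beta, neighbors):
    """Illustrative MRF energy: weighted per-pixel feature votes plus a
    Potts-style smoothness penalty over neighboring pixels.
    labels    : dict mapping pixel -> label
    features  : list of callables f(pixel, label) -> cost
    weights   : one weight per feature
    beta      : smoothness weight
    neighbors : callable pixel -> iterable of neighboring pixels"""
    e = 0.0
    for p, lab in labels.items():
        # data term: each feature reports a cost for assigning `lab` at p
        e += sum(w * f(p, lab) for f, w in zip(features, weights))
        # smoothness term: penalize neighbors carrying a different label
        e += beta * sum(1.0 for q in neighbors(p)
                        if q in labels and labels[q] != lab)
    return e
```

Minimizing this energy (by annealing, or by the heuristic cooling schedule the authors adopt) trades feature evidence against label smoothness.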
Efficient software techniques for morphological image processing with desktop computers
Michael A. Zmuda, Louis A. Tamburino, Mateen M. Rizki
The basis of a system for processing binary images with the operations of mathematical morphology is described. This system exploits the properties of mathematical morphology to minimize computing time and storage requirements. Images are stored in data structures which are memory-efficient and allow several images to be processed simultaneously. Techniques are also presented for efficiently storing globally sized structuring elements. These ternary images are stored in data structures which utilize an adaptive window to provide storage for a 2M X 2N specification space in an optimal M X N data structure. This representation provides efficient storage, retrieval, and comparison of generalized structuring elements.
Innovative architectural and theoretical considerations yield efficient fuzzy logic controller VLSI design
Paul Basehore, Joseph Thomas Yestrebsky
An efficient general-purpose fuzzy logic inference engine for real-time commercial embedded control and signal processing applications is reported. The monolithically implemented engine is capable of processing up to 64 rules, with up to 16 fuzzy membership functions per rule. Novel VLSI implementation is achieved through consideration of alternative computational techniques for fuzzy rule processing. Specifically, an embedded digital neural network is employed to rapidly compute minima across rule membership functions, achieving a computation rate of greater than 20 million fuzzy logical inferences per second. Additional implementation efficiency is achieved through algorithmic methods of membership function construction which are logically consistent with the theory of fuzzy sets. A proprietary method for constructing membership function values based on only the linear distance between an input and the user-defined membership function center value provides a highly efficient means for constructing membership functions.
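A distance-based membership construction of the kind described (the chip's exact formula is proprietary and not given in the abstract) can be illustrated with a triangular function that is 1.0 at the center and falls linearly with distance, together with the min-across-antecedents rule strength the engine computes in hardware:

```python
def membership(x, center, half_width):
    """Illustrative triangular membership from the linear distance to the
    center: 1.0 at the center, 0.0 at +/- half_width and beyond."""
    d = abs(x - center)
    return max(0.0, 1.0 - d / half_width)

def rule_strength(inputs, centers, half_widths):
    """Rule firing strength = minimum membership across the antecedents,
    the quantity the abstract says the embedded network computes."""
    return min(membership(x, c, w)
               for x, c, w in zip(inputs, centers, half_widths))
```

Only a subtraction, an absolute value, and a scaled compare are needed per membership evaluation, which is what makes a distance-based construction attractive in silicon.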
Scientific data compression for space: a modified block truncation coding algorithm
Wei-Wei Lu, Michael Paul Gough, Peter N. H. Davies
Satellite science experiments generate a great amount of on-board data. This must be compressed before transmission due to the limited telemetry channel capacity. But the data are essentially random and thus difficult to compress efficiently. This paper presents a modified block truncation coding (MBTC) algorithm for image data compression, especially applicable to scientific data compression, in which the science information should be preserved. The new algorithm has the following new features: (1) optimal quantization--the block truncation coding (BTC) quantizer is replaced by a Lloyd optimal quantizer; (2) differential coding--the differences of pixels of adjoining scan lines (rather than the amplitudes of pixels) are quantized; (3) entropy coding--the quantized outputs are encoded by means of the entropy coding method; and (4) error control--this is included to generate the reconstructed images in which no error is greater than a preset threshold. Simulation results are presented for the compression of satellite geophysical data which are similar to image data. It is shown that the new algorithm, MBTC, is able to maintain many details of the original data and performs better than BTC in terms of reducing mean-square error MSE (see appendix), increasing compression ratio R (see appendix), and generating a better visual quality of the reconstructed images. The techniques used here are especially applicable to space-acquired science data because (1) the on-board computational requirements are low, and (2) the scientific data information content is maintained. Further improvements to the MBTC are also discussed.
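For reference, the classic BTC quantizer that MBTC replaces keeps one bit per pixel plus two reconstruction levels chosen to preserve the block's first two sample moments. A minimal sketch of that baseline (not the Lloyd-quantizer MBTC itself):

```python
import math
import statistics

def btc_block(block):
    """Classic two-level BTC for one block: store a bitmap of which pixels
    are at or above the block mean, plus two levels (a, b) chosen so the
    reconstruction preserves the block's mean and standard deviation."""
    m = statistics.fmean(block)
    s = statistics.pstdev(block)
    bits = [1 if x >= m else 0 for x in block]
    q = sum(bits)                  # pixels at or above the mean
    n = len(block)
    if q in (0, n):                # flat block: both levels equal the mean
        return m, m, bits
    a = m - s * math.sqrt(q / (n - q))      # low reconstruction level
    b = m + s * math.sqrt((n - q) / q)      # high reconstruction level
    return a, b, bits

def btc_decode(a, b, bits):
    """Reconstruct the block from the bitmap and the two levels."""
    return [b if bit else a for bit in bits]
```

MBTC, per the abstract, applies a quantizer of this kind to scan-line differences instead of raw amplitudes, swaps in a Lloyd optimal quantizer, entropy-codes the output, and bounds the per-pixel error.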
Automatic Target Recognition
Optical correlation filters for large-class OCR applications
David P. Casasent, Anand K. Iyer, Srinivasan Gopalaswamy
The performance of two new optical correlation filters (G-MACE and MINACE) for large class (many fonts and true class words) OCR (optical character recognition) applications is considered. We consider filters that can recognize many key words in upper case (UC) and mixed case (MC) and various point sizes in the presence of OCR scanner sampling errors. New results are presented and guidelines for large class filters are advanced.
Building an optical pattern recognizer
Perry C. Lindberg, Don A. Gregory
The present portable solid-optics correlator for real-time pattern recognition uses pixelated spatial light modulators and phase-only filters, and will operate on sensor information extracted from any sensor system. Prospective operations of such a rugged and portable optical pattern recognizer include smart weapon midcourse guidance and navigation, target recognition, aim-point selection, and precise terminal homing. An account is given of the testing procedure being used by the U.S. Army missile command for a missile-guidance application of this optical correlator.
Efficient use of data structures for digital monopulse feature extraction
Robert McEachern, Andrew J. Eckhardt, Alexander Nauda
Data structures that are carefully matched to the processing technology used to implement them can lead to cost-effective designs for radar signal classification/sorting devices. During the past decade, HRB Systems has developed several generations of devices for use in sorting radar signals. The older devices employed analog demodulators and complicated post-demodulation processing to generate a large feature vector containing a set of parameters describing each measured pulse. Clever use of algorithmic data structures specifically tailored to exploit the capabilities of VLSI ASIC fabrication technology eliminated the need for multipliers, making possible a digital demodulator that is much smaller, faster, more reliable, and less expensive than the analog versions. Little information is lost when much smaller feature vectors are used to describe the pulses, enabling further substantial reductions in the complexity of the post-demodulation processing. As a result, the new feature extractor has a price/performance ratio over a hundred times better than the older devices. A four-chip implementation of the new feature extractor is described.
Survey of radar-based target recognition techniques
Marvin N. Cohen
A variety of approaches that are available for the attempt to add target recognition capability to the usual radar functions of surveillance and track are discussed. These approaches include the utilization of fine-resolution, wide-bandwidth Doppler techniques for the recognition of moving targets, high and ultra-high range resolution and polarimetric techniques for the recognition of stationary targets, and high cross-range resolution techniques for the recognition of both moving and stationary targets. Definitions of the levels of recognition that may be attempted as well as the fundamentals of recognition system design, development, and test are provided.
Ground target classification using moving target indicator radar signatures
Chun S. Yoon
The comparative performance of two target-classification algorithms relative to the ultimate performance obtainable through comprehensive use of the spectral data base is presently examined through empirical upper-bound calculations for the case of X-band radar signatures of a variety of military vehicles moving in the 5-15 mph range. This analysis indicates that performance does not significantly change when the number of measurements is reduced to 2/3 the initial number. Target classification with the present algorithms is found to be restricted to favorable aspect angles only; the computationally simple algorithm based on the double-Doppler signal is effective in a sector of favorable aspect angles, if the signal is not masked.
Assumption truth maintenance in model-based ATR algorithm design
Laura Fulton Bennett, Rubin Johnson, Cecil Ivan Hudson
In a given approach to automatic target recognition (ATR) algorithm design, an underlying network of assumptions provides computational and conceptual efficiency. This network includes concrete assumptions about the physical characteristics of the real-world scene and abstract assumptions about knowledge acquisition and representation. A facility for the identification and tracking of assumptions in dynamic systems is critical for algorithm design and performance evaluation purposes. The intersection of assumptions at a designated stage of the target recognition process defines the valid domain of application of the ATR system. An approach to assumption truth maintenance for application to complex, visual pattern recognition systems is described. The types of assumptions made in key-feature, model-based ATR systems are systematically identified, from the low-level pixel domain to the high-level mission statement. The approach permits the tracking of algorithm assumptions as they propagate through the pattern recognition process and provides for belief formation and revision to maintain consistency. The approach is demonstrated on a prototype set of infrared test imagery at varying levels of resolution and signal-to-noise ratio, representative of the given problem domain.
Automatic and operator-assisted solid modeling of objects for automatic recognition
J. Ross Stenstrom, C. Ian Connolly
Model-based recognition usually leaves open the question of how suitable object models are generated. Our solid models are face/edge/vertex representations where the faces form 2-cycle(s) properly enclosing a region of 3-D space. These models facilitate generation of rendered images, computing numeric features for the object, and answering questions such as feature visibility for a given orientation. In this paper, a process for the generation of solid models from 2-D images, range images, or line drawings is described. Examples are provided.
Algorithm for statistical classification of radar clutter into one of several categories
In this paper, an algorithm is described which successfully classifies radar clutter into one of several major categories, including bird, weather, and target classes. Statistical non-Bayesian classification of objects is based on the data samples (each sample being drawn from a different class). It is applied to a set of features derived from the reflection coefficients that contain all spectral information about the observed object and are computed using the multi-segment version of Burg's formula. These coefficients are then transformed and grouped to meet the requirements for multivariate Gaussian behaviour. The proposed algorithm is based on a new approach to solving the problem of testing whether a given sample of multivariate observations could have come from a multivariate normal population with an unknown mean vector and dispersion matrix. By using a series of transformations it is shown that the problem of testing for multivariate normality can be reduced to that of testing for univariate uniformity U(0,1). In this case, the problem of classification, that is of assigning an observed object to its proper group, admits a simple solution.
DIGNET: a self-organizing neural network for automatic pattern recognition and classification
Stelios C.A. Thomopoulos, Dimitrios K. Bougoulias
The demonstrated ability of artificial neural networks to retrieve information that is addressed by content makes them a competitive candidate for automatic pattern recognition. Furthermore, their capability to reconstruct their memory from partially presented stored information complements their recognition capabilities with classification. However, artificial neural networks (ANNs) are known to possess preferential behavior as far as the initial conditions and noise interference are concerned. A self-organizing artificial neural network is presented that exhibits deterministically reliable behavior to noise interference when the noise does not exceed a specified level of tolerance. The complexity of the proposed ANN, in terms of neuron requirements versus stored patterns, increases linearly with the number of stored patterns and their dimensionality. The self-organization of the proposed DIGNET is based on the idea of competitive generation and elimination of attraction wells in the pattern space. The same artificial neural network can be used both for pattern recognition and classification.
Data Structures in Distributed Environments
Recognition of contacts between objects in the presence of uncertainties
Jing Xiao
This paper discusses on-line recognition of contacts between objects for robotics applications, using multiple sensor modalities. Specifically, it shows how to integrate and reason about sensor information obtained by position/orientation and vision sensors in the presence of sensing uncertainties.