Proceedings Volume 3376

Sensor Fusion: Architectures, Algorithms, and Applications II


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 20 March 1998
Contents: 6 Sessions, 22 Papers, 0 Presentations
Conference: Aerospace/Defense Sensing and Controls 1998
Volume Number: 3376

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Feature- and Decision-Level Fusion
  • Sensor Fusion Architectures
  • Data- and Image-Level Fusion
  • Applications I
  • Sensor Fusion Algorithms
  • Applications II
  • Feature- and Decision-Level Fusion
Feature- and Decision-Level Fusion
Intelligent processing techniques for sensor fusion
Katherine A. Byrd, Bart Smith, Doug Allen, et al.
Intelligent processing techniques that can effectively combine sensor data from disparate sensors, by selecting and using only the most beneficial individual sensor data, are a critical element of exoatmospheric interceptor systems. A major goal of these algorithms is to provide robust discrimination against stressing threats under poor a priori conditions and to incorporate adaptive approaches in off-nominal conditions. This paper summarizes the intelligent processing algorithms being developed, implemented, and tested to intelligently fuse data from passive infrared and active LADAR sensors at the measurement, feature, and decision levels. These intelligent algorithms employ dynamic selection of individual sensor features and weighting of multiple classifier decisions to optimize performance under good a priori conditions and robustness under poor a priori conditions. Features can be dynamically selected based on an estimate of feature confidence, which is determined from feature quality and from weighting terms derived from the quality of the sensor data and the expected phenomenology. Multiple classifiers are employed that use both fuzzy logic and knowledge-based approaches to fuse the sensor data and provide a target lethality estimate. Target designation decisions can be made by fusing weighted individual classifier decisions whose output contains an estimate of the confidence in the data and in the discrimination decisions. This confidence can be used in real time to dynamically select different sensor feature data or to request additional sensor data on specific objects that have not been confidently identified as lethal or non-lethal. The algorithms are implemented in C within a graphical user interface framework, using dynamic memory allocation and sequential implementation of the feature algorithms. The baseline set of fused sensor discrimination algorithms with intelligent processing is described in this paper, and example results from the algorithms are shown based on static range sensor measurement data.
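The abstract does not spell out the weighting rule itself; as a rough, hypothetical sketch of the general idea of confidence-weighted decision-level fusion (the function names, shapes, and normalization scheme are assumptions, not the authors' algorithm):

```python
import numpy as np

def fuse_decisions(class_probs, confidences):
    """Confidence-weighted fusion of classifier outputs.

    class_probs : (n_classifiers, n_classes) array of per-classifier
                  class probability estimates (e.g. lethal vs. non-lethal).
    confidences : (n_classifiers,) confidence weights, here standing in
                  for terms derived from sensor data quality and
                  expected phenomenology.
    Returns the fused class distribution and the winning class index.
    """
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                      # normalize the weights
    fused = w @ np.asarray(class_probs)  # weighted average of distributions
    return fused, int(np.argmax(fused))

# Example: two classifiers (IR-based and LADAR-based), two classes.
ir_probs = [0.6, 0.4]     # [lethal, non-lethal] from the IR classifier
ladar_probs = [0.2, 0.8]  # from the LADAR classifier
fused, decision = fuse_decisions([ir_probs, ladar_probs],
                                 confidences=[0.3, 0.9])
print(fused, decision)    # the higher-confidence LADAR view dominates
```

In the paper's setting, objects whose fused confidence stays low would trigger requests for additional sensor data rather than an immediate designation.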
Optimal features-in feature-out (FEI-FEO) fusion for decisions in multisensor environments
This study presents a formal methodology for the problem of feature-level fusion, which has previously been addressed in the literature mostly in an ad hoc manner on a case-by-case basis. The input set of features extracted from multiple sensors (data sources) is optimally fused to derive a synthetic feature that enhances the effective discrimination potential among the defined set of decision classes. This 'features-in feature-out' (FEI-FEO) fusion process, unlike most other fusion schemes reported in the literature, is designed through a formal learning phase in which an optimal mapping from the multi-sensor feature space to a single unified feature is developed. This learning, accomplished through a new composite random and deterministic search-based optimization tool, defines the transformation for the FEI-FEO process. The transformation is applied to the multi-sensor feature sets in the operational phase to derive the fused feature values corresponding to the objects under observation. The corresponding classification decisions are made on the basis of the relative closeness of these feature values to the different class mean values in the transformed one-dimensional feature space. The methodology has been implemented in MATLAB, which, being a vector/matrix-oriented language, is well suited to problems in pattern recognition and learning. The method is applied to well-known data sets available on the web for testing pattern recognition algorithms, to assess its effectiveness relative to traditional classification methods from both conceptual and computational viewpoints.
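The paper's optimizer is a composite random/deterministic search whose details the abstract does not give; the sketch below substitutes a plain random search maximizing a Fisher-style separation criterion and classifies by nearest class mean in the fused one-dimensional space, purely to illustrate the FEI-FEO flow (all names are hypothetical):

```python
import numpy as np

def separation(w, X, y):
    """Spread of class means relative to within-class spread in the
    fused 1-D feature z = X @ w (a crude discrimination criterion)."""
    z = X @ w
    classes = np.unique(y)
    means = np.array([z[y == c].mean() for c in classes])
    within = sum(z[y == c].var() for c in classes)
    return means.var() / (within + 1e-12)

def learn_feifeo(X, y, iters=2000, seed=0):
    """Learning phase: random search for the fused-feature mapping
    (stand-in for the paper's composite search-based optimizer)."""
    rng = np.random.default_rng(seed)
    best_w, best_s = None, -np.inf
    for _ in range(iters):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        s = separation(w, X, y)
        if s > best_s:
            best_w, best_s = w, s
    return best_w

def classify(w, X, y, x_new):
    """Operational phase: nearest class mean in the fused 1-D space."""
    z = X @ w
    classes = np.unique(y)
    means = np.array([z[y == c].mean() for c in classes])
    return classes[np.argmin(np.abs(x_new @ w - means))]
```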
To fuse or not to fuse: fuser versus best classifier
Nageswara S. V. Rao
We are given a sample from a class defined on a finite-dimensional Euclidean space and distributed according to an unknown distribution, together with a set of classifiers, each of which chooses a hypothesis with least misclassification error from a family of hypotheses. We address the question of choosing the classifier with the best performance guarantee versus combining the classifiers using a fuser. We first describe a fusion method based on an isolation property such that the performance guarantee of the fused system is at least as good as that of the best classifier. For the more restricted case of deterministic classes, we present a method based on error set estimation such that the performance guarantee of fusing all classifiers is at least as good as that of fusing any subset of them.
A trainable decisions-in decision-out (DEI-DEO) fusion system
Most of the decision fusion systems proposed hitherto in the literature for multiple-data-source (sensor) environments operate on the basis of predefined fusion logic, be it crisp (deterministic), probabilistic, or fuzzy in nature, with no specific learning phase. The fusion systems that are trainable, i.e., ones that have a learning phase, mostly operate in the features-in-decision-out mode, which essentially reduces the fusion process functionally to a pattern classification task in the joint feature space. In this study, a trainable decisions-in decision-out fusion system is described that estimates a fuzzy membership distribution spread across the different decision choices, based on the performance of the different decision processors (sensors) on each training sample (object) associated with a specific ground truth (true decision). Based on a multi-decision-space histogram analysis of the performance of the different processors over the entire training data set, a look-up table associating each cell of the histogram with a specific true decision is generated, which forms the basis for the operational phase. In the operational phase, for each set of decision inputs, a pointer into the previously learnt look-up table is generated, from which a fused decision is derived. This methodology, although primarily designed for fusing crisp decisions from multiple decision sources, can be adapted for the fusion of fuzzy decisions when such are the inputs from these sources. Examples illustrating the benefits and limitations of the crisp and fuzzy versions of the trainable fusion systems are also included.
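A minimal sketch of the histogram/look-up-table scheme as described, for the crisp-decision case (the fuzzy membership spread of the actual system is not modeled; names are hypothetical):

```python
import numpy as np
from collections import Counter, defaultdict

def train_deideo(decisions, truth):
    """Learning phase: build the look-up table from the
    multi-decision-space histogram.

    decisions : (n_samples, n_sources) array of crisp source decisions.
    truth     : (n_samples,) array of ground-truth decisions.
    Each cell (tuple of source decisions) is mapped to the true decision
    most often observed in that cell during training.
    """
    hist = defaultdict(Counter)
    for row, t in zip(decisions, truth):
        hist[tuple(row)][t] += 1
    return {cell: c.most_common(1)[0][0] for cell, c in hist.items()}

def fuse(table, source_decisions, fallback=None):
    """Operational phase: index the learnt table with the decision tuple."""
    return table.get(tuple(source_decisions), fallback)

# Example: three sources, binary decisions.
D = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0], [1, 1, 1]])
t = np.array([0, 0, 1, 1])
table = train_deideo(D, t)
print(fuse(table, [0, 0, 1]))  # -> 0
```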
Sensor Fusion Architectures
Multi-agent data fusion workstation (MADFW) architecture
This paper describes an on-going effort to build a Multi-Agent Data Fusion Workstation (MADFW) based on a Knowledge-Based System (KBS) BlackBoard (BB) architecture to offer a range of innovative techniques for Data Fusion (DF), applicable to various domains. The initial application to be demonstrated is in the area of airborne maritime surveillance, where several multi-agent concepts and algorithms have already been studied and demonstrated. The end result will offer the user a flexible and modular environment providing capability for: (1) addition of user-defined sensor simulation models and fusion algorithms; (2) integration with existing models and algorithms; and (3) evaluation of performance to derive requirement specifications and help in the design phase towards fielding a real DF system. The workstation is being designed to accommodate modular, interchangeable algorithm implementation and performance evaluation of: (1) fusion of positional data from imaging and non-imaging sensors; (2) fusion of attribute information obtained from imaging and non-imaging sensors and other sources such as communication systems, satellites, etc.; and (3) object recognition in imaging data. The design allows algorithms for sensor simulators and measures of performance to reside either on the KBS BB shell or separate from it, thus facilitating integration with other testbed designs. This architecture also allows the future introduction of fusion management capabilities. The real-time KBS BB shell developed by Lockheed Martin Canada, in collaboration with DREV, is the basis of the MADFW infrastructure. This system is totally generic and could be used to implement any system comprising components that are numeric or AI-based. It has been implemented in C++ rather than in a higher-level language (such as LISP, Smalltalk, ...) to satisfy the real-time requirement.
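The MADFW itself is a proprietary C++ KBS blackboard shell; the following sketch only illustrates the generic blackboard pattern the paper builds on (the classes and control loop are hypothetical, not the Lockheed Martin design):

```python
class Blackboard:
    """Shared data store that knowledge sources read from and post to."""
    def __init__(self):
        self.entries = {}  # abstraction level name -> list of hypotheses

    def post(self, level, item):
        self.entries.setdefault(level, []).append(item)

    def read(self, level):
        return self.entries.get(level, [])

class KnowledgeSource:
    """Base class for interchangeable modules (sensor simulators,
    fusion algorithms, performance measures)."""
    def can_contribute(self, bb):   # condition part
        raise NotImplementedError
    def contribute(self, bb):       # action part
        raise NotImplementedError

class Controller:
    """Simple control loop: fire every knowledge source whose
    condition is satisfied by the current blackboard state."""
    def __init__(self, bb, sources):
        self.bb, self.sources = bb, sources

    def run_cycle(self):
        for ks in self.sources:
            if ks.can_contribute(self.bb):
                ks.contribute(self.bb)
```

Keeping simulators, fusion algorithms, and performance measures behind the same knowledge-source interface is what makes the modules interchangeable, as the paper's modularity goals require.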
Data fusion system architecture for unattended ground sensors
Junliang Zhang, Yuming Zhao
This paper develops an artificial intelligence method that uses an object-oriented approach to construct a data fusion blackboard for unattended ground sensors, including geophone, acoustic, pressure, infrared, magnetic, and image sensors. The system can perform detection, correlation, association, and estimation on the sensors' output and obtain accurate recognition of targets, the number of target groups, and estimates of both the target states and the situation and threat. The blackboard is divided into three regions: a single-sensor fusion region, a multisensor fusion region, and a threat estimation region. The three regions are expressed as classes; the knowledge of each domain in the three regions is also expressed in classes and encapsulated in a class hierarchy. The whole blackboard can thus be viewed as an object forest, and distributed knowledge inference can be realized through object references. Both statistical and hierarchical inference approaches are used in the blackboard structure to perform fusion and inference efficiently. The method is implemented in C++ and demonstrated by simulation using sensor alarm data from a battlefield environment.
Distributed event-driven architectures for evolutionary sensor fusion
Ron T. Lake
The next decade will require the development of complex sensor systems that integrate data from a large number of sensor elements. Such systems will play important roles in a wide variety of industrial and defense systems, as the fusion of multiple sources of information is crucial to sensor operation in noisy environments and in complex decision making. The arrival of ubiquitous processing elements is one requirement for the development of such systems; however, the ability to connect and integrate these elements at the logical level is the more limiting aspect of their development. Furthermore, it is unlikely that such systems can be developed in a single linear process. It is much more probable that such systems will need to evolve over time, perhaps a substantial period of time, and as a result the ability to logically interconnect heterogeneous elements in an evolutionary manner will be of great importance. This paper outlines some approaches to this problem based on the distributed object-computing model introduced in OMG's CORBA. It is our belief that this technology is maturing to the point that it could form the foundation for a sensor architecture supporting the evolutionary development of complex sensor networks.
Data- and Image-Level Fusion
Estimation of subpixel-resolution motion fields from segmented image sequences
Super-resolution enhancement algorithms are used to estimate a high-resolution video still (HRVS) from several low-resolution frames, provided that objects within the image sequence move with subpixel increments. However, estimating an accurate subpixel-resolution motion field between two low-resolution, noisy video frames has proven to be a formidable challenge. Up-sampling the image sequence frames followed by the application of block matching, optical flow estimation, or Bayesian motion estimation results in relatively poor subpixel-resolution motion fields, and consequently inaccurate regions within the super-resolution enhanced video still. This is particularly true for large interpolation factors (greater than or equal to 4). To improve the quality of the subpixel motion fields and the corresponding HRVS, motion can be estimated for each object within a segmented image sequence. First, a reference video frame is segmented into its constituent objects, and a mask is generated for each object which describes its spatial location. As described previously, subpixel-resolution motion estimation is then conducted by video frame up-sampling followed by the application of a motion estimation algorithm. Finally, the motion vectors are averaged over the region of each mask by applying an alpha-trimmed mean filter to the horizontal and vertical components separately. Since each object moves as a single entity, averaging eliminates many of the motion estimation errors and results in much more consistent subpixel motion fields. A substantial improvement is also visible within particular regions of the HRVS estimates. Subpixel-resolution motion fields and HRVS estimates are computed for interpolation factors of 2, 4, 8, and 16, to examine the benefits of object segmentation and motion field averaging.
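A sketch of the final averaging step, an alpha-trimmed mean applied per object mask (the function names and parameter defaults are assumptions):

```python
import numpy as np

def alpha_trimmed_mean(values, alpha=0.1):
    """Mean after discarding the alpha fraction of smallest and
    largest samples, suppressing motion-estimation outliers."""
    v = np.sort(np.ravel(values))
    k = min(int(alpha * v.size), (v.size - 1) // 2)
    return v[k:v.size - k].mean()

def smooth_motion_field(u, v, mask, alpha=0.1):
    """Replace the motion vectors inside an object mask with their
    alpha-trimmed mean, since the object moves as a single entity.

    u, v : horizontal and vertical motion components (2-D arrays).
    mask : boolean array marking the segmented object's pixels.
    """
    u_out, v_out = u.copy(), v.copy()
    u_out[mask] = alpha_trimmed_mean(u[mask], alpha)
    v_out[mask] = alpha_trimmed_mean(v[mask], alpha)
    return u_out, v_out
```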
Atmospheric attenuation reduction through multisensor fusion
The visualization of a scene under murky atmospheric conditions is improved by fusing multiple images. A key feature of this system is the use of the wavelet domain in the fusion process. Many possible fusion formulas exist in this domain; to find the 'best' formula, we formulate an optimization problem. We assume a set of training data consisting of a sequence of images with atmospheric effects present and the corresponding image with no atmospheric effects present (ground truth). Next, we perform a search over the parameter space of our 'generic fusion formula', attempting to minimize the error between the original ground truth image and the image created by fusing the noisy data. Using the resulting 'best' fusion formula, we have created a system for pixel-level fusion. Experimental results are shown and discussed. Possible applications of this system include processing of outdoor security system data, filtering of outdoor vehicle image data, and use in heads-up displays.
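The paper's 'generic fusion formula' and search are not specified in the abstract; as a hedged illustration of the training-then-fusion idea, the sketch below uses PyWavelets with a single blend parameter between mean- and max-magnitude detail selection, tuned by grid search against ground truth (this parameterization is an assumption, not the authors'):

```python
import numpy as np
import pywt  # PyWavelets

def fuse_wavelet(images, w, wavelet="db2", level=2):
    """Parametric wavelet-domain pixel-level fusion: approximation
    bands are averaged; each detail band is blended between the mean
    and max-magnitude selection by a single parameter w in [0, 1].
    Images must share a shape compatible with the decomposition level."""
    decomps = [pywt.wavedec2(im, wavelet, level=level) for im in images]
    fused = [np.mean([d[0] for d in decomps], axis=0)]
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):  # horizontal, vertical, diagonal details
            stack = np.stack([d[lvl][b] for d in decomps])
            idx = np.abs(stack).argmax(axis=0)[None]
            pick = np.take_along_axis(stack, idx, axis=0)[0]
            bands.append(w * pick + (1 - w) * stack.mean(axis=0))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)

def best_w(noisy_images, ground_truth, grid=np.linspace(0, 1, 21)):
    """Training phase: grid search for the fusion parameter that
    minimizes error against the atmosphere-free ground-truth image."""
    errs = [np.mean((fuse_wavelet(noisy_images, w) - ground_truth) ** 2)
            for w in grid]
    return grid[int(np.argmin(errs))]
```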
Morphological filters and wavelet-based image fusion for concealed weapons detection
Liane C. Ramac, Mucahit K. Uner, Pramod K. Varshney, et al.
When viewing a scene for an object recognition task, one imaging sensor may not provide all the information needed for recognition. One way to obtain more information is to use multiple sensors. These sensors should provide images that contain complementary information about the same scene. After preprocessing the source images, we use image fusion to combine the information from the different sensors. The images to be fused may have details such as shadows, wrinkles, imaging artifacts, etc., that are not needed in the final fused image. One application of morphological filters is to remove objects of a given size range from the image. Therefore, we use morphological filters in conjunction with wavelets to improve the recognition performance after fusion. After morphological filtering, wavelets are used to construct multiresolution representations of the source images. Once the source images are decomposed, the details are combined to form a composite decomposed image. This method allows details at different levels to be combined independently so that important information is maintained in the final composite image. We are developing image fusion algorithms for concealed weapon detection (CWD) applications. Fusion is useful in situations where the sensor types have different properties, e.g., IR and MMW sensors. Fusing these types of images results in composite images which contain more complete information for CWD applications such as detection of concealed weapons on a person. In this paper we present our most recent results in this area.
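A sketch of the morphological preprocessing step using SciPy's grayscale opening and closing (the structuring-element size is an assumption); the filtered IR and MMW sources could then feed a wavelet fusion stage such as the fuse_wavelet sketch above:

```python
from scipy.ndimage import grey_closing, grey_opening

def morphological_smooth(image, size=5):
    """Open-close filter: removes bright and dark details smaller than
    the structuring element (e.g. wrinkles and imaging artifacts)
    before the wavelet decomposition and fusion stage."""
    opened = grey_opening(image, size=(size, size))
    return grey_closing(opened, size=(size, size))
```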
Applications I
Surface texture measurement by combining signals from two sensing elements of a piezoelectric tactile sensor
Javad Dargahi, Shahram Payandeh
In this paper we report on the design and testing of a prototype surface texture tactile sensor. The sensor can measure both compliance and surface roughness. The design of the sensor is based on the psychophysiological perception of surface texture by the human hand. The sensor essentially consists of a rigid cylinder surrounded by a compliant cylinder. The relative deformation between the rigid and compliant cylinders is used to measure the compliance of the contact object, and the variation of the compliant cylinder over a surface profile with reference to the rigid cylinder is used to measure surface roughness. Two 25-micrometer-thick polyvinylidene fluoride films are used as transducers in the tactile sensor system. The sensor in miniaturized form can be used in a laparoscopic grasper for minimally invasive surgery. A theoretical analysis is made and compared with experimental values. The advantages and limitations of the sensor are also discussed.
Beyond third generation: a sensor-fusion targeting FLIR pod for the F/A-18
William K. Krebs, Dean A. Scribner, Geoffrey M. Miller, et al.
Navy and Marine Corps F/A-18 pilots state that the targeting FLIR system does not provide enough target definition and clarity. As a result, high-altitude tactics missions are the most difficult, due to the limited amount of time available to identify the target. If the targeting FLIR system had a better stand-off range and improved target contrast, then the pilots' task would be easier. Unfortunately, the replacement cost of the existing FLIR equipment is prohibitive. The purpose of this study is to modify the existing F/A-18 targeting FLIR system with a dual-band color sensor to improve target contrast and stand-off ranges. Methods: A non-real-time color sensor fusion system was flown on a NASA F/A-18 in a NITE Hawk targeting FLIR pod. Flight videotape was recorded from a third-generation image-intensified CCD and a first-generation long-wave infrared sensor. A standard visual search task was used to assess whether pilots' situational awareness was improved by combining the two sensor videotape sequences into a single fused color or grayscale representation. Results: Fleet aviators showed that color fusion improved target detection but hindered situational awareness. Aviators reported that the lack of color constancy made the scene aesthetically displeasing; however, target detection was enhanced. Conclusion: A color fusion scene may benefit targeting applications but hinder situational awareness.
Sensor Fusion Algorithms
Minimum-sample SPRT for sensor or operating point selection in a single- or multisensor environment
Robert J. Pawlak
This paper describes a technique for employing the sequential probability ratio test (SPRT) in a single- or multisensor environment. The technique minimizes the number of sensor decisions required to declare the null or alternative hypothesis when there is a choice of different sensors or sensor operating points. Thus the technique will be dubbed the Minimum-Sample SPRT (MS-SPRT). The first step of the MS-SPRT requires an off-line optimization of the choice of sensors across all possible values of the alternative hypothesis probability. The second step of the technique involves the application of two Kalman filters to estimate the probability of the alternative hypothesis and to optimize a set of sensor probabilities. The sensor probabilities determine the optimal sensor choice that minimizes the expected number of samples before a decision is made. Three examples are given using simulated data. In the first example, it is shown that the MS-SPRT is not necessary. The second example shows the usefulness of the MS-SPRT when there is a step discontinuity in the null/alternative hypothesis probabilities. In the third example, the MS-SPRT facilitates the use of the proper sensor for a probabilistic variation in the hypothesis probabilities.
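The sensor-selection and Kalman-filter stages are specific to the MS-SPRT and are not reproduced here; the underlying sequential test, however, is Wald's standard SPRT, sketched below (names are illustrative):

```python
import math

def sprt(observations, llr, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test (the core decision
    stage that the MS-SPRT wraps with sensor selection).

    observations : iterable of sensor outputs.
    llr          : function x -> log[ p(x | H1) / p(x | H0) ].
    alpha, beta  : target false-alarm and miss probabilities.
    Returns ('H0' or 'H1', samples used), or ('?', n) if the data
    ran out before either boundary was crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    s, n = 0.0, 0
    for x in observations:
        s += llr(x)
        n += 1
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "?", n
```

The MS-SPRT's contribution is to choose, at each step, the sensor or operating point that minimizes the expected number of such samples.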
Quantization for probability-level fusion on a bandwidth budget
John V. Black, Mark D. Bedworth
Results are established for a simulated data fusion architecture featuring a synthetic two-class Gaussian problem with Bayesian recognizers. The recognizers output posterior probabilities for each class. The probabilities from two or more recognizers of identical error rate are quantized using the nearest-neighbor coding rule. The coded values are decoded at a fusion center and fused. A decision is made from the fused probabilities. The performance of the architecture is examined experimentally using code values that are uniformly distributed and code values that are produced using the Linde-Buzo-Gray (LBG) algorithm. Results are produced for two to six sensors and two to 32 code values. These results are compared to fusing probabilities represented using 32-bit floating-point numbers. Using 32 uniform or LBG-produced code values produces results that are at most only 1% worse than fusing the uncoded probabilities.
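A sketch of the quantization pipeline: a one-dimensional LBG (Lloyd) codebook, nearest-neighbor coding, and naive-Bayes fusion of the decoded posteriors (equal priors assumed; this is a sketch of the technique, not the authors' exact experimental setup):

```python
import numpy as np

def lbg_codebook(samples, n_codes, iters=50):
    """One-dimensional Linde-Buzo-Gray (Lloyd) codebook trained on
    observed posterior probabilities in [0, 1]."""
    codes = np.linspace(samples.min(), samples.max(), n_codes)
    for _ in range(iters):
        assign = np.abs(samples[:, None] - codes[None, :]).argmin(axis=1)
        for k in range(n_codes):
            if np.any(assign == k):
                codes[k] = samples[assign == k].mean()
    return np.sort(codes)

def quantize(p, codes):
    """Nearest-neighbor coding rule for one posterior probability."""
    return codes[np.abs(codes - p).argmin()]

def fuse(posteriors):
    """Fusion center: naive-Bayes combination of decoded class-1
    posteriors (equal priors; values must lie strictly in (0, 1))."""
    p = np.asarray(posteriors)
    odds = np.prod(p / (1 - p))
    return odds / (1 + odds)
```

With log2(32) = 5 bits per posterior instead of 32, the abstract's result says the decision performance degrades by at most about 1%.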
Fusion of LWIR sensor data by Bayesian methods
Ramarao Inguva, Grannison Garrison
Using Bayesian statistical methods, a formulation is set up for fusing multi-band data from LWIR sensors. This formulation is illustrated with applications to synthetic data consisting of 100 signatures in the wavelength bands 6-10 micrometers, 11-16 micrometers, and 17-21 micrometers. Following the works of Jaynes and Bretthorst, a Bayesian formulation is given for detrending the time series data for the emissive area, followed by estimation of frequencies and their amplitudes. This formulation is illustrated with analysis of the synthetic data.
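As a rough stand-in for the Bayesian procedure (which, following Bretthorst, computes a posterior over frequency), the sketch below detrends with a low-order polynomial and takes the periodogram peak, which coincides with the Bayesian point estimate for a single sinusoid in white noise; all details are assumptions:

```python
import numpy as np

def detrend_and_estimate(t, y):
    """Simplified frequency/amplitude estimation for one band.

    t : uniformly spaced sample times; y : band time series.
    Removes a quadratic trend (standing in for the emissive-area
    drift), then estimates the dominant frequency and amplitude
    from the periodogram of the residual."""
    trend = np.polyval(np.polyfit(t, y, 2), t)
    r = y - trend
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    power = np.abs(np.fft.rfft(r)) ** 2
    k = power[1:].argmax() + 1          # skip the DC bin
    f_hat = freqs[k]
    amp_hat = 2 * np.sqrt(power[k]) / len(t)
    return r, f_hat, amp_hat
```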
Distributed detection of Rayleigh targets in K-distributed clutter
Chandrakanth H. Gowda, R. Viswanathan
Detection of targets in the presence of Gaussian clutter using multiple radars has been studied in the recent literature. The algorithms considered include purely decision-based fusion rules, such as OR and AND, and tests involving the fusion of partial information, such as the normalized test statistic (NTS). However, based on experimental evidence, in many situations the compound K-distributed model is a good fit for the clutter envelope. In this paper we study, through simulation, the performance of the NTS, AND, and OR tests for the detection of a Rayleigh target in the presence of K-distributed clutter. The results show that for large signal-to-clutter power ratios and large shape parameter values, the NTS significantly outperforms both the OR and the AND rules.
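A sketch of the compound-clutter simulation and the two pure decision-fusion rules (thresholds and parameters are illustrative; the NTS is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def k_clutter(n, shape, mean_power=1.0):
    """Compound K-distributed clutter envelope: gamma-distributed
    texture modulating unit-power Rayleigh speckle."""
    texture = rng.gamma(shape, mean_power / shape, size=n)
    speckle = np.abs(rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    return np.sqrt(texture) * speckle

def fuse_or(local_decisions):
    """OR rule: declare a target if any radar declares one."""
    return np.any(local_decisions, axis=0)

def fuse_and(local_decisions):
    """AND rule: declare a target only if all radars declare one."""
    return np.all(local_decisions, axis=0)

# Example: two radars, per-sensor threshold tests on the envelope.
x1, x2 = k_clutter(10000, shape=2.0), k_clutter(10000, shape=2.0)
d = np.stack([x1 > 2.5, x2 > 2.5])
print(fuse_or(d).mean(), fuse_and(d).mean())  # empirical false-alarm rates
```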
Maximum-likelihood approach for multisensor data fusion applications
Yifeng Zhou, Henry Leung
In this paper, we propose a maximum likelihood fusion approach for multisensor fusion applications. The proposed approach is based on a parametric modeling of the noise covariance and is formulated in the transformed noise subspace. It can solve fusion problems when the sensor noises are correlated and the scaling coefficients are unknown, and it can also deal with nonstationary signals. We show that in the optimization process the computation of the noise parameters and the scaling coefficients is separable, leading to reduced optimization dimensionality and computational complexity. Computer simulations are used to demonstrate the effectiveness of the proposed fusion approach.
Applications II
Functional and real-time requirements of a multisensor data fusion (MSDF) situation and threat assessment (STA) resource management (RM) system
Jean Remi Duquet, Pierre Bergeron, Dale E. Blodgett, et al.
The Research and Development group at Lockheed Martin Canada, in collaboration with the Defence Research Establishment Valcartier, has undertaken a research project to capture and analyze the real-time and functional requirements of a next-generation Command and Control System (CCS) for the Canadian Patrol Frigates, integrating Multi-Sensor Data Fusion (MSDF), Situation and Threat Assessment (STA), and Resource Management (RM). One important aspect of the project is to define how the use of Artificial Intelligence may optimize the performance of an integrated, real-time MSDF/STA/RM system. A closed-loop simulation environment is being developed to facilitate the evaluation of MSDF/STA/RM concepts, algorithms, and architectures. This environment comprises (1) a scenario generator; (2) complex sensor, hardkill, and softkill weapon models; (3) a real-time monitoring tool; and (4) a distributed Knowledge-Based System (KBS) shell. The latter is being completely redesigned and implemented in-house, since no commercial KBS shell could adequately satisfy all the project requirements. The closed-loop capability of the simulation environment, together with its 'simulated real-time' capability, allows interaction between the MSDF/STA/RM system and the environment targets during the execution of a scenario. This capability is essential to measure the performance of many STA and RM functionalities. Some benchmark scenarios have been selected to demonstrate quantitatively the capabilities of the selected MSDF/STA/RM algorithms. The paper describes the simulation environment and discusses the MSDF/STA/RM functionalities currently implemented and their performance as an automatic CCS.
Adaptive local fusion systems for novelty detection and diagnostics in condition monitoring
Odin Taylor, John MacIntyre
This paper describes the application of Kohonen Self-Organizing Maps in a dynamic machine condition monitoring application that learns fault conditions over time. The authors describe the implementation of a novelty detection and adaptive diagnostic system which forms a modular component of a larger on-line condition monitoring system.
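A minimal sketch of SOM-based novelty detection via quantization error (the grid size, learning schedule, and threshold policy are assumptions, not the authors' configuration):

```python
import numpy as np

class NoveltySOM:
    """Minimal Kohonen SOM used as a novelty detector: after training
    on healthy-machine features, a sample whose distance to its
    best-matching unit (quantization error) is large is flagged as
    novel and routed to the diagnostic module."""
    def __init__(self, grid=(8, 8), dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(grid[0] * grid[1], dim))
        self.xy = np.indices(grid).reshape(2, -1).T.astype(float)

    def train(self, X, epochs=20, lr=0.5, sigma=3.0):
        for e in range(epochs):
            a = lr * (1 - e / epochs)               # decaying rate
            s = sigma * (1 - e / epochs) + 0.5      # shrinking radius
            for x in X:
                bmu = np.linalg.norm(self.w - x, axis=1).argmin()
                h = np.exp(-np.sum((self.xy - self.xy[bmu]) ** 2, axis=1)
                           / (2 * s * s))
                self.w += a * h[:, None] * (x - self.w)

    def novelty(self, x):
        """Quantization error: distance to the best-matching unit."""
        return np.linalg.norm(self.w - x, axis=1).min()
```

A novelty threshold can be set, for example, at a high percentile of the training data's own quantization errors; samples above it indicate conditions the map has not yet learnt.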
Pulse-coupled neural network sensor fusion
John L. Johnson, Marius P. Schamschula, Ramarao Inguva, et al.
Perception is assisted by sensed impressions of the outside world but not determined by them. The primary organ of perception is the brain and, in particular, the cortex. With that in mind, we have sought to see how a computer-modeled cortex--the PCNN or Pulse Coupled Neural Network--performs as a sensor-fusing element. In essence, the PCNN comprises an array of integrate-and-fire neurons, with one neuron for each input pixel. In such a system, the neurons corresponding to bright pixels reach firing threshold faster than the neurons corresponding to duller pixels. Thus, firing rate is proportional to brightness. In PCNNs, when a neuron fires it sends some of the resulting signal to its neighbors. This linking can cause a near-threshold neuron to fire earlier than it would have otherwise, which leads to synchronization of the pulses across large regions of the image. We can simplify the 3D PCNN output by integrating out the time dimension: over a long enough time interval, the resulting 2D (x,y) pattern is the input image itself. The PCNN has taken it apart and put it back together again. The shorter-term time integrals are interesting in themselves and are commented upon in the paper. The main thrust of this paper is the use of multiple PCNNs, mutually coupled in various ways, to assemble a single 2D pattern or fused image. Results of experiments on PCNN image fusion and an evaluation of its advantages are our primary objectives.
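A minimal single-PCNN sketch of the dynamics described (the linking kernel, coefficient, and threshold schedule are illustrative; the paper's mutually coupled multi-PCNN fusion is not shown):

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(stimulus, steps=30, beta=0.2, decay_theta=0.8, v_theta=20.0):
    """Minimal pulse-coupled neural network: one integrate-and-fire
    neuron per pixel. Thresholds decay over time, so bright pixels
    fire first; linking lets a firing neuron pull near-threshold
    neighbors into the same pulse, synchronizing regions.
    `stimulus` is assumed normalized to [0, 1].
    Returns the time-integrated pulse pattern."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    theta = np.full(stimulus.shape, v_theta)   # dynamic thresholds
    pulses = np.zeros(stimulus.shape)
    acc = np.zeros(stimulus.shape)
    for _ in range(steps):
        link = convolve(pulses, kernel, mode="constant")
        u = stimulus * (1.0 + beta * link)     # internal activity
        pulses = (u > theta).astype(float)     # fire above threshold
        theta = decay_theta * theta + v_theta * pulses  # refractory boost
        acc += pulses                          # integrate out time
    return acc
```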
Information fusion for onboard and offboard avionics
Thomas Kurien, Alan Chao, Sol W. Gully, et al.
This paper defines and demonstrates an all-source information fusion system for combining onboard and offboard data and maintaining continuous tracks on targets. We provide an architecture containing an offboard data processor to extract data relevant to the attack aircraft mission, and a set of fusion modules for recursively associating sensor reports, tracking targets, and classifying targets. These modules are derived from a well-posed mathematical formulation which enables us to define precise interfaces among the fusion modules. This approach provides three benefits. First, it enables us to construct a fusion algorithm with close-to-optimal target tracking and classification performance. Second, it allows us to study new fusion algorithms by implementing alternate algorithms for each module. Third, it allows us to process data from any combination of sensors, making the architecture applicable to a variety of attack aircraft and missions. We show that the proposed system can increase a pilot's situational awareness by providing him with a clear battlefield picture consistent with attack aircraft mission objectives. Results for a simple but realistic air-to-ground scenario simulation demonstrate the benefits of fusing data from offboard and onboard sensors.
Feature- and Decision-Level Fusion
Non-ad-hoc decision rule for the Dempster-Shafer method of evidential reasoning
Ali Cheaito, Michael Lecours, Eloi Bosse
This paper is concerned with the fusion of identity information through the use of statistical analysis rooted in the Dempster-Shafer theory of evidence to provide automatic identification aboard a platform. An identity information process for a baseline Multi-Source Data Fusion (MSDF) system is defined. The MSDF system is applied to information sources that include a number of radars, IFF systems, an ESM system, and a remote track source. We use a comprehensive Platform Data Base (PDB) containing all the possible identity values that a potential target may take, and we use fuzzy logic strategies that enable the fusion of subjective attribute information from the sensors and the PDB, so that target identity can be derived more quickly, more precisely, and with statistically quantifiable measures of confidence. The conventional Dempster-Shafer method lacks a formal basis upon which decisions can be made in the face of ambiguity. We define a non-ad-hoc decision rule based on the expected utility interval for pruning the 'unessential' propositions which would otherwise overload a real-time data fusion system. An example is presented to demonstrate the implementation of our modified Dempster-Shafer method of evidential reasoning.
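The paper's decision rule builds on the standard Dempster combination step, sketched here for basic probability assignments over focal sets (the expected-utility-interval pruning itself is not reproduced; the example masses are illustrative):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, each a dict mapping frozensets of hypotheses
    (focal elements) to mass values summing to 1."""
    combined, conflict = {}, 0.0
    for a, v1 in m1.items():
        for b, v2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2   # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example: radar and ESM evidence about a platform's identity.
radar = {frozenset({"friend"}): 0.6,
         frozenset({"friend", "hostile"}): 0.4}   # 0.4 uncommitted
esm = {frozenset({"hostile"}): 0.3,
       frozenset({"friend", "hostile"}): 0.7}
print(dempster_combine(radar, esm))
# -> friend: ~0.512, hostile: ~0.146, {friend, hostile}: ~0.341
```

Pruning low-mass, 'unessential' propositions after each combination is what keeps the focal-set count tractable in a real-time system, which is the problem the paper's expected-utility-interval rule addresses formally.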