DEI-DEO fusion as a tool for improving decision robustness across metrics in similarity-measure-based classifiers
Author(s):
Belur V. Dasarathy
Decisions derived through similarity-measure-based classifiers, such as nearest-neighbor classifiers, are known to be sensitive to the choice of metric underlying the similarity measure. Accordingly, it is advisable to explore means of deriving decisions that are relatively independent of this choice. One natural approach is to fuse the decisions derived through the different metrics so that the fused decision is independent of the choice of metric and hence more robust. In this study, this "fusion across metrics" concept is developed and illustrated experimentally with examples using multiple data sets employed in the open literature. For illustrative purposes, the choice of metrics is limited to three cases of the Minkowski metric, namely the Manhattan, Euclidean, and Supremum metrics, and to a single pre-defined DEI-DEO fusion logic.
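The core idea can be sketched with a toy nearest-neighbor example; the data set below is illustrative, and a simple majority vote stands in for the paper's pre-defined DEI-DEO fusion logic:

```python
import numpy as np

# Toy training set: two classes in a 2-D feature space (illustrative data)
X = np.array([[0.0, 0.0], [1.0, 0.2], [0.1, 0.9],   # class 0
              [3.0, 3.0], [2.8, 3.2], [3.1, 2.7]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

def nn_decision(x, p):
    """1-NN label under the Minkowski metric of order p (np.inf = Supremum)."""
    d = np.linalg.norm(X - x, ord=p, axis=1)
    return y[np.argmin(d)]

def fused_decision(x):
    """Fuse the three per-metric decisions by simple majority."""
    votes = [nn_decision(x, p) for p in (1, 2, np.inf)]  # Manhattan, Euclidean, Supremum
    return int(np.round(np.mean(votes)))

print(fused_decision(np.array([2.9, 3.1])))  # → 1
```

When the per-metric decisions disagree, the fused label no longer depends on any single metric choice, which is the robustness the abstract refers to.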
Quantitative analysis of spatio-temporal decision fusion based on the majority voting technique
Author(s):
Hongwei Wu;
Jerry M. Mendel
In array signal processing, it is well known that the effective aperture of a physical array can be increased by means of combined spatial and temporal processing of measurements. In the same spirit, the combined decision of an array of experts can be made more accurate by means of spatio-temporal fusion. We propose three approaches to implementing spatio-temporal decision fusion based on the majority voting technique, namely overall, space-time, and time-space. We compare these three approaches in terms of their implementation costs and the probability of the fully combined decision being correct, and conclude that both the overall and the time-space approaches are better choices than the space-time approach.
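A minimal sketch of the three combination orders, on a hypothetical matrix of binary decisions from three experts over three time snapshots:

```python
import numpy as np

def majority(bits):
    """Majority vote over a 1-D sequence of binary decisions (ties -> 1)."""
    return int(np.sum(bits) * 2 >= len(bits))

# D[i, t]: binary decision of expert i at time t (illustrative values)
D = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])

overall    = majority(D.ravel())                                       # one vote over all 9 decisions
space_time = majority([majority(D[:, t]) for t in range(D.shape[1])])  # across experts first, then time
time_space = majority([majority(D[i, :]) for i in range(D.shape[0])])  # across time first, then experts
print(overall, space_time, time_space)  # → 1 1 1
```

The three orderings coincide here, but in general they differ in both outcome and cost, which is what the paper quantifies.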
Multiclassifier information fusion methods for microarray pattern recognition
Author(s):
Jerome J. Braun;
Yan Glina;
Nicholas Judson;
Rachel Herzig-Marx
This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling the development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. An input space partitioning approach is investigated, based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace. Methods for generating fitness measures, generating input subspaces, and using them in the multiclassifier fusion architecture are presented. In particular, a two-level quantification of fitness is described that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final-stage decision fusion techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular problems involving a large number of information sources irreducible to a low-dimensional feature space.
A mathematical framework for the optimization of rejection and ROC thresholds in the fusion of correlated sensor data
Author(s):
Trevor I. Laine;
Kenneth W. Bauer Jr.
In pattern recognition applications, significant costs can be associated with various decision options and a minimum acceptable level of confidence is often required. Combat target identification is one example where the incorrect labeling of Targets and Non-targets incurs substantial costs; yet, these costs may be difficult to quantify. One way to increase decision confidence is through fusion of data from multiple sources or from multiple looks through time. Numerous methods have been published to determine optimal rules for the fusion of decision labels or to determine the Bayes-optimal decision if prior and posterior probabilities along with decision costs can be accurately estimated. This paper introduces a mathematical framework to optimize multiple decision thresholds subject to a decision maker’s preferences, when a continuous measure of class membership is available. Decision variables may include rejection thresholds to specify non-declaration regions and ROC thresholds to explore viable true positive and false positive Target classification rates, where the feasible space can be partially visualized by a 3D ROC surface. This methodology yields an optimal class declaration rule subject to decision maker preferences without using explicit costs associated with each type of decision. Some properties of this optimization framework are shown for Gaussian distributions representing Target and Non-target classes with various prior probabilities and correlation levels between simulated multiple sensor looks.
Further results on fault-tolerant distributed classification using error-correcting codes
Author(s):
Tsang-Yi Wang;
Yunghsiang Sam Han;
Pramod Kumar Varshney
In this paper, we consider the distributed classification problem in wireless sensor networks. The DCFECC-SD approach employing a binary code matrix has recently been proposed to cope with the errors caused by both sensor faults and the effect of fading channels. The DCFECC-SD approach extends the DCFECC approach by using soft-decision decoding to combat channel fading. However, the performance of a system employing the binary code matrix can be degraded if the distance between different hypotheses cannot be kept large. This situation can arise when the number of sensors is small or the number of hypotheses is large. In this paper, we design the DCFECC-SD approach employing a D-ary code matrix, where D>2. Simulation results show that the performance of the DCFECC-SD approach employing the D-ary code matrix is better than that of the DCFECC-SD approach employing the binary code matrix. Performance evaluation of DCFECC-SD using different numbers of bits of local decision information is also provided when the total channel energy output from each sensor node is fixed.
Designing classifier ensembles with constrained performance requirements
Author(s):
Weizhong Yan;
Kai F. Goebel
Classification requirements for real-world classification problems are often constrained by a given true positive or false positive rate to ensure that the classification error for the most important class is within a desired limit. For a sufficiently high true positive rate, this may result in the set-point being located somewhere in the flat portion of the ROC curve, where the associated false positive rate is high. Any further classifier design then attempts to reduce the false positive rate while maintaining the desired true positive rate. We call this type of performance requirement for classifier design the constrained performance requirement. It differs from the accuracy-maximization requirement and thus requires different strategies for classifier design. This paper is concerned with designing classifier ensembles under such constrained performance requirements. Classifier ensembles are one of the most significant advances in pattern recognition/classification in recent years and have been actively studied by many researchers. However, not much attention has been given to designing ensembles to satisfy constrained performance requirements. This paper attempts to identify and address some of the design issues associated with the constrained performance requirement. Specifically, we present a strategy for designing neural network ensembles to satisfy constrained performance requirements, illustrated on a real-world classification problem. The results are compared to those from a conventional design method.
An intelligent approach for sensor integration based on fuzzy set theory and Dempster's rule for combining beliefs
Author(s):
Hassan Hassan
Humans use a variety of sensors, such as sight, hearing, touch, smell, and taste, to gather information from the environment and to make appropriate decisions accordingly. This paper introduces an intelligent model for combining information collected from a variety of sensors. The model input is data collected from a multi-sensor framework, and the output is a combined signature representing the factual evidence for the decision-making process. The approach is based on Fuzzy Set Theory, where membership sets are defined and then aggregated using the Dempster-Shafer Theory of Evidence. The approach is demonstrated with examples.
Categorizing decision strategies through limbic system models
Author(s):
James K. Peterson
The solution of difficult optimization problems often requires a parameter set that fixes critical algorithm design choices. For example, in the construction of a valid pattern recognition scheme using a simple feed-forward network (FFN) technique, there can be thousands of equally valid FFN solutions which achieve high percentage recognition levels on reasonable inputs. The solutions arise from different choices of stopping tolerance, internal neuron architecture, learning rates and so forth. These meta-level optimization parameter choices can be used to organize collections of optimization algorithms into matrices W. Each column of the matrix corresponds to a set of parameter choices such as stopping tolerance, learning rate, random restart choices and so forth. For example, an optimization algorithm is constructed from a 4 x 3 matrix W by choosing an entry from each column to construct a sequence ABC. The sequence ABC then encodes the collection of meta parameters that shape the algorithm. In this example, there are thus 64 possible optimization algorithms, all chosen to produce a similar output such as recognition rate. A simplified biologically based model of information processing includes primary sensory processing and sensor fusion, with the construction of higher-level metadata modeled via recurrent connections between the site of sensor fusion and a simple model of limbic processing. We illustrate how such a model can be constructed using the matrices described above as training data. Finally, the use of this model to represent the decision process is discussed.
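The combinatorics of the 4 x 3 matrix W can be made concrete with a short enumeration; the parameter names and values below are illustrative, not taken from the paper:

```python
import itertools

# Columns of a hypothetical 4 x 3 matrix W: four choices for each of
# three meta parameters (names and values are illustrative).
stopping_tol  = [1e-2, 1e-3, 1e-4, 1e-5]
learning_rate = [0.5, 0.1, 0.05, 0.01]
restarts      = [1, 5, 10, 20]

# One optimization algorithm per sequence ABC: one entry chosen per column.
algorithms = list(itertools.product(stopping_tol, learning_rate, restarts))
print(len(algorithms))  # → 64, matching the 4^3 count in the text
```

Each tuple in `algorithms` is one ABC sequence, so the enumeration reproduces the 4^3 = 64 algorithm variants the abstract counts.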
Information fusion using Bayesian multinets
Author(s):
Peter Bladon;
Richard J. Hall
Bayesian networks are a powerful and convenient way of encoding expert knowledge. They can be used to infer such "high-level" variables as "threat" or "intent", given observations, background and intelligence data. However, their usefulness depends on the model, i.e. the Bayesian network used for inference. We demonstrate how Bayesian multinets can be used to simplify the representation of certain complex domains, allowing a decomposition into simpler models that are conditionally independent given a class variable. We illustrate this concept using a threat assessment application, in which each component is specialised to a different class of threat, and show how this simplifies model construction and target identification.
Karhunen Loeve enhanced synthetic discriminant functions with application to the protein structure identification in cryo-electron microscopic images
Author(s):
Vahid R. Riasati;
Hui Zhou
In this paper we apply a modified synthetic discriminant function (SDF), based on the Karhunen-Loeve transform, to the recognition and identification of protein images formed from a cryo-electron microscopic imaging process. In SDF filter synthesis, the use of the whole image often presents a redundancy of features. Essentially, the Karhunen-Loeve transform is used as the means to incorporate training images in an SDF filter synthesis scenario. This method has the advantage of utilizing linearly independent training images, as the Karhunen-Loeve transform is the optimal method of decorrelating images. The transform establishes a new coordinate system: the axes of the new system are in the direction of the eigenvectors of the covariance matrix of the data population, and the origin is set at the center of the data population. The principal component images resulting from such a realignment of the data provide a new set of training images for a synthetic discriminant function filter, the KLTSDF. We present the results of the application of this modified filter to a protein structure recognition problem.
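A minimal sketch of the Karhunen-Loeve step, using random vectors as stand-ins for flattened training images:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack of 10 training "images", each flattened to 64 pixels.
imgs = rng.normal(size=(10, 64))

# Karhunen-Loeve transform: eigen-decompose the covariance of the
# mean-centered data and project onto the eigenvector axes.
centered = imgs - imgs.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # strongest components first
pcs = centered @ eigvecs[:, order]       # principal-component images

# The projected components are decorrelated: off-diagonal covariances vanish.
pc_cov = np.cov(pcs, rowvar=False)
print(np.allclose(pc_cov, np.diag(np.diag(pc_cov)), atol=1e-10))  # → True
```

The rows of `pcs` play the role of the decorrelated training images that feed the SDF filter synthesis; the final check confirms the decorrelation property the abstract cites as the reason for using the transform.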
Extended Dempster-Shafer combination rules based on random set theory
Author(s):
Yunmin Zhu;
X. Rong Li
The Dempster combination rule has been widely discussed and used, since it is a convenient and promising method for combining information from multiple sources, each with its own confidence degrees/evidences. On the other hand, it has been criticized and debated for some of its counterintuitive behavior and restrictive requirements, such as independence of the confidence degrees from disparate sources. To clarify the theoretical foundation of the Dempster combination rule and provide a direction for solving these problems, the Dempster combination rule is first formulated based on random set theory. Then, under this framework, all possible combination rules are presented, and combination rules based on correlated sensor confidence degrees (evidence supports) are proposed. The optimal Bayes combination rule is given finally.
Simulation modeling of disparate sensor discrimination between chemical and high explosive munitions
Author(s):
Bruce W. Fischer;
Michael D. Dunkel
A modeling solution has been created for simulating battlespace use of disparate sensors for detection of a chemical
or biological (CB) attack. Disparate sensors refer to existing military sensors (acoustic, seismic, infrared, radar),
generally used for area surveillance, which can be multitasked to also detect CB events. The goal is to provide a
simulation that allows development of sensor deployment tactics and sensor system algorithms that optimize
detection of CB events, without compromising performance of existing sensor tasks.
Automated development of linguistic-fuzzy classifier membership functions and weights for use in disparate sensor integration visible and infrared imaging sensor classification
Author(s):
Bruce N. Nelson;
Amnon Birenzvige
In support of the Disparate Sensor Integration (DSI) Program, a number of imaging sensors were fielded to determine the feasibility of using information from these systems to discriminate between chemical and conventional munitions. The camera systems recorded video from 160 training and 100 blind munitions detonation events. Two types of munitions were used: 155 mm conventional rounds and 155 mm chemical-simulant rounds. In addition, two different modes of detonation were used with these two classes of munitions: detonation on impact (point detonation) and detonation in the air (airblasts). The cameras fielded included two visible-wavelength cameras, a near-infrared camera (peak responsivity of approximately 1 μm), a mid-wavelength infrared camera system (3 μm to 5 μm) and a long-wavelength infrared camera system (7.5 μm to 13 μm).
Our recent work has involved developing Linguistic-Fuzzy Classifiers for performing munitions detonation classification with the DSI visible and infrared imaging sensor data sets. In this initial work, the classifiers were heuristically developed based on analyses of the training-data feature distributions; both the membership functions and the feature weights were hand developed and tuned. We have recently developed new methodologies to automatically generate membership functions and weights in Linguistic-Fuzzy Classifiers. This paper describes this new methodology and provides an example of its efficacy for separating munitions detonation events into either air or point detonations. This is a critical initial step toward the overall goal of DSI, the classification of detonation events as either chemical or conventional. Further, the detonation mode is important, as it significantly affects the dispersion of agents. The results presented in this paper clearly demonstrate that the automatically developed classifiers perform as well in this classification task as the previously demonstrated, empirically developed classifiers.
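A minimal sketch of a linguistic-fuzzy decision of this kind; the feature, breakpoints, and weight below are purely illustrative, not the paper's automatically generated values:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical feature: plume rise-rate. Memberships for the two
# detonation modes, scaled by a per-feature weight (values illustrative).
def classify(rise_rate, weight=1.0):
    mu_air   = weight * tri(rise_rate, 5.0, 9.0, 13.0)   # "air detonation" term
    mu_point = weight * tri(rise_rate, 0.0, 3.0, 6.0)    # "point detonation" term
    return 'air' if mu_air > mu_point else 'point'

print(classify(8.0))   # → air
print(classify(2.5))   # → point
```

The hand-tuning the abstract mentions amounts to choosing the breakpoints `a`, `b`, `c` and the weights; the paper's contribution is generating those automatically from the training-data feature distributions.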
Cognitive engineering in algorithm development for multisensor data fusion in military applications
Author(s):
Amanda C. Kight;
S. Narayanan
In battlefield situations, human operators are bombarded with substantial amounts of information and expected to make near-instantaneous decisions. The large amounts of information, coupled with short decision times and the need to reduce the potential of making incorrect decisions, create the possibility for information overload. This problem is especially prominent in military applications involving imagery from multiple sensors. Computer-based algorithms for fusing pertinent sets of imagery have proven somewhat useful for alleviating this problem. However, little research has been done on designing multisensor data fusion systems using principles of cognitive engineering, which involves the consideration of human cognition during the design process. The design of a sensor fusion system using principles from cognitive engineering would create a more natural relationship between human and machine, and would thus be extremely effective in reducing operator error in military situations. This paper explores the need for integrating human reasoning and cognition in algorithm development for multisensor fusion applications.
Decision fusion algorithm for target tracking in forward-looking infrared imagery
Author(s):
Amer Dawoud;
Mohammad S. Alam;
Abdullah Bal;
Chey Hwa Loo
In this paper, we propose a novel decision fusion algorithm for target tracking in forward-looking infrared (FLIR) image sequences recorded from an airborne platform. The algorithm allows the fusion of complementary ego-motion compensation and tracking algorithms. We identified three modes that contribute to the failure of the tracking system: (1) the sensor ego-motion failure mode, which causes the target to move beyond the operational limits of the tracking stage; (2) the tracking failure mode, which occurs when the tracking algorithm fails to determine the correct location of the target in the new frame; and (3) the reference-image distortion failure mode, which happens when the reference image accumulates walk-off error, especially when the target changes in size, shape or orientation from frame to frame. The proposed algorithm prevents these failure modes from developing into unrecoverable tracking failures. The overall performance of the algorithm is guaranteed to be much better than that of any individual tracking algorithm used in the fusion. The experiments performed on the AMCOM FLIR data set verify the robustness of the algorithm.
An information system for target recognition
Author(s):
Tobias Horney;
Jorgen Ahlberg;
Christina Gronwall;
Martin Folkesson;
Karin Silvervarg;
Jorgen Fransson;
Lena Klasen;
Erland Jungert;
Fredrik Lantz;
Morgan Ulvklo
We present an approach to a general decision support system. The aim is to cover the complete process of automatic target recognition, from sensor data to the user interface. The approach is based on a query-based information system, and includes tasks like feature extraction from sensor data, data association, data fusion and situation analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude.
The processing of sensor data is performed in two steps. First, several attributes of the (unknown but detected) target are estimated. The attributes include orientation, size, speed, temperature, etc. These estimates are used to select the models of interest in the matching step, where the target is matched with a number of target models, returning a likelihood value for each model. Several methods and sensor data types are used in both steps.
The user communicates with the system via a visual user interface, where, for instance, the user can mark an area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query language developed for this type of application, and an ontological system decides which algorithms should be invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers are given back to the user. The user does not need detailed technical knowledge about the sensors (or which sensors are available), and new sensors and algorithms can easily be plugged into the system.
A Java implementation of the probabilistic argumentation system for data fusion in missile defense applications
Author(s):
Moses W. Chan;
Terri N. Hansen;
Paul-Andre Monney;
Todd L. Baker
In missile defense target recognition applications, knowledge about the problem may be imperfect, imprecise, and incomplete. Consequently, complete probabilistic models are not available. In order to obtain robust inference results and avoid making inaccurate assumptions, the probabilistic argumentation system (PAS) is employed. In PAS, knowledge is encoded as logical rules with probabilistically weighted assumptions. These rules map directly to Dempster-Shafer belief functions, which allow for uncertainty reasoning in the absence of complete probabilistic models. The PAS can be used to compute arguments for and against hypotheses of interest, and numerical answers that quantify these arguments. These arguments can be used as explanations that describe how inference results are computed. This explanation facility can also be used to validate intelligent information, which can in turn improve inference results. This paper presents a Java implementation of the probabilistic argumentation system as well as a number of new features. A rule-based syntax is defined as a problem encoding mechanism and for Monte Carlo simulation purposes. In addition, a graphical user interface (GUI) is implemented so that users can encode the knowledge database, and visualize relationships among rules and probabilistically weighted assumptions. Furthermore, a graphical model is used to represent these rules, which in turn provides graphical explanations of the inference results. We provide examples that illustrate how classical pattern recognition problems can be solved using canonical rule sets, as well as examples that demonstrate how this new software can be used as an explanation facility that describes how the inference results are determined.
Deghosting in multipassive acoustic sensors
Author(s):
Rong Yang;
Gee Wah Ng
In this paper, we describe a deghosting algorithm for a multiple-passive-acoustic-sensor environment. In a passive acoustic sensor system, a target is detected by its bearing to the sensor, and the target location is obtained from triangulation of bearings from different sensors. However, in a multi-passive-sensor, multi-target scenario, triangulation is difficult, because multi-target triangulation generates a number of ghost targets. To remove these triangulation ghosts, a deghosting technique is essential for distinguishing the true targets from the ghost targets. We propose a deghosting algorithm that applies Bayes' theorem and a likelihood function to the acoustic signals. A probability related to the acoustic signal at each triangulation point is recursively computed and updated at every time stamp or frame. A triangulation point is classified as a true target once its probability exceeds a predefined threshold. Furthermore, acoustic signals suffer propagation delay, which biases the triangulated location toward the bearing of the nearest sensor. In our algorithm, the propagation-delay problem is solved by matching the histories of bearing tracks, yielding an unbiased location with similar emitting times for the sensors contributing to the triangulation point. The emitting times can be derived from detection times and propagation delays. Performance results are presented on simulated data.
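The basic triangulation step that produces both true and ghost intersections can be sketched as follows (a single bearing pair, with hypothetical geometry):

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing lines. p_i are sensor positions and theta_i
    the measured bearings in radians, measured from the +x axis."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the range parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Sensors at (0,0) and (10,0); bearings of 45° and 135° intersect at (5,5).
target = triangulate([0, 0], np.pi / 4, [10, 0], 3 * np.pi / 4)
print(np.round(target, 3))  # → [5. 5.]
```

With M targets seen by two sensors there are M x M such intersections but only M true targets; the remaining points are the ghosts the paper's recursive probability update is designed to reject.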
Dynamic sensor management using multi-objective particle swarm optimizer
Author(s):
Kalyan K. Veeramachaneni;
Lisa Ann Osadciw
This paper presents a Swarm Intelligence-based approach for sensor management of a multi-sensor network. Alternate sensor configurations and fusion strategies are evaluated by swarm agents, and an optimum configuration and fusion strategy evolves. An evolutionary algorithm, particle swarm optimization, is modified to optimize two objectives: accuracy and time. The output of the algorithm is the choice of sensors, the individual sensors’ thresholds, and the optimal decision fusion rule. The results show the capability of the algorithm to select an optimal configuration for a given requirement consisting of multiple objectives.
Autonomous sensor manager agents (ASMA)
Author(s):
Lisa Ann Osadciw
Autonomous sensor manager agents are presented as an algorithm to perform sensor management within a multisensor fusion network. The design of the hybrid ant-system/particle-swarm agents is described in detail, with some insight into their performance. Although the algorithm is designed for the general sensor management problem, a simulation example involving two radar systems is presented. Algorithmic parameters are determined by the size of the region covered by the sensor network, the number of sensors, and the number of parameters to be selected. With straightforward modifications, this algorithm can be adapted for most sensor management problems.
Association in Level 2 fusion
Author(s):
Mieczyslaw M. Kokar;
Christopher J. Matheus;
Jerzy A. Letkowski;
Kenneth P. Baclawski;
Paul Kogut
After a number of years of intensive research on Level 1 fusion, the focus is shifting to higher levels. Level 2 fusion differs from Level 1 fusion in its emphasis on relations among objects rather than on the characteristics (position, velocity, type) of single objects. While the number of such characteristics grows linearly with the number of objects considered by an information fusion system, the same cannot be said about the number of possible relations, which can grow exponentially. To alleviate the problems of computational complexity in Level 2 processing, the authors of this paper have suggested the use of ontologies. In this paper we analyze the issue of association in Level 2 fusion. In particular, we investigate ways in which ontologies, and annotations of situations in terms of those ontologies, can be used to decide which objects, and/or relations among them, can be considered to be the same. This is analogous to data association in Level 1 fusion. First, we show the kinds of reasoning that can be carried out on the annotations in order to identify various objects and possible coreferences. Second, we analyze how uncertainty information can be incorporated into the process. The reasoning aspect depends on the features of the ontology representation language used. We focus on OWL, the Web Ontology Language. This language comprises, among others, constructs for expressing multiplicity constraints as well as features like "functional property" and "inverse functional property". We show how these features can be used in resolving the identities of objects and relations. Moreover, we show how a consistency-checking tool (ConsVISor) developed by the authors can be used in this process.
Sensor management and Bayesian networks
Author(s):
Nuri Yilmazer;
Lisa Ann Osadciw
This paper introduces the sensor management problem and uses Bayesian networks as a scalable approach to handling the operational decisions concerning a sensor network. In general, single-sensor systems provide only partial information on the state of the event or environment, while multisensor systems provide a synergistic effect, which improves the quality and availability of information. Data fusion techniques can effectively combine this environmental information from similar and/or dissimilar sensors. Until recently, the operator could manage such systems easily, but current systems are more complex and produce data more quickly than earlier versions. When this occurs, a sensor manager becomes necessary to assist the operators. Researchers have developed many single-point sensor management solutions. General sensor management algorithms that can handle a variety of sensor network applications have yet to emerge.
A category theory description of multisensor fusion
Author(s):
Steven N. Thorsen;
Mark E. Oxley
Data fusion as a science has been described in the literature in great detail by many authors, particularly over the last two decades. These descriptions are, for the vast majority, non-mathematical in nature and have lacked the symbolism and clarity of mathematical precision. This paper demonstrates a way of describing the science of data fusion using diagrams and category theory. The description begins by using category theory to develop a clear definition of what fusion is in a mathematical sense. The definitions of fusion rules and fusors show how a notion of "betterness" can be defined by developing appropriate functionals. Using a simple diagram of a multisensor process, we explain how receiver operating characteristic (ROC) curves can provide an appropriate functional for comparing fusion rules, fusors, and even classifiers. A partial ordering of a finite number of fusors can then be created.
Methodology for building confidence measures
Author(s):
Aaron L. Bramson
This paper presents a generalized methodology for propagating known or estimated levels of individual source-document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.
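One simple way to combine the reliability measures of multiple sources, step (i) above, is the independent-sources rule sketched below; this rule is an illustrative assumption, not necessarily the paper's exact formula:

```python
def combined_reliability(reliabilities):
    """Probability that at least one of several independent sources
    asserting the same element is truthful:
    1 minus the product of the individual failure probabilities.
    (Illustrative combination rule; assumes source independence.)"""
    p_all_wrong = 1.0
    for r in reliabilities:
        p_all_wrong *= (1.0 - r)
    return 1.0 - p_all_wrong

# Three documents asserting the same element, with different reliabilities
print(round(combined_reliability([0.6, 0.5, 0.3]), 2))  # → 0.86
```

Under this rule, corroboration from even low-reliability documents raises the combined confidence, which matches the intuition behind augmenting initial certainty levels with multiple sources.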
Extension of a distance-based fusion framework
Author(s):
Eric Gregoire
In this paper, the general logic-based framework for knowledge and beliefs fusion proposed by Konieczny, Lang and Marquis is investigated. It is shown that contrary to one of its main objectives, it fails to handle the fusion of some inconsistent knowledge bases. Accordingly, it is revisited in order to overcome this drawback.
Object aggregation using merge-at-a-point algorithm
Author(s):
Kanupriya Salaria;
Wiriyanto Darsono;
Michael Hinman;
Mark Linderman;
Li Bai
This paper describes a novel technique for detecting a military convoy's movement patterns using Ground Moving Target Indicator (GMTI) data. The specific pattern studied here is groups of moving vehicles merging at a prescribed location. The algorithm can be used to detect a military convoy's identity so that the situation can be assessed to prevent hostile military advances. The technique uses the minimum error solution (MES) to predict the point of intersection of vehicle tracks. By comparing this point of intersection to the prescribed location, it can be determined whether the vehicles are merging. Two tasks are performed to effectively determine the merged vehicle group patterns: 1) investigate the number of vehicles needed in the MES algorithm, and 2) analyze three decision rules for clustering the vehicle groups. Simulation has shown approximately 88.9% accuracy in detecting vehicle groups that merge at a prescribed location.
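The abstract does not spell out the MES computation; one common least-squares formulation, assumed here purely for illustration, treats each vehicle track as a line (position plus heading) and finds the point minimizing the summed squared distance to all track lines:

```python
import numpy as np

def merge_point(points, directions):
    """Least-squares 'merge at a point': for 2-D tracks given as a position
    p_i and heading d_i, solve sum_i P_i x = sum_i P_i p_i with
    P_i = I - d_i d_i^T, which minimizes the summed squared distance
    from x to every track line."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)  # projector orthogonal to the heading
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

For two tracks starting at (0, 0) and (10, 0) and heading toward each other at 45°, the recovered merge point is (5, 5); comparing such a point against the prescribed location is then a simple distance test.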
A scalable portable object-oriented framework for parallel multisensor data-fusion applications in HPC systems
Author(s):
Pankaj Gupta;
Guru Prasad
Show Abstract
Multi-sensor data fusion is the synergistic integration of multiple data sets. Data fusion includes processes for
aligning, associating and combining data and information in estimating and predicting the state of objects, their
relationships, and characterizing situations and their significance. The combination of complex data sets and the
need for real-time data storage and retrieval compounds the data fusion problem. The systematic development and
use of data fusion techniques are particularly critical in applications requiring massive, diverse, ambiguous, and
time-critical data. Such conditions are characteristic of new emerging requirements; e.g., network-centric and
information-centric warfare, low intensity conflicts such as special operations, counter narcotics, antiterrorism,
information operations and CALOW (Conventional Arms, Limited Objectives Warfare), economic and political
intelligence. In this paper, Aximetric presents a novel, scalable, object-oriented metamodel framework for a parallel,
cluster-based data-fusion engine on High Performance Computing (HPC) systems. The data-clustering algorithms
provide a fast, scalable technique to sift through massive, complex data sets coming through multiple streams in
real-time. The load-balancing algorithm provides the capability to evenly distribute the workload among processors
on-the-fly and achieve real-time scalability. The proposed data-fusion engine exploits unique data-structures for fast
storage, retrieval and interactive visualization of the multiple data streams.
Real-time sensor validation and fusion for distributed autonomous sensors
Author(s):
Xiaojing Yuan;
Xiangshang Li;
Bill P. Buckles
Show Abstract
Multi-sensor data fusion has found widespread applications in industrial and research sectors. The purpose of real
time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data
sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion
framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture which
consists of four layers - the transaction layer, the process fusion layer, the control layer, and the planning layer.
This paradigm facilitates distribution of intelligence to the sensor level and sharing of information among sensors,
controllers, and other devices in the system. The openness of the architecture also provides a platform to test
different sensor validation and fusion algorithms and thus facilitates the selection of near optimal algorithms for
specific sensor fusion applications. In the version of the model presented in this paper, confidence-weighted
averaging is employed to address the dynamic system-state estimation. The state is computed using an
adaptive estimator and dynamic validation curve for numeric data fusion and a robust diagnostic map for decision
level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including
a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted
average.
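The adaptive estimator and validation curves are not detailed in the abstract; the core confidence-weighted averaging step can, however, be sketched minimally (the interface below is our assumption, not the RTSVFF API) as normalizing per-sensor confidences into fusion weights:

```python
def confidence_weighted_average(readings, confidences):
    """Fuse redundant sensor readings of the same quantity, weighting each
    reading by its (validation-derived) confidence; weights are normalized
    so they sum to one."""
    total = sum(confidences)
    if total == 0:
        raise ValueError("at least one sensor must have nonzero confidence")
    return sum(r * c for r, c in zip(readings, confidences)) / total
```

Fusing readings 10.0 and 12.0 with confidences 1 and 3 yields 11.5: the more trusted sensor pulls the fused estimate toward itself.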
Fusion of color, grayscale, and edge-detection algorithms for the accurate assessment of corrosion in shipboard tank and void imagery
Author(s):
Bruce N. Nelson;
Paul Slebodnick;
William Groeninger;
Edward J. Lemieux
Show Abstract
Over the last several years, the Naval Research Laboratory has developed video based systems for inspecting tanks
(ballast, potable water, fuel, etc.) and other voids on ships. Using these systems, approximately 15 to 30 images of the
coated surfaces of the tank or void being inspected are collected. A corrosion detection algorithm analyzes the
collected imagery. The corrosion detection algorithm output is the percent coatings damage in the tank being inspected.
The corrosion detection algorithm uses four independent algorithms, each of which separately assesses the coatings damage in
each analyzed image. The independent algorithm results from each image are fused with other available information to
develop a single coatings damage value for each of the analyzed images. The damage values for all of the images
analyzed are next aggregated in order to develop a single coatings damage value for the complete tank or void being
inspected. The results from this Corrosion Detection Algorithm have been extensively compared to the results of human
performed inspections over the last two years.
Data-driven aggregative schemes for multisource estimation fusion: a road travel time application
Author(s):
Nour-Eddin El Faouzi
Show Abstract
The principal motivation for combining estimators has been to avoid the a priori choice of which estimation
method to use, by attempting to aggregate all the information which each estimation model embodies.
In selecting the "best" model, one is often discarding useful independent evidence in those models which are
rejected. This paper deals with estimation fusion; that is, data fusion for the purpose of estimation. More
specifically, estimation fusion is studied under heterogeneous data source configurations.
Two estimation fusion schemes can be considered: projective and aggregative. A unified linear model and
general framework for the latter scheme are established. Explicit optimal fusion strategies in the sense of the best
linear estimation and weighted least squares are presented. The evaluation of the effectiveness of the proposed
schemes was conducted on the traffic application, namely, travel time estimation in a given path of a road
network.
In this problem, data come from sensors and other sources of information that are geographically distributed, where
communication limitations and other considerations often eliminate the possibility of transmitting the observations
to a central processing node where computation is performed.
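The paper's explicit optimal strategies are not reproduced in the abstract; a textbook special case of best-linear-unbiased aggregation, assumed here for illustration, combines independent unbiased estimators (e.g. travel-time estimates from different sources) with weights proportional to their inverse variances:

```python
import numpy as np

def blue_fusion(estimates, variances):
    """Best-linear-unbiased combination of independent, unbiased estimators:
    weights proportional to inverse variances.  Returns the fused estimate
    and its (reduced) variance."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    w = inv / inv.sum()                       # normalized inverse-variance weights
    fused = float(np.dot(w, estimates))
    fused_var = float(1.0 / inv.sum())        # never larger than the best input
    return fused, fused_var
```

Two equally reliable estimates of 10 and 14 minutes fuse to 12 minutes with half the variance of either source, illustrating why aggregation beats discarding the "rejected" model.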
Data fusion in road traffic engineering: an overview
Author(s):
Nour-Eddin El Faouzi
Show Abstract
The objective of this paper is to present an analysis of recent applications of data fusion (DF) in road traffic engineering.
First, we report the most significant applications of data fusion techniques in the road traffic engineering area (traffic
monitoring, signal control, automatic incident detection, traffic forecasting, intelligent transportation systems, etc.), as
well as the extent and direction of DF interest in the field. Second, a classification including applications, fusion goals
and mathematical tools is proposed.
The real-time guide data fusion for an optoelectronic theodolite
Author(s):
Juan Chen;
Yanqiu Huang
Show Abstract
Among measuring instruments, the optoelectronic theodolite offers the highest measuring precision for space-target
positioning and flight-path measurement. With the advantages of real-time operation, high precision, dynamic
tracking, and image reproduction, it is widely used in aerospace and weapons experiments. To meet new
challenges, the measuring system must now change from a single theodolite to a network, and from manual
tracking to multi-source automatic tracking. Multi-source information fusion technology is therefore needed
to realize real-time guidance for the theodolite.
Driver drowsiness detection using multimodal sensor fusion
Author(s):
Elena O. Andreeva;
Parham Aarabi;
Marios G. Philiastides;
Keyvan Mohajer;
Majid Emami
Show Abstract
This paper proposes a multi-modal sensor fusion algorithm for the estimation of driver drowsiness. Driver
sleepiness is believed to be responsible for more than 30% of passenger car accidents and for 4% of all accident
fatalities. In commercial vehicles, drowsiness is blamed for 58% of single truck accidents and 31% of commercial
truck driver fatalities. This work proposes an innovative automatic sleep-onset detection system. Using multiple
sensors, the driver's body is studied as a mechanical structure of springs and dampers. The sleep-detection
system consists of highly sensitive triple-axial accelerometers to monitor the driver’s upper body in 3-D. The
subject is modeled as a linear time-variant (LTV) system. An LMS adaptive filter estimation algorithm generates
the transfer function (i.e. weight coefficients) for this LTV system. Separate coefficients are generated for the
awake and asleep states of the subject. These coefficients are then used to train a neural network. Once trained, the
neural network classifies the condition of the driver as either awake or asleep. The system has been tested on a
total of 8 subjects. The tests were conducted on sleep-deprived individuals for the sleep state and on fully awake
individuals for the awake state. When trained and tested on the same subject, the system detected sleep and
awake states of the driver with a success rate of 95%. When the system was trained on three subjects and then retested
on a fourth "unseen" subject, the classification rate dropped to 90%. Furthermore, an attempt was made to
correlate driver posture and sleepiness by observing how car vibrations propagate through a person's body. Eight
additional subjects were studied for this purpose. The results obtained in this experiment proved inconclusive
which was attributed to significant differences in the individual habitual postures.
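The LMS stage described above identifies transfer-function weights from accelerometer data; a minimal single-channel sketch of LMS system identification (tap count, step size, and the noise-free setting are our assumptions, not the paper's) looks like:

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Estimate FIR weights w so that d[k] ~ w @ [x[k], ..., x[k-n_taps+1]],
    updating with the LMS rule w <- w + mu * e * window."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        window = x[k - n_taps + 1:k + 1][::-1]  # newest sample first
        e = d[k] - w @ window                   # a-priori prediction error
        w += mu * e * window
    return w
```

On white-noise input filtered by a known 4-tap FIR system, the learned weights converge to the true taps; in the paper's setting, separate weight sets learned for awake and asleep states become the features fed to the neural network.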
Improvement of multibeam echosounder bottom detection
Author(s):
Gerard Llort;
Christophe Sintes
Show Abstract
In bathymetric multibeam echosounder systems, the interferometric technique is widely used to obtain the altitude of an illuminated seafloor section beam. Classical methods use only the zero-crossing instant of the phase difference to obtain the sea depth. Nevertheless, the phase difference carries more information near the zero crossing. Following the idea of radar multilook, the seabed footprints of two close beams overlap and, consequently, there exists a common illuminated area. In this paper, we show that the mutual information between two close beams is sufficient to merge them into one, because the signals received from multiple sensors are processed coherently (that is, beamforming). This mutual information, established by several beamforming methods, makes it possible to take into account all points included in the beam footprint in order to reconstruct the seafloor more accurately. Moreover, considering a beamforming width between ±25° and ±60°, we can recreate a continuous phase difference by merging all phase differences. Beam angles close to nadir are not considered because of their unacceptable performance in terms of interferometric quality. In addition, the effect of changing the interferometric spacing, commonly called the baseline, is also studied. A correct baseline value plays an important role in high-resolution beamforming. In fact, the multibaseline causes an increase in the phase-difference variance and, therefore, an increase in measurement errors. Finally, we propose fusing the multilook techniques and the baseline effects to improve multibeam
echosounder bottom detection.
Wireless intelligent monitoring and analysis systems
Author(s):
Nina Berry;
Donna Djordjevich;
Teresa Ko;
Ben Coburn;
Stephen Elliott;
Brett Tsudama;
Melissa Whitcomb
Show Abstract
The wireless intelligent monitoring and analysis system is a proof of concept directed at discovering solutions for
providing decentralized intelligent data analysis and control for distributed containers equipped with wireless sensing
units. The objective was to embed smart behavior directly within each wireless sensor container, through the
incorporation of agent technology into each sensor suite. This approach provides intelligent directed fusion of data based
on a social model of teaming behavior. This system demonstrates intelligent sensor behavior that converts raw sensor
data into group knowledge to better understand the integrity of the complete container environment. The emergent team
behavior is achieved with lightweight software agents that analyze sensor data based on their current behavior mode.
When the system starts up or is reconfigured, the agents self-organize into virtual random teams based on the
leader/member/lonely paradigm. Each team leader collects sensor data from its members and investigates all abnormal
situations to determine the legitimacy of high sensor readings. The team leaders flag critical situations and report this
knowledge back to the user via a collection of base stations. This research provides insight into the integration issues and
concerns associated with integrating multi-disciplinary fields of software agents, artificial life and autonomous sensor
behavior into a complete system.
Robust real-time audiovisual face detection
Author(s):
Wei Mark Fang;
Parham Aarabi
Show Abstract
This paper presents a face detection system that synergizes audio localization and visual face detection. This audiovisual
face detection system is based on microphone sound localization and image processing algorithms. The
system integrates sound localization by Time Delay of Arrival with the iterative application of
Adaptive Background Segmentation to robustly perform real-time face detection on a stream of webcam images.
Experimental results using an array of 24 microphones and a fixed-view webcam show that the audiovisual face
detection system achieves a face detection success rate of 97.5%, with a convergence time of 0.82 seconds and a
display frame rate of 5.8 Hz, on a 2.5 GHz Pentium IV.
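The abstract does not give the localization internals; the time-delay-of-arrival step for a single microphone pair can be sketched (integer-sample delays and simple cross-correlation peak picking are our simplifying assumptions; a 24-microphone array would aggregate many such pairwise estimates) as:

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Estimate, in whole samples, how much sig_b lags sig_a by locating
    the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)
```

Delaying a random signal by 7 samples and feeding both copies in recovers a delay of 7; with a known microphone spacing and sound speed, that delay converts to a bearing toward the speaker's face.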
Data fusion of several support-vector-machine breast-cancer diagnostic paradigms using a GRNN oracle
Author(s):
Walker H. Land Jr.;
Lut Wong;
Dan McKee;
Timothy Masters;
Frances Anderson;
Sapan Sarvaiya
Show Abstract
Breast cancer is second to lung cancer as a tumor-related cause of death in women. For 2003, it was reported that
211,300 new cases and 39,800 deaths would occur in the US. It has been proposed that breast cancer mortality could be
decreased by 25% if women in appropriate age groups were screened regularly. Currently, the preferred method for
breast cancer screening is mammography, due to its widespread availability, low cost, speed, and non-invasiveness. At
the same time, while mammography is sensitive to the detection of breast cancer, its positive predictive value (PPV) is
low, resulting in costly, invasive biopsies that are only 15-34% likely to reveal malignancy at histologic examination.
This paper explores the use of a newly designed Support Vector Machine (SVM)/Generalized Regression Neural
Network (GRNN) Oracle hybrid and evaluates the hybrid’s performance as an interpretive aid to radiologists. The
authors demonstrate that this hybrid has the potential to (1) improve both specificity and PPV of screen film
mammography at 95-100% sensitivity, and (2) consistently produce partial AZ values (defined as average specificity
over the top 10% of the ROC curve) of greater than 30%, using a data set of ~2500 lesions from five different hospitals
and/or institutions.
Cognitive fusion analysis based on context
Author(s):
Erik P. Blasch;
Susan Plano
Show Abstract
The standard fusion model includes active and passive user interaction in level 5 - “User Refinement”. User refinement is more than just details of passive automation partitioning - it is the active management of information. While a fusion system can explore many operational conditions over myopic changes, the user has the ability to reason about the hyperopic “big picture.” Blasch and Plano developed cognitive-fusion models that address user constraints including: intent, attention, trust, workload, and throughput to facilitate hyperopic analysis. To enhance user-fusion performance modeling (i.e. confidence, timeliness, and accuracy); we seek to explore the nature of context. Context, the interrelated conditions of which something exists, can be modeled in many ways including geographic, sensor, object, and environmental conditioning. This paper highlights user refinement actions based on context to constrain the fusion analysis for accurately representing the trade space in the real world. As an example, we explore a target identification task in which contextual information from the user’s cognitive model is imparted to a fusion belief filter.
The application of the system parameter fusion principle to optimization analysis of bracing systems for deep foundation pits
Author(s):
Ying Liao;
Qiangguo Pu
Show Abstract
The optimization analysis of bracing systems for deep foundation pits is a rather complex problem of system engineering
which relates to many indexes belonging to safety and feasibility, economy and rationality, environmental protection and
convenience of construction. In this paper, the evaluation index system of bracing systems for deep foundation pits is
established; each index's overall ranking weight is determined; the comprehensive evaluation results are obtained to
evaluate the degree of superiority and inferiority of bracing schemes by means of the system parameter fusion principle;
and then the optimum scheme is concluded.
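The weighting-and-aggregation step at the heart of this evaluation can be sketched as a weighted score per scheme (the index weights and scores below are hypothetical, chosen only to illustrate the mechanism):

```python
def best_scheme(scores, weights):
    """Aggregate per-index scores for each bracing scheme using the overall
    index weights, and return the index of the highest-scoring scheme."""
    totals = [sum(w * s for w, s in zip(weights, row)) for row in scores]
    return max(range(len(totals)), key=totals.__getitem__)
```

With three indexes weighted 0.5/0.3/0.2, a scheme that scores well only on the heavily weighted index can still lose to one that is solidly good across all indexes, which is exactly the trade-off the comprehensive evaluation is meant to resolve.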
Geo-information reality: mass-user-oriented modeling of the environment
Author(s):
Eugene Levin;
Tomasz P. Jannson;
Guennadi Guienko;
Alexandre Jarnovsky
Show Abstract
Topographic maps, both paper and computerized, require specific qualifications in the end users - planners,
designers, geodesists, etc. These maps offer the user some simplified information about reality, described in
accordance with a meaningful set of cartographic conventions. Overlaid with an orthophoto, such cartographic
information becomes a photomap, representing the surface in a more realistic way. A non-expert, used to perceiving
environmental reality through landscape images taken from the Earth's surface, faces certain difficulties interpreting
images collected from an aircraft or satellite. In fact, modern technologies do not provide the mass user with full-value
visual information about the real environment.
The mass user is generally not concerned with using maps for measurements, but rather uses them to search for
some semantic information. Thus, a new, mass-user-oriented branch of GIS should be based on a new concept - geo-information reality, i.e., mass-user-oriented modeling of the environment. The key to this concept is an object-graphic
basis for GIS, bringing to bear modern methods of acquiring, storing, and representing visual and textual
information in digital form. This paper presents the proposed concept in detail.
Public health situation awareness: toward a semantic approach
Author(s):
Parsa Mirhaji;
Rachel L. Richesson;
James P. Turley;
Jiajie Zhang;
Jack W. Smith
Show Abstract
We propose a knowledge-based public health situation awareness system. The basis for this system is an
explicit representation of public health situation awareness concepts and their interrelationships. This representation is
based upon the users’ (public health decision makers) cognitive model of the world, and optimized towards the efficacy
of performance and relevance to the public health situation awareness processes and tasks. In our approach, explicit
domain knowledge is the foundation for interpretation of public health data, as opposed to conventional systems where
the statistical methods are the essence of the processes.
Objectives: To develop a prototype knowledge-based system for public health situation awareness and to demonstrate
the utility of knowledge intensive approaches in integration of heterogeneous information, eliminating the effects of
incomplete and poor quality surveillance data, uncertainty in syndrome and aberration detection and visualization of
complex information structures in public health surveillance settings, particularly in the context of bioterrorism (BT)
preparedness.
The system employs the Resource Description Framework (RDF) and additional layers of more expressive
languages to explicate the knowledge of domain experts into machine interpretable and computable problem-solving
modules that can then guide users and computer systems in sifting through the most “relevant” data for syndrome and
outbreak detection and investigation of root cause of the event.
The Center for Biosecurity and Public Health Informatics Research is developing a prototype knowledge-based
system around influenza, which has complex natural disease patterns, many public health implications, and is a potential
agent for bioterrorism.
The preliminary data from this effort may demonstrate superior performance in information integration,
syndrome and aberration detection, information access through information visualization, and cross-domain
investigation of the root causes of public health events.
Spiking neural networks for higher-level information fusion
Author(s):
Neil A. Bomberger;
Allen M. Waxman;
Felipe M. Pait
Show Abstract
This paper presents a novel approach to higher-level (2+) information fusion and knowledge representation using
semantic networks composed of coupled spiking neuron nodes. Networks of spiking neurons have been shown to
exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking
reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been
hypothesized to be involved in feature binding. The approach in this paper embeds spiking neurons in a semantic
network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple
synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is
hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This
approach is demonstrated with a simulated scenario involving the tracking of suspected criminal vehicles between
meeting places in an urban environment.
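The spiking dynamics themselves are beyond the abstract; the phase-locking phenomenon the approach relies on can be illustrated with a minimal Kuramoto phase-oscillator model (our stand-in for exposition, not the paper's spiking-neuron model):

```python
import numpy as np

def kuramoto(theta0, omega, K, steps=2000, dt=0.01):
    """Euler-integrate dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i).
    With identical natural frequencies and coupling K > 0, the phases lock
    together, the analogue of a synchronized node sub-assembly."""
    theta = np.array(theta0, dtype=float)
    n = len(theta)
    for _ in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (np.asarray(omega, dtype=float) + (K / n) * coupling)
    return theta
```

Starting three coupled oscillators at phases 0.0, 0.5, and 1.0 with equal natural frequencies, the phase spread collapses toward zero, mirroring how coupled nodes in the semantic network settle into an in-phase assembly representing one hypothesis.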