Estimability of thrusting trajectories in 3D from a single passive sensor
Author(s):
Ting Yuan;
Yaakov Bar-Shalom;
Peter Willett;
R. Ben-Dov;
S. Pollak
The problem of estimating the state of thrusting/ballistic endoatmospheric projectiles moving in 3-dimensional
(3-D) space using 2-dimensional (2-D) measurements from a single passive sensor (stationary or moving with
constant velocity) is investigated. The estimability is analyzed based on the Fisher Information Matrix (FIM) of
the target parameter vector, comprising the initial launch (azimuth and elevation) angles, drag coefficient and
thrust, which determine its trajectory according to a nonlinear motion equation. The initial position is assumed
to be obtained from the first line of sight (LoS) measurements intersected with a known-altitude plane. The
full-rank FIM ensures that this is an estimable system. The corresponding Cramér-Rao lower bound (CRLB)
quantifies the achievable performance of a statistically efficient estimator and can be used for impact
point prediction (IPP). Due to the inherent nonlinearity of the problem, the maximum likelihood estimate of
the target parameter vector is found using an iterated least squares (ILS) numerical approach. A combined grid
and ILS approach that searches over the launch-angle space is proposed. The drag coefficient-thrust grid-based ILS
approach is shown to converge to the global maximum and has reliable estimation performance. This is then
used for IPP.
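Iterated least squares is equivalent to Gauss-Newton iteration on the negative log-likelihood, with the Fisher Information Matrix appearing as the normal-equations matrix whose full rank certifies estimability. A minimal sketch on a hypothetical scalar-output exponential model (illustrative only, not the projectile dynamics of the paper):

```python
import numpy as np

def ils(h, jac, z, R_inv, theta0, iters=50):
    """Iterated least squares (Gauss-Newton) for z = h(theta) + Gaussian noise."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = z - h(theta)                    # measurement residuals
        H = jac(theta)                      # stacked Jacobian, shape (m, n)
        J = H.T @ R_inv @ H                 # Fisher information for Gaussian noise
        theta = theta + np.linalg.solve(J, H.T @ R_inv @ r)
    return theta, J                         # J at the estimate; CRLB = inv(J)

# Toy model: z_k = exp(-a t_k) + b, parameter vector theta = (a, b)
t = np.linspace(0.0, 4.0, 30)
h = lambda th: np.exp(-th[0] * t) + th[1]
jac = lambda th: np.column_stack([-t * np.exp(-th[0] * t), np.ones_like(t)])
rng = np.random.default_rng(0)
z = h([0.7, 0.3]) + 0.01 * rng.standard_normal(t.size)
theta_hat, J = ils(h, jac, z, np.eye(t.size) / 0.01**2, [0.5, 0.2])
```

A full-rank `J` at the solution is the estimability check the abstract describes; its inverse is the CRLB on the parameter estimates.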
Advances in displaying uncertain estimates of multiple targets
Author(s):
David Frederic Crouse
Both maximum likelihood estimation as well as minimum mean optimal subpattern assignment (MMOSPA)
estimation have been shown to provide meaningful estimates in instances of target identity uncertainty when the
number of targets present is known. Maximum likelihood measurement to track association (2D assignment) has
been widely studied and is reviewed in this paper. However, it is widely believed that approximate MMOSPA
estimation cannot be performed in real time except when considering a very small number of targets. This paper
demonstrates that the MMOSPA estimator arises as a special case of a minimum mean Wasserstein metric estimator
when the number of targets is unknown. Additionally, it is shown that approximate MMOSPA estimates can
be calculated in microseconds to milliseconds without extensive optimization, making MMOSPA estimation a
practicable alternative to more traditional estimators.
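One common way to approximate an MMOSPA estimate for a known target count is to alternate between permuting each posterior sample to best match the current estimate and re-averaging; the sketch below uses this scheme for two targets (a generic approximation under Gaussian-sample assumptions, not the authors' specific algorithm):

```python
import numpy as np

def approx_mmospa_2targets(samples, iters=10):
    """samples: (N, 2, d) posterior samples of two target states.
    Alternate best-permutation alignment with re-averaging."""
    est = samples[0].copy()               # asymmetric start breaks the ordering tie
    for _ in range(iters):
        d_id = ((samples - est) ** 2).sum(axis=(1, 2))
        d_sw = ((samples[:, ::-1, :] - est) ** 2).sum(axis=(1, 2))
        aligned = np.where((d_sw < d_id)[:, None, None], samples[:, ::-1, :], samples)
        est = aligned.mean(axis=0)
    return est

# Two well-separated 1-D targets with random (uncertain) identity ordering
rng = np.random.default_rng(1)
a = rng.normal(-5.0, 0.3, size=(500, 1))
b = rng.normal(+5.0, 0.3, size=(500, 1))
samples = np.stack([a, b], axis=1)        # (500, 2, 1)
flip = rng.random(500) < 0.5
samples[flip] = samples[flip][:, ::-1, :]
est = approx_mmospa_2targets(samples)
naive = samples.mean(axis=0)              # plain MMSE mean collapses toward 0
```

The aligned average recovers the two distinct target locations, whereas the naive component-wise mean merges them, which is exactly the identity-uncertainty pathology MMOSPA estimation avoids.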
Overview of Dempster-Shafer and belief function tracking methods
Author(s):
Erik Blasch;
Jean Dezert;
B. Pannetier
Over the years, there have been many proposed methods in set-based tracking. One example of set-based methods is the use
of Dempster-Shafer (DS) techniques to support belief-function (BF) tracking. In this paper, we overview the issues and
concepts that motivated DS methods for simultaneous tracking and classification/identification. DS methods have
attractive attributes if applied correctly, but there are pitfalls that must be carefully avoided, such as the redistribution of the
mass associated with conflicting measurements. Such comparisons and applications are found in Dezert-Smarandache
Theory (DSmT) methods from which the Proportional Conflict Redistribution (PCR5) rule supports a more comprehensive
approach towards applying evidential and BF techniques to target tracking. In the paper, we overview two decades of
research in the area of BF tracking and conclude with a comparative analysis of Bayesian, Dempster-Shafer, and the PCR5
methods.
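The conflict-mass issue the authors highlight is easy to see in Dempster's rule itself: the mass assigned to the empty set is discarded and renormalized away, which PCR5 instead redistributes proportionally to the conflicting sources. A minimal sketch of Dempster's rule on a Zadeh-style high-conflict example (illustrative mass values only):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination; focal sets given as frozensets.
    Returns the normalized fused masses and the conflict mass K."""
    raw, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            raw[C] = raw.get(C, 0.0) + a * b
        else:
            conflict += a * b             # mass falling on the empty set
    return {A: v / (1.0 - conflict) for A, v in raw.items()}, conflict

# Two sources over frame {t1, t2, t3} that agree only weakly on t3
m1 = {frozenset({"t1"}): 0.9, frozenset({"t3"}): 0.1}
m2 = {frozenset({"t2"}): 0.9, frozenset({"t3"}): 0.1}
fused, K = dempster_combine(m1, m2)
```

Here 99% of the joint mass is conflicting, yet renormalization assigns all belief to the weakly supported t3; this counterintuitive outcome is the motivation for PCR5-style redistribution rules.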
Decentralized closed-loop collaborative surveillance and tracking performance sensitivity to communications connectivity
Author(s):
Jonathan T. DeSena;
Sean R. Martin;
Jesse C. Clarke;
Daniel A. Dutrow;
Brian C. Kohan;
Andrew J. Newman
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR)
operations continue to expand, the limited capacity of human operators to effectively manage, control and exploit the ISR
ensemble is exceeded, leading to reduced operational effectiveness. Our approach is to apply the principles of feedback
control to ISR operations, “closing the loop” from the sensor collections through automated processing to ISR asset
control. Previous work by the authors demonstrated closed-loop control, involving both platform routing and sensor
pointing, of a multi-sensor, multi-platform ISR ensemble tasked with providing situational awareness and performing
search, track and classification of multiple targets. The multi-asset control used a joint optimization of routes and
schedules in a centralized architecture, requiring a fully-connected communications network. This paper presents an
extension of the previous work to a decentralized architecture that relaxes the communications requirements. The
decentralized approach achieves a solution equivalent to the centralized system when the network allows full
communications and gracefully degrades ISR performance as communications links degrade. The decentralized closed-loop
ISR system has been exercised via a simulation test bed against a scenario in the Afghanistan theater under a
variety of network conditions, from full to poor connectivity. Simulation experiment results are presented.
Stochastic context-free grammars for scale-dependent intent inference
Author(s):
Bhashyam Balaji;
Mustafa Fanaswala;
Vikram Krishnamurthy
The detection and tracking of surface targets using airborne radars has been extensively investigated in the literature. However, the state-of-the-art techniques in multi-target tracking do not automatically provide information that is potentially of tactical significance, such as anomalous trajectory patterns. In this paper, recent work that attempts to address this problem based on stochastic context-free grammars (SCFGs) is reviewed. It is shown that the production rule probabilities in SCFGs can be used to constrain the sizes and orientations of target trajectories and hence lead to the development of more refined syntactic trackers.
Sensor selection for target localization in a network of proximity sensors and bearing sensors
Author(s):
Qiang Le;
Lance M. Kaplan
The work considers sensor fusion in a heterogeneous network of proximity and bearings-only sensors for multiple target tracking. Specifically, various particle implementations of the probability hypothesis density filter are proposed that consider two different fusion strategies: 1) the traditional iterated-corrector approach, and 2) explicit fusion of the multitarget density. This work also investigates sensor type (proximity or bearings-only) selection via the Rényi entropy criterion. The simulation results demonstrate comparable localization performance for the two fusion methods, and they show that sensor type selection usually outperforms single sensor type performance.
Evaluating detection and estimation capabilities of magnetometer-based vehicle sensors
Author(s):
David M. Slater;
Garry M. Jacyna
In an effort to secure the northern and southern United States borders, MITRE has been tasked
with developing Modeling and Simulation (M&S) tools that accurately capture the mapping between
algorithm-level Measures of Performance (MOP) and system-level Measures of Effectiveness
(MOE) for current/future surveillance systems deployed by the Customs and Border Protection
Office of Technology Innovations and Acquisitions (OTIA). This analysis is part of a larger
M&S undertaking. The focus is on two MOPs for magnetometer-based Unattended Ground Sensors
(UGS). UGS are placed near roads to detect passing vehicles and estimate properties of the vehicle’s
trajectory such as bearing and speed. The first MOP considered is the probability of detection. We
derive probabilities of detection for a network of sensors over an arbitrary number of observation
periods and explore how the probability of detection changes when multiple sensors are employed.
The performance of UGS is also evaluated based on the level of variance in the estimation of trajectory
parameters. We derive the Cramer-Rao bounds for the variances of the estimated parameters
in two cases: when no a priori information is known and when the parameters are assumed to be
Gaussian with known variances. Sample results show that UGS perform significantly better in the
latter case.
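The two cases correspond to the standard and Bayesian CRLBs: a Gaussian prior with known covariance contributes its own Fisher information additively, which can only shrink the bound. A toy linear-Gaussian illustration of that structure (not the magnetometer signal model of the paper):

```python
import numpy as np

# Data Fisher information for z = H @ theta + noise, noise covariance R
H = np.array([[1.0, 0.2],
              [0.5, 1.0],
              [1.0, 1.0]])
R = 0.5 * np.eye(3)
J_data = H.T @ np.linalg.inv(R) @ H

# Case 1: no a priori information -> standard CRLB
crlb_flat = np.linalg.inv(J_data)

# Case 2: Gaussian prior with known covariance Sigma0 adds its information
Sigma0 = np.diag([0.4, 0.4])
crlb_prior = np.linalg.inv(J_data + np.linalg.inv(Sigma0))
```

Because the prior information matrix is positive definite, every diagonal entry of the Bayesian bound is strictly smaller, matching the paper's observation that the UGS perform significantly better in the prior-informed case.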
Urban multitarget tracking via gas-kinetic dynamics models
Author(s):
Ronald Mahler
Multitarget tracking in urban environments presents a major theoretical and practical challenge. A recently suggested approach is that of modeling traffic dynamics using the fluid-kinetic methods of traffic-flow theory (TFT). I propose use of the newer, more general, gas-kinetic (GK) approach to TFT. In GK, traffic flow is modeled as a one- or two-dimensional constrained gas. The paper demonstrates the following. (1) The foundational concept in GK--the "phase-space density"--is the same thing as the probability hypothesis density (PHD) of multitarget tracking theory. (2) The theoretically best-that-one-can-do approach to TFT-based tracking is a PHD filter. (3) Better performance can be obtained by upgrading this PHD filter to a cardinalized PHD (CPHD) filter. A simple example is presented to illustrate how PHD/CPHD filters can be integrated with conventional macroscopic, mesoscopic, and microscopic TFT.
Background agnostic CPHD tracking of dim targets in heavy clutter
Author(s):
Adel I. El-Fallah;
Aleksandar Zatezalo;
Ronald P. S. Mahler;
Raman K. Mehra;
Wellesley E. Pereira
Detection and tracking of dim targets in heavy clutter environments is a daunting theoretical and practical problem.
Application of the recently developed Background Agnostic Cardinalized Probability Hypothesis Density (BA-CPHD)
filter provides a very promising approach that adequately addresses all the complexities and the nonlinear nature of this
problem. In this paper, we present analysis, derivation, development, and application of a BA-CPHD implementation for
tracking dim ballistic targets in environments with a range of unknown clutter rates, unknown clutter distribution, and
unknown target probability of detection. The effectiveness and accuracy of the implemented algorithms are assessed and
evaluated. Results that evaluate and also demonstrate the specific merits of the proposed approach are presented.
Tracking, identification, and classification with random finite sets
Author(s):
Ba Tuong Vo;
Ba Ngu Vo
This paper considers the problem of joint multiple target tracking, identification, and classification. Standard
approaches tend to treat the tasks of data association, estimation, track management and classification as
separate problems. This paper outlines how it is possible to formulate a unified Bayesian recursion for joint
tracking, identification and classification. The formulation is based on the theory of random finite sets or finite set
statistics, and specifically labeled random finite sets, which results in a propagation of a multi-target posterior
which contains not only target information but all available track information. Implementations are briefly
discussed. Where appropriate for particular applications this method can be considered Bayes optimal.
PHD filtering with localised target number variance
Author(s):
Emmanuel Delande;
Jérémie Houssineau;
Daniel Clark
Mahler’s Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target
detection and tracking problem by propagating a mean density of the targets in any region of the state
space. However, when retrieving some local evidence on the target presence becomes a critical component of
a larger process - e.g. for sensor management purposes - the local target number is insufficient unless some
confidence in the estimate of the number of targets can be provided as well. In this paper, we propose a
first implementation of a PHD filter that also produces an estimate of the localised variance in the target number
after each update step; we then illustrate the advantage of this PHD filter with variance on simulated data from
a multiple-target scenario.
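In a Gaussian-mixture PHD implementation, the local mean target count is simply the intensity integrated over the region of interest; the paper's contribution is to attach a variance to that number. The mean count alone can be sketched as follows (hypothetical mixture values):

```python
import math

def expected_count_1d(weights, means, stds, lo, hi):
    """Integral of a 1-D Gaussian-mixture PHD intensity over [lo, hi]:
    the expected number of targets in that interval."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return sum(
        w * (Phi((hi - m) / s) - Phi((lo - m) / s))
        for w, m, s in zip(weights, means, stds)
    )

# Hypothetical PHD mixture: two likely targets plus one weak component
weights = [0.9, 0.8, 0.2]
means = [0.0, 10.0, 30.0]
stds = [1.0, 1.0, 2.0]
n_region = expected_count_1d(weights, means, stds, -5.0, 15.0)
n_total = expected_count_1d(weights, means, stds, -1e6, 1e6)
```

The total integral (about 1.9 here) is the usual PHD cardinality estimate; restricting the integral to a surveillance region gives the localised count whose confidence the paper quantifies.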
Divergence detectors for multitarget tracking algorithms
Author(s):
Ronald Mahler
Single-target tracking filters will typically diverge when their internal measurement or motion models deviate
too much from the actual models. Niu, Varshney, Alford, Bubalo, Jones, and Scalzo have proposed a metric--
the normalized innovation squared (NIS)--that recursively estimates the degree of nonlinearity in a single-target
tracking problem by detecting filter divergence. This paper establishes the following: (1) NIS can be extended
to generalized NIS (GNIS), which addresses more general nonlinearities; (2) NIS and GNIS are actually anomaly
detectors, rather than filter-divergence detectors; (3) NIS can be heuristically generalized to a multitarget NIS
(MNIS) metric; (4) GNIS also can be rigorously extended to multitarget problems via the multitarget GNIS
(MGNIS); (5) explicit, computationally tractable formulas for MGNIS can be derived for use with CPHD and
PHD filters; and thus (6) these formulas can be employed as anomaly detectors for use with these filters.
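The single-target NIS these extensions build on is simple to compute: NIS = νᵀS⁻¹ν for innovation ν with predicted covariance S, and E[NIS] = dim(ν) for a consistent filter. A sketch of a window-averaged anomaly check (plain NIS, not MNIS/MGNIS, and the 1.5-per-dof threshold is illustrative rather than a calibrated chi-square quantile):

```python
import numpy as np

def nis(nu, S):
    """Normalized innovation squared for one filter update."""
    return float(nu @ np.linalg.solve(S, nu))

def anomaly_flag(nis_values, dof, thresh_per_dof=1.5):
    """Flag an anomaly when the time-averaged NIS drifts well above its
    consistent-filter expectation of `dof`."""
    return float(np.mean(nis_values)) > thresh_per_dof * dof

rng = np.random.default_rng(2)
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])
L = np.linalg.cholesky(S)
# Consistent innovations vs. innovations with an unmodeled bias
consistent = [nis(L @ rng.standard_normal(2), S) for _ in range(200)]
biased = [nis(L @ rng.standard_normal(2) + np.array([3.0, 0.0]), S) for _ in range(200)]
ok = anomaly_flag(consistent, dof=2)
bad = anomaly_flag(biased, dof=2)
```

As the paper argues, this statistic detects anomalies (here, a model-mismatch bias) rather than filter divergence per se: the consistent stream stays near its expectation of 2 while the biased stream does not.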
A Gaussian mixture ensemble transform filter for vector observations
Author(s):
Santosh Nannuru;
Mark Coates;
Arnaud Doucet
The ensemble Kalman filter relies on a Gaussian approximation being a reasonably accurate representation of the filtering distribution. Reich recently introduced a Gaussian mixture ensemble transform filter which can address scenarios where the prior can be modeled using a Gaussian mixture. Reich's derivation is suitable for a scalar measurement or a vector of uncorrelated measurements. We extend the derivation to the case of vector observations with arbitrary correlations. We illustrate through numerical simulation that implementation is challenging, because the filter is prone to instability.
High Level Information Fusion (HLIF) with nested fusion loops
Author(s):
Robert Woodley;
Michael Gosnell;
Amber Fischer
Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information.
Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming
common. Through the years, some common frameworks have emerged for dealing with information fusion—perhaps the
most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial
developments, numerous models of information fusion have emerged, hoping to better capture the human-centric
process of data analyses within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with
Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address challenges of high
level information fusion and handle bias, ambiguity, and uncertainty (BAU) for Situation Modeling, Threat Modeling,
and Threat Prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial
research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level
information fusion (HLIF) to affect lower levels without the double counting of information or other biasing issues. The
initial FURNACE project was focused on the underlying algorithms to produce a fusion system able to handle BAU and
repurposed data in a cohesive manner. FURNACE supports analysts’ efforts to develop situation models, threat models,
and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the
military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence
markets.
A robust technique for semantic annotation of group activities based on recognition of extracted features in video streams
Author(s):
Vinayak Elangovan;
Amir Shirkhodaie
Recognition and understanding of group activities can significantly improve situational awareness in
Surveillance Systems. To maximize reliability and effectiveness of Persistent Surveillance Systems, annotations of
sequential images gathered from video streams (i.e. imagery and acoustic features) must be fused together to generate
semantic messages describing group activities (GA). To facilitate efficient fusion of features extracted from different physical
sensors, a common structure is needed to ease integration of the processed data into new comprehension. In this paper, we
describe a framework for extraction and management of pertinent features/attributes vital for annotation of group
activities reliably. A robust technique is proposed for fusion of generated events and entities’ attributes from video
streams. A modified Transducer Markup Language (TML) is introduced for semantic annotation of events and entities
attributes. By aggregating multi-attribute TML messages, we have demonstrated that salient group activities can be
reliably annotated spatiotemporally. This paper discusses our experimental results and our analysis of a set of simulated
group activities performed under different contexts, and demonstrates the efficiency and effectiveness of the proposed
modified TML data structure, which facilitates seamless fusion of extracted information from video streams.
Feynman path integral discretization and its applications to nonlinear filtering
Author(s):
Bhashyam Balaji
In continuous nonlinear filtering theory, we are interested in solving certain parabolic second-order partial differential equations (PDEs), such as the Fokker-Planck equation. The fundamental solution of such PDEs can be written in various ways, such as the Feynman-Kac integral and the Feynman path integral (FPI). In addition, the FPI can be defined in several ways. In this paper, the FPI definition based on discretization is reviewed. This has the advantage of being rigorously defined as limits of finite-dimensional integrals. The rigorous and non-rigorous approaches are compared in terms of insight and successes in nonlinear filtering as well as other areas in mathematics.
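Concretely, for a scalar diffusion dx = f(x) dt + σ dW, one discretization-based definition of the FPI for the Fokker-Planck fundamental solution reads as follows (pre-point/Itô convention; other discretizations add a correction term involving f′):

```latex
P(x,t \mid x_0,t_0)
  = \lim_{N \to \infty} \int \prod_{i=1}^{N-1} dx_i \,
    \prod_{i=0}^{N-1} \frac{1}{\sqrt{2\pi\sigma^{2}\epsilon}}
    \exp\!\left[ -\frac{\epsilon}{2\sigma^{2}}
    \left( \frac{x_{i+1}-x_i}{\epsilon} - f(x_i) \right)^{2} \right],
  \qquad \epsilon = \frac{t-t_0}{N}, \quad x_N = x .
```

Each factor is an exact Gaussian transition density for one Euler step, so the finite-N expression is a well-defined finite-dimensional integral; the "rigorous definition as a limit" that the abstract refers to is precisely the N → ∞ limit of this product.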
Particle flow inspired by Knothe-Rosenblatt transport for nonlinear filters
Author(s):
Fred Daum;
Jim Huang
We derive a new algorithm for particle flow corresponding to Bayes’ rule that was inspired by Knothe-Rosenblatt
transport, which is well known in transport theory. We emphasize that our flow is not Knothe-Rosenblatt transport,
but rather it is a completely different algorithm for particle flow. In particular, we pick a nearly upper triangular
Jacobian matrix, but the meaning of the word “Jacobian” as used here is completely different than used in
Knothe-Rosenblatt transport.
Particle flow with non-zero diffusion for nonlinear filters
Author(s):
Fred Daum;
Jim Huang
We derive several new algorithms for particle flow with non-zero diffusion corresponding to Bayes’ rule. This is unlike all of our previous particle flows, which assumed zero diffusion for the flow corresponding to Bayes’ rule. We emphasize, however, that all of our particle flows have always assumed non-zero diffusion for the dynamical model of the evolution of the state vector in time. Our new algorithm is simple and fast, and it has an especially nice intuitive formula, which is the same as Newton’s method to solve the maximum likelihood estimation (MLE) problem (but for each particle rather than only the MLE), and it is also the same as the extended Kalman filter for the special case of Gaussian densities (but for each particle rather than just the point estimate). All of these new flows apply to arbitrary multimodal densities with smooth nowhere vanishing non-Gaussian densities.
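For context, in the linear-Gaussian case the authors' earlier zero-diffusion "exact" flow has the closed form dx/dλ = A(λ)x + b(λ), which makes the per-particle Newton/Kalman analogy concrete. A sketch with simple Euler integration (this illustrates the earlier zero-diffusion flow as a plausibility check, not the new non-zero-diffusion algorithms):

```python
import numpy as np

# Linear-Gaussian setup: prior N(xbar, P), measurement z = H x + v, v ~ N(0, R)
xbar = np.zeros(1)
P = np.eye(1)
H = np.eye(1)
R = np.eye(1)
z = np.array([2.0])

rng = np.random.default_rng(3)
parts = xbar + rng.standard_normal((20000, 1)) @ np.linalg.cholesky(P).T

steps = 500
I = np.eye(1)
for k in range(steps):
    lam = (k + 0.5) / steps               # midpoint evaluation of the coefficients
    A = -0.5 * P @ H.T @ np.linalg.inv(lam * H @ P @ H.T + R) @ H
    b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T @ np.linalg.inv(R) @ z + A @ xbar)
    parts = parts + (parts @ A.T + b) / steps   # Euler step of dx/dlam = A x + b
# At lam = 1 the particle cloud should match the Kalman posterior N(1.0, 0.5)
```

Migrating the whole cloud through the homotopy from prior to posterior, rather than reweighting particles, is what distinguishes particle flow from importance-sampling particle filters.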
Zero curvature particle flow for nonlinear filters
Author(s):
Fred Daum;
Jim Huang
We derive a new algorithm for computing Bayes’ rule using particle flow that has zero curvature. The flow is computed
by solving a vector Riccati equation exactly in closed form rather than solving a PDE, with a significant reduction in
computational complexity. Our theory is valid for any smooth nowhere vanishing probability densities, including highly
multimodal non-Gaussian densities. We show that this new flow is similar to the extended Kalman filter in the special
case of nonlinear measurements with Gaussian noise. We also outline more general particle flows, including: constant
curvature, geodesic flow, non-constant curvature, piece-wise constant curvature, etc.
Fourier transform particle flow for nonlinear filters
Author(s):
Fred Daum;
Jim Huang
We derive five new algorithms to design particle flow for nonlinear filters using the Fourier transform of the PDE that
determines the flow of particles corresponding to Bayes’ rule. This exploits the fact that our PDE is linear with constant
coefficients. We also use variance reduction and explicit stabilization to enhance robustness of the filter. Our new filter
works for arbitrary smooth nowhere vanishing probability densities.
Sequential testing over multiple stages and performance analysis of data fusion
Author(s):
Gaurav Thakur
We describe a methodology for modeling the performance of decision-level data fusion between different
sensor configurations, implemented as part of the JIEDDO Analytic Decision Engine (JADE). We first discuss
a Bayesian network formulation of classical probabilistic data fusion, which allows elementary fusion
structures to be stacked and analyzed efficiently. We then present an extension of the Wald sequential test
for combining the outputs of the Bayesian network over time. We discuss an algorithm to compute its performance
statistics and illustrate the approach on some examples. This variant of the sequential test involves
multiple, distinct stages, where the evidence accumulated from each stage is carried over into the next one,
and is motivated by a need to keep certain sensors in the network inactive unless triggered by other sensors.
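The classical Wald sequential test that the multi-stage variant extends accumulates a log-likelihood ratio against two thresholds set by the desired error rates. A minimal single-stage sketch for Gaussian observations (parameters and data are illustrative, not drawn from JADE):

```python
import math
import random

def sprt(stream, mu0, mu1, sigma, alpha=1e-4, beta=1e-4):
    """Wald SPRT: H1: N(mu1, sigma^2) vs H0: N(mu0, sigma^2).
    Accumulates the log-likelihood ratio until it crosses a Wald threshold.
    Returns the decision and the number of samples consumed."""
    upper = math.log((1.0 - beta) / alpha)   # cross above -> decide H1
    lower = math.log(beta / (1.0 - alpha))   # cross below -> decide H0
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return ("H1" if llr > 0 else "H0"), n    # truncated: fall back to the sign

random.seed(4)
stream = (random.gauss(1.0, 1.0) for _ in range(100000))   # data truly from H1
decision, n_used = sprt(stream, mu0=0.0, mu1=1.0, sigma=1.0)
```

The multi-stage extension in the paper carries the terminal `llr` of one stage over as the starting evidence of the next, which is what lets downstream sensors stay inactive until an upstream stage triggers them.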
Multisource information fusion for enhanced simultaneous tracking and recognition
Author(s):
Bart Kahler
A layered sensing approach helps to mitigate sensor, target, and environmental operating conditions affecting target tracking and recognition performance. Radar sensors provide standoff sensing capabilities over a range of weather conditions; however, operating conditions such as obscuration can hinder radar target tracking. By using other sensing modalities such as electro-optical (EO) building cameras or eye witness reports, continuous target tracking and recognition may be achieved when radar data is unavailable. Information fusion is necessary to associate independent multisource data to ensure accurate target track and identification is maintained. Exploiting the unique information obtained from multiple sensor modalities with non-sensor sources will enhance vehicle track and recognition performance and increase confidence in the reported results by providing confirmation of target tracks when multiple sources have overlapping coverage of the vehicle of interest. The author uses a fusion performance model in conjunction with a tracking and recognition performance model to assess which combination of information sources produce the greatest gains for both urban and rural environments for a typical sized ground vehicle.
Dempster-Shafer theory and connections to information theory
Author(s):
Joseph S. J. Peri
The Dempster-Shafer theory is founded on probability theory. The entire machinery of probability
theory, and that of measure theory, is at one’s disposal for the understanding and the extension of the
Dempster-Shafer theory. It is well known that information theory is also founded on probability theory.
Claude Shannon developed, in the 1940’s, the basic concepts of the theory and demonstrated their utility in
communications and coding. Shannonian information theory is not, however, the only type of information
theory. In the 1960’s and 1970’s, further developments in this field were made by French and Italian
mathematicians. They developed information theory axiomatically, and discovered not only the Wiener-
Shannon composition law, but also the hyperbolic law and the Inf-law. The objective of this paper is to
demonstrate the mathematical connections between the Dempster Shafer theory and the various types of
information theory. A simple engineering example will be used to demonstrate the utility of the concepts.
Object detection and classification using image moment functions applied to video and imagery analysis
Author(s):
Olegs Mise;
Stephen Bento
This paper proposes an object detection algorithm and a framework based on a combination of Normalized Central
Moment Invariant (NCMI) and Normalized Geometric Radial Moment (NGRM). The developed framework allows
detecting objects with offline pre-loaded signatures and/or using the tracker data in order to create an online object
signature representation. The framework has been successfully applied to target detection and has demonstrated its
performance on real video and imagery scenes.
In order to overcome the implementation constraints of the low-powered hardware, the developed framework uses a
combination of image moment functions and utilizes a multi-layer neural network. The developed framework has been
shown to be robust to false alarms on non-target objects. In addition, optimization for fast calculation of the image
moments descriptors is discussed. This paper presents an overview of the developed framework and demonstrates its
performance on real video and imagery scenes.
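Normalized central moments of the kind underlying NCMI descriptors are straightforward to compute; a sketch on a synthetic binary image, checking the translation invariance that makes such moments useful as signatures (the paper's exact NCMI/NGRM normalizations may differ):

```python
import numpy as np

def normalized_central_moments(img, orders):
    """eta_pq = mu_pq / mu_00^(1 + (p+q)/2): central moments normalized to be
    invariant to translation and uniform scaling."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # intensity centroid
    out = {}
    for p, q in orders:
        mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
        out[(p, q)] = mu / m00 ** (1 + (p + q) / 2)
    return out

# Synthetic blob and a translated copy (no wrap-around at these shifts)
img = np.zeros((64, 64))
img[10:20, 15:35] = 1.0
shifted = np.roll(np.roll(img, 25, axis=0), 12, axis=1)
e1 = normalized_central_moments(img, [(2, 0), (0, 2), (1, 1)])
e2 = normalized_central_moments(shifted, [(2, 0), (0, 2), (1, 1)])
```

Because the centroid is subtracted before the moments are taken, the descriptor vector is unchanged by translation, which is why moment-based signatures can be matched against pre-loaded or tracker-generated templates.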
Multi-parametric data fusion for enhanced object identification and discrimination
Author(s):
Stephen Kupiec;
Vladimir Markov;
Joseph Chavez
Effective fusion of multi-parametric heterogeneous data is essential for better object identification, characterization
and discrimination. In this report we discuss a practical example of fusing the data provided by imaging and non-imaging
electro-optic sensors. The proposed approach allows the processing, integration and interpretation of such
data streams from the sensors. Practical examples of improved accuracy in discriminating similar but non-identical
objects are presented.
A neuromorphic system for object detection and classification
Author(s):
Deepak Khosla;
Yang Chen;
Kyungnam Kim;
Shinko Y. Cheng;
Alexander L. Honda;
Lei Zhang
Unattended object detection, recognition and tracking on unmanned reconnaissance platforms in battlefields and urban
spaces are topics of emerging importance. In this paper, we present an unattended object recognition system that
automatically detects objects of interest in videos and classifies them into various categories (e.g., person, car, truck,
etc.). Our system is inspired by recent findings in visual neuroscience on feed-forward object detection and recognition
pipeline and mirrors it via two main neuromorphic modules: (1) a front-end detection module that combines form- and
motion-based visual attention to search for and detect “integrated” object percepts, as is hypothesized to occur in the
human visual pathways; (2) a back-end recognition module that processes only the detected object percepts through a
neuromorphic object classification algorithm based on multi-scale convolutional neural networks, which can be
efficiently implemented in COTS hardware. Our neuromorphic system was evaluated using a variety of urban area video
data collected from both stationary and moving platforms. The data are quite challenging as they include targets at long
ranges, occurring under variable conditions of illumination and occlusion with high clutter. The experimental results of
our system showed excellent detection and classification performance. In addition, the proposed bio-inspired approach is
good for hardware implementation due to its low complexity and mapping to off-the-shelf conventional hardware.
Machine vision tracking of carrier-deck assets for improved launch safety
Author(s):
Brynmor J. Davis;
Richard W. Kaszeta;
Robert D. Chambers;
Bruce R. Pilvelait;
Patrick J. Magari;
Michael Withers;
David Rossi
We present an automated aircraft tracking system as a tool for improving carrier-deck safety. Using a single video
stream, aircraft are tracked with relation to the deck, enabling the automatic evaluation of deck safety criteria. System
operation involves matching observed image edge features to a calibrated projection of a 3D deck/aircraft model. By
identifying the best-fit model, high accuracy 3D tracking is achieved. Testing with a 1:72-scale model indicates a full-scale
accuracy on the order of 1 foot spatially and 1 degree in aircraft orientation. Further, our edge-matching based
method is insensitive to illumination changes, robust to partial obscuration and highly parallelizable (with preliminary
benchmarking indicating real-time feasibility). Automated aircraft tracking allows improved operational locations for
launch control personnel and/or provides a second-look deck safety evaluation and, as such, represents a significant new
tool for the assurance of carrier deck safety.
A comparison of sensor resolution assessment by human vision versus custom software for Landolt C and triangle resolution targets
Author(s):
Alan R. Pinkus;
David W. Dommett;
H. Lee Task
This paper is the fifth in a series exploring the possibility of using a synthetic observer to assess the resolution of
both real and synthetic (fused) sensors. The previous paper introduced an Automatic Triangle Orientation Detection
Algorithm (ATODA) that was capable of recognizing the orientation of an equilateral triangle used as a resolution
target, which complemented the Automatic Landolt C Orientation Recognition (ALCOR) software developed
earlier. Three different spectral band sensors (infrared, near infrared and visible) were used to collect images that
included both resolution targets and militarily relevant targets at multiple distances. The resolution targets were
evaluated using the two software algorithms described above. For the current study, subjects viewed the same set of
images previously used in order to obtain human-based assessments of the resolutions of these three sensors for
comparison with the automated approaches. In addition, the same set of images contained hand-held target objects
so that human performance in recognizing the targets could be compared to both the automated and human-based
assessment of resolution for each sensor.
Development of a real-world sensor-aided target acquisition model based on human visual performance with a Landolt C
Author(s):
H. Lee Task;
Alan R. Pinkus;
Eric Geiselman
With the growing number of image-producing sensors in different spectral bands it is often desirable to provide the
operational community with an estimation of the level of target acquisition/recognition performance that could be
expected for specific scenarios using these sensors. Many target acquisition/recognition models have been
developed over the decades to try and predict expected human performance under various conditions. Many of
these are relatively complicated and often concentrate on specific aspects, such as search strategies, atmospherics, or
sensor parameters while ignoring other factors. This paper describes the development of a simple, high-level target
acquisition/recognition model for predicting human performance for a particular class of operationally relevant,
time-based scenarios involving sensor-aided viewing. Assumptions and relevant factors considered in developing
the model are discussed and the model, in different forms, is presented. Fundamentally, the model is based on
previously-collected human visual performance data using images of the Landolt C acuity target recorded using a
short-wave infrared sensor, the Johnny Johnson target recognition criteria, and basic scenario parameters. Limited
real-world testing of the model has been accomplished.
Qualitative evaluations and comparisons of six night-vision colorization methods
Author(s):
Yufeng Zheng;
Kristopher Reese;
Erik Blasch;
Paul McManamon
Current multispectral night vision (NV) colorization techniques can produce colorized images that
closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object
classification and reaction times especially for low light conditions. This paper focuses on the qualitative (subjective)
evaluations and comparisons of six NV colorization methods. The multispectral images include visible (Red-Green-
Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based
color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic
matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality
measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. Each
measurement is rated on a scale of 1 to 3, representing low, average, and high quality, respectively.
Specifically, high contrast (a rated score of 3) means an adequate level of brightness and contrast. High detail
represents high clarity of detailed content while maintaining low artifacts. High colorfulness preserves more natural
colors (i.e., closely resembles the daylight image). Overall quality is determined from the NV image compared to the
reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV
images (produced from NIR and LWIR images) are concurrently presented to users along with the reference color
(RGB) image (taken at daytime). A total of 67 subjects passed a screening test (“Ishihara Color Blindness Test”) and
were asked to evaluate the 9-set colorized images. The experimental results showed the quality order of colorization
methods, from best to worst: CBCF > SM > SM-JHM > LUT > JHM > HM. It is anticipated that this work will
provide a benchmark for NV colorization and for quantitative evaluation using an objective metric such as objective
evaluation index (OEI).
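Of the six methods, statistic matching (SM) is the simplest to sketch: each channel of the NV image is shifted and scaled so that its mean and standard deviation match those of the daytime reference. The code below is a generic illustration of this idea, not the authors' implementation; the channel layout and 0-255 value range are assumptions.

```python
import numpy as np

def statistic_matching(nv, ref):
    """Map each channel of a night-vision image onto the first- and
    second-order statistics of a daytime reference image (generic SM
    sketch)."""
    nv = nv.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(nv)
    for c in range(nv.shape[-1]):
        s_mu, s_sd = nv[..., c].mean(), nv[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # standardize the source channel, then re-color it with reference stats
        out[..., c] = (nv[..., c] - s_mu) / (s_sd + 1e-12) * r_sd + r_mu
    return np.clip(out, 0.0, 255.0)
```

After the transfer, each output channel carries the reference image's mean and spread, which is what makes the colorized result statistically resemble the daylight scene.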
Real-time classification of ground from lidar data for helicopter navigation
Author(s):
Ferdinand Eisenkeil;
Tobias Schafhitzel;
Uwe Kühne;
Oliver Deussen
Helicopter pilots often have to deal with bad weather conditions and degraded views. Such situations may decrease the
pilots' situational awareness significantly. The worst-case scenario would be a complete loss of visual reference during
an off-field landing due to brownout or whiteout. In order to increase the pilots' situational awareness, helicopters
nowadays are equipped with different sensors that are used to gather information about the terrain ahead of the
helicopter. Synthetic vision systems are used to capture and classify sensor data and to visualize them on multifunctional
displays or the pilot's head-up display. This requires the input data to be reliably classified into obstacles and
ground.
In this paper, we present a regularization-based terrain classifier. Regularization is a popular segmentation method in
computer vision and used in active contours. For a real-time application scenario with LIDAR data, we developed an
optimization that uses different levels of detail depending on the accuracy of the sensor. After a preprocessing step where
points are removed that cannot be ground, the method fits a shape underneath the recorded point cloud. Once this shape
is calculated, the points below this shape can be distinguished from elevated objects and are classified as ground. Finally,
we demonstrate the quality of our segmentation approach by its application on operational flight recordings. This method
builds a part of an entire synthetic vision processing chain, where the classified points are used to support the generation
of a real-time synthetic view of the terrain as an assistance tool for the helicopter pilot.
High-resolution land cover classification using low resolution global data
Author(s):
Mark J. Carlotto
A fusion approach is described that combines texture features from high-resolution panchromatic imagery with land
cover statistics derived from co-registered low-resolution global databases to obtain high-resolution land cover maps.
The method does not require training data or any human intervention. We use an MxN Gabor filter bank consisting of
M=16 oriented bandpass filters (0-180°) at N resolutions (3-24 meters/pixel). The size range of these spatial filters is
consistent with the typical scale of manmade objects and patterns of cultural activity in imagery. Clustering reduces the
complexity of the data by combining pixels that have similar texture into clusters (regions). Texture classification
assigns a vector of class likelihoods to each cluster based on its textural properties. Classification is unsupervised and
accomplished using a bank of texture anomaly detectors. Class likelihoods are modulated by land cover statistics derived
from lower resolution global data over the scene. Preliminary results from a number of Quickbird scenes show our
approach is able to classify general land cover features such as roads, built up area, forests, open areas, and bodies of
water over a wide range of scenes.
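The M×N oriented bandpass bank can be sketched with standard Gabor kernels; the kernel size, σ, and frequency values below are illustrative stand-ins for the 16-orientation, multi-resolution design described above, not the paper's exact parameters.

```python
import numpy as np

def gabor_kernel(freq, theta, size=15, sigma=3.0):
    """Real part of a 2-D Gabor kernel: a Gaussian-windowed cosine
    oriented at angle theta with spatial frequency freq (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))  # Gaussian envelope
    return env * np.cos(2.0 * np.pi * freq * xr)

def gabor_bank(n_orient=16, freqs=(1 / 3, 1 / 6, 1 / 12, 1 / 24)):
    """Bank of oriented bandpass kernels over 0-180 degrees, one set per
    frequency (i.e., per resolution)."""
    thetas = np.arange(n_orient) * np.pi / n_orient
    return [gabor_kernel(f, t) for f in freqs for t in thetas]
```

Convolving the panchromatic image with each kernel and taking local energy yields the texture feature vector that the clustering and anomaly-detection stages operate on.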
Fusion of multispectral and stereo information for unsupervised target detection in VHR airborne data
Author(s):
Dirk C. Borghys;
Mahamadou Idrissa;
Michal Shimoni;
Ola Friman;
Maria Axelsson;
Mikael Lundberg;
Christiaan Perneel
Very high resolution multispectral imaging has reached a high level of reliability and accuracy for target detection and
classification. In an urban scene, however, the complexity increases, making the detection and identification of small
objects difficult. One way to overcome this difficulty is to combine spectral information with 3D data. A set of (very
high resolution) airborne multispectral image sequences was acquired over the urban area of Zeebrugge, Belgium. The
data consist of three bands in the visible (VIS) region, one band in the near infrared (NIR) range and two bands in the
mid-wave infrared (MWIR) region. Images are acquired at a frame rate of 1/2 frame per second for the VIS and
NIR bands and 2 frames per second for the MWIR bands. The sensors have a decimetric spatial resolution. The
combination of frame rate with flight altitude and speed results in a large overlap between successive images. The
current paper proposes a scheme to combine 3D information from along-track stereo, exploiting the overlap between
images on one hand and spectral information on the other hand for unsupervised detection of targets. For the extraction
of 3D information, the disparity map between different image pairs is determined automatically using an MRF-based
method. For the unsupervised target detection, an anomaly detection algorithm is applied. Different methods for inserting
the obtained 3D information into the target detection scheme are discussed.
Combining structured light and ladar for pose tracking in THz sensor management
Author(s):
Philip Engström;
Maria Axelsson;
Mikael Karlsson
Stand-off 3D THz imaging to detect concealed threats is currently under development. The technology can provide high
resolution 3D range data of a passing subject, showing layers of clothing and any concealed items. However,
because it is a scanning sensor technology with a narrow field of view, the subject's pose and position need to be
accurately tracked in real time to focus the system and map the imaged THz data to specific body parts. Structured light
is a technique to obtain 3D range information. It is, for example, used in the Microsoft Kinect for pose tracking of game
players in real time. We demonstrate how structured light can contribute to a THz sensor management system and track
subjects in real time. The main advantage of structured light is its simplicity; its disadvantages are sensitivity to
lighting conditions and material properties, as well as relatively low accuracy. Time-of-flight laser scanning is a
technique that complements structured light well: its accuracy is usually much higher and it is less sensitive to lighting
conditions. We show that by combining the techniques it is possible to create a robust real-time pose tracking system for
THz sensor management. We present a concept system based on the Microsoft Kinect and a SICK LMS-511 laser
scanner. The laser scanner is used for 2D tracking of the subjects; this tracking is then used to initialize and validate the
Microsoft Kinect pose tracking. We have evaluated the sensors individually in both static and dynamic scenes and
present their advantages and drawbacks.
Human activity recognition based on human shape dynamics
Author(s):
Zhiqing Cheng;
Stephen Mosher;
Huaining Cheng;
Timothy Webb
Human activity recognition based on human shape dynamics was investigated in this paper. The shape dynamics
describe the spatial-temporal shape deformation of a human body during its movement and thus provide important
information about the identity of a human subject and the motions performed by the subject. The dynamic shapes of four
subjects in five activities (digging, jogging, limping, throwing, and walking) were created via 3-D motion replication.
The Paquet Shape Descriptor (PSD) was used to describe subject shapes in each frame. The principal component
analysis was performed on the calculated PSDs and principal components (PCs) were used to characterize PSDs. The
PSD calculation was then reasonably approximated by its significant projections in the eigen-space formed by PCs and
represented by the corresponding projection coefficients. As such, the dynamic human shapes for each activity were
described by these projection coefficients, which, in turn, along with their derivatives, were used to form the feature
vectors (attribute sets) for activity classification. Data mining technology was employed with six classification methods.
Seven attribute sets were evaluated, with high classification accuracy attained for most of them. The results from
this investigation illustrate the great potential of human shape dynamics for activity recognition.
Seismic signature analysis for discrimination of people from animals
Author(s):
Thyagaraju Damarla;
Asif Mehmood;
James M. Sabatier
Cadence analysis has been the main focus for discriminating between the seismic signatures of people and animals.
However, cadence analysis fails when multiple targets are generating the signatures. We analyze the mechanism
of human walking and the signature generated by a human walker, and compare it with the signature generated
by a quadruped. We develop Fourier-based analysis to differentiate the human signatures from the animal
signatures. We extract a set of basis vectors to represent the human and animal signatures using non-negative
matrix factorization, and use them to separate and classify both the targets. Grazing animals such as deer, cows,
etc., often produce sporadic signals as they move around from patch to patch of grass and one must characterize
them so as to differentiate their signatures from signatures generated by a horse steadily walking along a path.
These differences in the signatures are used in developing a robust algorithm to distinguish the signatures of
animals from humans. The algorithm is tested on real data collected in a remote area.
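The basis-vector extraction can be illustrated with the standard Lee-Seung multiplicative updates for non-negative matrix factorization; V would hold non-negative spectral feature columns, and this generic routine stands in for the authors' factorization rather than reproducing it.

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Factor V (non-negative, n x m) as W @ H with W (n x r) holding the
    basis vectors and H (r x m) the activations, using Lee-Seung
    multiplicative updates. Small constants avoid division by zero."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis vectors
    return W, H
```

Projecting a new signature onto the learned basis (the columns of W) gives the activation pattern used to separate and classify human versus animal sources.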
Anomalous human behavior detection: an adaptive approach
Author(s):
Coen van Leeuwen;
Arvid Halma;
Klamer Schutte
Detection of anomalies (outliers or abnormal instances) is an important element in a range of applications such as
fault, fraud, and suspicious behavior detection, and knowledge discovery. In this article we propose a new method for
anomaly detection and test its ability to detect anomalous behavior in videos from DARPA's Mind's
Eye program, containing a variety of human activities. In this semi-unsupervised task a set of normal instances
is provided for training, after which unknown abnormal behavior has to be detected in a test set. The features
extracted from the video data have high dimensionality, are sparse and inhomogeneously distributed in the
feature space making it a challenging task. Given these characteristics a distance-based method is preferred, but
choosing a threshold to classify instances as (ab)normal is non-trivial. Our novel approach, the Adaptive Outlier
Distance (AOD) is able to detect outliers in these conditions based on local distance ratios. The underlying
assumption is that the local maximum distance between labeled examples is a good indicator of the variation in
that neighborhood, and therefore a local threshold will result in more robust outlier detection. We compare our
method to existing state-of-the-art methods such as the Local Outlier Factor (LOF) and the Local Distance-based
Outlier Factor (LDOF). The results of the experiments show that our novel approach improves the quality of
the anomaly detection.
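A minimal sketch of the local-distance-ratio idea: a test instance is flagged when its distance to the nearest normal training instance exceeds the largest distance inside that instance's own neighborhood. This is one plausible reading of the Adaptive Outlier Distance; the paper's exact formula may differ.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature tuples."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def aod_is_outlier(x, train, k=3):
    """Flag x as anomalous if its distance to the nearest normal training
    instance exceeds the maximum distance within that instance's own
    k-neighborhood (a local, not global, threshold)."""
    n1 = min(train, key=lambda t: dist(x, t))      # nearest normal instance
    d_x = dist(x, n1)
    # local scale: the largest distance among n1's k nearest neighbors
    neigh = sorted((t for t in train if t is not n1),
                   key=lambda t: dist(n1, t))[:k]
    local_max = max(dist(n1, t) for t in neigh)
    return d_x > local_max
```

Because the threshold is derived from each neighborhood's own spread, dense and sparse regions of the feature space are judged on their own scale, which is the robustness argument made above.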
Behavioral profiling in CCTV cameras by combining multiple subtle suspicious observations of different surveillance operators
Author(s):
Henri Bouma;
Jack Vogels;
Olav Aarts;
Chris Kruszynski;
Remco Wijn;
Gertjan Burghouts
Camera surveillance and recognition of deviant behavior is important for the prevention of criminal incidents. A single
observation of subtle deviant behavior of an individual may sometimes be insufficient to merit a follow-up action.
Therefore, we propose a method that can combine multiple weak observations to make a strong indication that an
intervention is required. We analyze the effectiveness of combining multiple observations/tags of different operators, the
effects of the tagging instruction these operators received (many tags for weak signals or few tags for strong signals), and
the performance of using a semi-automatic system for combining the different observations. The results show that the
method can be used to increase hits (detecting criminals) whilst reducing false alarms (bothering innocent passers-by).
Invariant unsupervised segmentation of dismounts in depth images
Author(s):
Nathan S. Butler;
Richard L. Tutwiler
This paper describes a scene-invariant method for the unsupervised segmentation of dismounts in depth images. The method can be broken into two parts: ground plane detection and spatial segmentation. The former is accomplished by using RANSAC (RANdom SAmple Consensus) to identify a ground plane in the scene. After contrast enhancement, the image is "sliced" into regions. Each classified region is processed by a Roberts edge detector in order to separate the objects. Each output is further processed by a bank of shape filters that extract the human form.
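The RANSAC ground-plane step can be sketched as follows: sample three points, fit a plane, count inliers, and keep the best-supported model. The iteration count and inlier tolerance below are illustrative, not the paper's settings.

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three 3-D points as (unit normal, d) with n.x + d = 0;
    returns None for degenerate (collinear) samples."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None
    n = tuple(c / norm for c in n)
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_ground_plane(points, n_iter=200, tol=0.1, seed=0):
    """Repeatedly fit a plane to 3 random points and keep the model
    supported by the most inliers (point-to-plane distance < tol)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best, best_inliers
```

Points near the winning plane are labeled ground; the remainder feed the spatial-segmentation stage.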
Acoustic signature recognition technique for Human-Object Interactions (HOI) in persistent surveillance systems
Author(s):
Amjad Alkilani;
Amir Shirkhodaie
Handling, manipulation, and placement of objects, hereon called Human-Object Interaction (HOI), in the environment
generate sounds. Such sounds are readily identifiable by the human hearing. However, in the presence of background
environment noises, recognition of minute HOI sounds is challenging, though vital for improvement of multi-modality
sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can be used as
precursors to the detection of pertinent threats that other sensor modalities may otherwise fail to detect. In this paper, we
present a robust method for detection and classification of HOI events via clustering of extracted features from training
of HOI acoustic sound waves. In this approach, salient sound events are first identified and segmented from the
background via a sound energy tracking method. Upon this segmentation, the frequency spectral pattern of each sound
event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the
training feature space, a Principal Component Analysis (PCA) technique is employed. To expedite fast classification of test
feature vectors, kd-tree and Random Forest classifiers are trained for rapid classification of the sound waves. Each
classifier employs a different similarity distance matching technique for classification. Performance evaluations of the
classifiers are compared for classification of a batch of training HOI acoustic signatures. Furthermore, to facilitate
semantic annotation of acoustic sound events, a scheme based on Transducer Markup Language (TML) is proposed.
The results demonstrate the proposed approach is both reliable and effective, and can be extended to future PSS
applications.
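The reduction-then-lookup stages can be sketched as PCA via SVD followed by nearest-neighbor matching in the reduced space; brute-force search stands in here for the kd-tree (it gives the same result on small sets), and the feature values in the usage are hypothetical.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD of the mean-centred feature matrix; returns the mean
    and the top principal directions (rows)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def nearest_class(z, train_z, labels):
    """Brute-force 1-NN lookup in the reduced space -- a stand-in for
    the kd-tree query (identical result, simpler code)."""
    d = np.linalg.norm(train_z - z, axis=1)
    return labels[int(np.argmin(d))]
```

Training vectors are projected once with `(X - mu) @ comps.T`; each test vector is projected the same way and assigned the label of its nearest reduced-space neighbor.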
Time series prediction of nonlinear and nonstationary process modeling for ATR
Author(s):
Andre Sokolnikov
An algorithm is proposed for nonlinear and non-stationary processes concerning ATR. The general approach is to
decompose a complex task into multiple domains in space and time based on predictability of the object modification
dynamics. The model is composed of multiple modules, each of which consists of a state prediction model and
correctional multivariate system. A prediction error function is used to weight the outputs of multiple hierarchical levels.
A multi-attribute based methodology for vehicle detection and identification
Author(s):
Vinayak Elangovan;
Bashir Alsaidi;
Amir Shirkhodaie
Robust vehicle detection and identification is required for intelligent persistent surveillance systems. In this paper, we
present a Multi-attribute Vehicle Detection and Identification technique (MVDI) for detection and classification of
stationary vehicles. The proposed model uses a supervised Hamming Neural Network (HNN) for taxonomy of the shape of
the vehicle. Vehicle silhouette features are employed for training the HNN from a large array of training vehicle
samples of different types, scales, and color variations. Invariant vehicle silhouette attributes are used as features for
training the HNN, which uses an internal Hamming distance and shape features to determine the degree of
similarity of a test vehicle against those it is selectively trained with. Upon detection of the class of the vehicle, the other
vehicle attributes such as: color and orientation are determined. For vehicle color detection, provincial regions of the
vehicle body are used for matching color of the vehicle. For the vehicle orientation detection, the key structural features
of the vehicle are extracted and subjected to classification based on color tone, geometrical shape, and tire region
detection. The experimental results show the technique is promising and robust for the detection and identification
of vehicles based on their multi-attribute features. Furthermore, this paper demonstrates the importance of vehicle
attribute detection for the identification of Human-Vehicle Interaction events.
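At its core, the HNN matching layer reduces to minimum-Hamming-distance prototype matching, which can be sketched as follows; the binary silhouette encoding is hypothetical.

```python
def hamming_distance(a, b):
    """Number of positions at which two binary feature vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def classify_silhouette(features, prototypes):
    """Return the class label whose stored prototype is closest in Hamming
    distance -- the matching layer of a Hamming network. `prototypes`
    maps class labels to binary silhouette-feature tuples."""
    return min(prototypes,
               key=lambda label: hamming_distance(features, prototypes[label]))
```

The distance of the winning prototype can also serve as a confidence score: a large minimum distance signals a vehicle type the network was not trained on.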
A cross-spectral variation of the cross-ambiguity function
Author(s):
D. J. Nelson
We present a new cross-spectral variation of the Cross-Ambiguity Function (CAF) and demonstrate use of the CS-CAF in obtaining improved Frequency Difference of Arrival (FDOA) estimates. Unlike the conventional CAF process, which treats the FDOA of two signals received by moving receivers as approximately constant over a short observation time, the CS-CAF models the FDOA as a slowly varying continuous function. Under the CS-CAF model, we apply cross-spectral estimation methods to estimate and track the instantaneous FDOA of the received signals. This has two important advantages: the cross-spectral frequency estimation methods are extremely accurate, and by modeling the FDOA as a continuous function of time, we resolve the issue of assigning an event time to the estimated FDOA. In addition, by recovering an FDOA component from the received signals, we may apply Lagrange interpolation to track the instantaneous phase of the FDOA component, enabling an even more accurate estimate of the instantaneous FDOA.
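For reference, the conventional constant-FDOA CAF surface that the CS-CAF refines can be computed directly: for each candidate delay, take the FFT over time of one signal times the conjugate of the delayed other; the surface peak locates the delay/Doppler pair. This sketch uses circular shifts (adequate for periodic test signals; windowing is needed in practice) and does not reproduce the cross-spectral tracker itself.

```python
import numpy as np

def caf_surface(s1, s2, max_lag):
    """Conventional cross-ambiguity surface |CAF(tau, nu)| for integer
    delays tau in [-max_lag, max_lag] and FFT-bin frequency offsets nu.
    np.roll makes the delay circular, so this is exact only for
    periodic signals."""
    n = len(s1)
    lags = range(-max_lag, max_lag + 1)
    surf = np.empty((len(lags), n), dtype=complex)
    for i, tau in enumerate(lags):
        prod = s1 * np.conj(np.roll(s2, tau))
        surf[i] = np.fft.fft(prod)       # sweep over all frequency offsets
    return np.abs(surf), list(lags)
```

With this sign convention, a copy of `s1` delayed by D samples and shifted by F bins peaks at lag -D and FFT bin (n - F) mod n.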
Analysis of angle of arrival estimation at HF using an ensemble of structurally integrated antennas
Author(s):
Clair F. Corbin;
Geoffrey A. Akers
The number of elements in a nonplanar antenna array, their respective farfield radiation patterns, the relative
spacing between the elements, and the signal processing approach determine the angle-of-arrival (AOA) estimation
performance for a direction finding (DF) system. Designing an airborne DF system becomes challenging
as the wavelengths of the signals of interest become large with respect to the host aircraft. Given an ensemble
of structurally integrated (SI) antennas designed using a discrete number of feed points, this paper presents an
analysis of AOA estimation as a function of the number of SI feed elements and their respective locations on a
large, three-dimensional aircraft model at HF (2-32 MHz). Empirical results for AOA estimation errors versus
the number and physical location of feed points are presented using the maximum likelihood estimator
for 4 and 11 MHz signals of interest.
Summary of human social, cultural, behavioral (HSCB) modeling for information fusion panel discussion
Author(s):
Erik Blasch;
John Salerno;
Ivan Kadar;
Shanchieh Jay Yang;
Laurie Fenstermacher;
Mica Endsley;
Lynne Grewe
During the SPIE 2012 conference, panelists convened to discuss “Real world issues and challenges in Human
Social/Cultural/Behavioral modeling with Applications to Information Fusion.” Each panelist presented their current
trends and issues. The panel had agreement on advanced situation modeling, working with users for situation awareness
and sense-making, and HSCB context modeling in focusing research activities. Each panelist added different perspectives
based on the domain of interest such as physical, cyber, and social attacks from which estimates and projections can be
forecasted. Also, additional techniques were addressed such as interest graphs, network modeling, and variable length
Markov Models. This paper summarizes the panelists' discussions to highlight the common themes and the related
contrasting approaches to the domains in which HSCB applies to information fusion applications.
Pattern of life from WAMI objects tracking based on visual context-aware tracking and infusion network models
Author(s):
Jianjun Gao;
Haibin Ling;
Erik Blasch;
Khanh Pham;
Zhonghai Wang;
Genshe Chen
With the emergence of long lasting surveillance systems, e.g., full motion video (FMV) networks and wide area motion
imagery (WAMI) sensors, extracting targets’ long term pattern of life over a day becomes possible. In this paper, we
present a framework for extracting the pattern of life (POL) of targets from WAMI video. We first apply a context-aware
multi-target tracker (CAMT) to track multiple targets in the WAMI video and obtain the targets' tracklets, traces, and
locations of interest from the surveillance information extracted from the targets' long-term trajectories. Then, entity networks
propagated over time are constructed from the targets' tracklets, traces, and locations of interest. Finally, the entity
network is analyzed using network retrieval techniques to extract the POL of targets of interest.
Learning and detecting coordinated multi-entity activities from persistent surveillance
Author(s):
Georgiy Levchuk;
Matt Jacobsen;
Caitlin Furjanic;
Aaron Bobick
In this paper, we present our enhanced model of multi-entity activity recognition, which operates on person
and vehicle tracks, converts them into motion and interaction events, and represents activities via multiattributed
role networks encoding spatial, temporal, contextual, and semantic characteristics of coordinated
activities. Our model is flexible enough to capture variations of behaviors, and is used both for learning
repetitive activity patterns in a semi-supervised manner and for detecting activities in data with large ambiguity
and high ratio of irrelevant to relevant tracks and events. We demonstrate our models using activities captured
in CLIF persistent wide area motion data collections.
Consumer-oriented social data fusion: controlled learning in social environments, social advertising, and more
Author(s):
L. Grewe
This paper explores the current practices in social data fusion and analysis as it applies to consumer-oriented applications
in a slew of areas including business, economics, politics, sciences, medicine, education and more. A categorization of
these systems is proposed and contributions to each area are explored preceded by a discussion of some special issues
related to social data and networks. From this work, future paths of consumer-based social data analysis research and
current outstanding problems are discovered.
Influence versus intent for predictive analytics in situation awareness
Author(s):
Biru Cui;
Shanchieh Jay Yang;
Ivan Kadar
Predictive analytics in situation awareness requires an element to comprehend and anticipate potential adversary activities that might occur in the future. Most work in high level fusion or predictive analytics utilizes machine learning, pattern mining, Bayesian inference, and decision tree techniques to predict future actions or states. The emergence of social computing in broader contexts has drawn interest in bringing hypotheses and techniques from social theory to algorithmic and computational settings for predictive analytics. This paper aims at answering the question of how the influence and attitude (sometimes interpreted as intent) of adversarial actors can be formulated and computed algorithmically, as a higher level fusion process to provide predictions of future actions.
The challenges in this interdisciplinary endeavor include drawing existing understanding of influence and attitude in both social science and computing fields, as well as the mathematical and computational formulation for the specific context of situation to be analyzed. The study of ‘influence’ has resurfaced in recent years due to the emergence of social networks in the virtualized cyber world. Theoretical analysis and techniques developed in this area are discussed in this paper in the context of predictive analysis. Meanwhile, the notion of intent, or
‘attitude’ using social theory terminologies, is a relatively uncharted area in the computing field. Note that a key objective of predictive analytics is to identify impending/planned attacks so their ‘impact’ and ‘threat’ can be prevented. In this spirit, indirect and direct observables are drawn and derived to infer the influence network and attitude to predict future threats.
This work proposes an integrated framework that jointly assesses adversarial actors' influence network and their attitudes as a function of past actions and action outcomes. A preliminary set of algorithms is developed and tested using the Global Terrorism Database (GTD). Our results reveal the benefits of performing joint predictive analytics with both attitude and influence. At the same time, we discover significant challenges in deriving influence and attitude from indirect observables for diverse adversarial behavior. These observations warrant further investigation of the optimal use of influence and attitude for predictive analytics, as well as the potential inclusion of other environmental or capability elements for the actors.
Infrared small target detection technology based on OpenCV
Author(s):
Lei Liu;
Zhijian Huang
Accurate and fast detection of dim infrared (IR) targets is of great importance for infrared precision guidance, early
warning, video surveillance, etc. In this paper, some basic principles and the implementation flow charts of a series of
algorithms for target detection are described. These algorithms are the traditional two-frame difference method, an improved
three-frame difference method, a fused background-estimation and frame-difference method, and background construction
with a neighborhood mean method. Building on the above work, an infrared target detection software platform
developed with OpenCV and MFC is introduced. Three kinds of tracking algorithms are integrated in this software. In
order to explain the software clearly, its framework and functions are described in this paper. Finally, experiments
are performed on real-life IR images. The whole algorithm implementation process and the results are analyzed, and
the detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has
satisfactory detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be used for
real-time detection.
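The three-frame difference stage can be sketched in a few lines; this is the generic form, and the paper's improved variant adds further processing on top of it.

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next, thresh=20):
    """Generic three-frame difference: AND the two successive absolute
    difference maps so that only pixels changing in both intervals --
    i.e. the current position of a moving target -- survive. Casting to
    int32 avoids uint8 wraparound in the subtraction."""
    d1 = np.abs(f_curr.astype(np.int32) - f_prev.astype(np.int32))
    d2 = np.abs(f_next.astype(np.int32) - f_curr.astype(np.int32))
    return (d1 > thresh) & (d2 > thresh)
```

Unlike the two-frame difference, the AND suppresses the "ghost" left at the target's previous position, which is why the improved variant is preferred for dim, fast-moving targets.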
Simultaneous optimization by simulation of iterative deconvolution and noise removal for non-negative data
Author(s):
Abolfazl M. Amini;
George E. Ioup;
Juliette W. Ioup
This paper introduces a method by which one can find the optimum iteration numbers for noise removal and
deconvolution of sampled data. The method employs the mean squared error, which is the square of the difference
between the deconvolution result and the input, for optimization. As an example of the iterative methods of noise
removal and deconvolution, the always convergent method of Ioup is used for the simultaneous optimization by
simulation research presented in this paper. This method is applied to achieve optimization for two Gaussian
impulse response functions, one narrow (rapidly converging) and the other wide (slowly converging). The input
function used consists of three narrow peaks selected to give some overlap after convolution with the Gaussian
impulse response function. Normally distributed noise is added to the convolution of the input with the impulse
response function. A range of signal-to-noise ratios is used to optimize the always convergent iterations for both of
these Gaussians. For the narrow Gaussian, 15 signal-to-noise ratio cases are studied, while for the wide Gaussian 11
signal-to-noise ratio cases are considered. To achieve statistically reliable results, 50 noisy data sets are generated
for each signal-to-noise ratio case. For a given signal-to-noise ratio case the optimum deconvolution and noise
removal iteration numbers are found and tabulated. The tabulated results are given in tables one through three. Once
these optimum numbers are found they can be used in an equivalent window in the Fourier transform domain,
although the non-negativity constraint can only be applied in the function domain.
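The optimization-by-simulation loop can be sketched with a classic Van Cittert iteration standing in for Ioup's always convergent method (which is not reproduced here); because the simulated input is known, the mean squared error is computable at every iteration and the optimum iteration number is simply its argmin.

```python
import numpy as np

def vc_step(x, data, psf):
    """One Van Cittert update x <- x + (data - psf * x), with the
    non-negativity constraint applied in the function domain."""
    return np.clip(x + (data - np.convolve(x, psf, mode='same')), 0.0, None)

def optimum_iterations(true_input, psf, noisy_data, max_iter=100):
    """Run the iteration on simulated noisy data and return the iteration
    count that minimizes the MSE against the known input, together with
    that minimum MSE."""
    x = noisy_data.copy()
    best_k, best_mse = 0, np.mean((x - true_input) ** 2)
    for k in range(1, max_iter + 1):
        x = vc_step(x, noisy_data, psf)
        mse = np.mean((x - true_input) ** 2)
        if mse < best_mse:
            best_k, best_mse = k, mse
    return best_k, best_mse
```

Averaging the optimum iteration numbers over many noise realizations at each SNR reproduces the kind of tabulated optimum described above.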
Simultaneous optimization by simulation of iterative deconvolution and noise removal to improve the resolution of impulsive inputs
Author(s):
Abolfazl M. Amini;
George E. Ioup;
Juliette W. Ioup
This paper introduces a method for finding the optimum iteration numbers for noise removal and
deconvolution of sampled data. The method optimizes the mean squared error, i.e., the mean of the pointwise squared
difference between the deconvolution result and the input. The always convergent iterative
deconvolution and noise removal methods of Ioup are used for the simultaneous optimization by simulation research
presented in this paper. This method is applied to achieve optimization for a seismic wavelet impulse response
function. The optimized always convergent results are compared to those of least squares inverse filtering and the
reblurring procedure of Kawata and Ichioka. The input data used is a spike train with various separations to give a
calibrated measure of resolution. A range of signal-to-noise ratios (SNRs) is used in the optimization procedure. No
noise removal is applied prior to unfolding for the reblurring procedure and the least squares inverse filtering
methods. To achieve statistically reliable results, 50 noisy data sets are generated for each SNR case for the always
convergent method, and 10 noisy cases for the reblurring procedure and the least squares inverse filtering techniques.
For a given SNR case, the average mean squared error and the average optimum deconvolution and noise
removal iteration numbers are found and tabulated. The tabulated results are plotted versus the average SNR. Once
these optimum numbers are found, they can be used in an equivalent window in the Fourier transform domain.
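For comparison, the reblurring procedure of Kawata and Ichioka re-convolves the residual with the time-reversed impulse response before applying the correction, which makes the iteration convergent (it is a Landweber-type iteration). A minimal noiseless sketch on a spike train — the spike positions and separations below are illustrative, not taken from the paper:

```python
import numpy as np

def reblur_deconvolve(g, h, n_iter):
    # Reblurring procedure of Kawata and Ichioka (Landweber-type
    # iteration): the residual is convolved with the time-reversed
    # kernel before the correction, which guarantees convergence.
    h_rev = h[::-1]
    f = g.copy()
    for _ in range(n_iter):
        residual = g - np.convolve(f, h, mode="same")
        f = f + np.convolve(residual, h_rev, mode="same")
    return f

# Spike train with varying separations as a calibrated resolution test.
f_true = np.zeros(256)
f_true[[40, 46, 100, 110, 170, 185]] = 1.0
x = np.arange(-15, 16)
h = np.exp(-0.5 * (x / 3.0) ** 2)
h /= h.sum()
g = np.convolve(f_true, h, mode="same")

# 200 iterations sharpen the blurred peaks back toward the spikes.
f_hat = reblur_deconvolve(g, h, 200)
```

With noisy data the same iteration eventually amplifies noise, which is why the comparison above runs it without prior noise removal and with a fixed iteration count.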
Self-adaptive characteristics segmentation optimized algorithm of weld defects based on flooding
Author(s):
Changying Dang;
Jianmin Gao;
Zhao Wang;
Fumin Chen
To improve the accuracy and efficiency of weld defect segmentation in automatic radiographic nondestructive
testing and evaluation (NDT&E), an effective self-adaptive weld defect segmentation algorithm based on flooding has
been developed. First, defect feature points are extracted from the scale space of the
radiographic films. Based on the information at these defect points, the seed points and seed domains for defect
discrimination, in which the defect segmentation seeds are searched, are determined adaptively. Then, exploiting the
sparsity of weld defects and the canyon-like appearance of defect regions in a 3D topographic map, segmentation
proceeds by analogy with drip-watering and water flooding. The flooding is carried out with a line-flooding algorithm
in which water starts from the defect seed points and flows to the neighboring regions in order. Based on the change
in flooded area and the rate at which the flooding level ascends, the defect segmentation threshold values are
determined and the weld defects are segmented from the radiographic films. Finally, comparative experiments were
carried out against the watershed segmentation algorithm and the background subtraction segmentation algorithm.
The experimental results confirm that the proposed algorithm clearly improves the accuracy and efficiency of weld
defect segmentation.
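The adaptive-threshold idea — raise the water level step by step from a defect seed and stop just before the flooded area suddenly spills out of the defect "canyon" — can be sketched with a simple queue-based flooding. Gray level is treated as terrain height with a 4-connected neighborhood; the unit step size, level range, and largest-relative-jump spill criterion are simplifying assumptions, not the paper's exact line-flooding algorithm.

```python
import numpy as np
from collections import deque

def flood_segment(img, seed, max_levels=50):
    # Flood from a defect seed point: the water level rises one gray
    # level per step, and the mask of reachable pixels at or below the
    # level is recorded at every step.
    seed_val = img[seed]
    areas, masks = [], []
    for step in range(1, max_levels + 1):
        level = seed_val + step          # current water level
        mask = np.zeros(img.shape, bool)
        mask[seed] = True
        q = deque([seed])
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and not mask[rr, cc] and img[rr, cc] <= level):
                    mask[rr, cc] = True
                    q.append((rr, cc))
        areas.append(mask.sum())
        masks.append(mask)
    # Adaptive threshold: the largest relative jump in flooded area
    # marks the level at which water spills out of the defect canyon;
    # the mask just before that jump is the segmented defect.
    areas = np.asarray(areas)
    growth = np.diff(areas) / np.maximum(areas[:-1], 1)
    return masks[int(np.argmax(growth))]

# Illustrative check: a uniform dark 5x5 "defect" on a brighter background.
img = np.full((15, 15), 100.0)
img[5:10, 5:10] = 50.0
seg = flood_segment(img, (7, 7))   # seed inside the defect
```

Re-flooding from scratch at every level keeps the sketch short; a production version would grow the mask incrementally as the level rises.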
Intrusion detection on oil pipeline right of way using monogenic signal representation
Author(s):
Binu M. Nair;
Varun Santhaseelan;
Chen Cui;
Vijayan K. Asari
We present an object detection algorithm to automatically detect and identify possible intrusions, such as construction vehicles and equipment, in regions designated as the pipeline right-of-way (ROW) from high-resolution aerial imagery. The pipeline industry has buried millions of miles of oil pipelines throughout the country, and these regions are under constant threat from unauthorized construction activities. We propose a multi-stage framework that uses a pyramidal template matching scheme in the local phase domain, taking a single high-resolution training image to classify a construction vehicle. The detection algorithm makes use of the monogenic signal representation to extract local phase information. Computing the monogenic signal from a two-dimensional object region enables us to separate the local phase information (structural details) from the local energy (contrast), thereby achieving illumination invariance. The first stage performs local phase based template matching, using only a single high-resolution training image, in a local region at multiple scales. Then, using local phase histogram matching, the orientation of the detected region is determined, and a voting scheme assigns a weight to the resulting clusters. The final stage selects clusters based on the number of votes attained and, using the histogram of oriented phase feature descriptor, locates the object at the correct orientation and scale. The algorithm is successfully tested on four different datasets containing imagery with varying image resolution and object orientation.
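The phase/energy split that underlies the illumination invariance can be computed with a Fourier-domain Riesz transform. A minimal sketch — the bandpass filtering a full monogenic implementation would apply is omitted, and the mean is removed as a crude stand-in for the even (filtered) component:

```python
import numpy as np

def monogenic_phase(img):
    # Monogenic signal via the Riesz transform in the Fourier domain:
    # the kernel (-i*u/|w|, -i*v/|w|) gives the two odd components.
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                 # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))
    even = img - img.mean()            # crude zero-mean even component
    # Local phase (structure) and local energy (contrast).
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), even)
    energy = np.sqrt(even ** 2 + r1 ** 2 + r2 ** 2)
    return phase, energy
```

Scaling the image contrast changes the local energy but leaves the local phase untouched, which is exactly the property the template matcher relies on.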
Optimising the use of hyperspectral and multispectral data for regional crop classification
Author(s):
Li Ni;
Bing Zhang;
Lianru Gao;
Shanshan Li;
Yuanfeng Wu
Optical remotely sensed data, especially hyperspectral data, have emerged as the most useful data source for regional
crop classification. Hyperspectral data contain fine spectral detail, but their spatial coverage is narrow. Multispectral
data may not permit unique identification of crop endmembers because of their coarse spectral resolution, but they do
provide broad spatial coverage. This paper proposes a multisensor analysis method that exploits the strengths of both
data types, improving multispectral classification with multispectral signatures converted from hyperspectral
signatures in the overlap regions. Full-scene crop mapping with multispectral data is implemented using these
converted signatures and SVM classification. The accuracy assessment shows that the proposed classification method
is promising.
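The signature-conversion step — collapsing narrow hyperspectral bands into broad multispectral bands over the region where the two sensors overlap — can be sketched as band-averaging resampling. The wavelength grid and band edges below are illustrative, and a flat spectral response is assumed where a real sensor's relative spectral response curve would serve as the weights:

```python
import numpy as np

def resample_to_multispectral(hyper_spectra, hyper_wavelengths, ms_bands):
    # Convert hyperspectral signatures (rows) to multispectral ones by
    # averaging the narrow hyperspectral bands falling inside each broad
    # multispectral band (lo, hi), given in the same wavelength units.
    out = np.zeros((hyper_spectra.shape[0], len(ms_bands)))
    for j, (lo, hi) in enumerate(ms_bands):
        sel = (hyper_wavelengths >= lo) & (hyper_wavelengths <= hi)
        out[:, j] = hyper_spectra[:, sel].mean(axis=1)
    return out

# Hypothetical example: 200 narrow bands (400-1000 nm) mapped onto four
# broad bands resembling blue/green/red/NIR.
wl = np.linspace(400, 1000, 200)
spectra = np.random.default_rng(1).random((5, 200))
ms_bands = [(450, 520), (520, 600), (630, 690), (760, 900)]
ms_sig = resample_to_multispectral(spectra, wl, ms_bands)
```

The converted signatures can then train a classifier (an SVM in the paper) that is applied across the full multispectral scene.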
Breast tumor classification via single-frequency microwave imaging
Author(s):
Cuong Manh Do;
Rajeev Bansal
We propose a novel method for the classification of breast tumors (malignant versus benign) based on principal
component analysis (PCA) following single-frequency microwave imaging. For initial evaluation, a simplified model of
the biological tissue was developed in a frequency-domain finite-element framework. The model incorporated various
combinations of dielectric constant and conductivity. A double-level classification scheme allows a tumor to be
classified with high accuracy.
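The PCA stage can be sketched as an SVD-based projection followed by a simple nearest-centroid decision. The feature vectors, cluster separation, and single-level decision rule below are illustrative stand-ins, not the paper's double-level scheme or its microwave features:

```python
import numpy as np

def pca_project(X, n_components=2):
    # PCA via SVD of the mean-centred data matrix; rows are feature
    # vectors (here, hypothetical scattered-field features per case).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Illustrative synthetic features: two well-separated dielectric classes.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.3, (20, 10))
malignant = rng.normal(-1.0, 0.3, (20, 10))
X = np.vstack([benign, malignant])

z = pca_project(X, n_components=1)[:, 0]
c_b, c_m = z[:20].mean(), z[20:].mean()
# Nearest-centroid decision in the 1-D principal subspace.
pred_malignant = np.abs(z - c_m) < np.abs(z - c_b)
```

A double-level scheme would apply such a decision twice, e.g. tumor versus no tumor first and malignant versus benign second.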
Stabilizing bidirectional associative memory with Principles in Independent Component Analysis and Null Space (PICANS)
Author(s):
James P. LaRue;
Yuriy Luzanov
A new extension to the way in which Bidirectional Associative Memory (BAM) algorithms are implemented is
presented here. We show that by utilizing the singular value decomposition (SVD) and integrating principles of
independent component analysis (ICA) into the nullspace (NS), we have created a novel approach to mitigating
spurious attractors. We demonstrate this with two applications. The first uses a one-layer association, while the
second is modeled after the several hierarchical associations of the ventral pathways. The first application details
the way in which we manage the associations in terms of matrices. The second takes what is learned from the first
and applies it to a cascade of a convolutional neural network (CNN) and a perceptron, this being our signal-processing
model of the ventral pathways, i.e., the visual system.
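For context, the baseline that PICANS extends can be sketched as a classical BAM: store bipolar pattern pairs as a sum of outer products and recall by thresholded back-and-forth multiplication. The spurious attractors the paper targets arise in exactly this recall loop; the SVD/ICA nullspace machinery itself is not reproduced here, and the patterns below are illustrative.

```python
import numpy as np

def train_bam(pairs):
    # Classical BAM: the weight matrix is the sum of outer products of
    # the bipolar pattern pairs (x in {-1,+1}^n, y in {-1,+1}^m).
    W = np.zeros((len(pairs[0][0]), len(pairs[0][1])))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def recall(W, x, n_iter=10):
    # Bidirectional recall: threshold back and forth between the two
    # layers until a stable pair is reached (ties broken toward +1).
    for _ in range(n_iter):
        y = np.where(W.T @ x >= 0, 1, -1)
        x = np.where(W @ y >= 0, 1, -1)
    return x, y

# Two orthogonal pattern pairs (illustrative).
x1 = np.array([1, -1, 1, -1, 1, -1]); y1 = np.array([1, 1, -1, -1])
x2 = np.array([1, 1, -1, -1, 1, 1]); y2 = np.array([1, -1, 1, -1])
W = train_bam([(x1, y1), (x2, y2)])

# Recall from a corrupted version of x1 (one bit flipped).
x_noisy = x1.copy(); x_noisy[0] = -1
x_rec, y_rec = recall(W, x_noisy)
```

With correlated (non-orthogonal) pairs the crosstalk terms create the spurious stable states that motivate the nullspace treatment.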
Option pricing formulas and nonlinear filtering: a Feynman path integral perspective
Author(s):
Bhashyam Balaji
Many areas of engineering and applied science require the solution of certain parabolic partial differential equations, such as the Fokker-Planck and Kolmogorov equations. The fundamental solution, or Green's function, of such PDEs can be written in terms of the Feynman path integral (FPI). The partial differential equation arising in the valuation of options is the Kolmogorov backward equation, referred to as the Black-Scholes equation. The utility of this approach is demonstrated with numerical examples that illustrate the high accuracy of option price calculation even when a fairly coarse grid is used.
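As a concrete instance of pricing through the fundamental solution: the Green's function of the Black-Scholes equation is a lognormal transition density, so a European call can be priced by integrating the discounted payoff against that density on a grid and checking against the closed-form price. The contract parameters below are illustrative:

```python
import math
import numpy as np

def bs_call_closed_form(s0, k, r, sigma, t):
    # Standard Black-Scholes formula for a European call.
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * N(d1) - k * math.exp(-r * t) * N(d2)

def bs_call_green_function(s0, k, r, sigma, t, n_grid=400):
    # Price as the discounted payoff integrated against the Green's
    # function (the lognormal transition density), on a fairly coarse
    # grid in x = log(S_T).
    mu = math.log(s0) + (r - 0.5 * sigma ** 2) * t
    sd = sigma * math.sqrt(t)
    x = np.linspace(mu - 8 * sd, mu + 8 * sd, n_grid)
    density = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    payoff = np.maximum(np.exp(x) - k, 0.0)
    dx = x[1] - x[0]
    return math.exp(-r * t) * float(np.sum(payoff * density) * dx)

cf = bs_call_closed_form(100.0, 100.0, 0.05, 0.2, 1.0)
gf = bs_call_green_function(100.0, 100.0, 0.05, 0.2, 1.0)
```

Even with only a few hundred grid points the two prices agree closely, illustrating the accuracy of Green's-function (path-integral) pricing on a coarse grid.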