- Front Matter: Volume 10646
- Multisensor Fusion, Multitarget Tracking, and Resource Management I
- Multisensor Fusion, Multitarget Tracking, and Resource Management II
- Information Fusion Methodologies and Applications I
- Information Fusion Methodologies and Applications II
- Information Fusion Methodologies and Applications III
- Information Fusion Methodologies and Applications IV
- Information Fusion Methodologies and Applications V
- Signal and Image Processing, and Information Fusion Applications I
- Signal and Image Processing, and Information Fusion Applications II
- Signal and Image Processing, and Information Fusion Applications III
- Signal and Data Processing for Small Targets
- Poster Session
Front Matter: Volume 10646
This PDF file contains the front matter associated with SPIE Proceedings Volume 10646, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
Track stitching and approximate track association on a pairwise-likelihood graph
Single-sensor track stitching is a path cover problem on a graph with pairwise log-likelihoods. This paper provides a theoretical justification for pursuing track association on such a graph by using a sum of pairwise log-likelihoods in place of the multi-sensor log-likelihood. It outlines solution strategies through clique cover, cotemporal subgraph decomposition, and super-node stitching.
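As a concrete illustration of the idea, the sketch below (Python; the log-likelihood values, the gating convention, and the no-link score are all hypothetical) stitches track-segment ends to segment starts by maximizing a sum of pairwise log-likelihoods under a one-to-one assignment constraint; it is a minimal stand-in for the clique-cover and super-node strategies the paper outlines.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical pairwise log-likelihoods: loglik[i, j] scores linking the end of
# segment i to the start of segment j (higher is better); -inf marks pairs
# that fail time/kinematic gating.
loglik = np.array([[2.3,     -np.inf,  0.4],
                   [-np.inf,  1.7,    -np.inf],
                   [0.1,     -np.inf, -1.2]])
n = loglik.shape[0]
NO_LINK = 0.0   # assumed log-likelihood of leaving a segment unstitched

# Augment with dummy "no link" columns so every segment may remain a track of
# its own, then maximize the total pairwise log-likelihood (negate to minimize).
finite = np.where(np.isfinite(loglik), loglik, -1e9)
aug = np.hstack([finite, np.full((n, n), NO_LINK)])
rows, cols = linear_sum_assignment(-aug)
links = [(i, j) for i, j in zip(rows, cols) if j < n and np.isfinite(loglik[i, j])]
print("stitched (segment end -> segment start):", links)
```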
From labels to tracks: it's complicated
The recently developed Labeled Random Finite Set (RFS) filter seems to make the problem of forming tracks across time trivial: Connect the track points with the same label, and we get a track. This paper shows different ways of forming tracks: through connecting filtered or smoothed track points, through tracing the pedigree of the last best hypothesis, and through solving the batched problem with the entire data set. It shows that the problem is nontrivial, and there are unanswered questions.
An introduction to the generalized labeled multi-Bernoulli filter through Matlab code
The recently developed Generalized Labeled Multi-Bernoulli (GLMB) filter, or the Vo-Vo filter, provides a “closed form” solution to the multi-target tracking problem, and has found many successful applications. However, one often hears from a general practitioner what a daunting task it is to follow all the mathematical notation in order to understand GLMB. This paper strives to describe the operations of the Vo-Vo filter through Matlab code, utilizing its object-oriented features for different levels of abstraction.
On-orbit calibration of satellite based imaging sensors
Satellite-based imaging sensors are subject to several factors that may cause the values of the calibration parameters to vary between the time of ground calibration and on-orbit operation. This paper considers the problem of calibrating satellite-based imaging sensors while estimating the state of a target of opportunity. The 2D pixel-based measurements (the estimated location of the target’s image in the focal plane array (FPA)) generated by these sensors are used to estimate the sensors’ pointing-angle biases. The noisy measurements provided by these sensors are assumed to be perfectly associated, i.e., they belong to the same target. The proposed algorithm leads to a maximum likelihood bias estimator. The evaluation of the corresponding Cramer-Rao Lower Bound (CRLB) on the covariance of the bias estimates, and the statistical tests on the results of simulations, show that both the target trajectory and the biases are observable and this method is statistically efficient.
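For intuition, here is a deliberately simplified sketch (Python) of the ML bias-estimation idea: a constant additive pointing bias is recovered from noisy angle measurements by minimizing the Gaussian negative log-likelihood. Unlike the paper, the reference directions are assumed known rather than jointly estimated with a target of opportunity, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

true_bias = np.array([0.003, -0.002])   # assumed az/el pointing biases (rad)
sigma = 1e-3                            # assumed angle-noise standard deviation
truth = rng.uniform(-0.5, 0.5, (200, 2))             # known reference directions
z = truth + true_bias + sigma * rng.standard_normal(truth.shape)

def nll(b):
    # Gaussian negative log-likelihood of the measurements given bias b
    r = z - (truth + b)
    return 0.5 * np.sum(r ** 2) / sigma ** 2

b_hat = minimize(nll, x0=np.zeros(2)).x
crlb = sigma ** 2 / len(z)              # CRLB per axis for a constant bias
print("estimate:", b_hat, "truth:", true_bias, "CRLB:", crlb)
```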
The data-driven delta-generalized labeled multi-Bernoulli tracker for automatic birth initialization
The δ-generalized labeled multi-Bernoulli (δ-GLMB) tracker is the first multiple hypothesis tracking (MHT)-like tracker that is provably Bayes-optimal. However, in its basic form, the δ-GLMB provides no mechanism for adaptively initializing targets at their first appearance from unlabeled measurements. By introducing a new multitarget likelihood function that accounts for new target appearance, a data-driven δ-GLMB tracker is derived that automatically initializes new targets in the tracker measurement update. Monte Carlo results of simulated multitarget tracking problems demonstrate improved multitarget tracking accuracy over comparable adaptive birth methods.
Multisensor Fusion, Multitarget Tracking, and Resource Management II
Localization of a point target from an optical sensor's focal plane array
This paper considers the localization of a point target from an optical sensor's focal plane array (FPA) with a dead zone separating neighboring pixels. The Cramer-Rao lower bound (CRLB) for the covariance of the maximum likelihood estimate (MLE) of target location is derived based on the assumptions that the energy density the target deposits in the FPA conforms to a Gaussian point spread function (PSF) and that the pixel noise follows a Poisson model (i.e., the mean and variance in each pixel are proportional to the pixel area). Extensive simulation results are provided to demonstrate the efficiency of the MLE of the target location in the FPA. Furthermore, we investigate how the estimation performance changes with the pixel size for a given dead zone width. It is shown that there is an optimal pixel size which minimizes the CRLB for a given dead zone width.
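The measurement model lends itself to a compact numerical sketch (Python; the pixel pitch, fill factor, PSF width, and photon count are all notional, and the MLE is found by brute-force grid search rather than by the paper's method):

```python
import numpy as np
from scipy.stats import norm

# Integrated Gaussian PSF over pixel active areas (with a dead zone),
# Poisson pixel counts, and a grid-search MLE of the target location.
pitch, fill = 1.0, 0.8                 # pixel pitch and active fraction
sig, flux, n = 0.7, 500.0, 8           # PSF width, expected photons, n x n FPA
edges = np.arange(n + 1) * pitch - n * pitch / 2
lo = edges[:-1] + (1 - fill) * pitch / 2    # active-area lower bounds
hi = edges[1:] - (1 - fill) * pitch / 2     # active-area upper bounds

def pixel_means(x, y):
    px = norm.cdf(hi, loc=x, scale=sig) - norm.cdf(lo, loc=x, scale=sig)
    py = norm.cdf(hi, loc=y, scale=sig) - norm.cdf(lo, loc=y, scale=sig)
    return flux * np.outer(py, px) + 1e-12  # separable Gaussian PSF energy

truth = (0.23, -0.41)
counts = np.random.default_rng(1).poisson(pixel_means(*truth))

grid = np.linspace(-1.0, 1.0, 201)
# Poisson log-likelihood up to a constant: sum(k * log(mu) - mu)
ll = [[np.sum(counts * np.log(pixel_means(x, y)) - pixel_means(x, y))
       for x in grid] for y in grid]
iy, ix = np.unravel_index(np.argmax(ll), (len(grid), len(grid)))
print("MLE:", (grid[ix], grid[iy]), "truth:", truth)
```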
Trajectory estimation and impact point prediction of a ballistic object from a single fixed passive sensor
For a thrusting/ballistic target, previous work has shown that a single fixed sensor with 2-D angle-only measurements (azimuth and elevation angles) is able to estimate the target's 3-D trajectory. In previous works, the measurements have been considered as starting either from the launch point or with a delayed acquisition. In the latter case, the target is in flight and thrusting. The present work solves the estimation problem for a target with delayed acquisition after burn-out time (BoT), i.e., in the ballistic stage. This is done with a 7-D parameter vector (velocity vector azimuth angle and elevation angle, drag coefficient, 3-D acquisition position, and target speed at the acquisition time) assuming noiseless motion. The Fisher Information Matrix (FIM) is evaluated to prove observability numerically. The Maximum Likelihood (ML) estimator is used for the motion parameter estimation at acquisition time. The impact point prediction (IPP) is then carried out with the ML estimate. Simulation results from the scenarios considered illustrate that the MLE is efficient.
A study of particle filtering approaches for the kidnapped robot problem
Clark N. Taylor,
David Mohler
Particle filtering is a popular approach to solving estimation problems that include non-linear, multi-modal, or other irregular structures in the estimation problem. Practically, however, some combinations of problems and implementations of the particle filter require a computationally unreasonable number of particles to achieve accurate estimation results. This is especially true as the number of dimensions in the state space increases. In this paper, we investigate one particular situation where a large number of particles may be required, the kidnapped robot problem. We implement several variants of the particle filter, evaluating which ones can best localize the robot after a “kidnapping” event without requiring too many particles to be practical. We find that significant improvements in performance are available using “particle flow” particle filter implementations.
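For readers unfamiliar with the setup, the sketch below (Python) shows a plain bootstrap particle filter on a 1-D kidnapped-robot toy problem, with a naive uniform re-injection remedy. It is a baseline illustration only, not one of the particle-flow implementations the paper evaluates, and every parameter is assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal bootstrap particle filter on a 1-D kidnapped-robot toy problem.
N, q, r = 2000, 0.1, 0.5        # particles, process noise, measurement noise
world = (0.0, 100.0)
x_true = 20.0
parts = rng.uniform(*world, N)
weights = np.full(N, 1.0 / N)

for t in range(100):
    if t == 50:
        x_true = 80.0                                  # the kidnapping event
    x_true += 1.0 + q * rng.standard_normal()          # robot moves ~1 unit/step
    z = x_true + r * rng.standard_normal()             # noisy position measurement

    parts += 1.0 + q * rng.standard_normal(N)          # propagate particles
    weights *= np.exp(-0.5 * ((z - parts) / r) ** 2)   # Gaussian likelihood
    weights += 1e-300                                  # avoid all-zero weights
    weights /= weights.sum()

    if 1.0 / np.sum(weights ** 2) < N / 2:             # resample on low ESS
        idx = rng.choice(N, N, p=weights)
        parts = parts[idx]
        weights = np.full(N, 1.0 / N)
        parts[: N // 50] = rng.uniform(*world, N // 50)  # naive re-injection

    if t % 25 == 0:
        print(t, round(x_true, 2), round(float(np.sum(weights * parts)), 2))
```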
Evaluation of optimizations of Murty's M-best assignment
Murty’s ranking algorithm provides a clever way of partitioning the solution space to find the M best assignments, whose costs are in nondecreasing order, to a linear assignment problem with an N × N cost matrix, where total assignment cost is minimized. This paper reviews the optimization techniques for Murty’s M-best method combined with successive shortest path assignment algorithms (such as the Jonker and Volgenant assignment algorithm) from two papers. The first paper discussed three optimizations: 1) inheriting dual variables and partial solutions during partitioning, 2) sorting subproblems by lower cost bounds before solving, and 3) partitioning in an optimized order. The second paper proposed updating the dual variables of the previous solution before the shortest path procedure is applied to solve a subproblem, without mentioning the use of lower cost bounds. One contribution of this paper is that we propose a much tighter lower bound than that given by the first paper. Comparative tests have been conducted among algorithms employing different combinations of the optimization techniques to evaluate their respective performances.
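To make the partitioning concrete, here is a minimal reference sketch (Python) of Murty's M-best procedure built on a stock assignment solver. It deliberately omits the dual-variable inheritance, lower-bound sorting, and optimized partitioning order that the paper evaluates.

```python
import heapq
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e15  # stands in for a forbidden arc

def murty(cost, m):
    """Yield up to m best assignments (total_cost, cols) in nondecreasing order."""
    n = cost.shape[0]

    def solve(c):
        rows, cols = linear_sum_assignment(c)
        total = c[rows, cols].sum()
        return (total, cols) if total < BIG / 2 else None  # reject infeasible

    best = solve(cost)
    if best is None:
        return
    heap, tie = [(best[0], 0, cost.copy(), best[1])], 1
    yielded = 0
    while heap and yielded < m:
        total, _, c, cols = heapq.heappop(heap)
        yield total, cols
        yielded += 1
        sub = c.copy()
        for i in range(n):
            forked = sub.copy()
            forked[i, cols[i]] = BIG            # forbid arc (i, cols[i])
            s = solve(forked)
            if s is not None:
                heapq.heappush(heap, (s[0], tie, forked, s[1]))
                tie += 1
            keep = sub[i, cols[i]]              # fix arc (i, cols[i]) from now on
            sub[i, :] = BIG
            sub[:, cols[i]] = BIG
            sub[i, cols[i]] = keep

C = np.array([[7.0, 5, 1], [2, 4, 6], [5, 2, 3]])
for k, (cost_k, cols_k) in enumerate(murty(C, 4), 1):
    print(k, cost_k, cols_k)
```

Each popped solution spawns at most N disjoint subproblems, so the heap always contains candidates whose best members are the next-ranked assignments.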
Information Fusion Methodologies and Applications I
A generalized labeled multi-Bernoulli filter for correlated multitarget systems
The labeled random finite set (LRFS) theory of B.-T. Vo and B.-N. Vo is the first systematic, theoretically rigorous formulation of true multitarget tracking, and is the basis for the generalized labeled multi-Bernoulli (GLMB) filter (the first implementable and provably Bayes-optimal multitarget tracking algorithm). Several of the author’s earlier papers investigated Bayes filters that propagate the correlations between two unlabeled evolving multitarget systems—but with limited success. In this paper we provide a theoretically rigorous and much more general approach, by devising a GLMB filter that propagates the correlations between two evolving labeled multitarget systems.
A clutter-agnostic generalized labeled multi-Bernoulli filter
The labeled random finite set (LRFS) theory of B.-T. Vo and B.-N. Vo is the first systematic, theoretically rigorous formulation of true multitarget tracking, and is the basis for the generalized labeled multi-Bernoulli (GLMB) filter (the first implementable and provably Bayes-optimal multitarget tracking algorithm). Like most multitarget trackers, the GLMB filter is based on the assumption that clutter statistics are known a priori. Recent research has introduced RFS filters that are "clutter-agnostic," in the sense that they can address unknown, dynamically evolving clutter. These filters were unlabeled, however. In this paper we devise a clutter-agnostic GLMB (CA-GLMB) filter, based on the Bernoulli clutter-generator concept.
A fast labeled multi-Bernoulli filter for superpositional sensors
The labeled random finite set (LRFS) theory of B.-T. Vo and B.-N. Vo is the first systematic, theoretically rigorous formulation of true multitarget tracking, and is the basis for the generalized labeled multi-Bernoulli (GLMB) filter (the first implementable and provably Bayes-optimal multitarget tracking algorithm). An earlier paper showed that labeled multi-Bernoulli (LMB) RFS's are the labeled analogs of Poisson RFS's (which are not LRFS's); and, consequently, that the LMB filter of Reuter et al. can be interpreted as a labeled PHD (LPHD) filter for the "standard" multi-target measurement model. In like manner, this paper derives an LPHD/LMB filter for superpositional sensors. Whereas the LPHD/LMB filter for the "standard" model is combinatoric, the superpositional LPHD/LMB filter has computational order O(n) where n is the current number of tracks.
Large constellation tracking using a labeled multi-Bernoulli filter
Nicholas Ravago,
Akhil K. Shah,
Sean M. McArdle,
et al.
Multiple companies have recently proposed or begun work on large constellations of hundreds to thousands of satellites in low-Earth orbits for the purpose of providing worldwide internet access. The sudden infusion of so many satellites into an already highly populated orbital regime presents an operational risk to all LEO objects. To enable risk analyses and ensure safe operations, a robust system will be needed to efficiently observe these constellations and use the resulting data to accurately and precisely track all objects. This paper proposes a rudimentary tasking-tracking system for this purpose. The scheduler uses an information-theoretic reward function to identify high-value tasks, and uses a ranked assignment algorithm to optimally allocate these tasks to a sensor network. The tracking portion employs a labeled multi-Bernoulli filter to process the generated data and estimate the multitarget state of the entire constellation. The effectiveness of this system is demonstrated using a simulated large constellation of 4,425 satellites and a network of six ground-based radar sensors.
Information Fusion Methodologies and Applications II
Detection system fusion based on the predictive value curve and its variations
This paper presents a method to quantify detection system families (DSFs) based upon the Precision-Recall (PR) curve and variations of the PR curve. The PR curve is related to the Receiver Operating Characteristic (ROC) curve. The ROC curve of a detection system family shows the trade-off between the probability of a true positive classification and the probability of a false positive classification. These conditional probabilities are conditioned on the true outcomes. The PR curve is similar in the sense that the conditional probabilities are conditioned on the outcomes of the detection systems that "say" they are true outcomes. We present the function that produces the PR curve, called the PR function. We produce the (nonlinear) transformation that relates the ROC function to the Precision-Recall function. We discuss variations of the Precision-Recall function that will be useful.
Given two detection system families A and B, for which we know their respective ROC functions, we know the transformation that produces the ROC function of the conjunction of A with B, and the ROC function of the disjunction of A with B. We review these transformations and relate them to the PR functions. In particular, given the PR functions for detection system families A and B, we produce the PR functions for the detection system families A conjoin B and A disjoin B. Examples are given that demonstrate the theory and usefulness of the transformation to predict the performance of the fused systems. The extension to multiple label classification systems will be presented.
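For concreteness, one standard pointwise form of the ROC-to-PR relationship (a general identity, not specific to this paper) for a prevalence $\pi$ of true outcomes is

$$\mathrm{recall} = P_{TP}, \qquad \mathrm{precision} = \frac{\pi\,P_{TP}}{\pi\,P_{TP} + (1-\pi)\,P_{FP}},$$

so each ROC point $(P_{FP}, P_{TP})$ maps nonlinearly to a PR point, with the prevalence controlling the distortion.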
Improving ATR system performance through sequences of classification tasks
Complex ATR tasks are often decomposed into the identification of sub-targets; that is, objects are sorted and identified as one particular target type, and then those targets are further identified. For instance, a field of view may be partitioned into natural and man-made objects, after which the man-made objects are screened to identify a particular object of interest. These tasks combine classifiers which operate in isolation of each other yet, in fact, perform as a classification sequence. This work examines this scenario, building the ATR task as a sequence of target identifications. Two sequences will be highlighted: Believe the Negative (BN) and Believe the Extremes (BE). In a BN sequence, the second classification system only operates if a target is identified by the first classification system. In a BE sequence, the second classification system only operates if there is no identification from the first classification system. Performance of these classification sequences will be compared to classification systems operating separately. Further, sequence augmentation will be examined to demonstrate how the ATR task may still be completed when information is missing on the primary target. This missing information may represent atmospheric blurring, an alternate field of view, or other disturbances. An example of the performance of the sequences under simulated, theoretical levels of missing information is examined, and formulas are presented to describe the optimal performance of these systems when augmented and un-augmented. In conclusion, this work demonstrates utility in how these sequences fuse target information in order to complete an ATR task.
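A small Monte Carlo sketch (Python) of the two sequence rules, under the simplifying assumption of two independent classifiers with fixed operating points (all rates are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two independent classifiers A and B with assumed operating points.
tpr_a, fpr_a = 0.90, 0.10
tpr_b, fpr_b = 0.85, 0.05
n, prevalence = 200_000, 0.3
truth = rng.random(n) < prevalence
say_a = np.where(truth, rng.random(n) < tpr_a, rng.random(n) < fpr_a)
say_b = np.where(truth, rng.random(n) < tpr_b, rng.random(n) < fpr_b)

bn = say_a & say_b   # Believe the Negative: B runs only on A's positives
be = say_a | say_b   # Believe the Extremes: B runs only on A's negatives

for name, d in [("BN", bn), ("BE", be)]:
    print(name, "TPR:", round(d[truth].mean(), 3),
          "FPR:", round(d[~truth].mean(), 3))
# Under independence: TPR_BN = TPR_A*TPR_B, TPR_BE = TPR_A + (1-TPR_A)*TPR_B
```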
Information Fusion Methodologies and Applications III
pystemlib: towards an open-source tracking, state estimation, and mapping toolbox in Python
Python State Estimation and Modeling Library, pystemlib, is a library that implements Bayesian State Estimation theory for modeling and tracking target objects. This library was developed to overcome the limitations associated with licensed programming languages as well as imperative and numerical matrix-based programming styles that were used in previously developed libraries. pystemlib incorporates object-oriented, functional, and symbolic programming to develop accurate and easy-to-use tracking filters and models. This library is also capable of mapping state estimation results onto the geographical areas to which they correspond. Future work on this library will include optimizing the algorithms for speed and extending the library to incorporate multi-target tracking, data fusion, and image and video processing.
Analysis of noise impact on distributed average consensus
This paper considers the problem of distributed average consensus in a noisy sensor network, in which noise causes bit errors and degrades the accuracy of the results. We use a bit-flipping model to represent the noise effect and show that it leads to biased results. We propose an unbiased average consensus algorithm for noisy networks with dynamic topologies. We analyze the convergence speed and the mean square error and show that the noise can be suppressed by our method. The proposed algorithm is found effective in a network simulation with and without perfect bit error rate information.
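As background for the analysis, a noise-free distributed average consensus iteration with Metropolis weights looks as follows (Python; the topology and measurements are illustrative, and the paper's bit-flipping noise and unbiased correction are not modeled here):

```python
import numpy as np

# Fixed undirected graph: a 4-node cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
deg = np.zeros(n, int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

# Metropolis rule: w_ij = 1 / (1 + max(deg_i, deg_j)), diagonal fills the rest.
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
W[np.diag_indices(n)] = 1 - W.sum(axis=1)

x = np.array([3.0, -1.0, 4.0, 6.0])   # local measurements; the mean is 3.0
for _ in range(50):
    x = W @ x                          # each node averages with its neighbors
print(x)                               # all entries converge to 3.0
```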
Information Fusion Methodologies and Applications IV
A framework for adaptive MaxEnt modeling within distributed sensors and decision fusion for robust target detection/recognition
The Maximum Entropy (MaxEnt) information-theoretic parametric framework was introduced in a prior paper for distributed decision fusion (DDF) without knowledge of the prior probabilities of local decisions. That paper demonstrated the effectiveness of the MaxEnt fusion center by achieving the best realistic detection performance with respect to published results of either the Bayesian formulation or the Neyman-Pearson criterion. This paper presents the framework of an extension of MaxEnt DDF, called E-MaxEnt, which uses individual per-sensor MaxEnt classifiers for target classification/recognition and fuses the local classifier decisions. Specifically, in E-MaxEnt each sensor has a front-end pre-processing system both for signal detection and for processing unique target attributes extracted, for example, from observed target imagery; these attributes are stored for reference/learning/comparison in the sensors' MaxEnt classifiers. Based on the degree of match, each sensor generates local binary decisions that are sent to a MaxEnt fusion center, in the usual parallel architecture. No assumptions are made about knowing any local decision rules. The sensors take simultaneous (synchronized) measurements with overlapping FOV coverages. It should be noted that the above description is not meant to address the "needle-in-haystack" problem, but rather to find the presence of, viz. classify/recognize, a previously seen "known" target in areas where previously seen targets most likely are, along with other targets. At the time of writing, the data sets to test the algorithm were not available, but the front-end image processing and MaxEnt classifiers were implemented. It is hoped that someone could provide the necessary data sets so the efficacy of the method could be demonstrated and compared with alternative approaches.
A comparison between robust information theoretic estimator and asymptotic maximum likelihood estimator for misspecified model
A robust information-theoretic estimator (RITE) is based on a non-homogeneous Poisson spectral representation. When an autoregressive (AR) Gaussian wide sense stationary (WSS) process is corrupted by noise, RITE is analyzed and shown by simulation to be more robust to noise than the asymptotic maximum likelihood estimator (MLE). The statistics of RITE and the asymptotic MLE are analyzed for the misspecified model. For large data records, RITE and MLE are asymptotically normally distributed. MLE has lower variance, but RITE exhibits much less bias. Simulation examples of a noise-corrupted AR process are provided to support the theoretical properties and show the advantage of RITE for low signal-to-noise ratios (SNR).
Optimizing collaborative computations for scalable distributed inference in large graphs
In this paper, we study two methods to optimize distributed collaborative computations: (a) data partitioning, which exploits locality to reduce data dependencies between local computations, and (b) computation aggregation, which reduces the communication load between local partitions. We analyze the benefits of such optimizations and their utility for the message-passing processing model. This is a class of general-purpose graph analytics widely used in a range of domains and applications, including computer vision, activity recognition, social network analysis, knowledge mining, and semi-supervised inference. The described optimization methods will improve the performance of relational data analytics implemented in distributed environments, including cloud computing, graphical processing units, collaborative multi-agent systems, or specialized chip-boards.
Enabling self-configuration of fusion networks via scalable opportunistic sensor calibration
The range of applications in which sensor networks can be deployed depends heavily on the ease with which sensor locations/orientations can be registered and the accuracy of this process. We present a scalable strategy for algorithmic network calibration using sensor measurements from non-cooperative objects. Specifically, we use recently developed separable likelihoods in order to scale with the number of sensors whilst capturing the overall uncertainties. We demonstrate the efficacy of our self-configuration solution using a real network of radar and lidar sensors for perimeter protection and compare the accuracy achieved to manual calibration.
Multiscale graph-based framework for efficient multi-sensor integration and event detection
We present a general framework for integrating disparate sensors to dynamically detect events. Events are often observed as multiple, asynchronous, disparate sensors’ activations in time. The challenge is to reconcile them to infer that a process of interest is underway or has occurred. We abstractly model each sensor as a value-attributed time interval over which it takes values that are relevant to a known process of interest. Process knowledge is incorporated in the detection scheme by defining sensor neighborhood intervals that overlap with temporally neighboring sensor neighborhood intervals in the process. The sensor neighborhoods are represented as nodes of an interval graph, with edges between nodes of overlapping sensor neighborhoods. Sensor activity is then interpreted via this process model by constructing an interval graph time series, for relevant sensor types and process-driven neighborhoods, and looking for subgraphs that match those of the process model graph. A time series that dynamically records the number of sensor neighborhoods overlapping at any given time is used to detect temporal regions of high sensor activity indicative of an event. Multiscale analysis of this time series reveals peaks over different time scales. The peaks are then used to efficiently triage underlying interval subgraphs of sensor activity to examine them for relational patterns similar to the process model graph of interest. Thus, our framework synergistically uses relational as well as scale information to dynamically and efficiently triage sensors related to a process. Multiple processes of interest may be efficiently detected and tracked in parallel using this approach.
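A toy sketch (Python; the intervals are invented) of the two core ingredients, the overlap-count time series and the interval-graph edges:

```python
# Sensor neighborhoods as closed time intervals (start, end).
intervals = [(0.0, 2.5), (1.0, 4.0), (3.5, 6.0), (3.8, 5.0), (9.0, 10.0)]

# Sweep line: the number of overlapping neighborhoods at any time forms a
# piecewise-constant activity-count series whose peaks indicate events.
events = sorted([(s, +1) for s, e in intervals] + [(e, -1) for s, e in intervals])
count, series = 0, []
for t, delta in events:
    count += delta
    series.append((t, count))
print(series)

# Interval-graph edges: pairs of neighborhoods that overlap in time.
edges = [(a, b)
         for a in range(len(intervals)) for b in range(a + 1, len(intervals))
         if intervals[a][0] <= intervals[b][1] and intervals[b][0] <= intervals[a][1]]
print(edges)
```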
Information Fusion Methodologies and Applications V
Impact of emerging quantum information technologies (QIT) on information fusion: panel summary (Conference Presentation)
Quantum physics has a growing influence on sensor technology, particularly in the areas of quantum computer science, quantum communications, and quantum sensing based on recent insights from atomic, molecular, and optical physics. These quantum contributions have the potential to impact information fusion techniques. Quantum information technology (QIT) methods of interest suggest benefits for information fusion, so a panel was organized to articulate methods of importance for the community. The panel discussion presented many ideas, from which the leading impact for information fusion is directly related to sub-Rayleigh sensing that reduces uncertainty for object assessment through enhanced resolution. The second area of importance is the cyber security of data that supports data, sensor, and information fusion. One element of QIT that requires further analysis is quantum computing, for which only a limited set of information fusion techniques can harness the methods associated with quantum computer architectures. The panel reviewed various aspects of QIT for information fusion, which provides a foundation to identify future alignment between quantum and information fusion techniques.
An adaptive sensing approach for the detection of small UAV: first investigation of static sensor network and moving sensor platform
Fusion of information in heterogeneous multi-modal sensor networks has been proven to enhance the sensing capabilities of ground troops to detect and track small unmanned aerial vehicles flying at low altitude. Nevertheless, the area coverage of a static sensor network could be permanently or temporarily impacted by geographic topologies or moving obstacles, which could reduce the local sensing probabilities. An additional moving sensor platform can be used to temporarily enhance sensing capabilities. First theoretical analyses and experimental field trials are presented using a static sensor network consisting of an acoustical antenna array, a stationary FMCW RADAR, and a passive/active optical sensor unit. Additionally, a measurement vehicle was deployed, equipped with passive/active optical sensing devices. While the sensor network was used to monitor a stationary area with a sensor-dependent sensing coverage, the measurement vehicle was used to obtain additional information outside the sensing range of the network or behind obstacles. A fusion of these data sets can provide increased situational awareness. Limitations and improvements of this approach are discussed.
Multi-camera multi-target perceptual activity recognition via meta-data fusion (Conference Presentation)
Human activity detection and recognition capabilities have broad applications for civilian, military, and homeland security. However, monitoring of human activities is a very complicated and tedious task, especially when multiple persons perform activities in confined spaces that impose significant obstruction, occlusion, and observability uncertainty. These applications require fast and reliable tracking systems to observe and make inferences about dynamic objects from multiple coherent video sequences. In compact surveillance systems, utilization of a multi-camera monitoring system is imperative for tracking, inference, and recognition of a variety of group activities. With multi-camera systems, the complexity of occlusion can be dealt with by finding and correlating the correspondences from within multiple camera views observing the same target at once. In this paper, we demonstrate one such multi-person tracking system developed in a virtual environment. By example, we demonstrate an efficient and effective technique for multi-target tracking, discrimination, and activity recognition in confined spaces. The exemplary scenario considered in this study represents a bus activity where multiple passengers arrive, take seats, and leave while being monitored by four concurrently operating surveillance camera systems. We present how the processing tasks of the multiple cameras are shared, and which object features they detect, track, and identify jointly. Furthermore, we present computational intelligence techniques for processing multi-camera images for recognition of objects of interest as well as for annotation of observed individual and group activities via meta-data imagery fusion. The proposed multi-camera processing system is shown to efficiently and effectively track multiple targets with different degrees of social interaction, either with one another or with objects involved in their activities.
MARINE-EO bridging innovative downstream earth observation and Copernicus enabled services for integrated maritime environment, surveillance, and security
Maritime “awareness” is currently a top priority for Europe with regard to the marine environment and climate change, as well as maritime security, border control against irregular immigration, and safety. MARINE-EO is the first European Earth Observation (EO) Pre-Commercial Procurement (PCP) project and aims at the following objectives: (i) develop, test, and validate two sets of demand-driven EO-based services, based on open standards, bringing incremental or radical innovations to the field of maritime awareness and leveraging the existing Copernicus Services and other products from the Copernicus portfolio; (ii) propose a set of “support”/“envelope” services which will better integrate the EO-based services into the operational logic and code of conduct; (iii) strengthen transnational collaboration in the maritime awareness sector by facilitating knowledge transfer and optimization of resources for the public authorities participating in the buyers group.
A novel architecture for behavior/event detection in security and safety management systems
In this paper the architecture of an autonomous human behavior detection system is presented. The proposed system architecture is intended for security and safety surveillance systems that aim to identify adverse events or behaviors which endanger the safety of people or their well-being. Applications include monitoring systems for crowded places (malls, mass transport systems, and others), critical infrastructures, or border crossing points. The proposed architecture consists of three modules: (a) the event detection module, combined with a data fusion component responsible for the fusion of the sensor inputs along with relevant high-level metadata, which are pre-defined features that are correlated with a suspicious event; (b) an adaptive learning module which takes input from official personnel or healthcare personnel about the correctness of the detected events, and uses it to properly parameterise the event detection algorithm; and (c) a statistical and stochastic analysis component which is responsible for specifying the appropriate features to be used by the event detection module. Statistical analysis estimates the correlations between the features employed in the study, while stochastic analysis is used for the estimation of dependencies between the features and the achieved system performance.
Signal and Image Processing, and Information Fusion Applications I
A new FSII-CFAR detector based on fuzzy membership degree
Because SAR images contain heavy speckle, they carry considerable uncertainty, which makes target detection difficult. Fuzzy theory is a mathematical method used to reduce this uncertainty. A new FSII-CFAR detector is proposed, which improves intelligent iterative CFAR detection by searching for a better-fitting distribution model of the SAR image background based on fuzzy logic. The best-fitting distribution model of the background data is decided by the membership value of a fuzzy clustering criterion (FCC). Compared with traditional fitting criteria, the FCC improves the CFAR detection rate. Because the fitted distribution better approximates the SAR image background, simulation results show that the FSII-CFAR detector achieves a detection rate above 80% in complex backgrounds.
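For orientation, a generic one-dimensional cell-averaging CFAR is sketched below (Python); the FSII-CFAR adds fuzzy selection of the background distribution model on top of this basic mechanism, and the guard/training sizes, desired false-alarm rate, and exponential clutter model are all assumptions.

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power signal; returns detected indices."""
    n_train = 2 * train
    # Threshold multiplier for exponential (square-law) clutter.
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1)
    hits = []
    for i in range(guard + train, len(x) - guard - train):
        left = x[i - guard - train: i - guard]
        right = x[i + guard + 1: i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()   # local clutter estimate
        if x[i] > alpha * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(3)
bg = rng.exponential(1.0, 500)   # speckle-like exponential background
bg[250] += 30.0                  # injected point target
print(ca_cfar(bg))
```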
Low-cost multi-camera module and processor array for the ultra-fast framerate recognition, location, and characterization of explosive events
Cedric Yoedt,
Carlos Maraviglia,
Sungjoo Park,
et al.
In the image processing world, the detection, location, and identification of explosive events is usually accomplished by single detectors, single-element detector arrays, or higher-cost cameras (primarily infrared). Imaging systems have been limited by too few event frames, high component costs, and poor false alarm rates. For the last three years, NRL's Advanced Techniques Digital Technologies section has been researching ultra-fast framerate explosive event detection. NRL has designed, fabricated, and tested a multi-sensor array of low-cost camera modules, each with its own field programmable gate array processor, which are networked together to implement a system capable of imaging explosive events at 16-30 kHz framerates in real time. These camera modules work in the visible band and open up the possibility of exploiting 30-60 frames of an explosive event. With this array it is possible not only to image burning gases and high-intensity flashes but also low-signature moving effluent and airborne particles. By using processors behind each camera module it is possible to distribute different parts of the algorithm and to perform computationally expensive operations on individual frames. Networking the array together allows further distribution of the processing for further temporal analysis. Finally, all of the resulting images are sent to a central processor where the final parts of the algorithm are completed. The cost of this system, once optimized for production, will be close to that of acoustic systems but with much higher precision.
Super-resolution of remote sensing images using edge-directed radial basis functions
Edge-Directed Radial Basis Functions (EDRBF) are used to compute a super-resolution (SR) image from a given set of low resolution (LR) images differing in subpixel shifts. The algorithm is tested on remote sensing images and compared for accuracy with other well-known algorithms such as Iterative Back Projection (IBP), the Maximum Likelihood (ML) algorithm, interpolation of scattered points using Nearest Neighbor (NN) and Inverse Distance Weighted (IDW) interpolation, and Radial Basis Functions (RBF). The accuracy of SR depends on various factors besides the algorithm: (i) the number of subpixel-shifted LR images, (ii) the accuracy with which the LR shifts are estimated by registration algorithms, and (iii) the targeted spatial resolution of SR. In our studies, the accuracy of EDRBF is compared with the other algorithms keeping these factors constant. The algorithm has two steps: (i) registration of the low resolution images and (ii) estimation of the pixels in the High Resolution (HR) grid using EDRBF. Experiments are conducted by simulating LR images from an input HR image with different subpixel shifts. The reconstructed SR image is compared with the input HR image to measure the accuracy of the algorithm using the sum of squared errors (SSE). The algorithm has outperformed all of the algorithms mentioned above. The algorithm is robust and is not overly sensitive to registration inaccuracies.
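As a point of reference, plain RBF interpolation of the scattered LR samples onto an HR grid, the baseline on which the edge-directed variant improves, can be sketched as follows (Python; the synthetic scene, shifts, and kernel choice are illustrative):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# A continuous synthetic "truth" scene stands in for the HR image.
scene = lambda y, x: np.sin(x / 4.0) + np.cos(y / 5.0)

pts, vals = [], []
for dy, dx in [(0, 0), (0.5, 0.25), (0.25, 0.5), (0.75, 0.75)]:  # subpixel shifts
    ys, xs = np.mgrid[0:64:4, 0:64:4]          # 16x16 LR sampling grid
    ys, xs = ys + dy, xs + dx                  # shifted sample positions
    pts.append(np.column_stack([ys.ravel(), xs.ravel()]))
    vals.append(scene(ys, xs).ravel())

interp = RBFInterpolator(np.vstack(pts), np.concatenate(vals),
                         kernel="thin_plate_spline")
yy, xx = np.mgrid[0:64, 0:64]                  # target HR grid
sr = interp(np.column_stack([yy.ravel(), xx.ravel()])).reshape(64, 64)
sse = np.sum((sr - scene(yy, xx)) ** 2)        # accuracy metric from the paper
print(sse)
```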
Compressive sensing using the log-barrier algorithm with complex-valued measurements
Compressive sensing constitutes a series of theories and algorithms that, under certain conditions, allow one to reconstruct a signal from limited linear measurements, based on knowledge about a domain where the signal is sparse. The ℓ0-minimization represents the ideal approach for reconstruction, as it searches for the sparsest representation that explains the measurements, but it is an NP-hard procedure. Fortunately, the ℓ1-minimization can frequently be used as an approximation to the ℓ0 approach, and the problem can be solved by some algorithms in polynomial time. One of the optimization problems formulated in this context corresponds to finding the sparsest solution subject to a quadratic constraint, such as in the log-barrier algorithm provided in the well-known ℓ1-Magic package. However, in this particular problem, real-valued signals are reconstructed from real-valued data. In this paper, we show how we can reconstruct sparse real-valued signals from noisy complex-valued measurements. The problem is still posed as a second-order cone program by means of the log-barrier method. However, new modifications in the Newton step equations and in the ℓ1-Magic codes are necessary to fit the complex-valued data. In addition, in order to evaluate the reconstructions using complex data, we present the results of numerical experimentation and evaluate the performance of the signal reconstruction in terms of signal-to-error ratios. The provided method is well-suited for real applications involving the acquisition of complex-valued data, such as magnetic resonance imaging and computed tomography.
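A common lightweight alternative to modifying the solver internals, shown here only for illustration and not the paper's approach, is to stack real and imaginary parts into an equivalent real-valued system and hand it to any real-valued sparse solver (Python; ISTA is used as a stand-in ℓ1 solver, and all problem sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 60, 128, 5
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Stack real and imaginary parts: a real x satisfies both halves at once.
Ar = np.vstack([A.real, A.imag])
br = np.concatenate([b.real, b.imag])

lam = 0.01
L = np.linalg.norm(Ar, 2) ** 2          # Lipschitz constant of the gradient
xh = np.zeros(n)
for _ in range(500):                    # ISTA: gradient step + soft threshold
    g = Ar.T @ (Ar @ xh - br)
    xh = xh - g / L
    xh = np.sign(xh) * np.maximum(np.abs(xh) - lam / L, 0)
print(np.linalg.norm(xh - x))
```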
Signal and Image Processing, and Information Fusion Applications II
Stabilization and registration of full-motion video data using deep convolutional neural networks
Stabilization and registration are common techniques applied to overhead imagery and full-motion video (FMV) during production to facilitate further exploitation by the end user. Algorithms designed to accomplish these tasks must accommodate changes in capture geometry, atmospheric effects, and sensor characteristics. Moreover, algorithms that rely on a controlled image base (CIB) reference typically require some degree of robustness with respect to differences in imaging modality. While many factors contributing to gross misalignment can be mitigated using available sensor telemetry and rigorous photogrammetric modeling, the subsequent image-based registration task often relies on loose model assumptions and poor generalizations.
This work presents a modality-agnostic deep learning approach to automatically stabilize and register overhead FMV data to a reference image such as a CIB. The field of deep learning has received significant attention in recent years with advances in high-performance computing and the availability of widely adopted open source tools for numerical computation using data flow graphs. We leverage recent developments in the use of fully differentiable spatial transformer networks to simultaneously remove coarse geometric differences and fine local misalignments in the registration process. Most importantly, no model is required. A convolutional neural network (ConvNet), complete with a spatial transformer, is trained using pairs of frames of FMV data as the input and corresponding label. Once the mechanism by which the deformable warp is applied has been learned, the trained network ingests new data and returns a version of the input image sequence that has been warped to a user-specified reference. The performance of our approach is evaluated using several real FMV data sets.
iSight: computer vision based system to assist low vision
iSight is a mobile application to assist people with low vision in the everyday task of seeing. Specifically, the goal of the system is to use 2D computer vision to refocus and visualize specific objects recognized in the image in an Augmented Reality scheme. This paper discusses the development of the application, which uses a deep learning TensorFlow module to perform recognition of objects in the scene the user is looking at and consequently directs the formation of an augmented reality scene which is presented to the user to enhance their visual understanding. Both indoor and outdoor environments are tested and results are given. The successes and challenges faced by iSight are presented along with future avenues of work.
A real-time object detection framework for aerial imagery using deep neural networks and synthetic training images
Efficient and accurate real-time perception systems are critical for Unmanned Aerial Vehicle (UAV) applications that aim to provide enhanced situational awareness to users. Specifically, object recognition is a crucial element for surveillance and reconnaissance missions since it provides fundamental semantic information about the aerial scene. In this study, we describe the development and implementation of a perception framework on an embedded computer vision platform, mounted on a hexacopter, for real-time object detection. The framework includes a camera driver and a deep neural network based object detection module, and has distributed computing capabilities between the aerial platform and the corresponding ground station. Preliminary aerial real-time object detections using YOLO are performed onboard a UAV, and a sequence of images is streamed to the base station, where an advanced computer vision algorithm, referred to as Multi-Expert Region-based CNN (ME-RCNN), is leveraged to provide enhanced and fine-grained analytics on the aerial video feeds. Since annotated aerial imagery in the UAV domain is hard to obtain and not routinely available, we use a combination of aerial data as well as air-to-ground synthetic images, such as vehicles, generated by video gaming engines for training the neural network. Through this study, we quantify the level of improvement from the use of the synthetic dataset and the efficacy of using advanced object detection algorithms.
Mobile crowd-sensing for access point localization
Alex Pereira da Silva,
Sylvain Leirens
Nowadays, the sensors built into smartphones enable data collection at such a large scale that several applications become possible in a crowd-sensing paradigm. An interesting application is device localization, which takes advantage of pervasive Wi-Fi networks in indoor and outdoor environments. We propose a crowd-sensing approach to localize access points in a building based on the collection of received signal strength, step detection, and heading change measurements from free-moving pedestrian users handling smartphones. The localization method boils down to computing likely positions of access points and iteratively estimating and validating related propagation models.
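The propagation-model fitting step can be sketched as follows (Python): a log-distance path-loss model with unknown access-point position is fit to crowd-sensed RSS samples by nonlinear least squares. The pedestrian positions are taken as given (in practice they come from step detection and heading estimation), and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)
ap = np.array([12.0, 7.0])               # true AP position (assumed, meters)
p0, n_exp = -40.0, 2.2                   # RSS at 1 m, path-loss exponent
pos = rng.uniform(0, 25, size=(300, 2))  # pedestrian trace sample positions
d = np.linalg.norm(pos - ap, axis=1)
rss = p0 - 10 * n_exp * np.log10(d) + 2.0 * rng.standard_normal(len(d))

def resid(theta):
    # Residuals of the log-distance path-loss model for parameters
    # theta = (x, y, P0, n).
    x, y, P0, n = theta
    dd = np.linalg.norm(pos - [x, y], axis=1) + 1e-6
    return rss - (P0 - 10 * n * np.log10(dd))

fit = least_squares(resid, x0=[5.0, 5.0, -30.0, 2.0])
print(fit.x)   # estimated (x, y, P0, n) vs. truth (12, 7, -40, 2.2)
```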
Going deeper with CNN in malicious crowd event classification
Terror attacks are often targeted towards civilians gathered in one location (e.g., the Boston Marathon bombing). Distinguishing such 'malicious' scenes from 'normal' ones, which are semantically different, is a difficult task as both scenes contain large groups of people with high visual similarity. To overcome the difficulty, previous methods exploited various kinds of contextual information, such as language-driven keywords or relevant objects. Although useful, they require additional human effort or datasets. In this paper, we show that using more sophisticated and deeper Convolutional Neural Networks (CNNs) can achieve better classification accuracy even without using any additional information outside the image domain. We have conducted a comparative study in which we train and compare seven different CNN architectures (AlexNet, VGG-M, VGG16, GoogLeNet, ResNet-50, ResNet-101, and ResNet-152). Based on the experimental analyses, we found that deeper networks typically show better accuracy, and that GoogLeNet is the most favorable among the seven architectures for the task of malicious event classification.
Deep learning of group activities from partially observable surveillance video streams (Conference Presentation)
In this paper, we describe the ontology of Partially Observable Group Activities (POGA) in the context of In-Vehicle Group Activity (IVGA) recognition. Initially, we describe the ontology pertaining to IVGA and show how such an ontology, based on in-vehicle volumetric sub-space realization and human postural motion limitations, can serve as a priori knowledge for inference of human actions inside the confined space of a vehicle. In particular, we treat this predicament as an "action-object" duality problem. This duality signifies that the detection of a human action can be used to infer the probable object being utilized, and vice versa. Furthermore, we use partially observable human postural sequences to recognize actions. Inspired by the deep learning ability of convolutional neural networks (CNNs), we present the architecture design of a new CNN model for learning "action-object" perception from continuous surveillance videos. In this study, we apply a sequential Deep Hidden Markov Model (DHMM) as a post-processor to the CNN to decode realized observations into recognized actions and activities. To generate the imagery data set needed for the training and testing of these newly developed techniques, the IRIS virtual simulation software is employed to construct dynamic, high-fidelity animations of scenarios representing in-vehicle group activities under different operational contexts. The results of our comparative investigation are discussed and presented in detail.
Signal and Image Processing, and Information Fusion Applications III
Object recognition and tracking based on multiscale synthetic SAR and IR in the virtual environment (Conference Presentation)
Identification and tracking of dynamic 3D objects from Synthetic Aperture Radar (SAR) and Infrared (IR) thermal imaging in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present an approach for 3D object recognition and tracking based on multi-modality (e.g., SAR and IR) imagery signatures, and discuss a multi-scale scheme for extracting salient keypoint descriptors from the multi-modality imagery of 3D objects. Next, we describe how to cluster local salient keypoints and model them as signature surface patch features suitable for object detection and recognition. During our supervised training phase, multiple views of the test model are presented to the system, where a set of multi-scale invariant surface features is extracted from each model and registered as the object's class signature exemplar. These features are employed during the online recognition phase to generate recognition hypotheses. When each object of interest is verified and recognized, the object's attributes are annotated semantically. The coded semantic annotations are then efficiently presented to a Hidden Markov Model (HMM) for spatiotemporal object state discovery and tracking. Through this process, corresponding features of the same objects from multiple sequential multi-modality imagery data are realized and tracked over time. The proposed algorithm was tested using the IRIS simulation model, where two test scenarios were constructed: one for activity recognition of ground-based vehicles and the other for classification of Unmanned Aerial Vehicles (UAVs). In both scenarios, synthetic SAR and IR imagery are generated using the IRIS simulation model for the purpose of training and testing the newly developed algorithms. Experimental results show that our algorithms offer significant efficiency and effectiveness.
Multiscale synthetic SAR and IR imagery features generation in the cluttered virtual environment (Conference Presentation)
Detection and recognition of 3D objects and their motion characteristics from Synthetic Aperture Radar (SAR) and Infrared (IR) thermal imaging in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present an efficient technique for the generation of static and dynamic synthetic SAR and IR imagery data in cluttered virtual environments. Such imagery data sets closely represent the view of the physical environment as it would be perceived by physical SAR and IR imaging systems, respectively. In this work, we present the IRIS simulation model for the efficient construction and modeling of virtual environments with clutter, and discuss our techniques for low-poly 3D object surface patch generation. Furthermore, we present several test scenarios from which the synthetic SAR and IR imaging data sets are obtained, and discuss the role of key control parameters impacting the performance of our synthetic multi-modality imaging systems. Lastly, we describe a method for multi-scale feature extraction from 3D objects based on the synthetic SAR and IR imagery data sets for a variety of ground-based and aerial test vehicles, and demonstrate the efficiency and effectiveness of this approach in different test scenarios.
Scanning LiDAR for airfield damage assessment
The ability to rapidly assess damage to military infrastructure after an attack is the object of ongoing research. In the case of runways, sensor systems capable of detecting and locating craters, spall, unexploded ordnance, and debris are necessary to quickly and efficiently deploy assets to restore a minimum airfield operating surface. We describe measurements performed using two commercial, robotic scanning LiDAR systems during a round of testing at an airfield. The LiDARs were used to acquire baseline data and to conduct scans after two rounds of demolition and placement of artifacts for the entire runway. Configuration of the LiDAR systems was sub-optimal due to the availability of only two platforms for placement of sensors on the same side of the runway. Nevertheless, the results show that the spatial resolution, accuracy, and cadence of the sensors are sufficient to develop point cloud representations of the runway that distinguish craters, debris, and most UXO. Locating a complementary set of sensors on the opposite side of the runway would alleviate the observed shadowing, increase the density of the registered point cloud, and likely allow detection of smaller artifacts. Importantly, the synoptic data acquired by these static LiDAR sensors are dense enough to allow registration (fusion) with the smaller, denser, targeted point cloud data acquired at close range by unmanned aerial systems. The paper also discusses point cloud manipulation and 3D object recognition algorithms that the team is developing for automatic detection and geolocation of damage and objects of interest.
FLYSEC: A comprehensive control, command and Information (C2I) system for risk-based security
Increased passenger flows at airports and the need for enhanced security measures against ever-increasing and more complex threats lead to long security lines, increased waiting times, and often intrusive and disproportionate security measures that result in passenger dissatisfaction and escalating costs. As expressed by the International Air Transport Association (IATA), the Airports Council International (ACI), and the respective industry, today's airport security model is not sustainable in the long term. The vision of a seamless and continuous journey throughout the airport, with efficient security resource allocation based on intelligent risk analysis, sets the challenging objectives for the Smart Security of the airport of the future. FLYSEC, a research and innovation project funded by the European Commission under the Horizon 2020 Framework Programme, developed and demonstrated an innovative, integrated, and risk-based end-to-end airport security process for passengers, enabling a guided and streamlined procedure from landside to airside and into the boarding gates, and offering for the first time an operationally validated innovative concept for end-to-end aviation security. With a consortium of eleven highly specialised partners, coordinated by the National Center for Scientific Research “Demokritos,” FLYSEC developed and tested an integrated risk-based security system with a POC (Proof of Concept) validation field trial at Schönhagen Airport in Berlin, and a final pilot demonstration under operational conditions at Luxembourg International Airport.
Embedding a distributed simulator in a fully-operational control and command airport security system
Command and Control (C2) airport security systems have developed over time, both in terms of technology and in terms of increased security features. Airport control check points are required to operate and maintain modern security systems preventing malicious actions. This paper describes the architecture for embedding a fully distributed, sophisticated simulation platform within a fully operational and robust, state-of-the-art C2 security system in the context of airport security. The overall system, i.e., the C2 system, the classification tool, and the embedded simulator, delivers a fully operating, validated platform which focuses on: (a) the end-to-end airport security process for passengers, airports, and airlines, and (b) the ability to test and validate all security subsystems and processes, as well as the entire security system, via realistically generated and simulated scenarios, both in vitro and in vivo. The C2 system has been integrated with iCrowd, a crowd simulation platform developed by the Integrated Systems Lab of the Institute of Informatics and Telecommunications at NCSR Demokritos, that features a highly configurable, high-fidelity agent-based behavior simulator. iCrowd provides a realistic environment inciting behaviors of simulated actors (e.g., passengers, personnel, malicious actors), instantiates the functionality of hardware security technologies (e.g., beacons, RFID scanners, and RFID tags for carry-on luggage tracking), and simulates passenger facilitation and customer service. To create a realistic and domain-agnostic scenario, multiple simulation instances undertake different kinds of entities - whose plans and actions would be naturally unknown to each other - and run in sync, constituting a distributed simulation platform. The primary goal is to enable a guided and streamlined procedure from landside to airside and into the boarding gates, while offering an operationally validated innovative concept for testing end-to-end aviation security processes, procedures, and infrastructure.
Signal and Data Processing for Small Targets
Estimation of single-point sea-surface brightness statistics (Conference Presentation)
Detection system performance analysis is frequently performed assuming Gaussian background statistics, often for convenience or due to a lack of better information. The Gaussian background assumption creates a relationship between probability of detection and probability of false-alarm (the receiver operating characteristic curve or ROC curve) as a function of signal-to-noise ratio. When the background distribution is non-Gaussian (e.g., with strong skew or excess kurtosis), analysis of detection system performance based on the estimated variance of the background signal under the assumption of Gaussianity will result in misleading estimates of detection and false alarm probabilities. In order to correctly define the ROC curve, the background statistics must be known. For infrared imaging systems, one example of a background which may be strongly non-Gaussian is the radiance field of a wavy sea-surface. Although the sea-surface slope field is assumed to be a Gaussian random field, the radiance field maps nonlinearly to the slope field, producing the phenomenon of sun glitter. The result is strongly non-Gaussian radiance distribution functions for certain sea-surface viewing conditions. Based on an analytical expression for sea surface radiance due to Ross, Potvin, and Dion (2005), we construct an approximate analytic expression for the distribution function of single-point (i.e., correlated neither in time nor space) sea-surface radiance as observed by a passive, square-law, electro-optical/infrared detector. With this distribution function, the relationship between detection and false-alarm probabilities can be more accurately characterized.
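For reference, the Gaussian-background baseline ties the two probabilities together in closed form: for a mean-shift signal of $d$ background standard deviations,

$$P_D = Q\!\left(Q^{-1}(P_{FA}) - d\right), \qquad Q(x) = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\tfrac{x}{\sqrt{2}}\right),$$

and it is exactly this relationship that becomes misleading when the single-point radiance distribution is strongly skewed or heavy-tailed.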
Robust spectral classification
Andrew W. Tucker,
Steven Kay
Spectral classification is a commonly used technique for discriminating between two or more signals. One popular approach to spectral classification utilizes the autoregressive model. In this model, a white Gaussian random process is filtered by an all-pole filter. The autoregressive model leads to a classifier derived from the asymptotic Gaussian likelihood function. Despite substantial prior research effort put into developing a robust classifier, the ability of classifiers to discriminate between signals is not great and in some instances is not even satisfactory. A non-homogeneous Poisson process is an alternative way to model the power spectral density. This type of model leads to a different likelihood function, the realizable Poisson likelihood function. Monte Carlo simulations and data analyses demonstrate that the realizable Poisson likelihood function classifier is more robust than the asymptotic Gaussian classifier. The realizable Poisson likelihood function classifier has a greater probability of correct classification than the asymptotic Gaussian classifier for signals with low signal-to-noise ratios, channel distortion, or certain pole locations.
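For contrast with the Poisson approach, a sketch of the asymptotic Gaussian (Whittle-type) spectral classifier is given below (Python); the candidate class spectra are simple AR(1) models with assumed poles, chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
f = np.arange(N // 2) / N

def ar1_psd(a, s2=1.0):
    # PSD of x[t] = a*x[t-1] + w[t] with white-noise variance s2
    return s2 / np.abs(1 - a * np.exp(-2j * np.pi * f)) ** 2

psds = {0: ar1_psd(0.6), 1: ar1_psd(0.9)}       # candidate class spectra

# Simulate an AR(1) realization from class 1.
x = np.zeros(N)
for t in range(1, N):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

I = np.abs(np.fft.rfft(x)[: N // 2]) ** 2 / N   # periodogram
# Whittle log-likelihood (up to constants): -sum(log P + I / P)
scores = {c: -np.sum(np.log(P) + I / P) for c, P in psds.items()}
print(max(scores, key=scores.get))               # should pick class 1
```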
Poisson maximum likelihood spectral inference (Conference Presentation)
Show abstract
Spectral estimation is at the core of all spectrally based detection systems, whether they are infrared (IR) or Raman based technologies, and the standard method of spectral inference assumes a Gaussian model for the data. A less well known alternative spectral representation can be based on a nonhomogeneous Poisson process in the frequency domain, which leads to a new likelihood function that can be used for spectral inference. In particular, the very important problems of spectral estimation and spectral classification can be approached with this new likelihood function. If an exponential model is assumed, then the parameter estimation for the spectral estimation problem reduces to a simple convex optimization. For the classification problem with known spectra, simulations show that classification based on the Poisson likelihood function outperforms the Gaussian classifier in terms of robustness. Finally, a perfect analogy between the Poisson likelihood measure and the Kullback-Leibler measure for probability density functions is established and discussed.
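The convex optimization noted above can be made concrete. The sketch below fits a log-linear ("exponential") intensity model under a Poisson-style likelihood, treating periodogram ordinates as if they were Poisson counts with mean lam(f_k); this is a simplifying assumption for illustration and not necessarily the paper's exact construction (the cosine basis and frequency grid are likewise hypothetical):

```python
# Sketch: Poisson-likelihood spectral estimation with a log-linear
# ("exponential") PSD model, lam(f) = exp(sum_m theta_m * c_m(f)).
# The negative log-likelihood sum_k [lam_k - I_k * log(lam_k)] is
# convex in theta, as the abstract notes.
import numpy as np
from scipy.optimize import minimize

def fit_exponential_psd(I, n_basis=8):
    """Fit theta by minimizing sum_k [lam_k - I_k * log(lam_k)]."""
    K = len(I)
    f = np.linspace(0.0, 0.5, K)                             # frequency grid
    C = np.cos(2 * np.pi * np.outer(f, np.arange(n_basis)))  # cosine basis

    def nll(theta):
        eta = C @ theta                                      # log-intensity
        return np.sum(np.exp(eta) - I * eta)

    def grad(theta):
        return C.T @ (np.exp(C @ theta) - I)

    res = minimize(nll, np.zeros(n_basis), jac=grad, method="L-BFGS-B")
    return np.exp(C @ res.x), res.x                          # fitted PSD, params
```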
Separation of small targets in multi-wavelength mixtures based on statistical independence
Show abstract
Small target detection is a problem common to a diverse number of fields such as radar, remote sensing, and infrared imaging. In this paper, we consider the application of feature extraction for detection of small hazardous materials in multi-wavelength imaging. Since various materials may exist in the area of study, each with varying degrees of reflectivity and absorption at different wavelengths of light, flexible, data-driven methods are needed for feature extraction of the relevant sources. We propose the use of independent component analysis (ICA), a widely used blind source separation method based on the statistical independence of the underlying sources. We compare three prominent variants of ICA on simulated data in a variety of environments. We then apply ICA to two multi-wavelength imaging datasets, with results suggesting that the extracted features are useful.
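A minimal sketch of the ICA step on synthetic multi-wavelength mixtures, using scikit-learn's FastICA (one variant of the kind the paper compares); the mixing model and dimensions below are invented for illustration:

```python
# Sketch: blind separation of multi-wavelength mixtures with FastICA.
# All data here are synthetic and hypothetical.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_pixels, n_bands, n_sources = 10_000, 6, 3

# Hypothetical statistically independent source maps (non-Gaussian).
S = rng.laplace(size=(n_pixels, n_sources))
# Hypothetical per-wavelength mixing (reflectivity/absorption weights).
A = rng.uniform(0.1, 1.0, size=(n_bands, n_sources))
X = S @ A.T                                   # observed band images

ica = FastICA(n_components=n_sources, random_state=0)
S_hat = ica.fit_transform(X)                  # recovered source maps
```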
Error statistics of bias-naïve filtering in the presence of bias
Show abstract
In the field of sensing, a typically unavoidable nuisance is the inherent bias of a sensor due to imperfections in timing, calibration, and other sources. The errors incurred by the bias ripple through higher-level processes such as tracking and sensor fusion, affecting each operation differently. In many applications, such as track-to-track correlation, the overall effect of the biases on state estimation is modeled as a constant, translational shift in the position dimension of the track states. This assumption can be appropriate when the required precision of the track states is not stringent. However, in general, sensor bias can affect not only position estimates but also positional derivatives, i.e., velocity and acceleration, in a manner that can change dramatically depending on sensor-target geometry; for situations where high state estimation accuracy is required, these consequences become apparent and need to be handled. The contribution from measurement bias to state estimation error depends on many different aspects, e.g., measurement uncertainty, dynamic model uncertainty, and sensor-target geometry. The focus of this work is the quantification of the relative significance of measurement error and measurement bias in the resultant state estimation error. In short, using the results in this work, it is straightforward to: (i) determine regimes where measurement bias becomes a predominant factor, (ii) bound the impact of the sensor bias on the output tracking information, and (iii) analyze the dependence of the tracking error on sensor-target geometry, all of which can be of great value when designing a tracking system architecture.
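The ripple effect described above is easy to reproduce. The sketch below runs a bias-naive constant-velocity Kalman filter against measurements corrupted by a constant bias (all model parameters are illustrative); with static geometry the position estimate inherits the bias, and time-varying sensor-target geometry would contaminate the derivatives as well:

```python
# Sketch: a bias-naive constant-velocity Kalman filter fed position
# measurements with a constant additive bias. Parameters are illustrative.
import numpy as np

dt, q, r, bias = 1.0, 0.01, 1.0, 0.5
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])

rng = np.random.default_rng(2)
x = np.zeros(2)                          # true [position, velocity]
xh, P = np.zeros(2), np.eye(2)           # bias-naive filter state
errs = []
for _ in range(200):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x + bias + rng.normal(0.0, np.sqrt(r))   # biased measurement
    xh, P = F @ xh, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + r                              # innovation variance
    K = P @ H.T / S                                  # Kalman gain (2x1)
    xh = xh + (K * (z - H @ xh)).ravel()             # update state
    P = (np.eye(2) - K @ H) @ P                      # update covariance
    errs.append(xh - x)
print("mean error [pos, vel]:", np.mean(errs, axis=0))
```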
An analytic solution to ellipsoid intersections for multistatic radar
Samuel A. Shapero
Show abstract
Unlike monostatic radars that directly measure the range to a target, multistatic radars measure the total path length from a transmitter, to the target, and then to the receiver. In the absence of angle information, the region of uncertainty described by such a measurement is the surface of an ellipsoid. In order to precisely locate the target, at least three such measurements are needed. In this paper, we use geometric methods to derive a general algorithmic solution to the intersection of three ellipsoids with a common focus. Applying the solution to noisy measurements via the cubature rule provides a solution that approaches the Cramér-Rao Lower Bound, which we demonstrate via Monte Carlo analysis. For conditions of low noise with non-degenerate geometries we also provide a consistent covariance estimate.
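As a numerical counterpart to the analytic solution described above (not the paper's method), the target position can also be recovered from three bistatic range sums by nonlinear least squares; the geometry below is invented for illustration:

```python
# Sketch: locate a target from three bistatic range sums (ellipsoids with
# a common focus at the transmitter) by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

tx = np.array([0.0, 0.0, 0.0])                              # common transmitter
rx = np.array([[10.0, 0, 0], [0, 10.0, 0], [0, 0, 10.0]])   # three receivers
target = np.array([3.0, 4.0, 5.0])

# Bistatic range sums: |target - tx| + |target - rx_i|.
rho = np.linalg.norm(target - tx) + np.linalg.norm(target - rx, axis=1)

def residuals(p):
    return np.linalg.norm(p - tx) + np.linalg.norm(p - rx, axis=1) - rho

sol = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0]))
print(sol.x)   # ~ [3, 4, 5]
```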
Multilevel probabilistic target identification methodology utilizing multiple heterogeneous sensors providing various levels of target characteristics
Show abstract
In modern systems, there are often many sensors which contribute to the identification of targets at various levels of identity amplification. Some sensors provide type- or mode-level identification while others provide unique fingerprints of the target of interest. This paper investigates the combination of IDs from heterogeneous sensors in a probabilistic fashion to produce a fused multi-level identification. The identification of targets is especially difficult when sensors do not provide confidence metrics. When multiple sensors report differing identifications for the same target, fusing the results into a stable set of IDs is complicated. Often, sensor integration systems are forced to toggle between candidate IDs that may not capture the breadth of the underlying sensor-provided data. This paper describes a methodology for calculating a probabilistic ID based on the evaluation of the provided identification data, one that yields intuitive results when faced with conflicting data. Conditions for choosing which calculation method to use are discussed based on the characteristics of each method.
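One simple way to obtain the kind of stable, probabilistic fused ID described above is a naive-Bayes combination of categorical reports through per-sensor confusion matrices. The sketch below is a generic illustration of that idea, not the paper's methodology; the ID set and matrices are hypothetical:

```python
# Sketch: naive-Bayes fusion of conflicting categorical ID reports.
# Confusion matrices and the ID set are hypothetical.
import numpy as np

ids = ["fighter", "bomber", "airliner"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])

# confusion[i, j] = P(sensor reports id j | true id i), one per sensor.
sensor_a = np.array([[0.80, 0.15, 0.05],
                     [0.20, 0.70, 0.10],
                     [0.05, 0.05, 0.90]])
sensor_b = np.array([[0.60, 0.30, 0.10],
                     [0.30, 0.60, 0.10],
                     [0.10, 0.10, 0.80]])

def fuse(prior, reports):
    """Multiply per-sensor likelihood columns into the prior, normalize."""
    post = prior.copy()
    for confusion, reported in reports:
        post *= confusion[:, reported]       # likelihood of that report
    return post / post.sum()

# Sensor A says "fighter" (0), sensor B says "bomber" (1): conflicting IDs.
post = fuse(prior, [(sensor_a, 0), (sensor_b, 1)])
print(dict(zip(ids, post.round(3))))
```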
Poster Session
Attitude control system for a balloon based telescope
Show abstract
The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an interferometer with an 8-meter baseline that operates on a high-altitude balloon. BETTII had its first successful engineering flight in June 2017.
In this paper we discuss the design of the control system for BETTII, which includes the coarse pointing loop and the estimation algorithm (an Extended Kalman Filter) implemented in an FPGA. We also discuss the different system modes defined in the control loop, which are used in different phases of the flight and are activated in order to acquire a target star on the science detector. The pointing loop uses different sensors and actuators in each phase to keep pointing at the desired target. The main sensors are gyroscopes and star cameras, along with auxiliary sensors such as high-altitude GPS and magnetometers. Azimuth control is achieved with Compensated Controlled Moment Gyros (CCMG) and a momentum dump motor. For elevation control, high-precision motors are used to change the elevation of the siderostat mirrors. The combination of these instruments keeps the baseline oriented within a few arcseconds of the target star.
In this paper, we will also present the software architecture relevant to the control system, including a description of the two flight computers on the payload and the different control loops executed on them. We will also explain the importance of synchronization between all the sensors and actuators, which have to be referenced to a single master clock in order to obtain science data.
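To make the gyro-plus-star-camera estimation idea concrete, here is a one-axis toy version: attitude is propagated with the gyro rate, and drift is corrected with absolute star-camera fixes via a scalar Kalman filter. This is a hand-rolled illustration, not BETTII's flight code, and all noise levels are invented:

```python
# Sketch: one-axis gyro propagation with periodic star-camera corrections
# via a scalar Kalman filter. Noise levels are illustrative, not BETTII's.
import numpy as np

dt, q_gyro, r_cam = 0.01, 1e-8, 1e-8     # step, gyro and camera variances
rng = np.random.default_rng(3)

theta_true, theta_hat, P = 0.0, 0.0, 1e-6
for k in range(10_000):
    rate = 1e-3                           # true slew rate (rad/s)
    theta_true += rate * dt
    gyro = rate + rng.normal(0.0, np.sqrt(q_gyro / dt))  # noisy rate gyro
    theta_hat += gyro * dt                # propagate attitude with the gyro
    P += q_gyro * dt                      # accumulated random-walk variance
    if k % 100 == 0:                      # star-camera absolute fix at 1 Hz
        z = theta_true + rng.normal(0.0, np.sqrt(r_cam))
        K = P / (P + r_cam)               # scalar Kalman gain
        theta_hat += K * (z - theta_hat)
        P *= 1.0 - K
print(f"final pointing error: {abs(theta_hat - theta_true):.2e} rad")
```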
Blind modulation detection via staged GLRT
Show abstract
We present a two-stage procedure, based on the generalized likelihood ratio test (GLRT), for the classification of modulation schemes with unknown signal parameters such as frequency, amplitude, phase, and symbol sequence. Extensive simulation studies presented in this paper demonstrate the efficacy of the developed scheme under limited observation for various PSK and FSK signals, including those with nested symbol constellations.
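A minimal sketch of the GLRT idea for PSK order with an unknown complex gain: for each candidate constellation, the likelihood is maximized over the nuisance parameters (gain and symbol sequence), and the generalized likelihoods are compared. This single-stage toy does not attempt the nested-constellation handling that motivates the paper's staged procedure:

```python
# Sketch: GLRT-flavored PSK order classification with unknown complex
# gain. Illustrative only; nested constellations (e.g., BPSK inside QPSK)
# need the staged testing the paper develops, which this sketch omits.
import numpy as np

def psk_glrt_score(y, M, n_iter=5):
    """Generalized log-likelihood (up to constants) of y under M-ary PSK."""
    const = np.exp(2j * np.pi * np.arange(M) / M)   # unit-circle points
    g = np.mean(np.abs(y))                          # initial gain guess
    for _ in range(n_iter):
        # ML symbol decisions given the gain (nearest constellation point),
        s = const[np.argmin(np.abs(y[:, None] - g * const), axis=1)]
        # then least-squares re-estimate of the complex gain.
        g = np.vdot(s, y) / np.vdot(s, s)
    return -np.sum(np.abs(y - g * s) ** 2)

rng = np.random.default_rng(4)
sym = np.exp(2j * np.pi * rng.integers(0, 4, size=500) / 4)   # QPSK truth
y = 1.3 * np.exp(0.7j) * sym + 0.1 * (rng.standard_normal(500)
                                      + 1j * rng.standard_normal(500))
print(max((2, 4), key=lambda M: psk_glrt_score(y, M)))        # expected: 4
```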
CRLB for estimation of 3D sensor biases in spherical coordinates
Show abstract
In order to carry out data fusion, it is crucial to account for the imprecision of sensor measurements due to systematic errors. This requires estimation of the sensor measurement biases. In this paper, we consider a 3D multisensor multitarget bias estimation approach for both additive and multiplicative biases in the measurements. Multiplicative biases can more accurately represent real biases in many sensors; however, they increase the complexity of the estimation problem. By converting biased measurements into pseudo-measurements of the biases, it is possible to estimate biases separately from target state estimation. The conversion of the spherical measurements to Cartesian measurements, which has to be done using the unbiased conversion, is the key that allows estimation of the sensor biases without having to estimate the states of the targets of opportunity. The measurements provided by these sensors are assumed time-coincident (synchronous) and perfectly associated. We evaluate the Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimates, which serves as a quantification of the available information about the biases.
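The unbiased conversion mentioned above has a standard multiplicative form: for independent zero-mean Gaussian angle noises, E[cos(a + n)] = cos(a) exp(-sigma_a^2/2), so the classical spherical-to-Cartesian conversion is debiased by the inverse attenuation factors. A sketch of that step (notation here is illustrative, not the paper's):

```python
# Sketch: multiplicative unbiased conversion of a spherical measurement
# (range r, azimuth az, elevation el) to Cartesian, given the angle-noise
# standard deviations. Uses E[cos(a + n)] = cos(a) * exp(-sigma^2 / 2).
import numpy as np

def unbiased_spherical_to_cartesian(r, az, el, sig_az, sig_el):
    """Debias the classical conversion by the inverse attenuation factors."""
    la = np.exp(-sig_az**2 / 2)      # azimuth attenuation factor
    le = np.exp(-sig_el**2 / 2)      # elevation attenuation factor
    x = r * np.cos(az) * np.cos(el) / (la * le)
    y = r * np.sin(az) * np.cos(el) / (la * le)
    z = r * np.sin(el) / le
    return np.array([x, y, z])
```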