
Proceedings Paper

Equivalency of Bayesian multisensor information fusion and neural networks
Author(s): Parham Aarabi

Paper Abstract

This paper proposes a Bayesian multisensor object localization approach that keeps track of the observability of the sensors in order to maximize the accuracy of the final decision. This is accomplished by adaptively monitoring the mean-square error of the localization system's results. Knowledge of this error and of the distribution of the system's object localization estimates allows the result of each sensor to be scaled and combined in an optimal Bayesian sense. It is shown that, under conditions of normality, the Bayesian sensor fusion approach is directly equivalent to a single-layer neural network with a sigmoidal non-linearity. Furthermore, spatial and temporal feedback in the neural network can be used to compensate for practical difficulties such as the spatial dependencies of adjacent positions. Experimental results using 10 binary microphone arrays yield an order-of-magnitude improvement in localization error for the proposed approach compared with previous techniques.
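
The equivalence described in the abstract can be illustrated with a short sketch. The snippet below is not the paper's implementation; it is a minimal reconstruction assuming conditionally independent binary sensors with known detection and false-alarm rates (names such as p_detect, p_false_alarm, and bayesian_fusion_as_network are introduced here for illustration only). Under those assumptions, the Bayesian posterior that an object occupies a given position reduces to a weighted sum of the sensor readings passed through a sigmoid, i.e. a single-layer neural network with a sigmoidal non-linearity.

```python
# Illustrative sketch only -- NOT the paper's code. It shows how Bayesian
# fusion of independent binary sensor decisions about "object present at
# position x" collapses to a single-layer network followed by a sigmoid.
import numpy as np


def sigmoid(z):
    """Logistic non-linearity, 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))


def bayesian_fusion_as_network(observations, p_detect, p_false_alarm, prior=0.5):
    """
    Fuse independent binary sensor observations into a posterior probability
    that an object occupies a given position.

    observations  : array of 0/1 readings, one per sensor
    p_detect      : per-sensor P(reading = 1 | object present)  -- reflects observability
    p_false_alarm : per-sensor P(reading = 1 | object absent)
    prior         : prior probability that the object is at this position

    With conditionally independent sensors, the posterior log-odds is a
    weighted sum of the readings plus a bias: exactly sigmoid(w . x + b).
    """
    observations = np.asarray(observations, dtype=float)
    p_detect = np.asarray(p_detect, dtype=float)
    p_false_alarm = np.asarray(p_false_alarm, dtype=float)

    # Per-sensor weight: log-likelihood-ratio difference between a "1" and a "0" reading.
    w = np.log(p_detect / p_false_alarm) - np.log((1 - p_detect) / (1 - p_false_alarm))

    # Bias: prior log-odds plus the contribution of an all-zero observation vector.
    b = np.log(prior / (1 - prior)) + np.sum(np.log((1 - p_detect) / (1 - p_false_alarm)))

    # Single-layer network: sigmoid(w . x + b) equals the Bayesian posterior.
    return sigmoid(np.dot(w, observations) + b)


if __name__ == "__main__":
    # Ten hypothetical binary sensors with differing reliability.
    readings = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    p_d = [0.9, 0.8, 0.7, 0.9, 0.6, 0.85, 0.9, 0.55, 0.8, 0.75]
    p_fa = [0.1, 0.2, 0.3, 0.1, 0.4, 0.15, 0.1, 0.45, 0.2, 0.25]
    print("Posterior P(object at position):",
          bayesian_fusion_as_network(readings, p_d, p_fa))
```

In this sketch, less reliable sensors automatically receive smaller weights, which loosely mirrors the observability-based scaling the abstract describes; the spatial and temporal feedback mentioned in the abstract is not modeled here.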

Paper Details

Date Published: 22 March 2001
PDF: 10 pages
Proc. SPIE 4385, Sensor Fusion: Architectures, Algorithms, and Applications V, (22 March 2001); doi: 10.1117/12.421126
Author Affiliations:
Parham Aarabi, Stanford Univ. (United States)


Published in SPIE Proceedings Vol. 4385:
Sensor Fusion: Architectures, Algorithms, and Applications V
Belur V. Dasarathy, Editor(s)

© SPIE.