
Proceedings Paper

Sensitivity of fusion performance to classifier model variations
Author(s): Kai F. Goebel

Paper Abstract

During the design of classifier fusion tools, it is important to evaluate the performance of the fuser. In many cases, the output of the classifiers needs to be simulated to provide the range of fusion input that allows an evaluation throughout the design space. One fundamental question is how the output should be distributed, in particular for multi-class continuous-output classifiers. Using the wrong distribution may lead to fusion tools that are either overly optimistic or that otherwise distort the outcome. Either case may lead to a fuser that performs sub-optimally in practice. It is therefore imperative to establish the bounds of different classifier output distributions. In addition, one must take into account the design space, which may be of considerable complexity. Exhaustively simulating the entire design space may be a lengthy undertaking. Therefore, the simulation has to be guided to populate the relevant areas of the design space. Finally, it is crucial to quantify the performance throughout the design of the fuser. This paper addresses these issues by introducing a simulator that allows the evaluation of different classifier distributions in combination with a design-of-experiments setup and a built-in performance evaluation. We show results from an application of diagnostic decision fusion on aircraft engines.
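The paper itself does not specify the simulated output distribution or the fusion rule; the following is a minimal sketch of the idea the abstract describes, assuming a Dirichlet distribution for multi-class continuous classifier outputs and simple averaging as the fuser. All function names and parameter choices (`alpha_true`, `alpha_other`, the class count) are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_outputs(rng, labels, n_classes, alpha_true, alpha_other):
    """Simulate multi-class continuous classifier outputs.

    Each output row is drawn from a Dirichlet distribution whose
    concentration is higher on the true class; larger `alpha_true`
    models a more confident (more accurate) classifier. The Dirichlet
    choice is one assumption about the output distribution -- the
    sensitivity question the paper raises is exactly how results
    change if this choice is wrong.
    """
    alphas = np.full((len(labels), n_classes), alpha_other, dtype=float)
    alphas[np.arange(len(labels)), labels] = alpha_true
    return np.stack([rng.dirichlet(a) for a in alphas])

def fuse_average(outputs_list):
    """A deliberately simple fuser: average the soft outputs."""
    return np.mean(outputs_list, axis=0)

def accuracy(fused, labels):
    """Built-in performance evaluation: fraction of correct argmax decisions."""
    return float(np.mean(np.argmax(fused, axis=1) == labels))

rng = np.random.default_rng(0)
n_samples, n_classes = 500, 3
labels = rng.integers(0, n_classes, size=n_samples)

# Two simulated classifiers with different confidence levels,
# standing in for one point in the fuser's design space.
c1 = simulate_outputs(rng, labels, n_classes, alpha_true=5.0, alpha_other=1.0)
c2 = simulate_outputs(rng, labels, n_classes, alpha_true=3.0, alpha_other=1.0)

fused = fuse_average([c1, c2])
acc = accuracy(fused, labels)
print(f"fused accuracy: {acc:.3f}")
```

Sweeping `alpha_true`/`alpha_other` (and swapping the Dirichlet for other distributions) over a designed grid of experiments would correspond to the guided exploration of the design space described above.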

Paper Details

Date Published: 1 April 2003
PDF: 8 pages
Proc. SPIE 5099, Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2003, (1 April 2003); doi: 10.1117/12.487284
Author Affiliations:
Kai F. Goebel, GE Global Research Ctr. (United States)


Published in SPIE Proceedings Vol. 5099:
Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2003
Belur V. Dasarathy, Editor(s)

© SPIE.