
Proceedings Paper

Techniques for evaluating classifiers in application

Paper Abstract

In gauging the generalization capability of a classifier, a good evaluation technique should adhere to certain principles. First, the technique should evaluate the selected classifier itself, not simply its architecture. Second, a solution should be assessable at design time and, further, throughout its application. Third, the technique should be insensitive to data presentation and should cover a significant portion of the classifier's domain. Such principles call for methods beyond supervised learning and statistical training techniques such as cross-validation. In this paper, we discuss the evaluation of generalization in application. As an illustration, we present a method for the multilayer perceptron (MLP) that draws on the unlabeled data collected during the operational use of a given classifier. These conclusions support self-supervised learning and computational methods that isolate unstable, nonrepresentational regions in the classifier.
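
The abstract's operational-evaluation idea can be made concrete with a small sketch. The following is not the authors' algorithm; it is an illustrative stand-in, assuming a trained one-hidden-layer MLP and a batch of unlabeled operational inputs, that flags inputs whose predicted class flips under small random input perturbations. The names mlp_forward and unstable_mask and the perturbation radius eps are hypothetical.

    import numpy as np

    def mlp_forward(x, W1, b1, W2, b2):
        """Forward pass of a one-hidden-layer MLP; returns class scores (logits)."""
        h = np.tanh(x @ W1 + b1)   # hidden-layer activations
        return h @ W2 + b2         # output-layer scores

    def unstable_mask(X, params, eps=0.05, n_probes=20, seed=0):
        """Flag unlabeled inputs whose predicted label flips under eps-perturbations."""
        rng = np.random.default_rng(seed)
        base = mlp_forward(X, *params).argmax(axis=1)   # labels on the raw inputs
        flipped = np.zeros(len(X), dtype=bool)
        for _ in range(n_probes):
            noise = rng.uniform(-eps, eps, size=X.shape)
            flipped |= mlp_forward(X + noise, *params).argmax(axis=1) != base
        return flipped

    # Example: random weights stand in for a trained classifier.
    rng = np.random.default_rng(0)
    params = (rng.normal(size=(4, 8)), np.zeros(8),
              rng.normal(size=(8, 3)), np.zeros(3))
    X_ops = rng.normal(size=(1000, 4))          # unlabeled operational data
    print(unstable_mask(X_ops, params).mean())  # fraction flagged as unstable

Inputs flagged True lie near decision boundaries or in sparsely trained regions; tracking the flagged fraction over the operational stream gives a label-free, running indication of how stably the classifier covers its domain.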

Paper Details

Date Published: 12 April 2004
PDF: 8 pages
Proc. SPIE 5421, Intelligent Computing: Theory and Applications II, (12 April 2004); doi: 10.1117/12.542596
Author Affiliations:
Amy L. Magnus, Defense Threat Reduction Agency (United States)
Mark E. Oxley, Air Force Institute of Technology (United States)


Published in SPIE Proceedings Vol. 5421:
Intelligent Computing: Theory and Applications II
Kevin L. Priddy, Editor

© SPIE.