
Proceedings Paper

Quantifying the expertise of classifiers using 4-value logic

Paper Abstract

An intelligent agent---defined as an autonomous, adaptive, cooperative computer program---must credibly represent its expertise in negotiations with peer agents. Given an agent-based classifier, the regions of the domain where the classifier is an expert must be explicitly stated. Likewise, where the classifier is confused should also be represented. Currently, an error measure provides an estimate of the relative size of the expertise and confusion sets, but error does not offer a distinct opinion on an untruthed feature vector's membership---i.e., whether its classification is based on specific information, conjecture, or chance. We propose the theory for estimating the complete membership of a classifier's expertise sets and confusion sets. From these sets, we construct a 4-value classifier that hypothesizes for each new feature vector whether its classification can be made confidently or not. Examples are given that demonstrate the utility of this theory using multilayer perceptrons.
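The abstract's idea---wrapping a classifier so that each feature vector receives one of four logic values rather than a bare label---can be illustrated with a minimal sketch. The paper's actual construction of the expertise and confusion sets is not reproduced here; this sketch assumes a hypothetical threshold-based proxy on a binary classifier's posterior score, with Belnap-style values TRUE, FALSE, BOTH (confusion), and NEITHER (no specific information). All names and thresholds below are illustrative assumptions, not the authors' method.

```python
def four_value_verdict(score, expert_hi=0.9, expert_lo=0.1, in_domain=True):
    """Map a classifier score in [0, 1] to a 4-value verdict (sketch).

    score     -- posterior probability of the positive class
    expert_hi -- scores at or above this proxy the expertise set for TRUE
    expert_lo -- scores at or below this proxy the expertise set for FALSE
    in_domain -- whether the feature vector resembles the training domain;
                 out-of-domain inputs yield NEITHER (conjecture/chance)
    """
    if not in_domain:
        return "NEITHER"   # no specific information supports either label
    if score >= expert_hi:
        return "TRUE"      # confident positive classification
    if score <= expert_lo:
        return "FALSE"     # confident negative classification
    return "BOTH"          # confusion set: conflicting evidence

# One call per verdict: confident, confused, and out-of-domain inputs
print(four_value_verdict(0.97))                  # TRUE
print(four_value_verdict(0.50))                  # BOTH
print(four_value_verdict(0.02))                  # FALSE
print(four_value_verdict(0.80, in_domain=False)) # NEITHER
```

A multilayer perceptron, as used in the paper's examples, would supply the score; the point of the 4-value output is that an agent can report confident classifications separately from confused or uninformed ones.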

Paper Details

Date Published: 11 March 2002
PDF: 12 pages
Proc. SPIE 4739, Applications and Science of Computational Intelligence V, (11 March 2002); doi: 10.1117/12.458705
Author Affiliations:
Amy L. Magnus, Air Force Research Lab. (United States)
Mark E. Oxley, Air Force Institute of Technology (United States)

Published in SPIE Proceedings Vol. 4739:
Applications and Science of Computational Intelligence V
Kevin L. Priddy; Paul E. Keller; Peter J. Angeline, Editor(s)

© SPIE.