
Proceedings Paper

Measuring the generalization capabilities of arbitrary classifiers

Paper Abstract

Given a classifier trained on two-class data, one wishes to determine how well it will perform on new, unseen data. Typically, one estimates a distribution from the data, generates new data from that distribution, and tests the classifier on it; hold-out methods, including cross-validation, are also used. We propose a new method, based on computational geometry techniques, that produces a partial ordering on subsets of feature space and measures how well the classifier will perform on these subsets. Certain conditions on the classifier must be satisfied for this measure to exist. We detail these conditions, present results concerning this special collection of classifiers, and derive the measure that quantifies the generalization capability of classifiers in this collection.
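The hold-out baseline the abstract contrasts with can be sketched as follows. This is a generic k-fold cross-validation estimate of generalization accuracy, not the paper's computational-geometry measure; the data, the threshold classifier, and all function names (`train`, `accuracy`, `cross_validate`) are illustrative assumptions, not taken from the paper.

```python
# Sketch of the hold-out / cross-validation baseline for estimating
# generalization accuracy. Uses synthetic 1-D two-class data and a
# midpoint-threshold classifier; all details are illustrative.
import random

random.seed(0)

# Synthetic two-class data: class 0 centered at 0.0, class 1 at 1.0.
data = [(random.gauss(0.0, 0.5), 0) for _ in range(50)] + \
       [(random.gauss(1.0, 0.5), 1) for _ in range(50)]
random.shuffle(data)

def train(samples):
    """Fit a threshold halfway between the two class means."""
    c0 = [x for x, y in samples if y == 0]
    c1 = [x for x, y in samples if y == 1]
    return (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2.0

def accuracy(threshold, samples):
    """Fraction of samples classified correctly by the threshold rule."""
    correct = sum(1 for x, y in samples if (x > threshold) == (y == 1))
    return correct / len(samples)

def cross_validate(samples, k=5):
    """Average held-out accuracy over k equal folds."""
    fold = len(samples) // k
    scores = []
    for i in range(k):
        held_out = samples[i * fold:(i + 1) * fold]
        train_set = samples[:i * fold] + samples[(i + 1) * fold:]
        scores.append(accuracy(train(train_set), held_out))
    return sum(scores) / k

print(cross_validate(data, k=5))
```

The cross-validated score is an aggregate over randomly chosen folds; by contrast, the abstract's proposal scores the classifier on geometrically ordered subsets of feature space rather than on random partitions of the sample.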

Paper Details

Date Published: 4 August 2003
PDF: 10 pages
Proc. SPIE 5103, Intelligent Computing: Theory and Applications, (4 August 2003); doi: 10.1117/12.487484
Author Affiliations:
Mark E. Oxley, Air Force Institute of Technology (United States)
Amy L. Magnus, Defense Threat Reduction Agency (United States)

Published in SPIE Proceedings Vol. 5103:
Intelligent Computing: Theory and Applications
Kevin L. Priddy; Peter J. Angeline, Editors

© SPIE.