
Proceedings Paper

Use of boosting to improve LVQ ATR classifiers
Author(s): Su-How Lim; Nasser M. Nasrabadi; Russell M. Mersereau

Paper Abstract

Boosting has emerged as a popular technique for combining and refining weak classifiers. Since it was pioneered by Freund and Schapire, numerous variations of the AdaBoost algorithm have appeared, such as Breiman's arc-fs algorithm. The central theme of these methods is the generation of an ensemble of classifiers from a weak learning algorithm by training on modified versions of the original training set, with emphasis placed on the more difficult instances. The validation stage then aggregates the results from each member of the ensemble using some predetermined rule. In this paper, the wavelet-decomposition-based codebook classifier proposed by Chan et al. is used as the weak learning algorithm. Starting with the whole training set, the training set is modified at each iteration by resampling the original training data with replacement. The weights used in the resampling are determined by different algorithms, including AdaBoost and arc-fs. The accuracy of the generated ensembles is then evaluated using various combination techniques, such as simple voting and a weighted sum.
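The resampling-based boosting loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: a 1-D decision stump stands in for the wavelet/LVQ codebook classifier of Chan et al., and the reweighting follows the standard AdaBoost update; the function and variable names are assumptions for illustration.

```python
import math
import random

def stump_train(data):
    """Weak learner stand-in: fit a 1-D threshold/polarity decision stump.
    data is a list of (x, y) pairs with labels y in {-1, +1}."""
    best = None
    for t in sorted(set(x for x, _ in data)):
        for pol in (+1, -1):
            err = sum(1 for x, y in data if (pol if x >= t else -pol) != y)
            if best is None or err < best[0]:
                best = (err, t, pol)
    _, t, pol = best
    return lambda x, t=t, pol=pol: pol if x >= t else -pol

def adaboost_resample(data, rounds, rng):
    """AdaBoost-style boosting via resampling with replacement:
    the weak learner is trained on a weighted bootstrap sample,
    and instance weights are shifted toward misclassified points."""
    n = len(data)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Resample the training set with replacement per current weights.
        sample = rng.choices(data, weights=w, k=n)
        h = stump_train(sample)
        # Weighted error is measured on the original training set.
        err = sum(wi for wi, (x, y) in zip(w, data) if h(x) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard log/div-by-zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Reweight: emphasize the more difficult (misclassified) instances.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, data)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Aggregate ensemble members by weighted sum (sign of the vote)."""
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

Simple (unweighted) voting, also mentioned in the abstract, corresponds to dropping the alpha factors in `predict`.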

Paper Details

Date Published: 5 April 2002
PDF: 9 pages
Proc. SPIE 4668, Applications of Artificial Neural Networks in Image Processing VII, (5 April 2002); doi: 10.1117/12.461671
Author Affiliations:
Su-How Lim, Army Research Lab. (United States)
Nasser M. Nasrabadi, Army Research Lab. (United States)
Russell M. Mersereau, Army Research Lab. (United States)

Published in SPIE Proceedings Vol. 4668:
Applications of Artificial Neural Networks in Image Processing VII
Nasser M. Nasrabadi; Aggelos K. Katsaggelos, Editor(s)

© SPIE.