Proceedings Paper

Comparing classifiers that exploit random subspaces
Author(s): Jamie Gantert; David Gray; Don Hulsey; Donald Waagen

Paper Abstract

Many current classification models, such as Random Kitchen Sinks and Extreme Learning Machines (ELM), minimize the need for expert-defined features by transforming the measurement space into a set of "features" via random functions or projections. Alternatively, Random Forests exploit random subspaces by limiting tree partitions (i.e., the nodes of the tree) to be selected from randomly generated subsets of features. For a synthetic aperture radar (SAR) classification task, and given two orthonormal measurement representations (spatial and multi-scale Haar wavelet), this work compares and contrasts ELM and Random Forest classifier performance as a function of (a) input measurement representation, (b) classifier complexity, and (c) measurement domain mismatch. For the ELM classifier, we also compare two random projection encodings.
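To make the random-feature idea concrete: an ELM-style classifier draws a random projection and bias once, applies a fixed nonlinearity to form the "features," and then fits only a linear readout by least squares. The sketch below is a generic illustration of that technique using NumPy, not the specific encodings or experimental setup evaluated in this paper; the function names and the choice of `tanh` nonlinearity are illustrative assumptions.

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=200, seed=0):
    """Fit a minimal ELM-style classifier: random hidden layer, linear readout."""
    rng = np.random.default_rng(seed)
    # Random input weights and biases are drawn once and never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # random "features" of the measurement space
    # Only the linear readout beta is fit, via ordinary least squares.
    beta, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Project through the same fixed random layer, then take the argmax class."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

The Random Forest side of the comparison needs no custom code: restricting each tree split to a random subset of features is the standard random-subspace behavior exposed, for example, by scikit-learn's `RandomForestClassifier(max_features=...)` parameter.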

Paper Details

Date Published: 14 May 2019
PDF: 19 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880G (14 May 2019); doi: 10.1117/12.2520184
Author Affiliations:
Jamie Gantert, Air Force Research Lab. (United States)
David Gray, Air Force Research Lab. (United States)
Don Hulsey, Dynetics, Inc. (United States)
Donald Waagen, Air Force Research Lab. (United States)

Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editor(s)

© SPIE