
Proceedings Paper

Information-theoretic feature extraction and selection for robust classification
Author(s): Chandra Shekhar Dhir; Soo Young Lee

Paper Abstract

Classification performance of recognition tasks can be improved by selecting highly discriminative features from a low-dimensional linear representation of the data. High-dimensional multivariate data can be represented in lower dimensions by unsupervised feature extraction techniques, which attempt to remove redundancy in the data and/or resolve multivariate prediction problems. These extracted low-dimensional features of the raw data may not ensure good class discrimination; therefore, supervised feature selection methods motivated by information-theoretic approaches can improve recognition performance with fewer features. The proposed hybrid feature selection methods efficiently select features with higher class discrimination than feature-class mutual information (MI), the Fisher criterion, or unsupervised selection using variance, resulting in much improved recognition performance. The feature-class MI criterion and the hybrid feature selection methods are computationally scalable and are optimal selectors for statistically independent features.
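The feature-class MI criterion described in the abstract ranks each candidate feature by its mutual information with the class labels and keeps the top scorers. A minimal sketch of that idea, using a simple histogram (plug-in) MI estimator on toy data, is shown below; the function names and the estimator choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def feature_class_mi(x, y, bins=8):
    # Plug-in estimate of I(X; C): discretize the feature into histogram
    # bins, then sum p(x,c) * log(p(x,c) / (p(x) p(c))) over the joint cells.
    edges = np.histogram_bin_edges(x, bins=bins)
    xd = np.digitize(x, edges[1:-1])
    mi = 0.0
    for xv in np.unique(xd):
        px = np.mean(xd == xv)
        for c in np.unique(y):
            pc = np.mean(y == c)
            pxc = np.mean((xd == xv) & (y == c))
            if pxc > 0:
                mi += pxc * np.log(pxc / (px * pc))
    return mi

def select_by_mi(X, y, k, bins=8):
    # Score every column of X by feature-class MI and return the
    # indices of the k highest-scoring features.
    scores = np.array([feature_class_mi(X[:, j], y, bins) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# Toy demo: column 0 carries class information, column 1 is pure noise,
# so the MI criterion should rank feature 0 first.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
X = np.column_stack([y + 0.3 * rng.standard_normal(1000),
                     rng.standard_normal(1000)])
selected = select_by_mi(X, y, k=1)
print(selected)
```

For statistically independent features (e.g. ICA outputs, as in this proceedings volume), such per-feature scoring is what makes the criterion computationally scalable: each feature is evaluated in isolation rather than in combination.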

Paper Details

Date Published: 19 March 2009
PDF: 12 pages
Proc. SPIE 7343, Independent Component Analyses, Wavelets, Neural Networks, Biosystems, and Nanoengineering VII, 73430H (19 March 2009); doi: 10.1117/12.822569
Author Affiliations:
Chandra Shekhar Dhir, KAIST (Korea, Republic of)
Soo Young Lee, KAIST (Korea, Republic of)
RIKEN Brain Science Institute (Japan)

Published in SPIE Proceedings Vol. 7343:
Independent Component Analyses, Wavelets, Neural Networks, Biosystems, and Nanoengineering VII
Harold H. Szu; F. Jack Agee, Editor(s)

© SPIE. Terms of Use