
Proceedings Paper

Visual attention based bag-of-words model for image classification
Author(s): Qiwei Wang; Shouhong Wan; Lihua Yue; Che Wang

Paper Abstract

Bag-of-words is a classical method for image classification. Its core problems are which visual words to select and how to count their frequencies. In this paper, we propose a visual-attention-based bag-of-words model (the VABOW model) for the image classification task. The VABOW model uses a visual-attention method to generate a saliency map, and treats the saliency map as a weighting matrix that guides the computation of visual-word frequencies. In addition, the VABOW model combines shape, color, and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
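The abstract does not give the exact weighting formula, so the following is only a minimal sketch of one plausible reading: each local descriptor contributes the saliency value at its image location, rather than a plain count of 1, to the histogram bin of its assigned visual word. The function name and array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_bow_histogram(word_ids, saliency, n_words):
    """Saliency-weighted bag-of-words histogram (illustrative sketch).

    word_ids : (N,) int array, visual-word index assigned to each local descriptor
    saliency : (N,) float array, saliency-map value at each descriptor's location
    n_words  : vocabulary size K
    """
    # Each descriptor contributes its saliency value instead of a unit count,
    # so descriptors in salient regions dominate the histogram.
    hist = np.bincount(np.asarray(word_ids),
                       weights=np.asarray(saliency, dtype=float),
                       minlength=n_words)
    # L1-normalize so images with different numbers of descriptors are comparable.
    total = hist.sum()
    return hist / total if total > 0 else hist
```

With a uniform saliency map this reduces to the ordinary (normalized) bag-of-words count, which is why it can be seen as a weighted generalization of the standard model.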

Paper Details

Date Published: 16 April 2014
PDF: 7 pages
Proc. SPIE 9159, Sixth International Conference on Digital Image Processing (ICDIP 2014), 91591P (16 April 2014); doi: 10.1117/12.2064432
Author Affiliations
Qiwei Wang, Univ. of Science and Technology of China (China)
Shouhong Wan, Univ. of Science and Technology of China (China)
Lihua Yue, Univ. of Science and Technology of China (China)
Che Wang, Air Force Engineering Univ. (China)


Published in SPIE Proceedings Vol. 9159:
Sixth International Conference on Digital Image Processing (ICDIP 2014)
Charles M. Falco; Chin-Chen Chang; Xudong Jiang, Editor(s)

© SPIE.