
Journal of Electronic Imaging • Open Access

Facial expression recognition in the wild based on multimodal texture features
Author(s): Bo Sun; Liandong Li; Guoyan Zhou; Jun He

Paper Abstract

Facial expression recognition in the wild is a very challenging task. We describe our work on static and continuous facial expression recognition in the wild. We evaluate the recognition performance of gray and color deep features and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal–spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expressions from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT features and several deep convolutional neural network (CNN) features, including those from our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers on these features using the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network that combines all the extracted features at the decision level. Our final recognition rates are 56.32% on the SFEW test set and 50.67% on the AFEW validation set, substantially better than the baseline rates of 35.96% and 36.08%, respectively.
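To illustrate the decision-level fusion idea described above, here is a minimal sketch of weighted late fusion: each feature channel's classifier produces a vector of per-class scores, and the fused prediction is the argmax of their weighted sum. This is not the paper's learned fusion network; the feature channels, class scores, and weights below are all hypothetical, and the weights are fixed by hand rather than trained.

```python
import numpy as np

def fuse_decisions(score_list, weights):
    """Weighted decision-level fusion.

    score_list : list of per-classifier class-score vectors (same length)
    weights    : one scalar weight per classifier
    Returns the predicted class index and the fused score vector.
    """
    fused = np.zeros(len(score_list[0]), dtype=float)
    for scores, w in zip(score_list, weights):
        fused += w * np.asarray(scores, dtype=float)
    return int(np.argmax(fused)), fused

# Hypothetical scores from three feature channels (e.g., dense SIFT,
# gray deep features, color deep features) over 7 expression classes.
sift_scores  = [0.10, 0.50, 0.10, 0.10, 0.10, 0.05, 0.05]
gray_scores  = [0.20, 0.30, 0.30, 0.05, 0.05, 0.05, 0.05]
color_scores = [0.15, 0.40, 0.20, 0.05, 0.10, 0.05, 0.05]

pred, fused = fuse_decisions(
    [sift_scores, gray_scores, color_scores],
    weights=[0.4, 0.3, 0.3],
)
print(pred)  # index of the class with the highest fused score
```

In the paper the per-channel weights are learned by the fusion network rather than hand-set, but the combination-then-argmax structure of late fusion is the same.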

Paper Details

Date Published: 22 June 2016
PDF: 8 pages
J. Electron. Imag. 25(6), 061407 (2016). doi: 10.1117/1.JEI.25.6.061407
Published in: Journal of Electronic Imaging Volume 25, Issue 6
Author Affiliations:
Bo Sun, Beijing Normal Univ. (China)
Liandong Li, Beijing Normal Univ. (China)
Guoyan Zhou, Beijing Normal Univ. (China)
Jun He, Beijing Normal Univ. (China)

© SPIE.