
Proceedings Paper

Facial action units recognition by de-expression residue learning

Paper Abstract

Understanding human facial expressions is one of the key steps toward achieving natural human-computer interaction. A facial expression, however, is a combination of an expressive component, called facial behavior, and the neutral component of a person's face. The most commonly used taxonomy for describing facial behaviors is the Facial Action Coding System (FACS), which segments the visible effects of facial muscle activation into 30+ action units (AUs). We therefore introduce a method that recognizes AUs by extracting the expressive component through a de-expression learning procedure, called De-expression Residue Learning (DeRL). First, we train a conditional Generative Adversarial Network (cGAN) to filter out the expressive information and generate the corresponding neutral face image. We then use the generator's intermediate layers, which contain the action unit information, to recognize AUs. Our approach alleviates the problems of AU recognition based on the pixel-level difference, which is unreliable under image variations such as rotation, translation, and lighting changes, and on the feature-level difference, which is unstable because the expression information may vary with identity. In our experiments, we apply data augmentation to avoid overfitting and train a deep network to recognize AUs on the CK+ dataset. The results show that our method achieves more competitive performance than several other popular approaches.
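The pipeline the abstract describes, generating a neutral face with a cGAN and then recognizing AUs from the generator's intermediate features rather than from a fragile pixel-level difference, can be sketched at a very high level. The sketch below is a hypothetical NumPy illustration, not the authors' implementation: `generate_neutral` stands in for the trained cGAN generator, the ReLU bottleneck stands in for its intermediate layers, and all weights, layer sizes, and the 12-unit AU head are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 64)) * 0.1   # hypothetical encoder weights
W2 = rng.standard_normal((64, 32)) * 0.1   # hypothetical decoder weights

def generate_neutral(expressive):
    """Stand-in for the trained cGAN generator.

    In DeRL the generator maps an expressive face to the same person's
    neutral face; its intermediate activations retain the filtered-out
    expressive component. This toy version is a single linear
    encoder/decoder pair with a ReLU bottleneck.
    """
    hidden = np.maximum(W1 @ expressive.ravel(), 0.0)   # "intermediate layer"
    neutral = (W2 @ hidden).reshape(expressive.shape)
    return neutral, hidden

expressive = rng.random((8, 8))             # toy 8x8 "expressive face"
neutral, au_features = generate_neutral(expressive)

# A raw pixel-level residue is fragile under rotation, translation, and
# lighting changes; DeRL instead feeds the intermediate features to the
# AU classifier.
fragile_residue = expressive - neutral

# Hypothetical linear AU head: one sigmoid score per action unit.
W_au = rng.standard_normal((12, 32)) * 0.1
au_scores = 1.0 / (1.0 + np.exp(-(W_au @ au_features)))

print(au_features.shape)   # (32,)
print(au_scores.shape)     # (12,)
```

In the actual paper the generator and AU classifier are deep convolutional networks trained end to end; the point of the sketch is only the data flow from expressive input, through the de-expression generator's intermediate representation, to per-AU scores.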

Paper Details

Date Published: 18 November 2019
PDF: 7 pages
Proc. SPIE 11187, Optoelectronic Imaging and Multimedia Technology VI, 1118719 (18 November 2019); doi: 10.1117/12.2539053
Author Affiliations:
Jun He, Beijing Normal Univ. (China)
Xiaocui Yu, Beijing Normal Univ. (China)
Bo Sun, Beijing Normal Univ. (China)
Yongkang Xiao, Beijing Normal Univ. (China)

Published in SPIE Proceedings Vol. 11187:
Optoelectronic Imaging and Multimedia Technology VI
Qionghai Dai; Tsutomu Shimura; Zhenrong Zheng, Editor(s)

© SPIE. Terms of Use