
Proceedings Paper

Audio-visual affective expression recognition
Author(s): Thomas S. Huang; Zhihong Zeng

Paper Abstract

Automatic affective expression recognition has attracted increasing attention from researchers across disciplines; it promises to contribute to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and to advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. To capture the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. In this paper we introduce our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. We present promising methods for integrating information from the audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
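To make the idea of audio-visual fusion concrete, the sketch below shows a generic decision-level (late) fusion of two unimodal emotion classifiers. This is only an illustration under assumed inputs, not the method presented in the paper; the emotion label set, the fusion weight, and the posterior probabilities are all hypothetical.

```python
# Illustrative sketch of decision-level (late) audio-visual fusion.
# NOT the authors' method: labels, weights, and probabilities are hypothetical.

import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # hypothetical label set

def late_fusion(p_audio: np.ndarray, p_visual: np.ndarray, w_audio: float = 0.5) -> str:
    """Combine per-class posteriors from audio and visual classifiers by a weighted sum."""
    fused = w_audio * p_audio + (1.0 - w_audio) * p_visual
    return EMOTIONS[int(np.argmax(fused))]

# Example: the audio classifier favors "angry", the visual classifier favors "happy";
# the fused decision arbitrates between them.
p_audio = np.array([0.20, 0.10, 0.60, 0.10])
p_visual = np.array([0.55, 0.15, 0.20, 0.10])
print(late_fusion(p_audio, p_visual, w_audio=0.4))
```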

Paper Details

Date Published: 15 November 2007
PDF: 8 pages
Proc. SPIE 6788, MIPPR 2007: Pattern Recognition and Computer Vision, 678802 (15 November 2007); doi: 10.1117/12.782299
Author Affiliations:
Thomas S. Huang, Univ. of Illinois at Urbana-Champaign (United States)
Zhihong Zeng, Univ. of Illinois at Urbana-Champaign (United States)


Published in SPIE Proceedings Vol. 6788:
MIPPR 2007: Pattern Recognition and Computer Vision

© SPIE