
Proceedings Paper
Gaze estimation using a hybrid appearance and motion descriptor
Paper Abstract
Realizing a robust, low-cost gaze estimation system remains a challenging problem. Appearance-based and feature-based methods have both made impressive progress in recent years, yet their accuracy is still limited by feature representation. In this paper, we therefore propose a novel descriptor that combines eye appearance with pupil center-corneal reflection (PCCR) features. The hybrid gaze descriptor represents eye structure at both the feature level and the topology level. At the feature level, a glints-centered appearance descriptor captures the intensity and contour information of the eye, and a polynomial representation of the normalized PCCR vector captures the motion of the eyeball. At the topology level, partial least squares is applied for feature fusion and selection. Finally, sparse-representation-based regression maps the descriptor to the point-of-gaze (PoG). Experimental results show that the proposed method achieves high accuracy and good tolerance to head movement.
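The abstract outlines a pipeline: a normalized PCCR motion vector with polynomial expansion, concatenation with an appearance descriptor, PLS-based fusion, and a sparse regression to the PoG. The following is a minimal sketch of that pipeline, not the authors' code: the glint layout, polynomial order, and the use of scikit-learn's PLSRegression and Lasso as stand-ins for the paper's PLS fusion and sparse-representation-based regression are all assumptions for illustration.

```python
# Hypothetical sketch of the hybrid appearance + PCCR gaze pipeline.
# All names, shapes, and the Lasso stand-in for sparse-representation
# regression are assumptions, not the authors' implementation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Lasso


def normalized_pccr(pupil_center, glints):
    """Pupil center-corneal reflection vector, normalized by glint spacing."""
    glints = np.asarray(glints, dtype=float)         # (G, 2) glint positions
    center = glints.mean(axis=0)                     # reference corneal reflection
    scale = np.linalg.norm(glints[0] - glints[-1])   # normalizes out camera-eye distance
    return (np.asarray(pupil_center, dtype=float) - center) / scale


def polynomial_features(v, order=2):
    """Second-order polynomial expansion of the 2-D PCCR vector (assumed order)."""
    x, y = v
    feats = [1.0, x, y]
    if order >= 2:
        feats += [x * x, x * y, y * y]
    return np.array(feats)


def hybrid_descriptor(appearance_vec, pupil_center, glints):
    """Concatenate glints-centered appearance features with PCCR motion features."""
    motion = polynomial_features(normalized_pccr(pupil_center, glints))
    return np.concatenate([np.asarray(appearance_vec, dtype=float), motion])


# Toy training data: random stand-ins for eye-crop features and gaze labels.
rng = np.random.default_rng(0)
n_train, app_dim = 200, 128
X = np.stack([
    hybrid_descriptor(rng.normal(size=app_dim),
                      pupil_center=rng.normal(size=2),
                      glints=rng.normal(size=(4, 2)) + 5.0)
    for _ in range(n_train)
])
Y = rng.normal(size=(n_train, 2))                    # point-of-gaze (x, y) on screen

# Topology level: PLS projects the hybrid descriptor into a fused low-dim space.
pls = PLSRegression(n_components=10)
X_fused = pls.fit_transform(X, Y)[0]

# Regression: an L1-penalized linear map as a simple sparse stand-in
# for the paper's sparse-representation-based regression.
reg = Lasso(alpha=0.01, max_iter=10000).fit(X_fused, Y)

# Predict the PoG for a new hybrid descriptor.
x_new = hybrid_descriptor(rng.normal(size=app_dim),
                          pupil_center=rng.normal(size=2),
                          glints=rng.normal(size=(4, 2)) + 5.0)
print("estimated PoG:", reg.predict(pls.transform(x_new[None, :])))
```

In this sketch the PLS step plays the fusion/selection role described in the abstract (projecting the concatenated descriptor to components correlated with gaze), while the final regressor maps the fused features to screen coordinates; the paper's actual regression is dictionary-based rather than a Lasso linear model.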
Paper Details
Date Published: 4 March 2015
PDF: 11 pages
Proc. SPIE 9443, Sixth International Conference on Graphic and Image Processing (ICGIP 2014), 944320 (4 March 2015); doi: 10.1117/12.2178824
Published in SPIE Proceedings Vol. 9443:
Sixth International Conference on Graphic and Image Processing (ICGIP 2014)
Yulin Wang; Xudong Jiang; David Zhang, Editor(s)
Author Affiliations
Changping Liu, Institute of Automation (China)
© SPIE.
