
Proceedings Paper

Image analysis for face modeling and facial image reconstruction
Author(s): Hiroshi Agawa; Gang Xu; Yoshio Nagashima; Fumio Kishino

Paper Abstract

We have studied a stereo-based approach to three-dimensional face modeling and to reconstructing facial images as virtually viewed from different angles. This paper describes the system, in particular the image analysis and facial shape feature extraction techniques, which use knowledge of the color and position of the face and its components together with image histogram and line segment analysis. With these techniques, the system extracts facial features precisely and automatically, independent of facial image size and face tilt. In our system, input images viewed from the front and side of the face are processed as follows: the input images are first transformed into a set of color pictures with significant features. Regions are segmented by thresholding or slicing after analyzing the histograms of the pictures. Using knowledge of the color and position of the face, the face and hair regions are obtained and the facial boundaries are extracted. Feature points along the resulting profile are extracted using the amplitude and sign of the curvature, together with knowledge of the distances between feature points. In the facial areas that contain facial components, regions are segmented again by the same techniques, using color information for each face component, and the component regions are recognized using knowledge of facial component positions. In each region, the pictures are filtered with differential operators selected according to the picture and region, and thinned images are obtained from the filtered images by image processing and line segment analysis. Feature points of the front and side views are then extracted. Finally, differences in size, position, and facial tilt between the two input images are compensated for by matching the feature points common to the two views. The three-dimensional data of the feature points and the boundaries of the face are thus acquired.
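The abstract mentions region segmentation "by thresholding or slicing after analyzing the histograms of the pictures" but does not state which thresholding rule is used. As a hedged sketch only, a standard histogram-based global threshold such as Otsu's method could drive a step of this kind; the function names and the 256-grey-level assumption below are mine, not the paper's:

```python
import numpy as np

def otsu_threshold(image):
    """Pick a global threshold from the grey-level histogram by
    maximizing the between-class variance (Otsu's method).
    Assumes 8-bit grey levels (0..255)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = prob[:t].sum()          # weight of the "dark" class
        w1 = 1.0 - w0                # weight of the "bright" class
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0   # dark-class mean
        mu1 = (levels[t:] * prob[t:]).sum() / w1   # bright-class mean
        var = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment(image, threshold):
    """Binary region mask: pixels at or above the threshold."""
    return image >= threshold
```

On a clearly bimodal picture (e.g., hair against skin in one color channel), the chosen threshold falls in the valley between the two histogram peaks, giving a mask that separates the two regions; the paper's actual system additionally uses knowledge of face/component color and position to label the resulting regions.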
Two base face models, representing a typical Japanese man and woman, are prepared, and the model of the same sex as the subject is modified linearly with the 3D data of the extracted feature points and boundaries. Images virtually viewed from different angles are then reconstructed by mapping facial texture onto the modified model.
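The abstract says the base model is modified with the extracted 3D data "in a linear manner" but does not specify the mapping. A minimal sketch under one plausible assumption: fit a per-axis linear map from the base model's feature-point coordinates to the measured ones, then apply it to every model vertex. All names here are hypothetical, not the paper's:

```python
import numpy as np

def fit_axis_maps(base_feats, measured_feats):
    """Least-squares linear map x -> a*x + b for each of the three axes,
    taking base-model feature coordinates to the measured 3D features.
    Both inputs are (N, 3) arrays of corresponding feature points."""
    coeffs = []
    for axis in range(3):
        a, b = np.polyfit(base_feats[:, axis], measured_feats[:, axis], 1)
        coeffs.append((a, b))
    return coeffs

def deform_model(vertices, coeffs):
    """Apply the fitted per-axis linear maps to all model vertices."""
    out = np.asarray(vertices, dtype=float).copy()
    for axis, (a, b) in enumerate(coeffs):
        out[:, axis] = a * out[:, axis] + b
    return out
```

A per-axis scale-and-offset is the simplest "linear" modification consistent with the abstract; the actual system may well use a richer deformation (e.g., region-wise interpolation between feature points), which this sketch does not attempt.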

Paper Details

Date Published: 1 September 1990
PDF: 14 pages
Proc. SPIE 1360, Visual Communications and Image Processing '90: Fifth in a Series, (1 September 1990); doi: 10.1117/12.24135
Hiroshi Agawa, ATR Communication Systems Research Labs. (Japan)
Gang Xu, ATR Communication Systems Research Labs. (Japan)
Yoshio Nagashima, ATR Communication Systems Research Labs. (Japan)
Fumio Kishino, ATR Communication Systems Research Labs. (Japan)

Published in SPIE Proceedings Vol. 1360:
Visual Communications and Image Processing '90: Fifth in a Series
Murat Kunt, Editor(s)
