Defense & Security
New algorithms generate improved templates for biometric recognition
Techniques to identify rotated faces and irises result in high overall detection rates and real-time processing.
3 April 2007, SPIE Newsroom. DOI: 10.1117/2.1200704.0549
Face detection and eye gaze estimation are crucial components of applications such as virtual reality, video conferencing, video surveillance, face recognition and face database management.1-4
New applications involving human-machine interfaces are under development where eye gaze could be employed to control machines assisting handicapped individuals, or in technologically complex environments such as hospital operating rooms, airplane cockpits, or industrial control units.1,5-7
Several limitations have been identified in existing face and iris detection methods: some are invasive; some focus on only one eye, do not follow head movements, or only follow very small head movements; some fail to detect eyes while blinking; and some require manual initialization. Additionally, some impose restrictions such as symmetry, which limits the background content. Other methods depend on face skin tone, which varies among different ethnic groups and is dependent upon nearby light sources. These limitations contribute to two particular problems that apply to most methods of face and eye detection: they cannot work in real time, and they are not robust for face rotations.1,8-12
To allow real-time processing, our first approach was to use anthropometric templates for face and eye detection. This set of templates, which consists of different face sizes, includes key facial features such as eyebrows, nose, mouth, and the lower part of the chin, thus mainly limiting the search to these specific face and eye characteristics. For eye detection, the templates consider an eye region within an elliptical face contour.
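As an illustration of this idea, a template set can be represented as point sets placed at fixed proportions of the face height, one set per face size. The proportions and face sizes below are rough textbook values chosen for the sketch, not the calibrated anthropometric values used in the study:

```python
import numpy as np

def anthropometric_template(face_height):
    """Build a toy anthropometric face template for one face size.

    Feature rows sit at fixed fractions of face height; the fractions
    and the width/height ratio are illustrative assumptions, not the
    authors' calibrated values. Returns integer (row, col) offsets
    relative to the face center.
    """
    h = face_height
    w = 0.75 * h                      # assumed face width/height ratio
    rows = {"eyebrows": -0.25 * h, "eyes": -0.15 * h,
            "nose": 0.10 * h, "mouth": 0.30 * h, "chin": 0.50 * h}
    pts = []
    for name, r in rows.items():
        # sample a short horizontal stroke of points per facial feature
        for c in np.linspace(-0.25 * w, 0.25 * w, 5):
            pts.append((r, c))
    return np.round(np.array(pts)).astype(int)

# one template per candidate face size, as the article describes
templates = {size: anthropometric_template(size) for size in (60, 80, 100, 120)}
```

Restricting the search to these few feature strokes, rather than every pixel of a dense template, is what keeps the matching cheap enough for real-time use.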
Our second approach was to design templates using particle swarm optimization (PSO) to search for the best templates,8 or a local component maximization (CM) procedure.9 The problem of face rotations is addressed by building a set of templates for different face rotation angles.10
Face detection in digital images employing templates has been approached in several previous studies.10,13-15
In these studies, face detection is performed using two stages: coarse and then fine face detection. In the first stage, the possible face center is determined using elliptical templates based on a Hough transform over a coarse directional image.10,13
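The coarse stage can be sketched as a Hough-style vote: each boundary pixel votes for every center that would place it on an ellipse of the expected face size. This simplified version votes from a plain edge mask rather than the coarse directional image used in the cited work, and the function name and parameters are illustrative:

```python
import numpy as np

def hough_ellipse_centers(edge_mask, a, b, n_theta=64):
    """Vote for candidate face centers assuming an axis-aligned
    elliptical face contour with semi-axes (a, b).

    Simplified sketch: the article accumulates votes over a coarse
    directional image, not a binary edge mask.
    """
    H, W = edge_mask.shape
    acc = np.zeros((H, W), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    offs_r = np.round(b * np.sin(thetas)).astype(int)
    offs_c = np.round(a * np.cos(thetas)).astype(int)
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        # every center consistent with this boundary pixel gets a vote
        rr, cc = y - offs_r, x - offs_c
        ok = (rr >= 0) & (rr < H) & (cc >= 0) & (cc < W)
        np.add.at(acc, (rr[ok], cc[ok]), 1)
    # the accumulator peak is the candidate (coarse) face center
    return np.unravel_index(acc.argmax(), acc.shape), acc
```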
The highest value in the accumulator is taken as the possible face center, and fine face detection is then performed in a region around the coarse face center. For this purpose, a set of face templates is used to compute a line integral of each template over the directional image.10,13-15
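The fine stage's line integral can be approximated discretely as the accumulated agreement between the template's expected tangent directions and the image's directional field along the template points. The function below is a minimal sketch under that reading, with assumed names and signatures:

```python
import numpy as np

def template_score(directional_image, template_points, template_dirs):
    """Score a face template at every candidate center.

    directional_image : (H, W, 2) array of unit tangent vectors per pixel.
    template_points   : (N, 2) integer (row, col) offsets relative to
                        the candidate face center.
    template_dirs     : (N, 2) expected unit tangent vectors per point.
    Illustrative sketch, not the authors' implementation.
    """
    H, W, _ = directional_image.shape
    best_score, best_center = -np.inf, None
    for r in range(H):
        for c in range(W):
            pts = template_points + np.array([r, c])
            # skip centers whose template falls outside the image
            if (pts < 0).any() or (pts[:, 0] >= H).any() or (pts[:, 1] >= W).any():
                continue
            img_dirs = directional_image[pts[:, 0], pts[:, 1]]
            # discrete "line integral": sum of direction agreement
            # along the template's feature points
            score = np.abs((img_dirs * template_dirs).sum(axis=1)).sum()
            if score > best_score:
                best_score, best_center = score, (r, c)
    return best_center, best_score
```

In practice this scan would be restricted to the region around the coarse face center, which is what makes the two-stage design fast.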
Figure 1(a) illustrates coarse face detection, and Figure 1(b) shows the fine face detection stage.
Figure 1. (a) Coarse face detection: original image, coarse directional image, accumulator, superposition of accumulator over original image, and coarse face detection. (b) Fine face detection: original image, coarse face detection, fine directional image, anthropometric face templates, and face detection by face template.
Although the results using the anthropometric templates are good, especially in real-time applications, it has not been demonstrated that these templates are optimal for the line integral computation. Thus, the CM and PSO algorithms are considered to be possible improvements to the anthropometric templates.
For both algorithms, a set of templates is considered for different face sizes. The segmented faces are converted to grayscale and their sizes normalized. A directional image is then computed,10,13-15 containing the average of the tangent vectors in a 7×7 window of the normalized grayscale segmented image.
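One plausible reading of this step is: take the intensity gradient, rotate it 90° to get edge tangents, normalize, and box-average the tangent field over a 7×7 window. The pure-NumPy sketch below follows that reading; the original implementation is not public, so the details are assumptions:

```python
import numpy as np

def directional_image(gray, win=7):
    """Approximate directional image: per-pixel unit tangent vectors,
    box-averaged over a win x win window (the article uses 7x7).
    Illustrative sketch of the described computation.
    """
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)           # intensity gradient (rows, cols)
    tx, ty = -gy, gx                     # tangent is perpendicular to gradient
    norm = np.hypot(tx, ty)
    norm[norm == 0] = 1.0                # avoid division by zero in flat regions
    tx, ty = tx / norm, ty / norm
    pad = win // 2

    def box_avg(a):
        # summed-area table gives O(1) window sums per pixel
        ap = np.pad(a, pad, mode="edge")
        s = np.pad(ap.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
        H, W = a.shape
        return (s[win:win + H, win:win + W] - s[:H, win:win + W]
                - s[win:win + H, :W] + s[:H, :W]) / win**2

    return np.dstack([box_avg(tx), box_avg(ty)])
```

Averaging the tangents over the window smooths out pixel-level noise while preserving the dominant local edge orientation that the templates match against.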
The differences between the CM and PSO algorithms begin with the resulting set of directional images. CM uses that set directly (see Figure 2(a) for a sample CM template result), while the PSO algorithm uses the set of directional images to select new templates, optimizing their size and response to a face in the directional image (see Figure 2(b) for a sample PSO template result). The CM method was applied to frontal face detection in the Purdue and Caltech face databases,9 while the PSO method was applied to video sequences. Results show that the line integral values are larger for the PSO templates, and that face size estimation is better with PSO than with CM.
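A generic particle swarm optimizer of the kind described could look like the following, where the objective `f` would be the line-integral response of a candidate template parameterization. The hyperparameters are common textbook defaults, not the article's settings (the article reports a final template after 1,800 iterations):

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization (maximization).

    f      : objective to maximize, e.g. a template's line-integral
             response over a directional image.
    bounds : (low, high) arrays defining the search box per dimension.
    Illustrative sketch; parameter values are assumptions.
    """
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    dim = low.size
    x = rng.uniform(low, high, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # per-particle bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmax()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, low, high)
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmax()].copy()
    return g, pbest_val.max()
```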
Figure 2. (a) CM template for 103 individuals. (b) Final PSO template after 1,800 iterations.
A method for real-time face and eye detection for rotated faces was developed using anthropometric templates, with good results compared to other published methods. Two additional methods were developed to improve the face templates, based on the CM and PSO algorithms. The new templates had better face-size estimation, larger line integral values, improved spatial localization, fewer points (for faster computational time), and an improved face and iris detection rate.8,9
Department of Electrical Engineering, Universidad de Chile
Claudio A. Perez received his BS (EE), PE (EE), and MS (BME) from the Universidad de Chile. He was a Fulbright student at the Ohio State University (OSU), where he obtained his PhD. He is a member of the IEEE Systems, Man and Cybernetics Society, its Engineering in Medicine and Biology Society, and its Computational Intelligence Society. He is also a member of Sigma Xi, the Pattern Recognition Society, SPIE, and the OSU Alumni Association. His research interests include man-machine interfaces and pattern recognition. In addition, he was conference co-chair (2005-2006) and program committee member (2003-2004) at Optomechatronics Computer Vision Systems, SPIE Optics East, and was a session chair at SPIE's Optomechatronic Conference between 2003 and 2006.
3. C. A. Perez, C. A. Salinas, P. A. Estévez, P. Valenzuela, Genetic design of biologically inspired receptive fields for neural pattern recognition, IEEE Trans. on Systems, Man, and Cybernetics-Part B 33, no. 2, pp. 258-270, 2003.
7. C. A. Perez, C. P. Peña, C. A. Holzmann, C. M. Held, Design of a virtual keyboard based on iris tracking, Proc. Second Joint Conf. of the IEEE/EMBS and BMES, pp. 2428-2429, Houston, TX, USA, 2002.
15. C. A. Perez, A. Palma, C. A. Holzmann, C. P. Peña, Face and eye tracking algorithm based on digital image processing, Proc. IEEE Conf. on Systems, Man, and Cybernetics, pp. 1178-1183, Arizona, USA, 2001.