
Defense & Security

A third dimension in face recognition

Three-dimensional models of the face or ear lead to improved performance in personal identification and verification.
20 June 2012, SPIE Newsroom. DOI: 10.1117/2.1201205.004214

Validating a person's identity for access control currently relies on the use of passwords or physical biometrics (the characteristics used to uniquely recognize humans based on one or more intrinsic physical traits). Face recognition is one of the most widely researched topics in biometrics, along with iris recognition, supported by a plethora of new sensors, databases, algorithms, and evaluation frameworks.1–3 In an automatic face recognition system, the data for the enrolled users is stored in a gallery, and each biometric sample from a user that needs to be identified or verified constitutes a probe.
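To make the gallery/probe terminology concrete, a minimal identification loop might look like the following sketch. The feature vectors and the cosine-similarity matcher are illustrative assumptions, not the method described in this article:

```python
import numpy as np

def identify(probe, gallery):
    """Return the enrolled identity whose feature vector best matches the probe.

    probe   : 1D feature vector extracted from the probe sample
    gallery : dict mapping identity -> enrolled feature vector
    Both the feature representation and the cosine-similarity matcher are
    illustrative placeholders for a real biometric pipeline.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cosine(probe, feat) for name, feat in gallery.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy example: three enrolled identities; the probe is closest to "alice".
gallery = {
    "alice": np.array([1.0, 0.1, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.2]),
    "carol": np.array([0.3, 0.3, 1.0]),
}
probe = np.array([0.9, 0.2, 0.1])
name, score = identify(probe, gallery)
```

In verification, by contrast, the probe would be compared only against the single claimed identity and the score thresholded.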

Conventional 2D face recognition methods have pushed performance to the limit for data acquired under controlled conditions.4 For example, with nearly frontal views and specified illumination, identification rates average in the high 90% range. To overcome constraints of pose and lighting, 3D face recognition systems based on depth images or 3D meshes have emerged as a distinct option,5–7 thanks to the availability of improved 3D sensors, publicly available databases, and systematic evaluation benchmarks such as the Face Recognition Grand Challenge and the Face Recognition Vendor Test (FRVT).8

We have been carrying out research on 3D face recognition (3DFR),9 3DFR for partial data,10 3D-2D face recognition,11 4D facial expression analysis,12 profile-based face recognition,13 and ear recognition.14 The common denominator of our approaches is the use of an annotated face model (AFM) to describe facial data through deformation and fitting (see Figure 1). The deformed model captures the details of an individual's face and represents its 3D geometry in an efficient 2D structure using the model's surface parameterization.


Figure 1. Annotated face model for a specific individual.
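The idea of storing 3D geometry in "an efficient 2D structure using the model's surface parameterization" can be sketched as resampling the fitted mesh into a geometry image, a 2D grid whose pixels hold x, y, z coordinates. The nearest-vertex splatting below is a simplification I am assuming for illustration; a real implementation would rasterize triangles and interpolate:

```python
import numpy as np

def to_geometry_image(vertices, uv, resolution=64):
    """Resample a fitted 3D mesh into a 2D 'geometry image'.

    vertices  : (N, 3) 3D vertex positions of the deformed model
    uv        : (N, 2) per-vertex surface coordinates in [0, 1]^2, given
                by the model's parameterization (assumed precomputed)
    Returns a (resolution, resolution, 3) array whose pixels store x, y, z.
    Nearest-vertex splatting is a simplification of true rasterization.
    """
    img = np.zeros((resolution, resolution, 3))
    cols = np.clip((uv[:, 0] * (resolution - 1)).round().astype(int), 0, resolution - 1)
    rows = np.clip((uv[:, 1] * (resolution - 1)).round().astype(int), 0, resolution - 1)
    img[rows, cols] = vertices
    return img

# Toy example: a 3-vertex patch mapped into a small geometry image.
verts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0], [0.0, 1.0, 3.0]])
uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
gim = to_geometry_image(verts, uv, resolution=8)
```

Once in this form, the face can be processed with ordinary 2D image machinery (filters, wavelets) while retaining full 3D shape information.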

Our 3D-3D face recognition software ranked first in the 3D-shape section of the 2007 FRVT organized by the US National Institute of Standards and Technology. However, unconstrained acquisition can produce facial scans with significant pose variations along the yaw axis. Such pose variations cause extensive occlusions during acquisition, leaving facial areas hidden and data missing. Recently, we extended our framework to enable matching between scans acquired at different poses (interpose matching) by exploiting the assumption of facial symmetry.10 This system is suitable for real-world applications, as it requires only half of the face to be visible to the sensor.
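The symmetry idea can be sketched as completing a half-face point cloud by reflecting it across the facial symmetry plane. The sketch below assumes the scan has already been registered so that the symmetry plane is x = 0, which in practice is itself a nontrivial alignment step:

```python
import numpy as np

def complete_by_symmetry(half_scan):
    """Complete a half-face point cloud by mirroring across the symmetry plane.

    half_scan : (N, 3) points from the visible half of the face, in a
                coordinate frame whose x = 0 plane is the (assumed,
                pre-aligned) facial symmetry plane.
    Returns the union of the scan and its mirror image. Registering the
    scan to the symmetry plane is assumed to have been done already.
    """
    mirrored = half_scan * np.array([-1.0, 1.0, 1.0])  # flip x across x = 0
    return np.vstack([half_scan, mirrored])

# Toy example: two points on one half yield their mirrored counterparts.
half = np.array([[0.5, 0.2, 1.0], [0.8, -0.1, 0.9]])
full = complete_by_symmetry(half)
```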

When 3D data are not available for both the gallery and the probe, 3D-2D face recognition uses 3D data for enrollment and 2D data for authentication (or, alternatively, 2D data for the gallery and 3D data for the probe).11 During enrollment, the 3D data (3D shape plus 2D texture) are used to build a subject-specific annotated 3D model by fitting the AFM (model fitting). During authentication, a single 2D image serves as input: point-landmark correspondences between the image and the subject-specific 3D AFM are used to estimate the 3D-2D projection transformation (pose estimation). Given this pose estimate for a specific 2D-3D data pair, a texture image is formed by mapping the 2D target image onto the geometry of the fitted model (texture lifting). To equalize illumination between probe and gallery textures, we apply an analytical skin reflectance model to the gallery-fitted model. The matching score between a relighted gallery texture and a probe texture is computed as a global similarity metric over local orientation features.
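The pose-estimation step, recovering a 3D-2D projection from point-landmark correspondences, can be sketched with the classic direct linear transform (DLT). This is a generic stand-in I am assuming for illustration, not necessarily the estimator used in the article's system:

```python
import numpy as np

def estimate_projection(points3d, points2d):
    """Estimate a 3x4 projection matrix from 2D-3D landmark pairs (DLT).

    points3d : (N, 3) landmark positions on the fitted 3D model, N >= 6
    points2d : (N, 2) corresponding landmark locations in the 2D image
    Each correspondence contributes two linear equations in the 12 entries
    of P; the solution (up to scale) is the null vector of the system.
    """
    rows = []
    for (x, y, z), (u, v) in zip(points3d, points2d):
        X = [x, y, z, 1.0]
        rows.append(X + [0, 0, 0, 0] + [-u * c for c in X])
        rows.append([0, 0, 0, 0] + X + [-v * c for c in X])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)  # smallest-singular-value vector = flat P

def project(P, points3d):
    """Apply projection P to 3D points, returning 2D image coordinates."""
    h = P @ np.c_[points3d, np.ones(len(points3d))].T
    return (h[:2] / h[2]).T

# Toy example: recover a known camera from six synthetic correspondences.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0, 320, 10], [0, 800.0, 240, 20], [0, 0, 1, 5]])
pts3d = rng.uniform(-1, 1, size=(6, 3))
pts2d = project(P_true, pts3d)
P_est = estimate_projection(pts3d, pts2d)
```

With the pose known, every pixel of the 2D image can be associated with a point on the model surface, which is what makes the texture-lifting step possible.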

A crucial challenge in processing facial data is detecting landmarks (i.e., salient locations or fiducial points such as eye and mouth corners) in 2D and 3D data sets.15–17 Recently, we developed a method for detecting landmarks on facial images under non-ideal conditions, including pose, illumination, and expression challenges, as well as blurred and low-resolution input. The main idea is to simultaneously find the sequence of deformation parameters that transform each landmark of a template into its target location in the image. The method can be extended to provide solutions for landmark detection from different imaging modalities and multiple views.
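The idea of solving jointly for deformation parameters can be illustrated, in a drastically simplified 2D form, by fitting a single global similarity transform (scale, rotation, translation) that maps template landmarks onto their target locations. This closed-form Procrustes fit is an illustrative stand-in for the much richer deformation models used in real landmark detection:

```python
import numpy as np

def fit_similarity(template, target):
    """Fit scale s, rotation R, translation t minimizing ||s R x + t - y||^2.

    template, target : (N, 2) corresponding 2D landmark sets.
    Closed-form Procrustes solution (Kabsch with scale); a drastic
    simplification of the deformation models used in practice.
    """
    mu_x, mu_y = template.mean(axis=0), target.mean(axis=0)
    X, Y = template - mu_x, target - mu_y
    U, S, Vt = np.linalg.svd(Y.T @ X)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    R = U @ np.diag([1.0, d]) @ Vt
    s = (S * [1.0, d]).sum() / (X ** 2).sum()
    t = mu_y - s * R @ mu_x
    return s, R, t

# Toy example: recover a known scale/rotation/translation from 4 landmarks.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
tmpl = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tgt = 2.0 * tmpl @ R_true.T + np.array([5.0, -3.0])
s, R, t = fit_similarity(tmpl, tgt)
```

A real detector would alternate between proposing landmark locations from image evidence and re-fitting the deformation parameters, rather than assuming correspondences are given.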

We are pushing the envelope in exploring face and ear recognition in the area of physical biometrics. Our next steps include improving face recognition accuracy from partial 3D data, detecting landmarks from partial data, handling low-quality probe images that depict subjects at a distance and in non-frontal poses, and generalizing performance across heterogeneous data. Going beyond physical biometrics, there is active interest in developing novel ways to validate the identity of an information system's user through software-based biometrics.18

I wish to thank all faculty, postdoctoral fellows, students at the Computational Biomedicine Lab, and collaborators who have worked over the years on the research described here. Their names and their publications can be found on our Web page.19


Ioannis A. Kakadiaris
Computational Biomedicine Lab
University of Houston (UH)
Houston, TX

Ioannis A. Kakadiaris is a Cullen Professor of Computer Science. He is the recipient of a number of awards, including the National Science Foundation CAREER Award, Schlumberger Technical Award, UH Computer Science Research Excellence Award, UH Teaching Excellence Award, and the James Muller Vulnerable Plaque Young Investigator Prize.


References:
1. R. Jillela, A. Ross, Methods for iris segmentation, in K. Bowyer and M. Burge (eds.), Handbook of Iris Recognition, Springer, 2012.
2. A. Abaza, A. Ross, C. Hebert, M. A. F. Harrison, M. S. Nixon, A survey on ear biometrics, ACM Comput. Surveys, 2012.
3. A. A. Ross, K. Nandakumar, A. K. Jain, Handbook of Multibiometrics 6, Springer, 2006.
4. W. Zhao, R. Chellappa, P. J. Phillips, A. Rosenfeld, Face recognition: a literature survey, ACM Comput. Surveys 35(4), p. 399-458, 2003. doi:10.1145/954339.954342
5. K. W. Bowyer, K. I. Chang, P. Flynn, A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition, Comput. Vis. Image Understand. 101(1), p. 1-15, 2006. doi:10.1016/j.cviu.2005.05.005
6. S. Romdhani, J. Ho, T. Vetter, D. J. Kriegman, Face recognition using 3-D models: pose and illumination, Proc. IEEE 94(11), p. 1977-1999, 2006. doi:10.1109/JPROC.2006.886019
7. A. F. Abate, M. Nappi, D. Riccio, G. Sabatino, 2D and 3D face recognition: a survey, Pattern Recognit. Lett. 28(14), p. 1885-1906, 2007. doi:10.1016/j.patrec.2006.12.018
8. P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, M. Sharpe, FRVT 2006 and ICE 2006 large-scale experimental results, IEEE Trans. Pattern Anal. Machine Intell. 32(5), p. 831-846, 2010. doi:10.1109/TPAMI.2009.59
9. I. A. Kakadiaris, G. Passalis, G. Toderici, M. N. Murtuza, Y. Lu, N. Karampatziakis, T. Theoharis, Three-dimensional face recognition in the presence of facial expressions: an annotated deformable model approach, IEEE Trans. Pattern Anal. Machine Intell. 29(4), p. 640-649, 2007. doi:10.1109/TPAMI.2007.1017
10. G. Passalis, P. Perakis, T. Theoharis, I. A. Kakadiaris, Using facial symmetry to handle pose variations in real-world 3D face recognition, IEEE Trans. Pattern Anal. Machine Intell. 33(10), p. 1938-1951, 2011. doi:10.1109/TPAMI.2011.49
11. G. Toderici, G. Passalis, S. Zafeiriou, G. Tzimiropoulos, M. Petrou, T. Theoharis, I. A. Kakadiaris, Bidirectional relighting for 3D-aided 2D face recognition, Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., p. 2721-2728, 2010. doi:10.1109/CVPR.2010.5539995
12. T. Fang, X. Zhao, O. Ocegueda, S. K. Shah, I. A. Kakadiaris, 3D/4D facial expression analysis: an advanced annotated face model approach, Image Vis. Comput. 2012. doi:10.1016/j.imavis.2012.02.004
13. B. Efraty, E. Bilgazyev, S. Shah, I. A. Kakadiaris, Profile-based 3D-aided face recognition, Pattern Recognit. 45(1), p. 43-53, 2012. doi:10.1016/j.patcog.2011.07.010
14. T. Theoharis, G. Passalis, G. Toderici, I. A. Kakadiaris, Unified 3D face and ear recognition using wavelets on geometry images, Pattern Recognit. 41(3), p. 796-804, 2008. doi:10.1016/j.patcog.2007.06.024
15. B. Efraty, C. Huang, S. K. Shah, I. A. Kakadiaris, Facial landmark detection in uncontrolled conditions, Proc. Int'l Joint Conf. Biomet., 2011.
16. B. Efraty, M. Papadakis, A. Profitt, S. Shah, I. A. Kakadiaris, Facial component-landmark detection, Proc. 9th IEEE Int'l Conf. Automat. Face Gesture Recognit., p. 278-285, 2011. doi:10.1109/FG.2011.5771411
17. B. Efraty, M. Papadakis, A. Profitt, S. Shah, I. A. Kakadiaris, Pose invariant facial component-landmark detection, Proc. IEEE Int'l Conf. Image Process., 2011.
18. Defense Advanced Research Projects Agency, Active authentication Announcement DARPA-BAA-12-06, 2012.
19. University of Houston Computational Biomedicine Lab, http://www.cbl.uh.edu