
Proceedings Paper

Multimodal person authentication on a smartphone under realistic conditions
Author(s): Andrew C. Morris; Sabah Jassim; Harin Sellahewa; Lorene Allano; Johan Ehlers; Dalei Wu; Jacques Koreman; Sonia Garcia-Salicetti; Bao Ly-Van; Bernadette Dorizzi

Paper Abstract

Verification of a person's identity by the combination of more than one biometric trait strongly increases the robustness of person authentication in real applications. This is particularly the case in applications involving signals of degraded quality, such as person authentication on mobile platforms. The context of mobility degrades the input signals through the variety of environments encountered (ambient noise, lighting variations, etc.), while the lower quality of the sensors further decreases system performance. Our aim in this work is to combine traits from the three biometric modalities of speech, face and handwritten signature in a concrete application, performing non-intrusive biometric verification on a personal mobile device (smartphone/PDA). Most available biometric databases have been acquired in more or less controlled environments, which makes it difficult to predict performance in a real application. Our experiments are performed on a database acquired on a PDA as part of the SecurePhone project (IST-2002-506883, "Secure Contracts Signed by Mobile Phone"). This database contains 60 virtual subjects balanced in gender and age. Virtual subjects are obtained by coupling audio-visual signals from real English-speaking subjects with signatures from other subjects captured on the touch screen of the PDA. Video data for the PDA database was recorded in 2 recording sessions separated by at least one week. Each session comprises 4 acquisition conditions: 2 indoor and 2 outdoor recordings (with, in each case, one good-quality and one degraded-quality recording). Handwritten signatures were captured in one session under realistic conditions. Different scenarios of matching between training and test conditions are tested to measure the resistance of various fusion systems to different types of variability and different amounts of enrolment data.
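The abstract does not specify which fusion systems were compared, but the general idea of combining match scores from the speech, face and signature modalities can be illustrated with a minimal score-level fusion sketch. The weights and acceptance threshold below are illustrative assumptions, not values from the SecurePhone experiments.

```python
# Minimal sketch of score-level fusion for a multimodal verifier.
# Weights and threshold are hypothetical; real systems tune them on
# development data (and may use trained classifiers instead).

def fuse_scores(scores, weights):
    """Weighted-sum fusion of per-modality match scores in [0, 1]."""
    if len(scores) != len(weights):
        raise ValueError("one weight per modality score required")
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

def verify(scores, weights, threshold=0.5):
    """Accept the identity claim if the fused score clears the threshold."""
    return fuse_scores(scores, weights) >= threshold

# Example: speech, face and signature similarity scores for one claim,
# with hypothetical per-modality reliability weights.
scores = [0.82, 0.64, 0.91]
weights = [0.4, 0.3, 0.3]
decision = verify(scores, weights)  # True: fused score 0.793 >= 0.5
```

A weighted sum is only one simple fusion rule; the paper's comparison of "various fusion systems" under mismatched training and test conditions is precisely about how such rules degrade when one modality's scores become unreliable.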

Paper Details

Date Published: 2 May 2006
PDF: 12 pages
Proc. SPIE 6250, Mobile Multimedia/Image Processing for Military and Security Applications, 62500D (2 May 2006); doi: 10.1117/12.668776
Author Affiliations:
Andrew C. Morris, Saarland Univ. (Germany)
Sabah Jassim, Univ. of Buckingham (United Kingdom)
Harin Sellahewa, Univ. of Buckingham (United Kingdom)
Lorene Allano, GET Institut National des Télécommunications (France)
Johan Ehlers, Univ. of Buckingham (United Kingdom)
Dalei Wu, Saarland Univ. (Germany)
Jacques Koreman, Saarland Univ. (Germany)
Sonia Garcia-Salicetti, GET Institut National des Télécommunications (France)
Bao Ly-Van, GET Institut National des Télécommunications (France)
Bernadette Dorizzi, GET Institut National des Télécommunications (France)

Published in SPIE Proceedings Vol. 6250:
Mobile Multimedia/Image Processing for Military and Security Applications
Sos S. Agaian; Sabah A. Jassim, Editors

© SPIE.