Noncontact, depth-detailed 3D fingerprinting
Fingerprint recognition has been used extensively to identify people for law enforcement and security applications.1 The technique benefits from the uniqueness of fingerprints, employs compact and inexpensive sensors, and delivers rapid matching. However, traditional fingerprint acquisition is performed in 2D using contact methods that introduce uncontrollable and nonuniform distortion when a human finger is pressed or rolled onto a rigid surface.2 Consequently, applications that require high-precision fingerprints are limited.3
To circumvent these problems, our research group, in partnership with Flashscan3D LLC, is investigating a noncontact 3D fingerprint scanner, shown in Figure 1. This system relies on 3D image acquisition using structured light illumination (SLI), which recovers the necessary information by triangulation between a projector and camera pair using a series of time-multiplexed, striped light patterns.4 No physical contact is made with the fingerprint. The system takes 0.7 s to acquire a scan with as many as 1420×2064 pixels. Figure 2 shows detailed surface reflection variation (i.e., albedo) and depth information for a 3D fingerprint obtained by the scanner. Little distortion is introduced into the print.
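Under an idealized rectified geometry, the triangulation at the heart of SLI reduces to a disparity computation between a camera pixel and the projector stripe that illuminated it. The sketch below is illustrative only: the variable names, the 100 mm baseline, and the pinhole/rectified assumptions are ours, whereas the actual scanner relies on a full projector-camera calibration.

```python
import numpy as np

def triangulate_depth(x_cam, x_proj, baseline_mm, focal_px):
    """Depth by triangulation between a camera pixel and the projector
    stripe that lit it, assuming an idealized rectified geometry.

    x_cam, x_proj : horizontal coordinates (pixels) in camera and projector
    baseline_mm   : distance between camera and projector centers
    focal_px      : focal length expressed in pixels
    """
    disparity = np.asarray(x_cam, dtype=float) - np.asarray(x_proj, dtype=float)
    # Standard stereo-style relation: depth is inversely proportional
    # to disparity, scaled by baseline and focal length.
    return baseline_mm * focal_px / disparity

# Hypothetical example: a surface point seen at camera column 520 and
# illuminated by projector column 500, with a 100 mm baseline and a
# 1000-pixel focal length.
z = triangulate_depth(520, 500, baseline_mm=100.0, focal_px=1000.0)
```

In the real system the stripe index is decoded from the time-multiplexed pattern sequence before this step; here it is simply given as `x_proj`.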
To be backward compatible with existing 2D automatic fingerprint identification systems (AFISs), the 3D scans produced by our system must be flattened into 2D-equivalent prints. We employ a flattening algorithm based on unfolding an elastic tube optimally fit to the peaks and valleys of ridges identified within the scan. The algorithm starts by fitting a deformable tube to the 3D point cloud, achieved by deriving best-fitting circles, spaced 0.508 mm (500 rings per inch) apart, along the vertical axis of the finger. Two 1D, discrete-time signals are then extracted along each ring at a sampling period of 0.508 mm (500 samples per inch). Taking advantage of detailed depth information greatly reduces the distortion caused by the 3D-to-2D mapping. The flattened print is further improved by incorporating ridge information extracted from the albedo image, with the depth and albedo ridge information fused according to local scan quality.
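The per-ring portion of this flattening can be sketched as circle fitting followed by arc-length resampling at the stated 0.508 mm period. This is a simplified sketch, not the published fit-tube algorithm: the Kåsa least-squares circle fit, the function names, and the reduction to a single isolated ring are our illustrative assumptions.

```python
import numpy as np

def fit_circle(xy):
    """Least-squares (Kasa) circle fit to 2D points; returns center and radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def unwrap_ring(xy, sample_mm=0.508):
    """Flatten one cross-sectional ring of the finger surface.

    Fits a circle to the ring, maps each point to an arc-length
    coordinate s = r * theta, and resamples the radial deviation from
    the circle (the ridge relief) every 0.508 mm (500 samples per inch).
    """
    cx, cy, r = fit_circle(xy)
    theta = np.arctan2(xy[:, 1] - cy, xy[:, 0] - cx)
    order = np.argsort(theta)
    theta = np.unwrap(theta[order])
    s = r * theta                                               # arc length
    relief = np.hypot(xy[:, 0] - cx, xy[:, 1] - cy)[order] - r  # ridge depth
    s_grid = np.arange(s[0], s[-1], sample_mm)                  # 500/in grid
    return s_grid, np.interp(s_grid, s, relief)
```

Because distances are measured along the fitted surface rather than projected onto a plane, neighboring samples keep their true spacing, which is what limits the unwrapping distortion.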
Figure 3 shows the contrast between a typical 2D, plain fingerprint image obtained from a Cogent CSD450 scanner versus a 3D print and its flattened equivalent. For most of the scans, ridges are clearly visible in both the 2D plain and 3D flattened prints. Furthermore, ridges in the 3D flattened prints have higher uniformity in width, indicating less distortion. Finally, the 3D prints maintain continuity in ridges across a few apparent scars, whereas the 2D prints show gaps.
To evaluate our scanner's performance, we created a 3D fingerprint database at the University of Kentucky consisting of 441 3D prints from 11 subjects. All 3D prints were obtained from the SLI fingerprint scanner and flattened using the fit-tube algorithm at a resolution of approximately 500 points per inch (PPI). We also collected 441 conventional 2D plain fingerprints from the same group of subjects with a Cogent CSD450 fingerprint scanner, also at a resolution of 500 PPI.
The matching algorithm was the BOZORTH system5 developed by the National Institute of Standards and Technology. The receiver operating characteristic (ROC), which summarizes the performance of the fingerprint verification system, is given in Figure 4. For an ideal system, the true acceptance rate (TAR) would equal 1 for all false acceptance rates (FARs) from 0 to 1. At a commonly specified FAR of 0.01, the TAR for our 3D system is 0.98, while that for the 2D system is 0.85. For a FAR of 0.1, the TAR of our 3D system is 0.99, while that for the 2D system is 0.93. In terms of the ROC, the 3D fingerprint data set therefore achieves higher matching performance than the 2D plain prints.
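TAR-at-FAR figures like these are computed from the raw match scores that a matcher such as BOZORTH produces for genuine (same-finger) and impostor (different-finger) comparisons. The sketch below shows that computation; the function and variable names are illustrative, and the score arrays are assumed to be given.

```python
import numpy as np

def tar_at_far(genuine, impostor, far_target):
    """True-acceptance rate at a given false-acceptance rate.

    genuine  : match scores for same-finger comparisons
    impostor : match scores for different-finger comparisons

    Picks the lowest threshold whose impostor acceptance rate does not
    exceed far_target, then reports the fraction of genuine scores
    accepted at that threshold.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    for t in np.unique(np.concatenate([genuine, impostor])):
        if np.mean(impostor >= t) <= far_target:   # FAR at threshold t
            return np.mean(genuine >= t)           # TAR at threshold t
    return 0.0
```

Sweeping `far_target` from 0 to 1 and plotting the resulting TAR values traces out the ROC curve of Figure 4.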
Employing structured light illumination, we have developed a noncontact method of 3D finger scanning with resolution high enough to resolve the ridges and valleys of the fingerprint. Using a fit-tube algorithm, we virtually flatten the 3D finger scan to produce a 2D-equivalent fingerprint image. The distortion caused by this unwrapping is reduced by controlling the local distances between neighboring points. Our experimental results demonstrate higher matching performance than a commercially available, contact-based scanner.
As next steps, we intend to reduce our scanner's acquisition time to further minimize the effects of finger motion. This will help to eliminate the physical guide currently employed to help the user hold their finger steady while scanning. We will also build scanners with multiple cameras to achieve wrap-around scanning, thereby producing fingerprint images that model rolled-equivalent fingerprint scans.6 Finally, we will continue to focus on interoperability with existing fingerprint databases by improving the matching performance between our scanner and contact-based scanning methods.
This work is partially funded by Flashscan3D, LLC, Richardson, TX, and the National Institute for Hometown Security, Somerset, KY.
Yongchang Wang received his BEng degree in 2005 from Zhejiang University, Hangzhou (China) and his MS degree in 2008 from the University of Kentucky, where he is currently working toward his PhD. His research interests include biometrics, signal and image processing, and machine and computer vision.
Laurence G. Hassebrook is Blazie Professor of Electrical and Computer Engineering, a professional engineer, and an active member of the Center for Visualization and Virtual Environments at the University of Kentucky. While studying at the Center of Excellence in Optical Processing, he received his PhD degree from Carnegie Mellon University in 1990, his MSEE from Syracuse University in 1987, and his BSEE from the University of Nebraska-Lincoln in 1979. He worked at IBM Endicott, NY, from 1981 through 1987. His research interests are in 3D data acquisition, pattern recognition, and N-dimensional signal processing. Current work includes 3D surface scanning of objects in motion, dynamic pattern projection for multitarget tracking, automatic target recognition, and scene reconstruction from partial images.
Daniel L. Lau is a senior member of the IEEE who received his BSc degree (with highest distinction) in electrical engineering from Purdue University, West Lafayette, IN, in 1995 and his PhD degree from the University of Delaware, Newark, in 1999. Today, he is an associate professor at the University of Kentucky. Previously, he was a digital signal processing engineer at Aware, Inc., and an image and signal processing engineer at Lawrence Livermore National Laboratory. His research interests include 3D imaging sensors, 3D fingerprint identification, and multispectral color acquisition and display. His published works include the introduction of the green-noise half-toning model as well as stochastic moiré.