Large Scale USA PATRIOT Act Biometric Testing C. L. Wilson Image Group IAD-ITL.

Outline
– Statutory Mandates
– Biometric Testing Is Scale Dependent
– Fingerprint Vendor Technology Evaluation (FpVTE)
– Testing Fingerprint Software Development Kits (SDKs)
– US-VISIT Identification System (IDENT)
– Certification of the Integrated Automated Fingerprint Identification System (IAFIS)

Statutory Mandates
USA PATRIOT Act (PL 107-56)
Enhanced Border Security and Visa Entry Reform Act (PL 107-173)
Develop and certify a technology standard to
– verify the identity of foreign nationals applying for a visa
  visa application at embassies and consulates
  background check against the FBI criminal database, DHS databases, and "watch lists"
  ensure the person has not received a visa under a different name
– verify the identity of persons seeking to enter the U.S.
  verify that the person holding the travel document is the same person to whom the document was issued
  airports, land border crossings, sea entry points

Biometric Testing Is Scale Dependent
– Sample size 100-1,000: proves feasibility
– Sample size 1,000-10,000: measures subject variation
– Sample size 10,000-1,000,000: measures operational quality control
Existing government databases contain millions of entries and have widely varying quality.
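Why small samples can only "prove feasibility" comes down to how tightly an error rate can be bounded by the data. As a hedged illustration (not part of the original slides), the sketch below computes a 95% Wilson score interval around a hypothetical 0.1% false-match rate at three test scales; the 0.1% figure and the sample sizes are assumptions chosen only to show how the interval narrows.

```python
import math

def wilson_interval(errors, n, z=1.96):
    """Approximate 95% Wilson score interval for an observed error rate errors/n."""
    p = errors / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical matcher with a true false-match rate of 0.1%, observed at three scales.
for n in (1_000, 10_000, 1_000_000):
    errors = round(0.001 * n)          # expected number of false matches at this scale
    lo, hi = wilson_interval(errors, n)
    print(f"n={n:>9,}  observed rate={errors/n:.3%}  95% CI=[{lo:.3%}, {hi:.3%}]")
```

At n = 1,000 a single false match leaves the rate uncertain by well over an order of magnitude, while at n = 1,000,000 the interval pins it down tightly, which is why operational quality control needs the larger samples.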

FpVTE
The Fingerprint Vendor Technology Evaluation (FpVTE) 2003 is an evaluation of the accuracy of fingerprint matching, identification, and verification systems. It is the largest such evaluation yet conducted. FpVTE 2003 was conducted by the National Institute of Standards & Technology (NIST) on behalf of the Justice Management Division (JMD) of the U.S. Department of Justice. FpVTE 2003 serves as part of NIST's statutory mandate under section 403(c) of the USA PATRIOT Act to certify those biometric technologies that may be used in US-VISIT. The FpVTE 2003 evaluations were conducted at NIST facilities in Gaithersburg from October through November 2003. Eighteen different companies competed, with 35 different systems tested. Each test had a time limit of two or three weeks (running 24/7). FpVTE 2003 used operational fingerprint data from a variety of U.S. Government sources. Analysis of the FpVTE 2003 results is ongoing and will be reported in 2Q 2004.

FpVTE - Comparison of Systems
– There is a substantial difference in accuracy between the best systems and the average or worst systems.
– The results allow systems to be grouped by performance into tiers.
– The top tier systems are more consistent in performance than the other systems: they perform consistently well over a variety of data and are less affected by fingerprint quality and other variables.
– The top tier systems are being used for further detailed evaluation at NIST. This reduces possible dispute over whether results are repeatable. SDK tests will be ongoing; new or revised SDKs can be accepted at any time.

FpVTE - Overall Accuracy of Fingerprint Systems
The best systems are extremely accurate. Measured accuracy of the best system:
– 1 finger: 97-98% true match rate
– 2 fingers: 99% true match rate
– 10 fingers: no measured false match rate

SDK Tests
Medium scale evaluation of one-to-one matching for:
– 8 software matchers
– 12 single-finger datasets

What were the goals of the test?
– Determine the feasibility of verification matching in the US-VISIT and DOS clients
– Evaluate vendor accuracy variability
– Evaluate vendor sensitivity to image quality
– Also used to scale the MST in FpVTE

Scale of Tests
– Each test involved 36M matches on a 3 GHz Pentium platform (6K gallery × 6K probe set).
– Match time must be less than 10 ms per fingerprint pair.
– Each test results in an ROC curve.
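Each SDK run therefore boils down to scoring every probe-gallery pair and sweeping a decision threshold to trace out the ROC (TAR versus FAR). The sketch below shows that computation on made-up genuine and impostor score distributions; the distributions, sizes, and threshold grid are illustrative assumptions, not FpVTE or SDK data.

```python
import numpy as np

def roc_points(genuine_scores, impostor_scores, thresholds):
    """(FAR, TAR) at each threshold: the fraction of impostor / genuine
    comparison scores at or above it (higher score = better match assumed)."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.array([(impostor >= t).mean() for t in thresholds])
    tar = np.array([(genuine >= t).mean() for t in thresholds])
    return far, tar

# Hypothetical score distributions standing in for one matcher/dataset run;
# a real run would score all gallery x probe pairs (6K x 6K = 36M here).
rng = np.random.default_rng(0)
genuine = rng.normal(0.80, 0.10, 6_000)      # mated comparison scores
impostor = rng.normal(0.30, 0.10, 100_000)   # sample of non-mated comparison scores
far, tar = roc_points(genuine, impostor, thresholds=np.linspace(0.0, 1.2, 601))
print("TAR at the threshold whose FAR is closest to 0.1%:",
      tar[np.argmin(np.abs(far - 0.001))])
```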

SDK Testing - 8 Algorithms, 12 datasets, 3.46G matches

VTB

SDK - Conclusions
– All vendors are sensitive to image quality.
– Three algorithms are clearly more effective.
– Combining two fingers will provide very effective one-to-one verification for the US-VISIT program.
– The NIST VTB algorithm is better than many commercial products.
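The two-finger conclusion points at score-level fusion. The slides do not say which fusion rule the tested systems used, so the sketch below simply assumes the common sum rule: add the two index-finger match scores and apply a single accept threshold.

```python
import numpy as np

def fuse_and_verify(left_index_scores, right_index_scores, threshold):
    """Sum-rule fusion of two single-finger match scores followed by one
    accept/reject decision. The sum rule and threshold are assumptions."""
    fused = np.asarray(left_index_scores) + np.asarray(right_index_scores)
    return fused >= threshold

# Hypothetical scores for three verification attempts.
print(fuse_and_verify([0.7, 0.4, 0.9], [0.8, 0.3, 0.2], threshold=1.0))
# -> [ True False  True]
```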

US-VISIT IDENT System
IDENT is the primary fingerprint matcher for US-VISIT. Three functions:
– Watch list checking at enrollment
– Duplicate identification check for visa holders
– One-to-one verification for enrolled travelers

US-VISIT Identification: Many-to-One Matching
Using Department of State (DOS) Mexican visa (BCC) data, the true accept rate (TAR) using index finger pairs is independent of background database size over the range from 100,000 entries to 6,000,000 entries and is 96% with shape filtering on. The false accept rate (FAR) using index finger pairs increases linearly with database size and is 0.09% using existing threshold parameters for a gallery size of 6,000,000. At the operating level used by the IDENT system, the trade-off between TAR and FAR is such that a large change in FAR results in only a small change in TAR; the trade-off curve is flat, with very small slope change. All the results given here require that the test data be consolidated and checked for correct ground truth by fingerprint examiners, since between 0.5% and 1.5% of the original data was found to be incorrectly matched. Approximately 0.1% of the questioned data is of insufficient quality to be resolved by examiners. This 0.1% error rate is the minimum error limit detected in existing government fingerprint databases.
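The roughly linear growth of identification FAR with gallery size is what you would expect if each enrolled subject contributes an independent chance of a false match. The back-of-the-envelope sketch below illustrates that scaling; the independence assumption and the per-comparison false match rate are illustrative (the rate is chosen so the 6,000,000-entry case lands near the 0.09% quoted above).

```python
# Rough scaling of identification false accept rate with gallery size,
# assuming independent per-comparison false matches at rate fmr.
fmr = 1.5e-10   # hypothetical per-comparison false match rate (assumption)
for gallery_size in (100_000, 1_000_000, 6_000_000):
    # Probability that at least one of N non-mated comparisons false-matches:
    # 1 - (1 - fmr)^N, approximately N * fmr when fmr is small.
    far_ident = 1 - (1 - fmr) ** gallery_size
    print(f"gallery={gallery_size:>9,}  approx identification FAR={far_ident:.4%}")
```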

US-VISIT Identification: Many-to-One Matching
The Cogent image quality is a good predictor of IDENT many-to-one matching performance. The best quality images, quality 1, produce a TAR of 98% at a FAR of 0.01%. The worst quality images, quality 8, produce a TAR of 47% at a FAR of 0.01%. Image quality distributions for BCC, the Atlanta pilot study, and Ohio web check were studied to determine how well the operational US-VISIT system could be expected to track the BCC and Ohio results. The Atlanta data has slightly more quality 8 images and slightly fewer quality 1 images but should result in a TAR near that of BCC. The Ohio data has fewer quality 8 images and more quality 1 images; this is reflected in a TAR of 98% using the IDENT system. The matcher used in this study achieves a match rate of 1,045,000 matches/second with shape filtering on and 731,000 matches/second with shape filtering off.
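Using a vendor quality value as a predictor just means stratifying the genuine comparisons by quality bin (1 = best through 8 = worst, as above) and computing TAR per bin at a fixed threshold. A minimal sketch with made-up scores and quality labels follows; the score model in it is an assumption, not IDENT behavior.

```python
import numpy as np

def tar_by_quality(genuine_scores, quality_bins, threshold):
    """True accept rate of genuine comparisons, grouped by image quality bin
    (1 = best ... 8 = worst). Scores, bins, and threshold are illustrative."""
    scores = np.asarray(genuine_scores)
    bins = np.asarray(quality_bins)
    return {int(q): float((scores[bins == q] >= threshold).mean())
            for q in np.unique(bins)}

rng = np.random.default_rng(1)
quality = rng.integers(1, 9, 10_000)               # quality bin per comparison
scores = rng.normal(0.9 - 0.05 * quality, 0.10)    # worse quality -> lower scores
print(tar_by_quality(scores, quality, threshold=0.60))
```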

US-VISIT Verification: One-to-One Matching
Using BCC-quality data on two index fingers, one-to-one matching with a TAR of 99.7% at a FAR of 0.1% should be achieved. These results were achieved using a Software Development Kit (SDK) program set supplied by Cogent, the same algorithm planned for use in US-VISIT. Testing of eleven other SDKs showed that this algorithm is as accurate as any of the algorithms tested, although further testing of additional algorithms is planned. All algorithms tested show a significant change in accuracy with image quality; high-accuracy algorithms are less sensitive to image quality than low-accuracy algorithms.
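Quoting "TAR at 0.1% FAR" implies a two-step measurement: pick the score threshold from the impostor score distribution so the impostor pass rate hits the target FAR, then measure the genuine pass rate at that threshold. A hedged sketch of that procedure on made-up scores is below.

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, target_far=0.001):
    """Set the threshold at the (1 - target_far) quantile of the impostor
    scores, then report the genuine pass rate (TAR) at that threshold."""
    threshold = np.quantile(np.asarray(impostor_scores), 1.0 - target_far)
    tar = float((np.asarray(genuine_scores) >= threshold).mean())
    return threshold, tar

# Hypothetical fused two-finger scores for genuine and impostor attempts.
rng = np.random.default_rng(2)
threshold, tar = tar_at_far(rng.normal(0.85, 0.08, 5_000),
                            rng.normal(0.30, 0.10, 500_000))
print(f"threshold={threshold:.3f}  TAR at 0.1% FAR={tar:.2%}")
```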

US-VISIT Verification: One-to-One Matching
Cogent image quality is a good rank statistic for all the algorithms tested on all the datasets used. The error rate of quality 1 (best) fingerprints is always lower than the error rate of any other image quality sample, and the error rate of quality 8 (worst) fingerprints is always the highest. All other image quality levels result in the expected ordering of the error rates. Consolidation results on various datasets available to NIST demonstrate that the error rates obtained for one-to-one matching are less than the clerical error rate in most government databases. Clerical errors will be more common than biometric errors for one-to-one matching.

ATB - IAFIS
ATB is the Algorithmic Test Bed for IAFIS, used by NIST for PATRIOT Act certification of the FBI's IAFIS.
Functional equivalent of an IAFIS component:
– Same IAFIS software
– Same / compatible hardware
– 3% sample of data (1.2M cards)
– Validated scale model
Developer's sandbox:
– Measure operational characteristics (speed and accuracy)
– Vary operational mode (plain vs. rolled; 10-finger vs. 4-finger vs. 2-finger vs. 1-finger)
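Varying the operational mode amounts to running the same matcher harness over a small test matrix. A sketch of how such a matrix might be enumerated is below; the mode names mirror the slide, while the configuration structure and harness hand-off are hypothetical.

```python
from itertools import product

impression_types = ["plain", "rolled"]   # probe impression type
finger_counts = [10, 4, 2, 1]            # number of fingers used per search

# Hypothetical test matrix; probes are always searched against the rolled-print
# gallery, matching the plain-to-rolled and rolled-to-rolled modes on the slide.
for probe_type, n_fingers in product(impression_types, finger_counts):
    config = {"probe": probe_type, "gallery": "rolled", "fingers": n_fingers}
    print(config)    # each configuration would be handed to the matcher harness
```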

ATB Results
How accurate is plain-to-rolled identification? Is it less accurate than rolled-to-rolled?
– Failure to identify (1 - TAR) can be 2% or less for plain-to-rolled identification. Accuracy can be as good as with rolled prints of average quality.
How accurate is plain-to-rolled identification with fewer than ten fingers?
– Accuracy with fewer than ten fingers can exceed accuracy with ten fingers, but performance penalties can be expected.
Are performance penalties incurred by adopting plain-to-rolled identification? If penalties are incurred, how great are they?
– Performance (throughput) deteriorates markedly as the amount of data is reduced.
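"Failure to identify" here is 1 - TAR for mated searches: the fraction of probes whose true mate is not returned by the identification search. A minimal sketch of that computation on a tiny hypothetical score matrix follows; the rank-1-above-threshold decision rule is an assumption for illustration, not a description of IAFIS internals.

```python
import numpy as np

def failure_to_identify(score_matrix, true_mate_index, threshold):
    """Fraction of mated probes whose true mate is NOT the top-scoring gallery
    entry at or above the threshold. score_matrix is probes x gallery."""
    scores = np.asarray(score_matrix, dtype=float)
    best = scores.argmax(axis=1)
    hit = ((best == np.asarray(true_mate_index)) &
           (scores[np.arange(len(scores)), best] >= threshold))
    return 1.0 - hit.mean()

# Hypothetical 3-probe x 4-entry gallery; true mates are columns 0, 2, 1.
scores = [[0.9, 0.2, 0.1, 0.3],
          [0.2, 0.3, 0.4, 0.1],   # weak match: top candidate falls below threshold
          [0.1, 0.8, 0.2, 0.2]]
print(failure_to_identify(scores, [0, 2, 1], threshold=0.5))   # -> 0.333...
```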

ATB Results
Does gallery size affect accuracy? If there is an effect, how can it be characterized?
– Within the range of gallery sizes covered by these studies (160,000 to 45,000,000), gallery size did not affect overall accuracy (TAR). We feel quite comfortable extrapolating these results to cover the range from 100,000 to 100,000,000 (from 10^5 to 10^8).
Does the ATB accurately model the behavior of IAFIS?
– Yes.