A Cross-Sensor Evaluation of Three Commercial Iris Cameras for Iris Biometrics
Ryan Connaughton and Amanda Sgroi
Computer Vision Research Lab, Department of Computer Science & Engineering, University of Notre Dame
CVPR Biometrics Workshop, June 20, 2011

Objectives
Compare three commercially available sensors
– Does one sensor consistently out-perform the others?
– What factors impact sensor performance the most?
Observe performance of the sensors in a cross-sensor scenario
– What kind of performance can we expect from a cross-sensor system?
– What is the relationship between single-sensor and cross-sensor performance?

Overview of Experiment Strategy
Collect images for the same subjects using all 3 sensors under similar conditions
Use 3 different matching algorithms to perform matching experiments
– In a single-sensor context
– In a cross-sensor context
Analyze the performance of the sensors in each scenario

Sensors

Sensor | Iris-to-Sensor Distance | Wavelength(s) of NIR Illumination | Type of Illumination (cross or direct) | Acquisition Instructions
S1     | 8 to 12 inches          | 820 nm                            | Both (same time)                       | Sensor Prompt
S2     | 10 to 14 inches         | 770 and 870 nm                    | Both (different times)                 | Sensor Prompt
S3     | 13 inches               | 870 and 760 nm                    | Cross                                  | Operator*

* Speculation

Image Examples: same subject, same session images from S1, S2, and S3

Data Collection Results
23,444 iris images acquired, spanning 510 subjects (1,020 unique irises)

The Matching Algorithms
– A1: similarity score, asymmetric scores
– A2: distance score, asymmetric scores
– A3: distance score, symmetric scores
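A minimal sketch of what these score conventions imply for downstream analysis, assuming (the slide does not say) that distance scores are negated to act like similarities and that an asymmetric matcher's two directional scores are combined by averaging; the values and combination rule here are illustrative only:

```python
def to_comparable(score, is_distance):
    """Map a raw matcher output to a 'higher is better' similarity value.

    Distance scores are negated so that genuine comparisons score higher
    than impostor comparisons under either convention.
    """
    return -score if is_distance else score

def symmetrize(score_ab, score_ba):
    """Collapse an asymmetric matcher's two directional scores into one.

    Averaging is only one plausible rule (min or max are others); the
    slide does not say how, or whether, the study combined them.
    """
    return 0.5 * (score_ab + score_ba)

# Illustrative values for a hypothetical asymmetric distance matcher (like A2).
d_ab, d_ba = 0.31, 0.29
print(to_comparable(symmetrize(d_ab, d_ba), is_distance=True))  # about -0.30
```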

Segmentation Information

Note: Image information from A1 is not easily accessible and is thus not included.

Match and Non-Match Comparisons

The Experiments
Single-Sensor & Cross-Session: S1 vs S1, S2 vs S2, S3 vs S3
Cross-Sensor & Cross-Session: S1 vs S2, S1 vs S3, S2 vs S3
These experiments were repeated for all 3 matchers.
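To make the experiment structure concrete, here is a rough sketch of how cross-session genuine and impostor comparisons could be enumerated for a given (gallery sensor, probe sensor) pair; the record fields and file names are hypothetical, and a single-sensor experiment is simply the case where both roles use the same sensor's images:

```python
from itertools import product

def enumerate_pairs(gallery, probe):
    """Yield (img_a, img_b, is_genuine) comparisons for one experiment.

    `gallery` and `probe` are lists of dicts with hypothetical fields
    'subject', 'eye', 'session', and 'path'.  Only cross-session
    comparisons are kept, matching the protocol on the slide.
    """
    for a, b in product(gallery, probe):
        if a["session"] == b["session"]:
            continue  # cross-session comparisons only
        genuine = a["subject"] == b["subject"] and a["eye"] == b["eye"]
        yield a, b, genuine

# Example: an S1-vs-S2 experiment uses S1 images as the gallery and
# S2 images as the probe set (file names are made up).
gallery = [{"subject": 1, "eye": "L", "session": 1, "path": "s1_0001.png"}]
probe = [
    {"subject": 1, "eye": "L", "session": 2, "path": "s2_0042.png"},
    {"subject": 2, "eye": "L", "session": 2, "path": "s2_0043.png"},
]
for a, b, genuine in enumerate_pairs(gallery, probe):
    print(b["path"], "genuine" if genuine else "impostor")
```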

ROC Curves Using A1

ROC Curves Using A2

ROC Curves Using A3

TARs at FAR = 0.01

Experiment | A1  | A2  | A3
S1 vs S1   | (1) | (2) | (1)
S2 vs S2   | (3) | (1) | (4)
S3 vs S3   | (6) | (6) | (5)
S1 vs S2   | (2) | (3) | (3)
S1 vs S3   | (4) | (5) | (2)
S2 vs S3   | (5) | (4) | (6)

Numbers in parentheses indicate ranking within the corresponding matching algorithm.
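The metric behind this table is the true accept rate (TAR) at a fixed false accept rate (FAR) of 0.01. A minimal sketch of that computation from genuine and impostor similarity scores, using synthetic stand-in data rather than the study's actual scores:

```python
import numpy as np

def tar_at_far(genuine, impostor, far=0.01):
    """True accept rate at the threshold that yields the requested FAR.

    Scores are treated as similarities (higher = better match); a distance
    matcher's scores would be negated first.
    """
    imp = np.sort(np.asarray(impostor))
    # Threshold chosen so that roughly `far` of impostor scores exceed it.
    threshold = imp[int(np.ceil((1.0 - far) * len(imp))) - 1]
    return float(np.mean(np.asarray(genuine) > threshold))

# Synthetic stand-in scores, purely for illustration.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 2_000)
impostor = rng.normal(0.3, 0.1, 50_000)
print(f"TAR at FAR = 0.01: {tar_at_far(genuine, impostor):.3f}")
```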

Sensor Rankings at FAR = 0.01

Rank | A1       | A2       | A3
1    | S1 vs S1 | S2 vs S2 | S1 vs S1
2    | S1 vs S2 | S1 vs S1 | S1 vs S3
3    | S2 vs S2 | S1 vs S2 | S1 vs S2
4    | S1 vs S3 | S2 vs S3 | S2 vs S2
5    | S2 vs S3 | S1 vs S3 | S3 vs S3
6    | S3 vs S3 | S3 vs S3 | S2 vs S3

Brackets indicate that the performance difference at FAR = 0.01 may not be statistically significant.
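One common way to judge whether two operating points are significantly different, not necessarily the procedure behind the brackets on this slide, is a bootstrap confidence interval on the TAR at FAR = 0.01. A rough comparison-level sketch (a subject-level resampling would better respect correlated comparisons):

```python
import numpy as np

def tar_at_far(genuine, impostor, far=0.01):
    """TAR at a fixed FAR; scores are similarities (higher = better)."""
    imp = np.sort(np.asarray(impostor))
    threshold = imp[int(np.ceil((1.0 - far) * len(imp))) - 1]
    return float(np.mean(np.asarray(genuine) > threshold))

def bootstrap_tar_ci(genuine, impostor, n_boot=200, far=0.01, seed=0):
    """95% percentile confidence interval for TAR at the given FAR.

    Comparisons are resampled with replacement; experiments whose
    intervals overlap would be 'bracketed' together as possibly not
    significantly different.
    """
    rng = np.random.default_rng(seed)
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    stats = [
        tar_at_far(rng.choice(genuine, genuine.size, replace=True),
                   rng.choice(impostor, impostor.size, replace=True),
                   far)
        for _ in range(n_boot)
    ]
    return np.percentile(stats, [2.5, 97.5])

# Synthetic stand-in scores, purely for illustration.
rng = np.random.default_rng(1)
gen = rng.normal(0.7, 0.1, 2_000)
imp = rng.normal(0.3, 0.1, 20_000)
print(bootstrap_tar_ci(gen, imp))
```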

Single-Sensor Conclusions
– S3 consistently performed the worst for all matchers
– S1 was best for 2 of the 3 matchers
– The best overall performance was achieved using the S1 sensor with the A1 matcher (at FAR = 0.01)

Cross-Sensor Conclusions
– A1: cross-sensor performance fell between the performance of the individual sensors
– A2: in general, cross-sensor performance fell between the performance of the individual sensors
  – S1 vs S2 actually performed slightly worse than either single sensor alone
– A3: individual-sensor performance is not a good predictor of cross-sensor performance
  – S1 vs S3 appears to perform better than S1 vs S2

General Conclusions
– Sensors and matching algorithms should be evaluated in combination, not separately
– In some cases, adding a new and "better" sensor for cross-sensor matching will improve performance; in other cases it will degrade performance
– Single-sensor performance is not always a reliable predictor of cross-sensor performance

Future Work
– Which results are statistically significant?
– What factors have the largest effect on performance?
  – Pixels on the iris
  – Dilation ratio
  – Occlusion
  – Contact lenses
  – Order of sensors during acquisition
  – Focus or other quality metrics
  – Illumination
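Two of these candidate factors reduce to simple functions of the segmentation output. A rough illustration, assuming circular pupil and limbic boundary estimates are available from segmentation; the radii below are hypothetical:

```python
def dilation_ratio(pupil_radius_px, iris_radius_px):
    """Pupil radius divided by iris radius; larger values mean a more
    dilated pupil and less visible iris texture."""
    return pupil_radius_px / iris_radius_px

def iris_diameter_px(iris_radius_px):
    """'Pixels on the iris' summarized as the iris diameter in pixels."""
    return int(round(2 * iris_radius_px))

# Hypothetical circular segmentation output for one image.
pupil_r, iris_r = 38.0, 110.0
print(dilation_ratio(pupil_r, iris_r))  # about 0.345
print(iris_diameter_px(iris_r))         # 220
```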

Thanks! Questions?

Acknowledgments: This work is sponsored under IARPA BAA through the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of IARPA, the Army Research Laboratory, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.