3D Face Recognition Using Simulated Annealing and the Surface Interpenetration Measure
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 32, NO. 2, FEBRUARY 2010
Adviser: Ming-Yuan Shieh   Student: Shun-Te Chuang   SN: M9820204

Outline
Abstract
INTRODUCTION
3D FACE IMAGE DATABASE
3D FACE IMAGE PREPROCESSING
3D FACE MATCHING
3D FACE AUTHENTICATION
EXPERIMENTAL RESULTS
FINAL REMARKS

1. Abstract This paper presents a novel automatic framework for 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM) as the similarity measure to match two face images. Using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent.

2. INTRODUCTION (1/3) To date, 2D face recognition has developed significantly, but it still suffers from limitations, mostly due to pose variation, illumination, and facial expression. In the 1990s, 3D face recognition gained prominence thanks to advances in 3D imaging sensors. However, 3D images also have limitations, such as the presence of noise and more demanding image acquisition.

2. INTRODUCTION (2/3) We present a complete framework for face recognition using only 3D information (range images) as input. The framework can be employed in both verification and identification systems and has four main stages: 1. image acquisition, 2. preprocessing, 3. matching, and 4. authentication.

2. INTRODUCTION (3/3) Fig. 1. The main stages of the framework for 3D face authentication.

3. 3D FACE IMAGE DATABASE(1/3) In our experiments, we use the FRGC v2 database, the largest available database of 3D face images, composed of 4,007 images from 466 different subjects. All images have a resolution of 640x480 and were acquired with a Minolta Vivid 910 laser scanner. The most common facial expressions in the database are neutral, happiness, sadness, disgust, surprise, and puffy cheek.

3. 3D FACE IMAGE DATABASE(2/3) Fig. 2. Example of images with artifacts: (a) stretched/deformed images (04746d44), (b) face images without the nose (04505d222), (c) holes around the nose (04531d293), and (d) waves around the mouth (04514d324).

3. 3D FACE IMAGE DATABASE(3/3) Fig. 3. Example of images classified as neutral in the FRGC v2 database: (a) 04476d230 and (b) 04587d114. TABLE 1 FRGC v2 Face Image Classification

4. 3D FACE IMAGE PREPROCESSING(1/4) Initially, the face image is smoothed with a median filter. The segmentation process uses our own approach based on the depth of range maps and is composed of two main stages: 1) locating homogeneous regions in the input image by combining region clustering and edge information, and 2) identifying candidate regions that belong to the face by an ellipse detection method based on the Hough Transform.
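A minimal Python sketch of this stage, assuming the range image is available as a NumPy array: median smoothing followed by Hough-based ellipse detection on an edge map, as described above. The clustering step is omitted, and the file name and all parameter values are illustrative assumptions rather than the authors' implementation.

import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import canny
from skimage.transform import hough_ellipse

# Hypothetical 640x480 range (depth) image loaded as a float array.
range_image = np.load("face_range_image.npy")

# 1) Median smoothing to suppress sensor noise (spikes and small holes).
smoothed = median_filter(range_image, size=5)

# 2) Edge map of the smoothed depth image; the face outline shows up as a
#    roughly elliptical contour.
edges = canny(smoothed, sigma=2.0)

# 3) Hough-based ellipse detection; keep the highest-scoring ellipse as the
#    candidate face region. Sizes are in pixels and purely illustrative, and
#    hough_ellipse is slow on full-resolution images (shown here for clarity).
candidates = hough_ellipse(edges, accuracy=20, threshold=100,
                           min_size=80, max_size=240)
candidates.sort(order='accumulator')
best = candidates[-1]
print("Face ellipse: center=({:.0f}, {:.0f}), axes=({:.0f}, {:.0f})".format(
    best['xc'], best['yc'], best['a'], best['b']))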

4. 3D FACE IMAGE PREPROCESSING(2/4) Fig. 4. Feature point detection: (a) the six feature points, (b) wrongly detected nose points, and (c) wrongly detected eye corners.

4. 3D FACE IMAGE PREPROCESSING(3/4) In this paper, four regions of the face are considered: 1. the circular area around the nose, 2. the elliptical area around the nose, 3. the upper head, including eyes, nose, and forehead, and 4. the entire face region.
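As an illustration of how the first two regions can be cropped from a range image given a detected nose-tip location, here is a small Python sketch; the radius and semi-axis values are placeholders, not values taken from the paper.

import numpy as np

def circular_region(depth, center, radius):
    """Keep only the circle of `radius` pixels around `center` (row, col);
    everything else is set to NaN."""
    rows, cols = np.indices(depth.shape)
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    return np.where(mask, depth, np.nan)

def elliptical_region(depth, center, semi_axes):
    """Same idea with an axis-aligned ellipse, semi_axes = (rows, cols)."""
    rows, cols = np.indices(depth.shape)
    mask = ((rows - center[0]) / semi_axes[0]) ** 2 + \
           ((cols - center[1]) / semi_axes[1]) ** 2 <= 1.0
    return np.where(mask, depth, np.nan)

# Usage with a hypothetical depth map and nose-tip position (row, col):
depth = np.load("face_range_image.npy")
nose_tip = (240, 320)
nose_circle = circular_region(depth, nose_tip, radius=45)
nose_ellipse = elliptical_region(depth, nose_tip, semi_axes=(60, 45))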

4. 3D FACE IMAGE PREPROCESSING(4/4) Fig. 5. Segmented regions from the same face: (a) entire face and detected feature points, (b) circular nose area, (c) elliptical nose area, and (d) upper head.

5. 3D FACE MATCHING 5.1 Surface Interpenetration Measure(1/2) The SIM was developed by analyzing visual results of two aligned surfaces, each one rendered in a different color, crossing over each other repeatedly in the overlapping area. The interpenetration effect results from the very nature of real range data, which presents slightly rough surfaces with small local distortions caused by limitations of the acquisition system. By quantifying interpenetration, one can assess registration results more precisely and obtain a highly robust registration control.
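The SIM itself counts points of one surface whose local neighborhoods cross the other surface. The Python sketch below is a deliberately simplified stand-in, the fraction of points of one surface lying within a small tolerance of the other, which conveys the idea of scoring how tightly two registered surfaces interleave but is not the exact measure defined in the paper; the tolerance value is an assumption.

import numpy as np
from scipy.spatial import cKDTree

def interpenetration_score(points_a, points_b, tol=0.5):
    """points_a, points_b: (N, 3) arrays of registered 3D points.
    Returns the fraction of points of A whose nearest neighbor on B lies
    closer than `tol` (same units as the data, e.g. millimetres)."""
    tree = cKDTree(points_b)
    dists, _ = tree.query(points_a)
    return float(np.mean(dists < tol))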

5.1 Surface Interpenetration Measure(2/2)

5.2 SA Approach for Range Image Registration(1/2) SA is a stochastic algorithm for local search. Starting from an initial candidate solution, SA iteratively searches for a neighbor solution that is better than the current one. Our SA-based approach has three main stages: 1) initial solution, 2) coarse alignment, and 3) fine alignment, which are illustrated in Fig. 6.
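A hedged sketch of the fine-alignment idea: optimize a rigid transform (three rotations plus three translations) so that a surface-agreement score between the two range images is maximized. SciPy's dual_annealing is used here as a generic simulated-annealing-style optimizer, and the score is the simplified proximity measure from the earlier sketch; the paper's own SA schedule, MSAC coarse alignment, and cost function are not reproduced.

import numpy as np
from scipy.optimize import dual_annealing
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def apply_rigid(params, points):
    """params = (rx, ry, rz, tx, ty, tz): Euler angles in radians plus translation."""
    rot = Rotation.from_euler('xyz', params[:3]).as_matrix()
    return points @ rot.T + np.asarray(params[3:])

def registration_cost(params, probe, gallery_tree, tol=0.5):
    """Negative proximity score, so that minimizing the cost improves the fit."""
    moved = apply_rigid(params, probe)
    dists, _ = gallery_tree.query(moved)
    return -float(np.mean(dists < tol))

# Stand-in point clouds (in millimetres): a synthetic gallery region and a
# probe that is a rigidly perturbed copy of it.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(2000, 3)) * 40.0
probe = apply_rigid([0.05, -0.03, 0.02, 2.0, -1.0, 0.5], gallery)

gallery_tree = cKDTree(gallery)
bounds = [(-0.2, 0.2)] * 3 + [(-10.0, 10.0)] * 3   # small rotations (rad), translations (mm)
result = dual_annealing(registration_cost, bounds,
                        args=(probe, gallery_tree), maxiter=100)
print("best alignment score:", -result.fun)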

5.2 SA Approach for Range Image Registration(2/2) Fig. 6. SA main stages: 1) initial solution, 2) coarse alignment using MSAC, and 3) fine alignment using SIM.

5.3 Modified SA Approach for 3D Face Matching(1/2) We observed that the matching between a neutral face and a face with expression may lead to imprecise alignments. These alignments produce low SIM values that consequently affect the authentication process. The main idea of this approach is to guide the matching between a neutral face and a face with expression toward areas that are more expression invariant (e.g., the nose and forehead regions).
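To illustrate the idea, the sketch below keeps only the probe points lying in roughly expression-invariant sectors (near the nose or above an assumed eyebrow line) before the score is evaluated; the simple coordinate-band test and its parameters are assumptions for illustration and do not correspond to the sectors of Fig. 7.

import numpy as np

def invariant_sector_mask(points, nose_tip, nose_radius=45.0, forehead_offset=25.0):
    """points: (N, 3) face points with y growing upward; nose_tip: (x, y, z).
    Keep points close to the nose tip in the x-y plane or above an assumed
    eyebrow line, i.e. the regions least affected by facial expression."""
    nose_tip = np.asarray(nose_tip, dtype=float)
    d_nose = np.linalg.norm(points[:, :2] - nose_tip[:2], axis=1)
    near_nose = d_nose < nose_radius
    forehead = points[:, 1] > nose_tip[1] + forehead_offset
    return near_nose | forehead

# The registration cost from the previous sketch can then be evaluated on
# probe[invariant_sector_mask(probe, nose_tip)] instead of the full probe.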

5.3 Modified SA Approach for 3D Face Matching(2/2) Fig. 7. Sectors used for the modified SA approach: (a) original approach, (b) face sectors, and (c) brighter sectors used for matching.

6. 3D FACE AUTHENTICATION(1/3) To accomplish 3D face authentication we used the SIM scores computed by matching segmented facial regions. If two face images are from the same person, the matching produces a high SIM value; otherwise, it produces a low SIM value. Each image from the neutral noiseless data set was matched against all the other neutral noiseless images in the FRGC v2 database, and the verification rate was computed at 0 percent FAR.
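For context, here is a small Python sketch of how a verification rate at a fixed FAR can be computed from such scores: the acceptance threshold is placed so that at most the target fraction of impostor (different-person) matches is accepted, and the genuine acceptance rate at that threshold is reported. The score distributions below are synthetic placeholders.

import numpy as np

def verification_rate_at_far(genuine, impostor, target_far=0.001):
    """genuine/impostor: 1D arrays of match scores (higher = more similar).
    Returns (verification rate, threshold) at the requested FAR."""
    impostor = np.sort(np.asarray(impostor))[::-1]        # descending order
    k = int(np.floor(target_far * len(impostor)))         # impostors allowed through
    threshold = impostor[min(k, len(impostor) - 1)]
    accepted = np.asarray(genuine) > threshold
    return float(np.mean(accepted)), float(threshold)

# Synthetic same-person and different-person score distributions:
rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, size=5000)
impostor = rng.normal(0.4, 0.1, size=50000)
vr, thr = verification_rate_at_far(genuine, impostor, target_far=0.001)
print(f"Verification rate {vr:.3f} at 0.1 percent FAR (threshold {thr:.3f})")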

6. 3D FACE AUTHENTICATION(2/3) TABLE 2 Verification Rate for Each Face Region, at 0 Percent FAR

6. 3D FACE AUTHENTICATION(3/3) TABLE 3 Verification Rate Combining All Face Regions, at 0 Percent FAR

6.1 Hierarchical Evaluation Model(1/2) The matching classification is based on two thresholds: 1) a recognition threshold and 2) a rejection threshold. If the matching score is higher than the recognition threshold, both images are assumed to belong to the same person. If it is lower than the rejection threshold, the images are labeled as being from different subjects. If the matching score lies between the two thresholds, no decision can be made and the next region in the hierarchy is used for a new classification attempt.
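The decision rule just described maps directly to a few lines of Python; the sketch below walks the region hierarchy with per-region threshold pairs. Region names, scores, and threshold values are placeholders, not the tuned values from the paper.

def hierarchical_decision(scores_by_region, thresholds):
    """scores_by_region: list of (region_name, SIM score) in hierarchy order.
    thresholds: dict region_name -> (rejection_threshold, recognition_threshold)."""
    for region, score in scores_by_region:
        reject_t, accept_t = thresholds[region]
        if score >= accept_t:
            return "same person", region        # confident acceptance at this level
        if score <= reject_t:
            return "different person", region   # confident rejection at this level
    return "undecided", None                    # hierarchy exhausted without a confident call

# Example with placeholder regions, scores, and thresholds:
decision, level = hierarchical_decision(
    [("circular nose", 0.52), ("elliptical nose", 0.61), ("upper head", 0.71)],
    {"circular nose": (0.30, 0.70),
     "elliptical nose": (0.35, 0.72),
     "upper head": (0.40, 0.68)})
print(decision, "decided at:", level)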

6.1 Hierarchical Evaluation Model(2/2) Fig. 8. Hierarchical evaluation model diagram.

6.2 Extended Hierarchical Evaluation Model (1/2) We also propose an extended version of the hierarchical approach. At each step of the hierarchy, instead of assessing only the matching score obtained at that level, the sum of the scores of all levels computed so far is used. At the end of the hierarchy, the sum over all regions is used to verify whether or not the images belong to the same subject.
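A variation of the previous sketch for this extended model: the running sum of scores is tested at each level, and the final decision falls back on the total sum. The thresholds are again placeholders chosen only for illustration.

def extended_hierarchical_decision(scores_by_region, thresholds, final_threshold):
    """Same inputs as before, but each level tests the cumulative score and the
    last resort is a single threshold on the sum over all regions."""
    running_sum = 0.0
    for region, score in scores_by_region:
        running_sum += score
        reject_t, accept_t = thresholds[region]
        if running_sum >= accept_t:
            return "same person", region
        if running_sum <= reject_t:
            return "different person", region
    verdict = "same person" if running_sum >= final_threshold else "different person"
    return verdict, "sum of all regions"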

6.2 Extended Hierarchical Evaluation Model (2/2) TABLE 4 Scores Computed at Each Level of the Hierarchy

7. EXPERIMENTAL RESULTS(1/2) We used a computer with the following configuration: Linux OS, Intel Pentium D 3.4 GHz, 2 MB cache, and 1 GB of memory. TABLE 5 Average Time Performance Using SA for Face Matching

7. EXPERIMENTAL RESULTS(2/2) TABLE 6 FRGC v2 Data Sets Classification

7.1 Experiment I: Verification TABLE 7 Experiment I: Verification Rate Using 0 Percent and 0.1 Percent FAR Fig. 9. System performance for the “All vs. All” experiment.

7.2 Experiment II: Hierarchical Evaluation Model TABLE 8 Experiment IV: Verification Rate at 0 Percent FAR

7.3 Experiment III: Recognition Rate Improvement TABLE 9 Experiment V: Recognition Rate

FINAL REMARKS The database was manually segmented into different data sets according to the images' facial expression intensity and noise level. When using all images in the database, in the "All vs. All" experiment, faces can still be recognized with a 96.5 percent verification rate at 0.1 percent FAR; these are the best results reported in the literature for this experiment.