Facial Recognition as a Pattern Recognition Problem


Hongcheng Wang, Beckman Institute, UIUC

Contents
- Why Face Recognition?
- Face Recognition Approaches
  - Eigenface
  - FisherFace
  - SVM, LLE, IsoMAP, NN, …
- Experimental Results
- Conclusions

Why Face Recognition?
- Non-intrusive.
- Growing interest in biometric authentication.
- The data required is easily obtained and readily available.

Face Recognition Approaches
- Eigenface: identify discriminating components.
- Fisherface: uses 'within-class' information to maximise class separation.
- Kernel Eigenface vs. Kernel Fisherface: take higher-order correlations into account.
- Others: LLE, IsoMAP, Charting, SVM, neural networks.

Eigenface (Turk and Pentland, 1991) -1
Uses Principal Component Analysis (PCA) to find the components (directions of greatest variance) that best capture the variation among images of faces.

Eigenface -2
- Create an image subspace (face space) which best discriminates between faces.
- Similar faces occupy nearby points in face space.
- Compare two faces by projecting the images into face space and measuring the Euclidean distance between them.

Eigenface -3
- Any image can be flattened into a vector of its pixel values; a 1x2 pixel image, for example, becomes a 2-D vector, so each image occupies a single point in image space.
- Similar images are near each other in image space; different images are far from each other.
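
A toy illustration of this in MATLAB (the pixel values are invented for the example):

a = [ 50  60];    % a 1x2 image A, flattened into a vector
b = [ 52  61];    % a similar image B
c = [200  10];    % a very different image C
norm(a - b)       % ans = 2.24...  -> A and B are near each other in image space
norm(a - c)       % ans = 158.1... -> A and C are far apart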

Eigenface -4
- A 256x256 pixel image of a face occupies a single point in 65,536-dimensional image space.
- Images of faces occupy only a small region of this large image space, and different faces should occupy different areas of that region.
- We can therefore identify a face by finding the nearest 'known' face in image space.

Eigenface -5
- However, even tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically.
- Large amounts of storage are required.
- What should we do? Dimensionality reduction!

Eigenface -6: Training Step
1. Align a set of face images (the training set T1, T2, …, TM).
2. Compute the average face image: ψ = (1/M) Σn Tn.
3. Compute the difference image for each image in the training set: φi = Ti − ψ.
4. Compute the covariance matrix of this set of difference images: C = (1/M) Σn φn φn^T.
5. Compute the eigenvectors of the covariance matrix.
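
One practical point from Turk and Pentland's paper, worth noting here: for 256x256 images, C is a 65,536 x 65,536 matrix, but it has at most M nonzero eigenvalues, so the eigenvectors are obtained from the much smaller M x M matrix A^T A, where A = [φ1 … φM]. A minimal MATLAB sketch (variable names are ours, not from the slides):

% A: d-by-M matrix whose columns are the mean-subtracted training images
L = A' * A;                 % small M-by-M matrix instead of the d-by-d covariance
[V, D] = eig(L);            % eigenvectors v_i of A'*A
U = A * V;                  % u_i = A*v_i are eigenvectors of A*A' (the eigenfaces)
for i = 1:size(U,2)         % normalise each eigenface to unit length
    U(:,i) = U(:,i) / norm(U(:,i));
end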

Eigenface -7
Here is an example of visualising the eigenvectors of the covariance matrix (which are called EIGENFACES): these are the first 4 eigenvectors, from a training set of 23 images.
- Selecting only the top M' eigenfaces reduces the dimensionality.
- Too few eigenfaces results in too much information loss, and hence less discrimination between faces.
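
The slides do not say how to choose M'; one common heuristic (our addition, not from the talk) is to keep the smallest number of eigenfaces that retains, say, 95% of the total variance:

evals = sort(diag(D), 'descend');                 % eigenvalues of the covariance matrix
k = find(cumsum(evals) / sum(evals) >= 0.95, 1);  % smallest k retaining 95% of the variance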

Eigenface -8: Recognition Step
- Project the (mean-subtracted) test image into face space: ω = W^T (T − ψ), where the columns of W are the eigenfaces.
- Compare the projection ω with the stored projections of the known faces, e.g. by Euclidean distance, and report the nearest one.

Eigenface -9: Matlab code

% ---- Training ----
% (P, N, N1, N2 and tol are assumed to be defined in faces.mat or earlier)
load faces.mat                    % c: N1*N2-by-P matrix, one training image per column
mn = mean(c')';                   % compute the mean image
for i = 1:P
    msc(:,i) = c(:,i) - mn;       % mean-subtracted images
end
% Find an orthonormal basis using the Karhunen-Loeve transform (PCA)
cov = msc * msc';                 % covariance matrix
[V,D] = eig(cov);
vects = V;
% Order the eigenvectors according to the eigenvalues
for i = 1:N
    evals(i) = D(i,i);
end
[a,b] = sort(evals);
for i = 1:N
    ind = b(N-i+1);
    if (a(N-i+1) > tol)           % keep only the non-zero eigenvalues
        u(:,i) = vects(:,ind);
        ev(i) = D(ind,ind);
    end
end
proj = u' * msc;                  % projections of the training images

% ---- Recognition ----
test = reshape(double(imread('images/r3.jpg')), N1*N2, 1);
mst = test - mn;                  % subtract the mean image
projt = u' * mst;                 % project into the same eigenspace defined by u
% Similarity measure: the L2 norm (Euclidean distance) in eigenspace
proj_train = u' * (c - [mn mn mn mn]);         % (hard-coded for 4 training images)
diff = proj_train - [projt projt projt projt];
L = zeros(1,4);
for j = 1:4
    for i = 1:3                   % (hard-coded for 3 retained eigenfaces)
        L(j) = L(j) + (diff(i,j))^2;
    end
end
[a,b] = sort(L);
fprintf('the image we found is: ')
display(b(1))

Eigenface -10
Advantages:
- Discovers the structure of data lying near a linear subspace of the input space.
- Non-iterative, globally optimal solution.
Disadvantages:
- Not capable of discovering nonlinear degrees of freedom.
- Registration and scaling issues.
- Very sensitive to changes in lighting conditions.
- The scatter being maximised is due not only to the between-class scatter that is useful for classification, but also to the within-class scatter, which is unwanted information for classification purposes.
So the PCA projection is optimal for reconstruction from a low-dimensional basis, but may not be optimal for discrimination.

Fisherface -1
- Uses Linear Discriminant Analysis (LDA), also known as Fisher's Linear Discriminant (FLD).
- Eigenfaces attempt to maximise the total scatter of the training images in face space, while Fisherfaces attempt to maximise the between-class scatter while minimising the within-class scatter. In other words, the projection moves images of the same face closer together, while moving images of different faces further apart.
- 'Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection', Belhumeur, Hespanha and Kriegman, 1997.

Fisherface -2
(Figure: two candidate projections of the same data; a poor projection mixes the classes, a good projection separates them.)

FisherFace -3
- N sample images: {x1, …, xN}
- C classes: {X1, …, XC}
- Average of each class i: μi = (1/Ni) Σ_{x∈Xi} x
- Total average: μ = (1/N) Σ_{k=1..N} xk

FisherFace -4
- Scatter of class i: Si = Σ_{x∈Xi} (x − μi)(x − μi)^T
- Within-class scatter: SW = Σ_{i=1..C} Si
- Between-class scatter: SB = Σ_{i=1..C} Ni (μi − μ)(μi − μ)^T
- Total scatter: ST = SW + SB

FisherFace -5
After projection y = W^T x:
- Between-class scatter (of the y's): W^T SB W
- Within-class scatter (of the y's): W^T SW W

FisherFace -6
(Figure: the FLD projection gives good separation between the classes.)

FisherFace -7
The wanted projection: Wopt = argmax_W |W^T SB W| / |W^T SW W|
How is it found? As the solution of the generalized eigenvalue problem SB wi = λi SW wi: the columns of Wopt are the generalized eigenvectors with the largest eigenvalues, of which at most C − 1 are non-trivial.
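
In MATLAB this is a single call (a sketch, assuming SW is nonsingular; Sb, Sw and C as in the formulas above):

[W, Lambda] = eig(Sb, Sw);                  % generalized eigenvectors: Sb*w = lambda*Sw*w
[~, order] = sort(diag(Lambda), 'descend');
Wopt = W(:, order(1:C-1));                  % keep the C-1 most discriminative directions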

FisherFace -8
In face recognition the data dimension is much larger than the number of samples, so the within-class scatter matrix SW (of rank at most N − C) is singular.

FisherFace -9: PCA + FLD
Fix the singularity by projecting in two stages (see the sketch below):
- Project with PCA to an (N − C)-dimensional space: Wpca = argmax_W |W^T ST W|
- Project with FLD to a (C − 1)-dimensional space: Wfld = argmax_W |W^T Wpca^T SB Wpca W| / |W^T Wpca^T SW Wpca W|
- Overall projection: Wopt^T = Wfld^T Wpca^T
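
A minimal MATLAB sketch of the two-stage projection (variable names are ours; X holds the images as columns, with N samples in C classes, and Sb, Sw as defined above):

% Stage 1: PCA down to N - C dimensions, so the projected Sw becomes nonsingular
Xc = X - repmat(mean(X,2), 1, size(X,2));  % mean-subtracted data
[U, ~, ~] = svd(Xc, 'econ');
Wpca = U(:, 1:N-C);

% Stage 2: FLD in the reduced space
[V, Lam] = eig(Wpca'*Sb*Wpca, Wpca'*Sw*Wpca);
[~, order] = sort(diag(Lam), 'descend');
Wfld = V(:, order(1:C-1));

Wopt = Wpca * Wfld;                        % overall projection: y = Wopt' * x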

FisherFace -10
As with the Eigenfaces, the vectors of Wopt are of dimension d and can thus be displayed as images, the 'Fisherfaces'.
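
To actually look at one (a sketch; h and w are the image height and width, assumed known, with h*w = d):

f1 = reshape(Wopt(:,1), h, w);     % first Fisherface back in image form
imagesc(f1); colormap gray; axis image off;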

FisherFace -11 %ignore the zero eigens and sort in descend order nz_eval_ind = find(eval>0.0001); nz_eval = eval(nz_eval_ind); for i=1:length(nz_eval_ind); nz_evec(:,i) = evec(:,nz_eval_ind(i)); end [seval,Ind] = sort(nz_eval); Ind = fliplr(Ind); %buld the eigen space v=ones(p,N-1); for i=1:N-1 v(:,i) = nz_evec(:,Ind(i)); for j=1:p v(j,i)=real(v(j,i)); %project images onto the eigen space Y = zeros(N-1,N*M); for i=1:N*M Y(:,i) = v'*X(:,i); N = 4; %Input how many classes are there M = 6; %Input how many images are in each class; p = **; %Input how many pixels are in one image; load face %read in the training images %calculate the mean for each class m = zeros(p,N); Sw = zeros(p,p); Sb = zeros(p,p); for i = 1:N m(:,i) = mean(X(:,((i-1)*M+1):i*M)')'; S = zeros(p,p); %calculate the within class scatter matrix for j = ((i-1)*M+1):i*M S = S + (X(:,j)-m(:,i))*(X(:,j)-m(:,i))'; end Sw = Sw+S; %calculate the between class scatter matrix Sb = Sb+(m(:,i)-total_m)*(m(:,i)-total_m)'; %calculate the generalized eigenvectors and eigen values [evec,eval] = eig(Sb,Sw); e = real(evec); 2019/5/1

Experimental Results -1
Variation in facial expression, eyewear, and lighting.
- Input: 160 images of 16 people; each person photographed with and without glasses, under 3 lighting conditions, and with 5 expressions.
- Train: 159 images; Test: the 1 image left out (leave-one-out).

Experimental Results -2

Conclusions
- All methods perform well when conditions do not change much.
- Fisherfaces gives the best results.

The End. Thank You!