Face Recognition using PCA (Eigenfaces) and LDA (Fisherfaces)

Slides adapted from Pradeep Buddharaju

Principal Component Analysis
An N x N pixel face image, represented as a vector, occupies a single point in N²-dimensional image space. Because face images are similar in overall configuration, they are not randomly distributed in this huge image space and can therefore be described by a low-dimensional subspace.
Main idea of PCA for faces: find the vectors that best account for the variation of face images in the entire image space. These vectors are called eigenvectors. Construct a face space from these vectors and project the images into it; the basis vectors themselves are the eigenfaces.

Image Representation
A training set of M images of size N x N is represented by vectors of size N²: Γ1, Γ2, Γ3, …, ΓM.
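
A minimal sketch of this step in Python with NumPy (the file names and image count are hypothetical; any loader that yields N x N grayscale arrays works):

```python
import numpy as np
from PIL import Image

# Load M training faces and flatten each N x N image into a
# column vector of length N^2.
paths = ["face_%02d.png" % i for i in range(1, 11)]   # M = 10 (hypothetical files)
faces = [np.asarray(Image.open(p).convert("L"), dtype=np.float64).ravel()
         for p in paths]
Gamma = np.column_stack(faces)   # shape (N^2, M); column k is Γ_k
```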

Average Image and Difference Images
The average of the training set is defined by Ψ = (1/M) ∑i=1…M Γi.
Each face differs from the average by the vector Φi = Γi − Ψ.
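
Continuing the sketch:

```python
# Mean face Ψ and difference vectors Φ_i = Γ_i − Ψ
Psi = Gamma.mean(axis=1)       # shape (N^2,)
Phi = Gamma - Psi[:, None]     # shape (N^2, M); column i is Φ_i
A = Phi                        # A = [Φ_1, …, Φ_M]
```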

Covariance Matrix
The covariance matrix is constructed as C = AAᵀ, where A = [Φ1, …, ΦM]; this matrix has size N² x N².
Finding the eigenvectors of an N² x N² matrix is intractable. Hence, use the matrix AᵀA, which is only M x M, and find the eigenvectors of this small matrix.
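
A sketch of the small M x M eigenproblem:

```python
# Decompose the M x M matrix A^T A instead of the N^2 x N^2 matrix A A^T.
L = A.T @ A                        # shape (M, M), symmetric
eigvals, V = np.linalg.eigh(L)     # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]  # re-sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]
```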

Eigenvalues and Eigenvectors: Definition
If v is a nonzero vector and λ is a number such that Av = λv, then v is said to be an eigenvector of A with eigenvalue λ.
Example: for A = [[2, 0], [0, 3]], the vector v = (1, 0)ᵀ satisfies Av = 2v, so v is an eigenvector of A with eigenvalue λ = 2.
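
The same example checked numerically (a minimal sketch):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]                            # an eigenvector of A
assert np.allclose(A @ v, eigvals[0] * v)    # A v = λ v
```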

How to Calculate Eigenvectors?

Eigenvectors of the Covariance Matrix
Consider the eigenvectors vi of AᵀA, such that AᵀA vi = μi vi.
Premultiplying both sides by A, we have AAᵀ(A vi) = μi (A vi).
Hence A vi is an eigenvector of the covariance matrix C = AAᵀ, with the same eigenvalue μi.

Face Space
The eigenvectors of the covariance matrix are ui = A vi. Rendered as images, the ui resemble ghostly facial images, hence the name eigenfaces. Together they span the face space.
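
Continuing the sketch, forming and normalizing the eigenfaces (keeping only K of them is a design choice, not fixed by the slides):

```python
# Eigenfaces u_i = A v_i, normalized to unit length
U = A @ V                          # shape (N^2, M)
U /= np.linalg.norm(U, axis=0)     # each column is one eigenface
K = 8                              # hypothetical number of components
U = U[:, :K]                       # keep the K leading eigenfaces
```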

Projection into Face Space
A face image can be projected into this face space by Ωk = Uᵀ(Γk − Ψ), k = 1, …, M.
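
In the sketch, this is a single matrix product over all centered training images:

```python
# Ω_k = U^T (Γ_k − Ψ) for all training images at once
Omegas = U.T @ Phi     # shape (K, M); column k is Ω_k
```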

Recognition
The test image Γ is projected into the face space to obtain a vector Ω: Ω = Uᵀ(Γ − Ψ).
The distance of Ω to each face class is defined by εk² = ||Ω − Ωk||², k = 1, …, M.
A distance threshold θc is half the largest distance between any two face images: θc = ½ maxj,k ||Ωj − Ωk||, j, k = 1, …, M.
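
Continuing the sketch ("test.png" is a hypothetical test image):

```python
# Project the test image into the face space
gamma = np.asarray(Image.open("test.png").convert("L"),
                   dtype=np.float64).ravel()
Omega = U.T @ (gamma - Psi)

# Distances ε_k to each training projection
eps_k = np.linalg.norm(Omegas - Omega[:, None], axis=0)

# Threshold θ_c: half the largest pairwise distance between projections
pairwise = np.linalg.norm(Omegas[:, :, None] - Omegas[:, None, :], axis=0)
theta_c = 0.5 * pairwise.max()
```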

Recognition (continued)
Find the distance ε between the original image Γ and its reconstruction from the eigenface space Γf: ε² = ||Γ − Γf||², where Γf = U Ω + Ψ.
Recognition process:
If ε ≥ θc, the input image is not a face image.
If ε < θc and εk ≥ θc for all k, the input image contains an unknown face.
If ε < θc and εk* = mink{εk} < θc, the input image contains the face of individual k*.
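
The complete decision rule, continuing the sketch:

```python
# Reconstruction from the face space and its error ε
gamma_f = U @ Omega + Psi
eps = np.linalg.norm(gamma - gamma_f)

if eps >= theta_c:
    result = "not a face"
elif eps_k.min() >= theta_c:
    result = "unknown face"
else:
    result = "individual %d" % eps_k.argmin()   # k* = argmin_k ε_k
```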

Limitations of the Eigenfaces Approach
Variations in lighting conditions: different lighting for enrolment and query; bright light causing image saturation.
Differences in pose: head orientation changes make 2D feature distances appear distorted.
Expression: changes in feature location and shape.

Linear Discriminant Analysis
PCA does not use class information: PCA projections are optimal for reconstruction from a low-dimensional basis, but they may not be optimal from a discrimination standpoint.
LDA is an enhancement to PCA: it constructs a discriminant subspace that minimizes the scatter within images of the same class and maximizes the scatter between images of different classes.

Mean Images
Let X1, X2, …, Xc be the face classes in the database, and let each face class Xi, i = 1, 2, …, c, have k facial images xj, j = 1, 2, …, k.
We compute the mean image μi of each class Xi as: μi = (1/k) ∑j=1…k xj.
Now, the mean image μ of all the classes in the database can be calculated as: μ = (1/c) ∑i=1…c μi.
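
A sketch of the class means (the labels array is hypothetical):

```python
# labels[k] gives the class id of training image k
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])   # hypothetical class ids
X = Gamma                                  # one flattened image per column
classes = np.unique(labels)
mu_i = np.column_stack([X[:, labels == c].mean(axis=1) for c in classes])
mu = mu_i.mean(axis=1)                     # mean of the class means
```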

Scatter Matrices
We calculate the within-class scatter matrix as: SW = ∑i=1…c ∑xj∈Xi (xj − μi)(xj − μi)ᵀ.
We calculate the between-class scatter matrix as: SB = ∑i=1…c Ni (μi − μ)(μi − μ)ᵀ, where Ni is the number of images in class Xi.
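
Continuing the sketch (in practice these are formed after PCA reduction, since N² x N² matrices are intractable, as the next slide notes):

```python
d = X.shape[0]
S_W = np.zeros((d, d))
S_B = np.zeros((d, d))
for idx, c in enumerate(classes):
    Xc = X[:, labels == c]
    dw = Xc - mu_i[:, [idx]]
    S_W += dw @ dw.T                        # within-class scatter
    db = mu_i[:, idx] - mu
    S_B += Xc.shape[1] * np.outer(db, db)   # between-class scatter
```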

Projection
We form the product SW⁻¹SB and then compute the eigenvectors of this product, after first reducing the dimension of the feature space: use the same technique as in the eigenfaces approach to reduce dimensionality before computing the eigenvectors (with few samples in N² dimensions, SW would otherwise be singular).
Form a matrix U that represents all eigenvectors of SW⁻¹SB by placing each eigenvector ui as a column of that matrix.
Each face image xj ∈ Xi can be projected into this face space by the operation Ωj = Uᵀ(xj − μ).
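
A compact sketch of the whole recipe on these slides; the PCA dimension n_pca is an assumption and should be at most M − c so that SW is invertible:

```python
import numpy as np

def fisherfaces(X, labels, n_pca):
    """X: (d, M) data matrix, one flattened image per column."""
    classes = np.unique(labels)
    mu = X.mean(axis=1, keepdims=True)
    Phi = X - mu
    # PCA via the small M x M eigenproblem, as in the eigenfaces slides
    _, V = np.linalg.eigh(Phi.T @ Phi)           # ascending eigenvalues
    W_pca = Phi @ V[:, ::-1][:, :n_pca]          # top n_pca directions
    W_pca /= np.linalg.norm(W_pca, axis=0)
    Y = W_pca.T @ Phi                            # reduced-dimension data
    # Class means and overall mean, as on the Mean Images slide
    means = np.column_stack([Y[:, labels == c].mean(axis=1) for c in classes])
    mu_y = means.mean(axis=1)
    # Scatter matrices in the reduced space
    S_W = np.zeros((n_pca, n_pca))
    S_B = np.zeros((n_pca, n_pca))
    for i, c in enumerate(classes):
        Yc = Y[:, labels == c]
        dw = Yc - means[:, [i]]
        S_W += dw @ dw.T
        db = means[:, i] - mu_y
        S_B += Yc.shape[1] * np.outer(db, db)
    # Eigenvectors of S_W^{-1} S_B, sorted by decreasing eigenvalue
    evals, evecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(evals.real)[::-1][:len(classes) - 1]
    W_lda = evecs[:, order].real
    return W_pca @ W_lda       # overall projection U; Ω = U^T (x − mu)
```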

Testing
Same as the eigenfaces approach.

References
Turk, M., Pentland, A.: Eigenfaces for recognition. Journal of Cognitive Neuroscience 3 (1991) 71–86.
Belhumeur, P. N., Hespanha, J. P., Kriegman, D. J.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997) 711–720.