CS 485/685 Computer Vision
Face Recognition Using Principal Components Analysis (PCA)
M. Turk, A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, 3(1), pp. 71-86, 1991.

2 Principal Component Analysis (PCA)
Pattern recognition in high-dimensional spaces
−Problems arise when performing recognition in a high-dimensional space (curse of dimensionality).
−Significant improvements can be achieved by first mapping the data into a lower-dimensional sub-space.
−The goal of PCA is to reduce the dimensionality of the data while retaining as much information as possible from the original dataset.

3 Principal Component Analysis (PCA)
Dimensionality reduction
−PCA allows us to compute a linear transformation that maps data from a high-dimensional space to a lower-dimensional sub-space: y = W x, where W is a K x N matrix with K < N.

4 Principal Component Analysis (PCA)
Lower dimensionality basis
−Approximate vectors by finding a basis in an appropriate lower dimensional space.
(1) Higher-dimensional space representation: x = a_1 v_1 + a_2 v_2 + ... + a_N v_N, where v_1, v_2, ..., v_N is a basis of the N-dimensional space.
(2) Lower-dimensional space representation: x̂ = b_1 u_1 + b_2 u_2 + ... + b_K u_K, where u_1, u_2, ..., u_K is a basis of the K-dimensional sub-space (K < N).

5 Principal Component Analysis (PCA)
Information loss
−Dimensionality reduction implies information loss!
−PCA preserves as much information as possible, that is, it minimizes the error ‖x − x̂‖ between each vector and its approximation.
−How should we determine the best lower-dimensional sub-space?

6 Principal Component Analysis (PCA)
Methodology
−Suppose x_1, x_2, ..., x_M are N x 1 vectors.
−Step 1: compute the sample mean x̄ = (1/M) Σ_i x_i.
−Step 2: subtract the mean (i.e., center the data at zero): Φ_i = x_i − x̄.
−Step 3: form the N x M matrix A = [Φ_1 Φ_2 ... Φ_M] and compute the sample covariance matrix C = (1/M) Σ_i Φ_i Φ_i^T = (1/M) A A^T.
−Step 4: compute the eigenvalues of C: λ_1 ≥ λ_2 ≥ ... ≥ λ_N.
−Step 5: compute the corresponding eigenvectors of C: u_1, u_2, ..., u_N.

7 Principal Component Analysis (PCA)
Methodology – cont.
−Since C is symmetric, u_1, u_2, ..., u_N form an orthogonal basis, so any centered vector can be written as x − x̄ = b_1 u_1 + b_2 u_2 + ... + b_N u_N.
−Step 6 (dimensionality reduction): keep only the terms corresponding to the K largest eigenvalues: x̂ − x̄ = b_1 u_1 + ... + b_K u_K, with K << N.
−The representation of x̂ − x̄ in the basis u_1, ..., u_K is the coefficient vector [b_1, b_2, ..., b_K].
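
A minimal NumPy sketch of the steps above (function and variable names are illustrative, not from the slides):

```python
import numpy as np

def pca_basis(X, K):
    """Top-K PCA basis from X of shape (N, M): M data vectors stored as columns."""
    x_bar = X.mean(axis=1)                     # Step 1: sample mean, shape (N,)
    Phi = X - x_bar[:, None]                   # Step 2: center the data
    C = (Phi @ Phi.T) / X.shape[1]             # Step 3: covariance matrix, N x N
    eigvals, eigvecs = np.linalg.eigh(C)       # Steps 4-5: eigen-decomposition
    order = np.argsort(eigvals)[::-1][:K]      # keep the K largest eigenvalues
    return x_bar, eigvals[order], eigvecs[:, order]
```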

8 Principal Component Analysis (PCA)
Linear transformation implied by PCA
−The linear transformation R^N → R^K that performs the dimensionality reduction is b_i = u_i^T (x − x̄), for i = 1, ..., K (i.e., simply computing the coefficients of the linear expansion).
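
In code, this projection is a single matrix-vector product (continuing the illustrative pca_basis sketch above, with the top-K eigenvectors stored as the columns of U):

```python
def project(x, x_bar, U):
    """Coefficients b_i = u_i^T (x - x_bar) of an N-dimensional vector x."""
    return U.T @ (x - x_bar)
```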

9 Principal Component Analysis (PCA)
Geometric interpretation
−PCA projects the data along the directions where the data varies the most.
−These directions are determined by the eigenvectors of the covariance matrix corresponding to the largest eigenvalues.
−The magnitude of the eigenvalues corresponds to the variance of the data along the eigenvector directions.

10 Principal Component Analysis (PCA)
How to choose K (i.e., the number of principal components)?
−To choose K, use the following criterion: (λ_1 + λ_2 + ... + λ_K) / (λ_1 + λ_2 + ... + λ_N) > threshold (e.g., 0.9 or 0.95).
−In this case, we say that we “preserve” 90% or 95% of the information in our data.
−If K = N, then we “preserve” 100% of the information in our data.
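
A small sketch of this criterion (assumes the eigenvalues are sorted in decreasing order; the default threshold is illustrative):

```python
import numpy as np

def choose_K(eigvals, threshold=0.95):
    """Smallest K whose leading eigenvalues capture `threshold` of the total variance."""
    ratio = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(ratio, threshold) + 1)
```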

11 Principal Component Analysis (PCA)
What is the error due to dimensionality reduction?
−The original vector x can be reconstructed (approximately) from its principal components: x̂ = x̄ + b_1 u_1 + ... + b_K u_K.
−PCA minimizes the reconstruction error e = ‖x − x̂‖.
−It can be shown that the mean squared reconstruction error equals the sum of the eigenvalues of the discarded components: λ_{K+1} + λ_{K+2} + ... + λ_N.
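
A sketch of reconstruction and its error, continuing the earlier illustrative snippets:

```python
import numpy as np

def reconstruct(b, x_bar, U):
    """Rebuild x_hat = x_bar + sum_i b_i u_i from the K coefficients in b."""
    return x_bar + U @ b

def reconstruction_error(x, b, x_bar, U):
    """Error for one vector; its square, averaged over the data,
    equals the sum of the discarded eigenvalues."""
    return np.linalg.norm(x - reconstruct(b, x_bar, U))
```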

12 Principal Component Analysis (PCA)
Standardization
−The principal components are dependent on the units used to measure the original variables as well as on the range of values they assume.
−You should always standardize the data prior to using PCA.
−A common standardization method is to transform all the data to have zero mean and unit standard deviation: x'_i = (x_i − μ_i) / σ_i, where μ_i and σ_i are the mean and standard deviation of the i-th variable.
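
A short NumPy sketch of this standardization (each row of X is treated as one variable; assumes no variable has zero variance):

```python
import numpy as np

def standardize(X):
    """Give each variable (row of X) zero mean and unit standard deviation."""
    mu = X.mean(axis=1, keepdims=True)
    sigma = X.std(axis=1, keepdims=True)
    return (X - mu) / sigma
```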

13 Application to Faces
Computation of the low-dimensional basis (i.e., eigenfaces):
−Represent each training face image as an N² x 1 vector Γ_i (by stacking its pixel columns).
−Compute the average face Ψ = (1/M) Σ_i Γ_i and subtract it from every face: Φ_i = Γ_i − Ψ.

14 Application to Faces
Computation of the eigenfaces – cont.
−Form the N² x M matrix A = [Φ_1 Φ_2 ... Φ_M]; the covariance matrix of the faces is C = A A^T.
−C is N² x N², so computing its eigenvectors directly is impractical for typical image sizes.

15 Application to Faces
Computation of the eigenfaces – cont.
−Instead, consider the much smaller M x M matrix A^T A and compute its eigenvectors v_i: A^T A v_i = μ_i v_i.
−Pre-multiplying by A shows that u_i = A v_i are eigenvectors of A A^T with the same eigenvalues: A A^T (A v_i) = μ_i (A v_i).

16 Application to Faces
Computation of the eigenfaces – cont.
−The vectors u_i = A v_i, normalized to unit length, are the eigenfaces.
−A^T A has at most M (usually M << N²) non-zero eigenvalues, and in practice only the K eigenfaces corresponding to the largest eigenvalues are kept.
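
A hedged NumPy sketch of this “small matrix” trick (array shapes and names are illustrative, not taken from the slides):

```python
import numpy as np

def eigenfaces(Gamma, K):
    """Gamma: (N2, M) array whose M columns are vectorized training faces.
    Returns the mean face Psi (length N2) and the top-K eigenfaces as columns of U."""
    Psi = Gamma.mean(axis=1)                       # average face
    A = Gamma - Psi[:, None]                       # centered faces, N2 x M
    # eigen-decompose the small M x M matrix A^T A instead of the huge A A^T
    mu, V = np.linalg.eigh(A.T @ A)
    order = np.argsort(mu)[::-1][:K]               # K largest eigenvalues
    U = A @ V[:, order]                            # map back: u_i = A v_i
    U /= np.linalg.norm(U, axis=0)                 # normalize each eigenface
    return Psi, U
```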

17 Eigenfaces example Training images

18 Eigenfaces example
Top eigenvectors: u_1, ..., u_K
Mean: μ

19 Application to Faces
Representing faces in this basis
−Each centered face Φ = Γ − Ψ is represented by its projection coefficients w_i = u_i^T Φ, i = 1, ..., K.
−Face reconstruction: Γ̂ = Ψ + w_1 u_1 + w_2 u_2 + ... + w_K u_K.
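
An illustrative usage of the eigenfaces() sketch above, projecting and reconstructing one face (the random data stands in for real face images):

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = rng.random((64 * 64, 50))     # 50 vectorized 64x64 "faces" (illustrative data)
gamma_new = rng.random(64 * 64)       # a new face vector

Psi, U = eigenfaces(Gamma, K=20)      # eigenfaces() as sketched earlier
w = U.T @ (gamma_new - Psi)           # coefficients w_i = u_i^T (gamma_new - Psi)
gamma_hat = Psi + U @ w               # reconstruction from the K coefficients
```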

20 Eigenfaces
Case Study: Eigenfaces for Face Detection/Recognition
−M. Turk, A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
Face Recognition
−The simplest approach is to think of it as a template matching problem.
−Problems arise when performing recognition in a high-dimensional space.
−Significant improvements can be achieved by first mapping the data into a lower-dimensional space.

21 Eigenfaces
Face Recognition Using Eigenfaces
−Given an unknown face image Γ, subtract the mean face (Φ = Γ − Ψ) and project onto the eigenface space: Φ̂ = Σ_{i=1..K} w_i u_i, where w_i = u_i^T Φ.
−Represent the face by its coefficient vector Ω = [w_1, w_2, ..., w_K]^T.
−Find the stored face class l that minimizes the distance e_r = ‖Ω − Ω_l‖; if e_r is below a threshold, the face is recognized as belonging to class l.
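
A minimal recognition sketch under these definitions (Omegas would hold one stored coefficient vector per known person; names and the thresholding rule are illustrative):

```python
import numpy as np

def recognize(gamma, Psi, U, Omegas, labels, threshold):
    """Nearest-neighbour matching in eigenface coefficient space.
    Omegas: (P, K) array with one stored coefficient vector per known person."""
    Omega = U.T @ (gamma - Psi)                      # project the new face
    dists = np.linalg.norm(Omegas - Omega, axis=1)   # e_r to each stored face
    best = int(np.argmin(dists))
    return labels[best] if dists[best] < threshold else None   # None: unknown face
```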

22 Eigenfaces
Face Recognition Using Eigenfaces – cont.
−The distance e_r is called the distance within face space (difs).
−The Euclidean distance can be used to compute e_r; however, the Mahalanobis distance has been shown to work better:
Euclidean distance: e_r² = Σ_{i=1..K} (w_i − w_i^l)²
Mahalanobis distance: e_r² = Σ_{i=1..K} (1/λ_i) (w_i − w_i^l)²
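
A sketch of both distances in coefficient space (eigvals are the λ_i retained by PCA; names are illustrative):

```python
import numpy as np

def euclidean_difs(Omega, Omega_l):
    return np.linalg.norm(Omega - Omega_l)

def mahalanobis_difs(Omega, Omega_l, eigvals):
    """Weight each squared coefficient difference by 1 / lambda_i."""
    return np.sqrt(np.sum((Omega - Omega_l) ** 2 / eigvals))
```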

23 Face detection and recognition
[Figure: detection locates the face; recognition identifies it as “Sally”.]

24 Eigenfaces
Face Detection Using Eigenfaces
−The distance e_d = ‖Φ − Φ̂‖ between a (centered) image and its projection onto face space is called the distance from face space (dffs).
−A small e_d indicates that the image region is likely to contain a face.
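
A small sketch of dffs under the earlier definitions:

```python
import numpy as np

def dffs(gamma, Psi, U):
    """Distance from face space: how well the eigenfaces explain the input."""
    Phi = gamma - Psi
    Phi_hat = U @ (U.T @ Phi)          # projection of Phi onto face space
    return np.linalg.norm(Phi - Phi_hat)
```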

25 Eigenfaces
Reconstruction of faces and non-faces
−A reconstructed face looks like a face.
−A reconstructed non-face looks like a face again!
[Figure: input images and their reconstructions]

26 Eigenfaces
Face Detection Using Eigenfaces – cont.
−Case 1: in face space AND close to a given face.
−Case 2: in face space but NOT close to any given face.
−Case 3: not in face space AND close to a given face.
−Case 4: not in face space and NOT close to any given face.
−Only Case 1 corresponds to recognizing a known person; Case 2 is an unknown face; Cases 3 and 4 indicate the image is not a face (see the sketch below).
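
A sketch that combines dffs and difs into this four-case decision (reusing the dffs() sketch above; the thresholds are illustrative):

```python
import numpy as np

def classify(gamma, Psi, U, Omegas, labels, face_thresh, id_thresh):
    """Combine distance-from-face-space with distance-within-face-space."""
    e_d = dffs(gamma, Psi, U)                           # distance from face space
    if e_d >= face_thresh:
        return "not a face"                             # cases 3 and 4
    Omega = U.T @ (gamma - Psi)
    dists = np.linalg.norm(Omegas - Omega, axis=1)      # difs to each known face
    if dists.min() < id_thresh:
        return labels[int(np.argmin(dists))]            # case 1: known person
    return "unknown face"                               # case 2
```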

27 Reconstruction using partial information
Robust to partial face occlusion.
[Figure: partially occluded inputs and their reconstructions]

28 Eigenfaces
Face detection, tracking, and recognition
[Figure: visualization of dffs]

29 Limitations
Background changes cause problems
−De-emphasize the outside of the face, e.g., by multiplying the input image by a 2D Gaussian window centered on the face (a small sketch follows below).
Light changes degrade performance
−Light normalization helps.
Performance decreases quickly with changes to face size
−Multi-scale eigenspaces.
−Scale the input image to multiple sizes.
Performance decreases with changes to face orientation (but not as fast as with scale changes)
−In-plane rotations are easier to handle.
−Out-of-plane rotations are more difficult to handle.
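
A hedged sketch of the Gaussian-window idea mentioned above (the width sigma is illustrative):

```python
import numpy as np

def gaussian_window(image, sigma=0.4):
    """De-emphasize pixels far from the image centre with a 2D Gaussian weight."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((y - cy) / h) ** 2 + ((x - cx) / w) ** 2      # normalized squared radius
    return image * np.exp(-r2 / (2 * sigma ** 2))
```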

30 Limitations Not robust to misalignment

31 Limitations
−PCA assumes that the data follow a Gaussian distribution (mean µ, covariance matrix Σ).
−When this assumption is violated, the shape of a dataset is not well described by its principal components.
[Figure: an example dataset whose shape is poorly captured by its principal components]

32 Limitations
−PCA is not always an optimal dimensionality-reduction procedure for classification purposes: the directions of maximum variance are not necessarily the directions that best discriminate between classes.