Lecture 8: Eigenfaces and Shared Features


1 Lecture 8: Eigenfaces and Shared Features
CAP 5415: Computer Vision Fall 2006

2 What I did on Monday

3 What I did on Monday

4 Questions on PS2?

5 Task: Face Recognition
From “Bayesian Face Recognition” by Moghaddam, Jebara, and Pentland

6 Easiest way to recognize
Compute the squared difference between the test image and each of the examples
This is a nearest-neighbor classifier
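A rough sketch of this classifier in MATLAB (the names testImg, examples, and labels are illustrative, not from the lecture):

% Nearest-neighbor face recognition on raw pixels.
% testImg: h x w test image; examples: h x w x M stack of example
% faces; labels: length-M cell array of identities.
best = Inf;
for i = 1:size(examples, 3)
    d = sum(sum((double(testImg) - double(examples(:,:,i))).^2));
    if d < best
        best = d;
        guess = labels{i};   % identity of the closest example so far
    end
end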

7 Easiest way to recognize
Problems:
Slow
Distracted by things not related to faces

8 Easiest way to recognize
How can we find a way to represent a face with a few numbers?

9 Linear Algebra to the Rescue
Raster-Scan the image into a vector

10 Stack Examples into a Matrix
If we have M example faces in the database, we get a matrix with one face per column
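A sketch of these two steps in MATLAB, assuming faces is an h x w x M array of aligned grayscale face images (the names faces and A are illustrative):

% Raster-scan each h x w face into an N x 1 vector (N = h*w) and
% stack the M vectors as the columns of an N x M data matrix.
[h, w, M] = size(faces);
N = h * w;
A = zeros(N, M);
for i = 1:M
    A(:, i) = reshape(double(faces(:,:,i)), N, 1);
end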

11 Introducing the SVD
SVD stands for Singular Value Decomposition
The m x n matrix M can be factored as M = UΣVᵀ, where U is m x m, Σ is m x n, and Vᵀ is n x n

12 Special Properties of these Matrices
U – unitary; for a real-valued matrix, that means UᵀU = I
Σ – diagonal; only non-zero along the diagonal. These non-zero entries are called the singular values
V – also unitary
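These properties are easy to verify numerically; a quick MATLAB check on a random real matrix:

% Verify the SVD properties on a small random matrix.
A = randn(5, 3);
[U, S, V] = svd(A);
norm(U'*U - eye(5))       % ~0: U is unitary (orthogonal, since real)
norm(V'*V - eye(3))       % ~0: V is unitary
norm(S - tril(triu(S)))   % ~0: Sigma is zero off the diagonal
norm(U*S*V' - A)          % ~0: the factorization reproduces A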

13 Another way to think of this
ΣVᵀ is a set of weights that tells us how to add together the columns of U to produce the data
What if there are too many observations in each column?

14 Approximating M
Modify Σ by setting all but a few of the singular values to zero
Effectively, only using some columns of U to reconstruct M
Same as using a small number of parameters to measure an N-dimensional signal
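In MATLAB this truncation might look like the following sketch (A is any data matrix, such as the face matrix built earlier; k is the number of singular values kept; both are illustrative):

% Keep only the k largest singular values of A.
% svd returns the singular values in decreasing order,
% so the first k are the largest.
[U, S, V] = svd(A);
k = 10;
Sk = S;
Sk(k+1:end, :) = 0;    % discard all but the first k singular values
Ak = U * Sk * V';      % rank-k approximation of A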

15 Approximating M
Question: If I were going to approximate M using a few columns of U, which few should I use?
Answer: Find the rows in Σ with the largest singular values and use the corresponding columns of U

16 Simple Example

17 Calculate SVD
>> [u,s,v] = svd([x' y']')
>> u
u =
   -0.6715   -0.7410
>> s(:,1:2)
ans =

18 U defines a set of axes
(same [u,s,v] = svd([x' y']') output as on the previous slide; the columns of u define the axes)

19 Now, let's reduce these points to one dimension
>> sp = s;
>> sp(2,2) = 0;
>> nm = u*sp*v';

20 What have we done?
We have used one dimension to describe the points instead of two
We used the SVD to find the best set of axes
Using the SVD minimizes the Frobenius norm of the difference between M and its approximation ~M
In other words, it minimizes the squared error
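This optimality (the Eckart-Young theorem) can be checked numerically, continuing the illustrative A, k, S, and Ak from the truncation sketch above:

% The Frobenius error of the rank-k truncation equals the
% root-sum-of-squares of the discarded singular values.
s = diag(S);
norm(A - Ak, 'fro')            % these two quantities agree
sqrt(sum(s(k+1:end).^2))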

21 How does this relate to faces?
Let's assume that we have images of faces with N pixels
We don't need N numbers to represent each face
Assume that the space of face images can be represented by a relatively small number of axes
These axes are called eigenfaces
From “Probabilistic Visual Learning” by Moghaddam and Pentland
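A sketch of computing eigenfaces and describing a face with a few coefficients, reusing the illustrative N x M data matrix A from the stacking sketch (newFace and k are also illustrative):

% Columns of U are the eigenfaces; a face is summarized by its
% k projection coefficients instead of its N pixel values.
mu = mean(A, 2);                          % mean face, N x 1
Ac = A - repmat(mu, 1, size(A, 2));       % mean-subtracted data
[U, S, V] = svd(Ac, 'econ');
k = 20;
x = reshape(double(newFace), [], 1);      % raster-scan the new face
c = U(:, 1:k)' * (x - mu);                % k numbers describe the face
xhat = U(:, 1:k) * c + mu;                % approximate reconstruction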

22 Why “eigenfaces”?
If you subtract the mean, MMᵀ is the covariance matrix of the data
The matrix U also contains the eigenvectors of MMᵀ
These eigenvectors are the vectors that maximize the variance of the reconstruction
Same result, different motivation than the SVD
Called Principal Components Analysis, or PCA
Take Pattern Recognition!
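A numerical sanity check of this equivalence on a small random matrix (illustrative; in practice N is large, so one works with the much smaller MᵀM instead of MMᵀ):

% The eigenvectors of A*A' (the covariance, up to a 1/M factor,
% after mean subtraction) match the left singular vectors of A.
A = randn(8, 5);
Ac = A - repmat(mean(A, 2), 1, size(A, 2));
[U, S, V] = svd(Ac);
[W, D] = eig(Ac * Ac');
[~, idx] = sort(diag(D), 'descend');   % eig does not sort; svd does
W = W(:, idx);
abs(U(:, 1)' * W(:, 1))                % ~1: same direction up to sign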

23 Empirical proof that faces lie in relatively low-dimensional subspace
From “Probabilistic Visual Learning” by Moghaddam and Pentland

24 From “Probabilistic Visual Learning” by Moghaddam and Pentland

25 Empirical proof that faces lie in relatively low-dimensional subspace
From “Probabilistic Visual Learning” by Moghaddam and Pentland

26 Basic Eigenface Recognition
Find the face
Normalize the image
Project it onto the eigenfaces
Do nearest-neighbor classification
Lots of variations that I won't get into
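Putting the pieces together, a compact and purely illustrative MATLAB sketch of the projection and nearest-neighbor steps, reusing U, mu, k, x, and the data matrix A from the earlier sketches (labels is again a hypothetical cell array of identities):

% Project gallery and test faces into eigenface space, then
% classify by nearest neighbor on the coefficients.
M = size(A, 2);
galleryC = U(:, 1:k)' * (A - repmat(mu, 1, M));   % k x M coefficients
testC = U(:, 1:k)' * (x - mu);                    % k x 1 coefficients
dists = sum((galleryC - repmat(testC, 1, M)).^2, 1);
[~, best] = min(dists);
guess = labels{best};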

27 What makes it harder or easier?
From “How Features of the Human Face Affect Recognition: a Statistical Comparison of Three Face Recognition Algorithms” by Givens et al.

28 Does it work in the real world?

29 Guessing Gender from Faces
From “Learning Gender with Support Faces” by Moghaddam and Yang

