Lecture: Face Recognition and Feature Reduction

1 Lecture: Face Recognition and Feature Reduction
Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 2-Nov-17

2 Recap - Curse of dimensionality
Assume 5000 points uniformly distributed in the unit hypercube and we want to apply 5-NN. Suppose our query point is at the origin. In 1 dimension, we must go a distance of 5/5000 = 0.001 on average to capture the 5 nearest neighbors. In 2 dimensions, we must go out 0.001^(1/2) ≈ 0.032 to get a square that contains 0.001 of the volume. In d dimensions, we must go out 0.001^(1/d). 31-Oct-17
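As a quick numeric check of this effect (a minimal sketch, not from the slides), the edge length 0.001^(1/d) of a hypercube containing a 0.001 fraction of the unit hypercube's volume grows rapidly with d:

    # Edge length of a hypercube holding a 0.001 fraction of the unit
    # hypercube's volume, for several dimensions d.
    for d in [1, 2, 3, 10, 100]:
        edge = 0.001 ** (1.0 / d)
        print(f"d = {d:3d}: edge length ~ {edge:.3f}")
    # d = 1: 0.001, d = 2: ~0.032, d = 3: ~0.100, d = 10: ~0.501, d = 100: ~0.933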

3 What we will learn today
Singular value decomposition Principal Component Analysis (PCA) Image compression 2-Nov-17

4 What we will learn today
Singular value decomposition Principal Component Analysis (PCA) Image compression 2-Nov-17

5 Singular Value Decomposition (SVD)
There are several computer algorithms that can “factorize” a matrix, representing it as the product of some other matrices. The most useful of these is the Singular Value Decomposition, which represents any matrix A as a product of three matrices: UΣVT. Python command: U, S, Vt = numpy.linalg.svd(A) (note that NumPy returns VT, not V, as the third output). 2-Nov-17
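A minimal NumPy sketch of this, using a small random matrix as a stand-in for A (not code from the slides):

    import numpy as np

    A = np.random.rand(5, 3)               # any m x n matrix
    U, S, Vt = np.linalg.svd(A)            # the third output is V transposed
    # S holds the singular values, sorted high to low. Rebuild A to verify:
    Sigma = np.zeros_like(A)
    Sigma[:len(S), :len(S)] = np.diag(S)   # place singular values on the diagonal
    print(np.allclose(U @ Sigma @ Vt, A))  # True, up to floating-point error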

6 Singular Value Decomposition (SVD)
UΣVT = A, where U and V are rotation matrices and Σ is a scaling matrix. 2-Nov-17

7 Singular Value Decomposition (SVD)
Beyond 2x2 matrices: In general, if A is m x n, then U will be m x m, Σ will be m x n, and VT will be n x n. (Note the dimensions work out to produce m x n after multiplication) 2-Nov-17

8 Singular Value Decomposition (SVD)
U and V are always rotation matrices. Geometric rotation may not be an applicable concept, depending on the matrix, so we call them “unitary” matrices: each column is a unit vector and the columns are mutually orthogonal. Σ is a diagonal matrix. The number of nonzero entries = rank of A. The algorithm always sorts the entries high to low. 2-Nov-17

9 SVD Applications We’ve discussed SVD in terms of geometric transformation matrices But SVD of an image matrix can also be very useful To understand this, we’ll look at a less geometric interpretation of what SVD is doing 2-Nov-17

10 SVD Applications Look at how the multiplication works out, left to right: column 1 of U gets scaled by the first value from Σ. The resulting column then multiplies row 1 of VT (an outer product) to produce a contribution to the columns of A. 2-Nov-17

11 SVD Applications Each product of (column i of U)∙(value i from Σ)∙(row i of VT) produces a component of the final A. 2-Nov-17

12 SVD Applications We’re building A as a linear combination of the columns of U. Using all columns of U, we’ll rebuild the original matrix perfectly. But with real-world data, we can often use just the first few columns of U and get something close (e.g. the partial reconstruction A_partial shown on the slide). 2-Nov-17
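A short sketch of this partial reconstruction idea (the names, e.g. A_partial, are illustrative, not from the slides):

    import numpy as np

    def partial_reconstruction(A, k):
        """Rebuild A from its first k rank-1 SVD components."""
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        A_partial = np.zeros_like(A, dtype=float)
        for i in range(k):
            # (column i of U) * (value i from Sigma) * (row i of Vt)
            A_partial += S[i] * np.outer(U[:, i], Vt[i, :])
        return A_partial

    A = np.random.rand(6, 4)
    print(np.allclose(partial_reconstruction(A, 4), A))  # all components: exact rebuild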

13 SVD Applications We can call those first few columns of U the Principal Components of the data They show the major patterns that can be added to produce the columns of the original matrix The rows of VT show how the principal components are mixed to produce the columns of the matrix 2-Nov-17

14 SVD Applications We can look at Σ to see that the first column has a large effect while the second column has a much smaller effect in this example 2-Nov-17

15 SVD Applications For this image, using only the first 10 of 300 principal components produces a recognizable reconstruction So, SVD can be used for image compression 2-Nov-17

16 SVD for symmetric matrices
If A is a symmetric matrix, it can be decomposed as A = UΣUT. Compared to a traditional SVD decomposition, U = V, and U is an orthogonal matrix. 2-Nov-17

17 Principal Component Analysis
Remember, columns of U are the Principal Components of the data: the major patterns that can be added to produce the columns of the original matrix One use of this is to construct a matrix where each column is a separate data sample Run SVD on that matrix, and look at the first few columns of U to see patterns that are common among the columns This is called Principal Component Analysis (or PCA) of the data samples 2-Nov-17

18 Principal Component Analysis
Often, raw data samples have a lot of redundancy and patterns PCA can allow you to represent data samples as weights on the principal components, rather than using the original raw form of the data By representing each sample as just those weights, you can represent just the “meat” of what’s different between samples. This minimal representation makes machine learning and other algorithms much more efficient 2-Nov-17

19 How is SVD computed? For this class: tell Python to do it. Use the result. But, if you’re interested, one computer algorithm to do it makes use of eigenvectors! 2-Nov-17

20 Eigenvector definition
Suppose we have a square matrix A. We can solve for vector x and scalar λ such that Ax= λx In other words, find vectors where, if we transform them with A, the only effect is to scale them with no change in direction. These vectors are called eigenvectors (German for “self vector” of the matrix), and the scaling factors λ are called eigenvalues An m x m matrix will have ≤ m eigenvectors where λ is nonzero 2-Nov-17

21 Finding eigenvectors Computers can find an x such that Ax = λx using this iterative algorithm:
  x = random unit vector
  while (x hasn't converged):
    x = Ax
    normalize x
x will quickly converge to an eigenvector. Some simple modifications will let this algorithm find all eigenvectors. 2-Nov-17
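As a rough Python sketch of this power-iteration idea (assuming a square matrix with a dominant eigenvalue; not the slide's exact code):

    import numpy as np

    def power_iteration(A, num_iters=1000, tol=1e-10):
        """Find the dominant eigenvector of a square matrix A by power iteration."""
        x = np.random.rand(A.shape[0])
        x /= np.linalg.norm(x)                 # x = random unit vector
        for _ in range(num_iters):
            x_new = A @ x                      # x = Ax
            x_new /= np.linalg.norm(x_new)     # normalize x
            if x_new @ x < 0:
                x_new = -x_new                 # the sign of an eigenvector is arbitrary
            if np.linalg.norm(x_new - x) < tol:
                break                          # x has converged
            x = x_new
        eigenvalue = x @ A @ x                 # Rayleigh quotient recovers lambda
        return x, eigenvalue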

22 Finding SVD Eigenvectors are for square matrices, but SVD is for all matrices. To do svd(A), computers can do this: Take the eigenvectors of AAT (that matrix is always square); these eigenvectors are the columns of U. The square roots of the eigenvalues are the singular values (the entries of Σ). Take the eigenvectors of ATA (also always square); these eigenvectors are the columns of V (or rows of VT). 2-Nov-17
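A small NumPy sketch that checks this relationship (it is a way to understand SVD, not how np.linalg.svd is actually implemented):

    import numpy as np

    A = np.random.rand(5, 3)

    # AAT and ATA are square and symmetric, so eigh applies.
    evals_u, U = np.linalg.eigh(A @ A.T)   # eigenvectors -> columns of U
    evals_v, V = np.linalg.eigh(A.T @ A)   # eigenvectors -> columns of V

    # eigh sorts eigenvalues low to high; SVD sorts singular values high to low.
    singular_values = np.sqrt(np.clip(evals_v[::-1], 0, None))
    print(singular_values)
    print(np.linalg.svd(A, compute_uv=False))  # matches, up to numerical error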

23 Finding SVD Moral of the story: SVD is fast, even for large matrices
It’s useful for a lot of stuff There are also other algorithms to compute SVD or part of the SVD Python’s np.linalg.svd() command has options to efficiently compute only what you need, if performance becomes an issue A detailed geometric explanation of SVD is here: 2-Nov-17

24 What we will learn today
Singular value decomposition Principal Component Analysis (PCA) Image compression 2-Nov-17

25 Covariance Variance and covariance are measures of the “spread” of a set of points around their center of mass (mean). Variance is a measure of the deviation from the mean for points in one dimension, e.g. heights. Covariance is a measure of how much each of the dimensions varies from the mean with respect to the others. Covariance is measured between 2 dimensions to see if there is a relationship between them, e.g. number of hours studied & marks obtained. The covariance between one dimension and itself is the variance. 2-Nov-17

26 Covariance So, if you had a 3-dimensional data set (x,y,z), then you could measure the covariance between the x and y dimensions, the y and z dimensions, and the x and z dimensions. Measuring the covariance between x and x, or y and y, or z and z would give you the variance of the x, y and z dimensions respectively. 2-Nov-17

27 Covariance matrix Representing covariance between dimensions as a matrix, e.g. for 3 dimensions:
  C = [ cov(x,x)  cov(x,y)  cov(x,z) ]
      [ cov(y,x)  cov(y,y)  cov(y,z) ]
      [ cov(z,x)  cov(z,y)  cov(z,z) ]
The diagonal holds the variances of x, y and z. cov(x,y) = cov(y,x), hence the matrix is symmetric about the diagonal. N-dimensional data will result in an NxN covariance matrix. 2-Nov-17
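A minimal NumPy illustration (random data standing in for real measurements):

    import numpy as np

    data = np.random.rand(500, 3)      # 500 samples of 3-D data (x, y, z), one per row
    C = np.cov(data, rowvar=False)     # 3x3 covariance matrix
    print(C.shape)                     # (3, 3)
    print(np.allclose(C, C.T))         # True: symmetric about the diagonal
    # C[0, 0], C[1, 1], C[2, 2] are the variances of x, y, z;
    # C[0, 1] == C[1, 0] is cov(x, y), and so on.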

28 Covariance What is the interpretation of covariance calculations?
e.g. a 2-dimensional data set: x = number of hours studied for a subject, y = marks obtained in that subject. Suppose the computed covariance value is some number: what does this value mean? 2-Nov-17

29 Covariance interpretation
2-Nov-17

30 Covariance interpretation
The exact value is not as important as its sign. A positive value of covariance indicates that both dimensions increase or decrease together, e.g. as the number of hours studied increases, the marks in that subject increase. A negative value indicates that while one increases the other decreases, or vice-versa, e.g. active social life at PSU vs performance in the CS dept. If the covariance is zero, the two dimensions are uncorrelated (there is no linear relationship between them), e.g. heights of students vs the marks obtained in a subject. 2-Nov-17

31 Example data Covariance between the two axes is high. Can we reduce the number of dimensions to just 1? 2-Nov-17

32 Geometric interpretation of PCA
2-Nov-17

33 Geometric interpretation of PCA
Let’s say we have a set of 2D data points x. But we see that all the points lie on a line in 2D. So using 2 dimensions to express the data is redundant: we can express all the points with just one dimension. 1D subspace in 2D 2-Nov-17

34 PCA: Principal Component Analysis
Given a set of points, how do we know if they can be compressed like in the previous example? The answer is to look into the correlation between the points The tool for doing this is called PCA 2-Nov-17

35 PCA Formulation Basic idea:
If the data lives in a subspace, it is going to look very flat when viewed from the full space, e.g. 1D subspace in 2D 2D subspace in 3D Slide inspired by N. Vasconcelos 2-Nov-17

36 PCA Formulation Assume x is Gaussian with covariance Σ.
Recall that a Gaussian is defined by its mean and covariance: p(x) = (2π)^(-d/2) |Σ|^(-1/2) exp(-(1/2)(x - μ)T Σ^(-1)(x - μ)), where μ = E[x] and Σ = E[(x - μ)(x - μ)T].
[Figure: contours of the Gaussian in the (x1, x2) plane form an ellipse with principal axes φ1, φ2 and lengths λ1, λ2.] 2-Nov-17

37 PCA formulation Since a covariance matrix is symmetric, we can express it as: Σ = U Λ U^T = (U Λ^(1/2))(U Λ^(1/2))^T 2-Nov-17

38 PCA Formulation If x is Gaussian with covariance Σ:
The principal components φi are the eigenvectors of Σ. The principal lengths λi are the eigenvalues of Σ. By computing the eigenvalues we know whether the data is flat: not flat if λ1 ≈ λ2, flat if λ1 >> λ2. Slide inspired by N. Vasconcelos 2-Nov-17

39 PCA Algorithm (training)
Slide inspired by N. Vasconcelos 2-Nov-17

40 PCA Algorithm (testing)
Slide inspired by N. Vasconcelos 2-Nov-17
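A minimal sketch of a standard PCA training/testing procedure (one example per column, matching the PCA-by-SVD slides that follow; the exact notation on the two slides above may differ, and the helper names are illustrative):

    import numpy as np

    def pca_train(X, k):
        """X: d x n data matrix, one sample per column. Returns the mean and top-k components."""
        mu = X.mean(axis=1, keepdims=True)      # d x 1 sample mean
        Xc = X - mu                             # centered data
        Sigma = (Xc @ Xc.T) / X.shape[1]        # d x d sample covariance
        evals, evecs = np.linalg.eigh(Sigma)    # eigenvalues in ascending order
        order = np.argsort(evals)[::-1]         # re-sort high to low
        return mu, evecs[:, order[:k]]          # principal components: d x k

    def pca_test(x, mu, components):
        """Represent a new sample x (a d-vector) by its k PCA weights."""
        return components.T @ (x.reshape(-1, 1) - mu)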

41 PCA by SVD An alternative manner to compute the principal components, based on singular value decomposition. Quick reminder: SVD. Any real n x m matrix (n > m) can be decomposed as A = MΠNT, where M is an (n x m) column-orthonormal matrix of left singular vectors (columns of M), Π is an (m x m) diagonal matrix of singular values, and NT is an (m x m) row-orthonormal matrix of right singular vectors (columns of N). Slide inspired by N. Vasconcelos 2-Nov-17

42 PCA by SVD To relate this to PCA, we consider the data matrix X = [x1 x2 ... xn], with one example xi (a d-vector) per column, so X is d x n.
The sample mean is μ = (1/n) sum_i xi. Slide inspired by N. Vasconcelos 2-Nov-17

43 PCA by SVD Center the data by subtracting the mean from each column of X.
The centered data matrix is Xc = X - μ1T (where 1 is the n-vector of all ones), i.e. its ith column is xic = xi - μ. Slide inspired by N. Vasconcelos 2-Nov-17

44 PCA by SVD The sample covariance matrix is Σ = (1/n) sum_i (xi - μ)(xi - μ)T = (1/n) sum_i xic (xic)T, where xic is the ith column of Xc.
This can be written as Σ = (1/n) Xc XcT. Slide inspired by N. Vasconcelos 2-Nov-17

45 PCA by SVD The matrix Y = (1/√n) XcT is real (n x d). Assuming n > d, it has the SVD decomposition Y = MΠNT with MTM = I and NTN = I, and therefore Σ = YTY = NΠMTMΠNT = NΠ²NT. Slide inspired by N. Vasconcelos 2-Nov-17

46 PCA by SVD Note that N is (d x d) and orthonormal, and Π² is diagonal, so Σ = NΠ²NT is just the eigenvalue decomposition of Σ. It follows that the eigenvectors of Σ are the columns of N and the eigenvalues of Σ are λi = πi². This gives an alternative algorithm for PCA. Slide inspired by N. Vasconcelos 2-Nov-17

47 PCA by SVD In summary, computation of PCA by SVD
Given X with one example per column: Create the centered data matrix Xc. Compute the SVD of (1/√n) XcT = MΠNT. The principal components are the columns of N, and the eigenvalues are λi = πi². Slide inspired by N. Vasconcelos 2-Nov-17
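A compact NumPy sketch of this summary (a sketch under the slide's assumptions: one example per column, n > d):

    import numpy as np

    def pca_by_svd(X, k):
        """PCA via SVD. X: d x n data matrix with one example per column."""
        n = X.shape[1]
        mu = X.mean(axis=1, keepdims=True)
        Xc = X - mu                                   # centered data matrix
        # SVD of (1/sqrt(n)) Xc^T: right singular vectors are the principal components.
        M, Pi, Nt = np.linalg.svd(Xc.T / np.sqrt(n), full_matrices=False)
        components = Nt.T[:, :k]                      # columns of N, d x k
        eigenvalues = Pi[:k] ** 2                     # eigenvalues of the covariance
        return mu, components, eigenvalues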

48 Rule of thumb for finding the number of PCA components
A natural measure is to pick the eigenvectors that explain p% of the data variability. This can be done by plotting the ratio rk = (λ1 + ... + λk) / (λ1 + ... + λn) as a function of k. E.g. we need 3 eigenvectors to cover 70% of the variability of this dataset. Slide inspired by N. Vasconcelos 2-Nov-17
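A short helper that applies this rule of thumb (the eigenvalue list and threshold p below are illustrative):

    import numpy as np

    def num_components(eigenvalues, p=0.9):
        """Smallest k whose first k eigenvalues explain at least a fraction p of the variance."""
        lam = np.sort(np.asarray(eigenvalues))[::-1]   # sort high to low
        r = np.cumsum(lam) / np.sum(lam)               # r_k for k = 1..n
        return int(np.argmax(r >= p)) + 1

    # e.g. eigenvalues [4, 2, 2, 1, 1] give r = [0.4, 0.6, 0.8, 0.9, 1.0],
    # so covering 70% of the variability takes 3 eigenvectors:
    print(num_components([4, 2, 2, 1, 1], p=0.7))      # 3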

49 What we will learn today
Singular value decomposition Principal Component Analysis (PCA) Image compression 2-Nov-17

50 Original Image Divide the original 372x492 image into patches:
Each patch is an instance that contains 12x12 pixels on a grid. View each patch as a 144-D vector. 2-Nov-17
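A rough sketch of this patch extraction plus the 144D → 16D compression shown a few slides below (random pixels stand in for the real image; the patch size and image dimensions follow the slide):

    import numpy as np

    def image_to_patches(img, patch=12):
        """Cut a grayscale image into non-overlapping patch x patch blocks,
        each flattened into a (patch*patch)-D vector (144-D for 12x12)."""
        h, w = img.shape
        rows = [img[r:r + patch, c:c + patch].ravel()
                for r in range(0, h - patch + 1, patch)
                for c in range(0, w - patch + 1, patch)]
        return np.array(rows)                    # one patch per row

    img = np.random.rand(372, 492)               # stand-in for the 372x492 image
    patches = image_to_patches(img)
    print(patches.shape)                         # (31 * 41, 144) = (1271, 144)

    # PCA-compress each 144-D patch down to 16 numbers, then reconstruct:
    mu = patches.mean(axis=0)
    U, S, Vt = np.linalg.svd(patches - mu, full_matrices=False)
    weights = (patches - mu) @ Vt[:16].T         # 16 weights per patch
    approx = weights @ Vt[:16] + mu              # approximate 144-D patches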

51 L2 error and PCA dim 2-Nov-17

52 PCA compression: 144D → 60D 2-Nov-17

53 PCA compression: 144D → 16D 2-Nov-17

54 16 most important eigenvectors
2-Nov-17

55 PCA compression: 144D → 6D 2-Nov-17

56 6 most important eigenvectors
2-Nov-17

57 PCA compression: 144D → 3D 2-Nov-17

58 3 most important eigenvectors
2-Nov-17

59 PCA compression: 144D → 1D 2-Nov-17

60 What we have learned today
Singular value decomposition Principal Component Analysis (PCA) Image compression 2-Nov-17

