

1 Face Recognition
(© 2003 by Davi Geiger, Computer Vision, September 2003. Several slides © 2002 by M. Alex O. Vasilescu.)

2 Face Recognition
[Diagram: Query Image → Face Recognition → Recognized Person]
Definition:
– Given a database of labeled facial images, recognize an individual from an image formed under new and varying conditions (pose, expression, lighting, etc.).
Sub-problems:
– Representation: How do we represent images of faces? What information do we store?
– Classification: How do we compare stored information to a new sample?
– Search

3 Representation
Shape representation: generalized cylinders, superquadrics, …
Appearance-based representation: refers to the recognition of 3D objects from ordinary images.
– PCA: Eigenfaces, EigenImages
– Fisher Linear Discriminant

4 Image Representation
An image I of k×l pixels, with each pixel value in [0, 255], is a point in kl-dimensional space (one axis per pixel: pixel 1, pixel 2, …, pixel kl).
In a chosen basis, the image is written as i = B c, where B is the basis matrix and c is the vector of coefficients.
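A minimal numpy sketch of this representation (the 4×4 image and the trivial identity "pixel basis" are illustrative choices, not from the slides):

```python
import numpy as np

# A k-by-l grayscale image with values in [0, 255] (synthetic data).
k, l = 4, 4
image = np.random.default_rng(0).integers(0, 256, size=(k, l)).astype(float)

# Flattened, the image is a single point in kl-dimensional space.
i = image.reshape(k * l)          # shape (16,)

# With an orthonormal basis matrix B (here the identity), the coefficient
# vector is c = B^T i, and the image is recovered as i = B c.
B = np.eye(k * l)
c = B.T @ i
assert np.allclose(B @ c, i)
```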

5 Toy Example: Dimensionality Reduction
Consider a set of images of one person under a fixed viewpoint and N lighting conditions. Each image is made up of 3 pixels, and pixel 1 has the same value as pixel 3 for all images, so the images lie in a 2-D subspace of the 3-D pixel space. The images form the columns of the data matrix D, with coefficient matrix C.
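A quick numpy check of the toy example (the random pixel values are my own stand-in for the slide's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# N images of one person, 3 pixels each, with pixel 1 == pixel 3 throughout.
N = 8
p1 = rng.uniform(0, 255, N)
p2 = rng.uniform(0, 255, N)
D = np.vstack([p1, p2, p1])       # data matrix, one image per column

# Because pixel 1 and pixel 3 coincide, the images occupy a 2-D plane in
# 3-D pixel space: D has rank 2, so two basis vectors suffice.
print(np.linalg.matrix_rank(D))   # -> 2
```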

6 Idea: Eigenimages and PCA
Eigenimages are the eigenvectors of the image ensemble.
The principle behind Principal Component Analysis (1) (also called the "Hotelling transform" (2) or the "Karhunen–Loève method" (3)): find an orthogonal coordinate system such that the correlation between different axes is minimized.
Eigenvectors are typically computed using the Singular Value Decomposition (SVD).
(1) I. T. Jolliffe, Principal Component Analysis, 1986.
(2) R. C. Gonzalez, P. A. Wintz, Digital Image Processing, 1987.
(3) K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, 1946; M. M. Loève, Probability Theory, 1955.

7 PCA: Theory
We first find the direction of maximum variance in the samples (e_1) and align it with the first axis (y_1). We continue this process with orthogonal directions of decreasing variance, aligning each with the next axis. Thus we obtain a rotation that minimizes the correlation.
[Figure: sample points in the (x_1, x_2) plane with principal directions e_1, e_2, rotated by PCA onto axes y_1, y_2]

8 PCA: Dimensionality Reduction
Consider the same set of images. PCA chooses axes in the directions of highest variability of the data (maximum scatter): a 1st axis, a 2nd axis, and so on. Each image is now represented by a vector of coefficients in a reduced-dimensionality space. For the data matrix D, the basis B minimizes the reconstruction-error energy
$E(B) = \sum_{n=1}^{N} \lVert \mathbf{d}_n - B B^{\top} \mathbf{d}_n \rVert^2$, subject to $B^{\top} B = I$.

9 The Covariance Matrix
Define the covariance (scatter) matrix of the input samples:
$S_T = \sum_{n=1}^{N} (\mathbf{d}_n - \boldsymbol{\mu})(\mathbf{d}_n - \boldsymbol{\mu})^{\top}$
(where $\boldsymbol{\mu}$ is the sample mean).
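A direct numpy translation of this definition (the function and variable names are mine):

```python
import numpy as np

def scatter_matrix(D):
    """S_T = sum_n (d_n - mu)(d_n - mu)^T, one sample per column of D,
    where mu is the sample mean."""
    mu = D.mean(axis=1, keepdims=True)   # sample mean, shape (d, 1)
    centered = D - mu
    return centered @ centered.T         # shape (d, d), symmetric
```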

10 PCA: Some Properties of the Covariance/Scatter Matrix
The matrix S_T is symmetric.
The diagonal contains the variance of each parameter (i.e., element S_T,ii is the variance in the i-th direction).
Each off-diagonal element S_T,ij is the covariance between the two directions i and j and represents their level of correlation (a value of zero indicates that the two dimensions are uncorrelated).

11 PCA: Goal Revisited
Look for a basis matrix B such that $[\mathbf{c}_1 \ldots \mathbf{c}_N] = B^{\top} [\mathbf{i}_1 \ldots \mathbf{i}_N]$ and the correlation is minimized, i.e., Cov(C) is diagonal.
Note that Cov(C) can be expressed via Cov(D) and B (for mean-centered data) as
$C C^{\top} = B^{\top} D D^{\top} B$.
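A small numpy demonstration that this choice of B diagonalizes Cov(C) (the data here are synthetic; np.linalg.eigh returns the eigenvectors of the symmetric matrix D D^T):

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(3, 100))
D -= D.mean(axis=1, keepdims=True)   # mean-center the data

# B holds the eigenvectors of D D^T; then C = B^T D is decorrelated.
_, B = np.linalg.eigh(D @ D.T)
C = B.T @ D
cov_C = C @ C.T                      # = B^T (D D^T) B
print(np.allclose(cov_C, np.diag(np.diag(cov_C))))   # -> True
```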

12 Selecting the Optimal B
How do we find such a B? Solve the eigenvalue problem
$D D^{\top} \mathbf{b}_i = \lambda_i \mathbf{b}_i$.
B_opt contains the eigenvectors of the covariance of D:
$B_{opt} = [\mathbf{b}_1 | \ldots | \mathbf{b}_d]$.

13 SVD of a Matrix
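A numpy sketch of the SVD and its connection to the eigenvectors of D D^T (the matrix here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.normal(size=(5, 20))          # d-by-N data matrix (synthetic)

# SVD: D = U S V^T.  The columns of U are the eigenvectors of D D^T and
# the squared singular values are its eigenvalues, so the SVD yields the
# PCA basis directly from D without forming D D^T explicitly.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, D)

print(np.allclose((D @ D.T) @ U[:, 0], (s[0] ** 2) * U[:, 0]))   # -> True
```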

14 Data Reduction: Theory
Each eigenvalue represents the total variance in its dimension. Throwing away the least significant eigenvectors in B_opt means throwing away the least significant variance information!
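One common way to decide how many eigenvectors to keep is a cumulative-variance cutoff; a sketch (the 95% threshold is an illustrative choice, not the lecture's):

```python
import numpy as np

def truncate_basis(D, keep=0.95):
    """Return the smallest basis whose eigenvalues account for a `keep`
    fraction of the total variance (a common heuristic)."""
    D = D - D.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(D, full_matrices=False)
    var = s ** 2                          # eigenvalues of the scatter matrix
    frac = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(frac, keep)) + 1
    return U[:, :k]                       # truncated B_opt
```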

15 PCA for Recognition: Eigenfaces
Consider the same set of images. PCA chooses axes in the directions of highest variability of the data (1st axis, 2nd axis, …).
Given a new image i_new, compute the vector of coefficients associated with the new basis: $\mathbf{c}_{new} = B^{\top} \mathbf{i}_{new}$.
Next, compare this reduced-dimensionality representation of i_new against all stored coefficient vectors.
One possible classifier: the nearest-neighbor classifier.
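A hedged end-to-end sketch of this pipeline in numpy (the synthetic two-person data, the function names, and the choice k=4 are illustrative assumptions, not the lecture's):

```python
import numpy as np

def train(D, labels, k=10):
    """Fit an eigenface model: D is d-by-N, one face image per column."""
    mu = D.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(D - mu, full_matrices=False)
    B = U[:, :k]                      # basis: top-k eigenfaces
    C = B.T @ (D - mu)                # stored coefficient vectors
    return mu, B, C, np.asarray(labels)

def recognize(i_new, mu, B, C, labels):
    """Project a new image and return the nearest-neighbor label."""
    c_new = B.T @ (i_new.reshape(-1, 1) - mu)
    dists = np.linalg.norm(C - c_new, axis=0)
    return labels[np.argmin(dists)]

# Tiny synthetic demo: two "people", each a noisy prototype vector.
rng = np.random.default_rng(4)
protos = rng.normal(size=(64, 2))
D = np.hstack([protos[:, [j]] + 0.1 * rng.normal(size=(64, 5))
               for j in (0, 1)])
labels = [0] * 5 + [1] * 5
model = train(D, labels, k=4)
probe = protos[:, 0] + 0.1 * rng.normal(size=64)
print(recognize(probe, *model))       # -> 0
```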

16 Data and Eigenfaces
Each image below is a column vector in the basis matrix B.
The data are composed of 28 faces photographed under the same lighting and viewing conditions.

17 PCA for Recognition: EigenImages
Consider a set of images of 2 people under a fixed viewpoint and N lighting conditions; each image is made up of 2 pixels. Each image is represented by one coefficient vector, and each person is displayed in N images and therefore has N coefficient vectors.
Reduce dimensionality by throwing away the axis along which the data vary the least; the coefficient associated with the 1st basis vector is used for classification.
One possible classifier: the Mahalanobis distance, sketched below.
[Figure: the two people's images in the (pixel 1, pixel 2) plane, with the 1st and 2nd PCA axes]
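A sketch of the Mahalanobis option (the per-person covariance is estimated from that person's N coefficient vectors; this assumes N exceeds the reduced dimension so the covariance is invertible, and the names are mine):

```python
import numpy as np

def mahalanobis(c_new, C_person):
    """Distance from coefficient vector c_new to one person's cluster.
    C_person holds that person's N coefficient vectors as columns."""
    mu = C_person.mean(axis=1)
    cov = np.cov(C_person)               # rows = variables, columns = samples
    diff = c_new - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Classification: compute this distance to each person's cluster and
# assign the new image to the person with the smallest distance.
```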

18 PIE Database (Weizmann)

19 EigenImages: Basis Vectors
Each image below is a column vector in the basis matrix B.
PCA encodes the variability across images without distinguishing between variability in people, viewpoints, and illumination.

20 Fisher's Linear Discriminant
Objective: find a projection which separates the data clusters.
[Figure: two candidate projection directions, one giving good separation of the clusters and one giving poor separation]

21 Fisher Linear Discriminant
The basis matrix B is chosen to maximize the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples:
$B_{opt} = \arg\max_{B} \frac{|B^{\top} S_{Btw} B|}{|B^{\top} S_{Win} B|}$.
B is the set of generalized eigenvectors of S_Btw and S_Win corresponding to a set of decreasing eigenvalues, i.e. $S_{Btw} \mathbf{b}_i = \lambda_i S_{Win} \mathbf{b}_i$.

22 FLD: Data Scatter
Within-class scatter matrix: $S_{Win} = \sum_{c=1}^{C} \sum_{\mathbf{d}_n \in X_c} (\mathbf{d}_n - \boldsymbol{\mu}_c)(\mathbf{d}_n - \boldsymbol{\mu}_c)^{\top}$, where $\boldsymbol{\mu}_c$ is the mean of class $X_c$.
Between-class scatter matrix: $S_{Btw} = \sum_{c=1}^{C} N_c (\boldsymbol{\mu}_c - \boldsymbol{\mu})(\boldsymbol{\mu}_c - \boldsymbol{\mu})^{\top}$, where $N_c$ is the number of samples in class $c$ and $\boldsymbol{\mu}$ is the overall mean.
Total scatter matrix: $S_T = S_{Win} + S_{Btw}$.
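A numpy/scipy sketch that builds these scatter matrices and extracts the Fisher directions as generalized eigenvectors (the names are mine; it assumes S_Win is nonsingular, which in practice means more samples than dimensions):

```python
import numpy as np
from scipy.linalg import eigh

def fld_basis(D, labels, m):
    """Top-m Fisher directions as generalized eigenvectors of (S_Btw, S_Win)."""
    labels = np.asarray(labels)
    mu = D.mean(axis=1, keepdims=True)           # overall mean
    d = D.shape[0]
    S_Win = np.zeros((d, d))
    S_Btw = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = D[:, labels == c]                   # this class's samples
        mu_c = Xc.mean(axis=1, keepdims=True)
        S_Win += (Xc - mu_c) @ (Xc - mu_c).T
        S_Btw += Xc.shape[1] * (mu_c - mu) @ (mu_c - mu).T
    # Generalized eigenproblem S_Btw b = lambda S_Win b; eigh returns
    # eigenvalues in ascending order, so take the last m eigenvectors.
    _, V = eigh(S_Btw, S_Win)
    return V[:, ::-1][:, :m]
```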

23 Fisher Linear Discriminant
Consider a set of images of 2 people under a fixed viewpoint and N lighting conditions. Each image is represented by one coefficient vector; each person is displayed in N images and therefore has N coefficient vectors.
[Figure: the two people's images in the (pixel 1, pixel 2) plane with the 1st and 2nd axes]

24 Fisher Linear Discriminant
The same set of images of 2 people under a fixed viewpoint and N lighting conditions; as before, each image is represented by one coefficient vector and each person has N coefficient vectors.
[Figure: two panels showing the two people's images in the (pixel 1, pixel 2) plane with the projection axes]

