
1
EigenFaces

2
(squared) Variance — a measure of how "spread out" a sequence of numbers is.

    λ = (1/n) · Σ_{i=1}^{n} (x_i − μ)²

    μ = (1/n) · Σ_{i=1}^{n} x_i
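The two formulas above can be sketched directly in Python (the sample values here are made up for illustration):

```python
# (Squared) variance, dividing by n as in the formula above -- a sketch.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mu = sum(xs) / len(xs)                           # mean
var = sum((x - mu) ** 2 for x in xs) / len(xs)   # (squared) variance

print(mu, var)   # 5.0 4.0
```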

3
**Covariance matrix**

A measure of correlation between data elements.

Example: a data set of size n, where each data element has 3 fields:
- Height
- Weight
- Birth date

4
Covariance [Collect data from class]

5
Covariance

    | λ_11  λ_12  …  λ_1p |
    | λ_21  λ_22  …  λ_2p |
    |  ⋮     ⋮    ⋱   ⋮   |
    | λ_p1  λ_p2  …  λ_pp |

    λ_ij = (1/n) · Σ_{k=1}^{n} (x_ki − μ_i)(x_kj − μ_j)

6
**Covariance**

The diagonal entries are the variances of the individual features. The off-diagonal entries are a measure of correlation:
- High positive == positive correlation (one goes up, the other goes up)
- Large negative == negative correlation (one goes up, the other goes down)
- Near zero == no correlation (unrelated)

[How high counts as "high" depends on the range of the values.]

7
**Covariance**

You can calculate it with a matrix:
1. The raw matrix is a p × q matrix: p features, q samples.
2. Convert to mean-deviation form: calculate the average sample and subtract it from all samples.
3. Multiply MeanDev (a p × q matrix) by its transpose (a q × p matrix).
4. Multiply by 1/n (here n = q, the number of samples) to get the p × p covariance matrix.
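The four steps above can be sketched with NumPy (the data here is random, just to show the shapes):

```python
import numpy as np

# Covariance via the matrix recipe above: columns are samples, rows features.
rng = np.random.default_rng(0)
raw = rng.normal(size=(3, 100))              # p=3 features, q=100 samples

mean = raw.mean(axis=1, keepdims=True)       # the "average sample" (p x 1)
mean_dev = raw - mean                        # mean-deviation form
cov = mean_dev @ mean_dev.T / raw.shape[1]   # (p x q)(q x p) / n  ->  p x p

# Agrees with NumPy's built-in (bias=True divides by n rather than n-1).
assert np.allclose(cov, np.cov(raw, bias=True))
```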

8
Covariance [Calculate our covariance matrix]

9
**EigenSystems**

An eigensystem of a matrix A is:
- a vector v (the eigenvector)
- a scalar λ (the eigenvalue)

such that A·v = λ·v (the zero vector isn't an eigenvector). In general, not all matrices have eigenvectors.
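The defining property A·v = λ·v is easy to verify numerically; a small sketch with a symmetric matrix chosen for illustration:

```python
import numpy as np

# Check A v = lambda v for a small symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(A)      # eigh: for symmetric matrices
for lam, v in zip(eigvals, eigvecs.T):    # eigenvectors are the columns
    assert np.allclose(A @ v, lam * v)    # the defining property

print(eigvals)   # [1. 3.]
```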

10
EigenSystems and PCA

When you calculate the eigensystem of an n × n covariance matrix you get:
- n eigenvectors (each of dimension n)
- n matching eigenvalues

The eigenvector with the biggest eigenvalue "explains" the largest amount of variance in the data set.

11
**Example**

Say we have a 2-D data set:
- First eigen-pair: v1 = [0.8, 0.6], λ = 800.0
- Second eigen-pair: v2 = [−0.6, 0.8], λ = 100.0

8× as much variance is along v1 as along v2. v1 and v2 are perpendicular to each other, and they define a new set of basis vectors for this data set.

12
**Conversions between basis vectors**

Let's take one data point p; say it is [−1.5, 0.4] in "world units". Project it onto v1 and v2 to get the coordinates relative to the (unit-length) basis vectors v1 and v2:

    newCoord = [ p · v1,  p · v2 ]

To convert back to "world units":

    worldCoord = newCoord[0] · v1 + newCoord[1] · v2
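Using the eigenvectors from the 2-D example on the previous slide, the round trip looks like this:

```python
import numpy as np

# Round-trip between world coordinates and the (v1, v2) basis.
v1 = np.array([0.8, 0.6])
v2 = np.array([-0.6, 0.8])
p = np.array([-1.5, 0.4])                # one data point in "world units"

new_coord = np.array([p @ v1, p @ v2])   # project onto each basis vector
world = new_coord[0] * v1 + new_coord[1] * v2   # convert back

assert np.allclose(world, p)   # orthonormal basis -> exact round trip
print(new_coord)               # [-0.96  1.22]
```

Because v1 and v2 are perpendicular unit vectors, projecting and converting back recovers the original point exactly.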

13
**PCA and compression**

Example: n (the number of features) is high (~100), but most of the variance is captured by 3 eigenvectors. You can throw out the other 97 eigenvectors and represent most of the data for each sample using just 3 numbers (instead of 100). For a large data set, the savings can be huge.
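A sketch of this 100-features-to-3-numbers compression on synthetic data (the data set here is fabricated so that its variance really does live in a 3-D subspace plus a little noise):

```python
import numpy as np

# PCA compression sketch: 100 features -> 3 numbers per sample.
rng = np.random.default_rng(1)
n_features, n_samples = 100, 500
basis = np.linalg.qr(rng.normal(size=(n_features, 3)))[0]   # orthonormal 100x3
data = basis @ rng.normal(scale=10.0, size=(3, n_samples))  # 3-D signal
data += rng.normal(scale=0.1, size=data.shape)              # small noise

mean = data.mean(axis=1, keepdims=True)
cov = (data - mean) @ (data - mean).T / n_samples
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
top3 = eigvecs[:, -3:]                       # 3 biggest eigen-pairs

coords = top3.T @ (data - mean)              # just 3 numbers per sample
approx = mean + top3 @ coords                # reconstruction

err = np.abs(approx - data).max()
print(err)   # small: almost all the variance was in 3 dimensions
```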

14
**EigenFaces**

Collect database images:
- Subject looking straight ahead, no emotion, neutral lighting.
- Crop: on the top, include all of the eyebrows; on the bottom, include just to the chin; on the sides, include all of the face.
- Size to 32×32, grayscale (a limit of the eigen-solver).
- In code, include a way to convert to (and from) a VectorN.
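The image-to-VectorN conversion is just a flatten/reshape pair; a minimal sketch (the function names are made up for illustration):

```python
import numpy as np

# Image <-> vector conversion for 32x32 grayscale images (length-1024 vectors).
def to_vector(image):
    """Flatten a 32x32 grayscale image into a 1024-element vector."""
    return np.asarray(image, dtype=float).reshape(-1)

def to_image(vector):
    """Reshape a 1024-element vector back into a 32x32 image."""
    return np.asarray(vector, dtype=float).reshape(32, 32)

img = np.zeros((32, 32))
img[10, 3] = 255.0
v = to_vector(img)
assert v.shape == (1024,)
assert np.array_equal(to_image(v), img)   # lossless round trip
```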

15
**EigenFaces, cont.**

Calculate the average image: just pixel by pixel (vector element by element).

16
**EigenFaces, cont.**

Calculate the covariance matrix, then calculate its eigensystem. Keep the eigen-pairs that preserve n% of the data variance (98% or so). Your eigen-database is the 32×32 average image and the (here) 8 32×32 eigenface images.
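Picking how many eigen-pairs to keep can be sketched like this (the eigenvalues below are hypothetical, sorted descending):

```python
import numpy as np

# Keep the smallest k whose eigenvalues cover 98% of the total variance.
eigvals = np.array([800.0, 100.0, 50.0, 25.0, 15.0, 10.0])  # hypothetical

fraction = np.cumsum(eigvals) / eigvals.sum()   # cumulative variance explained
k = int(np.searchsorted(fraction, 0.98)) + 1    # first index reaching 98%

print(k, fraction[:k])   # 5 [0.8   0.9   0.95  0.975 0.99 ]
```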

17
**EigenFaces, cont.**

Represent each of your faces as a q-value vector (q = the number of eigenfaces): subtract the average and project onto the q eigenfaces. The images shown here are the original image and its 8-value "eigen-coordinates".
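The projection step can be sketched as follows; `average` and `eigenfaces` are hypothetical stand-ins for the eigen-database (here random, just to show the shapes):

```python
import numpy as np

# Project a face onto q eigenfaces to get its q eigen-coordinates.
q, dim = 8, 1024
rng = np.random.default_rng(2)
average = rng.random(dim)                                  # stand-in average image
eigenfaces = np.linalg.qr(rng.normal(size=(dim, q)))[0].T  # q x 1024, orthonormal rows

def eigen_coords(face_vec):
    """q-value representation: subtract the average, dot with each eigenface."""
    return eigenfaces @ (face_vec - average)

face = rng.random(dim)        # stand-in face image, as a vector
coords = eigen_coords(face)
print(coords.shape)           # (8,)
```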

18
**EigenFaces, cont.** (for demonstration of compression)

You can reconstruct a compressed image by:
1. Start with a copy of the average image, X.
2. For each eigenface: add eigen-coord × eigenface to X.

Here are the reconstructions of the 2 images on the last slide: [Original | Reconstruction]
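The reconstruction loop above, again with hypothetical stand-ins for `average`, `eigenfaces`, and the stored coordinates:

```python
import numpy as np

# Reconstruct a face from its q eigen-coordinates.
q, dim = 8, 1024
rng = np.random.default_rng(3)
average = rng.random(dim)                                  # stand-in average image
eigenfaces = np.linalg.qr(rng.normal(size=(dim, q)))[0].T  # q x 1024, orthonormal rows
face = rng.random(dim)
coords = eigenfaces @ (face - average)      # the stored q numbers

x = average.copy()                          # start from the average image
for c, ef in zip(coords, eigenfaces):
    x += c * ef                             # add eigen-coord * eigenface

# Equivalent one-liner: average + eigenfaces.T @ coords
assert np.allclose(x, average + eigenfaces.T @ coords)
```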

19
**EigenFaces, cont.**

Facial recognition:
1. Take a novel image (same size as the database images).
2. Using the eigenfaces computed earlier (this novel image is usually NOT part of that computation), compute its eigen-coordinates.
3. Calculate the q-dimensional distance (Pythagorean theorem in q dimensions) between this image and each database image.
4. The database image with the smallest distance is your best match.
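The matching step is a nearest-neighbor search in eigen-coordinate space; a sketch with made-up coordinates (here q = 3 for brevity):

```python
import numpy as np

# Nearest database face in eigen-coordinate space (hypothetical values).
db_coords = np.array([[10.0, 0.0, 3.0],
                      [-2.0, 5.0, 1.0],
                      [ 4.0, 4.0, 4.0]])     # one q-value row per database image
novel = np.array([3.5, 4.0, 4.5])            # eigen-coords of the novel image

dists = np.linalg.norm(db_coords - novel, axis=1)   # q-D Euclidean distance
best = int(np.argmin(dists))

print(best)   # 2 -- the closest database face is the best match
```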


