CSSE463: Image Recognition Day 27
This week
Today: Applications of PCA
Sunday night: project plans and prelim work due
Questions?

Principal Components Analysis
Given a set of samples, find the direction(s) of greatest variance.
We've done this! Example: spatial moments.
Principal axes are eigenvectors of the covariance matrix.
Eigenvalues gave the relative importance of each dimension.
Note that each point can be represented in 2D using the new coordinate system defined by the eigenvectors.
The 1D representation obtained by projecting the point onto the principal axis is a reasonably good approximation.
(Figure: height-vs-weight scatter plot; the principal axes correspond roughly to overall "size" and "girth".)
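For a 2-D point cloud like the height/weight example, the principal axes fall out of one eigendecomposition. A minimal numpy sketch; the data values here are made up for illustration:

```python
import numpy as np

# 2 x n data matrix: rows are height and weight, columns are samples.
X = np.array([[170.0, 160.0, 180.0, 175.0, 165.0],
              [ 65.0,  55.0,  80.0,  70.0,  60.0]])

C = np.cov(X)                          # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending order

# Reversed, the first column is the principal axis ("size"); its
# eigenvalue says how dominant it is relative to the minor axis ("girth").
print(eigvals[::-1])
print(eigvecs[:, ::-1])
```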

Covariance Matrix (using matrix operations)
Place each point in its own column of a matrix.
Find the mean of each row and subtract it from that row; call the result N.
Multiply N N^T.
You will get a 2x2 matrix in which each entry is a sum over all n points. You could then divide by n.
Q1
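The same recipe in numpy (a sketch; the variable names and sample values are mine, not the slides'):

```python
import numpy as np

# Each of the n points is a column: X is 2 x n.
X = np.array([[170.0, 160.0, 180.0, 175.0],   # heights
              [ 65.0,  55.0,  80.0,  70.0]])  # weights

mean = X.mean(axis=1, keepdims=True)  # mean of each row
N = X - mean                          # subtract it
C = (N @ N.T) / X.shape[1]            # 2x2 covariance matrix

# Sanity check against numpy's built-in (bias=True divides by n, as above)
assert np.allclose(C, np.cov(X, bias=True))
```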

Generic process
The covariance matrix of a set of data gives the ways in which the set varies.
The eigenvectors corresponding to the largest eigenvalues give the directions in which it varies most.
Two applications:
Eigenfaces
Time-elapsed photography

“Eigenfaces”
Question: what are the primary ways in which faces vary? What happens when we apply PCA?
For each face, create a column vector that contains the intensities of all the pixels from that face.
This is a point in a high-dimensional space (e.g., 65,536 dimensions for a 256x256-pixel image).
Create a matrix F of all M faces in the training set.
Subtract off the “average face” (the mean of the columns) to get N.
Compute the rc x rc covariance matrix C = N N^T (where r and c are the image's rows and columns).
M. Turk and A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, 3(1), 1991.
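A sketch of these steps in numpy. The random "faces" stand in for real training images; all names and shapes here are illustrative assumptions, not part of the slides:

```python
import numpy as np

r, c = 256, 256                  # image size; rc = 65,536 pixels
M = 40                           # number of training faces

# Stand-in for real images: each face flattened into one column of F.
faces = np.random.rand(M, r, c)
F = faces.reshape(M, r * c).T    # F is rc x M

avg_face = F.mean(axis=1, keepdims=True)   # the "average face"
N = F - avg_face                           # mean-subtracted faces

# The full covariance C = N @ N.T would be rc x rc (65,536 x 65,536),
# far too big to form directly -- see the small-matrix trick below.
```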

“Eigenfaces”
Question: what are the primary ways in which faces vary? What happens when we apply PCA?
The eigenvectors are the directions of greatest variability.
Note that these are vectors in the same 65,536-dimensional space as the faces; thus each one itself forms a face image: an “eigenface”.
Here are the first 4 from the ORL face database, AT&T Laboratories Cambridge.
Q2-3

Interlude: Projecting points onto lines
We can project each point onto the principal axis. How?
(Figure: the height-vs-weight scatter plot again, with points projected onto the principal "size" axis.)

Interlude: Projecting a point onto a line
Assuming the axis is represented by a unit vector u, we can just take the dot product of the point p with that vector:
u · p = u^T p (which is 1D).
Example: project (5,2) onto the line y = x. Here u = (1/√2, 1/√2), so u^T p = 7/√2 ≈ 4.95.
If we want to project onto two vectors, u and v, simultaneously: create w = [u v], then compute w^T p, which is 2D.
Result: p is now expressed in terms of u and v.
This generalizes to arbitrary dimensions.
Q4
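The same computation in numpy; v here is an arbitrary second unit vector chosen just to illustrate the two-vector case:

```python
import numpy as np

p = np.array([5.0, 2.0])

# Project onto the line y = x: unit vector u along (1, 1).
u = np.array([1.0, 1.0]) / np.sqrt(2)
print(u @ p)              # 4.9497... = 7/sqrt(2), a 1-D coordinate

# Project onto two directions at once: columns of w are u and v.
v = np.array([-1.0, 1.0]) / np.sqrt(2)
w = np.column_stack([u, v])
print(w.T @ p)            # 2-D coordinates of p in the (u, v) basis
```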

Application: Face detection
If we want to project a point onto two vectors, u and v, simultaneously: create w = [u v], then compute w^T p, which is 2D.
Result: p is now expressed in terms of u and v.
In arbitrary dimensions, we still just take the dot product with the eigenvectors!
You can represent a face in terms of its eigenfaces; it's just a different basis.
The M most important eigenvectors capture most of the variability: ignore the rest!
Instead of 65k dimensions, we only have M (~50 in practice).
Call these 50 dimensions “face-space”.
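Projecting into face-space, continuing the variable names assumed earlier (U holds the top-M unit-length eigenfaces as columns; this is a sketch, not the slides' code):

```python
import numpy as np

def to_face_space(image, U, avg_face):
    """Return the M face-space coordinates of a flattened image.

    U:        rc x M matrix of unit-norm eigenfaces (columns)
    avg_face: rc x 1 mean face from training
    """
    p = image.reshape(-1, 1) - avg_face   # mean-subtract, as in training
    return U.T @ p                        # M x 1 coordinate vector
```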

“Eigenfaces”
Question: what are the primary ways in which faces vary? What happens when we apply PCA?
Keep only the top M eigenfaces for “face space”.
We can project any face onto these eigenvectors. Thus, any face is a linear combination of the eigenfaces.
Can classify faces in this lower-D space.
There are computational tricks to make the computation feasible, e.g., finding eigenvectors of the small M x M matrix N^T N and mapping them back to eigenvectors of N N^T, as sketched below.
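A sketch of that trick in numpy, with a nearest-neighbor classifier outlined in comments (names and structure are my own):

```python
import numpy as np

def eigenfaces(N, M_keep):
    """Top eigenfaces of C = N @ N.T without forming the rc x rc matrix.

    Trick (Turk & Pentland): if (N.T @ N) v = lam * v, then
    C @ (N @ v) = lam * (N @ v), so N @ v is an eigenvector of C.
    """
    small = N.T @ N                        # M x M instead of rc x rc
    lam, V = np.linalg.eigh(small)         # ascending eigenvalues
    order = np.argsort(lam)[::-1][:M_keep] # keep the largest M_keep
    U = N @ V[:, order]                    # map back up to rc-dim space
    U /= np.linalg.norm(U, axis=0)         # unit-length eigenfaces
    return U

# Nearest-neighbor classification in face-space (labels assumed known):
#   train_coords = U.T @ N                          # one column per face
#   test_coord   = U.T @ (test_face - avg_face)
#   label = labels[np.argmin(
#       np.linalg.norm(train_coords - test_coord, axis=0))]
```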

Time-elapsed photography
Question: what are the ways that outdoor images vary over time?
Form a matrix in which each column is an image.
Find the eigenvectors of the covariance matrix.
N. Jacobs, N. Roman, R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes", IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.
See example images on Dr. B's laptop or at the link below.

Time-elapsed photography
Question: what are the ways that outdoor images vary over time?
The mean and top 3 eigenvectors (scaled):
(Figure: the mean image and the first three eigenvector images from the webcam dataset.)
Interpretation?
N. Jacobs, N. Roman, R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes", IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.
Q5-6

Time-elapsed photography
Recall that each image in the dataset is a linear combination of the eigenimages.
(Figure: two example images, each written as the mean image plus a weighted sum of PC1-PC3; the legible weights are -217 and +393 for one image and +308 and +885 for the other, with one weight per image not recoverable from the transcript.)
N. Jacobs, N. Roman, R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes", IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.
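In code, the reconstruction is just the mean plus the weighted eigenimages. A sketch; shapes follow the conventions assumed earlier:

```python
import numpy as np

def reconstruct(coords, U, mean_image):
    """Rebuild an image from its PCA coordinates.

    coords:     (M,)    weights, e.g. [-217.0, 393.0, ...]
    U:          (rc, M) eigenimages as columns
    mean_image: (rc,)   the mean image, flattened
    """
    return mean_image + U @ coords
```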

Time-elapsed photography
Every image's projection onto the first eigenvector:
N. Jacobs, N. Roman, R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes", IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.
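Computing that projection for the whole dataset is a single matrix product (continuing the assumed names):

```python
import numpy as np

def pc1_series(N, U):
    """PC1 coefficient of every mean-subtracted image column in N.

    N: (rc, n_images) columns in time order; U: (rc, M) eigenimages.
    Returns a length-n_images time series.
    """
    return U[:, 0] @ N
```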

Research idea
Done:
Finding the PCs.
Using them to detect latitude and longitude, given images from a camera.
Yet to do:
Classifying images based on their projection into this space, as was done for eigenfaces.