CSSE463: Image Recognition Day 25

CSSE463: Image Recognition Day 25
This week
Today: PCA in higher dimensions; applications of PCA
Questions?

Principal Components Analysis
Given a set of samples, find the direction(s) of greatest variance. We've done this! Example: spatial moments.
- The principal axes are the eigenvectors of the covariance matrix.
- The eigenvalues give the relative importance of each dimension.
- Each point can be represented in 2D using the new coordinate system defined by the eigenvectors.
- The 1D representation obtained by projecting each point onto the principal axis is a reasonably good approximation.
[Figure: scatter plot of samples, with axis labels such as size, weight, girth, height]
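As a minimal sketch of the recipe above (using numpy; the 2D sample data is made up for illustration):

```python
import numpy as np

# Hypothetical 2D samples, one point per row (e.g., height vs. weight)
pts = np.array([[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 7.8], [5.0, 10.1]])

C = np.cov(pts.T)                # 2x2 covariance matrix (rows = variables)
vals, vecs = np.linalg.eigh(C)   # eigenvalues ascending for a symmetric matrix
order = np.argsort(vals)[::-1]   # reorder by decreasing eigenvalue
vals, vecs = vals[order], vecs[:, order]

print(vecs[:, 0])  # principal axis: direction of greatest variance
print(vals)        # eigenvalues: relative importance of each direction

# 1D approximation: project each centered point onto the principal axis
proj_1d = (pts - pts.mean(axis=0)) @ vecs[:, 0]
```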

Covariance Matrix (using matrix operations)
- Place each point in its own column of a matrix.
- Find the mean of each row and subtract it, giving N.
- Multiply N*N^T. You get a 2x2 matrix in which each entry is a sum over all n points.
- Divide each entry by n.
Compare with what we already know. (Q1)
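A minimal sketch of that recipe in numpy (the points are made up), checked against the built-in covariance routine:

```python
import numpy as np

# Each point in its own column: a 2 x n matrix
P = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [2.1, 3.9, 6.2, 7.8, 10.1]])
n = P.shape[1]

N = P - P.mean(axis=1, keepdims=True)  # subtract the mean of each row
C = (N @ N.T) / n                      # 2x2 covariance matrix

# Matches numpy's covariance with the 1/n normalization
assert np.allclose(C, np.cov(P, bias=True))
```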

Generic process
The covariance matrix of a data set captures the ways in which the set varies; the eigenvectors corresponding to the largest eigenvalues give the directions in which it varies most.
Two applications:
- Eigenfaces: http://www.cl.cam.ac.uk/research/dtg/attarchive/facesataglance.html
- Time-elapsed photography: http://amos.cse.wustl.edu/dataset

“Eigenfaces”
Question: what are the primary ways in which faces vary? What happens when we apply PCA?
- For each face, create a column vector containing the intensities of all its pixels. This is a point in a high-dimensional space (e.g., 65536-D for a 256x256-pixel image).
- Create a matrix F of all M faces in the training set.
- Compute and subtract off the “average face” column vector, m, to get N.
- Compute the rc x rc covariance matrix C = N*N^T. Contrast with the 2D case: this is 65536-D. (See the sketch below.)
M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, 3(1), 1991.
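A minimal sketch of building F, m, and N, assuming a hypothetical list of 256x256 grayscale faces already loaded as numpy arrays (the faces variable is a stand-in, not from the slides):

```python
import numpy as np

# Hypothetical input: M grayscale faces, each a 256x256 array
faces = [np.random.rand(256, 256) for _ in range(40)]  # stand-in data

# Each face becomes one column of F: a 65536 x M matrix
F = np.stack([f.ravel() for f in faces], axis=1)

m = F.mean(axis=1, keepdims=True)  # the "average face" column vector
N = F - m                          # subtract it from every column

# The full covariance C = N @ N.T would be 65536 x 65536 -- far too big
# to form directly; see the computational trick a few slides below.
```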

“Eigenfaces”
Question: what are the primary ways in which faces vary? What happens when we apply PCA?
- The eigenvectors are the directions of greatest variability.
- Note that these are in 65536-D and thus each forms a face: an “eigenface.”
- Here are the first 4 from the ORL face dataset. Contrast with the 2D case: this is 65536-D.
[Image: http://upload.wikimedia.org/wikipedia/commons/6/67/Eigenfaces.png; from the ORL face database, AT&T Laboratories Cambridge]
(Q2-3)

Interlude: Projecting points onto lines
We can project each point onto the principal axis. How?
[Figure: scatter plot of samples, with axis labels such as size, weight, girth, height]

Interlude: Projecting a point onto a line
Assuming the axis is represented by a unit vector u, we can just take the dot product of the point p with that vector: u·p = u^T p (which is 1D).
Example: project (5,2) onto the line y = x. Here u = (1/√2, 1/√2), so u^T p = 7/√2 ≈ 4.95.
If we want to project onto two vectors u and v simultaneously, create W = [u v] and compute W^T p, which is 2D. Result: p is now expressed in terms of u and v. This generalizes to arbitrary dimensions.
[Note: work an example with a unit vector at r = 1, theta = 30°, and the point (0,1) or the point (1,0).]
(Q4)
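A minimal numpy sketch of both projections, using the example above:

```python
import numpy as np

p = np.array([5.0, 2.0])
u = np.array([1.0, 1.0]) / np.sqrt(2)   # unit vector along the line y = x

print(u @ p)             # 1D projection: 7/sqrt(2), about 4.95

v = np.array([-1.0, 1.0]) / np.sqrt(2)  # second unit vector, orthogonal to u
W = np.column_stack([u, v])             # W = [u v]
print(W.T @ p)           # p expressed in terms of u and v (2D)
```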

Application: Face detection
Recall: to project a point onto two vectors u and v simultaneously, create W = [u v] and compute W^T p, which is 2D; p is now expressed in terms of u and v.
In arbitrary dimensions, we still just take dot products with the eigenvectors!
- You can represent a face in terms of its eigenfaces; it's just a different basis. E.g., myFace = avgFace + 0.3*eig1 + 0.7*eig2 - 0.2*eig3 + 0.7*eig4
- The M most important eigenvectors capture most of the variability: ignore the rest! Instead of 65k dimensions, we have only M (~50 in practice).
- Call these ~50 dimensions “face space.” (A sketch follows.)
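A minimal sketch of moving a face into and out of face space, continuing the hypothetical setup from the earlier sketches (W holds the top M unit-norm eigenfaces as columns, m is the average face as a flat vector; none of these names come from the slides):

```python
import numpy as np

def to_face_space(face, W, m):
    """Coefficients of a 65536-vector face in the eigenface basis (M numbers)."""
    return W.T @ (face - m)

def from_face_space(coeffs, W, m):
    """Approximate reconstruction: avgFace + sum of coefficient * eigenface."""
    return m + W @ coeffs

# Classification can then use, e.g., nearest neighbor on the M-D coefficients.
```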

“Eigenfaces”
Question: what are the primary ways in which faces vary? What happens when we apply PCA?
- Keep only the top M eigenfaces for “face space.”
- We can project any face onto these eigenvectors; thus any face is a linear combination of the eigenfaces.
- We can classify faces in this lower-D space.
- There are computational tricks to make the computation feasible (e.g., finding eigenvectors of the small M x M matrix N^T*N instead of the huge N*N^T, then mapping them back).
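One such trick is the standard shortcut from the Turk and Pentland paper: the eigenvectors of the small M x M matrix N^T N, left-multiplied by N, are eigenvectors of the huge covariance N N^T. A minimal sketch (variable names are assumptions):

```python
import numpy as np

def top_eigenfaces(N, M_keep):
    """N: D x M matrix of mean-subtracted face columns, with D >> M."""
    small = N.T @ N                          # M x M instead of D x D
    vals, vecs = np.linalg.eigh(small)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:M_keep]  # keep the largest M_keep
    U = N @ vecs[:, order]                   # map back up to D dimensions
    U /= np.linalg.norm(U, axis=0)           # make each eigenface unit norm
    return U                                 # D x M_keep matrix of eigenfaces
```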

Time-elapsed photography
Question: what are the ways that outdoor images vary over time?
- Form a matrix in which each column is an image.
- Find the eigenvectors of its covariance matrix.
See example images on Dr. B's laptop or at http://amos.cse.wustl.edu/dataset
N. Jacobs, N. Roman, and R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes," IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.

Time-elapsed photography
Question: what are the ways that outdoor images vary over time?
[Image: the mean image and the top 3 eigenvectors (scaled)]
Interpretation? (Q5-6)
N. Jacobs, N. Roman, and R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes," IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.

Time-elapsed photography
Recall that each image in the dataset is a linear combination of the eigenimages.
[Figure: two example frames decomposed as image = mean + 4912*PC1 - 217*PC2 + 393*PC3 and image = mean - 2472*PC1 + 308*PC2 + 885*PC3]
N. Jacobs, N. Roman, and R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes," IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.

Time-elapsed photography
Plot every image's projection onto the first eigenvector (y-axis) vs. time (x-axis).
N. Jacobs, N. Roman, and R. Pless, "Consistent Temporal Variations in Many Outdoor Scenes," IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.
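A minimal sketch of producing that plot, assuming a D x T matrix of vectorized frames, their mean, and the first eigenimage from the decomposition above (all names are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_pc1_vs_time(imgs, mean, pc1, times):
    """imgs: D x T, one vectorized frame per column; pc1: unit D-vector."""
    coeffs = pc1 @ (imgs - mean[:, None])  # projection of each frame onto PC1
    plt.plot(times, coeffs, '.')
    plt.xlabel('time')
    plt.ylabel('projection onto first eigenvector')
    plt.show()
```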

Recall LST space?
L, S, and T are the eigenvectors calculated from the colors in a series of natural images. For a great example of PCA, read the paper below. (The Karhunen-Loeve (KL) transform = PCA.)
Y. I. Ohta, T. Kanade, and T. Sakai, "Color Information for Region Segmentation," Computer Graphics and Image Processing, Vol. 13, pp. 222-241, 1980. http://www.ri.cmu.edu/pub_files/pub4/ohta_y_1980_1/ohta_y_1980_1.pdf
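A minimal sketch of deriving such a color basis yourself: pool the RGB values of pixels from a set of images and run PCA on them (the image list is stand-in data):

```python
import numpy as np

# Hypothetical input: a list of H x W x 3 RGB images
images = [np.random.rand(64, 64, 3) for _ in range(10)]  # stand-in data

pixels = np.concatenate([im.reshape(-1, 3) for im in images])  # n x 3
C = np.cov(pixels.T)                   # 3x3 covariance of R, G, B
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]
print(vecs[:, order].T)  # rows approximate the L, S, T color axes
```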

Idea
Done: finding the PCs; using them to detect latitude and longitude given images from a camera.
Yet to do, to my knowledge: classifying images based on their projection into this space, as was done for eigenfaces.