1 Recognition by Appearance Appearance-based recognition is a competing paradigm to feature extraction and alignment. No features are extracted! Images are represented by basis functions (eigenvectors) and their coefficients, and matching is performed on this compressed image representation.

2 Eigenvectors and Eigenvalues Consider the sum of squared distances from a point x to all of the data points x_i: SSD(x) = sum_i ||x − x_i||^2. The point that minimizes this is the mean x̄. Now project the centered points onto a unit vector v; the variance of the projection is var(v) = sum_i (v^T(x_i − x̄))^2 = v^T A v, where A = sum_i (x_i − x̄)(x_i − x̄)^T is the scatter matrix. What unit vector v maximizes this variance? What unit vector v minimizes it? Solution: v_1, the eigenvector of A with the largest eigenvalue, maximizes it; v_2, the eigenvector of A with the smallest eigenvalue, minimizes it.
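A minimal numpy sketch of this computation (the toy data and variable names are illustrative assumptions, not from the lecture):

```python
import numpy as np

# Toy 2D point cloud standing in for the "orange points" on the slide.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

x_mean = X.mean(axis=0)      # the point x minimizing SSD(x)
C = X - x_mean               # centered points
A = C.T @ C                  # scatter matrix sum_i (x_i - x_mean)(x_i - x_mean)^T

# eigh handles symmetric matrices and returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(A)
v2 = eigvecs[:, 0]           # least-variance direction (smallest eigenvalue)
v1 = eigvecs[:, -1]          # greatest-variance direction (largest eigenvalue)
```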

3 Principal component analysis Suppose each data point is N-dimensional. –The same procedure applies: the eigenvectors of A define a new coordinate system. –The eigenvector with the largest eigenvalue captures the most variation among the training vectors x; the eigenvector with the smallest eigenvalue captures the least. –We can compress the data by using only the top few eigenvectors.

4 The space of faces An image is a point in a high-dimensional space. –An N x M image is a point in R^(NM). –We can define vectors in this space.
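In numpy this is just a reshape (the 64 x 64 size is an arbitrary example):

```python
import numpy as np

img = np.zeros((64, 64))     # stand-in for a real N x M face image
x = img.reshape(-1)          # the same image as a point in R^(NM) = R^4096
assert x.shape == (64 * 64,)
```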

5 Dimensionality reduction The set of faces is a “subspace” of the set of images. –We can find the best subspace using PCA. –Suppose it is K-dimensional. –This is like fitting a “hyper-plane” to the set of faces, spanned by vectors v_1, v_2,..., v_K: any face x ≈ a_1 v_1 + a_2 v_2 +... + a_K v_K.
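A sketch of the fitting and projection steps in numpy, continuing the notation above (function names are assumptions; for large images one would use an SVD of the centered data rather than forming the NM x NM scatter matrix explicitly):

```python
import numpy as np

def fit_subspace(X, K):
    """Fit a K-dimensional PCA subspace to the rows of X (one data point per row)."""
    x_mean = X.mean(axis=0)
    C = X - x_mean
    # eigh returns eigenvalues in ascending order; keep the top K eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(C.T @ C)
    V = eigvecs[:, -K:][:, ::-1]       # columns v_1 ... v_K, largest eigenvalue first
    return x_mean, V

def project(x, x_mean, V):
    """Coefficients a_1 ... a_K of x in the subspace."""
    return V.T @ (x - x_mean)

def reconstruct(a, x_mean, V):
    """Approximate x from its coefficients: x ~ x_mean + sum_k a_k v_k."""
    return x_mean + V @ a
```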

6 Turk and Pentland’s Eigenfaces: Training Let F_1, F_2,…, F_M be a set of training face images. Let F̄ be their mean and Φ_i = F_i − F̄ be the difference images. Use principal component analysis to compute the eigenvectors and eigenvalues of the covariance matrix of the Φ_i’s. Choose the K most significant eigenvectors u = (u_1, u_2,…, u_K) as the basis. Each face is then represented as a linear combination of these eigenfaces; e.g., with K = 5, F_27 − F̄ ≈ a_1*u_1 + a_2*u_2 + … + a_5*u_5.
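A compact training sketch under the same assumptions (the SVD of the centered data yields the covariance eigenvectors directly; Turk and Pentland’s original trick of diagonalizing the small M x M matrix Φ^T Φ achieves the same thing):

```python
import numpy as np

def train_eigenfaces(F, K):
    """F: (M, D) array of flattened face images, one per row.
    Returns the mean face, K eigenfaces, and training coefficients."""
    F_mean = F.mean(axis=0)
    Phi = F - F_mean                     # difference images
    # Rows of Vt are eigenvectors of the covariance of the Phi_i,
    # ordered by decreasing singular value (hence decreasing eigenvalue).
    _, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    U = Vt[:K]                           # the K most significant eigenfaces
    A = Phi @ U.T                        # coefficients a_i of each training face
    return F_mean, U, A
```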

7 Matching Given an unknown face image I, convert it to its eigenface representation Ω = (ω_1, ω_2, …, ω_K). Then find the face class k that minimizes ε_k = ||Ω − Ω_k||, where Ω_k is the stored representation of class k.
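A matching sketch that reuses the training output above (the nearest-class rule and the dictionary of per-class coefficient vectors Ω_k are illustrative assumptions; Turk and Pentland additionally threshold ε_k to reject images that are not faces at all):

```python
import numpy as np

def match_face(I, F_mean, U, class_coeffs):
    """I: flattened unknown face image.
    class_coeffs: dict mapping class id k -> Omega_k, that person's
    (e.g. mean) coefficient vector in eigenface space."""
    omega = U @ (I - F_mean)             # eigenface representation of I
    best_k = min(class_coeffs,
                 key=lambda k: np.linalg.norm(omega - class_coeffs[k]))
    eps_k = np.linalg.norm(omega - class_coeffs[best_k])
    return best_k, eps_k
```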

8 [Figure: a set of training images, their mean image, the top 3 eigenimages, and the resulting linear approximations of the training images.]

9 Extension to 3D Objects Murase and Nayar (1994, 1995) extended this idea to 3D objects. The training set had multiple views of each object, on a dark background. The views included multiple (discrete) rotations of the object on a turntable and also multiple (discrete) illuminations. The system could be used first to identify the object and then to determine its (approximate) pose and illumination.

10 Sample Objects [Figure: sample objects from the Columbia Object Recognition Database.]

11 Significance of this work The extension to 3D objects was an important contribution. Instead of using brute-force search, the authors observed that all the views of a single object, when transformed into the eigenvector space, became points on a manifold in that space. Using this, they developed fast algorithms to find the closest object manifold to an unknown input image. Recognition with pose finding took less than a second.
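A simplified sketch of the manifold idea (this discrete nearest-neighbor search over stored view projections is an illustration only; Murase and Nayar interpolate a continuous manifold through the projected views and search it with fast specialized algorithms):

```python
import numpy as np

def build_manifold(views, poses, F_mean, U):
    """Project all training views of one object into eigenspace.
    views: (n_views, D) flattened images; poses: their turntable angles."""
    points = (views - F_mean) @ U.T      # one eigenspace point per view
    return points, poses

def recognize(I, manifolds, F_mean, U):
    """manifolds: dict mapping object id -> (points, poses).
    Returns the object whose stored manifold points pass closest to I's
    projection, together with the pose of the closest view."""
    g = U @ (I - F_mean)
    best = None
    for obj, (points, poses) in manifolds.items():
        d = np.linalg.norm(points - g, axis=1)
        i = int(d.argmin())
        if best is None or d[i] < best[0]:
            best = (d[i], obj, poses[i])
    return best[1], best[2]              # (object id, approximate pose)
```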

12 Appearance-Based Recognition Training images must be representative of the instances of the objects to be recognized. The objects must be well-framed: positions and sizes must be controlled, and dimensionality reduction is needed. The approach is not powerful enough to handle general scenes without prior segmentation into the relevant objects. Newer systems use interest operators to identify “parts” and learn object models from these parts.