EigenFaces and EigenPatches
Useful model of variation in a region
– Region must be fixed shape (e.g. rectangle)
Developed for face recognition
Generalised for
– face location
– object location/recognition

Overview
Model of variation in a region

Overview of Construction
1. Mark face region on training set
2. Sample region
3. Normalise
4. Statistical Analysis

Sampling a region
– Must sample at equivalent points across the region
– Place a grid on the image and rotate/scale it as necessary
– Use interpolation to sample the image at each grid node

Interpolation
Pixel values are known at integer positions
– What is a suitable value at non-integer positions?
[Figure: values known at integer positions; the value at a non-integer position must be estimated]

Interpolation in 1D
Estimate a continuous function f(x) that passes through the set of points (i, g(i)).
[Figure: plot of f(x) against x]

1D Interpolation techniques
[Figure: nearest-neighbour, linear, and cubic interpolants of f(x) plotted against x]
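As an illustration of the three schemes, SciPy's interp1d can build each interpolant directly (the sample values below are made up):

```python
import numpy as np
from scipy.interpolate import interp1d

# Known samples g(i) at integer positions i (illustrative values only)
i = np.arange(6)
g = np.array([2.0, 3.5, 3.0, 5.0, 4.0, 4.5])

for kind in ("nearest", "linear", "cubic"):
    f = interp1d(i, g, kind=kind)   # build the interpolant f(x)
    print(kind, f(2.5))             # estimated value between samples
```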

2D Interpolation
Extension of the 1D case
– Nearest Neighbour
– Bilinear: interpolate along y at x=0 and at x=1, then interpolate along x between the two results
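A minimal sketch of bilinear sampling, assuming a greyscale image stored as a NumPy array indexed [row, column]; the same routine can be used to sample each grid node when extracting a region:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of img at a non-integer position (x, y).

    Interpolates along y at the two neighbouring integer columns,
    then linearly between those results along x, as on the slide."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    # Interpolate in y at x = x0 and at x = x0 + 1
    left  = (1 - dy) * img[y0, x0]     + dy * img[y0 + 1, x0]
    right = (1 - dy) * img[y0, x0 + 1] + dy * img[y0 + 1, x0 + 1]
    # Interpolate in x between the two column results
    return (1 - dx) * left + dx * right

img = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear(img, 0.5, 0.5))  # 1.5, the average of the four corners
```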

Representing Regions
Represent each region as a vector g
– Raster-scan the sampled values
– An n x m region gives a vector g of length nm

Normalisation
Allow for global lighting variations
Common linear approach
– Shift and scale each vector so that:
  the mean of its elements is zero
  the variance of its elements is 1
Alternative non-linear approach
– Histogram equalisation: transforms the values so that similar numbers of each grey-scale value occur
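A sketch of both options applied to a raster-scanned patch vector (the function names are mine, not from the slides):

```python
import numpy as np

def normalise_patch(g):
    """Linear normalisation: shift and scale the raster-scanned
    patch vector g to zero mean and unit variance."""
    g = g.astype(float)
    g = g - g.mean()
    sd = g.std()
    return g / sd if sd > 0 else g  # guard against flat patches

def hist_equalise(g, levels=256):
    """Simple histogram equalisation via the rank transform:
    maps values so each grey level occurs about equally often."""
    ranks = np.argsort(np.argsort(g))            # rank of each element
    return ranks * (levels - 1) / (len(g) - 1)   # spread ranks over the grey range

patch = np.random.randint(0, 256, size=20 * 20)  # fake 20x20 raster-scanned region
g = normalise_patch(patch)
print(g.mean(), g.std())  # approximately 0 and 1
```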

Review of Construction
1. Mark face region on training set
2. Sample region
3. Normalise
4. Statistical Analysis (the fun step)

Multivariate Statistical Analysis
Need to model the distribution of the normalised vectors, so we can
– generate plausible new examples
– test whether a new region is similar to the training set
– classify regions

Fitting a Gaussian
The mean and covariance matrix of the data define a Gaussian model.

Principal Component Analysis
Compute the eigenvectors of the covariance matrix S
– Eigenvectors: the main directions of variation
– Eigenvalues: the variance along each eigenvector
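Concretely, with the training vectors stacked as rows of a matrix, the analysis is a few lines of NumPy (the data here is random, purely to show the shapes):

```python
import numpy as np

# Rows of X are normalised patch vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))

mean = X.mean(axis=0)
S = np.cov(X, rowvar=False)              # covariance matrix of the data

# eigh suits symmetric matrices; eigenvalues come back in ascending order
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]        # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print(eigvals[:5])                                             # variances along the main directions
print(np.allclose(S, eigvecs @ np.diag(eigvals) @ eigvecs.T))  # S = P Λ P^T
```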

Eigenvector Decomposition
If A is a square matrix, then an eigenvector of A is a vector p such that
  A p = λ p
where λ is the corresponding eigenvalue. Usually p is scaled to have unit length, |p| = 1.

Eigenvector Decomposition
If K is an n x n covariance matrix, there exist n linearly independent eigenvectors, and all the corresponding eigenvalues are non-negative. We can decompose K as
  K = P Λ P^T
where the columns of P are the unit eigenvectors and Λ is the diagonal matrix of eigenvalues.

Eigenvector Decomposition
Recall that a normal pdf has the exponent -(x - μ)^T S^-1 (x - μ) / 2.
The inverse of the covariance matrix is
  S^-1 = P Λ^-1 P^T

Fun with Eigenvectors
The normal distribution has the form
  p(x) = (2π)^(-n/2) |S|^(-1/2) exp( -(x - μ)^T S^-1 (x - μ) / 2 )

Fun with Eigenvectors
Consider the transformation
  b = P^T (x - μ)
which projects x onto the eigenvector directions.

Fun with Eigenvectors
The exponent of the distribution becomes
  -(x - μ)^T S^-1 (x - μ) / 2 = -b^T Λ^-1 b / 2 = -Σ_i b_i² / (2 λ_i)

Normal distribution
Thus, by applying the transformation b = P^T (x - μ), the normal distribution is simplified to a product of independent 1D Gaussians:
  p(b) ∝ Π_i exp( -b_i² / (2 λ_i) )
Each parameter b_i is independent, with variance λ_i.
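A quick numerical check of this identity on a synthetic covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
S = A @ A.T + 4 * np.eye(4)        # a synthetic positive-definite covariance
mu = rng.normal(size=4)
x = rng.normal(size=4)

lam, P = np.linalg.eigh(S)         # S = P diag(lam) P^T
b = P.T @ (x - mu)                 # the decorrelating transformation

mahal = (x - mu) @ np.linalg.inv(S) @ (x - mu)
print(mahal, np.sum(b**2 / lam))   # the two exponents agree
```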

Dimensionality Reduction
Co-ordinates are often correlated
– Nearby points move together

Dimensionality Reduction
Data often lies in a subspace of reduced dimension: for some t, the eigenvalues λ_i are negligible for all i > t.

Approximation
Each element of the data can then be approximated using only the first t eigenvectors:
  x ≈ x̄ + Σ_{i=1..t} b_i p_i,  with b_i = p_i^T (x - x̄)

Normal PDF
In the reduced model the normal pdf depends only on the first t parameters:
  p(x) ∝ exp( -Σ_{i=1..t} b_i² / (2 λ_i) )

Useful Trick
If x is of high dimension, S is huge (dim(x) x dim(x)).
If the number of samples N < dim(x), compute instead the small N x N matrix T = D^T D / N, where the columns of D are the zero-mean samples. Its eigenvectors q_i give the eigenvectors of S as p_i ∝ D q_i, with the same eigenvalues.
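A sketch of the trick in NumPy: for N samples of dimension d with N much smaller than d, the eigen-problem is solved on the small N x N matrix and mapped back:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 20, 400                       # few samples, high dimension
X = rng.normal(size=(N, d))
D = X - X.mean(axis=0)               # zero-mean samples (rows here)

T = D @ D.T / N                      # small N x N matrix
lam, Q = np.linalg.eigh(T)
P = D.T @ Q                          # map back: columns are eigenvectors of S

keep = lam > 1e-10                   # discard null directions
P, lam = P[:, keep], lam[keep]
P /= np.linalg.norm(P, axis=0)       # renormalise to unit length

# Check against the direct (expensive) computation
S = D.T @ D / N
print(np.allclose(S @ P[:, -1], lam[-1] * P[:, -1]))  # True
```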

Building Eigen-Models
Given examples {x_i}, compute the mean x̄ and the eigenvectors of the covariance matrix.
The model is then
  x = x̄ + P b
– P: the first t eigenvectors of the covariance matrix
– b: the model parameters
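Putting the pieces together, a minimal eigen-model might look like this (the class name and interface are my own, not a standard API):

```python
import numpy as np

class EigenModel:
    """Linear model x ~ mean + P b, with P the first t eigenvectors."""

    def fit(self, X, t):
        self.mean = X.mean(axis=0)
        S = np.cov(X - self.mean, rowvar=False)
        lam, vecs = np.linalg.eigh(S)
        order = np.argsort(lam)[::-1][:t]   # keep the t largest-variance directions
        self.lam = lam[order]
        self.P = vecs[:, order]
        return self

    def project(self, x):
        """Model parameters b for a (normalised) vector x."""
        return self.P.T @ (x - self.mean)

    def reconstruct(self, b):
        """Approximate x from parameters b."""
        return self.mean + self.P @ b

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 64))       # fake training patches
model = EigenModel().fit(X, t=10)
b = model.project(X[0])
print(np.linalg.norm(X[0] - model.reconstruct(b)))  # residual of the t-dim approximation
```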

Eigen-Face models
Model of variation in a region

Applications: Locating objects
Scan a window over the target region. At each position:
– sample, normalise, and evaluate p(g)
Select the position with the largest p(g); a sketch of this search loop follows.
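A sketch of the search loop, reusing the hypothetical EigenModel and normalise_patch helpers from the earlier sketches:

```python
import numpy as np

def log_p(model, g):
    """Log-density of the eigen-model Gaussian (up to a constant),
    using only the t retained modes: -sum_i b_i^2 / (2 lam_i)."""
    b = model.project(g)
    return -0.5 * np.sum(b**2 / model.lam)

def locate(image, model, h, w):
    """Scan an h x w window over the image; return the best top-left corner."""
    best, best_pos = -np.inf, None
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            g = normalise_patch(image[r:r + h, c:c + w].ravel())
            score = log_p(model, g)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```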

Multi-Resolution Search
Train models at each level of a pyramid (sketched below)
– Gaussian pyramid with step size 2
– Use the same points but different local models at each level
Start the search at coarse resolution
– Refine at finer resolutions
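A coarse-to-fine sketch under the same assumptions as the locate() sketch above; the pyramid construction is deliberately crude (a real Gaussian pyramid would smooth with a proper Gaussian filter), and for simplicity the window size is kept fixed across levels:

```python
import numpy as np

def build_pyramid(image, levels):
    """Crude pyramid: 2x2 box smoothing, then subsample by 2."""
    pyr = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyr[-1]
        img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]  # trim to even size
        img = (img[0::2, 0::2] + img[1::2, 0::2]
               + img[0::2, 1::2] + img[1::2, 1::2]) / 4
        pyr.append(img)
    return pyr

def coarse_to_fine(image, models, h, w, levels):
    """Search the coarsest level exhaustively, then refine locally at each
    finer level (models[k] is the level-k EigenModel, level 0 finest)."""
    pyr = build_pyramid(image, levels)
    pos, _ = locate(pyr[-1], models[-1], h, w)        # full search at the top
    for k in range(levels - 2, -1, -1):
        r, c = pos[0] * 2, pos[1] * 2                 # map to the finer level
        best, best_pos = -np.inf, (r, c)
        for dr in range(-2, 3):                       # small local neighbourhood
            for dc in range(-2, 3):
                rr, cc = r + dr, c + dc
                if 0 <= rr <= pyr[k].shape[0] - h and 0 <= cc <= pyr[k].shape[1] - w:
                    g = normalise_patch(pyr[k][rr:rr + h, cc:cc + w].ravel())
                    score = log_p(models[k], g)
                    if score > best:
                        best, best_pos = score, (rr, cc)
        pos = best_pos
    return pos
```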

Application: Object Detection
Scan the image to find the points with the largest p(g).
If p(g) > p_min then the object is deemed present.
Strictly, one should use a background model and test the likelihood ratio p(g | object) / p(g | background) against a threshold.
This only works if the PDFs are good approximations, which is often not the case.

Application: Face Recognition
Eigenfaces were developed for face recognition
– More about this later