Recognition, SVD, and PCA

Recognition
Suppose you want to find a face in an image.
– One possibility: look for something that looks sort of like a face (oval, dark band near top, dark band near bottom)
– Another possibility: look for pieces of faces (eyes, mouth, etc.) in a specific arrangement

Templates
Model of a “generic” or “average” face
– Learn templates from example data
For each location in the image, look for the template at that location
– Optionally also search over scale and orientation

Templates
In the simplest case, based on intensity
– Template is the average of all faces in the training set
– Comparison based on e.g. SSD (sum of squared differences)
More complex templates
– Outputs of feature detectors
– Color histograms
– Both position and frequency information (wavelets)
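To make the intensity-based comparison concrete, here is a minimal brute-force SSD template-matching sketch (the array names are made up; real detectors use FFT-based correlation or integral images for speed):

    import numpy as np

    def ssd_match(image, template):
        """Slide the template over the image; return the SSD at each location."""
        H, W = image.shape
        h, w = template.shape
        scores = np.empty((H - h + 1, W - w + 1))
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                patch = image[i:i+h, j:j+w]
                scores[i, j] = np.sum((patch - template) ** 2)
        return scores

    # Best match = location with the smallest SSD:
    # i, j = np.unravel_index(np.argmin(scores), scores.shape)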

Face Detection Results
[Figure: sample images, wavelet histogram template, and detections of frontal/profile faces]

More Face Detection Results
[Figure: detection results of Schneiderman and Kanade]

Recognition Using Relations Between Templates
It is often easier to recognize a small feature
– e.g., lips are easier to recognize than whole faces
– For articulated objects (e.g. people), a template for the whole class is usually complicated
So, identify small pieces and look for spatial arrangements
– Downside: many false positives from identifying pieces

Graph Matching
[Figure: model graph with Head, Body, Arm, Arm, Leg, Leg nodes, next to feature detection results]

Graph Matching
[Figure: model graph with pairwise constraints such as “next to, right of” and “next to, below”]

Graph Matching
[Figure: combinatorial search assigning model nodes to detected features]

Graph Matching
[Figure: combinatorial search; this assignment satisfies the constraints (OK)]

Graph Matching
[Figure: combinatorial search; this assignment violates a constraint]

Graph Matching
Large search space
– Heuristics for pruning
Missing features
– Look for a maximal consistent assignment
Noise, spurious features
Incomplete constraints
– Verification step at the end
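As a rough illustration of the combinatorial search, here is a minimal backtracking sketch; the part names and the compatible/consistent predicates are hypothetical stand-ins for the feature detectors and pairwise constraints above:

    def match(parts, features, compatible, consistent):
        """Backtracking search: assign each model part to a detected feature.
        compatible(part, feat): can this feature play this role (e.g. a leg)?
        consistent(assignment): do all pairwise constraints (next-to, below, ...) hold?
        Returns the first complete consistent assignment, or None."""
        def extend(assignment, remaining):
            if not remaining:
                return assignment
            part = remaining[0]
            for feat in features:
                if feat not in assignment.values() and compatible(part, feat):
                    trial = {**assignment, part: feat}
                    if consistent(trial):   # prune branches that violate a constraint
                        result = extend(trial, remaining[1:])
                        if result is not None:
                            return result
            return None                     # dead end: backtrack
        return extend({}, list(parts))

    # e.g. match(["Head", "Body", "Arm1", "Arm2", "Leg1", "Leg2"], detections, ...)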

Recognition
Suppose you want to recognize a particular face.
How does this face differ from the average face?

How to Recognize Specific People?
Consider the variation from the average face
Not all variations are equally important
– Variation in a single pixel is relatively unimportant
If an image is a high-dimensional vector, we want to find the directions in this space with high variation

Principal Components Analysis
Principal Components Analysis (PCA): approximating a high-dimensional data set with a lower-dimensional subspace
[Figure: 2D scatter of data points, with the original axes and the first and second principal components]

Digression: Singular Value Decomposition (SVD)
Handy mathematical technique that has application to many problems
Given any m × n matrix A, there is an algorithm to find matrices U, V, and W such that
A = U W V^T
– U is m × n with orthonormal columns
– V is n × n and orthonormal
– W is n × n and diagonal
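A quick numpy sketch of the factorization (full_matrices=False gives the “economy-size” SVD, matching svd(A,0); numpy returns the singular values as a vector rather than as the diagonal matrix W):

    import numpy as np

    A = np.random.rand(6, 4)                          # any m x n matrix
    U, w, Vt = np.linalg.svd(A, full_matrices=False)  # economy SVD

    W = np.diag(w)                                    # w holds the singular values
    assert np.allclose(A, U @ W @ Vt)                 # A = U W V^T
    assert np.allclose(U.T @ U, np.eye(4))            # U has orthonormal columns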

SVD
Treat it as a black box: code is widely available (svd(A,0) in Matlab, np.linalg.svd in numpy)

SVD
The w_i are called the singular values of A
If A is singular, some of the w_i will be 0
In general, rank(A) = number of nonzero w_i
The SVD is mostly unique (up to permutation of the singular values, or up to mixing of columns when some w_i are equal)
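In floating point the “zero” w_i rarely come out exactly 0, so rank is usually estimated with a tolerance; a small sketch (the tolerance here is an arbitrary choice):

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.],             # linearly dependent row
                  [0., 1., 1.]])
    w = np.linalg.svd(A, compute_uv=False)  # singular values only
    rank = np.sum(w > 1e-10 * w.max())      # count the "nonzero" w_i
    print(rank)                             # 2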

SVD and Inverses
Why is SVD so useful?
Application #1: inverses
– A^-1 = (V^T)^-1 W^-1 U^-1 = V W^-1 U^T
This fails when some w_i are 0
– It’s supposed to fail: the matrix is singular
Pseudoinverse: if w_i = 0, set 1/w_i to 0 (!)
– “Closest” matrix to an inverse
– Defined for all matrices (even non-square ones)
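A minimal sketch of the pseudoinverse via SVD (numpy ships np.linalg.pinv, which follows essentially this recipe):

    import numpy as np

    def pinv_svd(A, tol=1e-10):
        """Moore-Penrose pseudoinverse: invert only the nonzero singular values."""
        U, w, Vt = np.linalg.svd(A, full_matrices=False)
        w_inv = np.zeros_like(w)
        nonzero = w > tol * w.max()
        w_inv[nonzero] = 1.0 / w[nonzero]   # 1/w_i, but 0 where w_i ~ 0
        return Vt.T @ np.diag(w_inv) @ U.T  # V W^-1 U^T

    A = np.random.rand(5, 3)
    assert np.allclose(pinv_svd(A), np.linalg.pinv(A))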

SVD and Least Squares
Solving Ax = b by least squares:
x = pseudoinverse(A) · b
Compute the pseudoinverse using SVD
– Lets you see whether the data is singular
– Even if not singular, the ratio of the largest to smallest singular value (the condition number) tells you how stable the solution will be
– Set 1/w_i to 0 if w_i is small (even if not exactly 0)
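A sketch of least squares through the SVD, on made-up data; np.linalg.lstsq solves the same problem and is used here as a cross-check:

    import numpy as np

    # Overdetermined system: 10 equations, 3 unknowns
    A = np.random.rand(10, 3)
    b = np.random.rand(10)

    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    print("condition number:", w.max() / w.min())     # stability of the solution

    x = Vt.T @ ((U.T @ b) / w)                        # x = V W^-1 U^T b
    assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])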

SVD and Eigenvectors
Let A = U W V^T, and let x_i be the i-th column of V
Consider A^T A x_i:
A^T A x_i = (V W U^T)(U W V^T) x_i = V W^2 V^T x_i = w_i^2 x_i (using U^T U = I and V^T x_i = e_i)
So the squared elements of W (the w_i^2) are the eigenvalues of A^T A, and the columns of V are its eigenvectors
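A quick numerical check of this relationship:

    import numpy as np

    A = np.random.rand(6, 4)
    U, w, Vt = np.linalg.svd(A, full_matrices=False)

    evals, evecs = np.linalg.eigh(A.T @ A)             # eigendecomposition of A^T A
    assert np.allclose(np.sort(evals), np.sort(w**2))  # eigenvalues = w_i^2

    x0 = Vt[0]                                         # first column of V
    assert np.allclose(A.T @ A @ x0, w[0]**2 * x0)     # A^T A x_i = w_i^2 x_i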

SVD and Matrix Similarity
One common definition for the norm of a matrix is the Frobenius norm:
||A||_F = sqrt( Σ_ij a_ij^2 )
The Frobenius norm can be computed from the SVD:
||A||_F = sqrt( Σ_i w_i^2 )
So changes to a matrix can be evaluated by looking at changes to its singular values
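Checking that the two expressions agree numerically:

    import numpy as np

    A = np.random.rand(5, 4)
    w = np.linalg.svd(A, compute_uv=False)
    assert np.isclose(np.sqrt(np.sum(A**2)),   # sum of squared entries
                      np.sqrt(np.sum(w**2)))   # sum of squared singular values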

SVD and Matrix Similarity
Suppose you want to find the best rank-k approximation to A
Answer: set all but the largest k singular values to zero
Can form a compact representation by eliminating the columns of U and V corresponding to the zeroed w_i
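A sketch of the truncation (that this gives the best rank-k approximation in the Frobenius norm is the Eckart–Young theorem):

    import numpy as np

    def rank_k_approx(A, k):
        """Best rank-k approximation: keep only the k largest singular values."""
        U, w, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :k] @ np.diag(w[:k]) @ Vt[:k]  # compact: m*k + k + k*n numbers

    A = np.random.rand(8, 6)
    A2 = rank_k_approx(A, 2)
    print(np.linalg.matrix_rank(A2))               # 2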

SVD and Orthogonalization
The matrix U V^T is the “closest” orthonormal matrix to A
– Yet another useful application of the matrix-approximation properties of SVD
Much more stable numerically than Gram–Schmidt orthogonalization
Can find the rotation given a general affine matrix
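A sketch of pulling the nearest rotation out of an affine matrix; the determinant fix-up, which guards against reflections, is a common convention assumed here rather than something from the slides:

    import numpy as np

    def nearest_rotation(A):
        """Closest orthonormal matrix to A in the Frobenius norm: U V^T."""
        U, _, Vt = np.linalg.svd(A)
        R = U @ Vt
        if np.linalg.det(R) < 0:          # flip a sign to get a proper rotation
            U[:, -1] *= -1
            R = U @ Vt
        return R

    A = np.random.rand(3, 3)
    R = nearest_rotation(A)
    assert np.allclose(R @ R.T, np.eye(3))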

SVD and PCA
Principal Components Analysis (PCA): approximating a high-dimensional data set with a lower-dimensional subspace
[Figure: 2D scatter of data points, with the original axes and the first and second principal components]

SVD and PCA
Form the data matrix with points as rows, and take its SVD
– Subtract out the mean first (centering; full “whitening” would also normalize the variances)
The columns of V_k (the first k columns of V) are the principal components
The value of w_i gives the importance of each component
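A minimal PCA-via-SVD sketch on made-up data:

    import numpy as np

    X = np.random.rand(100, 5)            # 100 points in 5 dimensions, one per row
    Xc = X - X.mean(axis=0)               # center the data

    U, w, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt                       # rows of Vt = columns of V = principal axes
    importance = w**2 / (len(X) - 1)      # variance explained by each component

    k = 2
    coeffs = Xc @ Vt[:k].T                # project each point onto the top k components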

PCA on Faces: “Eigenfaces”
[Figure: average face, first principal component, and other components; for all except the average, “gray” = 0, “white” > 0, “black” < 0]

Using PCA for Recognition
Store each person as the coefficients of their projection onto the first few principal components
Compute the projection of the target image and compare to the database (a “nearest neighbor classifier”)
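Putting the pieces together, a hypothetical end-to-end sketch; the array shapes, the choice of 10 components, and the random “training set” are assumptions for illustration only:

    import numpy as np

    # faces: one training image per row, flattened to a vector
    faces = np.random.rand(50, 64 * 64)
    labels = np.arange(50)                          # one identity per training face

    mean_face = faces.mean(axis=0)
    U, w, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
    eigenfaces = Vt[:10]                            # keep the first 10 components

    database = (faces - mean_face) @ eigenfaces.T   # stored coefficients per person

    def recognize(image):
        """Project the target image; return the nearest neighbor's label."""
        coeffs = (image.ravel() - mean_face) @ eigenfaces.T
        return labels[np.argmin(np.linalg.norm(database - coeffs, axis=1))]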