Computer Graphics Recitation 6

2 Last week - eigendecomposition. We want to learn how the transformation A works.

3 Last week - eigendecomposition. If we only look at how A acts on arbitrary vectors, it doesn't tell us much.

4 Spectra and diagonalization. Moreover, if A is symmetric, the eigenvectors are orthogonal (and there is always an eigenbasis): A = U Λ U^T, where the columns u_i of U satisfy Au_i = λ_i u_i and Λ = diag(λ_1, …, λ_n).
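
As a quick numerical illustration (my own sketch, not part of the original slides), here is a minimal NumPy check of the symmetric case: build a small symmetric matrix, take its eigendecomposition, and verify A = U Λ U^T and Au_i = λ_i u_i. The example matrix is assumed, chosen only for demonstration.

```python
import numpy as np

# A small symmetric matrix (assumed example, not from the slides).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is intended for symmetric/Hermitian matrices: it returns real
# eigenvalues in ascending order and orthonormal eigenvectors as columns of U.
eigvals, U = np.linalg.eigh(A)
Lam = np.diag(eigvals)

# A = U Λ U^T (up to floating-point error)
assert np.allclose(A, U @ Lam @ U.T)

# Each column u_i satisfies A u_i = λ_i u_i
for lam_i, u_i in zip(eigvals, U.T):
    assert np.allclose(A @ u_i, lam_i * u_i)

# The eigenvectors are orthonormal: U^T U = I
assert np.allclose(U.T @ U, np.eye(2))
```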

5 In real life… Matrices that have an eigenbasis are rare – general transformations involve rotations as well, not only scalings. We want to understand how general transformations behave. We need a generalization of eigendecomposition ⇒ SVD: A = U Σ V^T. Before we learn SVD, we'll see Principal Component Analysis (PCA) – using spectral analysis to analyze the shape and dimensionality of scattered data.
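
To make the jump to SVD concrete, here is a small hedged NumPy sketch (again my own illustration): any matrix, symmetric or not, factors as A = U Σ V^T with orthogonal U, V and nonnegative singular values. The example transformation (a rotation followed by a scaling) is assumed.

```python
import numpy as np

# A general (non-symmetric) transformation: a rotation followed by a scaling.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.diag([3.0, 0.5]) @ R

# SVD works for every matrix: A = U diag(s) V^T,
# with U, V orthogonal and s >= 0 (the singular values).
U, s, Vt = np.linalg.svd(A)
assert np.allclose(A, U @ np.diag(s) @ Vt)
print("singular values:", s)   # 3.0 and 0.5 for this example
```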

6 The plan for today: first we'll see some applications of PCA, then look at the theory.

7 PCA – the general idea. PCA finds an orthogonal basis that best represents a given data set: the new axes x' and y' are chosen so that the sum of squared distances from the x' axis is minimized.

8 PCA finds an orthogonal basis that best represents a given data set. For a 3D point set (shown in the standard basis x, y, z), PCA finds a best approximating plane, again in terms of the sum of squared distances.

9 PCA – the general idea: the best approximating plane for the 3D point set, again in the sense of a minimal sum of squared distances.

10 Application: finding a tight bounding box. An axis-aligned bounding box agrees with the coordinate axes: it spans [minX, maxX] × [minY, maxY].
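
For instance, the axis-aligned box of a point cloud is just the per-axis minima and maxima. The following NumPy sketch (illustrative, not code from the recitation) computes it for assumed 2D data.

```python
import numpy as np

def axis_aligned_bbox(points):
    """points: (n, d) array. Returns (mins, maxs) of the axis-aligned box."""
    points = np.asarray(points, dtype=float)
    return points.min(axis=0), points.max(axis=0)

# Example with assumed 2D data:
pts = np.array([[0.0, 1.0], [2.0, -1.0], [1.5, 3.0]])
mins, maxs = axis_aligned_bbox(pts)
print(mins, maxs)   # [0. -1.]  [2. 3.]
```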

11 Usage of bounding boxes (bounding volumes). Bounding boxes serve as a very simple "approximation" of the object, used for fast collision detection, visibility queries, and whenever we need to know the dimensions (size) of the object. The models consist of thousands of polygons; to quickly test that they don't intersect, the bounding boxes are tested first. Sometimes a hierarchy of bounding boxes is used. The tighter the bounding box, the fewer "false alarms" we get.
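
A hedged sketch of why bounding boxes make collision tests fast: two axis-aligned boxes overlap exactly when their intervals overlap on every axis, which costs O(d) comparisons regardless of how many polygons the models contain. The function below is my own illustration, not code from the recitation.

```python
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """True iff the axis-aligned boxes [min_a, max_a] and [min_b, max_b] intersect."""
    min_a, max_a = np.asarray(min_a), np.asarray(max_a)
    min_b, max_b = np.asarray(min_b), np.asarray(max_b)
    # The boxes are disjoint iff they are separated along some axis.
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

# If the boxes don't overlap, the enclosed models surely don't intersect,
# so the expensive polygon-level test can be skipped ("false alarms" happen
# only when the boxes overlap but the models don't).
print(aabb_overlap([0, 0], [1, 1], [2, 2], [3, 3]))   # False
print(aabb_overlap([0, 0], [2, 2], [1, 1], [3, 3]))   # True
```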

12 Application: finding a tight bounding box. Oriented bounding box: we find better axes x' and y'!

13 Application: finding a tight bounding box. In 3D, the box aligned with the standard x, y, z axes is not the optimal bounding box.

14 Application: finding a tight bounding box. Oriented bounding box: we find better axes!

15 Notations. Denote our data points by x_1, x_2, …, x_n ∈ R^d.

16 The origin of the new axes. The origin is the zero-order approximation of our data set (a point). It will be the center of mass: m = (1/n) Σ_{i=1}^n x_i. It can be shown that m = argmin_x Σ_{i=1}^n ||x_i − x||^2, i.e. the center of mass minimizes the sum of squared distances to the data points.

17 Scatter matrix. Denote y_i = x_i − m, i = 1, 2, …, n. The scatter matrix is S = Σ_{i=1}^n y_i y_i^T, a d×d matrix.

18 Scatter matrix - eigendecomposition. S is symmetric ⇒ S has an eigendecomposition S = V Λ V^T, where V = [v_1 v_2 … v_d] holds the eigenvectors as columns and Λ = diag(λ_1, λ_2, …, λ_d). The eigenvectors form an orthogonal basis.
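
Putting slides 15-18 together, here is a minimal NumPy sketch (my own, with made-up data) of the mean, the scatter matrix, and its eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
# n assumed sample points in R^d (here d = 3), elongated along one direction.
X = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])

m = X.mean(axis=0)                 # center of mass
Y = X - m                          # y_i = x_i - m, stacked as rows
S = Y.T @ Y                        # scatter matrix, d x d (= sum_i y_i y_i^T)

# S is symmetric, so eigh applies; it returns eigenvalues in ascending order.
eigvals, V = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]  # sort descending: λ_1 >= λ_2 >= ... >= λ_d
eigvals, V = eigvals[order], V[:, order]

print("eigenvalues:", eigvals)     # the largest corresponds to the elongated direction
print("principal direction v_1:", V[:, 0])
```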

19 Principal components. S measures the "scatterness" of the data. Eigenvectors that correspond to large eigenvalues are the directions in which the data has strong components. If the eigenvalues are all more or less the same, there is no preferable direction.

20 Principal components. When there is no preferable direction, S looks like a scaled identity, S ≈ λI, and any vector is an eigenvector. When there is a clear preferable direction, S looks like diag(λ, ε), where ε is close to zero, much smaller than λ.

21 How to use what we got. For finding the oriented bounding box, we simply compute the bounding box with respect to the axes defined by the eigenvectors v_1, v_2, v_3. The origin is at the mean point m.
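
A hedged sketch of this recipe (illustrative code, not the recitation's implementation): express the points in the eigenvector frame centered at m and take per-axis minima and maxima there. The example data are assumed.

```python
import numpy as np

def oriented_bbox(X):
    """PCA-based oriented bounding box of points X (n, d).
    Returns (m, V, mins, maxs): mean, eigenvector axes (columns of V),
    and the box extents expressed in those axes."""
    X = np.asarray(X, dtype=float)
    m = X.mean(axis=0)
    Y = X - m
    S = Y.T @ Y
    _, V = np.linalg.eigh(S)          # orthonormal eigenvectors as columns
    coords = Y @ V                    # coordinates of the points in the PCA frame
    return m, V, coords.min(axis=0), coords.max(axis=0)

# Example with assumed data: an elongated, rotated 2D cloud.
rng = np.random.default_rng(1)
pts = rng.normal(size=(300, 2)) * [4.0, 0.5]
angle = np.pi / 5
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
pts = pts @ R.T

m, V, mins, maxs = oriented_bbox(pts)
print("box side lengths along the PCA axes:", maxs - mins)
```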

22 For approximation. Projecting a 2D data set onto the v_1 axis gives a line segment that approximates the original data set: the projected data set approximates the original one.

23 For approximation. In general dimension d, the eigenvalues are sorted in descending order: λ_1 ≥ λ_2 ≥ … ≥ λ_d. The eigenvectors are sorted accordingly. To get an approximation of dimension d' < d, we take the first d' eigenvectors and look at the subspace they span (d' = 1 is a line, d' = 2 is a plane, …).

24 For approximation. To get an approximating set, we project the original data points onto the chosen subspace. Write x_i = m + α_1 v_1 + α_2 v_2 + … + α_{d'} v_{d'} + … + α_d v_d; the projection is x_i' = m + α_1 v_1 + α_2 v_2 + … + α_{d'} v_{d'} + 0·v_{d'+1} + … + 0·v_d.
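
As a sketch of this projection formula (illustrative, with assumed data), the code below keeps the coefficients of the first d' principal directions and zeroes out the rest:

```python
import numpy as np

def pca_project(X, d_prime):
    """Project points X (n, d) onto the affine subspace through the mean m
    spanned by the first d' principal directions."""
    X = np.asarray(X, dtype=float)
    m = X.mean(axis=0)
    Y = X - m
    S = Y.T @ Y
    eigvals, V = np.linalg.eigh(S)
    V = V[:, np.argsort(eigvals)[::-1]]   # columns v_1, ..., v_d, λ descending
    alpha = Y @ V                          # coefficients α_1, ..., α_d per point
    alpha[:, d_prime:] = 0.0               # keep only the first d' coefficients
    return m + alpha @ V.T                 # x_i' = m + Σ_{j<=d'} α_j v_j

# Example: 3D points that are nearly planar; d' = 2 recovers that plane.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3)) * [3.0, 1.5, 0.05]
X_proj = pca_project(X, d_prime=2)
print("max residual:", np.abs(X - X_proj).max())   # small: little is lost
```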

25 Optimality of approximation. The approximation is optimal in the least-squares sense: it gives the minimum of Σ_{i=1}^n ||x_i − x_i'||^2 over all subspaces of dimension d'. Equivalently, the projected points have maximal variance. (Figure: original set; projection on an arbitrary line; projection on the v_1 axis.)
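
A small numerical check of the optimality claim (my own illustration, not from the slides): projecting onto the v_1 axis gives a smaller sum of squared residuals, and a larger variance of the projections, than projecting onto a random line through the mean.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 2)) * [3.0, 0.7]    # assumed example data
m = X.mean(axis=0)
Y = X - m

# First principal direction v_1.
eigvals, V = np.linalg.eigh(Y.T @ Y)
v1 = V[:, np.argmax(eigvals)]

def line_projection_stats(direction):
    """Sum of squared residuals and variance when projecting onto a line
    through m with the given direction."""
    d = direction / np.linalg.norm(direction)
    coeffs = Y @ d
    residuals = Y - np.outer(coeffs, d)
    return (residuals ** 2).sum(), coeffs.var()

for name, d in [("v_1 axis", v1), ("random line", rng.normal(size=2))]:
    sse, var = line_projection_stats(d)
    print(f"{name}: sum of squared residuals = {sse:.1f}, variance = {var:.2f}")
```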

26 Technical remarks: λ_i ≥ 0, i = 1, …, d (such matrices are called positive semi-definite), so we can indeed sort by the magnitude of λ_i. Theorem: λ_i ≥ 0 for all i ⟺ ⟨Sv, v⟩ ≥ 0 for all v. Proof: write v in the eigenbasis, v = Σ_i α_i v_i; then ⟨Sv, v⟩ = Σ_i λ_i α_i^2, which is nonnegative for every v exactly when all λ_i ≥ 0. Therefore, λ_i ≥ 0 ⟺ ⟨Sv, v⟩ ≥ 0 for all v.

27 Technical remarks: In our case, indeed ⟨Sv, v⟩ ≥ 0 for all v. This is because S can be represented as S = XX^T, where X is a d×n matrix. So ⟨Sv, v⟩ = ⟨XX^T v, v⟩ = ⟨X^T v, X^T v⟩ = ||X^T v||^2 ≥ 0.

28 Technical remarks: S = XX^T, where X = [y_1 y_2 … y_n] is the d×n matrix whose columns are the centered data points. Then XX^T = Σ_{i=1}^n y_i y_i^T = S, which is d×d.
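
A quick hedged check of these remarks (illustrative only, with assumed data): with X holding the centered points as columns, S = XX^T, every ⟨Sv, v⟩ is a squared norm, and the eigenvalues of S are nonnegative.

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.normal(size=(100, 3))          # assumed example data, n points in R^d
X = (pts - pts.mean(axis=0)).T           # d x n matrix of centered points as columns
S = X @ X.T                              # scatter matrix, d x d

v = rng.normal(size=3)                   # an arbitrary test vector
# <Sv, v> = ||X^T v||^2 >= 0
assert np.isclose(v @ S @ v, np.linalg.norm(X.T @ v) ** 2)

# All eigenvalues are (numerically) nonnegative: S is positive semi-definite.
assert np.all(np.linalg.eigvalsh(S) >= -1e-9)
```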

See you next time