3D Geometry for Computer Graphics

3D Geometry for Computer Graphics Class 5

The plan today: Singular Value Decomposition (basic intuition, formal definition, applications).

Geometric analysis of linear transformations We want to know what a linear transformation A does. We need some simple and comprehensible representation of the matrix of A. Let's look at what A does to some vectors. Since A(αv) = αA(v), it's enough to look at vectors v of unit length.

The geometry of linear transformations A linear (non-singular) transform A always takes hyper-spheres to hyper-ellipses.

The geometry of linear transformations Thus, one good way to understand what A does is to find which vectors are mapped to the "main axes" of the ellipsoid.

Geometric analysis of linear transformations If we are lucky: A = V Λ V^T with V orthogonal (true if A is symmetric). The eigenvectors of A are the axes of the ellipse.

Symmetric matrix: eigen decomposition In this case A is just a scaling matrix. The eigen decomposition of A tells us which orthogonal axes it scales, and by how much: the unit circle is mapped to an ellipse whose axes are the eigenvectors, scaled by the eigenvalues λ1 and λ2.
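
A minimal numeric sketch of this idea (my addition, not part of the original slides), using NumPy; the symmetric matrix A below is an arbitrary example:

```python
import numpy as np

# A symmetric matrix: it only scales along its (orthogonal) eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for symmetric matrices; eigenvalues come back sorted ascending.
eigvals, V = np.linalg.eigh(A)

# The eigenvectors are orthonormal: V^T V = I.
assert np.allclose(V.T @ V, np.eye(2))

# Reconstruct A = V Lambda V^T.
assert np.allclose(V @ np.diag(eigvals) @ V.T, A)

print("eigenvalues:", eigvals)            # 1 and 3: the scalings along the axes
print("eigenvectors (columns):\n", V)
```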

General linear transformations: SVD In general A will also contain rotations, not just scales.

General linear transformations: SVD 1 1 1 2 A orthonormal orthonormal

SVD more formally SVD exists for any matrix. Formal definition: for square matrices A ∈ R^{n×n}, there exist orthogonal matrices U, V ∈ R^{n×n} and a diagonal matrix Σ such that all the diagonal values σi of Σ are non-negative and A = U Σ V^T.

SVD more formally The diagonal values of Σ (σ1, …, σn) are called the singular values. It is customary to sort them: σ1 ≥ σ2 ≥ … ≥ σn. The columns of U (u1, …, un) are called the left singular vectors; they are the axes of the ellipsoid. The columns of V (v1, …, vn) are called the right singular vectors; they are the preimages of the axes of the ellipsoid.
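
As a hedged illustration (mine, not the slides'), the definition can be checked numerically with NumPy; note that np.linalg.svd returns V^T rather than V:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))        # an arbitrary square matrix

U, s, Vt = np.linalg.svd(A)            # NumPy returns V^T, not V
Sigma = np.diag(s)

# Singular values are non-negative and sorted in decreasing order.
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)

# U and V are orthogonal, and A = U Sigma V^T.
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(Vt @ Vt.T, np.eye(4))
assert np.allclose(U @ Sigma @ Vt, A)

# The left singular vectors (columns of U) are the ellipsoid axes:
# A maps each right singular vector v_i to sigma_i * u_i.
V = Vt.T
assert np.allclose(A @ V, U * s)       # column-wise: A v_i = sigma_i u_i
```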

Reduced SVD For rectangular matrices, we have two forms of SVD. The reduced SVD of an m×n matrix A (with m ≥ n) looks like this: A = U Σ V^T, where U is m×n with orthonormal columns, Σ is n×n diagonal, and V is n×n orthogonal. This is the cheaper form for computation and storage.
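
A quick NumPy comparison (my addition) of the reduced and full forms for a tall rectangular matrix; full_matrices=False requests the reduced SVD:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))              # tall m x n matrix, m >> n

# Reduced SVD: U is m x n with orthonormal columns, Sigma is n x n.
U_r, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U_r.shape, s.shape, Vt.shape)            # (100, 3) (3,) (3, 3)
assert np.allclose(U_r @ np.diag(s) @ Vt, A)

# Full SVD: U is m x m, and Sigma must be padded with zeros to m x n.
U_f, s_f, Vt_f = np.linalg.svd(A, full_matrices=True)
print(U_f.shape)                               # (100, 100) -- much more storage
```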

Full SVD We can complete U to a full m×m orthogonal matrix and pad Σ with zeros accordingly, so that Σ becomes m×n and A = U Σ V^T still holds.

Some history SVD was discovered by the following people: E. Beltrami (1835-1900), M. Jordan (1838-1922), J. Sylvester (1814-1897), E. Schmidt (1876-1959), H. Weyl (1885-1955).

SVD is the "workhorse" of linear algebra There are numerical algorithms to compute the SVD. Once you have it, you have many things: the matrix inverse (so you can solve square linear systems), the numerical rank of a matrix, least-squares solutions, PCA, and many more.

Matrix inverse and solving linear systems If A = U Σ V^T is invertible, then A^{-1} = (U Σ V^T)^{-1} = V Σ^{-1} U^T, where Σ^{-1} = diag(1/σ1, …, 1/σn). So, to solve Ax = b, compute x = V Σ^{-1} U^T b.
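
A small sketch (mine) of solving Ax = b through the SVD. In practice you would call a library solver directly, but this shows the formula x = V Σ^{-1} U^T b at work on an arbitrary well-conditioned example:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

U, s, Vt = np.linalg.svd(A)

# x = V Sigma^{-1} U^T b  (valid when all singular values are non-zero)
x = Vt.T @ ((U.T @ b) / s)

assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.solve(A, b))   # matches the direct solver
```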

Matrix rank The rank of A is the number of non-zero singular values.

Numerical rank If there are very small singular values, then A is close to being singular. We can set a threshold t, so that numeric_rank(A) = #{i | σi > t}. If rank(A) < n then A is singular: it maps the entire space R^n onto some subspace, like a plane (so A is some sort of projection).
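
A sketch of the thresholding idea; numeric_rank below is a hypothetical helper of my own, shown next to NumPy's built-in matrix_rank for comparison:

```python
import numpy as np

def numeric_rank(A, t=1e-10):
    """Count the singular values of A that exceed the threshold t."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > t))

# A rank-deficient example: the third row is the sum of the first two.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])

print(numeric_rank(A))                 # 2
print(np.linalg.matrix_rank(A))        # 2, for comparison
```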

PCA We wanted to find the principal components: new axes x', y' aligned with the data, replacing the original x, y axes.

PCA Move the center of mass to the origin: pi' = pi - m, where m is the centroid of the points.

PCA We constructed the matrix X of the centered data points (one point per column). The principal axes are the eigenvectors of S = X X^T.

PCA We can compute the principal components by the SVD of X: if X = U Σ V^T, then S = X X^T = U Σ V^T V Σ U^T = U Σ^2 U^T. Thus, the left singular vectors of X (the columns of U) are the principal components! We sort them by the size of the singular values of X.
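
A compact sketch of PCA via SVD (my illustration on synthetic 2D data, with points stored as columns as in the slides): center the points, take the SVD of the centered data matrix, and read the principal axes off the columns of U:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2D points (columns of X), stretched along a known direction.
n = 500
X = np.array([[3.0, 0.0],
              [0.0, 0.5]]) @ rng.standard_normal((2, n))

# Step 1: move the center of mass to the origin.
m = X.mean(axis=1, keepdims=True)
Xc = X - m

# Step 2: SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# The columns of U are the principal axes, already sorted by singular value.
print("principal axes (columns):\n", U)
print("variances along the axes:", s**2 / (n - 1))

# Sanity check: they are eigenvectors of the scatter matrix S = Xc Xc^T.
S = Xc @ Xc.T
eigvals, V_eig = np.linalg.eigh(S)            # ascending eigenvalues
assert np.allclose(np.abs(U[:, 0]), np.abs(V_eig[:, -1]), atol=1e-6)
```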

Shape matching We have two objects in correspondence. We want to find the rigid transformation that aligns them.

Shape matching When the objects are aligned, the lengths of the connecting lines are small.

Shape matching – formalization Align two point sets P = {p1, …, pn} and Q = {q1, …, qn}: find a translation vector t and a rotation matrix R so that ∑i ||pi - (R qi + t)||^2 is minimized.

Shape matching – solution It turns out we can solve for the translation and the rotation separately. Theorem: if (R, t) is the optimal transformation, then the point sets {pi} and {R qi + t} have the same centers of mass.

Finding the rotation R To find the optimal R, we bring the centroids of both point sets to the origin: pi' = pi - p̄, qi' = qi - q̄, where p̄ and q̄ are the centroids. We then want to find the R that minimizes ∑i ||pi' - R qi'||^2.

Summary of rigid alignment: (1) translate the input points to the centroids: pi' = pi - p̄, qi' = qi - q̄; (2) compute the "covariance matrix" H = ∑i qi' pi'^T; (3) compute the SVD of H: H = U Σ V^T; (4) the optimal rotation is R = V U^T; (5) the translation vector is t = p̄ - R q̄.
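
The summary above translates almost line for line into code. The following is a hedged sketch (my own implementation with a hypothetical name rigid_align, not the authors' code); it omits the degenerate reflection case det(V U^T) = -1, which a robust version would handle by flipping the sign of the last column of V:

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t minimizing sum_i ||p_i - (R q_i + t)||^2.

    P, Q: (n, 3) arrays of corresponding points.
    Returns (R, t) with R a 3x3 rotation matrix and t a length-3 vector.
    """
    p_bar = P.mean(axis=0)
    q_bar = Q.mean(axis=0)
    Pc = P - p_bar                       # translate to the centroids
    Qc = Q - q_bar

    H = Qc.T @ Pc                        # "covariance matrix" H = sum_i q_i' p_i'^T
    U, s, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                       # optimal rotation R = V U^T
    # (A robust version would fix det(R) = -1 here; omitted for brevity.)
    t = p_bar - R @ q_bar                # translation aligning the centroids
    return R, t

# Usage: recover a known rigid transform from noiseless correspondences.
rng = np.random.default_rng(4)
Q = rng.standard_normal((20, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = Q @ R_true.T + t_true

R, t = rigid_align(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In this noiseless test the recovered (R, t) matches the ground truth up to floating-point error.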

Complexity Numerical SVD is an expensive operation, so we always need to pay attention to the dimensions of the matrix we're applying SVD to.

See you next time

Finding the rotation R We expand the objective: ∑i ||pi' - R qi'||^2 = ∑i (||pi'||^2 - 2 pi'^T R qi' + ||R qi'||^2) = ∑i ||pi'||^2 + ∑i ||qi'||^2 - 2 ∑i pi'^T R qi', since ||R qi'|| = ||qi'|| for a rotation R. The first two sums do not depend on R, so we can ignore them in the minimization.

Finding the rotation R Minimizing the remaining term is the same as maximizing ∑i pi'^T R qi'. Each term pi'^T R qi' is a scalar (a 1×1 matrix), so it equals its own trace.

Finding the rotation R = 3 1 1 3 = 3 3

Finding the rotation R So, we want to find the R that maximizes Trace(RH). Theorem: if M is symmetric positive definite (all eigenvalues of M are positive) and B is any orthogonal matrix, then Trace(M) ≥ Trace(BM). So, let's find R so that RH is symmetric positive definite. Then we know for sure that Trace(RH) is maximal.

Finding the rotation R This is easy! Compute the SVD of H: H = U Σ V^T. Define R = V U^T. Check RH: RH = V U^T U Σ V^T = V Σ V^T. This is a symmetric matrix whose eigenvalues are the singular values σi ≥ 0, so RH is positive semi-definite and Trace(RH) is maximal.
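
A quick numeric check of this argument (my addition): for a random H, the rotation R = V U^T built from its SVD makes RH symmetric with non-negative eigenvalues, and Trace(RH) is at least Trace(BH) for random orthogonal B:

```python
import numpy as np

rng = np.random.default_rng(5)
H = rng.standard_normal((3, 3))

U, s, Vt = np.linalg.svd(H)
R = Vt.T @ U.T                                   # R = V U^T

RH = R @ H                                       # equals V Sigma V^T
assert np.allclose(RH, RH.T)                     # symmetric
assert np.all(np.linalg.eigvalsh(RH) >= -1e-12)  # eigenvalues sigma_i >= 0

# Trace(RH) is maximal over orthogonal matrices: compare with random ones.
for _ in range(100):
    B, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
    assert np.trace(R @ H) >= np.trace(B @ H) - 1e-9
```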