Instructor: Mircea Nicolescu Lecture 8 CS 485 / 685 Computer Vision

Orthogonal/Orthonormal Vectors A set of vectors $x_1, x_2, \ldots, x_n$ is orthogonal if $x_i^T x_j = 0$ for all $i \neq j$. A set of vectors $x_1, x_2, \ldots, x_n$ is orthonormal if, in addition, each vector has unit length: $x_i^T x_j = \delta_{ij}$ (0 for $i \neq j$, 1 for $i = j$).
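
As a quick numerical check of these definitions, here is a minimal NumPy sketch (illustrative code, not part of the original slides):

```python
import numpy as np

# Columns of X are the vectors x_1, x_2 (here: an orthonormal pair).
X = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

G = X.T @ X  # Gram matrix: G[i, j] = x_i^T x_j
print(np.allclose(G - np.diag(np.diag(G)), 0.0))  # orthogonal: off-diagonal zero
print(np.allclose(G, np.eye(2)))                  # orthonormal: G = I
```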

Linear Combinations of Vectors A vector $v$ is a linear combination of the vectors $v_1, \ldots, v_k$ if $v = c_1 v_1 + c_2 v_2 + \cdots + c_k v_k$, where $c_1, \ldots, c_k$ are scalars. Example: any vector in $R^3$ can be expressed as a linear combination of the unit vectors $i = (1, 0, 0)$, $j = (0, 1, 0)$, and $k = (0, 0, 1)$.

Space Spanning A set of vectors $S = (v_1, v_2, \ldots, v_k)$ spans a space $W$ if every vector in $W$ can be written as a linear combination of the vectors in $S$. Example: the vectors $i$, $j$, and $k$ span $R^3$.

Linear Dependence A set of vectors $v_1, \ldots, v_k$ is linearly dependent if at least one of them is a linear combination of the others: $v_j = c_1 v_1 + \cdots + c_{j-1} v_{j-1} + c_{j+1} v_{j+1} + \cdots + c_k v_k$ (where $v_j$ does not appear on the right side).

Linear Independence A set of vectors $v_1, \ldots, v_k$ is linearly independent if $c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = 0$ implies $c_1 = c_2 = \cdots = c_k = 0$. Example: $c_1 (1, 0) + c_2 (0, 1) = (c_1, c_2) = (0, 0)$ implies $c_1 = c_2 = 0$, so $(1, 0)$ and $(0, 1)$ are linearly independent.
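
A numerical way to test independence, sketched in NumPy (illustrative): for a square matrix whose columns are the vectors, independence is equivalent to a non-zero determinant.

```python
import numpy as np

# Columns of V are the vectors v_1, v_2, v_3.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 has only the solution c = 0
# iff det(V) != 0 (for square V).
print(np.linalg.det(V) != 0.0)  # True: linearly independent
```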

Vector Basis A set of vectors $(v_1, \ldots, v_k)$ is said to be a basis for a vector space $W$ if (1) $(v_1, \ldots, v_k)$ are linearly independent and (2) $(v_1, \ldots, v_k)$ span $W$. Standard bases: $R^2$: $(1, 0), (0, 1)$; $R^3$: $(1, 0, 0), (0, 1, 0), (0, 0, 1)$; $R^n$: $e_1, \ldots, e_n$, where $e_i$ has a 1 in position $i$ and 0 elsewhere.

Orthogonal Basis A basis whose basis vectors are mutually orthogonal: $o_i^T o_j = 0$ for $i \neq j$. Any set of basis vectors $(x_1, x_2, \ldots, x_n)$ can be transformed to an orthogonal basis $(o_1, o_2, \ldots, o_n)$ using Gram-Schmidt orthogonalization.
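
A minimal sketch of classical Gram-Schmidt in NumPy (the function name and code are illustrative, assuming linearly independent input columns):

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonalize the columns of X (assumed linearly independent)."""
    X = np.asarray(X, dtype=float)
    O = np.zeros_like(X)
    for i in range(X.shape[1]):
        o = X[:, i].copy()
        for j in range(i):
            # Subtract the projection of x_i onto the already-computed o_j.
            o -= (O[:, j] @ X[:, i]) / (O[:, j] @ O[:, j]) * O[:, j]
        O[:, i] = o
    return O

X = np.array([[1.0, 1.0],
              [0.0, 1.0]])
O = gram_schmidt(X)
print(np.round(O.T @ O, 10))  # diagonal: the columns are mutually orthogonal
```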

Orthonormal Basis A basis whose basis vectors are mutually orthogonal and of unit length: $o_i^T o_j = \delta_{ij}$. Any orthogonal basis becomes orthonormal by dividing each basis vector by its norm.

Uniqueness of Vector Expansion Suppose $v_1, v_2, \ldots, v_n$ is a basis for $W$; then any $v \in W$ has a unique vector expansion in this basis: $v = x_1 v_1 + x_2 v_2 + \cdots + x_n v_n$. The vector expansion provides a meaning for writing a vector as a “column of numbers”. Note: to interpret $v$, we need to know what basis was used for the expansion!

Computing Vector Expansion (1) Assuming the basis vectors are orthogonal, to compute $x_i$, take the inner product of $v_i$ and $v$: $v_i^T v = x_1 v_i^T v_1 + \cdots + x_n v_i^T v_n = x_i v_i^T v_i$, since all other terms vanish by orthogonality. (2) The coefficients of the expansion can therefore be computed as $x_i = \dfrac{v_i^T v}{v_i^T v_i}$.
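
A short NumPy check of this formula (illustrative, assuming an orthogonal basis):

```python
import numpy as np

# Columns of V form an orthogonal (not orthonormal) basis of R^2.
V = np.array([[1.0, 2.0],
              [2.0, -1.0]])
v = np.array([3.0, 1.0])

# x_i = (v_i^T v) / (v_i^T v_i) for each basis vector v_i.
x = (V.T @ v) / np.sum(V * V, axis=0)
print(np.allclose(V @ x, v))  # True: the expansion reconstructs v
```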

Matrix Operations Matrix addition/subtraction −Matrices must be of same size. Matrix multiplication −The product of an $m \times n$ matrix and a $q \times p$ matrix is defined only if $n = q$; the result is $m \times p$.

Identity Matrix The $n \times n$ identity matrix $I$ has ones on the diagonal and zeros elsewhere; $AI = IA = A$.

Matrix Transpose The transpose $A^T$ of an $m \times n$ matrix $A$ is the $n \times m$ matrix with entries $(A^T)_{ij} = A_{ji}$; note that $(AB)^T = B^T A^T$.

Symmetric Matrices A square matrix $A$ is symmetric if $A = A^T$. Example: $\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$.

Determinants $2 \times 2$: $\det\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11}a_{22} - a_{12}a_{21}$. $3 \times 3$: expand along the first row, $\det(A) = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}$, where the minor $M_{ij}$ is the determinant of the sub-matrix obtained by deleting row $i$ and column $j$ of $A$. $m \times m$: cofactor expansion, $\det(A) = \sum_{j=1}^{m} (-1)^{1+j} a_{1j} M_{1j}$.

Determinants For a diagonal matrix, the determinant is the product of the diagonal entries: $\det(\operatorname{diag}(d_1, d_2, \ldots, d_m)) = d_1 d_2 \cdots d_m$.

Matrix Inverse The inverse $A^{-1}$ of a (square) matrix $A$ has the property: $AA^{-1} = A^{-1}A = I$. $A^{-1}$ exists only if $\det(A) \neq 0$. Terminology −Singular matrix: $A^{-1}$ does not exist −Ill-conditioned matrix: $A$ is close to being singular

Matrix Inverse Properties of the inverse: $(A^{-1})^{-1} = A$, $(AB)^{-1} = B^{-1}A^{-1}$, $(A^T)^{-1} = (A^{-1})^T$.

Pseudo-Inverse The pseudo-inverse $A^+$ of a matrix $A$ (could be non-square, e.g., $m \times n$) is given by: $A^+ = (A^T A)^{-1} A^T$ (when $A^T A$ is invertible). It can be shown that: $A^+ A = I$.
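
A quick NumPy illustration of the formula (assuming $A^T A$ is invertible; not part of the original slides):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # 3 x 2, full column rank

A_plus = np.linalg.inv(A.T @ A) @ A.T          # (A^T A)^{-1} A^T
print(np.allclose(A_plus, np.linalg.pinv(A)))  # True: matches NumPy's pinv
print(np.allclose(A_plus @ A, np.eye(2)))      # True: A+ A = I
```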

Matrix Trace The trace of a square matrix is the sum of its diagonal elements: $\operatorname{tr}(A) = \sum_i a_{ii}$. Properties: $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$, $\operatorname{tr}(AB) = \operatorname{tr}(BA)$, $\operatorname{tr}(A) = \operatorname{tr}(A^T)$.

Rank of Matrix Equal to the dimension of the largest square sub-matrix of $A$ that has a non-zero determinant. Example: a $4 \times 4$ matrix with $\det(A) = 0$ that contains a $3 \times 3$ sub-matrix with non-zero determinant has rank 3.

Rank of Matrix Alternative definition: the maximum number of linearly independent columns (or rows) of $A$. Example: if one row of a $4 \times 4$ matrix is a linear combination of the other rows, then the rank is not 4!
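
For instance, a rank computation in NumPy (illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])  # row 3 = row 1 + row 2

print(np.linalg.matrix_rank(A))  # 2: only two rows are linearly independent
```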

Rank and Singular Matrices An $n \times n$ matrix $A$ is singular if and only if $\operatorname{rank}(A) < n$ (equivalently, $\det(A) = 0$).

Orthogonal Matrices $A$ is orthogonal if its column vectors are mutually orthogonal: $a_i^T a_j = 0$ for $i \neq j$. Notation: $A = (a_1\ a_2\ \ldots\ a_n)$, where $a_i$ denotes the $i$-th column of $A$.

Orthonormal Matrices $A$ is orthonormal if: $A^T A = A A^T = I$ (its columns are mutually orthogonal unit vectors). Note that if $A$ is orthonormal, it is easy to find its inverse: $A^{-1} = A^T$. Property: $\|Ax\| = \|x\|$ (multiplication by an orthonormal matrix preserves vector length).

Eigenvalues and Eigenvectors The vector $v$ is an eigenvector of (square) matrix $A$ and $\lambda$ is an eigenvalue of $A$ if: $Av = \lambda v$ (assume non-zero $v$). Interpretation: the linear transformation implied by $A$ cannot change the direction of the eigenvectors $v$, only their magnitude.

Computing λ and v To find the eigenvalues $\lambda$ of a matrix $A$, find the roots of the characteristic polynomial: $\det(A - \lambda I) = 0$; then solve $(A - \lambda I)v = 0$ for the corresponding eigenvectors. Example: for $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, the characteristic polynomial is $(2 - \lambda)^2 - 1 = 0$, giving $\lambda_1 = 3$ with $v_1 = (1, 1)^T$ and $\lambda_2 = 1$ with $v_2 = (1, -1)^T$.
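
The same computation in NumPy (illustrative; np.linalg.eig returns unit-norm eigenvectors as columns):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues are the roots of det(A - lambda*I) = 0.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [3. 1.]
print(np.allclose(A @ eigenvectors, eigenvectors * eigenvalues))  # A v = lambda v
```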

Properties Eigenvalues and eigenvectors are only defined for square matrices (i.e., $m = n$). Eigenvectors are not unique (e.g., if $v$ is an eigenvector, so is $kv$ for any $k \neq 0$). Suppose $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$; then: $\det(A) = \prod_{i=1}^{n} \lambda_i$ and $\operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i$.

Properties $A$ is positive definite if $x^T A x > 0$ for every $x \neq 0$; in that case, all eigenvalues of $A$ are positive.

Matrix Diagonalization Given $A$, find $P$ such that $P^{-1}AP$ is diagonal (i.e., $P$ diagonalizes $A$). Take $P = [v_1\ v_2\ \ldots\ v_n]$, where $v_1, v_2, \ldots, v_n$ are the eigenvectors of $A$: then $AP = [Av_1\ \ldots\ Av_n] = [\lambda_1 v_1\ \ldots\ \lambda_n v_n] = PD$, so $P^{-1}AP = D = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$.

Matrix Diagonalization Example: for $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, take $P = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ (its columns are the eigenvectors of $A$); then $P^{-1}AP = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}$.
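
Verifying the example numerically (illustrative sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])  # columns are the eigenvectors of A

D = np.linalg.inv(P) @ A @ P  # P^{-1} A P
print(np.round(D, 10))        # diag(3, 1)
```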

Are All n × n Matrices Diagonalizable? Only if $P^{-1}$ exists (i.e., $A$ must have $n$ linearly independent eigenvectors, that is, $\operatorname{rank}(P) = n$). If $A$ has $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then the corresponding eigenvectors $v_1, v_2, \ldots, v_n$ form a basis: (1) they are linearly independent, (2) they span $R^n$.

Diagonalization → Decomposition Let us assume that $A$ is diagonalizable; then $P^{-1}AP = D$, so $A = PDP^{-1}$.

Decomposition: Symmetric Matrices If $A$ is symmetric, then $A = PDP^T$, since $P$ can be chosen orthonormal, i.e., $P^{-1} = P^T$. The eigenvalues of symmetric matrices are all real. The eigenvectors corresponding to distinct eigenvalues are orthogonal.
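
A NumPy sketch of this decomposition (np.linalg.eigh is the routine for symmetric matrices; illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # symmetric

eigenvalues, P = np.linalg.eigh(A)      # columns of P are orthonormal eigenvectors
D = np.diag(eigenvalues)
print(np.allclose(A, P @ D @ P.T))      # True: A = P D P^T
print(np.allclose(P.T @ P, np.eye(2)))  # True: P^{-1} = P^T
```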

Singular Value Decomposition (SVD) Any real $m \times n$ matrix $A$ can be decomposed as $A = UDV^T$ (the singular values are uniquely determined): −$U$ is $m \times n$ and column-orthonormal ($U^TU = I$) −$D$ is $n \times n$ and diagonal, $D = \operatorname{diag}(\sigma_1, \ldots, \sigma_n)$; the $\sigma_i$ are called singular values of $A$, and it is assumed that $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0$ −$V$ is $n \times n$ and orthonormal ($VV^T = V^TV = I$)
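
In NumPy (illustrative; np.linalg.svd returns $V^T$ directly and the singular values in descending order):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])  # m x n with m = 3, n = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # "economy" SVD: U is m x n
print(np.allclose(A, U @ np.diag(s) @ Vt))        # True: A = U D V^T
print(np.allclose(U.T @ U, np.eye(2)))            # True: U is column-orthonormal
```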

SVD If $m = n$, then: $U$ is $n \times n$ and orthonormal ($U^TU = UU^T = I$), $D$ is $n \times n$ and diagonal, and $V$ is $n \times n$ and orthonormal ($VV^T = V^TV = I$).

SVD The columns of $U$ are the eigenvectors of $AA^T$. The columns of $V$ are the eigenvectors of $A^TA$. If $\lambda_i$ is an eigenvalue of $A^TA$ (or $AA^T$), then $\lambda_i = \sigma_i^2$.
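
A quick numerical check of this relationship (illustrative):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
lam = np.linalg.eigvalsh(A.T @ A)       # eigenvalues of A^T A, ascending

print(np.allclose(np.sort(s**2), lam))  # True: lambda_i = sigma_i^2
```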

SVD – Example $A = UDV^T$, where $U = (u_1\ u_2\ \ldots\ u_n)$, $D = \operatorname{diag}(\sigma_1, \ldots, \sigma_n)$, and $V = (v_1\ v_2\ \ldots\ v_n)$.

SVD – Another Example Computing the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of $AA^T$ and $A^TA$, and the corresponding eigenvectors, yields the SVD factors $U$, $D$ (with $\sigma_i = \sqrt{\lambda_i}$), and $V$.

SVD Properties A square ($n \times n$) matrix $A$ is singular iff at least one of its singular values $\sigma_1, \ldots, \sigma_n$ is zero. The rank of matrix $A$ is equal to the number of non-zero singular values $\sigma_i$.

Matrix “Condition” SVD gives a way of determining how close to singular $A$ is. The condition of $A$ measures the degree of singularity of $A$: $\operatorname{cond}(A) = \sigma_1 / \sigma_n$ (the ratio of its largest singular value to its smallest singular value). Matrices with a large condition number are called ill-conditioned.
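
In NumPy (illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])  # nearly singular

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
print(s[0] / s[-1])                     # cond(A) = sigma_1 / sigma_n: very large
print(np.linalg.cond(A))                # the same value, computed directly
```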

Computing $A^{-1}$ Using SVD If $A$ is an $n \times n$ nonsingular matrix, then its inverse can be computed as follows: $A^{-1} = (UDV^T)^{-1} = VD^{-1}U^T$, where $D^{-1} = \operatorname{diag}(1/\sigma_1, \ldots, 1/\sigma_n)$ is easy to compute! ($U^TU = UU^T = I$ so $U^T = U^{-1}$, and $V^TV = VV^T = I$ so $V^T = V^{-1}$)

Computing $A^{-1}$ Using SVD If $A$ is singular (or ill-conditioned), we can use SVD to approximate its inverse as follows: $A^{-1} \approx VD_0^{-1}U^T$, where the $i$-th diagonal entry of $D_0^{-1}$ is $1/\sigma_i$ if $\sigma_i > t$ and 0 otherwise ($t$ is a small threshold).
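
A sketch of this truncated inverse (the threshold $t$, function name, and test matrix are illustrative):

```python
import numpy as np

def svd_inverse(A, t=1e-10):
    """Approximate the inverse of A, zeroing reciprocals of tiny singular values."""
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    s_inv[s > t] = 1.0 / s[s > t]   # keep 1/sigma_i only where sigma_i > t
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # singular (rank 1)
A_inv = svd_inverse(A)
print(np.allclose(A @ A_inv @ A, A))  # True: behaves like the pseudo-inverse
```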