
1 CS 485 / 685 Computer Vision – Lecture 8. Instructor: Mircea Nicolescu

2 Orthogonal/Orthonormal Vectors A set of vectors $x_1, x_2, \ldots, x_n$ is orthogonal if $x_i^T x_j = 0$ for all $i \neq j$. The set is orthonormal if, in addition, every vector has unit length: $x_k^T x_k = 1$ for all $k$ (i.e., $x_i^T x_j = \delta_{ij}$).
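A quick numerical check of these definitions — a minimal NumPy sketch (not part of the original slides; the vectors are illustrative):

```python
import numpy as np

# Rows of X are the vectors x_1, ..., x_n; inspect the Gram matrix X X^T.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

G = X @ X.T                                               # G[i, j] = x_i . x_j
orthogonal  = np.allclose(G - np.diag(np.diag(G)), 0.0)   # off-diagonals zero
orthonormal = np.allclose(G, np.eye(len(X)))              # Gram matrix = I
print(orthogonal, orthonormal)                            # True True
```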

3 Linear Combinations of Vectors A vector $v$ is a linear combination of the vectors $v_1, \ldots, v_k$ if $v = c_1 v_1 + c_2 v_2 + \cdots + c_k v_k$, where $c_1, \ldots, c_k$ are scalars. Example: any vector in $R^3$ can be expressed as a linear combination of the unit vectors $i = (1, 0, 0)$, $j = (0, 1, 0)$, and $k = (0, 0, 1)$.

4 Space Spanning A set of vectors $S = (v_1, v_2, \ldots, v_k)$ spans a space $W$ if every vector in $W$ can be written as a linear combination of the vectors in $S$. Example: the vectors $i$, $j$, and $k$ span $R^3$.

5 Linear Dependence A set of vectors $v_1, \ldots, v_k$ is linearly dependent if at least one of them is a linear combination of the others: $v_j = c_1 v_1 + \cdots + c_{j-1} v_{j-1} + c_{j+1} v_{j+1} + \cdots + c_k v_k$ (where $v_j$ does not appear on the right side).

6 Linear Independence A set of vectors $v_1, \ldots, v_k$ is linearly independent if $c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = 0$ implies $c_1 = c_2 = \cdots = c_k = 0$. Example: the only way to write the zero vector as a combination of $i$, $j$, and $k$ is with all coefficients equal to zero.

7 Vector Basis A set of vectors $(v_1, \ldots, v_k)$ is said to be a basis for a vector space $W$ if (1) $(v_1, \ldots, v_k)$ are linearly independent and (2) $(v_1, \ldots, v_k)$ span $W$. Standard bases: $R^2$: $(1,0), (0,1)$; $R^3$: $(1,0,0), (0,1,0), (0,0,1)$; $R^n$: the unit vectors $e_1, \ldots, e_n$.

8 Orthogonal Basis A basis whose vectors are mutually orthogonal: $o_i^T o_j = 0$ for $i \neq j$. Any set of basis vectors $(x_1, x_2, \ldots, x_n)$ can be transformed to an orthogonal basis $(o_1, o_2, \ldots, o_n)$ using Gram-Schmidt orthogonalization, as sketched below.
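A sketch of classical Gram-Schmidt in NumPy (illustrative, not from the slides; `gram_schmidt` is a hypothetical helper name):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn a list of linearly independent
    vectors into an orthogonal basis spanning the same space."""
    basis = []
    for v in vectors:
        o = v.astype(float)
        for b in basis:
            o = o - (b @ v) / (b @ b) * b   # subtract projection of v onto b
        basis.append(o)
    return basis

x1, x2 = np.array([1.0, 1.0]), np.array([1.0, 0.0])
o1, o2 = gram_schmidt([x1, x2])
print(o1 @ o2)   # ~0: the new basis vectors are orthogonal
```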

9 Orthonormal Basis A basis whose vectors are mutually orthogonal and of unit length: $o_i^T o_j = \delta_{ij}$.

10 Uniqueness of Vector Expansion Suppose $v_1, v_2, \ldots, v_n$ is a basis of $W$; then any $v \in W$ has a unique vector expansion in this basis: $v = x_1 v_1 + x_2 v_2 + \cdots + x_n v_n$. The vector expansion provides a meaning for writing a vector as a “column of numbers”. Note: to interpret $v$, we need to know what basis was used for the expansion!

11 Computing Vector Expansion (1) Assuming the basis vectors are orthogonal, to compute $x_i$, take the inner product of $v_i$ and $v$: $v_i^T v = x_i (v_i^T v_i)$, since all cross terms $v_i^T v_j$ with $i \neq j$ vanish. (2) The coefficients of the expansion can therefore be computed as $x_i = \dfrac{v_i^T v}{v_i^T v_i}$.
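The coefficient formula is easy to verify numerically; a small NumPy sketch with an illustrative orthogonal basis of $R^2$:

```python
import numpy as np

# x_i = (v_i . v) / (v_i . v_i), so that v = sum_i x_i v_i.
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])  # orthogonal basis of R^2
v = np.array([3.0, 1.0])

x1 = (v1 @ v) / (v1 @ v1)                   # 4/2 = 2
x2 = (v2 @ v) / (v2 @ v2)                   # 2/2 = 1
print(np.allclose(x1 * v1 + x2 * v2, v))    # True: 2*(1,1) + 1*(1,-1) = (3,1)
```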

12 Matrix Operations Matrix addition/subtraction: matrices must be of the same size. Matrix multiplication: an $m \times n$ matrix $A$ can multiply a $q \times p$ matrix $B$ only under the condition $n = q$; the result $AB$ is $m \times p$.
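A shape check in NumPy illustrating the $n = q$ condition (the example matrices are arbitrary):

```python
import numpy as np

A = np.ones((2, 3))   # m x n = 2 x 3
B = np.ones((3, 4))   # q x p = 3 x 4; n == q, so the product is defined
C = A @ B             # result is m x p = 2 x 4
print(C.shape)        # (2, 4)
# A @ A would raise ValueError: inner dimensions (3 and 2) do not match
```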

13 Identity Matrix The identity matrix $I$ has ones on the diagonal and zeros elsewhere; $AI = IA = A$ for any compatible $A$.

14 Matrix Transpose The transpose $A^T$ swaps rows and columns: $(A^T)_{ij} = A_{ji}$. Note that $(AB)^T = B^T A^T$.

15 Symmetric Matrices A matrix $A$ is symmetric if $A = A^T$. Example: $\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$.

16 Determinants $2 \times 2$: $\det A = a_{11}a_{22} - a_{12}a_{21}$. $3 \times 3$: expand along the first row, $\det A = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$. $m \times m$: defined recursively by cofactor expansion along any row or column.

17 Determinants For a diagonal matrix, the determinant is the product of the diagonal entries: $\det(\operatorname{diag}(d_1, \ldots, d_m)) = d_1 d_2 \cdots d_m$.
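A quick check with NumPy's `np.linalg.det` (the matrix is illustrative):

```python
import numpy as np

D = np.diag([2.0, 3.0, 4.0])
print(np.linalg.det(D))   # ~24.0: the product of the diagonal entries
```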

18 Matrix Inverse The inverse $A^{-1}$ of a (square) matrix $A$ has the property: $AA^{-1} = A^{-1}A = I$. $A^{-1}$ exists only if $\det(A) \neq 0$. Terminology:
−Singular matrix: $A^{-1}$ does not exist
−Ill-conditioned matrix: $A$ is close to being singular
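A minimal NumPy sketch of the invertibility test (the matrix and the $10^{-12}$ tolerance are illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
if abs(np.linalg.det(A)) > 1e-12:               # invertible only if det(A) != 0
    A_inv = np.linalg.inv(A)
    print(np.allclose(A @ A_inv, np.eye(2)))    # True: A A^{-1} = I
```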

19 Matrix Inverse Properties of the inverse: $(A^{-1})^{-1} = A$, $(AB)^{-1} = B^{-1}A^{-1}$, $(A^T)^{-1} = (A^{-1})^T$.

20 Pseudo-Inverse The pseudo-inverse $A^+$ of a matrix $A$ (could be non-square, e.g., $m \times n$ with full column rank) is given by: $A^+ = (A^T A)^{-1} A^T$. It can be shown that $A^+ A = I$ (the $n \times n$ identity).
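Verifying the formula against NumPy's built-in `np.linalg.pinv` (the matrix is an illustrative full-column-rank example):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                     # 3 x 2, full column rank

A_plus = np.linalg.inv(A.T @ A) @ A.T          # A+ = (A^T A)^{-1} A^T
print(np.allclose(A_plus @ A, np.eye(2)))      # True: A+ A = I (2 x 2)
print(np.allclose(A_plus, np.linalg.pinv(A)))  # matches NumPy's pinv
```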

21 Matrix Trace The trace $\operatorname{tr}(A)$ is the sum of the diagonal elements of $A$. Properties: $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$ and $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.

22 Rank of Matrix Equal to the dimension of the largest square sub-matrix of $A$ that has a non-zero determinant. Example: a $4 \times 4$ matrix whose largest sub-matrix with non-zero determinant is $3 \times 3$ has rank 3.

23 Rank of Matrix Alternative definition: the maximum number of linearly independent columns (or rows) of $A$. Example: if one row of a $4 \times 4$ matrix is a linear combination of the others, the rows are not all independent, and therefore the rank is not 4!
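Rank can be computed numerically with `np.linalg.matrix_rank` (illustrative matrix with one dependent row):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # row 2 = 2 * row 1 (linearly dependent)
              [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(A))   # 2: only two linearly independent rows
```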

24 Rank and Singular Matrices A square $n \times n$ matrix $A$ is singular (i.e., $\det(A) = 0$ and $A^{-1}$ does not exist) if and only if $\operatorname{rank}(A) < n$.

25 Orthogonal Matrices $A$ is orthogonal if its columns are mutually orthogonal vectors: $a_i^T a_j = 0$ for $i \neq j$, where $a_i$ denotes the $i$-th column of $A$.

26 Orthonormal Matrices $A$ is orthonormal if: $A^T A = A A^T = I$. Note that if $A$ is orthonormal, it is easy to find its inverse: $A^{-1} = A^T$. Property: $A$ preserves vector norms, $\|Ax\| = \|x\|$.
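A rotation matrix is a classic orthonormal matrix; a quick NumPy verification (the angle is illustrative):

```python
import numpy as np

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],   # 2D rotation matrix
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A.T @ A, np.eye(2)))           # True: A^T A = I
print(np.allclose(np.linalg.inv(A), A.T))        # True: A^{-1} = A^T
```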

27 Eigenvalues and Eigenvectors The vector $v$ is an eigenvector of (square) matrix $A$ and $\lambda$ is an eigenvalue of $A$ if: $Av = \lambda v$ (assume non-zero $v$). Interpretation: the linear transformation implied by $A$ cannot change the direction of an eigenvector $v$, only its magnitude (by the factor $\lambda$).

28 Computing λ and v To find the eigenvalues $\lambda$ of a matrix $A$, find the roots of the characteristic polynomial: $\det(A - \lambda I) = 0$. Then, for each $\lambda$, solve $(A - \lambda I)v = 0$ for the eigenvector $v$. Example: see the numerical sketch below.
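In practice, eigenvalues and eigenvectors are computed numerically, e.g., with `np.linalg.eig`; a small sketch (the matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)                # roots of det(A - lambda I) = 0
print(lam)                               # eigenvalues 3 and 1 (order may vary)
v = V[:, 0]                              # columns of V are the eigenvectors
print(np.allclose(A @ v, lam[0] * v))    # True: A v = lambda v
```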

29 Properties Eigenvalues and eigenvectors are only defined for square matrices (i.e., $m = n$). Eigenvectors are not unique (e.g., if $v$ is an eigenvector, so is $kv$). Suppose $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$; then $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$ and $\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.

30 Properties $A$ is positive definite if $x^T A x > 0$ for every $x \neq 0$ (equivalently, for symmetric $A$, all eigenvalues are positive).

31 Matrix Diagonalization Given $A$, find $P$ such that $P^{-1}AP$ is diagonal (i.e., $P$ diagonalizes $A$). Take $P = [v_1\ v_2\ \ldots\ v_n]$, where $v_1, v_2, \ldots, v_n$ are the eigenvectors of $A$; then $P^{-1}AP = D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$.

32 Matrix Diagonalization Example: see the numerical sketch below.
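A numerical sketch of diagonalization (the matrix is an illustrative example with distinct eigenvalues 5 and 2, not the one from the original slide):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, P = np.linalg.eig(A)        # P's columns are the eigenvectors of A
D = np.linalg.inv(P) @ A @ P     # P^{-1} A P
print(np.round(D, 10))           # diagonal, with the eigenvalues 5 and 2 on it
```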

33 Are All n × n Matrices Diagonalizable? Only if $P^{-1}$ exists, i.e., $A$ must have $n$ linearly independent eigenvectors (so that $\operatorname{rank}(P) = n$). If $A$ has $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then the corresponding eigenvectors $v_1, v_2, \ldots, v_n$ form a basis: (1) linearly independent, (2) span $R^n$.

34 Diagonalization → Decomposition Let us assume that $A$ is diagonalizable; then $P^{-1}AP = D$ can be rewritten as the decomposition $A = PDP^{-1}$.

35 Decomposition: Symmetric Matrices If $A$ is symmetric, then $P^{-1} = P^T$, so $A = PDP^T$. The eigenvalues of symmetric matrices are all real. The eigenvectors corresponding to distinct eigenvalues are orthogonal.

36 Singular Value Decomposition (SVD) Any real $m \times n$ matrix $A$ can be decomposed as $A = UDV^T$, where: $U$ is $m \times n$ and column orthonormal ($U^TU = I$); $D = \operatorname{diag}(\sigma_1, \ldots, \sigma_n)$ is $n \times n$ and diagonal
−$\sigma_i$ are called the singular values of $A$; they are uniquely determined by $A$
−It is assumed that $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n \geq 0$
$V$ is $n \times n$ and orthonormal ($VV^T = V^TV = I$)
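A minimal NumPy sketch of the decomposition (illustrative matrix; note that `np.linalg.svd` returns $V^T$ rather than $V$):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])                        # m x n = 3 x 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # "economy" SVD: U is 3 x 2
print(s)                                          # singular values, descending
print(np.allclose(U @ np.diag(s) @ Vt, A))        # True: A = U D V^T
print(np.allclose(U.T @ U, np.eye(2)))            # True: U column orthonormal
```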

37 SVD If $m = n$, then: $U$ is $n \times n$ and orthonormal ($U^TU = UU^T = I$); $D$ is $n \times n$ and diagonal; $V$ is $n \times n$ and orthonormal ($VV^T = V^TV = I$).

38 SVD The columns of $U$ are eigenvectors of $AA^T$. The columns of $V$ are eigenvectors of $A^TA$. If $\lambda_i$ is an eigenvalue of $A^TA$ (or $AA^T$), then $\lambda_i = \sigma_i^2$.
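These relationships are easy to verify numerically (reusing the illustrative matrix from the sketch above):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

lam = np.linalg.eigvalsh(A.T @ A)[::-1]   # eigenvalues of A^T A, descending
print(np.allclose(lam, s**2))             # True: lambda_i = sigma_i^2
```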

39 SVD – Example A numerical decomposition $A = UDV^T$, with $U = (u_1\ u_2\ \ldots\ u_n)$ and $V = (v_1\ v_2\ \ldots\ v_n)$.

40 SVD – Another Example A worked example computing the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ and the eigenvectors of $AA^T$ and $A^TA$ for a given matrix $A$.

41 SVD Properties A square ($n \times n$) matrix $A$ is singular iff at least one of its singular values $\sigma_1, \ldots, \sigma_n$ is zero. The rank of matrix $A$ is equal to the number of nonzero singular values $\sigma_i$.

42 Matrix “Condition” SVD gives a way of determining how close to singular $A$ is. The condition number of $A$ measures its degree of singularity: $\operatorname{cond}(A) = \sigma_1 / \sigma_n$ (the ratio of the largest singular value to the smallest). Matrices with a large condition number are called ill-conditioned.
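Computing the condition number from the singular values (the nearly-singular matrix is illustrative):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1e-8]])                 # nearly singular
s = np.linalg.svd(A, compute_uv=False)     # singular values only
print(s[0] / s[-1])                        # cond(A) = sigma_1 / sigma_n = 1e8
print(np.linalg.cond(A))                   # same value via NumPy's helper
```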

43 Computing $A^{-1}$ Using SVD If $A$ is an $n \times n$ nonsingular matrix, then its inverse can be computed as: $A^{-1} = (UDV^T)^{-1} = VD^{-1}U^T$, where $D^{-1} = \operatorname{diag}(1/\sigma_1, \ldots, 1/\sigma_n)$ is easy to compute! ($U^TU = UU^T = I$ so $U^{-1} = U^T$, and $V^TV = VV^T = I$ so $V^{-1} = V^T$.)

44 Computing $A^{-1}$ Using SVD If $A$ is singular (or ill-conditioned), we can use SVD to approximate its inverse as follows: $A^{-1} \approx VD_0^{-1}U^T$, where the $i$-th diagonal entry of $D_0^{-1}$ is $1/\sigma_i$ if $\sigma_i > t$ and $0$ otherwise ($t$ is a small threshold).
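A sketch of this truncated-SVD inverse (the helper name `svd_inverse`, the test matrix, and the threshold value are illustrative):

```python
import numpy as np

def svd_inverse(A, t=1e-10):
    """Approximate inverse via SVD: invert only the singular values
    above the threshold t, zeroing out the rest."""
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    s_inv[s > t] = 1.0 / s[s > t]           # 1/sigma_i if sigma_i > t, else 0
    return Vt.T @ np.diag(s_inv) @ U.T      # V D_0^{-1} U^T

A = np.array([[1.0, 0.0],
              [0.0, 1e-15]])                # ill-conditioned
print(svd_inverse(A))                       # [[1, 0], [0, 0]]: tiny sigma dropped
```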

