
1 Eigen Decomposition Based on the slides by Mani Thomas
Modified and extended by Longin Jan Latecki

2 Introduction Eigenvalue decomposition
Physical interpretation of eigenvalues/eigenvectors

3 What are eigenvalues?
Given a matrix A, x is an eigenvector and λ is the corresponding eigenvalue if Ax = λx. A must be square, and the determinant of A − λI must be equal to zero: Ax − λx = 0 iff (A − λI)x = 0. The trivial solution is x = 0; the non-trivial solution occurs when det(A − λI) = 0.
Are eigenvectors unique? If x is an eigenvector with eigenvalue λ, then any nonzero scalar multiple cx is also an eigenvector with the same eigenvalue: A(cx) = c(Ax) = c(λx) = λ(cx)
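As a concrete check, here is a minimal NumPy sketch of both facts; the matrix is my own illustration, not one from the slides:

```python
import numpy as np

# An arbitrary symmetric 2 x 2 matrix, chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
lams, vecs = np.linalg.eig(A)
x, lam = vecs[:, 0], lams[0]

# Defining property: A x = lambda x.
assert np.allclose(A @ x, lam * x)

# Eigenvectors are not unique: any nonzero scalar multiple cx is also
# an eigenvector for the same eigenvalue, A(cx) = c(Ax) = c(lam x) = lam(cx).
c = 3.7
assert np.allclose(A @ (c * x), lam * (c * x))
```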

4 Calculating the Eigenvalues/Eigenvectors
Expand det(A − λI) = 0. For a 2 x 2 matrix this is a simple quadratic equation with two solutions (possibly complex). This "characteristic equation" is solved for the eigenvalues λ; each eigenvector x is then found by solving (A − λI)x = 0 for that λ.
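For a 2 x 2 matrix the characteristic equation works out to λ² − tr(A)·λ + det(A) = 0, so its two roots can be found with any polynomial solver. A small NumPy sketch (the matrix is my own example, chosen so the roots are real):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# det(A - lambda I) = lambda^2 - trace(A)*lambda + det(A) for a 2 x 2 matrix.
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print(np.sort(np.roots(coeffs)))       # [2. 5.], roots of the characteristic equation
print(np.sort(np.linalg.eigvals(A)))   # [2. 5.], agrees with the library routine
```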

5 Eigenvalue example
Consider A = [1 2; 2 4]. Then det(A − λI) = (1 − λ)(4 − λ) − 4 = λ² − 5λ = λ(λ − 5), so the eigenvalues are λ = 0 and λ = 5.
The corresponding eigenvectors are computed by solving (A − λI)x = 0: for λ = 0, one possible solution is x = (2, −1); for λ = 5, one possible solution is x = (1, 2).
For more information: Demos in Linear Algebra by G. Strang.
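A quick numerical check of this example (note that numpy.linalg.eig normalizes eigenvectors to unit length, so the columns it returns are scalar multiples of (2, −1) and (1, 2)):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

lams, vecs = np.linalg.eig(A)
print(lams)   # 0 and 5 (not necessarily in that order)

for lam, v in zip(lams, vecs.T):
    assert np.allclose(A @ v, lam * v)   # each pair satisfies Av = lam v
    print(lam, v)
```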

6 Let σ(A) be the set of all eigenvalues of A.
Then σ(A) = σ(A^T), where A^T is the transpose of A.
Proof: The matrix (A − λI)^T is the same as the matrix (A^T − λI), since the identity matrix is symmetric. Thus det(A^T − λI) = det((A − λI)^T) = det(A − λI), where the last equality holds because a matrix and its transpose have the same determinant. Hence A and A^T have the same characteristic polynomial, and therefore the same eigenvalues.
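The claim is easy to verify numerically; a minimal sketch with a random (generally non-symmetric) matrix, whose spectrum may be complex:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # random non-symmetric matrix

# Sort the (possibly complex) eigenvalues so the two spectra can be compared.
lams_A  = np.sort_complex(np.linalg.eigvals(A))
lams_AT = np.sort_complex(np.linalg.eigvals(A.T))
assert np.allclose(lams_A, lams_AT)   # sigma(A) == sigma(A^T)
```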

7 Physical interpretation
Consider a covariance matrix A, i.e., A = (1/n) S S^T for some data matrix S. The error ellipse of the data has its major axis along the eigenvector with the larger eigenvalue and its minor axis along the eigenvector with the smaller eigenvalue; the eigenvalues give the variances along those axes.
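A sketch of this with synthetic data (the data and its correlation are my own choice): build A = (1/n) S S^T from centered data and read off the ellipse axes from its eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((2, 500))      # 2 variables, 500 observations
S[1] += 0.8 * S[0]                     # correlate the two variables
S -= S.mean(axis=1, keepdims=True)     # center, so A below is a covariance

A = (S @ S.T) / S.shape[1]             # A = 1/n S S^T, as on the slide

lams, vecs = np.linalg.eigh(A)         # eigh: A is symmetric
# vecs[:, 1] (larger eigenvalue) is the major-axis direction of the error
# ellipse; vecs[:, 0] is the minor axis. The eigenvalues lams are the
# variances along those axes.
print(lams)
print(vecs)
```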

8 Physical interpretation
[Figure: data plotted against Original Variable A and Original Variable B, with the orthogonal directions PC 1 and PC 2 overlaid.]
PC 1 and PC 2 are the orthogonal directions of greatest variance in the data. Projections along PC 1 (the first Principal Component) discriminate the data most along any one axis, as the sketch below demonstrates.
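A small check of that claim, assuming the same kind of synthetic data as above: the variance of the projection onto PC 1 is at least the variance of the projection onto any other unit direction.

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((2, 1000))
S[1] += 0.8 * S[0]
S -= S.mean(axis=1, keepdims=True)

A = (S @ S.T) / S.shape[1]
lams, vecs = np.linalg.eigh(A)
pc1 = vecs[:, -1]                      # eigenvector of the largest eigenvalue

var_pc1 = np.var(pc1 @ S)              # variance of projections onto PC 1
for _ in range(1000):                  # compare against random unit directions
    d = rng.standard_normal(2)
    d /= np.linalg.norm(d)
    assert np.var(d @ S) <= var_pc1 + 1e-12
```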

9 Physical interpretation
The first principal component is the direction of greatest variability (covariance) in the data. The second is the next orthogonal (uncorrelated) direction of greatest variability: first remove all the variability along the first component, then find the next direction of greatest variability, and so on. Thus the eigenvectors provide the directions of data variance, in decreasing order of their eigenvalues. For more information: see Gram-Schmidt Orthogonalization in G. Strang's lectures.
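This "remove the variability, then look again" step can be sketched directly as a toy deflation, again with synthetic data of my own choosing (in practice numpy.linalg.eigh returns all components at once):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 1000))     # 3 variables, 1000 observations
S[1] += 0.9 * S[0]                     # build in some correlation
S -= S.mean(axis=1, keepdims=True)     # center the data

def leading_direction(S):
    """Unit direction of greatest variance of the centered data S."""
    A = (S @ S.T) / S.shape[1]
    lams, vecs = np.linalg.eigh(A)
    return vecs[:, -1]                 # eigenvector of the largest eigenvalue

pc1 = leading_direction(S)
S_deflated = S - np.outer(pc1, pc1 @ S)   # remove variability along PC 1
pc2 = leading_direction(S_deflated)       # next direction of variability

print(pc1 @ pc2)   # ~0: the components are orthogonal
```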

