Eigen-analysis and the Power Method


Module Goals: the Power Method, the shift technique (optional), the Inverse Power Method, and the Accelerated Power Method.

Power method The special advantage of the power method is that the eigenvector corresponding to the dominant eigenvalue is generated at the same time as the eigenvalue itself. The inverse power method likewise solves for the minimal eigenvalue/eigenvector pair. The disadvantage is that the method supplies only one eigenvalue at a time.

Power Method, Reader's Digest Version Eigenvalues can be ordered in magnitude, and the one of largest magnitude is called the dominant eigenvalue (its magnitude is the spectral radius). Think about how the eigenvalues reflect the nature of a matrix. Now if we multiply by that matrix over and over again, eventually the biggest eigenvalue will make everyone else have eigen-envy. One λ to rule them all, One λ to find them, One λ to bring them all and in the darkness bind them.

Power Method In general, continue the multiplication: $x_k = A x_{k-1} = A^k x_0$, where the starting vector is expanded in the eigenvector basis, $x_0 = c_1 v_1 + c_2 v_2 + \dots + c_n v_n$, with the eigenvalues ordered $|\lambda_1| > |\lambda_2| \ge \dots \ge |\lambda_n|$, so that $A^k x_0 = c_1 \lambda_1^k v_1 + c_2 \lambda_2^k v_2 + \dots + c_n \lambda_n^k v_n$.

Power Method Factor out the large $\lambda$ term: $A^k x_0 = \lambda_1^k \left[ c_1 v_1 + \sum_{i=2}^{n} c_i (\lambda_i/\lambda_1)^k v_i \right]$. As you continue to multiply the vector by $[A]$, each ratio satisfies $|\lambda_i/\lambda_1| < 1$, so the trailing terms die out and the iterate lines up with the dominant eigenvector $v_1$.
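The matrices on these slides appear only as images and are not reproduced in this transcript. As a stand-in, here is a minimal NumPy sketch of the repeated multiplication just described, using a hypothetical 3x3 matrix with eigenvalues 4, 2, 1 (so its dominant eigenvector is {1 0 0}^T):

```python
import numpy as np

# Hypothetical matrix (not the slides'): upper triangular with
# eigenvalues 4, 2, 1, so the dominant eigenvector is e1 = [1, 0, 0]^T.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])

x = np.array([1.0, 1.0, 1.0])    # arbitrary starting vector
for k in range(20):
    x = A @ x                    # repeated multiplication by [A]
    x = x / np.linalg.norm(x)    # normalize so the entries stay bounded

print(x)  # ~[1, 0, 0]: the terms with |lambda_i/lambda_1| < 1 have died out
```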

Power Method The basic computation of the power method is summarized as $y_k = x_{k-1}/\|x_{k-1}\|$, $x_k = A y_k$, repeated until the direction of $x_k$ stops changing.

Power Method The eigenvalue estimate at each step can be written as $\mu_k = \dfrac{y_k^T x_k}{y_k^T y_k}$, which converges to the dominant eigenvalue $\lambda_1$.

The Power Method Algorithm (Algorithm 3.3.1, pg. 107)
y = nonzero random vector
x = A*y                      (initialize)
for k = 1, 2, ..., n:
    y = x / ||x||            (y is the approximate eigenvector)
    x = A*y
    μ = (y^T x) / (y^T y)    (approximate eigenvalue)
    r = μ*y - x              (residual; stop when ||r|| is small)
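A minimal Python/NumPy sketch of Algorithm 3.3.1 as transcribed above; the function name, tolerance, and iteration cap are this transcript's choices, not the textbook's:

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=500):
    """Power method (Algorithm 3.3.1): dominant eigenvalue/eigenvector of A."""
    rng = np.random.default_rng(0)
    y = rng.standard_normal(A.shape[0])  # nonzero random starting vector
    x = A @ y                            # initialize x = A*y
    mu = 0.0
    for _ in range(max_iter):
        y = x / np.linalg.norm(x)        # y = x/||x||, approximate eigenvector
        x = A @ y                        # x = A*y
        mu = (y @ x) / (y @ y)           # approximate eigenvalue
        r = mu * y - x                   # residual
        if np.linalg.norm(r) < tol:
            break
    return mu, y
```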

Example of Power Method Consider the matrix [A] defined on the slide (not reproduced in this transcript). Assume an arbitrary starting vector x0 = {1 1 1}^T.

Example of Power Method Multiply the matrix [A] by {x}, then normalize the result of the product.

Example of Power Method As you continue to multiply each successive vector by [A], the iterates converge: λ = 4 and the vector u_k = {1 0 0}^T.
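Running the sketch above on the stand-in matrix A defined earlier gives the same kind of result the slide reports, a dominant eigenvalue of 4 with eigenvector {1 0 0}^T:

```python
mu, v = power_method(A)
print(mu)  # ~4.0, the dominant eigenvalue
print(v)   # ~[1, 0, 0] up to sign: the dominant eigenvector
```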

Shift method (optional) It is possible to obtain another eigenvalue from the same set of equations by using a technique known as shifting the matrix. Subtract a multiple of the identity from each side, thereby shifting every eigenvalue and changing which one is dominant: $A x = \lambda x \Rightarrow (A - sI)x = (\lambda - s)x$.

Shift method The shift s is taken to be the maximum eigenvalue of [A], already found with the Power method. The matrix is rewritten in the form [B] = [A] - s[I]. Use the Power method to obtain the largest eigenvalue of [B].
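A sketch of the shift using the stand-in matrix rather than the slides': every eigenvalue of [B] = [A] - s[I] is an eigenvalue of [A] minus s, so shifting the dominant eigenvalue of [B] back by s recovers another eigenvalue of [A].

```python
s, _ = power_method(A)           # s ~ 4, the dominant eigenvalue of A
B = A - s * np.eye(A.shape[0])   # shifted matrix [B] = [A] - s[I]

mu_B, _ = power_method(B)        # dominant eigenvalue of B (~ -3 here)
print(mu_B + s)                  # shift back: another eigenvalue of A (~ 1)
```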

Example of Shift Method Consider the matrix [A] defined on the slide. Assume an arbitrary starting vector x0 = {1 1 1}^T.

Example of Shift Method Multiply the shifted matrix [B] by {x}, then normalize the result of the product.

Example of Shift Method Continue with the iteration; the final value is λ_B = -5. However, to get the true eigenvalue of [A] you need to shift back: λ = λ_B + s.

Inverse Power Method The inverse power method is similar to the power method, except that it finds the smallest eigenvalue, using the following technique: if $A x = \lambda x$ with $\lambda \neq 0$, then $A^{-1} x = (1/\lambda) x$, so the smallest eigenvalue of $A$ corresponds to the dominant eigenvalue of $A^{-1}$.

Inverse Power Method The algorithm is the same as the Power method, applied to $A^{-1}$. The vector produced is the eigenvector of the smallest eigenvalue of $A$, but the computed eigenvalue $\mu$ belongs to $A^{-1}$, not $A$. To obtain the smallest eigenvalue of $A$ from the power method, take the reciprocal: $\lambda_{\min} = 1/\mu$.

Inverse Power Method In practice the inverse algorithm avoids calculating the inverse matrix explicitly: it uses an LU decomposition of [A] and solves $A x_{k+1} = y_k$ for the new {x} vector at each step.
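A sketch of this variant, assuming SciPy's LU routines (lu_factor/lu_solve); the matrix is factored once and the factorization is reused at every step in place of an explicit inverse:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, tol=1e-10, max_iter=500):
    """Power method on A^-1 via one LU factorization of A."""
    lu, piv = lu_factor(A)           # factor once: A = P L U
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    mu = 0.0
    for _ in range(max_iter):
        y = x / np.linalg.norm(x)
        x = lu_solve((lu, piv), y)   # solves A x = y; no explicit inverse
        mu = (y @ x) / (y @ y)       # dominant eigenvalue of A^-1
        if np.linalg.norm(mu * y - x) < tol:
            break
    return 1.0 / mu, y               # reciprocal: smallest eigenvalue of A

lam_min, v = inverse_power_method(A)
print(lam_min)  # ~1.0 for the stand-in matrix
```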

Example The matrix for the inverse power method example is defined on the slide (not reproduced in this transcript).

Accelerated Power Method The Power method can be accelerated by using the Rayleigh Quotient instead of the largest $w_k$ value. With $w_k = A z_k$, the Rayleigh Quotient is defined as $\lambda_k = \dfrac{z_k^T w_k}{z_k^T z_k}$.

Accelerated Power Method The value of the next z term is defined as $z_{k+1} = w_k / \lambda_k$; the Power method is adapted to scale by this new value instead of by the largest component of $w_k$.
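A sketch of the accelerated iteration under the reconstruction above (scaling by the Rayleigh Quotient estimate rather than by the largest entry of w); the variable names z and w follow the slides:

```python
def accelerated_power_method(A, tol=1e-10, max_iter=500):
    """Power method with Rayleigh Quotient scaling."""
    z = np.ones(A.shape[0])          # e.g. the slides' x0 = {1 1 1}^T
    lam = 0.0
    for _ in range(max_iter):
        w = A @ z                    # w_k = A z_k
        lam_new = (z @ w) / (z @ z)  # Rayleigh Quotient estimate
        converged = abs(lam_new - lam) < tol
        lam = lam_new
        if converged:
            break
        z = w / lam                  # next z term: z_{k+1} = w_k / lam_k
    return lam, z / np.linalg.norm(z)

lam, v = accelerated_power_method(A)
print(lam)  # ~4.0 for the stand-in matrix
```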

Example of Accelerated Power Method Consider the matrix [A] defined on the slide. Assume an arbitrary starting vector x0 = {1 1 1}^T.

Example of Accelerated Power Method Multiply the matrix [A] by {x} and form the Rayleigh Quotient of the result.

Example of Accelerated Power Method And so on: the iteration continues until the Rayleigh Quotient estimate converges to the dominant eigenvalue.