Iterative methods

Iterative methods for Ax=b
Iterative methods produce a sequence of approximations x_1, x_2, …, x_i, … So they also produce a sequence of errors e_i = x - x_i, where x is the exact solution. We say that a method converges if ||e_i|| → 0 as i → ∞.

What is computationally appealing about iterative methods?
Iterative methods produce the approximations using a very basic primitive: matrix-vector multiplication. If a matrix A has m non-zero entries, then A*x can be computed in time O(m); essentially, every non-zero entry is read from RAM only once. In some applications we don’t even know the matrix A explicitly: we may only have a function f(x) that returns A*x for some unknown matrix A.
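A minimal sketch of this primitive (not from the slides; it assumes Python with NumPy/SciPy, and the matrix size and density are made up): a sparse matrix-vector product costs O(m), and the same iteration can run when A is only available as a black-box function.

```python
import numpy as np
from scipy.sparse import random as sparse_random

n = 1000
A = sparse_random(n, n, density=0.01, format="csr", random_state=0)  # ~10,000 non-zeros
x = np.ones(n)

y_explicit = A @ x              # O(m) work: each stored entry is touched once

def apply_A(v):
    # "matrix-free" operator: the iterative method only ever calls this
    return A @ v

y_implicit = apply_A(x)
assert np.allclose(y_explicit, y_implicit)
```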

What is computationally appealing about iterative methods?
The memory requirements are also O(m): if the matrix fits in RAM, the method can run. This is a HUGE IMPROVEMENT over the O(m^2) of Gaussian elimination. In most applications Gaussian elimination is impossible because memory is limited; iterative methods are the only approach that can potentially work.

What might be problematic about iterative methods?
Do they always converge? If they converge, how fast do they converge? Speed of convergence: how many steps k are required so that we gain one decimal place in the error, in other words ||e_{i+k}|| ≤ ||e_i|| / 10?

Richardson’s iteration
Perhaps the simplest iteration: x_{i+1} = (I - A) x_i + b. It can be seen as a fixed point iteration for the map x → (I - A)x + b, whose fixed point satisfies Ax = b. The error reduction equation is simple too: e_{i+1} = (I - A) e_i, so e_i = (I - A)^i e_0.
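A minimal sketch of the iteration (not part of the slides; the 2x2 matrix and the iteration count are made up, chosen so that the eigenvalues of A lie in (0, 2) and the method converges):

```python
import numpy as np

def richardson(A, b, iters=200):
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - A @ x)          # x_{i+1} = (I - A) x_i + b
    return x

A = np.array([[1.0, 0.2],
              [0.2, 0.5]])           # eigenvalues ~1.07 and ~0.43, both in (0, 2)
b = np.array([1.0, 2.0])
x = richardson(A, b)
print(np.linalg.norm(A @ x - b))     # residual is tiny
```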

Richardson’s iteration for diagonal matrices
Let’s study convergence for positive diagonal matrices A = diag(d_1, …, d_n). Assume the initial error is the all-ones vector, e_0 = (1, …, 1). Then the j-th coordinate of e_i is (1 - d_j)^i. Let’s play with numbers…
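Playing with numbers, as the slide suggests (the particular values of d_j below are chosen for illustration): the j-th error coordinate (1 - d_j)^i shrinks when 0 < d_j < 2 and blows up otherwise.

```python
for d in [0.1, 0.5, 1.0, 1.5, 2.0, 2.5]:
    errs = [(1 - d) ** i for i in (1, 5, 20)]
    print(f"d_j = {d}: error coordinate after 1, 5, 20 steps = {errs}")
# 0 < d_j < 2 shrinks, d_j = 2 oscillates with magnitude 1, d_j > 2 blows up
```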

Richardson’s iteration for diagonal matrices
So it seems that we must have |1 - d_j| < 1, i.e. 0 < d_j < 2, for every j …and Richardson’s iteration won’t converge for all matrices. ARE WE DOOMED?

Maybe we’re not (yet) doomed
Suppose we know a d such that d ≥ d_j for all j. We can then try to solve the equivalent system (A/d) x = b/d. The new matrix A’ = A/d is a diagonal matrix with elements d’_j = d_j / d such that 0 < d’_j ≤ 1.

Richardson’s iteration converges for an equivalent “scaled” system
Since 0 < d’_j ≤ 1, every error coordinate (1 - d’_j)^i goes to 0, so Richardson’s iteration on the scaled system converges to the solution of the original one.
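A sketch of the scaling trick (the diagonal entries and iteration count below are made up): plain Richardson diverges here because some d_j > 2, but dividing both sides by d = max_j d_j gives a system on which it converges to the same solution.

```python
import numpy as np

d_vals = np.array([0.5, 3.0, 10.0])       # A = diag(d_vals); entries > 2 break plain Richardson
A = np.diag(d_vals)
b = np.array([1.0, 1.0, 1.0])

d = d_vals.max()                          # any d >= max_j d_j works
A_s, b_s = A / d, b / d                   # equivalent scaled system A'x = b'

x = np.zeros(3)
for _ in range(2000):
    x = x + (b_s - A_s @ x)
print(np.linalg.norm(A @ x - b))          # small: x also solves the original system
```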

You’re cheating us: diagonal matrices are easy anyway!
Spectral decomposition theorem: every symmetric matrix has the eigenvalue decomposition A = V D V^T, where D = diag(λ_1, …, λ_n) holds the eigenvalues and V is orthogonal. So what is (I - A)^i? It is V (I - D)^i V^T. So if (I - D)^i goes to zero, then (I - A)^i goes to zero too.
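A quick numerical check of this identity (the random symmetric test matrix and the power i = 7 are arbitrary choices, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                          # symmetric test matrix
w, V = np.linalg.eigh(A)                   # A = V diag(w) V^T

i = 7
lhs = np.linalg.matrix_power(np.eye(4) - A, i)
rhs = V @ np.diag((1 - w) ** i) @ V.T      # V (I - D)^i V^T
print(np.allclose(lhs, rhs))               # True
```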

Can we scale any positive system? Gershgorin’s theorem
Recall the definition of the infinity norm for matrices: ||A||_∞ = max_j Σ_k |a_jk|. By Gershgorin’s theorem, every eigenvalue of A lies in a disc centered at some a_jj with radius Σ_{k≠j} |a_jk|, so every eigenvalue is at most ||A||_∞. Hence we can take d = ||A||_∞ as the scaling factor.
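A small check of the bound (the symmetric positive definite test matrix is made up): the largest eigenvalue never exceeds the infinity norm, so ||A||_∞ can serve as the scaling factor d.

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])        # symmetric, diagonally dominant, positive definite

d = np.linalg.norm(A, np.inf)             # max absolute row sum = 5
lam_max = np.linalg.eigvalsh(A).max()
print(lam_max, d, lam_max <= d)           # the Gershgorin bound holds
```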

Richardson’s iteration converges for an equivalent “scaled” system for all positive matrices, e.g. A’ = A / ||A||_∞, b’ = b / ||A||_∞.

How about the rate of convergence?
We showed that the j-th coordinate of the error e_i is (1 - d’_j)^i. How many steps do we need to take to make this coordinate smaller than 1/10? …And the answer is about i ≥ ln(10) / ln(1/(1 - d’_j)), which is roughly ln(10) / d’_j when d’_j is small.
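A sketch of this step count for a few values of d’_j (the values are chosen for illustration):

```python
import math

for dj in [0.5, 0.1, 0.01, 0.001]:
    steps = math.ceil(math.log(10) / -math.log(1 - dj))
    print(f"d'_j = {dj}: about {steps} steps per decimal digit")
# roughly ln(10)/d'_j steps: the smaller d'_j is, the slower the convergence
```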

How about the rate of convergence?
So in the worst case the number of iterations we need per decimal digit is about ln(10) / min_j d’_j = ln(10) · d / min_j d_j, i.e. proportional to the condition number of A. ARE WE DOOMED, version 2?

We may have some more information about the matrix A
Suppose we know numbers d’_j that approximate the d_j, say d_j ≤ d’_j ≤ c · d_j for some small c. We can then try to solve the equivalent system D’^{-1} A x = D’^{-1} b, where D’ = diag(d’_1, …, d’_n). The new matrix A’ = D’^{-1} A is a diagonal matrix with elements d_j / d’_j such that 1/c ≤ d_j / d’_j ≤ 1, so only a few iterations per decimal digit are needed.

Preconditioning
In general, changing the system to B^{-1} A x = B^{-1} b is called preconditioning. Why? Because we try to make a new matrix B^{-1} A with a better condition number. Do we need to have the inverse of B? No: at each iteration we only need to apply B^{-1} to a vector. So B must be a matrix such that By = z can be solved more easily.
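A minimal sketch of preconditioned Richardson with a diagonal (Jacobi-style) preconditioner B = diag(A); the badly scaled 2x2 matrix is made up. Note that B is never inverted as a matrix: each step only solves the trivial diagonal system By = r.

```python
import numpy as np

A = np.array([[100.0, 1.0],
              [  1.0, 0.02]])         # badly scaled SPD matrix; plain scaled Richardson is very slow here
b = np.array([1.0, 1.0])
B_diag = np.diag(A)                   # preconditioner B = diag(A), stored as a vector

x = np.zeros(2)
for _ in range(500):
    r = b - A @ x                     # residual
    x = x + r / B_diag                # apply B^{-1}, i.e. solve B y = r
print(np.linalg.norm(A @ x - b))      # converges in a few hundred steps
```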

Richardson’s iteration converges fast for an equivalent system when we have some more info in the form of a preconditioner.

Laplacians of weighted graphs
Symmetric, negative off-diagonals, zero row-sums. Symmetric diagonally dominant (SDD) matrix: add a positive diagonal to a Laplacian and flip off-diagonal signs, keeping symmetry.
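A sketch of the construction for a tiny weighted graph (the edges and weights are made up): the diagonal holds weighted degrees, the off-diagonals hold -w(u, v), and every row sums to zero.

```python
import numpy as np

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]   # (u, v, weight)
n = 3
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w
    L[v, v] += w
    L[u, v] -= w
    L[v, u] -= w

print(L)
print(L.sum(axis=1))          # all zeros: zero row-sums
```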

Preconditioning for Laplacians
The star graph: a simple diagonal scaling makes the condition number good. The line graph: the condition number is bad even with scaling. ARE WE DOOMED, version 3?
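A sketch comparing the two graphs (the graph size n and the use of the symmetric scaling D^{-1/2} L D^{-1/2} are choices made here, not from the slides): the "condition number" reported is the ratio of the largest to the smallest non-zero eigenvalue, which stays around 2 for the star but grows like n^2 for the line (path) graph.

```python
import numpy as np

def laplacian(edges, n):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

def scaled_condition(L):
    d = np.sqrt(np.diag(L))
    Ln = L / np.outer(d, d)               # diagonally scaled Laplacian D^{-1/2} L D^{-1/2}
    w = np.sort(np.linalg.eigvalsh(Ln))
    return w[-1] / w[1]                   # skip the Laplacian's zero eigenvalue

n = 50
star = [(0, i) for i in range(1, n)]
path = [(i, i + 1) for i in range(n - 1)]
print(scaled_condition(laplacian(star, n)))   # about 2, independent of n
print(scaled_condition(laplacian(path, n)))   # large, grows like n^2
```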