CSE 245: Computer Aided Circuit Simulation and Verification
Matrix Computations: Iterative Methods I
Chung-Kuan Cheng

Outline
- Introduction
- Direct Methods
- Iterative Methods
  - Formulations
  - Projection Methods
  - Krylov Space Methods
  - Preconditioned Iterations
  - Multigrid Methods
  - Domain Decomposition Methods

Introduction
- Direct methods (LU decomposition): general and robust, but can become complicated when N >= 1M.
- Iterative methods (Jacobi, Gauss-Seidel, Conjugate Gradient, GMRES, Multigrid, Domain Decomposition, Preconditioning): an excellent choice for SPD matrices, but remain an art for arbitrary matrices.

Introduction: Matrix Condition
Consider Ax = b. With errors, we solve the perturbed system (A + eE) x(e) = b + ed.
The deviation is x(e) - x = e (A + eE)^{-1} (d - Ex), so
  ||x(e) - x|| / ||x|| <= e ||A^{-1}|| (||d||/||x|| + ||E||) + O(e^2)
                       <= e ||A|| ||A^{-1}|| (||d||/||b|| + ||E||/||A||) + O(e^2)
We define the matrix condition number as K(A) = ||A|| ||A^{-1}||, i.e.
  ||x(e) - x|| / ||x|| <= K(A) (||ed||/||b|| + ||eE||/||A||)
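A small numpy sketch of this bound: it solves a well-conditioned and an ill-conditioned 2x2 system under the same relative perturbation and compares the observed relative error with K(A) times the relative perturbation. The matrices, the perturbation size eps, and the function name are illustrative choices, not from the lecture.

```python
import numpy as np

def perturbation_demo(A, eps=1e-6):
    """Compare the observed relative error with the first-order K(A) bound."""
    b = A @ np.ones(A.shape[0])                      # exact solution x = [1, ..., 1]
    x = np.linalg.solve(A, b)
    rng = np.random.default_rng(0)
    E = rng.standard_normal(A.shape)                 # matrix perturbation
    d = rng.standard_normal(A.shape[0])              # right-hand-side perturbation
    x_eps = np.linalg.solve(A + eps * E, b + eps * d)
    rel_err = np.linalg.norm(x_eps - x) / np.linalg.norm(x)
    bound = np.linalg.cond(A) * (eps * np.linalg.norm(d) / np.linalg.norm(b)
                                 + eps * np.linalg.norm(E, 2) / np.linalg.norm(A, 2))
    return rel_err, bound

well = np.array([[2.0, 1.0], [1.0, 2.0]])            # K(A) = 3
ill  = np.array([[1.0, 1.0], [1.0, 1.0001]])         # K(A) is roughly 4e4
print(perturbation_demo(well))                        # tiny error, tiny bound
print(perturbation_demo(ill))                         # error amplified by the large K(A)
```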

Introduction: Gershgorin Circle Theorem
For every eigenvalue r of a matrix A there exists an index i such that
  |r - a_ii| <= sum_{j != i} |a_ij|
Proof: Let r be an eigenvalue with eigenvector v, i.e. Av = rv.
Choose i such that |v_i| >= |v_j| for all j != i. Row i gives sum_j a_ij v_j = r v_i.
Thus (r - a_ii) v_i = sum_{j != i} a_ij v_j, so r - a_ii = (sum_{j != i} a_ij v_j) / v_i.
Therefore |r - a_ii| <= sum_{j != i} |a_ij|.
Note: if equality holds, then all the |v_j| are equal.
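A quick numerical illustration, assuming an arbitrary small test matrix (not one from the lecture): compute the disc centers a_ii and radii sum_{j != i} |a_ij| and check that every eigenvalue falls inside at least one disc.

```python
import numpy as np

# Illustrative check of the Gershgorin circle theorem on a small test matrix.
A = np.array([[ 4.0, 1.0, 0.5],
              [ 1.0, 3.0, 0.2],
              [-0.5, 0.2, 2.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # sum_{j != i} |a_ij|

for lam in np.linalg.eigvals(A):
    # every eigenvalue lies in at least one disc |lam - a_ii| <= r_i
    in_some_disc = np.any(np.abs(lam - centers) <= radii + 1e-12)
    print(lam, "inside a Gershgorin disc:", in_some_disc)
```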

Iterative Methods
Stationary: x^(k+1) = G x^(k) + c, where G and c do not depend on the iteration count k.
Non-stationary: x^(k+1) = x^(k) + a_k p^(k), where the step size a_k and the direction p^(k) change at each iteration.

Stationary: Jacobi Method
In the i-th equation, solve for the value of x_i while assuming the other entries of x remain fixed:
  x_i^(k+1) = (b_i - sum_{j != i} m_ij x_j^(k)) / m_ii
In matrix terms the method becomes:
  x^(k+1) = D^{-1} (L + U) x^(k) + D^{-1} b
where D, -L and -U represent the diagonal, strictly lower-triangular and strictly upper-triangular parts of M, with M = D - L - U.
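A minimal Jacobi sketch for M x = b in numpy, assuming M has a nonzero diagonal; the function and parameter names (jacobi, tol, max_iter) and the example system are illustrative.

```python
import numpy as np

def jacobi(M, b, x0=None, tol=1e-10, max_iter=1000):
    """Jacobi iteration x^(k+1) = D^{-1}((L + U) x^(k) + b) for M = D - L - U."""
    x = np.zeros(len(b)) if x0 is None else np.array(x0, dtype=float)
    D = np.diag(M)                         # diagonal entries of M
    R = M - np.diagflat(D)                 # off-diagonal part, equal to -(L + U)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D            # every entry updated from the old iterate
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])     # small diagonally dominant example
b = np.array([1.0, 2.0])
print(jacobi(M, b), np.linalg.solve(M, b)) # the two should agree
```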

Stationary: Gauss-Seidel Method
Like Jacobi, but previously computed results are used as soon as they are available:
  x_i^(k+1) = (b_i - sum_{j < i} m_ij x_j^(k+1) - sum_{j > i} m_ij x_j^(k)) / m_ii
In matrix terms the method becomes:
  x^(k+1) = (D - L)^{-1} (U x^(k) + b)
where D, -L and -U represent the diagonal, strictly lower-triangular and strictly upper-triangular parts of M, with M = D - L - U.
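A matching Gauss-Seidel sketch: within one sweep, the freshly updated entries x_j^(k+1) for j < i are used immediately. As before, the example matrix and the names are illustrative.

```python
import numpy as np

def gauss_seidel(M, b, x0=None, tol=1e-10, max_iter=1000):
    """Gauss-Seidel sweep: new values are used as soon as they are computed."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # updated x[:i] and old x[i+1:] enter the i-th equation
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / M[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(M, b), np.linalg.solve(M, b))
```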

Stationary: Successive Overrelaxation (SOR)
Devised by applying extrapolation to Gauss-Seidel in the form of a weighted average between the Gauss-Seidel update and the previous iterate:
  x_i^(k+1) = w x_i^(GS) + (1 - w) x_i^(k)
In matrix terms the method becomes:
  x^(k+1) = (D - wL)^{-1} (wU + (1 - w)D) x^(k) + w (D - wL)^{-1} b
where D, -L and -U represent the diagonal, strictly lower-triangular and strictly upper-triangular parts of M, with M = D - L - U.

SOR
Choose w to accelerate the convergence:
  w = 1: no relaxation (reduces to Gauss-Seidel)
  1 < w < 2: over-relaxation
  w < 1: under-relaxation
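A brief SOR sketch built on the Gauss-Seidel sweep above: each Gauss-Seidel update is blended with the previous iterate using the relaxation factor w. The SPD example matrix and the choice w = 1.2 are illustrative.

```python
import numpy as np

def sor(M, b, w=1.2, x0=None, tol=1e-10, max_iter=1000):
    """SOR: weighted average of the Gauss-Seidel update and the old iterate."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x[i + 1:]
            gs_update = (b[i] - s) / M[i, i]
            x[i] = w * gs_update + (1.0 - w) * x_old[i]   # w = 1 gives Gauss-Seidel
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric positive definite
b = np.array([1.0, 2.0])
print(sor(M, b, w=1.2), np.linalg.solve(M, b))
```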

Convergence of Stationary Methods
Linear equation: Mx = b.
- A sufficient condition for convergence of Jacobi and Gauss-Seidel is that the matrix M is strictly diagonally dominant.
- If M is symmetric positive definite, SOR converges for any w with 0 < w < 2.
- A necessary and sufficient condition for convergence is that the magnitude of the largest eigenvalue (the spectral radius) of the iteration matrix G is smaller than 1, where
    Jacobi: G = D^{-1} (L + U)
    Gauss-Seidel: G = (D - L)^{-1} U
    SOR: G = (D - wL)^{-1} (wU + (1 - w)D)
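As a concrete check, one can form the three iteration matrices for a small diagonally dominant M and verify that their spectral radii are below 1. The matrix and the value w = 1.2 are arbitrary illustrative choices.

```python
import numpy as np

def spectral_radius(G):
    return max(abs(np.linalg.eigvals(G)))

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
D = np.diag(np.diag(M))
L = -np.tril(M, -1)                        # strictly lower part, M = D - L - U
U = -np.triu(M, 1)                         # strictly upper part
w = 1.2

iteration_matrices = {
    "Jacobi":       np.linalg.inv(D) @ (L + U),
    "Gauss-Seidel": np.linalg.inv(D - L) @ U,
    "SOR":          np.linalg.inv(D - w * L) @ (w * U + (1 - w) * D),
}
for name, G in iteration_matrices.items():
    print(name, spectral_radius(G))        # all below 1, so each method converges
```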

Convergence of Gauss-Seidel
For A = D - L - L^T symmetric positive definite, the eigenvalues of G = (D - L)^{-1} L^T lie inside the unit circle.
Proof: Let G_1 = D^{1/2} G D^{-1/2} = (I - L_1)^{-1} L_1^T, with L_1 = D^{-1/2} L D^{-1/2}.
Let G_1 x = r x. Then L_1^T x = r (I - L_1) x, so x^T L_1^T x = r (1 - x^T L_1 x).
Writing y = x^T L_1 x (with x normalized), this gives y = r (1 - y), so r = y / (1 - y), and |r| <= 1 iff Re(y) <= 1/2.
Since A = D - L - L^T is PD, D^{-1/2} A D^{-1/2} = I - L_1 - L_1^T is PD, so 1 - 2 x^T L_1 x >= 0, i.e. y <= 1/2.
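A numerical spot check of this statement (illustrative, not from the slides): generate a random symmetric positive definite A, form G = (D - L)^{-1} L^T, and confirm its spectral radius is below 1.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5.0 * np.eye(5)              # symmetric positive definite by construction
D = np.diag(np.diag(A))
L = -np.tril(A, -1)                        # A = D - L - L^T since A is symmetric
G = np.linalg.inv(D - L) @ L.T             # Gauss-Seidel iteration matrix
print(max(abs(np.linalg.eigvals(G))))      # spectral radius < 1, as the theorem predicts
```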

Linear Equation: an optimization problem
- Quadratic function of vector x: f(x) = (1/2) x^T A x - b^T x + c
- Matrix A is positive definite if for any nonzero vector x: x^T A x > 0
- If A is symmetric positive definite, f(x) is minimized by the solution of Ax = b.

Linear Equation: an optimization problem
- Quadratic function: f(x) = (1/2) x^T A x - b^T x + c
- Derivative: f'(x) = (1/2) A^T x + (1/2) A x - b
- If A is symmetric: f'(x) = A x - b
- If A is positive definite, f(x) is minimized by setting f'(x) = 0, i.e. by solving A x = b.
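A quick numerical check of this claim with an arbitrary small SPD example: the gradient A x - b vanishes at x = A^{-1} b, and f is never smaller at nearby random points.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])     # symmetric positive definite
b = np.array([1.0, -1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x         # the quadratic form (constant c omitted)

def grad_f(x):
    return A @ x - b                       # gradient when A is symmetric

x_star = np.linalg.solve(A, b)
print(grad_f(x_star))                      # approximately [0, 0]: gradient vanishes at the solution
rng = np.random.default_rng(2)
for _ in range(3):
    p = x_star + rng.standard_normal(2)
    print(f(p) >= f(x_star))               # True: no random point beats the solution
```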

For a symmetric positive definite matrix A (figure of the quadratic form f(x)).

Gradient of the quadratic form: f'(x) points in the direction of steepest increase of f(x) (figure).

Symmetric Positive-Definite Matrix A
If A is symmetric positive definite, let x be the solution point (Ax = b) and p an arbitrary point. Then
  f(p) = f(x) + (1/2) (p - x)^T A (p - x)
Since A is positive definite, we have f(p) > f(x) if p != x, so the solution x is the unique minimizer of f.
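A short check of this identity on an illustrative SPD example: evaluate both sides at a few random points p and confirm that f(p) exceeds f(x).

```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])     # symmetric positive definite
b = np.array([2.0, -8.0])

def f(v):
    return 0.5 * v @ A @ v - b @ v

x = np.linalg.solve(A, b)                  # the solution point
rng = np.random.default_rng(1)
for _ in range(3):
    p = x + rng.standard_normal(2)
    lhs = f(p)
    rhs = f(x) + 0.5 * (p - x) @ A @ (p - x)
    print(np.isclose(lhs, rhs), lhs > f(x))  # identity holds and f(p) > f(x)
```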

If A is not positive definite (figure of the quadratic form for):
  a) a positive-definite matrix
  b) a negative-definite matrix
  c) a singular, positive-indefinite matrix
  d) an indefinite matrix

Non-stationary Iterative Method
Start from an initial guess x_0 and adjust it until it is close enough to the exact solution:
  x_{i+1} = x_i + a_i p_i,   i = 0, 1, 2, 3, ...
where p_i is the adjustment direction and a_i is the step size.
How do we choose the direction and the step size?

Steepest Descent Method (1)
Choose the direction in which f decreases most quickly: the direction opposite of the gradient, -f'(x_i) = b - A x_i,
which is also the direction of the residual r_i = b - A x_i.

Steepest Descent Method (2)
How to choose the step size a_i? Line search: a_i should minimize f along the direction r_i, which means
  d/da f(x_i + a r_i) = 0, i.e. f'(x_{i+1})^T r_i = 0 (the new gradient is orthogonal to r_i).
Solving for a_i gives a_i = (r_i^T r_i) / (r_i^T A r_i).

Steepest Descent Algorithm
Given x_0, iterate until the residual is smaller than the error tolerance:
  r_i = b - A x_i
  a_i = (r_i^T r_i) / (r_i^T A r_i)
  x_{i+1} = x_i + a_i r_i
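A direct translation of these three steps into numpy (a sketch for symmetric positive definite A; the example system, tolerance, and names are illustrative). The residual is updated as r_{i+1} = r_i - a_i A r_i to reuse the product A r_i.

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-10, max_iter=10000):
    """Steepest descent for SPD A: step along the residual with exact line search."""
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                          # residual = descent direction
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        a = (r @ r) / (r @ Ar)             # a_i = (r^T r) / (r^T A r)
        x = x + a * r
        r = r - a * Ar                     # r_{i+1} = r_i - a_i A r_i
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])     # symmetric positive definite
b = np.array([2.0, -8.0])
print(steepest_descent(A, b), np.linalg.solve(A, b))   # both give the same solution
```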

Steepest Descent Method: example (figure)
  a) Starting at (-2, -2), take the direction of steepest descent of f.
  b) Find the point on the intersection of these two surfaces that minimizes f.
  c) Intersection of surfaces.
  d) The gradient at the bottommost point is orthogonal to the gradient of the previous step.

Iterations of the Steepest Descent Method (figure).

Convergence of Steepest Descent (1)
Let:
  eigenvectors v_j with eigenvalues λ_j (A v_j = λ_j v_j), j = 1, 2, ..., n
  energy norm: ||e||_A = (e^T A e)^{1/2}

Convergence of Steepest Descent (2)

Convergence Study (n = 2)
Assume λ_1 >= λ_2 and let the spectral condition number be κ = λ_1 / λ_2.

Plot of w (figure).

Case Study (figures).

Bound of Convergence
  ||e_i||_A <= ((κ - 1) / (κ + 1))^i ||e_0||_A
It can be proved that this bound is also valid for n > 2, where κ = λ_max / λ_min is the spectral condition number of A.
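A numerical sanity check of this bound (illustrative example, not from the slides): run steepest descent on a small SPD system and verify that the energy-norm error stays below ((κ - 1)/(κ + 1))^i ||e_0||_A at every iteration.

```python
import numpy as np

def energy_norm(e, A):
    return np.sqrt(e @ A @ e)

A = np.diag([1.0, 3.0, 10.0])              # SPD with condition number κ = 10
b = np.array([1.0, 1.0, 1.0])
x_exact = np.linalg.solve(A, b)
kappa = np.linalg.cond(A)
factor = (kappa - 1.0) / (kappa + 1.0)     # per-iteration contraction bound

x = np.zeros(3)
e0 = energy_norm(x - x_exact, A)
for i in range(1, 21):
    r = b - A @ x
    if np.linalg.norm(r) < 1e-15:
        break
    a = (r @ r) / (r @ (A @ r))
    x = x + a * r
    err = energy_norm(x - x_exact, A)
    print(i, err <= factor**i * e0 + 1e-12)  # the bound holds at every iteration
```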