Conjugate Gradient Method

Presentation transcript:

Conjugate Gradient Method. Invented by Hestenes and Stiefel around 1951, the conjugate gradient (CG) method is an iterative method for solving a linear system of equations Ax = b whose matrix A is symmetric and positive definite (SPD).

Conjugate Gradient Method. Example: solve Ax = b with

    A = [ 10  -1   2   0 ]        b = [   6 ]
        [ -1  11  -1   3 ]            [  25 ]
        [  2  -1  10  -1 ]            [ -11 ]
        [  0   3  -1   8 ]            [  15 ]

Starting from x_0 = 0, the CG iterates and residual norms are

            k=0       k=1       k=2       k=3       k=4
    x1      0         0.4716    0.9964    1.0015    1.0000
    x2      0         1.9651    1.9766    1.9833    2.0000
    x3      0        -0.8646   -0.9098   -1.0099   -1.0000
    x4      0         1.1791    1.0976    1.0197    1.0000
    ||r||   31.7      5.1503    1.0433    0.1929    0.0000

so the method reaches the exact solution x = (1, 2, -1, 1)ᵀ in four iterations.
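For reference, a minimal NumPy setup of this example (the right-hand side b is the one that appears as the initial residual in the iteration history later in the transcript); it checks that A is SPD and computes the exact solution for comparison. The CG iterates in the table can be reproduced with the implementation sketched after the algorithm slides below.

import numpy as np

# Example system from the slides.
A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

# Cholesky succeeds only for symmetric positive definite matrices.
np.linalg.cholesky(A)

# Exact solution and initial residual norm, for comparison with the table.
x_exact = np.linalg.solve(A, b)
print(x_exact)            # approximately [ 1.  2. -1.  1.]
print(np.linalg.norm(b))  # ~31.73 = ||r_0|| for x_0 = 0, the first entry of the ||r|| row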

Quadratic function. We want to solve the linear system Ax = b, where A is SPD. Define the quadratic function φ(x) = (1/2) xᵀAx - bᵀx.

Quadratic Function. Example: for a small SPD matrix, φ can be written out explicitly and minimized by hand (a worked example follows below). Remark: since A is SPD, φ is a strictly convex quadratic, so it has a unique minimizer.
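A small illustrative example (chosen here; the example on the original slide is not recoverable from the transcript): take A = [[2, 0], [0, 4]] and b = (2, 4)ᵀ. Then φ(x) = (1/2)(2 x1² + 4 x2²) - (2 x1 + 4 x2) = x1² + 2 x2² - 2 x1 - 4 x2, and setting ∇φ(x) = (2 x1 - 2, 4 x2 - 4) = 0 gives x = (1, 1), which is exactly the solution of Ax = b.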

Minimization equivalent to linear system. Remark: the following two problems are equivalent. Problem (1): solve the linear system Ax = b. Problem (2): minimize the quadratic function φ(x) = (1/2) xᵀAx - bᵀx. IDEA: instead of solving the system directly, search for the minimum of φ.
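To see the equivalence (a one-line derivation, stated here for completeness): since A is symmetric, ∇φ(x) = Ax - b and ∇²φ(x) = A. So x is a stationary point of φ exactly when Ax = b, and because A is positive definite this stationary point is the unique global minimizer of φ.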

Conjugate Gradient Method. Example: the minimum of the quadratic function φ coincides with the solution of the linear system Ax = b.

Minimum. IDEA: search for the minimum of φ by repeatedly moving from the current approximation along a suitably chosen direction. Remark: since A is SPD, φ has a unique minimizer, and it is the solution of Ax = b.

Conjugate Gradient Method. Each iteration moves the current approximation along a “search direction” p_k with a “step length” α_k:

    x_{k+1} = x_k + α_k p_k

Conjugate Gradient Method. The method works with three vectors per iteration, the iterate x_k, the residual r_k = b - A x_k, and the search direction p_k, and with two scalar constants per iteration, the step length α_k and the direction-update coefficient β_k.

Conjugate Gradient Method. In its standard (Hestenes-Stiefel) form the iteration reads:

    r_0 = b - A x_0,   p_0 = r_0
    for k = 0, 1, 2, ...
        α_k     = (r_kᵀ r_k) / (p_kᵀ A p_k)
        x_{k+1} = x_k + α_k p_k
        r_{k+1} = r_k - α_k A p_k
        β_k     = (r_{k+1}ᵀ r_{k+1}) / (r_kᵀ r_k)
        p_{k+1} = r_{k+1} + β_k p_k

A Python sketch of this iteration, applied to the 4x4 example above, follows below.
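A minimal NumPy implementation of the iteration above (a sketch; the stopping tolerance, the default maximum of n iterations, and the variable names are choices made here, not taken from the slides). Applied to the 4x4 example system, it reproduces the iterates and residual norms tabulated earlier.

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=None):
    """Plain CG for SPD A; returns the final iterate and the iterate history."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x              # residual r_k = b - A x_k
    p = r.copy()               # first search direction p_0 = r_0
    history = [x.copy()]
    if maxiter is None:
        maxiter = n            # in exact arithmetic CG finishes in at most n steps
    for _ in range(maxiter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # step length
        x = x + alpha * p
        r_new = r - alpha * Ap
        history.append(x.copy())
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # direction-update coefficient
        p = r_new + beta * p
        r = r_new
    return x, history

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

x, history = conjugate_gradient(A, b)
for k, xk in enumerate(history):
    print(k, np.round(xk, 4), round(float(np.linalg.norm(b - A @ xk)), 4))

For real problems one would normally call a library routine such as scipy.sparse.linalg.cg rather than a hand-written loop; the loop above is only meant to mirror the slides.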

INNER PRODUCT

Inner Product. DEF: a map ⟨·,·⟩ on pairs of vectors is called an inner product if it is symmetric, ⟨u, v⟩ = ⟨v, u⟩; linear in each argument; and positive definite, ⟨u, u⟩ > 0 for every u ≠ 0. Example: the Euclidean inner product ⟨u, v⟩ = uᵀv. Example: ⟨u, v⟩ = uᵀAv, where A is SPD. We define the norm induced by an inner product as ||u|| = √⟨u, u⟩.

Inner Product. DEF: the A-inner product ⟨u, v⟩_A = uᵀAv, where A is SPD. DEF: the A-norm (energy norm) ||u||_A = √(uᵀAu). Example: the 4x4 SPD matrix A of the running example defines such an inner product and norm.
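A direct NumPy translation (a small sketch; the function names a_inner and a_norm are just illustrative):

import numpy as np

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])

def a_inner(u, v, A=A):
    """A-inner product <u, v>_A = u^T A v (A must be SPD)."""
    return u @ A @ v

def a_norm(u, A=A):
    """A-norm (energy norm) ||u||_A = sqrt(u^T A u)."""
    return np.sqrt(a_inner(u, u, A))

u = np.array([1., 2., -1., 1.])
print(a_norm(u))   # positive for every nonzero u, because A is SPD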

Conjugate Gradient

Conjugate Gradient Method

Conjugate Gradient Method

Conjugate Gradient Method HW:

Conjugate Gradient Method. Lemma [Elman, Silvester & Wathen]: the residual vectors produced by CG are mutually orthogonal, ⟨r_i, r_j⟩ = 0 for i ≠ j, and the search direction vectors are A-orthogonal (conjugate), ⟨p_i, p_j⟩_A = p_iᵀ A p_j = 0 for i ≠ j.

Error and Residual vectors. Define the error e_k = x* - x_k (x* the exact solution) and the residual r_k = b - A x_k. REMARK: r_k = A e_k. REMARK: the residuals are orthogonal, the search directions are A-orthogonal, and the k-th CG iterate minimizes the A-norm of the error ||e_k||_A over all vectors of the form x_0 + span{p_0, ..., p_{k-1}}.
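A numerical check of these statements on the running example (a sketch: it re-runs a plain CG loop while storing the residuals and search directions, so the off-diagonal entries of RᵀR and PᵀAP should be near zero and the A-norm of the error should decrease to zero).

import numpy as np

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])
x_exact = np.linalg.solve(A, b)

def err_a_norm(x):
    e = x_exact - x
    return np.sqrt(e @ A @ e)   # A-norm of the error ||e_k||_A

x = np.zeros(4)
r = b - A @ x
p = r.copy()
R, P, errs = [r.copy()], [p.copy()], [err_a_norm(x)]
for _ in range(4):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    R.append(r.copy()); P.append(p.copy()); errs.append(err_a_norm(x))

R, P = np.array(R).T, np.array(P).T        # columns are r_k and p_k
print(np.round(R.T @ R, 6))                # nearly diagonal: residuals are orthogonal
print(np.round(P.T @ A @ P, 6))            # nearly diagonal: directions are A-orthogonal
print(np.round(errs, 4))                   # ||e_k||_A decreases monotonically to 0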

Conjugate Gradient Method. Full iteration history for the 4x4 example (iterates, residuals, search directions, and the scalars α and β):

    iterates x_k         k=0        k=1       k=2       k=3       k=4
        x1               0.0000     0.4716    0.9964    1.0015    1.0000
        x2               0.0000     1.9651    1.9766    1.9833    2.0000
        x3               0.0000    -0.8646   -0.9098   -1.0099   -1.0000
        x4               0.0000     1.1791    1.0976    1.0197    1.0000

    residuals r_k
        r1               6.0000     4.9781   -0.1681   -0.0123    0.0000
        r2              25.0000    -0.5464    0.0516    0.1166   -0.0000
        r3             -11.0000    -0.1526   -0.8202    0.0985    0.0000
        r4              15.0000    -1.1925   -0.6203   -0.1172   -0.0000

    search directions p_k
        p1               6.0000     5.1362    0.0427   -0.0108
        p2              25.0000     0.1121    0.0562    0.1185
        p3             -11.0000    -0.4424   -0.8384    0.0698
        p4              15.0000    -0.7974   -0.6530   -0.1395

    step lengths α       0.0786     0.1022    0.1193    0.1411
    coefficients β       0.0713     0.0263    0.0410    0.0342

Connection to Lanczos

Introduction to Krylov Subspace Methods. DEF: the Krylov sequence generated by a matrix A and a vector v is v, Av, A²v, A³v, .... Example: for the matrix

    A = [ 10  -1   2   0 ]
        [ -1  11  -1   3 ]
        [  2  -1  10  -1 ]
        [  0   3  -1   8 ]

and v = (1, 1, 1, 1)ᵀ, the Krylov sequence v, Av, A²v, A³v, A⁴v is

    [ 1 ]   [ 11 ]   [ 118 ]   [ 1239 ]   [ 12717 ]
    [ 1 ]   [ 12 ]   [ 141 ]   [ 1651 ]   [ 19446 ]
    [ 1 ]   [ 10 ]   [ 100 ]   [  989 ]   [  9546 ]
    [ 1 ]   [ 10 ]   [ 106 ]   [ 1171 ]   [ 13332 ]

Introduction to Krylov Subspace Methods. DEF: the Krylov subspace of dimension k generated by A and v is K_k(A, v) = span{v, Av, A²v, ..., A^(k-1)v}. Example: for the 4x4 matrix A above and v = (1, 1, 1, 1)ᵀ, K_k(A, v) is spanned by the first k vectors of the Krylov sequence. DEF: the Krylov matrix collects the vectors of the Krylov sequence as its columns, K_k = [ v  Av  A²v  ...  A^(k-1)v ]. Example: the Krylov matrix for the sequence above is shown on the next slide.

Introduction to Krylov Subspace Methods. Example: the Krylov matrix for A and v = (1, 1, 1, 1)ᵀ is

    K_5 = [ 1   11   118   1239   12717 ]
          [ 1   12   141   1651   19446 ]
          [ 1   10   100    989    9546 ]
          [ 1   10   106   1171   13332 ]

Remark: as the powers grow, the columns of the Krylov matrix typically point more and more in the direction of the dominant eigenvector, so they form an increasingly ill-conditioned basis of the Krylov subspace; a better, orthogonal basis is needed, which is what the Lanczos method provides.
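A quick numerical check of this example and of the remark (a sketch; the starting vector v = (1, 1, 1, 1)ᵀ is the one visible as the first column of the sequence above):

import numpy as np

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
v = np.ones(4)

# Build the Krylov matrix [v, Av, A^2 v, A^3 v, A^4 v] column by column.
cols = [v]
for _ in range(4):
    cols.append(A @ cols[-1])
K = np.column_stack(cols)
print(K.astype(int))                  # reproduces the numbers on the slide

# The later columns point almost in the same direction ...
c3, c4 = K[:, 3], K[:, 4]
print(c3 @ c4 / (np.linalg.norm(c3) * np.linalg.norm(c4)))   # ~0.997: nearly parallel
# ... so the Krylov matrix is a badly conditioned basis of the Krylov subspace.
print(np.linalg.cond(K[:, :4]))       # condition number of the 4x4 Krylov matrix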

Lanczos method. The Lanczos algorithm is defined by a three-term recurrence that builds, one vector at a time, an orthogonal (orthonormal) basis q_1, q_2, ..., q_k for the Krylov subspace K_k(A, q_1); collecting the recurrence coefficients gives a symmetric tridiagonal matrix T_k = Q_kᵀ A Q_k.
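A minimal sketch of the Lanczos iteration in NumPy (standard textbook form; the variable names and the normalization of the starting vector are choices made here, and breakdown, i.e. a zero β, is not handled). It returns an orthonormal basis Q of the Krylov subspace and the symmetric tridiagonal matrix T = QᵀAQ.

import numpy as np

def lanczos(A, v, k):
    """k steps of Lanczos for symmetric A: Q has orthonormal columns spanning
    the Krylov subspace K_k(A, v), and T = Q^T A Q is symmetric tridiagonal."""
    n = len(v)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)                     # beta[j] couples q_j and q_{j+1}
    q = v / np.linalg.norm(v)
    q_prev = np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < k - 1:
            beta[j] = np.linalg.norm(w)    # assumes no breakdown (beta[j] != 0)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return Q, T

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
v = np.ones(4)

Q, T = lanczos(A, v, 4)
print(np.round(Q.T @ Q, 8))          # ~ identity: orthonormal basis
print(np.round(Q.T @ A @ Q - T, 8))  # ~ zero: T is the tridiagonal projection of A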