Linear Systems Iterative Solutions CSE 541 Roger Crawfis.

Sparse Linear Systems Computational Science deals with the simulation of natural phenomena, such as weather, blood flow, impact collisions, and earthquake response. To simulate these, effects such as heat transfer, electromagnetic radiation, fluid flow, and shock-wave propagation need to be taken into account. Combining initial conditions with the general laws of physics (conservation of energy and mass), a model of these phenomena often involves a Partial Differential Equation (PDE).

Example PDE’s The Wave equation: 1D: ∂²φ/∂t² = c² ∂²φ/∂x² 3D: ∂²φ/∂t² = c² ∇²φ Note: ∇²φ = ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² φ(x,y,z,t) is some continuous function of space and time (e.g., temperature).

Example PDE’s No changes over time (steady state): Laplace’s Equation: ∇²φ = 0. This can be solved analytically only for very simple geometric configurations and boundary conditions. In general, we need to use the computer to solve it.

Example PDE’s Second derivatives via the five-point stencil (φ_Up, φ_Down, φ_Left, φ_Right, φ_Middle): ∇²φ ≈ (φ_Left + φ_Right + φ_Up + φ_Down − 4φ_Middle) / h² = 0

Finite Differences Fundamentally we are approximating derivatives on a grid with spacing (step size) h. The Finite-Difference method uses a regular grid.

Finite Differences A very simple problem: find the electrostatic potential inside a box whose sides are held at a given potential. Set up an n by n grid on which the potential is defined and satisfies Laplace’s Equation:
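As a sketch of how this system can be assembled (using NumPy; the function name and the idea of passing the four side potentials as parameters are illustrative assumptions, not from the slides):

```python
import numpy as np

def laplace_system(n, top, bottom, left, right):
    """Assemble the five-point-stencil system A x = b for the potential
    on the n-by-n interior grid of a box whose four sides are held at
    fixed potentials. A is N-by-N with N = n^2."""
    N = n * n
    A = np.zeros((N, N))
    b = np.zeros(N)
    for i in range(n):          # grid row
        for j in range(n):      # grid column
            k = i * n + j       # flattened unknown index
            A[k, k] = -4.0
            # Each neighbor is either another unknown, or a known
            # boundary value that moves to the right-hand side.
            if i > 0:     A[k, k - n] = 1.0
            else:         b[k] -= top
            if i < n - 1: A[k, k + n] = 1.0
            else:         b[k] -= bottom
            if j > 0:     A[k, k - 1] = 1.0
            else:         b[k] -= left
            if j < n - 1: A[k, k + 1] = 1.0
            else:         b[k] -= right
    return A, b

A, b = laplace_system(3, 1.0, 1.0, 1.0, 1.0)
x = np.linalg.solve(A, b)
# With all four sides at the same potential, the interior is constant.
```

Solving directly with np.linalg.solve is only feasible here because n is tiny; the rest of the lecture is about why that stops working as n grows.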

Linear System The n by n grid yields a banded matrix of size n² by n², with bandwidth n.

3D Simulations An n by n by n grid yields a banded matrix of size n³ by n³, with bandwidth n².

Gaussian Elimination What happens to these banded matrices when Gaussian Elimination is applied? The matrix only has about 7n³ non-zero elements. Matrix size = N², where N = n³, i.e., n⁶ elements. Gaussian Elimination on these suffers from fill-in: the forward elimination phase will produce n² non-zero elements per row, or n⁵ non-zero elements in total.

Memory Costs Example n=300: Memory cost: 189,000,000 = 189×10⁶ non-zero elements. Floats => 756MB; Doubles => 1.4GB. The full matrix would be 7.29×10¹⁴ elements! Gaussian Elimination fill-in: ~n⁵ = 2.4×10¹² elements, i.e., roughly 10¹³ bytes even in single precision. And with n=300, simulating weather for the state of Ohio would have samples more than 1km apart. Remember, this is h in central differences.

Solutions for Sparse Matrices Need to keep memory (and computation) low. These types of problems motivate the Iterative Solutions for Linear Systems. Iterate until convergence.

Jacobi Iteration One of the easiest splittings of the matrix A is A = D − M, where D holds the diagonal elements of A. Ax = b → Dx − Mx = b → Dx = Mx + b → x⁽ᵏ⁾ = D⁻¹Mx⁽ᵏ⁻¹⁾ + D⁻¹b. It is trivial to compute D⁻¹.
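A minimal NumPy sketch of this splitting (the 2×2 system and the iteration count are made-up numbers for illustration):

```python
import numpy as np

def jacobi(A, b, x0, iterations):
    """Jacobi iteration via the splitting A = D - M:
    x_(k) = D^-1 (M x_(k-1) + b)."""
    D = np.diag(A)                 # diagonal of A, as a vector
    M = np.diag(D) - A             # off-diagonal part: M = D - A
    x = x0.astype(float)
    for _ in range(iterations):
        x = (M @ x + b) / D        # applying D^-1 is element-wise division
    return x

# A small strictly diagonally dominant system (made-up numbers):
#   4x +  y = 6
#   2x + 5y = 9
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 9.0])
x = jacobi(A, b, np.zeros(2), 50)
```

Because D is diagonal, each sweep costs only a matrix-vector product plus a division, which is the whole appeal of this splitting.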

Jacobi Iteration Another way to understand this is to treat each equation separately: Given the i-th equation, solve for xᵢ. Assume you know the other variables. Use the current guess for the other variables.

Jacobi iteration

Jacobi Iteration Cute, but will it work? Algorithms, even mathematical ones, need a mathematical framework or analysis. Let’s first look at a simple example.

Example system: Initial guess: Algorithm: Jacobi Iteration - Example

1 st iteration: 2 nd iteration: Jacobi Iteration - Example

x (3) = y (3) = z (3) = x (4) = y (4) = z (4) = Actual Solution: x =0 y =1 z =2

Jacobi Iteration Questions: 1. How many iterations do we need? 2. What is our stopping criteria? 3. Is it faster than applying Gaussian Elimination? 4. Are there round-off errors or other precision and robustness issues?

Jacobi Method - Implementation
while (!converged) {
  for (int i = 0; i < N; i++) {        // For each equation
    double sum = b[i];
    for (int j = 0; j < N; j++) {      // Compute new xi
      if (i != j) sum -= A[i][j] * x[j];
    }
    temp[i] = sum / A[i][i];
  }
  // Test for convergence …
  x = temp;
}
Complexity: Each Iteration: O(N²), Total: O(MN²) for M iterations.

Jacobi Method - Complexity
while (!converged) {
  for (int i = 0; i < N; i++) {                   // For each equation
    double sum = b[i];
    foreach (element j in nonZeroElements[i]) {   // Compute new xi
      if (i != j) sum -= A[i][j] * x[j];
    }
    temp[i] = sum / A[i][i];
  }
  // Test for convergence …
  x = temp;
}
Complexity: Each Iteration: O(pN), Total: O(MpN), p = # non-zero off-diagonal elements per row. For our 2D Laplacian equation, p=4; N=n² with n=300 => N=90,000.

Jacobi Iteration Cute, but does it work for all matrices? Does it work for all initial guesses? Algorithms, even mathematical ones, need a mathematical framework or analysis. We still do not have this.

Gauss-Seidel Iteration Split the matrix A into three parts, A = D + L + U, where D is the diagonal of A, L is the strictly lower triangular part, and U is the strictly upper triangular part. Ax = b → Dx + Lx + Ux = b → (D+L)x = b − Ux → (D+L)x⁽ᵏ⁾ = b − Ux⁽ᵏ⁻¹⁾
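Solving (D+L)x⁽ᵏ⁾ = b − Ux⁽ᵏ⁻¹⁾ is just a forward substitution, which is the same as updating x in place, row by row. A NumPy sketch (the 2×2 system and iteration count are made up for illustration):

```python
import numpy as np

def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel: solve (D + L) x_(k) = b - U x_(k-1).
    Updating x in place row by row is exactly the forward substitution."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(iterations):
        for i in range(n):
            # Entries above i already hold this iteration's values;
            # entries below i still hold last iteration's values.
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = s / A[i, i]
    return x

# Same kind of strictly diagonally dominant system as before:
#   4x +  y = 6
#   2x + 5y = 9
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 9.0])
x = gauss_seidel(A, b, np.zeros(2), 25)
```

Note that, unlike Jacobi, no temp vector is needed: the in-place update is the whole point.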

Gauss-Seidel Iteration Another way to understand this is to again treat each equation separately: Given the i-th equation, solve for xᵢ. Assume you know the other variables. Use the most current guess for the other variables.

Gauss-Seidel Iteration Looking at it more simply: some terms in each update come from the last iteration, while the rest already use values from this iteration.

Gauss-Seidel Iteration Questions: 1. How many iterations do we need? 2. What is our stopping criteria? 3. Is it faster than applying Gaussian Elimination? 4. Are there round-off errors or other precision and robustness issues?

Gauss-Seidel - Implementation
while (!converged) {
  for (int i = 0; i < N; i++) {                   // For each equation
    double sum = b[i];
    foreach (element j in nonZeroElements[i]) {
      if (i != j) sum -= A[i][j] * x[j];
    }
    x[i] = sum / A[i][i];                         // Update x[i] immediately
  }
  // Test for convergence …
  temp = x;
}
Complexity: Each Iteration: O(pN), Total: O(MpN), p = # non-zero elements. Differences from Jacobi: x[i] is overwritten in place, so later equations use the new values.

Convergence Jacobi Iteration can be shown to converge from any initial guess if A is strictly diagonally dominant: |aᵢᵢ| ≥ Σ_{j≠i} |aᵢⱼ| for every row i (diagonally dominant); strictly diagonally dominant replaces ≥ with >.
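This condition is cheap to check. A sketch (the example matrices are made up; note that swapping the rows of a dominant system destroys the property, which matters for the row-swap example later):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

A = np.array([[10.0, 1.0],
              [1.0, 10.0]])   # strictly dominant: 10 > 1 in each row
B = A[::-1]                   # same equations, rows swapped: [[1,10],[10,1]]
```

Here is_strictly_diagonally_dominant(A) holds while the row-swapped B fails, even though both represent the same set of equations.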

Convergence Gauss-Seidel can be shown to converge if A is symmetric positive definite.

Convergence - Jacobi Consider the convergence graphically for a 2D system: starting from an initial guess, each iteration solves Equation 1 for x and Equation 2 for y, stepping between the two lines toward their intersection.

Convergence - Jacobi What if we swap the order of the equations? It is the same set of equations, but the system is no longer diagonally dominant, and the iteration diverges from the initial guess.

Diagonally Dominant What does diagonally dominant mean for a 2D system? 10x+y=12 => high-slope (more vertical) x+10y=21 => low-slope (more horizontal) Identity matrix (or any diagonal matrix) would have the intersection of a vertical and a horizontal line. The b vector controls the location of the lines.

Convergence – Gauss-Seidel Graphically, Gauss-Seidel takes the same kind of steps between Equation 1 and Equation 2, but uses each new coordinate immediately, so it reaches the intersection in fewer steps than Jacobi.

Convergence - SOR Successive Over-Relaxation (SOR) just adds an extrapolation step to each update: w = 1.3 implies going an extra 30% along the computed step. Extrapolating only at the very end of a full sweep instead gives a mix of Jacobi and Gauss-Seidel.
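A sketch of SOR applied per update on top of Gauss-Seidel (the system and the choices w = 1.3 and 20 sweeps are illustrative, not from the slides):

```python
import numpy as np

def sor(A, b, x0, omega, iterations):
    """Gauss-Seidel with over-relaxation: compute the Gauss-Seidel value,
    then move omega times the step toward it (omega = 1 is plain GS)."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(iterations):
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] += omega * (gs - x[i])   # extrapolate past the GS value
    return x

A = np.array([[10.0, 1.0], [1.0, 10.0]])
b = np.array([12.0, 21.0])
x = sor(A, b, np.zeros(2), omega=1.3, iterations=20)
```

For symmetric positive definite A, SOR converges for any 0 < w < 2; choosing w well can substantially reduce the iteration count on large grids.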

Convergence - SOR SOR combined with Gauss-Seidel: each projection onto Equation 1 or Equation 2 is immediately followed by an extrapolation step (shown in bold in the figure).