Direct Methods for Sparse Linear Systems Lecture 4 Alessandra Nardi Thanks to Prof. Jacob White, Suvranu De, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy

Last Lecture Review
Solution of systems of linear equations
– Existence and uniqueness
Gaussian elimination basics
– LU factorization
– Pivoting

Outline
Error mechanisms
Sparse matrices
– Why they are nice
– How we store them
– How we can exploit and preserve sparsity

Error Mechanisms
Round-off error
– Pivoting helps
Ill-conditioning (matrix is almost singular)
– Bad luck: a property of the matrix itself
– Pivoting does not help
Numerical stability of the method

Ill-Conditioning: Norms
Norms are useful for discussing error in numerical problems.
A norm ||.|| satisfies: ||x|| >= 0, with ||x|| = 0 only for x = 0; ||ax|| = |a| ||x||; and ||x + y|| <= ||x|| + ||y||.

Ill-Conditioning: Vector Norms
L2 (Euclidean) norm: ||x||_2 = (sum_i |x_i|^2)^(1/2); its unit ball is the unit circle.
L1 norm: ||x||_1 = sum_i |x_i|
L-infinity norm: ||x||_inf = max_i |x_i|; its unit ball is the unit square.
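
As a quick sanity check, all three norms can be evaluated with NumPy (a small illustrative sketch, not from the lecture):

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 2))       # L2:   sqrt(3^2 + 4^2) = 5.0
print(np.linalg.norm(x, 1))       # L1:   |3| + |-4|      = 7.0
print(np.linalg.norm(x, np.inf))  # Linf: max(|3|, |4|)   = 4.0
```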

Ill-Conditioning: Matrix Norms
Vector-induced norm: ||A|| = max over x != 0 of ||Ax|| / ||x||, i.e. the maximum "magnification" of x by A.
||A||_1 = max abs column sum
||A||_inf = max abs row sum
||A||_2 = (largest eigenvalue of A^T A)^(1/2)
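
These characterizations are easy to verify numerically; a minimal NumPy sketch (the matrix A is my own example):

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [1.0,  3.0]])

col_sum = np.abs(A).sum(axis=0).max()                 # max abs column sum
row_sum = np.abs(A).sum(axis=1).max()                 # max abs row sum
spec    = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # sqrt(largest eig of A^T A)

assert np.isclose(col_sum, np.linalg.norm(A, 1))
assert np.isclose(row_sum, np.linalg.norm(A, np.inf))
assert np.isclose(spec,    np.linalg.norm(A, 2))
```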

More properties of the matrix norm: ||Ax|| <= ||A|| ||x|| and ||AB|| <= ||A|| ||B||
Condition number: k(A) = ||A|| ||A^(-1)||
– It can be shown that k(A) >= 1.
– Large k(A) means the matrix is almost singular (ill-conditioned).
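
A sketch of what a large condition number looks like in practice (the matrix values are made up for illustration):

```python
import numpy as np

# Nearly singular: the second row is almost a multiple of the first.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])

print(np.linalg.cond(A))  # ~4e10: expect to lose roughly 10 digits of accuracy
```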

Ill-Conditioning: Perturbation Analysis
What happens if we perturb b? With Mx = b and M(x + dx) = b + db:
||dx|| / ||x|| <= k(M) ||db|| / ||b||
So a large k(M) is bad.

Ill-Conditioning: Perturbation Analysis
What happens if we perturb M? With (M + dM)(x + dx) = b:
||dx|| / ||x + dx|| <= k(M) ||dM|| / ||M||
Again, a large k(M) is bad.
Bottom line: if the matrix is ill-conditioned, round-off puts you in trouble.
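
The bound is easy to see in action on the classic ill-conditioned Hilbert matrix; a small sketch assuming SciPy is available (the perturbation size is my own choice):

```python
import numpy as np
from scipy.linalg import hilbert  # classic ill-conditioned test matrix

n = 12
M = hilbert(n)                    # kappa(M) ~ 1e16 for n = 12
x_true = np.ones(n)
b = M @ x_true

# A tiny relative perturbation of b...
rng = np.random.default_rng(0)
db = 1e-10 * np.linalg.norm(b) * rng.standard_normal(n)
x_pert = np.linalg.solve(M, b + db)

# ...yields a huge relative error in x, exactly as the bound allows.
print("kappa(M)        =", np.linalg.cond(M))
print("rel. error in x =", np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true))
```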

Ill-Conditioning: Perturbation Analysis
A geometric approach is more intuitive. Think of solving Mx = b as deciding how much of each column of M makes up b.
– When the columns are orthogonal, the split is easy to determine.
– When the columns are nearly aligned, it is hard to decide how much of one versus how much of the other, so small perturbations cause large changes in the answer.

Numerical Stability
Rounding errors may accumulate and propagate unstably in a bad algorithm.
It can be proven that for Gaussian elimination the accumulated error is bounded.

Summary of Error Mechanisms for GE
Rounding: due to the machine's finite precision, the solution has some error even if the algorithm is perfect.
– Pivoting helps to reduce it.
Matrix conditioning:
– If the matrix is "good" (well-conditioned), complete pivoting solves any round-off problem.
– If the matrix is "bad" (almost singular), there is nothing to be done.
Numerical stability: how rounding errors accumulate.
– GE is stable.

LU – Computational Complexity
Computational complexity is O(n^3) for a dense n x n matrix M. We cannot afford this complexity.
Instead, exploit the natural sparsity that occurs in circuit equations.
– Sparsity: many zero elements.
– A matrix is "sparse" when it is advantageous to exploit its sparsity.
Exploiting sparsity reduces the cost to roughly O(n^1.1) to O(n^1.5).

LU – Goals of Exploiting Sparsity
(1) Avoid storing zero entries
– Reduces memory usage
– Decomposition is faster since you do not need to access them (at the price of a more complicated data structure)
(2) Avoid trivial operations
– Multiplication by zero
– Addition with zero
(3) Avoid losing sparsity (fill-in)

Sparse Matrices – Resistor Line
[Figure: a resistor line; its nodal matrix is tridiagonal.]
Tridiagonal case

GE Algorithm – Tridiagonal Example

For i = 1 to n-1 {              "For each source row"
    For j = i+1 to n {          "For each target row below the source"
        multiplier = M(j,i) / M(i,i)     (M(i,i) is the pivot)
        For k = i+1 to n {      "For each row element beyond the pivot"
            M(j,k) = M(j,k) - multiplier * M(i,k)
        }
    }
}

For a tridiagonal matrix only j = i+1 and k = i+1 contribute, so the factorization takes order N operations! (See the runnable sketch below.)
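
In modern terms this O(N) specialization is the Thomas algorithm; a runnable Python sketch under the no-pivoting assumption (array names are mine):

```python
import numpy as np

def tridiag_solve(a, b, c, rhs):
    """O(N) GE for a tridiagonal system: a = sub-, b = main-, c = super-diagonal."""
    n = len(b)
    b, rhs = b.astype(float), rhs.astype(float)  # work on copies
    for i in range(n - 1):
        m = a[i] / b[i]             # multiplier: only one target row per pivot
        b[i + 1] -= m * c[i]        # only one element beyond the pivot changes
        rhs[i + 1] -= m * rhs[i]
    x = np.empty(n)
    x[-1] = rhs[-1] / b[-1]
    for i in range(n - 2, -1, -1):  # back substitution, also O(N)
        x[i] = (rhs[i] - c[i] * x[i + 1]) / b[i]
    return x

# Resistor-line style matrix: main diagonal 2, off-diagonals -1
n = 5
x = tridiag_solve(-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1), np.ones(n))
```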

Sparse Matrices – Fill-in – Example 1
[Figure: a symmetric, diagonally dominant nodal matrix with many zero entries.]

Sparse Matrices – Fill-in – Example 1
[Figure: the matrix's nonzero structure (X = nonzero) and the matrix after one LU step; the elimination creates new nonzero entries.]

Sparse Matrices – Fill-in – Example 2
Fill-ins propagate: fill-ins from step 1 result in further fill-ins in step 2.
[Figure: the nonzero structure across two elimination steps.]

Sparse Matrices – Fill-in & Reordering
Node reordering can reduce fill-in.
– Preserves properties (symmetry, diagonal dominance)
– Equivalent to swapping rows and columns
[Figure: the same matrix under two orderings; one produces fill-ins, the other produces none.]
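
The effect is easy to reproduce with the classic "arrowhead" matrix (my own illustration, assuming SciPy): a dense first row/column fills in completely, while eliminating that dense node last produces no fill at all.

```python
import numpy as np
from scipy.linalg import lu

n = 6
A = np.eye(n) * 4.0           # diagonally dominant, so no pivot swaps occur
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 4.0                 # "arrowhead": node 0 touches every other node

def fill_ins(A):
    """Count nonzeros created by LU beyond the original pattern."""
    _, L, U = lu(A)
    filled = np.count_nonzero(np.abs(L) + np.abs(U) > 1e-12)
    return filled - np.count_nonzero(A)

perm = np.arange(n)[::-1]     # reorder so the dense node is eliminated last
print("fill-ins, bad order :", fill_ins(A))                      # many
print("fill-ins, good order:", fill_ins(A[np.ix_(perm, perm)]))  # zero
```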

Exploiting and Maintaining Sparsity
Criteria for exploiting sparsity:
– Minimum number of operations
– Minimum number of fill-ins
Pivoting to maintain sparsity is an NP-complete problem, so heuristics are used:
– Markowitz; Berry; Hsieh and Ghausi; Nakhla, Singhal and Vlach
– Choice: Markowitz (about 5% more fill-ins, but faster)
Pivoting for accuracy may conflict with pivoting for sparsity.

Sparse Matrices – Fill-in & Reordering
Where can fill-in occur? [Figure: the already-factored block, the multipliers, and the possible fill-in locations in the unfactored part.]
Fill-in estimate = (nonzeros in the unfactored part of the row - 1) x (nonzeros in the unfactored part of the column - 1)
This estimate is the Markowitz product.

Sparse Matrices – Fill-in & Reordering
Markowitz reordering (diagonal pivoting): at each elimination step, pick as pivot the diagonal with the smallest Markowitz product in the unfactored part, then eliminate it.
A greedy algorithm, but close to optimal!
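
A compact sketch of the greedy loop on a boolean structure matrix (simplified: it ignores numerical values and tie-breaking, and is not the lecture's implementation):

```python
import numpy as np

def markowitz_order(S):
    """Greedy diagonal Markowitz ordering on a boolean nonzero-structure matrix S."""
    S = S.copy().astype(bool)
    n = S.shape[0]
    active = list(range(n))
    order = []
    for _ in range(n):
        # Markowitz product (row nonzeros - 1) * (col nonzeros - 1),
        # counted over the active (unfactored) submatrix only.
        best = min(active, key=lambda i:
                   (S[i, active].sum() - 1) * (S[active, i].sum() - 1))
        order.append(best)
        # Simulate elimination: fill-in where the pivot row meets the pivot column.
        rows = [j for j in active if j != best and S[j, best]]
        cols = [k for k in active if k != best and S[best, k]]
        for j in rows:
            S[j, cols] = True
        active.remove(best)
    return order
```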

Sparse Matrices – Fill-in & Reordering
Why try only diagonal pivots?
– Corresponds to node reordering in the nodal formulation
– Reduces search cost
– Preserves matrix properties: diagonal dominance, symmetry

Sparse Matrices – Fill-in & Reordering
[Figure: pattern of a filled-in matrix; most of it stays very sparse while a trailing block becomes dense.]

Sparse Matrices – Fill-in & Reordering
[Figure: nonzero pattern of an unfactored random matrix.]

Sparse Matrices – Fill-in & Reordering
[Figure: nonzero pattern of the same random matrix after factoring, showing the fill-in.]

Sparse Matrices – Data Structure
There are several ways of storing a sparse matrix in compact form.
Trade-off:
– Storage amount
– Cost of data access and update procedures
An efficient data structure: the linked list

Sparse Matrices – Data Structure 1
Orthogonal linked list. [Figure: each nonzero entry is a list node linked both to the next nonzero in its row and to the next nonzero in its column.]

Sparse Matrices – Data Structure 2
Arrays of data in a row: for each row i = 1..N, store the pairs (Val_i1, Col_i1), (Val_i2, Col_i2), ..., holding each nonzero matrix entry together with its column index. A vector of row pointers marks where each row's data begins.
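
This row-wise layout is essentially the modern compressed sparse row (CSR) format; a small SciPy illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
S = csr_matrix(A)

print(S.data)     # matrix entries, row by row: [4. 1. 3. 2. 5.]
print(S.indices)  # column index of each entry: [0 2 1 0 2]
print(S.indptr)   # row pointers: row i's data is data[indptr[i]:indptr[i+1]]
```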

Sparse Matrices – Data Structure: Problem of Misses
Eliminating source row i from target row j. [Figure: the nonzero patterns of rows i and j.]
You must read all of row j's entries just to find the few (three, in the figure) that match row i.
Every miss is an unneeded memory reference (expensive!).
You could have more misses than arithmetic operations!

Sparse Matrices – Data Structure: Scattering for Miss Avoidance
1) Read all the elements in row j once and scatter them into an n-length dense vector.
2) Then access only the needed elements using direct array indexing!
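
A sketch of the scatter/gather trick, using Python dicts as stand-ins for linked-list rows (the names and layout are mine, not the lecture's):

```python
import numpy as np

def eliminate_row(n, row_i, row_j, multiplier):
    """Update sparse row j by row i using a dense scatter vector."""
    work = np.zeros(n)                 # n-length dense work vector
    for col, val in row_j.items():     # 1) scatter row j once
        work[col] = val
    for col, val in row_i.items():     # 2) direct indexing: no searching
        work[col] -= multiplier * val  #    through row j, hence no misses
    # Gather the result back into sparse form (fill-ins would appear here).
    return {c: work[c] for c in set(row_j) | set(row_i) if work[c] != 0.0}

row_i = {0: 2.0, 3: 1.0}
row_j = {0: 4.0, 2: 5.0, 3: 2.0}
print(eliminate_row(4, row_i, row_j, multiplier=2.0))  # {2: 5.0}
```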

Sparse Matrices – Graph Approach
Structurally symmetric matrices and graphs. [Figure: a sparse matrix pattern and the corresponding graph.]
– One node per matrix row
– One edge per off-diagonal nonzero pair

Sparse Matrices – Graph Approach: Markowitz Products
[Figure: the same matrix and graph, with each diagonal's Markowitz product marked.]
For a structurally symmetric matrix, Markowitz product = (node degree)^2.

Sparse Matrices – Graph Approach: Factorization
One step of LU factorization, seen on the graph: [Figure: the graph before and after eliminating one node.]
– Delete the node associated with the pivot row.
– "Tie together" the graph edges: the pivot's neighbors become pairwise connected, and each new edge corresponds to a fill-in.
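
One elimination step on the graph can be sketched in a few lines (the adjacency-set representation is my own choice):

```python
def eliminate_node(adj, v):
    """Remove node v and make its neighbors pairwise adjacent (a clique).
    adj maps node -> set of neighbors; returns the new edges (= fill-ins)."""
    neighbors = adj.pop(v)
    for a in neighbors:
        adj[a].discard(v)                  # delete edges to the pivot node
    fill = []
    for a in neighbors:
        for b in neighbors:
            if a < b and b not in adj[a]:  # "tie together" the neighbors
                adj[a].add(b)
                adj[b].add(a)
                fill.append((a, b))
    return fill

adj = {1: {2, 3}, 2: {1}, 3: {1}}          # a path: 2 - 1 - 3
print(eliminate_node(adj, 1))              # [(2, 3)]: eliminating 1 fills (2, 3)
```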

Sparse Matrices – Graph Approach: Example
[Figure: a five-node graph (nodes 1-5), each node labeled with its Markowitz product, determined by its degree.]

Sparse Matrices – Graph Approach: Example
Swap node 2 with node 1, so node 2 is eliminated first. [Figure: the remaining graph on nodes 1, 3, 4, 5.]

Summary
Gaussian elimination error mechanisms:
– Ill-conditioning
– Numerical stability
Gaussian elimination for sparse matrices:
– Improved computational cost: factor in roughly O(N^1.5) operations (dense is O(N^3))
– Example: tridiagonal matrix factorization is O(N)
– Data structures
– Markowitz reordering to minimize fill-ins
– Graph-based approach