
1 Direct Methods for Sparse Linear Systems
Lecture 4
Alessandra Nardi
Thanks to Prof. Jacob White, Suvranu De, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy

2 Last lecture review
Solution of systems of linear equations
–Existence and uniqueness
Gaussian elimination basics
–LU factorization
–Pivoting

3 Outline
Error mechanisms
Sparse matrices
–Why are they nice
–How do we store them
–How can we exploit and preserve sparsity

4 Error Mechanisms
Round-off error
–Pivoting helps
Ill-conditioning (almost singular matrix)
–Bad luck: a property of the matrix
–Pivoting does not help
Numerical stability of the method

5 Ill-Conditioning: Norms
Norms are useful to discuss error in numerical problems.
Norm: $\|\cdot\|$ assigns a nonnegative scalar to each vector, with $\|x\| = 0$ iff $x = 0$, $\|\alpha x\| = |\alpha|\,\|x\|$, and $\|x + y\| \le \|x\| + \|y\|$.

6 Ill-Conditioning: Vector Norms
$L_2$ (Euclidean) norm: $\|x\|_2 = \big(\sum_i |x_i|^2\big)^{1/2}$
$L_1$ norm: $\|x\|_1 = \sum_i |x_i|$
$L_\infty$ norm: $\|x\|_\infty = \max_i |x_i|$
(Figure: the unit balls of these norms, e.g. the $L_2$ unit circle and the $L_\infty$ unit square.)
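
A quick numerical check of the three norms (a minimal sketch using NumPy; the vector is an arbitrary example):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

# L2 (Euclidean) norm: square root of the sum of squared magnitudes
l2 = np.sqrt(np.sum(np.abs(x) ** 2))   # same as np.linalg.norm(x, 2)
# L1 norm: sum of absolute values
l1 = np.sum(np.abs(x))                 # same as np.linalg.norm(x, 1)
# L-infinity norm: largest absolute entry
linf = np.max(np.abs(x))               # same as np.linalg.norm(x, np.inf)

print(l2, l1, linf)                    # 5.099..., 8.0, 4.0
```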

7 Ill-Conditioning: Matrix Norms
Vector-induced norm: $\|A\| = \max_{x \neq 0} \|Ax\| / \|x\|$, the maximum "magnification" of $x$ by $A$.
–$\|A\|_1$ = max abs column sum
–$\|A\|_\infty$ = max abs row sum
–$\|A\|_2$ = (largest eigenvalue of $A^T A$)$^{1/2}$

8 Ill-Conditioning: Matrix Norms (continued)
More properties of the matrix norm: $\|Ax\| \le \|A\|\,\|x\|$ and $\|AB\| \le \|A\|\,\|B\|$.
Condition number: $\kappa(A) = \|A\|\,\|A^{-1}\|$
–It can be shown that $\kappa(A) \ge 1$.
–Large $\kappa(A)$ means the matrix is almost singular (ill-conditioned).
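
To make the definition concrete, here is a minimal NumPy sketch that evaluates $\kappa(A)$ from the definition and compares it with the library routine (the matrix is an arbitrary nearly singular example):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular: rows almost aligned

# Condition number from the definition, in the 2-norm
kappa_def = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
# Library routine for comparison
kappa_lib = np.linalg.cond(A, 2)

print(kappa_def, kappa_lib)      # both about 4e4: A is ill-conditioned
```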

9 Ill-Conditioning: Perturbation Analysis
What happens if we perturb $b$? From $Mx = b$ and $M(x + \delta x) = b + \delta b$:
$\frac{\|\delta x\|}{\|x\|} \le \kappa(M)\,\frac{\|\delta b\|}{\|b\|}$
Large $\kappa(M)$ is bad.

10 Ill-Conditioning: Perturbation Analysis
What happens if we perturb $M$? From $(M + \delta M)(x + \delta x) = b$:
$\frac{\|\delta x\|}{\|x + \delta x\|} \le \kappa(M)\,\frac{\|\delta M\|}{\|M\|}$
Large $\kappa(M)$ is bad.
Bottom line: if the matrix is ill-conditioned, round-off gets you into trouble.
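
The bound is easy to observe numerically. A minimal sketch (the nearly singular 2-by-2 system and the perturbation direction are chosen by hand for illustration):

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(M, b)            # solution is [1, 1]

db = np.array([1e-4, 0.0])           # tiny perturbation of the right-hand side
x_pert = np.linalg.solve(M, b + db)

rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
print(rel_in, rel_out, np.linalg.cond(M))
# The output error is roughly 30000x the input error, consistent with
# ||dx||/||x|| <= cond(M) * ||db||/||b||  (cond(M) is about 4e4 here).
```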

11 Ill-Conditioning: Perturbation Analysis
A geometric approach is more intuitive: view $Mx = b$ as writing $b$ as a combination of the columns of $M$.
–When the columns are orthogonal, the split is well determined.
–When the columns are nearly aligned, it is hard to decide how much of one column versus how much of the other makes up $b$, so small perturbations move the solution a lot.

12 Numerical Stability
Rounding errors may accumulate and propagate unstably in a bad algorithm.
It can be proven that for Gaussian elimination the accumulated error is bounded.

13 Summary on Error Mechanisms for GE
Rounding: due to the machine's finite precision, the solution has some error even if the algorithm is perfect.
–Pivoting helps to reduce it.
Matrix conditioning
–If the matrix is "good" (well-conditioned), complete pivoting solves any round-off problem.
–If the matrix is "bad" (almost singular), there is nothing to do.
Numerical stability: how rounding errors accumulate.
–GE is stable.

14 LU – Computational Complexity
Computational complexity: O(n^3) for an n x n matrix M.
We cannot afford this complexity.
Exploit the natural sparsity that occurs in circuit equations.
–Sparsity: many zero elements.
–A matrix is sparse when it is advantageous to exploit its sparsity.
Exploiting sparsity: O(n^1.1) to O(n^1.5).

15 LU – Goals of Exploiting Sparsity
(1) Avoid storing zero entries.
–Memory usage reduction.
–Decomposition is faster since you do not need to access them (but the data structure is more complicated).
(2) Avoid trivial operations.
–Multiplication by zero.
–Addition with zero.
(3) Avoid losing sparsity.
A sparse-library sketch illustrating these goals follows this list.
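
These goals are exactly what sparse-matrix libraries implement. A minimal sketch using SciPy's sparse LU (assuming SciPy is available; the tridiagonal test matrix is an arbitrary example):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Tridiagonal test matrix: only about 3n of the n^2 entries are stored
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

lu = spla.splu(A)             # sparse LU factorization
x = lu.solve(np.ones(n))      # solve A x = 1 using the factors

print(A.nnz, "stored entries instead of", n * n)
```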

16 Sparse Matrices – Resistor Line, Tridiagonal Case
(Figure: a line of resistors; its nodal matrix is tridiagonal.)

17 GE Algorithm – Tridiagonal Example

For i = 1 to n-1 {             "For each Row"
    For j = i+1 to n {         "For each target Row below the source"
        For k = i+1 to n {     "For each Row element beyond Pivot"
            M(j,k) = M(j,k) - (M(j,i) / M(i,i)) * M(i,k)
        }
    }
}

Pivot: M(i,i). Multiplier: M(j,i) / M(i,i).
For a tridiagonal matrix only j = i+1 and k = i+1 do any work: Order N operations!
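
For the tridiagonal case the triple loop collapses to a single pass over the diagonals. A minimal sketch in Python (no pivoting; the array names a/b/c for the sub-, main, and super-diagonal are ours):

```python
import numpy as np

def tridiag_solve(a, b, c, rhs):
    """Solve a tridiagonal system in O(n): a = sub-, b = main, c = super-diagonal.
    Plain Gaussian elimination without pivoting (the Thomas algorithm form)."""
    n = len(b)
    b = b.astype(float).copy()
    rhs = rhs.astype(float).copy()
    for i in range(n - 1):
        m = a[i] / b[i]            # multiplier: only one target row below the pivot
        b[i + 1] -= m * c[i]       # only one element beyond the pivot is updated
        rhs[i + 1] -= m * rhs[i]
    x = np.empty(n)
    x[-1] = rhs[-1] / b[-1]
    for i in range(n - 2, -1, -1): # back substitution, also O(n)
        x[i] = (rhs[i] - c[i] * x[i + 1]) / b[i]
    return x

# 4x4 resistor-line example with -1/2/-1 diagonals
a = np.array([-1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0])
print(tridiag_solve(a, b, c, np.ones(4)))  # [2. 3. 3. 2.]
```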

18 Sparse Matrices – Fill-in – Example 1
A symmetric, diagonally dominant nodal matrix.
(Figure: the matrix with its zero and nonzero entries.)

19 Sparse Matrices – Fill-in – Example 1
(Figure: the nonzero structure of the matrix, X = nonzero entry, and the structure after one LU step; the elimination step creates new nonzeros, i.e. fill-ins.)

20 Sparse Matrices – Fill-in – Example 2
Fill-ins propagate: fill-ins from step 1 result in fill-ins in step 2.
(Figure: the nonzero pattern across two elimination steps.)

21 Sparse Matrices – Fill-in & Reordering
Node reordering can reduce fill-in.
–Preserves properties (symmetry, diagonal dominance).
–Equivalent to swapping rows and columns.
(Figure: the same matrix under two node orderings, one producing fill-ins and one producing none; a code sketch of this effect follows.)
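
The classic illustration is an "arrow" matrix: ordered with its dense row and column first, elimination fills the whole matrix; ordered with them last, there is no fill at all. A minimal sketch that simulates elimination on a boolean nonzero pattern (the helper name count_fill is ours):

```python
import numpy as np

def count_fill(pattern):
    """Simulate GE on a boolean nonzero pattern and count fill-ins."""
    P = pattern.copy()
    n = P.shape[0]
    fill = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            if P[j, i]:                      # row j is a target row
                for k in range(i + 1, n):
                    if P[i, k] and not P[j, k]:
                        P[j, k] = True       # new nonzero: a fill-in
                        fill += 1
    return fill

n = 6
arrow = np.eye(n, dtype=bool)
arrow[0, :] = True   # dense first row
arrow[:, 0] = True   # dense first column

reversed_arrow = arrow[::-1, ::-1]           # same matrix, nodes numbered backwards
print(count_fill(arrow), count_fill(reversed_arrow))  # (n-1)*(n-2) = 20 vs 0
```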

22 Exploiting and Maintaining Sparsity
Criteria for exploiting sparsity:
–Minimum number of ops.
–Minimum number of fill-ins.
Pivoting to maintain sparsity is an NP-complete problem, so heuristics are used: Markowitz, Berry, Hsieh and Ghausi, Nakhla and Singhal and Vlach.
–Typical choice: Markowitz (about 5% more fill-ins, but faster).
Pivoting for accuracy may conflict with pivoting for sparsity.

23 Sparse Matrices – Fill-in & Reordering
Where can fill-in occur? Where a row holding a multiplier crosses a column in which the pivot row is nonzero.
(Figure: the already-factored part, the multipliers, and the possible fill-in locations.)
Fill-in estimate = (nonzeros in the unfactored part of the row - 1) x (nonzeros in the unfactored part of the column - 1). This is the Markowitz product.
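
As a concrete check, the Markowitz product of every candidate diagonal pivot can be read directly off a boolean nonzero pattern (a minimal sketch; the 4-by-4 pattern is an arbitrary example):

```python
import numpy as np

P = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=bool)  # nonzero pattern, nothing factored yet

# Markowitz product of diagonal (d, d):
# (nonzeros in its row - 1) * (nonzeros in its column - 1)
for d in range(P.shape[0]):
    prod = (P[d, :].sum() - 1) * (P[:, d].sum() - 1)
    print(f"pivot ({d},{d}): Markowitz product = {prod}")
# the smallest product bounds the fill this elimination step can create
```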

24 Sparse Matrices – Fill-in & Reordering
Markowitz reordering (diagonal pivoting): at each elimination step, choose as pivot the unfactored diagonal with the smallest Markowitz product, then eliminate it.
A greedy algorithm, but close to optimal!
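
A minimal sketch of that greedy loop on a boolean pattern (diagonal pivoting only; the function name markowitz_order and the arrow-shaped test pattern are ours):

```python
import numpy as np

def markowitz_order(pattern):
    """Greedy diagonal-pivot ordering: at each step pick the unfactored
    diagonal with the smallest Markowitz product, then update the pattern."""
    P = pattern.copy()
    remaining = list(range(P.shape[0]))
    order = []
    while remaining:
        # Markowitz product over the unfactored rows/columns only
        best = min(remaining, key=lambda d:
                   (sum(P[d, k] for k in remaining) - 1) *
                   (sum(P[k, d] for k in remaining) - 1))
        order.append(best)
        # Eliminating 'best' ties its row/column neighbours together (fill)
        rows = [j for j in remaining if j != best and P[j, best]]
        cols = [k for k in remaining if k != best and P[best, k]]
        for j in rows:
            for k in cols:
                P[j, k] = True
        remaining.remove(best)
    return order

P = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1]], dtype=bool)  # arrow pattern, node 0 dense
print(markowitz_order(P))  # [1, 2, 0, 3]: the dense node is deferred
```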

25 Sparse Matrices – Fill-in & Reordering
Why try only diagonals?
–Corresponds to node reordering in the nodal formulation.
–Reduces search cost.
–Preserves matrix properties: diagonal dominance, symmetry.
(Figure: two numberings of the same nodal circuit.)

26 Sparse Matrices – Fill-in & Reordering
Pattern of a filled-in matrix.
(Figure: the factored pattern, very sparse in the leading part and dense in the trailing block.)

27 Sparse Matrices – Fill-in & Reordering
(Figure: nonzero pattern of an unfactored random matrix.)

28 Sparse Matrices – Fill-in & Reordering
(Figure: nonzero pattern of the factored random matrix, showing the fill-in.)

29 Sparse Matrices – Data Structure
There are several ways of storing a sparse matrix in a compact form.
Trade-off:
–Storage amount.
–Cost of data accessing and update procedures.
An efficient data structure: the linked list.

30 Sparse Matrices – Data Structure 1
Orthogonal linked list: each nonzero stores its value plus links to the next nonzero in its row and the next nonzero in its column.
(Figure: the linked structure.)

31 Sparse Matrices – Data Structure 2
Arrays of data in a row: row i stores its matrix entries (Val i1, Val i2, ..., Val ik) in one array and their column indices (Col i1, Col i2, ..., Col ik) in a parallel array; a vector of row pointers (rows 1 to N) locates each row's arrays.
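
A minimal sketch of this scheme in Python (parallel value and column-index arrays plus a row-pointer vector, essentially compressed sparse row storage; the names are ours):

```python
import numpy as np

# The 3x3 matrix  [[2, 0, 1],
#                  [0, 3, 0],
#                  [4, 0, 5]]
val     = np.array([2.0, 1.0, 3.0, 4.0, 5.0])  # nonzero values, row by row
col     = np.array([0,   2,   1,   0,   2  ])  # column index of each value
row_ptr = np.array([0, 2, 3, 5])   # row i occupies val[row_ptr[i]:row_ptr[i+1]]

def matvec(val, col, row_ptr, x):
    """y = A @ x using only the stored nonzeros."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for p in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += val[p] * x[col[p]]
    return y

print(matvec(val, col, row_ptr, np.array([1.0, 1.0, 1.0])))  # [3. 3. 9.]
```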

32 Sparse Matrices – Data Structure: Problem of Misses
Eliminating source row i from target row j:
–We must read all the row j entries to find the few that match row i's nonzeros (three of them, in the slide's example).
–Every miss is an unneeded memory reference (expensive!).
–Could have more misses than ops!

33 Sparse Matrices – Data Structure: Scattering for Miss Avoidance
1) Read all the elements in row j and scatter them into an n-length vector.
2) Access only the needed elements using array indexing!
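
A minimal sketch of one such row update (rows held as (column, value) pairs as in the data structure above; the concrete entries and the multiplier are arbitrary examples):

```python
import numpy as np

n = 6
# Source row i and target row j, stored as sparse (column, value) pairs
row_i = [(2, 1.0), (4, -2.0)]            # pivot-row entries beyond the pivot
row_j = [(0, 3.0), (2, 5.0), (5, 7.0)]
mult = 0.5                               # multiplier M(j,i) / M(i,i)

# 1) Scatter row j into a full-length working vector
work = np.zeros(n)
for c, v in row_j:
    work[c] = v

# 2) Update only where row i has entries: direct indexing, no searching
for c, v in row_i:
    work[c] -= mult * v                  # may create a fill-in (column 4 here)

# Gather the result back into sparse form
row_j = [(c, work[c]) for c in range(n) if work[c] != 0.0]
print(row_j)   # [(0, 3.0), (2, 4.5), (4, 1.0), (5, 7.0)]
```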

34 Sparse Matrices – Graph Approach
Structurally symmetric matrices and graphs:
–One node per matrix row.
–One edge per off-diagonal nonzero pair.
(Figure: a 5-by-5 nonzero pattern, X = nonzero, and its graph on nodes 1 through 5.)

35 Sparse Matrices – Graph Approach: Markowitz Products
With diagonal pivoting on a structurally symmetric matrix, the Markowitz product of a node is (node degree)^2.
(Figure: the 5-node example with each node's Markowitz product.)

36 Sparse Matrices – Graph Approach: Factorization
One step of LU factorization:
–Delete the node associated with the pivot row.
–"Tie together" the graph edges: connect the deleted node's neighbors to each other; each new edge corresponds to a fill-in.
(Figure: the 5-node example before and after one elimination step.)
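
A minimal sketch of one elimination step on an adjacency-set graph (the function name eliminate and the star-graph example are ours):

```python
def eliminate(adj, v):
    """One step of symbolic LU on the graph: remove node v and connect
    all of its neighbours pairwise; each new edge is a fill-in."""
    nbrs = adj.pop(v)
    for a in nbrs:
        adj[a].discard(v)
    fill = 0
    for a in nbrs:
        for b in nbrs:
            if a < b and b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                fill += 1
    return fill

# Star graph: node 0 connected to nodes 1..3
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(eliminate(adj, 0))   # 3 fill-ins: eliminating the hub creates a triangle
print(adj)                 # {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
```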

37 Sparse Matrices – Graph Approach: Example
(Figure: an example graph with the Markowitz product, determined by the node degree, listed for each of nodes 1 through 5.)

38 Sparse Matrices – Graph Approach: Example
Swap 2 with 1, i.e. node 2 becomes the first pivot.
(Figure: the reduced graph on nodes 1, 3, 4, 5 after eliminating node 2.)

39 Summary
Gaussian elimination error mechanisms:
–Ill-conditioning.
–Numerical stability.
Gaussian elimination for sparse matrices:
–Improved computational cost: factor in O(N^1.5) operations (dense is O(N^3)).
–Example: tridiagonal matrix factorization in O(N).
–Data structures.
–Markowitz reordering to minimize fill-ins.
–Graph-based approach.

