CS 240A: Solving Ax = b in parallel
Dense A: Gaussian elimination with partial pivoting (LU)
    Same flavor as matrix * matrix, but more complicated
Sparse A: Gaussian elimination – Cholesky, LU, etc.
    Graph algorithms
Sparse A: Iterative methods – conjugate gradient, etc.
    Sparse matrix times dense vector
Sparse A: Preconditioned iterative methods and multigrid
    Mixture of lots of things

Matrix and Graph
Edge from row i to column j for each nonzero A(i,j)
No edges for diagonal nonzeros
If A is symmetric, G(A) is an undirected graph
Symmetric permutation PAP^T renumbers the vertices
[Figure: an example sparse matrix A and its graph G(A)]

Compressed Sparse Matrix Storage
Full storage: 2-dimensional array; (nrows*ncols) memory
Sparse storage: compressed storage by columns (CSC); three 1-dimensional arrays (value, row, colstart); (2*nzs + ncols + 1) memory. CSR, stored by rows, is analogous.
[Figure: the value, row, and colstart arrays for an example matrix]
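To make the layout concrete, here is a small SciPy sketch (the example matrix is made up) that builds a CSC matrix and prints its three arrays; SciPy calls them data, indices, and indptr, corresponding to value, row, and colstart above.

import numpy as np
from scipy.sparse import csc_matrix

# A small example matrix with a handful of nonzeros (values chosen arbitrarily)
A = np.array([[4., 0., 0., 1.],
              [0., 3., 0., 0.],
              [0., 2., 5., 0.],
              [1., 0., 0., 6.]])
S = csc_matrix(A)

print(S.data)     # value:    nonzeros, stored column by column
print(S.indices)  # row:      row index of each nonzero
print(S.indptr)   # colstart: where each column begins in value/row
# Memory is about 2*nnz + ncols + 1 words, as on the slide.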

The Landscape of Ax=b Solvers
[Figure: a 2-by-2 chart with direct methods (A = LU) vs. iterative methods (y' = Ay) on one axis and nonsymmetric vs. symmetric positive definite matrices on the other; arrows labeled "More Robust", "Less Storage (if sparse)", and "More Robust, More General" indicate the tradeoffs between the quadrants.]

CS 240A: Solving Ax = b in parallel
Dense A: Gaussian elimination with partial pivoting (LU)
    See April 15 slides
    Same flavor as matrix * matrix, but more complicated
Sparse A: Gaussian elimination – Cholesky, LU, etc.
    Graph algorithms
Sparse A: Iterative methods – conjugate gradient, etc.
    Sparse matrix times dense vector
Sparse A: Preconditioned iterative methods and multigrid
    Mixture of lots of things

Gaussian elimination to solve Ax = b
For a symmetric, positive definite matrix:
    1. Matrix factorization: A = LL^T (Cholesky factorization)
    2. Forward triangular solve: Ly = b
    3. Backward triangular solve: L^T x = y
For a nonsymmetric matrix:
    1. Matrix factorization: PA = LU (partial pivoting)
    2. Forward triangular solve: Ly = Pb
    3. Backward triangular solve: Ux = y
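As a concrete illustration of the symmetric positive definite pipeline, a small NumPy/SciPy sketch (the test matrix and right-hand side are made up for the example):

import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Small SPD test matrix and right-hand side (made up for illustration)
A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
b = np.array([1., 2., 3.])

L = cholesky(A, lower=True)                 # 1. factor A = L L^T
y = solve_triangular(L, b, lower=True)      # 2. forward solve  L y = b
x = solve_triangular(L.T, y, lower=False)   # 3. backward solve L^T x = y

print(np.allclose(A @ x, b))                # True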

Sparse Column Cholesky Factorization

for j = 1 : n
    L(j:n, j) = A(j:n, j);
    for k < j with L(j, k) nonzero
        % sparse cmod(j,k)
        L(j:n, j) = L(j:n, j) – L(j, k) * L(j:n, k);
    end;
    % sparse cdiv(j)
    L(j, j) = sqrt(L(j, j));
    L(j+1:n, j) = L(j+1:n, j) / L(j, j);
end;

Column j of A becomes column j of L
[Figure: earlier columns of L updating column j of A]
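The same column-by-column (left-looking) organization, written as a dense NumPy sketch that ignores sparsity; in a real sparse code the inner loop would visit only the columns k with L(j,k) nonzero, and the columns would be stored in compressed form:

import numpy as np

def column_cholesky(A):
    """Left-looking column Cholesky: returns lower-triangular L with A = L @ L.T."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        L[j:, j] = A[j:, j]
        for k in range(j):                  # cmod(j, k): update column j by column k
            if L[j, k] != 0.0:
                L[j:, j] -= L[j, k] * L[j:, k]
        L[j, j] = np.sqrt(L[j, j])          # cdiv(j): scale column j
        L[j+1:, j] /= L[j, j]
    return L

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
L = column_cholesky(A)
print(np.allclose(L @ L.T, A))              # True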

Irregular mesh: NASA Airfoil in 2D
[Figure: the airfoil mesh]

Graphs and Sparse Matrices: Cholesky factorization
Symmetric Gaussian elimination:
    for j = 1 to n
        add edges between j's higher-numbered neighbors
Fill: new nonzeros in the factor
[Figure: the graph G(A) and the filled graph G+(A), which is chordal]

Permutations of the 2-D model problem
Theorem: With the natural permutation, the n-vertex model problem has Θ(n^{3/2}) fill. ("order exactly")
Theorem: With any permutation, the n-vertex model problem has Ω(n log n) fill. ("order at least")
Theorem: With a nested dissection permutation, the n-vertex model problem has O(n log n) fill. ("order at most")

Nested dissection ordering
A separator in a graph G is a set S of vertices whose removal leaves at least two connected components.
A nested dissection ordering for an n-vertex graph G numbers its vertices from 1 to n as follows:
    Find a separator S, whose removal leaves connected components T_1, T_2, ..., T_k
    Number the vertices of S from n-|S|+1 to n
    Recursively, number the vertices of each component: T_1 from 1 to |T_1|, T_2 from |T_1|+1 to |T_1|+|T_2|, etc.
    If a component is small enough, number it arbitrarily
It all boils down to finding good separators!
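A minimal sketch of nested dissection on the k-by-k model problem (5-point grid), using the middle grid line as a geometric separator; this is only an illustration on the regular grid, since a real code would find separators with graph-partitioning tools rather than grid coordinates:

import numpy as np

def nested_dissection(rows, cols):
    """Return grid points (r, c) in nested dissection order: components first, separator last."""
    if len(rows) <= 2 or len(cols) <= 2:                # small enough: number arbitrarily
        return [(r, c) for r in rows for c in cols]
    if len(rows) >= len(cols):                          # split along the longer dimension
        mid = rows[len(rows) // 2]                      # middle grid line is the separator S
        top = nested_dissection([r for r in rows if r < mid], cols)
        bottom = nested_dissection([r for r in rows if r > mid], cols)
        return top + bottom + [(mid, c) for c in cols]  # separator numbered last
    else:
        mid = cols[len(cols) // 2]
        left = nested_dissection(rows, [c for c in cols if c < mid])
        right = nested_dissection(rows, [c for c in cols if c > mid])
        return left + right + [(r, mid) for r in rows]

k = 7
order = nested_dissection(list(range(k)), list(range(k)))
print(len(order) == k * k)                              # every vertex is numbered exactly once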

Separators in theory
If G is a planar graph with n vertices, there exists a set of at most sqrt(6n) vertices whose removal leaves no connected component with more than 2n/3 vertices. ("Planar graphs have sqrt(n)-separators.")
"Well-shaped" finite element meshes in 3 dimensions have n^{2/3}-separators.
Also some other classes of graphs: trees, graphs of bounded genus, chordal graphs, bounded-excluded-minor graphs, ...
Mostly these theorems come with efficient algorithms, but they aren't used much in practice.

Separators in practice
Graph partitioning heuristics have been an active research area for many years, often motivated by partitioning for parallel computation. Some techniques:
    Spectral partitioning (uses eigenvectors of the Laplacian matrix of the graph)
    Geometric partitioning (for meshes with specified vertex coordinates)
    Iterative swapping (Kernighan-Lin, Fiduccia-Mattheyses)
    Breadth-first search (fast but dated)
Many popular modern codes (e.g. Metis, Chaco) use multilevel iterative swapping.
Matlab graph partitioning toolbox: see course web page.
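As one concrete instance, a minimal spectral bisection sketch: split the vertices at the median of the Fiedler vector (the eigenvector for the second-smallest eigenvalue of the graph Laplacian). The small two-triangle graph is made up for the example:

import numpy as np

def spectral_bisection(adj):
    """Split vertices into two parts using the Fiedler vector of the graph Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian L = D - A
    _, vecs = np.linalg.eigh(L)                 # eigenvectors, ascending eigenvalue order
    fiedler = vecs[:, 1]                        # second-smallest eigenvector
    return fiedler < np.median(fiedler)         # boolean labels give the two parts

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3 (made up)
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    adj[i, j] = adj[j, i] = 1.0
print(spectral_bisection(adj))                  # separates the two triangles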

Complexity of direct methods
Time and space to solve any problem on any well-shaped finite element mesh:

                     2D (n^{1/2} x n^{1/2} mesh)    3D (n^{1/3} x n^{1/3} x n^{1/3} mesh)
    Space (fill):    O(n log n)                     O(n^{4/3})
    Time (flops):    O(n^{3/2})                     O(n^2)

CS 240A: Solving Ax = b in parallel
Dense A: Gaussian elimination with partial pivoting (LU)
    See April 15 slides
    Same flavor as matrix * matrix, but more complicated
Sparse A: Gaussian elimination – Cholesky, LU, etc.
    Graph algorithms
Sparse A: Iterative methods – conjugate gradient, etc.
    Sparse matrix times dense vector
Sparse A: Preconditioned iterative methods and multigrid
    Mixture of lots of things

The Landscape of Ax=b Solvers
[Figure: the same 2-by-2 chart as before: direct (A = LU) vs. iterative (y' = Ay), nonsymmetric vs. symmetric positive definite, with arrows labeled "More Robust", "Less Storage (if sparse)", and "More Robust, More General".]

Conjugate gradient iteration

x_0 = 0,  r_0 = b,  d_0 = r_0
for k = 1, 2, 3, ...
    α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
    x_k = x_{k-1} + α_k d_{k-1}                          approx solution
    r_k = r_{k-1} – α_k A d_{k-1}                        residual
    β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})              improvement
    d_k = r_k + β_k d_{k-1}                              search direction

One matrix-vector multiplication per iteration
Two vector dot products per iteration
Four n-vectors of working storage
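The same iteration as a NumPy sketch; the SPD test matrix, tolerance, and iteration cap are made up for the example:

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain CG for SPD A: one A@d per iteration, two dot products, four work vectors."""
    x = np.zeros_like(b)
    r = b.copy()
    d = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ad = A @ d
        alpha = rr / (d @ Ad)        # step length
        x = x + alpha * d            # approximate solution
        r = r - alpha * Ad           # residual
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        beta = rr_new / rr           # improvement
        d = r + beta * d             # search direction
        rr = rr_new
    return x

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
b = np.array([1., 2., 3.])
print(np.allclose(A @ conjugate_gradient(A, b), b))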

Sparse matrix data structure (stored by rows)
Full: 2-dimensional array of real or complex numbers; (nrows*ncols) memory
Sparse: compressed row storage (CSR); about (2*nzs + nrows) memory

Distributed row sparse matrix data structure
The rows are partitioned in contiguous blocks among processors P_0, P_1, ..., P_{p-1}.
Each processor stores:
    # of local nonzeros
    range of local rows
    nonzeros in CSR form

Matrix-vector product: parallel implementation
Lay out matrix and vectors by rows
    y(i) = sum(A(i,j)*x(j)); skip terms with A(i,j) = 0
Algorithm: each processor i:
    Broadcast x(i)
    Compute y(i) = A(i,:)*x
Optimizations: reduce communication by
    sending only as much of x as necessary to each processor
    reordering the matrix for better locality by graph partitioning
[Figure: x, y, and the matrix rows laid out across processors P0 through P3]
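A minimal mpi4py sketch of the row-distributed product. The 1-D Poisson-like matrix and problem size are made up, and the allgather of all of x is the unoptimized version of "broadcast x(i)"; a tuned code would send each processor only the pieces of x it actually needs:

from mpi4py import MPI
import numpy as np
from scipy.sparse import csr_matrix

comm = MPI.COMM_WORLD
rank, p = comm.Get_rank(), comm.Get_size()

n = 8                                             # global dimension (made up for the example)
rows = np.array_split(np.arange(n), p)[rank]      # this processor's block of rows

# Local rows of a 1-D Poisson-like matrix, stored in CSR form
A_local = np.zeros((len(rows), n))
for i, r in enumerate(rows):
    A_local[i, r] = 2.0
    if r > 0:
        A_local[i, r - 1] = -1.0
    if r < n - 1:
        A_local[i, r + 1] = -1.0
A_local = csr_matrix(A_local)

x_local = np.ones(len(rows))                      # this processor's piece of x
x = np.concatenate(comm.allgather(x_local))       # "broadcast x(i)": everyone assembles all of x
y_local = A_local @ x                             # compute y(i) = A(i,:) * x for the local rows

# Run with, e.g.:  mpiexec -n 4 python spmv_sketch.py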

Sparse Matrix-Vector Multiplication

CS 240A: Solving Ax = b in parallel
Dense A: Gaussian elimination with partial pivoting (LU)
    See April 15 slides
    Same flavor as matrix * matrix, but more complicated
Sparse A: Gaussian elimination – Cholesky, LU, etc.
    Graph algorithms
Sparse A: Iterative methods – conjugate gradient, etc.
    Sparse matrix times dense vector
Sparse A: Preconditioned iterative methods and multigrid
    Mixture of lots of things

Conjugate gradient: Convergence
In exact arithmetic, CG converges in n steps (completely unrealistic!!)
Accuracy after k steps of CG is related to:
    consider polynomials of degree k that are equal to 1 at 0
    how small can such a polynomial be at all the eigenvalues of A?
Thus, eigenvalues close together are good.
Condition number: κ(A) = ||A||_2 ||A^{-1}||_2 = λ_max(A) / λ_min(A)
The residual is reduced by a constant factor by O(sqrt(κ(A))) iterations of CG.

Preconditioners
Suppose you had a matrix B such that:
    1. the condition number κ(B^{-1}A) is small
    2. By = z is easy to solve
Then you could solve (B^{-1}A)x = B^{-1}b instead of Ax = b.
Each iteration of CG multiplies a vector by B^{-1}A:
    first multiply by A
    then solve a system with B

Preconditioned conjugate gradient iteration

x_0 = 0,  r_0 = b,  d_0 = B^{-1} r_0,  y_0 = B^{-1} r_0
for k = 1, 2, 3, ...
    α_k = (y_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
    x_k = x_{k-1} + α_k d_{k-1}                          approx solution
    r_k = r_{k-1} – α_k A d_{k-1}                        residual
    y_k = B^{-1} r_k                                     preconditioning solve
    β_k = (y_k^T r_k) / (y_{k-1}^T r_{k-1})              improvement
    d_k = y_k + β_k d_{k-1}                              search direction

One matrix-vector multiplication per iteration
One solve with the preconditioner per iteration
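The same iteration as a NumPy sketch, using a diagonal (Jacobi) preconditioner B = diag(A) as a stand-in for whatever B the application provides; the test matrix is the same made-up SPD example as before:

import numpy as np

def preconditioned_cg(A, b, solve_B, tol=1e-10, max_iter=1000):
    """PCG: solve_B(r) applies B^{-1} to r; one A@d and one B-solve per iteration."""
    x = np.zeros_like(b)
    r = b.copy()
    y = solve_B(r)
    d = y.copy()
    ry = r @ y
    for _ in range(max_iter):
        Ad = A @ d
        alpha = ry / (d @ Ad)        # step length
        x = x + alpha * d            # approximate solution
        r = r - alpha * Ad           # residual
        if np.linalg.norm(r) < tol:
            break
        y = solve_B(r)               # preconditioning solve
        ry_new = r @ y
        beta = ry_new / ry           # improvement
        d = y + beta * d             # search direction
        ry = ry_new
    return x

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
b = np.array([1., 2., 3.])
jacobi = lambda r: r / np.diag(A)    # B = diag(A), so B^{-1} r is a pointwise divide
print(np.allclose(A @ preconditioned_cg(A, b, jacobi), b))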

Choosing a good preconditioner
Suppose you had a matrix B such that:
    1. the condition number κ(B^{-1}A) is small
    2. By = z is easy to solve
Then you could solve (B^{-1}A)x = B^{-1}b instead of Ax = b.
B = A is great for (1), not for (2)
B = I is great for (2), not for (1)
Domain-specific approximations sometimes work
B = diagonal of A sometimes works
Better: blend in some direct-methods ideas...

Incomplete Cholesky factorization (IC, ILU)
Compute factors of A by Gaussian elimination, but ignore fill
Preconditioner B = R^T R ≈ A, not formed explicitly
Compute B^{-1}z by triangular solves (in time nnz(A))
Total storage is O(nnz(A)), static data structure
Either symmetric (IC) or nonsymmetric (ILU)
[Figure: R^T times R as an approximation to A]

Incomplete Cholesky and ILU: Variants
Allow one or more "levels of fill"
    unpredictable storage requirements
Allow fill whose magnitude exceeds a "drop tolerance"
    may get better approximate factors than levels of fill
    unpredictable storage requirements
    choice of tolerance is ad hoc
Partial pivoting (for nonsymmetric A)
"Modified ILU" (MIC): add dropped fill to the diagonal of U or R
    A and R^T R have the same row sums
    good in some PDE contexts
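SciPy does not ship an incomplete Cholesky, but its incomplete LU (spilu) illustrates the same ideas, with drop_tol and fill_factor playing the role of the drop-tolerance and fill knobs above. A hedged sketch using it as a preconditioner; the 1-D Poisson-like test matrix and parameter values are made up, and GMRES is used rather than CG since the ILU factors are not symmetric:

import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import spilu, LinearOperator, gmres

n = 100
A = csc_matrix(diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)))  # made-up test matrix
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors of A
M = LinearOperator((n, n), matvec=ilu.solve)    # apply B^{-1} z via the triangular solves

x, info = gmres(A, b, M=M, atol=1e-8)
print(info == 0, np.linalg.norm(A @ x - b))     # converged flag and final residual norm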

Incomplete Cholesky and ILU: Issues
Choice of parameters
    good: smooth transition from iterative to direct methods
    bad: very ad hoc, problem-dependent
    tradeoff: time per iteration (more fill => more time) vs. number of iterations (more fill => fewer iterations)
Effectiveness
    condition number usually improves (only) by a constant factor (except MIC for some problems from PDEs)
    still, often good when tuned for a particular class of problems
Parallelism
    triangular solves are not very parallel
    reordering for parallel triangular solve by graph coloring

Coloring for parallel nonsymmetric preconditioning [Aggarwal, Gibou, G]
Level set method for multiphase interface problems in 3D
Nonsymmetric-structure, second-order-accurate octree discretization
BiCGSTAB preconditioned by parallel triangular solves
263 million DOF

Sparse approximate inverses
Compute B^{-1} ≈ A^{-1} explicitly
Minimize ||B^{-1}A – I||_F (in parallel, by columns)
Variants: factored form of B^{-1}, more fill, ...
Good: very parallel
Bad: effectiveness varies widely
[Figure: A and its sparse approximate inverse B^{-1}]
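A minimal serial sketch of the idea, computing a right approximate inverse M ≈ A^{-1} column by column with the sparsity pattern of A (a simplification: a real SPAI code adapts the pattern, works on sparse data structures, and does the column least-squares problems in parallel):

import numpy as np

def sparse_approximate_inverse(A):
    """For each column j, minimize ||A m_j - e_j|| over entries in the pattern of A(:, j)."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        cols_J = np.nonzero(A[:, j])[0]                       # allowed nonzeros in column j of M
        rows_I = np.nonzero(np.abs(A[:, cols_J]).sum(axis=1))[0]  # rows of A touched by those columns
        e = np.zeros(len(rows_I))
        e[rows_I == j] = 1.0                                  # restriction of e_j to rows_I
        m, *_ = np.linalg.lstsq(A[np.ix_(rows_I, cols_J)], e, rcond=None)
        M[cols_J, j] = m
    return M

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
M = sparse_approximate_inverse(A)
print(np.linalg.norm(A @ M - np.eye(3)))          # Frobenius residual of the column-by-column fit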

Other Krylov subspace methods
Nonsymmetric linear systems:
    GMRES: for i = 1, 2, 3, ... find x_i ∈ K_i(A, b) such that r_i = (A x_i – b) ⊥ K_i(A, b)
        But no short recurrence => save old vectors => lots more space
        (Usually "restarted" every k iterations to use less space.)
    BiCGStab, QMR, etc.: two spaces K_i(A, b) and K_i(A^T, b) with mutually orthogonal bases
        Short recurrences => O(n) space, but less robust
    Convergence and preconditioning more delicate than CG
    Active area of current research
Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric)
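For example, SciPy's gmres exposes the restart length directly; a small sketch on a made-up nonsymmetric tridiagonal system (the matrix, size, and restart value are chosen only for illustration):

import numpy as np
from scipy.sparse import diags, csr_matrix
from scipy.sparse.linalg import gmres

n = 100
# Nonsymmetric test matrix: diffusion plus a convection-like skew term (made up)
A = csr_matrix(diags([-1.5, 4, -0.5], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

x, info = gmres(A, b, restart=20, atol=1e-8)      # restart every 20 iterations to bound storage
print(info == 0, np.linalg.norm(A @ x - b))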

Multigrid
For a PDE on a fine mesh, precondition using a solution on a coarser mesh
Use the idea recursively on a hierarchy of meshes
Solves the model problem (Poisson's equation) in linear time!
Often useful when a hierarchy of meshes can be built
Hard to parallelize coarse meshes well
This is just the intuition – lots of theory and technology
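A minimal two-grid sketch for the 1-D model problem, with weighted Jacobi smoothing, full-weighting restriction, and linear interpolation; these are textbook defaults chosen for the example, not anything prescribed by the slide:

import numpy as np

def poisson1d(m, h):
    """1-D Poisson matrix (-u'' = f) on m interior points with mesh width h."""
    return (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
            + np.diag(-np.ones(m - 1), -1)) / h**2

def restrict(r):
    """Full weighting: fine-grid residual (length 2*mc+1) to the coarse grid (length mc)."""
    return 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e_c, m):
    """Linear interpolation: coarse-grid correction back to the fine grid (length m)."""
    e = np.zeros(m)
    e[1::2] = e_c                                    # coarse points coincide with odd fine points
    e[0::2] = 0.5 * (np.concatenate(([0.0], e_c)) + np.concatenate((e_c, [0.0])))
    return e

def jacobi(A, x, b, sweeps=3, omega=2.0 / 3.0):
    """A few weighted Jacobi smoothing sweeps."""
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, Ac, x, b):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    x = jacobi(A, x, b)
    r = b - A @ x
    e_c = np.linalg.solve(Ac, restrict(r))           # solve the coarse problem exactly
    x = x + prolong(e_c, len(b))
    return jacobi(A, x, b)

m = 2**6 - 1                                         # fine grid: 63 interior points
mc = (m - 1) // 2                                    # coarse grid: 31 interior points
h = 1.0 / (m + 1)
A, Ac = poisson1d(m, h), poisson1d(mc, 2 * h)
b = np.ones(m)
x = np.zeros(m)
for _ in range(10):
    x = two_grid(A, Ac, x, b)
print(np.linalg.norm(b - A @ x))                     # residual drops rapidly per cycle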

Complexity of linear solvers
Time to solve the model problem (Poisson's equation) on a regular mesh:

                              2D (n^{1/2} x n^{1/2} mesh)    3D (n^{1/3} x n^{1/3} x n^{1/3} mesh)
    Sparse Cholesky:          O(n^{1.5})                     O(n^2)
    CG, exact arithmetic:     O(n^2)                         O(n^2)
    CG, no precond:           O(n^{1.5})                     O(n^{1.33})
    CG, modified IC:          O(n^{1.25})                    O(n^{1.17})
    CG, support trees:        O(n^{1.20}) -> O(n^{1+ε})      O(n^{1.75}) -> O(n^{1+ε})
    Multigrid:                O(n)                           O(n)

Complexity of direct methods
Time and space to solve any problem on any well-shaped finite element mesh:

                     2D (n^{1/2} x n^{1/2} mesh)    3D (n^{1/3} x n^{1/3} x n^{1/3} mesh)
    Space (fill):    O(n log n)                     O(n^{4/3})
    Time (flops):    O(n^{3/2})                     O(n^2)