CS 290H Administrivia: May 14, 2008 Course project progress reports due next Wed 21 May. Reading in Saad (second edition): Sections 13.1-13.5.

CS 290H Administrivia: May 14, 2008
Course project progress reports due next Wed 21 May.
Reading in Saad (second edition): Sections 13.1-13.5.

Conjugate gradient iteration
x_0 = 0,  r_0 = b,  d_0 = r_0
for k = 1, 2, 3, . . .
    α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
    x_k = x_{k-1} + α_k d_{k-1}                          approx solution
    r_k = r_{k-1} – α_k A d_{k-1}                        residual
    β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})              improvement
    d_k = r_k + β_k d_{k-1}                              search direction
One matrix-vector multiplication per iteration
Two vector dot products per iteration
Four n-vectors of working storage
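Below is a minimal NumPy sketch of this iteration, just to make the recurrences concrete; the function name, tolerance, and iteration cap are illustrative choices, not part of the lecture.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, maxiter=1000):
    """Unpreconditioned CG for symmetric positive definite A (illustrative sketch)."""
    x = np.zeros_like(b, dtype=float)      # x_0 = 0
    r = b.astype(float).copy()             # r_0 = b
    d = r.copy()                           # d_0 = r_0
    rr = r @ r
    for k in range(1, maxiter + 1):
        Ad = A @ d                         # the one matrix-vector product per iteration
        alpha = rr / (d @ Ad)              # step length
        x += alpha * d                     # approximate solution
        r -= alpha * Ad                    # residual
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        beta = rr_new / rr                 # improvement
        d = r + beta * d                   # search direction
        rr = rr_new
    return x
```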

Preconditioned conjugate gradient iteration
x_0 = 0,  r_0 = b,  d_0 = B^{-1} r_0,  y_0 = B^{-1} r_0
for k = 1, 2, 3, . . .
    α_k = (y_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
    x_k = x_{k-1} + α_k d_{k-1}                          approx solution
    r_k = r_{k-1} – α_k A d_{k-1}                        residual
    y_k = B^{-1} r_k                                     preconditioning solve
    β_k = (y_k^T r_k) / (y_{k-1}^T r_{k-1})              improvement
    d_k = y_k + β_k d_{k-1}                              search direction
One matrix-vector multiplication per iteration
One solve with preconditioner per iteration
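A sketch of the same loop with a generic preconditioner hook; prec_solve is a placeholder for whatever applies B^{-1} (for example, the incomplete-factor triangular solves on the next slides).

```python
import numpy as np

def preconditioned_cg(A, b, prec_solve, tol=1e-8, maxiter=1000):
    """PCG sketch; prec_solve(r) must return B^{-1} r for the preconditioner B."""
    x = np.zeros_like(b, dtype=float)      # x_0 = 0
    r = b.astype(float).copy()             # r_0 = b
    y = prec_solve(r)                      # y_0 = B^{-1} r_0
    d = y.copy()                           # d_0 = B^{-1} r_0
    ry = r @ y
    for k in range(1, maxiter + 1):
        Ad = A @ d                         # one matrix-vector product
        alpha = ry / (d @ Ad)              # step length
        x += alpha * d                     # approximate solution
        r -= alpha * Ad                    # residual
        if np.linalg.norm(r) < tol:
            break
        y = prec_solve(r)                  # one preconditioner solve
        ry_new = r @ y
        beta = ry_new / ry                 # improvement
        d = y + beta * d                   # search direction
        ry = ry_new
    return x

# Example: the identity preconditioner reduces this to plain CG.
# x = preconditioned_cg(A, b, prec_solve=lambda r: r)
```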

Incomplete Cholesky factorization (IC, ILU)
Compute factors of A by Gaussian elimination, but ignore fill
Preconditioner B = R^T R ≈ A, not formed explicitly
Compute B^{-1} z by triangular solves (in time nnz(A))
Total storage is O(nnz(A)), static data structure
Either symmetric (IC) or nonsymmetric (ILU)
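One way to experiment with this in SciPy (a sketch, not the lecture's code): scipy.sparse.linalg.spilu computes a threshold-style incomplete LU rather than a strict no-fill IC, but applying it inside CG through its two sparse triangular solves illustrates the same idea. The 1-D Poisson matrix here is just a placeholder problem.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, LinearOperator, cg

# Placeholder SPD test matrix: 1-D Poisson (tridiagonal).
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete factorization; drop_tol / fill_factor control how much fill is kept.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)

# Applying B^{-1} z is two sparse triangular solves, wrapped as a LinearOperator.
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))     # info == 0 means CG converged
```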

Incomplete Cholesky and ILU: Variants
Allow one or more “levels of fill”
    unpredictable storage requirements
Allow fill whose magnitude exceeds a “drop tolerance”
    may get better approximate factors than levels of fill
    unpredictable storage requirements
    choice of tolerance is ad hoc
Partial pivoting (for nonsymmetric A)
“Modified ILU” (MIC): add dropped fill to the diagonal of U or R
    A and R^T R have the same row sums
    good in some PDE contexts

Matrix permutations for iterative methods
Symmetric matrix permutations don’t change the convergence of unpreconditioned CG
Symmetric matrix permutations affect the quality of an incomplete factorization – poorly understood, controversial
    Often banded (RCM) is best for IC(0) / ILU(0)
    Often minimum degree and nested dissection are bad for no-fill incomplete factorization, but good if some fill is allowed
Some experiments with orderings that use matrix values
    e.g. “minimum discarded fill” order
    sometimes effective, but expensive to compute
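As a concrete illustration of the banded (RCM) option, here is a small SciPy sketch; the 2-D model-problem matrix is a stand-in, and reverse_cuthill_mckee is SciPy's RCM routine.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Placeholder: 5-point Laplacian on a k-by-k grid in the natural ordering.
k = 20
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(k, k))
A = (kron(identity(k), T) + kron(T, identity(k))).tocsr()

# Reverse Cuthill-McKee gives a bandwidth-reducing symmetric permutation.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm, :][:, perm]                # P A P^T

# The permutation changes the bandwidth (and hence IC(0)/ILU(0) quality), not nnz.
print(A.nnz == A_rcm.nnz)
```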

Nonsymmetric matrix permutations for iterative methods
Dynamic permutation: ILU with row or column pivoting
    E.g. ILUTP (Saad), Matlab “luinc”
    More robust but more expensive than ILUT
Static permutation: try to increase diagonal dominance
    E.g. bipartite weighted matching (Duff & Koster)
    Often effective; no theory for quality of preconditioner
Field is not well understood, to say the least

Row permutation for heavy diagonal [Duff, Koster]
Represent A as a weighted, undirected bipartite graph (one node for each row and one node for each column)
Find a matching (set of independent edges) with maximum product of weights
Permute rows to place the matching on the diagonal
The matching algorithm also gives a row and column scaling that makes all diagonal entries = 1 and all off-diagonal entries ≤ 1
[figure: A and the row-permuted matrix PA]
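A small dense sketch of the matching idea (not Duff & Koster's sparse MC64 algorithm, and without the scaling): maximizing the product of matched magnitudes is the same as minimizing the sum of -log magnitudes, which scipy.optimize.linear_sum_assignment can do on a toy matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def heavy_diagonal_row_perm(A):
    """Row permutation p such that A[p, :] has maximum product of diagonal magnitudes (dense sketch)."""
    with np.errstate(divide="ignore"):
        cost = -np.log(np.abs(A))          # max product  <=>  min sum of -log
    cost[np.isinf(cost)] = 1e30            # effectively forbid structurally zero entries
    rows, cols = linear_sum_assignment(cost)
    p = np.empty(A.shape[0], dtype=int)
    p[cols] = rows                         # row matched to column j goes to position j
    return p

A = np.array([[0.0, 3.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.1, 0.0, 4.0]])
p = heavy_diagonal_row_perm(A)
print(A[p, :])                             # diagonal is now 2, 3, 4
```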

Preconditioned conjugate gradient iteration
x_0 = 0,  r_0 = b,  d_0 = B^{-1} r_0,  y_0 = B^{-1} r_0
for k = 1, 2, 3, . . .
    α_k = (y_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
    x_k = x_{k-1} + α_k d_{k-1}                          approx solution
    r_k = r_{k-1} – α_k A d_{k-1}                        residual
    y_k = B^{-1} r_k                                     preconditioning solve
    β_k = (y_k^T r_k) / (y_{k-1}^T r_{k-1})              improvement
    d_k = y_k + β_k d_{k-1}                              search direction
Several vector inner products per iteration (easy to parallelize)
One matrix-vector multiplication per iteration (medium to parallelize)
One solve with preconditioner per iteration (hard to parallelize)

Incomplete Cholesky and ILU: Issues
Choice of parameters
    good: smooth transition from iterative to direct methods
    bad: very ad hoc, problem-dependent
    tradeoff: time per iteration (more fill => more time) vs. number of iterations (more fill => fewer iterations)
Effectiveness
    condition number usually improves (only) by a constant factor (except MIC for some problems from PDEs)
    still, often good when tuned for a particular class of problems
Parallelism
    Triangular solves are not very parallel
    Reordering for parallel triangular solve by graph coloring
    Dynamic coloring: ILUM (Saad)

Sparse approximate inverses
Compute B^{-1} ≈ A^{-1} explicitly
    Minimize || A B^{-1} – I ||_F (in parallel, by columns)
    Variants: factored form of B^{-1}, more fill, ...
Good: very parallel, seldom breaks down
Bad: effectiveness varies widely
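A very small sketch of the column-by-column least-squares construction, using the pattern of A itself as the fixed sparsity pattern for B^{-1} (real SPAI codes adapt the pattern and are far more careful; all names here are illustrative).

```python
import numpy as np
from scipy.sparse import csc_matrix, lil_matrix

def spai_fixed_pattern(A):
    """Approximate inverse M with the sparsity pattern of A, built column by column."""
    A = csc_matrix(A)
    n = A.shape[0]
    M = lil_matrix((n, n))
    for j in range(n):                     # each column is an independent subproblem
        J = A[:, j].nonzero()[0]           # allowed nonzero positions in column j of M
        I = np.unique(np.concatenate((A[:, J].nonzero()[0], [j])))
        Asub = A[I, :][:, J].toarray()     # small dense least-squares subproblem
        e = (I == j).astype(float)         # restriction of the unit vector e_j
        m, *_ = np.linalg.lstsq(Asub, e, rcond=None)   # min ||A m_j - e_j||_2 on pattern J
        for i, mi in zip(J, m):
            M[i, j] = mi
    return M.tocsc()
```

Because each column's subproblem touches only a handful of rows and columns, the loop over j parallelizes trivially, which is the "very parallel" point above.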

Complexity of linear solvers
Time to solve the model problem (Poisson’s equation) on a regular mesh with n^{1/2} points per side (2D) or n^{1/3} points per side (3D):

                          2D                        3D
Dense Cholesky:           O(n^3)                    O(n^3)
Sparse Cholesky:          O(n^1.5)                  O(n^2)
CG, exact arithmetic:     O(n^2)                    O(n^2)
CG, no precond:           O(n^1.5)                  O(n^1.33)
CG, modified IC:          O(n^1.25)                 O(n^1.17)
CG, support trees:        O(n^1.20) -> O(n^{1+})    O(n^1.75) -> O(n^1.31)
Multigrid:                O(n)                      O(n)

Matrix-vector product: parallel implementation
Lay out matrix and vectors by rows
Hard part is the matrix-vector product y = A*x
Algorithm: each processor j
    Broadcast x(j)
    Compute y(j) = A(j,:)*x
May send more of x than needed
    Partition / reorder matrix to reduce communication
[figure: rows of A and of the vectors x, y distributed across processors P0–P3]
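A serial NumPy/SciPy sketch of the row layout (process count, matrix, and sizes are placeholders): each "processor" owns a block of rows and, in a real message-passing code, would only need the entries of x indexed by its block's column indices.

```python
import numpy as np
from scipy.sparse import random as sparse_random

n, nprocs = 16, 4
A = sparse_random(n, n, density=0.2, format="csr", random_state=0)
x = np.random.default_rng(0).random(n)

y = np.zeros(n)
rows_per_proc = n // nprocs
for p in range(nprocs):
    lo, hi = p * rows_per_proc, (p + 1) * rows_per_proc
    block = A[lo:hi, :]                    # rows owned by "processor" p
    needed = np.unique(block.indices)      # entries of x this block actually touches
    # Broadcasting all of x sends more than needed; a good partition/reordering
    # shrinks `needed` for off-processor columns and thus the communication.
    y[lo:hi] = block @ x

print(np.allclose(y, A @ x))               # matches the sequential product
```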

Sparse Matrix-Vector Multiplication

Graph partitioning in theory
If G is a planar graph with n vertices, there exists a set of at most sqrt(6n) vertices whose removal leaves no connected component with more than 2n/3 vertices. (“Planar graphs have sqrt(n)-separators.”)
“Well-shaped” finite element meshes in 3 dimensions have n^{2/3}-separators.
Also some other classes of graphs – trees, graphs of bounded genus, chordal graphs, bounded-excluded-minor graphs, …
Mostly these theorems come with efficient algorithms, but they aren’t used much.

Graph partitioning in practice
Graph partitioning heuristics have been an active research area for many years, often motivated by partitioning for parallel computation. See CS 240A.
Some techniques:
    Spectral partitioning (uses eigenvectors of the Laplacian matrix of the graph)
    Geometric partitioning (for meshes with specified vertex coordinates)
    Iterative swapping (Kernighan-Lin, Fiduccia-Mattheyses)
    Breadth-first search (GLN 7.3.3, fast but dated)
Many popular modern codes (e.g. Metis, Chaco) use multilevel iterative swapping
Matlab graph partitioning toolbox: see course web page
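A minimal sketch of the spectral option: build the graph Laplacian, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and split at its median. The grid graph is a placeholder, and the dense eigensolver is only for this small example.

```python
import numpy as np
from scipy.sparse import diags, identity, kron, csgraph

# Placeholder graph: adjacency of a k-by-k grid.
k = 16
T = diags([1.0, 1.0], [-1, 1], shape=(k, k))
Adj = (kron(identity(k), T) + kron(T, identity(k))).tocsr()

L = csgraph.laplacian(Adj)                 # graph Laplacian D - A
# Dense eigensolve for this tiny example; use scipy.sparse.linalg.eigsh or
# LOBPCG for graphs of realistic size.
vals, vecs = np.linalg.eigh(L.toarray())
fiedler = vecs[:, 1]                       # eigenvector of the 2nd-smallest eigenvalue
part = fiedler > np.median(fiedler)        # spectral bisection: split at the median
print(part.sum(), (~part).sum())           # two roughly equal halves
```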

Parallel Incomplete Cholesky and ILU: Issues
Computing the preconditioner
    Parallel direct methods are well developed
    But IC/ILU is sparser => harder to speed up
    Still, you only have to do it once
Applying the preconditioner
    Triangular solves are not very parallel
    Reordering by graph coloring (see example)
    But the orderings are not great for convergence
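A pure-Python/SciPy sketch of the coloring step (greedy coloring of the graph of A; the ordering that groups rows by color is the "reordering by graph coloring" above, and the grid matrix is just a test case).

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, identity, kron

def greedy_color(A):
    """Greedy vertex coloring of the (symmetric) graph of sparse matrix A."""
    A = csr_matrix(A)
    n = A.shape[0]
    color = -np.ones(n, dtype=int)
    for v in range(n):
        nbrs = A.indices[A.indptr[v]:A.indptr[v + 1]]
        used = {color[u] for u in nbrs if color[u] >= 0}
        c = 0
        while c in used:                   # smallest color not used by a neighbor
            c += 1
        color[v] = c
    return color

# Test case: 5-point Laplacian on an 8-by-8 grid.
k = 8
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(k, k))
A = (kron(identity(k), T) + kron(T, identity(k))).tocsr()

color = greedy_color(A)
print(color.max() + 1)                     # 2 colors: the classic red-black ordering
perm = np.argsort(color, kind="stable")    # order unknowns color by color
# Unknowns of the same color are independent, so each color's updates in the
# no-fill triangular solves can be done in parallel.
```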