Sparse Matrix Methods

Sparse Matrix Methods

Day 1: Overview
  - Matlab and examples
  - Data structures
  - Ax = b
  - Sparse matrices and graphs
  - Fill-reducing matrix permutations
  - Matching and block triangular form
Day 2: Direct methods
Day 3: Iterative methods

(Demo in Matlab)
  - sprand, spy, sparse, full
  - matrix arithmetic and indexing
  - examples of sparse matrices from different applications (from the UF sparse matrix collection)
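
A minimal sketch of the kind of demo this slide refers to (not the original course demo script):

    % Build a random sparse matrix, inspect its structure, and mix
    % sparse and full operands.
    n = 100;
    A = sprand(n, n, 0.05);    % n-by-n random sparse, density about 5%
    spy(A);                    % plot the nonzero structure
    nnz(A)                     % number of stored nonzeros
    B = full(A);               % convert to a dense array
    C = A * A' + speye(n);     % arithmetic on sparse operands stays sparse
    x = A(2:10, [1 3 5]);      % indexing works as for full matrices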

Matlab sparse matrices: design principles
  - All operations should give the same results for sparse and full matrices (almost all)
  - Sparse matrices are never created automatically, but once created they propagate
  - Performance is important, but usability, simplicity, completeness, and robustness are more important
  - Storage for a sparse matrix should be O(nonzeros)
  - Time for a sparse operation should be O(flops), as nearly as possible
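
A quick illustration of the propagation rule (my example, not from the slides):

    F = eye(5);          % full, even though most entries are zero
    S = speye(5);        % sparse only because we asked for it
    issparse(F * F)      % 0: full operands give a full result
    issparse(S + S)      % 1: sparse operands give a sparse result
    issparse(S')         % 1: sparsity propagates through transpose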

Data structures
  - Full: 2-dimensional array of real or complex numbers; (nrows * ncols) memory
  - Sparse: compressed column storage; about (1.5 * nzs + 0.5 * ncols) memory
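
The transcript does not spell out the layout; the usual compressed sparse column (CSC) picture is one array of row indices and one of values, ordered column by column, plus a pointer per column. Matlab exposes it only through the triplet interface:

    A = sparse([1 3 2 1], [1 1 2 3], [4.0 5.0 6.0 7.0], 3, 3);
    [i, j, v] = find(A);   % row indices, column indices, values
    % Column k's nonzeros are a contiguous slice of the internal arrays,
    % which is why A(:,k) is cheap while A(k,:) requires a scan.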

(Demos in Matlab)
  - A \ b
  - Cholesky factorization and orderings

Solving linear equations: x = A \ b
  - Is A square?  no => use QR to solve the least squares problem
  - Is A triangular or permuted triangular?  yes => sparse triangular solve
  - Is A symmetric with positive diagonal elements?  yes => attempt Cholesky after symmetric minimum degree
  - Otherwise => use LU on A(:, colamd(A))
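
A rough sketch of that dispatch logic, for illustration only; Matlab's actual backslash polyalgorithm has more cases and safeguards:

    function x = backslash_sketch(A, b)
      [m, n] = size(A);
      if m ~= n
        [Q, R] = qr(A);                  % rectangular: least squares via QR
        x = R \ (Q' * b);
        return
      end
      if istriu(A) || istril(A)
        x = A \ b;                       % triangular: substitution
        return
      end
      if issymmetric(A) && all(diag(A) > 0)
        p = symamd(A);                   % fill-reducing symmetric ordering
        [R, flag] = chol(A(p, p));
        if flag == 0                     % success: A(p,p) = R' * R
          x(p, 1) = R \ (R' \ b(p));
          return
        end
      end
      q = colamd(A);                     % general case: LU with column ordering
      [L, U, P] = lu(A(:, q));
      x(q, 1) = U \ (L \ (P * b));
    end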

Matrix factorizations in Matlab
  - Cholesky: R = chol(A);  simple left-looking column algorithm
  - Nonsymmetric LU: [L, U, P] = lu(A);  left-looking "GPMOD", depth-first search, symmetric pruning
  - Orthogonal: [Q, R] = qr(A);  George-Heath algorithm: row-wise Givens rotations
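
Checking the three factorizations on a small test matrix (gallery('poisson', k) is the k^2-by-k^2 sparse 2D model problem):

    A = gallery('poisson', 10);   % sparse, symmetric positive definite
    R = chol(A);                  % A = R' * R, R upper triangular
    norm(R' * R - A, 'fro')       % ~ 0
    [L, U, P] = lu(A);            % P * A = L * U
    norm(P * A - L * U, 'fro')    % ~ 0
    [Q, Rq] = qr(A);              % A = Q * Rq
    norm(Q * Rq - A, 'fro')       % ~ 0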

Graphs and sparse matrices: Cholesky factorization
  - G(A): the graph of A;  G+(A): the filled graph (chordal)
  - Symmetric Gaussian elimination:
      for j = 1 to n
        add edges between j's higher-numbered neighbors
  - Fill: new nonzeros in the factor
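
A direct (and deliberately naive) sketch of that symbolic elimination loop; nnz of the result should match the pattern of the Cholesky factor plus its transpose, barring exact numerical cancellation:

    function Gplus = filled_graph_sketch(A)
      % Filled graph of a symmetric matrix by symbolic elimination.
      n = size(A, 1);
      Gplus = logical(spones(A) + speye(n));   % adjacency plus diagonal
      for j = 1:n
        nbrs = find(Gplus(j+1:n, j)) + j;      % j's higher-numbered neighbors
        Gplus(nbrs, nbrs) = true;              % make them pairwise adjacent
      end
    end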

Elimination tree
  - Cholesky factor: G+(A) and T(A)
  - T(A): parent(j) = min { i > j : (i, j) in G+(A) }
  - T describes dependencies among columns of the factor
  - Can compute T from G(A) in almost linear time
  - Can compute G+(A) easily from T
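
Matlab computes the elimination tree directly with etree; the snippet below checks it against the parent definition above, using the pattern of the Cholesky factor as G+(A):

    A = gallery('poisson', 4);           % small SPD example
    parent = etree(A);                   % parent(j), 0 at a root
    Gplus = spones(chol(A));             % upper-triangular filled pattern
    n = size(A, 1);
    p2 = zeros(1, n);
    for j = 1:n
      i = find(Gplus(j, j+1:n), 1) + j;  % first higher-numbered neighbor of j
      if ~isempty(i), p2(j) = i; end
    end
    isequal(parent, p2)                  % 1: the two definitions agree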

(Demos in Matlab)
  - matrix and graph
  - elimination tree
  - orderings in detail

Fill-reducing matrix permutations
  - Minimum degree: eliminate the row/col with fewest nonzeros, add fill, repeat
      Theory: can be suboptimal even on the 2D model problem
      Practice: often wins for medium-sized problems
  - Nested dissection: find a separator, number it last, proceed recursively
      Theory: approximately optimal separators => approximately optimal fill and flop count
      Practice: often wins for very large problems
  - Banded orderings (reverse Cuthill-McKee, Sloan, ...): try to keep all nonzeros close to the diagonal
      Theory, practice: often wins for "long, thin" problems
  - The best modern general-purpose orderings are ND/MD hybrids.

Fill-reducing permutations in Matlab
  - Nonsymmetric approximate minimum degree: p = colamd(A);
      column permutation: lu(A(:,p)) often sparser than lu(A)
      also useful for QR factorization
  - Symmetric approximate minimum degree: p = symamd(A);
      symmetric permutation: chol(A(p,p)) often sparser than chol(A)
  - Reverse Cuthill-McKee: p = symrcm(A);
      A(p,p) often has smaller bandwidth than A
      similar to Sparspak RCM
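
Measuring the effect of an ordering on fill, for the 2D model problem:

    A = gallery('poisson', 30);   % 900-by-900 2D Laplacian, SPD
    p = symamd(A);                % approximate minimum degree
    r = symrcm(A);                % reverse Cuthill-McKee
    nnz(chol(A))                  % fill under the natural ordering
    nnz(chol(A(p, p)))            % typically much less fill
    nnz(chol(A(r, r)))            % banded ordering: typically in between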

(Demos in Matlab)
  - nonsymmetric LU
  - dmperm, dmspy, components

Matching and block triangular form
  - Dulmage-Mendelsohn decomposition: bipartite matching followed by strongly connected components
  - Square, full-rank A: [p, q, r] = dmperm(A);
      A(p,q) has a nonzero diagonal and is in block upper triangular form
      also gives strongly connected components of a directed graph
      also gives connected components of an undirected graph
  - Arbitrary A: [p, q, r, s] = dmperm(A);
      maximum-size matching in a bipartite graph
      minimum-size vertex cover in a bipartite graph
      decomposition into strong Hall blocks
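
A small dmperm example (my construction, not from the demo): scramble the rows of a structurally nonsingular matrix, then recover a nonzero diagonal and the block triangular form:

    n = 8;
    A = speye(n) + sprand(n, n, 0.15);   % structurally nonsingular
    A = A(randperm(n), :);               % scramble rows: zeros land on the diagonal
    [p, q, r] = dmperm(A);
    B = A(p, q);                         % block upper triangular form
    all(diag(B) ~= 0)                    % 1: matching restored a nonzero diagonal
    length(r) - 1                        % number of strong Hall diagonal blocks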

Complexity of direct methods (model problem: an n^(1/2)-by-n^(1/2) mesh in 2D, an n^(1/3)-by-n^(1/3)-by-n^(1/3) mesh in 3D)

                    2D           3D
  Space (fill):     O(n log n)   O(n^(4/3))
  Time (flops):     O(n^(3/2))   O(n^2)

The landscape of sparse Ax = b solvers

  [Diagram: methods arranged on two axes. One axis runs from direct
  (A = LU) to iterative (y' = Ay): direct is more robust, iterative
  needs less storage. The other axis runs from symmetric positive
  definite to nonsymmetric: nonsymmetric methods are more general.]

(Demos in Matlab)
  - conjugate gradients
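
A minimal conjugate gradients run via Matlab's pcg, on the SPD model problem:

    A = gallery('poisson', 30);
    b = ones(size(A, 1), 1);
    [x, flag, relres, iter] = pcg(A, b, 1e-8, 500);
    % flag == 0 means pcg converged to the tolerance within 500 iterations;
    % each iteration costs one matrix-vector product plus a few dot products.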