1 Sparse Matrices in Matlab
John R. Gilbert, Xerox Palo Alto Research Center
with Cleve Moler (MathWorks) and Rob Schreiber (HP Labs)

2 Design principles
- All operations should give the same results for sparse and full matrices (almost all).
- Sparse matrices are never created automatically, but once created they propagate (sketched below).
- Performance is important, but usability, simplicity, completeness, and robustness are more important.
- Storage for a sparse matrix should be O(nonzeros).
- Time for a sparse operation should be O(flops), as nearly as possible.
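
A minimal sketch of the creation-and-propagation rule, using standard Matlab calls (the example itself is not from the original slides):

    A = speye(5);                 % sparse matrices are created explicitly
    B = A + A;                    % sparse op sparse -> sparse result
    C = A + ones(5);              % sparse op full   -> full result (the sum is mostly nonzero)
    issparse(A), issparse(B), issparse(C)   % 1, 1, 0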

3 Two storage classes for matrices
- Full: 2-dimensional array of real or complex numbers; a matrix uses (nrows * ncols) memory.
- Sparse: compressed column storage; a matrix uses about (2*nonzeros + ncols) memory.
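
A rough illustration of the memory figures (a sketch, not from the slides; exact byte counts depend on the Matlab version):

    n = 1000;
    e = ones(n, 1);
    S = spdiags([e -2*e e], -1:1, n, n);   % tridiagonal: about 3n nonzeros
    F = full(S);                           % the same matrix stored as a full array
    whos S F                               % S needs O(nonzeros) bytes, F needs 8*n^2 bytes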

4 Matrix arithmetic and indexing
A+B, A*B, A.*B, A', ~A, A|B, ...
A(2:n, 2:n) = A(2:n, 2:n) + u * v'
A = [B C ; D E];
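
A small runnable version of the slide's examples (a sketch; the test matrices and vectors here are arbitrary assumptions):

    n = 6;
    A = speye(n) + sprand(n, n, 0.2);      % random sparse test matrix
    u = sprand(n-1, 1, 0.5);               % sparse vectors for a rank-one update
    v = sprand(n-1, 1, 0.5);
    A(2:n, 2:n) = A(2:n, 2:n) + u * v';    % submatrix update; A stays sparse
    B = [A A ; A' 2*A];                    % concatenation of sparse blocks is sparse
    issparse(A), issparse(B)               % both 1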

5 Matrix factorizations
- Cholesky: R = chol(A); simple left-looking column algorithm
- Nonsymmetric LU: [L,U,P] = lu(A); left-looking "GPMOD" with depth-first search and symmetric pruning
- Orthogonal: [Q,R] = qr(A); George-Heath algorithm using row-wise Givens rotations
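
A sketch of the three factorizations on a sparse symmetric positive definite test matrix (gallery('poisson', ...) is just a convenient example, not from the slides):

    A = gallery('poisson', 30);    % 900-by-900 sparse SPD matrix (2-D model problem)
    R = chol(A);                   % sparse Cholesky factor: A = R'*R
    [L, U, P] = lu(A);             % sparse LU with row permutation: P*A = L*U
    [Q, T] = qr(A);                % orthogonal factorization: A = Q*T
    norm(P*A - L*U, 1)             % residual near machine precision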

6 Fill-reducing matrix permutations
- Nonsymmetric approximate minimum degree: p = colamd(A); with this column permutation, lu(A(:,p)) is often sparser than lu(A); also useful for QR factorization [Davis, Larimore, Gilbert, Ng]
- Symmetric approximate minimum degree: p = symamd(A); with this symmetric permutation, chol(A(p,p)) is often sparser than chol(A)
- Reverse Cuthill-McKee: p = symrcm(A); A(p,p) often has smaller bandwidth than A; similar to Sparspak RCM
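
A sketch comparing fill with and without these orderings (again using gallery('poisson', ...) as an arbitrary test matrix; the actual savings depend on the problem):

    A = gallery('poisson', 40);                % sparse SPD test matrix
    p = symamd(A);
    [nnz(chol(A))  nnz(chol(A(p,p)))]          % Cholesky fill: original vs. reordered
    q = colamd(A);
    [L1, U1] = lu(A);  [L2, U2] = lu(A(:, q));
    [nnz(L1)+nnz(U1)  nnz(L2)+nnz(U2)]         % LU fill: original vs. column-reordered
    r = symrcm(A);
    [i, j] = find(A(r, r));
    max(abs(i - j))                            % bandwidth after reverse Cuthill-McKee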

7 Matching and block triangular form
Dulmage-Mendelsohn decomposition: bipartite matching followed by strongly connected components.
- Square, full-rank A: [p, q, r] = dmperm(A); A(p,q) has a nonzero diagonal and is in block upper triangular form; this also gives the strongly connected components of a directed graph
- Arbitrary A: [p, q, r, s] = dmperm(A); gives a maximum-size matching in a bipartite graph, a minimum-size vertex cover in a bipartite graph, and a decomposition into strong Hall blocks
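
A sketch of the square, full-rank case (the random test matrix is an assumption, not from the slides):

    A = sprand(100, 100, 0.02) + speye(100);   % structurally nonsingular sparse matrix
    [p, q, r] = dmperm(A);
    B = A(p, q);                               % block upper triangular, nonzero diagonal
    all(diag(B) ~= 0)                          % 1: every diagonal entry is nonzero
    length(r) - 1                              % number of diagonal blocks (strong components)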

8 Solving linear equations: x = A \ b;
- Is A square? no => use QR to solve the least squares problem
- Is A triangular or permuted triangular? yes => sparse triangular solve
- Is A symmetric with positive diagonal elements? yes => attempt Cholesky after symmetric minimum degree
- Otherwise => use LU on A(:, colamd(A))
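
A sketch of the symmetric positive definite branch of this decision tree (the test matrix and right-hand side are arbitrary):

    A = gallery('poisson', 30);      % sparse, symmetric, positive diagonal -> Cholesky path
    b = ones(size(A, 1), 1);
    x = A \ b;                       % backslash picks the solver automatically
    norm(A*x - b, inf)               % small residual confirms the solve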

9 Extensions (due to various people)
Matlab code ("m-files"):
- Iterative linear solvers
- Sparse optimization toolbox
- ...
C or Fortran code with Matlab interfaces ("mex-files"):
- Sparse eigenvalues (based on ARPACK)
- Graph partitioning (based on METIS or Chaco)
- Supernodal sparse linear solver (based on SuperLU)
- ...
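
A sketch of two of these extensions as seen from the Matlab prompt: pcg (an m-file iterative solver) and eigs (the ARPACK-based sparse eigenvalue solver). The test matrix is again just an example:

    A = gallery('poisson', 30);
    b = ones(size(A, 1), 1);
    x = pcg(A, b, 1e-8, 500);        % preconditioned conjugate gradients (m-file solver)
    d = eigs(A, 6, 'sm');            % six smallest-magnitude eigenvalues via ARPACK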

