
1. Multilevel Combinatorial Methods in Scientific Computing
Bruce Hendrickson
Sandia National Laboratories, Parallel Computing Sciences Department
MOV'01

2. An Overdue Acknowledgement
• Parallel computing uses graph partitioning
• We owe a deep debt to circuit researchers
  » KL/FM
  » Spectral partitioning
  » Hypergraph models
  » Terminal propagation

3. In Return …
• We've given you
  » Multilevel partitioning
  » hMETIS
• Our applications are different from yours
  » Underlying geometry
  » More regular structure
  » Bounded degree
  » Partitioning time is more important
    – Different algorithmic tradeoffs

4. Multilevel Discrete Algorithm
• Explicitly mimic traditional multigrid
• Construct a series of smaller approximations
  » Restriction
• Solve on the smallest
  » Coarse-grid solve
• Propagate the solution up the levels
  » Prolongation
• Periodically perform local improvement
  » Smoothing
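The restriction / coarse solve / prolongation / smoothing cycle above can be sketched generically. A minimal Python sketch, where `coarsen`, `solve_coarse`, `interpolate`, and `refine` are hypothetical callables the application would supply (this is the shape of the scheme, not any particular tool's implementation):

```python
def multilevel(problem, coarsen, solve_coarse, interpolate, refine,
               min_size=50):
    """Generic multilevel scheme mirroring multigrid."""
    if len(problem) <= min_size:
        return solve_coarse(problem)              # coarse-grid solve
    coarse, mapping = coarsen(problem)            # restriction
    coarse_sol = multilevel(coarse, coarsen, solve_coarse,
                            interpolate, refine, min_size)
    solution = interpolate(coarse_sol, mapping)   # prolongation
    return refine(problem, solution)              # local improvement (smoothing)
```

The recursion bottoms out at `min_size`, where the problem is small enough to solve directly; refinement runs once per level on the way back up, which is what gives the multi-scale improvement.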

5. Lots of Possible Variations
• More complex multilevel iterations
  » E.g. V-cycle, W-cycle, etc.
  » Not much evidence of value for discrete problems
• Key issue: properties of the coarse problems
  » Local refinement = multi-scale improvement
• I'll focus on graph algorithms
  » Most relevant to VLSI problems

6. Not a New Idea
• The idea is very natural
  » Reinvented repeatedly in different settings
• The focus of this workshop is on heuristics for hard problems
• The technique is also good for polynomial-time problems
  » E.g. geometric point location (Kirkpatrick'83)

7. Planar Point Location
• O(n log n) time to preprocess
• O(log n) time to answer a query

8. Multilevel Graph Partitioning
• Invented independently several times
  » Cong/Smith'93
  » Bui/Jones'93
  » H/Leland'93
  » Related work
    – Garbers/Promel/Steger'90, Hagen/Kahng'91, Cheng/Wei'91
    – Kumar/Karypis'95, etc.
• Multigrid metaphor: H/Leland'93 (Chaco)
  » Popularized by Kumar/Karypis'95 (METIS)

9. Multilevel Partitioning
• Construct a sequence of smaller graphs
• Partition the smallest
• Project the partition through the intermediate levels
  » Periodically refine
• Why does it work so well?
  » Refinement on multiple scales (like multigrid)
  » Key properties preserved on (weighted) coarse graphs
    – (Weighted) partition sizes
    – (Weighted) edge cuts
  » Very fast

10. Coarse Problem Construction
1. Find a maximal matching
2. Contract the matched edges
3. Sum vertex and edge weights
Key properties:
  » Preserves (weighted) partition sizes
  » Preserves (weighted) edge cuts
  » Preserves planarity
  » Related to the min-cut algorithm of Karger/Stein'96
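The three construction steps can be sketched concretely. A minimal Python sketch assuming unit vertex weights and a graph given as a dict of edge weights keyed by vertex pairs `(u, v)` with `u < v` (the greedy randomized matching and the data layout are illustrative choices, not Chaco's actual implementation):

```python
import random

def coarsen(n, edges):
    """One coarsening step: contract a maximal matching."""
    # Build an adjacency list from the edge dict.
    adj = {v: [] for v in range(n)}
    for (u, v) in edges:
        adj[u].append(v)
        adj[v].append(u)
    # 1. Greedy maximal matching over a random vertex order.
    mate = {}
    order = list(range(n))
    random.shuffle(order)
    for u in order:
        if u in mate:
            continue
        for v in adj[u]:
            if v not in mate:
                mate[u], mate[v] = v, u
                break
    # 2. Contract: each matched pair (or unmatched vertex) becomes
    #    one coarse vertex.
    mapping, n_coarse = {}, 0
    for u in range(n):
        if u in mapping:
            continue
        mapping[u] = n_coarse
        if u in mate:
            mapping[mate[u]] = n_coarse
        n_coarse += 1
    # 3. Sum vertex and edge weights onto the coarse graph.
    vwgt = [0] * n_coarse
    for u in range(n):
        vwgt[mapping[u]] += 1        # unit fine-vertex weights assumed
    coarse_edges = {}
    for (u, v), w in edges.items():
        cu, cv = mapping[u], mapping[v]
        if cu == cv:                 # a contracted matching edge vanishes
            continue
        key = (min(cu, cv), max(cu, cv))
        coarse_edges[key] = coarse_edges.get(key, 0) + w
    return n_coarse, coarse_edges, vwgt, mapping
```

Because vertex weights and parallel-edge weights are summed, any partition of the coarse graph has the same (weighted) part sizes and edge cut as the partition it induces on the fine graph, which is the property the slide emphasizes.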

11. Extension I: Terminal Propagation
• Dunlop/Kernighan'85
  » Skew the partition to accommodate constrained vertices
• Also useful for parallel computing
  » Move few vertices when repartitioning
  » Assign neighboring vertices to nearby processors
  » H/Leland/Van Driessche'96
• Basic idea:
  » Each vertex has a gain-like preference to be in a particular partition

12. Multilevel Terminal Propagation
• How to include it in a multilevel algorithm?
• Simple idea:
  » When vertices are merged, sum their preferences
  » Simple, fast, effective
  » The coarse problem precisely mimics the original
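The preference-summing idea fits naturally into the coarsening pass. A small sketch, assuming a hypothetical layout in which `prefs[v][p]` holds vertex `v`'s gain-like preference for partition `p` and `mapping[v]` is the coarse vertex that `v` was contracted into:

```python
def coarse_preferences(prefs, mapping, n_coarse):
    """Sum the per-partition preference vectors of merged vertices,
    so the coarse problem carries the same terminal-propagation
    information as the original."""
    k = len(prefs[0])                      # number of partitions
    out = [[0] * k for _ in range(n_coarse)]
    for v, pv in enumerate(prefs):
        cv = mapping[v]
        for p in range(k):
            out[cv][p] += pv[p]
    return out
```

Since preferences are additive, a coarse vertex's preference is exactly the total preference of the fine vertices it represents, which is why the coarse problem mimics the original precisely.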

13. Extension II: Finding Vertex Separators
• Useful for several partitioning applications
  » E.g. sparse matrix reorderings
• One idea: find an edge separator first, then a minimum cover
  » Problem: the multilevel power is spent on the wrong objective
• Better to reformulate the multilevel method
  » Find vertex separators directly
  » H/Rothberg'98

14. Multilevel Vertex Separators
• Use the same coarse constructor
  » Except edge weights don't matter
• Change the local refinement and coarse solve
  » Can mimic KL/FM
• Resulted in an improved matrix reordering tool
  » These techniques are now standard

15. Extension III: Hypergraph Partitioning
• Coarse construction
  » Contract pairs of vertices?
  » Contract hyperedges?
• Traditional refinement methodology
• See the talk tomorrow by George Karypis

16. Envelope Reduction
• Reorder the rows/columns of a symmetric matrix to keep the nonzeros near the diagonal

17. Graph Formulation
• Each row/column is a vertex
• A nonzero in position (i,j) generates an edge e_ij
• For row i of the matrix (vertex i)
  » Env(i) = max(i - j such that e_ij in E)
  » Envelope = Σ_i Env(i)
• Find a vertex numbering that minimizes the envelope
  » NP-hard
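The definitions above translate directly into code. A small sketch treating the symmetric matrix as an edge list of off-diagonal nonzeros:

```python
def envelope(n, edges):
    """Envelope of an n x n symmetric matrix given by its off-diagonal
    nonzeros: Env(i) = max(i - j) over nonzeros (i, j) in row i with
    j < i, and the envelope is the sum of Env(i) over all rows."""
    env = [0] * n
    for (u, v) in edges:
        i, j = max(u, v), min(u, v)   # symmetry: look at the lower triangle
        env[i] = max(env[i], i - j)
    return sum(env)
```

Reordering the vertices changes which index pairs the nonzeros fall on, so minimizing this sum over all numberings is the (NP-hard) envelope reduction problem.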

18. Status
• The highest-quality algorithm is spectral ordering
  » Sort the entries of the Fiedler vector (Barnard/Pothen/Simon'95)
  » The eigenvector calculation is expensive
  » Fast Sloan (Kumfert/Pothen'97) is a good compromise
• Multilevel methods are now comparable
  » (Boman/H'96, Hu/Scott'01)
• Related ordering problems with VLSI relevance
  » Optimal linear arrangement
    – Minimize Σ |i - j| such that e_ij in E

19. Challenges for Multilevel Envelope Minimization
• No precise coarse representation
  » Can't express the exact objective on the coarse problem
• No incremental update for the envelope metric
  » I.e. no counterpart of Fiduccia/Mattheyses
• Our solution: use an approximate metric
  » 1-sum / minimum linear arrangement
  » Allows an incremental update
    – But still not an exact coarse problem
  » VLSI applications?
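The reason the 1-sum admits an incremental update, unlike the envelope metric, is that each edge's contribution depends only on the positions of its two endpoints, so moving a vertex touches only its incident edges. A sketch (illustrative names, not the paper's code) of the objective and of the delta for exchanging two vertices' positions:

```python
def one_sum(perm, edges):
    """1-sum (minimum linear arrangement) objective: sum of
    |pos(u) - pos(v)| over edges, where perm[v] is v's position."""
    return sum(abs(perm[u] - perm[v]) for (u, v) in edges)

def swap_delta(perm, adj, a, b):
    """Change in the 1-sum if vertices a and b exchange positions.
    Only edges incident to a or b contribute; the edge (a, b) itself,
    if present, is unchanged by the swap."""
    pa, pb = perm[a], perm[b]
    delta = 0
    for v in adj[a]:
        if v != b:
            delta += abs(pb - perm[v]) - abs(pa - perm[v])
    for v in adj[b]:
        if v != a:
            delta += abs(pa - perm[v]) - abs(pb - perm[v])
    return delta
```

This O(deg(a) + deg(b)) delta is the FM-style gain that the envelope metric lacks, which is what makes the 1-sum usable as an approximate refinement objective.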

20. Results
• Disappointing for envelope minimization
  » We never surpassed the best competitor
    – The Fast Sloan algorithm
  » But Hu/Scott'01 succeeded with similar ideas
• Better for linear arrangement, but …
  » Not competitive with Hur/Lillis'99
    – A multilevel algorithm with expensive refinement

21. Lessons Learned
• A good coarse model is the key
  » It must encode the critical properties of the full problem
  » Progress on the coarse instance must help the real one
  » It must allow an efficient refinement methodology
  » Different objectives require different coarse models
• The quality/runtime tradeoff varies with the application
  » Understand the needs of your problem domain
  » For VLSI, quality is worth waiting for
  » All aspects of the multilevel algorithm are affected

22. Conclusions
• An appropriate coarse representation is key
  » Lots of existing ideas for constructing the coarse problem
    – Matching contraction, independent sets, fractional assignment, etc.
• The multigrid metaphor provides important insight
  » We're not yet fully exploiting the multigrid possibilities
  » Do we have something to offer algebraic multigrid?
• CS needs to recognize the multilevel paradigm
  » A rich, general algorithmic framework, but not in any textbook
  » Not the same as divide-and-conquer

23. Acknowledgements
• Shang-Hua Teng
  » "Coarsening, Sampling and Smoothing: Elements of the Multilevel Method"
• Rob Leland
• Erik Boman
• Ed Rothberg
• Tammy Kolda
• Chuck Alpert
• DOE MICS Office

