
1 CS267/EngC233 Spring 2010 April 15, 2010 Parallel Graph Algorithms Kamesh Madduri KMadduri@lbl.gov Lawrence Berkeley National Laboratory

2 Lecture Outline – Applications – Review of key results – Case studies: graph traversal-based problems, parallel algorithms (Breadth-First Search, Single-source Shortest Paths, Betweenness Centrality, Community Identification)

3 Routing in transportation networks: road networks, point-to-point shortest paths: 15 seconds (naïve) → 10 microseconds. H. Bast et al., “Fast Routing in Road Networks with Transit Nodes”, Science, 27 April 2007.

4 The world-wide web can be represented as a directed graph –Web search and crawl: traversal –Link analysis, ranking: Page rank and HITS –Document classification and clustering Internet topologies (router networks) are naturally modeled as graphs Internet and the WWW

5 Scientific Computing. Reorderings for sparse solvers – fill-reducing orderings → partitioning, eigenvectors – heavy diagonal to reduce pivoting (matching). Data structures for efficient exploitation of sparsity. Derivative computations for optimization – matroids, graph colorings, spanning trees. Preconditioning – incomplete factorizations – partitioning for domain decomposition – graph techniques in algebraic multigrid → independent sets, matchings, etc. – support theory → spanning trees & graph embedding techniques. B. Hendrickson, “Graphs and HPC: Lessons for Future Architectures”, http://www.er.doe.gov/ascr/ascac/Meetings/Oct08/Hendrickson%20ASCAC.pdf. Image sources: Yifan Hu, “A gallery of large graphs”; Tim Davis, UF Sparse Matrix Collection.

6 Large-scale data analysis. Graph abstractions are very useful to analyze complex data sets. Sources of data: petascale simulations, experimental devices, the Internet, sensor networks. Challenges: data size, heterogeneity, uncertainty, data quality. Astrophysics: massive datasets, temporal variations. Bioinformatics: data quality, heterogeneity. Social informatics: new analytics challenges, data uncertainty. Image sources: (1) http://physics.nmt.edu/images/astro/hst_starfield.jpg (2,3) www.visualcomplexity.com

7 Study of the interactions between various components in a biological system Graph-theoretic formulations are pervasive: –Predicting new interactions: modeling –Functional annotation of novel proteins: matching, clustering –Identifying metabolic pathways: paths, clustering –Identifying new protein complexes: clustering, centrality Data Analysis and Graph Algorithms in Systems Biology Image Source: Giot et al., “A Protein Interaction Map of Drosophila melanogaster”, Science 302, 1722-1736, 2003.

8 Graph-theoretic problems in social networks – community identification: clustering – targeted advertising: centrality – information spreading: modeling. Image source: Nexus (Facebook application)

9 Network Analysis for Intelligence and Surveillance. [Krebs ’04] Post-9/11 terrorist network analysis from public-domain information. Plot masterminds correctly identified from interaction patterns: centrality. A global view of entities is often more insightful. Detect anomalous activities by exact/approximate subgraph isomorphism. Image sources: http://www.orgnet.com/hijackers.html; T. Coffman, S. Greenblatt, S. Marcus, “Graph-based technologies for intelligence analysis”, CACM, 47(3), March 2004, pp. 45-47.

10 Research in Parallel Graph Algorithms sits at the intersection of: Application areas – social network analysis (find central entities, community detection, network dynamics), WWW (marketing, social search), computational biology (gene regulation, metabolic pathways, genomics), scientific computing, engineering (VLSI CAD, route planning). Methods/problems – graph algorithms: traversal, shortest paths, connectivity, max flow, graph partitioning, matching, coloring, … Architectures – GPUs, FPGAs, x86 multicore servers, massively multithreaded architectures, multicore clusters, clouds. Cross-cutting axes: data size, problem complexity.

11 Characterizing graph-theoretic computations (2/2). Input data + problem (“Find ***”: paths, clusters, partitions, matchings, patterns, orderings) → graph kernel (traversal, shortest path algorithms, flow algorithms, spanning tree algorithms, topological sort, …). Factors that influence the choice of algorithm: graph sparsity (m/n ratio), static/dynamic nature, weighted/unweighted and weight distribution, vertex degree distribution, directed/undirected, simple/multi/hyper graph, problem size, granularity of computation at nodes/edges, domain-specific characteristics. Graph problems are often recast as sparse linear algebra (e.g., partitioning) or linear programming (e.g., matching) computations.

12 Lecture Outline – Applications – Review of key results – Graph traversal-based parallel algorithms, case studies (Breadth-First Search, Single-source Shortest Paths, Betweenness Centrality, Community Identification)

13 1735: “Seven Bridges of Königsberg” problem, resolved by Euler, first result in graph theory. … 1966: Flynn’s Taxonomy. 1968: Batcher’s “sorting networks” 1969: Harary’s “Graph Theory” … 1972: Tarjan’s “Depth-first search and linear graph algorithms” 1975: Reghbati and Corneil, Parallel Connected Components 1982: Misra and Chandy, distributed graph algorithms. 1984: Quinn and Deo’s survey paper on “parallel graph algorithms” … History

14 The PRAM model: an idealized parallel shared-memory system model. Unbounded number of synchronous processors; no synchronization or communication cost; no parallel overhead. Variants: EREW, CREW. Measuring performance: space and time complexity; total number of operations (work).

15 The Helman-JaJa model: an extension of the PRAM model for shared-memory algorithm design and analysis. T(n, p) is measured by the triplet T_M(n, p), T_C(n, p), B(n, p): T_M(n, p) is the maximum number of non-contiguous main memory accesses required by any processor; T_C(n, p) is an upper bound on the maximum local computational complexity of any of the processors; B(n, p) is the number of barrier synchronizations.

16 PRAM Pros and Cons. Pros – simple and clean semantics – the majority of theoretical parallel algorithms are specified with the PRAM model – independent of the communication network topology. Cons – not realistic: the communication model is too powerful – tempts the algorithm designer to use inter-processor communication without hesitation – synchronized processors – no local memory – Big-O notation is often misleading.

17 Prefix sums List ranking –Euler tours, Pointer jumping, Symmetry breaking Vertex collapse Tree contraction Building blocks of classical PRAM graph algorithms

18 Prefix Sums. Input: A, an array of n elements, and an associative binary operation. Output: the prefix sums of A. O(n) work, O(log n) time, n processors. (Figure: the balanced binary tree of partial sums B(i, j) and prefix values C(i, j) used by the PRAM algorithm.)
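As a concrete companion to the slide above, here is a minimal sketch of a blocked parallel prefix sum in C++ with OpenMP: each thread scans its own block, one thread scans the per-block totals, and every thread then adds its offset. This is illustrative only (it is not the tree-based PRAM formulation shown in the figure), the function and variable names are made up here, and an OpenMP-capable compiler is assumed.

    #include <vector>
    #include <cstdio>
    #include <omp.h>

    // Blocked inclusive prefix sum: local scan, scan of block totals, offset fix-up.
    void prefix_sums(std::vector<long>& a) {
        const size_t n = a.size();
        std::vector<long> block_sum(omp_get_max_threads() + 1, 0);
        #pragma omp parallel
        {
            const int t = omp_get_thread_num();
            const int p = omp_get_num_threads();
            const size_t lo = n * t / p, hi = n * (t + 1) / p;
            long sum = 0;                              // Phase 1: scan this thread's block
            for (size_t i = lo; i < hi; ++i) { sum += a[i]; a[i] = sum; }
            block_sum[t + 1] = sum;
            #pragma omp barrier
            #pragma omp single                         // Phase 2: scan the per-block totals
            for (int b = 1; b <= p; ++b) block_sum[b] += block_sum[b - 1];
            for (size_t i = lo; i < hi; ++i)           // Phase 3: add preceding blocks' total
                a[i] += block_sum[t];
        }
    }

    int main() {
        std::vector<long> a(16, 1);
        prefix_sums(a);
        printf("a[15] = %ld\n", a[15]);                // prefix sum of sixteen ones: 16
    }

Each element is touched a constant number of times, matching the O(n) work bound quoted on the slide.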

19 Parallel Prefix. X: an array of n elements stored in arbitrary order. For each element i, let X(i).value be its value and X(i).next be the index of its successor. For a binary associative operator Θ, compute X(i).prefix such that X(head).prefix = X(head).value, and X(i).prefix = X(i).value Θ X(predecessor).prefix, where head is the first element, i is not equal to head, and predecessor is the node preceding i. List ranking: a special case of parallel prefix in which the values are initially set to 1 and addition is the associative operator.
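For reference, a minimal sequential simulation of Wyllie-style pointer jumping (one of the building blocks listed earlier), which ranks each list node by its distance to the tail; the synchronous PRAM rounds are emulated by double buffering, and the names are illustrative rather than taken from the lecture.

    #include <vector>
    #include <cstdio>

    // Pointer-jumping list ranking: rank[i] = number of links from i to the tail.
    std::vector<int> list_rank(std::vector<int> next) {   // next[i] = successor index, -1 at the tail
        const int n = (int)next.size();
        std::vector<int> rank(n);
        for (int i = 0; i < n; ++i) rank[i] = (next[i] == -1) ? 0 : 1;

        std::vector<int> nrank(n), nnext(n);
        bool active = true;
        while (active) {                                   // O(log n) synchronous rounds
            active = false;
            for (int i = 0; i < n; ++i) {                  // "for all i in parallel"
                if (next[i] != -1) {
                    nrank[i] = rank[i] + rank[next[i]];    // jump over the successor
                    nnext[i] = next[next[i]];
                    active = true;
                } else { nrank[i] = rank[i]; nnext[i] = -1; }
            }
            rank.swap(nrank);
            next.swap(nnext);
        }
        return rank;
    }

    int main() {
        std::vector<int> next = {2, 3, 1, -1};             // list stored out of order: 0 -> 2 -> 1 -> 3
        std::vector<int> r = list_rank(next);
        for (int i = 0; i < 4; ++i) printf("rank[%d] = %d\n", i, r[i]);   // 3 1 2 0
    }

Setting every value to 1 and using addition, as the slide notes, makes list ranking a special case of parallel prefix.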

20 List ranking illustration: an ordered list vs. a random list. (Figure: the X.next successor values for both layouts of the same list.)

21 List ranking key idea: 1. Chop X randomly into s pieces. 2. Traverse each piece using a serial algorithm. 3. Compute the global rank of each element using the result computed in the second step. In the Helman-JaJa model, T_M(n, p) = O(n/p).

22 Connected Components: a building block for many graph algorithms – minimum spanning tree, biconnected components, planarity testing, etc. – and representative of the “graft-and-shortcut” approach. CRCW PRAM algorithms – [Shiloach & Vishkin ’82]: O(log n) time, O((m+n) log n) work – [Gazit ’91]: randomized, optimal, O(log n) time. CREW algorithms – [Han & Wagner ’90]: O(log² n) time, O((m + n log n) log n) work.

23 Shiloach-Vishkin algorithm. Input: n isolated vertices and m PRAM processors. Each processor P_i grafts a tree rooted at vertex v_i onto the tree that contains one of its neighbors u, under the constraint u < v_i. Grafting creates k ≥ 1 connected subgraphs, and each subgraph is then shortcut so that the depth of its tree is reduced by at least half. Repeat graft and shortcut until no more grafting is possible. Runs on an arbitrary CRCW PRAM in O(log n) time with O(m) processors. Helman-JaJa model: T_M = (3m/p + 2) log n, T_B = 4 log n. (Figure: graft and shortcut illustrated on a four-vertex example.)

24 Shiloach-Vishkin algorithm pseudo-code
Input: (1) a set of m edges (i, j) given in arbitrary order; (2) array D[1..n] with D[i] = i.
Output: array D[1..n] with D[i] being the component to which vertex i belongs.
begin
  while true do
    1. for (i, j) ∈ E in parallel do
         if D[i] = D[D[i]] and D[j] < D[i] then D[D[i]] = D[j];
    2. for (i, j) ∈ E in parallel do
         if i belongs to a star and D[j] ≠ D[i] then D[D[i]] = D[j];
    3. if all vertices are in rooted stars then exit;
    for all i in parallel do D[i] = D[D[i]];
end
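A hedged, sequential C++ sketch in the spirit of the pseudo-code above: it hooks symmetrically on every edge and omits the star test, so it produces correct component labels when iterated to convergence but does not carry the O(log n) round bound of the full Shiloach-Vishkin algorithm; all names are illustrative.

    #include <vector>
    #include <cstdio>
    #include <utility>

    // Graft-and-shortcut connected components (simplified, sequential simulation).
    std::vector<int> connected_components(int n, const std::vector<std::pair<int,int>>& edges) {
        std::vector<int> D(n);
        for (int i = 0; i < n; ++i) D[i] = i;                  // D[i] = i initially

        bool changed = true;
        while (changed) {
            changed = false;
            // Graft: hook the root of the larger-labeled endpoint onto the smaller label.
            for (auto [i, j] : edges) {
                if (D[i] == D[D[i]] && D[j] < D[i]) { D[D[i]] = D[j]; changed = true; }
                if (D[j] == D[D[j]] && D[i] < D[j]) { D[D[j]] = D[i]; changed = true; }
            }
            // Shortcut: pointer jumping reduces the depth of each tree.
            for (int i = 0; i < n; ++i)
                if (D[i] != D[D[i]]) { D[i] = D[D[i]]; changed = true; }
        }
        return D;                                              // D[i] = component label of vertex i
    }

    int main() {
        std::vector<std::pair<int,int>> edges = {{0,1},{1,2},{3,4}};
        std::vector<int> D = connected_components(5, edges);
        for (int i = 0; i < 5; ++i) printf("vertex %d -> component %d\n", i, D[i]);
    }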

25 An example higher-level algorithm, typically composed of several low-level efficient algorithms. Tarjan-Vishkin’s biconnected components algorithm: O(log n) time, O(m+n) work. 1. Compute a spanning tree T for the input graph G. 2. Compute an Eulerian circuit for T. 3. Root the tree at an arbitrary vertex. 4. Preorder-number all the vertices. 5. Label edges using the vertex numbering. 6. Compute connected components using the Shiloach-Vishkin algorithm.

26 Data structures: graph representation. Static case – dense graphs (m = O(n²)): adjacency matrix commonly used; sparse graphs: adjacency lists. Dynamic case – the representation depends on the common-case query: edge insertions or deletions? vertex insertions or deletions? edge weight updates? graph update rate; queries: connectivity, paths, flow, etc. Optimizing for locality is a key design consideration.
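For the sparse static case, the compressed sparse row (CSR) layout (offset array plus packed adjacency array) is the kind of representation the BFS discussion later in the lecture assumes; the sketch below builds one from an edge list, with illustrative field names.

    #include <vector>
    #include <cstdio>
    #include <utility>

    struct CSRGraph {
        int n;                        // number of vertices
        std::vector<int> row_off;     // size n+1: adjacencies of v live in adj[row_off[v] .. row_off[v+1])
        std::vector<int> adj;         // size m for a directed edge list
    };

    CSRGraph build_csr(int n, const std::vector<std::pair<int,int>>& edges) {
        CSRGraph g{n, std::vector<int>(n + 1, 0), std::vector<int>(edges.size())};
        for (auto [u, v] : edges) g.row_off[u + 1]++;                    // count out-degrees
        for (int i = 0; i < n; ++i) g.row_off[i + 1] += g.row_off[i];    // prefix sums -> offsets
        std::vector<int> cursor(g.row_off.begin(), g.row_off.end() - 1);
        for (auto [u, v] : edges) g.adj[cursor[u]++] = v;                // scatter edge targets
        return g;
    }

    int main() {
        CSRGraph g = build_csr(4, {{0,1},{0,2},{1,3},{2,3}});
        for (int v = 0; v < g.n; ++v)
            printf("vertex %d has out-degree %d\n", v, g.row_off[v+1] - g.row_off[v]);
    }

The adj and row_off arrays together account for the m+n-word footprint quoted later for the sparse representation.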

27 A wide range seen in graph algorithms: array, list, queue, stack, set, multiset, tree Implementations are typically array-based for performance considerations. Key data structure considerations in parallel graph algorithm design –Practical parallel priority queues –Space-efficiency –Parallel set/multiset operations, e.g., union, intersection, etc. Data structures in (parallel) graph algorithms

28 Lecture Outline – Applications – Review of key results – Case studies: graph traversal-based problems, parallel algorithms (Breadth-First Search, Single-source Shortest Paths, Betweenness Centrality, Community Identification)

29 Concurrency –Simulating PRAM algorithms: hardware limits of memory bandwidth, number of outstanding memory references; synchronization Locality Work-efficiency –Try to improve cache locality, but avoid “too much” superfluous computation Graph Algorithms on today’s systems

30 The locality challenge: “large memory footprint, low spatial and temporal locality impede performance.” Serial performance of “approximate betweenness centrality” on a 2.67 GHz Intel Xeon 5560 (12 GB RAM, 8 MB L3 cache); input: synthetic R-MAT graphs (# of edges m = 8n). (Figure: roughly a 5X drop in performance between the regime with no last-level cache (LLC) misses and the regime with O(m) LLC misses.)

31 The parallel scaling challenge: “classical parallel graph algorithms perform poorly on current parallel systems.” Graph topology assumptions in classical algorithms do not match real-world datasets. Parallelization strategies are at loggerheads with techniques for enhancing memory locality. Classical “work-efficient” graph algorithms may not fully exploit new architectural features – increasing complexity of the memory hierarchy (x86), DMA support (Cell), wide SIMD, floating point-centric cores (GPUs). Tuning an implementation to minimize parallel overhead is non-trivial – shared memory: minimizing the overhead of locks and barriers – distributed memory: bounding message buffer sizes, bundling messages, overlapping communication with computation.

32 Optimizing BFS on cache-based multicore platforms, for networks with “power-law” degree distributions.
Problem specification and assumptions:
  No. of vertices/edges: 10^6 ~ 10^9
  Edge/vertex ratio: 1 ~ 100
  Static/dynamic: static
  Diameter: O(1) ~ O(log n)
  Weighted/unweighted: unweighted
  Vertex degree distribution: unbalanced (“power law”)
  Directed/undirected: both
  Simple/multi/hypergraph: multigraph
  Granularity of computation at vertices/edges: minimal
  Exploiting domain-specific characteristics: partially
Test data: synthetic R-MAT networks (data: Mislove et al., IMC 2007).

33 Graph traversal (BFS) problem definition. Input: a graph and a source vertex. Output: the distance of every vertex from the source. Memory requirements (# of machine words): sparse graph representation m+n; stack of visited vertices n; distance array n. (Figure: a ten-vertex example graph with the source vertex marked and BFS distances 1-4 shown.)

34 Parallel BFS strategies. 1. Expand the current frontier (level-synchronous approach, suited for low-diameter graphs): the adjacencies of all vertices in the current frontier are visited in parallel; O(D) parallel steps. 2. Stitch together multiple concurrent traversals (Ullman-Yannakakis approach, suited for high-diameter graphs): path-limited searches from “super vertices”, then APSP between the “super vertices”. (Figures: both strategies shown on the ten-vertex example graph.)
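A minimal C++/OpenMP sketch of strategy 1 above: the adjacencies of the current frontier are scanned in parallel, and a compare-and-swap on the distance array decides which thread claims each newly discovered vertex. This illustrates the idea only, not the tuned implementation evaluated later in the lecture; the CSR field names and helper names are assumptions of the sketch.

    #include <vector>
    #include <atomic>
    #include <cstdio>

    // Level-synchronous BFS over a CSR graph (row_off/adj); dist[v] = -1 marks "not yet visited".
    std::vector<int> bfs(int n, const std::vector<int>& row_off,
                         const std::vector<int>& adj, int source) {
        std::vector<std::atomic<int>> dist(n);
        for (int v = 0; v < n; ++v) dist[v].store(-1);
        dist[source].store(0);

        std::vector<int> frontier = {source};
        int level = 0;
        while (!frontier.empty()) {
            std::vector<int> next;
            #pragma omp parallel
            {
                std::vector<int> local_next;                    // thread-private output buffer
                #pragma omp for nowait
                for (size_t i = 0; i < frontier.size(); ++i) {
                    int u = frontier[i];
                    for (int e = row_off[u]; e < row_off[u + 1]; ++e) {
                        int v = adj[e], unvisited = -1;
                        // Exactly one thread wins the CAS and enqueues v.
                        if (dist[v].compare_exchange_strong(unvisited, level + 1))
                            local_next.push_back(v);
                    }
                }
                #pragma omp critical
                next.insert(next.end(), local_next.begin(), local_next.end());
            }
            frontier.swap(next);
            ++level;
        }
        std::vector<int> d(n);
        for (int v = 0; v < n; ++v) d[v] = dist[v].load();      // -1 = unreachable
        return d;
    }

    int main() {
        // Path graph 0-1-2-3 stored in CSR form (each undirected edge stored both ways).
        std::vector<int> row_off = {0, 1, 3, 5, 6};
        std::vector<int> adj = {1, 0, 2, 1, 3, 2};
        std::vector<int> d = bfs(4, row_off, adj, 0);
        for (int v = 0; v < 4; ++v) printf("dist[%d] = %d\n", v, d[v]);   // 0 1 2 3
    }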

35 A deeper dive into the “level-synchronous” strategy: where do the random accesses originate? 1. Ordering of vertices in the “current frontier” array, i.e., accesses to the adjacency indexing array: cumulative accesses O(n). 2. Ordering of the adjacency list of each vertex: cumulative O(m). 3. Sifting through adjacencies to check whether each has been visited: cumulative accesses O(m). (Figure: example access patterns – idx array: 53, 31, 74, 26; d array: 0, 84, 0, 84, 93, 44, 63, 0, 0, 11.)

36 Performance observations. (Figures: graph expansion and edge filtering behavior for the Youtube and Flickr social networks.)

37 Improving locality: vertex relabeling. A well-studied problem, with slight differences in problem formulations – linear algebra: sparse matrix column reordering to reduce bandwidth and reveal dense blocks – databases/data mining: reordering bitmap indices for better compression; permuting vertices of WWW snapshots and online social networks for compression. An NP-hard problem with several known heuristics – we require fast, linear-work approaches – existing ones: BFS- or DFS-based, Cuthill-McKee, Reverse Cuthill-McKee, exploiting overlap in adjacency lists, dimensionality reduction. (Figure: sparse matrix non-zero patterns before and after relabeling.)

38 Improving locality: optimizations. Recall: potentially O(m) non-contiguous memory references in edge traversal (to check whether a vertex has been visited) – e.g., access order: 53, 31, 31, 26, 74, 84, 0, … Objective: reduce TLB misses and private cache misses, exploit the shared cache. Optimizations: 1. Sort the adjacency list of each vertex – helps order memory accesses, reduce TLB misses. 2. Permute vertex labels – enhances spatial locality. 3. Cache-blocked edge visits – exploit temporal locality.

39 Improving locality: cache blocking. Instead of processing the adjacencies of each vertex serially, exploit the sorted adjacency list structure with blocked accesses. Requires multiple passes through the frontier array and tuning for the optimal block size; note that the frontier array size may be O(n). (Figure: linear processing vs. the new cache-blocked approach over the frontier and adjacency (d) arrays, with metadata denoting the blocking pattern; blocks are tuned to the L2 cache size and high-degree vertices are processed separately.)

40 Vertex relabeling heuristic, similar to older heuristics but tuned for small-world networks: 1. A high percentage of vertices in social and information networks have (out-)degree 0, 1, or 2 => store their adjacencies explicitly in the indexing data structure; augment the adjacency indexing data structure (with two additional words) and the frontier array (with one bit). 2. Process “high-degree” vertices’ adjacencies in linear order, but other vertices with d-array cache blocking. 3. Form dense blocks around high-degree vertices → Reverse Cuthill-McKee, removing degree-1 and degree-2 vertices.

41 1. Software prefetching on the Intel Core i7 (supports 32 loads and 20 stores in flight) –Speculative loads of index array and adjacencies of frontier vertices will reduce compulsory cache misses. 2. Aligning adjacency lists to optimize memory accesses –16-byte aligned loads and stores are faster. –Alignment helps reduce cache misses due to fragmentation –16-byte aligned non-temporal stores (during creation of new frontier) are fast. 3. SIMD SSE integer intrinsics to process “high-degree vertex” adjacencies. 4. Fast atomics (BFS is lock-free w/ low contention, and CAS-based intrinsics have very low overhead) 5. Hugepage support (significant TLB miss reduction) 6. NUMA-aware memory allocation exploiting first-touch policy Architecture-specific Optimizations
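Item 4 in the list above relies on a fetch-and-add to keep BFS lock-free; here is a minimal stand-alone sketch of that pattern, where an atomic counter hands out unique slots in the next-frontier array (the predicate and names are placeholders, not the lecture’s code, and OpenMP is assumed).

    #include <atomic>
    #include <vector>
    #include <cstdio>

    int main() {
        const int n = 1000;
        std::vector<int> next_frontier(n);
        std::atomic<int> count{0};

        #pragma omp parallel for
        for (int v = 0; v < n; ++v) {
            if (v % 3 == 0) {                        // stand-in for "v was just discovered"
                int slot = count.fetch_add(1);       // atomically claim a unique index
                next_frontier[slot] = v;
            }
        }
        printf("frontier size = %d\n", count.load());   // 334 vertices claimed, no locks
    }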

42 Experimental setup.
Network: n, m, max. out-degree, % of vertices w/ out-degree 0, 1, 2
  Orkut: 3.07M, 223M, 32K, 5
  LiveJournal: 5.28M, 77.4M, 9K, 40
  Flickr: 1.86M, 22.6M, 26K, 73
  Youtube: 1.15M, 4.94M, 28K, 76
  R-MAT: n = 8M-64M, m = 8n, max. out-degree n^0.6
Platform: Intel Xeon 5560 (Core i7, “Nehalem”), 2 sockets x 4 cores x 2-way SMT, 12 GB DRAM, 8 MB shared L3, 51.2 GB/s peak bandwidth, 2.66 GHz.
Performance averaged over 10 different source vertices, 3 runs each.

43 Impact of optimization strategies (optimization: generality, impact*, tuning required?).
  (Preproc.) Sort adjacency lists: high, –, no
  (Preproc.) Permute vertex labels: medium, –, yes
  Preproc. + binning frontier vertices + cache blocking: medium, 2.5x, yes
  Lock-free parallelization: medium, 2.0x, no
  Low-degree vertex filtering: low, 1.3x, no
  Software prefetching: medium, 1.10x, yes
  Aligning adjacencies, streaming stores: medium, 1.15x, no
  Fast atomic intrinsics: high, 2.2x, no
* Optimization speedup (performance on 4 cores) w.r.t. the baseline parallel approach, on a synthetic R-MAT graph (n = 2^23, m = 2^26).

44 Cache locality improvement Theoretical count of the number of non- contiguous memory accesses: m+3n Performance count: # of non-contiguous memory accesses (assuming cache line size of 16 words)

45 Parallel performance (Orkut graph): parallel speedup 4.9; speedup over baseline 2.9; execution time 0.28 seconds (8 threads). Graph: 3.07 million vertices, 220 million edges; single socket of Intel Xeon 5560 (Core i7).

46 Lecture Outline – Applications – Review of key results – Case studies: graph traversal-based problems, parallel algorithms (Breadth-First Search, Single-source Shortest Paths, Betweenness Centrality, Community Identification)

47 Parallel Single-source Shortest Paths (SSSP) algorithms. With edge weights, concurrency is the primary challenge! No known PRAM algorithm runs in sub-linear time and O(m + n log n) work. Parallel priority queues: relaxed heaps [DGST88], [BTZ98]. Ullman-Yannakakis randomized approach [UY90]. Meyer and Sanders, ∆-stepping algorithm [MS03]. Distributed-memory implementations are based on graph partitioning, with heuristics for load balancing and termination detection. K. Madduri, D.A. Bader, J.W. Berry, and J.R. Crobak, “An Experimental Study of a Parallel Shortest Path Algorithm for Solving Large-Scale Graph Instances,” Workshop on Algorithm Engineering and Experiments (ALENEX), New Orleans, LA, January 6, 2007.

48 ∆-stepping algorithm [MS03]: a label-correcting algorithm that can also relax edges from unsettled vertices. ∆-stepping is an “approximate bucket implementation of Dijkstra’s algorithm”: ∆ is the bucket width, vertices are ordered using buckets representing priority ranges of size ∆, and each bucket may be processed in parallel.

49 ∆-stepping algorithm: illustration (∆ = 0.1, say). Example: a seven-vertex graph with edge weights between 0.01 and 0.56; the d array (initially all ∞) and the buckets are shown at each step. One parallel phase while the current bucket is non-empty: (i) inspect light edges; (ii) construct a set of “requests” (R); (iii) clear the current bucket; (iv) remember deleted vertices (S); (v) relax the request pairs in R. Then relax heavy request pairs (from S) and go on to the next bucket.

50-57 (Figures: the illustration steps through the example – initialization inserts the source s into bucket 0 with d(s) = 0; successive phases generate requests from light edges, clear bucket 0, accumulate the settled vertices in S, and relax the requests to fill in the d array; heavy edges are then relaxed and later buckets are processed.)

58-61 ∆-stepping pseudo-code (figure): classify edges as “heavy” and “light”; relax light edges in phases, repeating until B[i] is empty; then relax heavy edges – no reinsertions into the bucket occur in this step.
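A sequential C++ sketch of the bucket machinery just outlined, using lazy re-insertion (stale bucket entries are skipped when dequeued); it shows the light/heavy edge split and the per-bucket phases but none of the parallelization, and the graph in main is a small made-up example rather than the one from the illustration.

    #include <vector>
    #include <cstdio>
    #include <limits>

    struct Edge { int to; double w; };

    // Sequential delta-stepping: buckets of width delta, light edges relaxed in phases,
    // heavy edges relaxed once per settled vertex after the bucket empties.
    std::vector<double> delta_stepping(const std::vector<std::vector<Edge>>& g,
                                       int source, double delta) {
        const double INF = std::numeric_limits<double>::infinity();
        const int n = (int)g.size();
        std::vector<double> d(n, INF);
        std::vector<std::vector<int>> buckets(1);

        auto relax = [&](int v, double x) {
            if (x < d[v]) {
                d[v] = x;
                size_t b = (size_t)(x / delta);
                if (b >= buckets.size()) buckets.resize(b + 1);
                buckets[b].push_back(v);           // lazy insertion: stale copies skipped later
            }
        };
        relax(source, 0.0);

        for (size_t k = 0; k < buckets.size(); ++k) {
            std::vector<int> S;                            // vertices settled in this bucket
            while (!buckets[k].empty()) {                  // one "phase" per iteration
                std::vector<int> cur;
                cur.swap(buckets[k]);
                for (int u : cur) {
                    if ((size_t)(d[u] / delta) != k) continue;    // stale entry
                    S.push_back(u);
                    for (const Edge& e : g[u])             // light edges: w <= delta
                        if (e.w <= delta) relax(e.to, d[u] + e.w);
                }
            }
            for (int u : S)                                // heavy edges: no reinsertion into B[k]
                for (const Edge& e : g[u])
                    if (e.w > delta) relax(e.to, d[u] + e.w);
        }
        return d;
    }

    int main() {
        std::vector<std::vector<Edge>> g(4);               // small weighted digraph, delta = 0.1
        g[0] = {{1, 0.05}, {2, 0.23}};
        g[1] = {{2, 0.02}, {3, 0.56}};
        g[2] = {{3, 0.13}};
        std::vector<double> d = delta_stepping(g, 0, 0.1);
        for (int v = 0; v < 4; ++v) printf("d[%d] = %.2f\n", v, d[v]);   // 0.00 0.05 0.07 0.20
    }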

62 Number of phases (a machine-independent performance count). (Figure: low-diameter graph families require few phases; high-diameter families require many.)

63 Average shortest path weight for various graph families: ~2^20 vertices, 2^22 edges, directed graphs, edge weights normalized to [0, 1].

64 Last non-empty bucket (machine-independent performance count) Fewer buckets, more parallelism

65 Number of bucket insertions (machine-independent performance count)

66 Lecture Outline – Applications – Review of key results – Case studies: graph traversal-based problems, parallel algorithms (Breadth-First Search, Single-source Shortest Paths, Betweenness Centrality, Community Identification)

67 Betweenness Centrality. Centrality: a quantitative measure that captures the importance of a vertex or edge in a graph; examples include degree, closeness, eigenvalue, and betweenness centrality. Betweenness centrality: BC(v) = Σ_{s≠v≠t} σ_st(v) / σ_st, where σ_st is the number of shortest paths between s and t and σ_st(v) is the number of those paths passing through v. Applied to several real-world networks: social interactions, WWW, epidemiology, systems biology.

68 Algorithms for Computing Betweenness. All-pairs shortest path approach: compute the length and number of shortest paths between all s-t pairs (O(n³) time) and sum up the fractional dependency values (O(n²) space). Brandes’ algorithm (2003): augment a single-source shortest path computation to count paths; uses the Bellman criterion; O(mn) work and O(m+n) space.
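For reference, a compact sequential sketch of Brandes-style betweenness for unweighted graphs: a BFS from each source counts shortest paths (sigma) and records predecessors, then dependencies (delta) are accumulated in reverse BFS order. The parallel algorithms discussed below augment exactly these two steps (and, in the newer variant, store successors rather than predecessors); the names here are illustrative.

    #include <vector>
    #include <queue>
    #include <stack>
    #include <cstdio>

    // Brandes-style betweenness for an unweighted graph given as adjacency lists.
    std::vector<double> betweenness(const std::vector<std::vector<int>>& g) {
        const int n = (int)g.size();
        std::vector<double> bc(n, 0.0);
        for (int s = 0; s < n; ++s) {
            std::vector<int> dist(n, -1), sigma(n, 0);
            std::vector<std::vector<int>> pred(n);
            std::vector<double> delta(n, 0.0);
            std::stack<int> S;
            std::queue<int> Q;
            dist[s] = 0; sigma[s] = 1; Q.push(s);
            while (!Q.empty()) {                           // traversal + path counting
                int u = Q.front(); Q.pop(); S.push(u);
                for (int v : g[u]) {
                    if (dist[v] < 0) { dist[v] = dist[u] + 1; Q.push(v); }
                    if (dist[v] == dist[u] + 1) { sigma[v] += sigma[u]; pred[v].push_back(u); }
                }
            }
            while (!S.empty()) {                           // dependency accumulation
                int w = S.top(); S.pop();
                for (int u : pred[w])
                    delta[u] += (double)sigma[u] / sigma[w] * (1.0 + delta[w]);
                if (w != s) bc[w] += delta[w];
            }
        }
        return bc;                                         // halve the scores for undirected graphs
    }

    int main() {
        std::vector<std::vector<int>> g = {{1}, {0, 2}, {1, 3}, {2}};   // path 0-1-2-3
        std::vector<double> bc = betweenness(g);
        for (int v = 0; v < (int)g.size(); ++v) printf("BC[%d] = %.1f\n", v, bc[v] / 2.0);   // 0 2 2 0
    }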

69 Our New Parallel Algorithms. Madduri, Bader (2006): parallel algorithms for computing exact and approximate betweenness centrality – for low-diameter sparse graphs (diameter D = O(log n), m = O(n log n)) – exact algorithm: O(mn) work, O(m+n) space, O(nD + nm/p) time. Madduri et al. (2009): a new parallel algorithm with lower synchronization overhead and fewer non-contiguous memory references – in practice 2-3X faster than the previous algorithm – lock-free => better scalability on large parallel systems.

70 Parallel BC Algorithm Consider an undirected, unweighted graph High-level idea: Level-synchronous parallel Breadth-First Search augmented to compute centrality scores Two steps –traversal and path counting –dependency accumulation

71-75 Parallel BC algorithm illustration. 1. Traversal step: visit adjacent vertices, update distance and path counts. Level-synchronous approach: the adjacencies of all vertices in the current frontier can be visited in parallel. At the end of the traversal step, we have all reachable vertices, their corresponding predecessor multisets, and D values. (Figures: the ten-vertex example graph with the source vertex marked, and the D, S, and P structures filled in level by level.)

76 Exploit concurrency in visiting adjacencies, as we assume that the graph diameter is small: O(log n) Upper bound on size of each predecessor multiset: In-degree Potential performance bottlenecks: atomic updates to predecessor multisets, atomic increments of path counts New algorithm: Based on observation that we don’t need to store “predecessor” vertices. Instead, we store successor edges along shortest paths. –simplifies the accumulation step –reduces an atomic operation in traversal step –cache-friendly! Graph traversal step analysis

77 Graph traversal step locality analysis.

for all vertices u at level d in parallel do
  for all adjacencies v of u in parallel do
    dv = D[v];
    if (dv < 0)
      vis = fetch_and_add(&Visited[v], 1);
      if (vis == 0)
        D[v] = d + 1;
        pS[count++] = v;
      fetch_and_add(&sigma[v], sigma[u]);
      fetch_and_add(&Scount[u], 1);
    if (dv == d + 1)
      fetch_and_add(&sigma[v], sigma[u]);
      fetch_and_add(&Scount[u], 1);

Annotations from the slide: all the vertices are in a contiguous block (stack); all the adjacencies of a vertex are stored compactly (graph representation); the store to S[u]; the D[v], Visited[v], and sigma[v] accesses are non-contiguous memory accesses; better cache utilization is likely if D[v], Visited[v], and sigma[v] are stored contiguously.

78-79 Parallel BC algorithm illustration. 2. Accumulation step: pop vertices from the stack and update dependence (delta) scores; this can also be done in a level-synchronous manner. (Figures: the example graph with the S, P, and Delta structures.)

80 Accumulation step locality analysis.

for level d = GraphDiameter-2 to 1 do
  for all vertices v at level d in parallel do
    for all w in S[v] in parallel do      (reduction on delta)
      delta_sum_v = delta[v] + (1 + delta[w]) * sigma[v] / sigma[w];
    BC[v] = delta[v] = delta_sum_v;

Annotations from the slide: all the vertices are in a contiguous block (stack); each S[v] is a contiguous block; the delta update is the only floating-point operation in the code.

81 Centrality analysis applied to protein interaction networks. (Figure: a protein, Ensembl ID ENSG00000145332.2 (Kelch-like protein 8), shown with its 43 interactions.)

82 Lecture Outline – Applications – Review of key results – Case studies: graph traversal-based problems, parallel algorithms (Breadth-First Search, Single-source Shortest Paths, Betweenness Centrality, Community Identification)

83 Community Identification. Implicit communities in large-scale networks are of interest in many cases – WWW, social networks, biological networks. Formulated as a graph clustering problem – informally, identify/extract “dense” subgraphs. Several different objective functions exist – metrics based on intra-cluster vs. inter-cluster edges, community sizes, number of communities, overlap, … A highly studied research problem – hundreds of papers yearly in CS, social sciences, physics, computational biology, and applied math journals and conferences.

84 Bottom-up approach: Start with n singleton communities, iteratively merge pairs to form larger communities. –What measure to minimize/maximize? A measure known as modularity –How do we order merges? Use a priority queue Parallelization: perform multiple “independent” merges simultaneously. Agglomerative Clustering, Parallelization
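The measure mentioned above, modularity, can be stated concretely: for an undirected, unweighted graph with m edges, Q = Σ_c (e_c/m - (d_c/2m)²), where e_c is the number of intra-community edges and d_c the total degree of community c. Below is a minimal sketch of evaluating Q for a given community assignment, with illustrative names; the greedy agglomerative method repeatedly performs the merge that increases this score the most.

    #include <vector>
    #include <cstdio>
    #include <utility>
    #include <algorithm>

    // Modularity Q of a community assignment comm[v] for an undirected, unweighted edge list.
    double modularity(const std::vector<std::pair<int,int>>& edges, const std::vector<int>& comm) {
        const double m = (double)edges.size();
        int nc = 0;
        for (int c : comm) nc = std::max(nc, c + 1);
        std::vector<double> intra(nc, 0.0), deg(nc, 0.0);
        for (auto [u, v] : edges) {
            deg[comm[u]] += 1.0;
            deg[comm[v]] += 1.0;
            if (comm[u] == comm[v]) intra[comm[u]] += 1.0;
        }
        double Q = 0.0;
        for (int c = 0; c < nc; ++c)
            Q += intra[c] / m - (deg[c] / (2.0 * m)) * (deg[c] / (2.0 * m));
        return Q;
    }

    int main() {
        // Two triangles joined by a single edge, split into the natural two communities.
        std::vector<std::pair<int,int>> edges = {{0,1},{1,2},{2,0},{3,4},{4,5},{5,3},{2,3}};
        std::vector<int> comm = {0, 0, 0, 1, 1, 1};
        printf("Q = %.3f\n", modularity(edges, comm));   // ~0.357
    }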

85 Graph analysis with cache-based multicore systems. SNAP: Small-world Network Analysis and Partitioning – 10-100x faster than competing graph analysis software – parallelism, heuristics exploiting graph topology – can process graphs with billions of vertices and edges – open source: http://snap-graph.sf.net. Optimizations for multicore systems: improving cache locality, minimizing synchronization overhead with atomics. D.A. Bader and K. Madduri, IPDPS 2008, Parallel Computing 2008.

86 Designing fast parallel graph algorithms. (Figure: problem complexity and locality vs. data size, n = # of vertices/edges, from 10^4 to 10^12 and beyond (peta+); the number of passes over the data ranges from constant for “RandomAccess”-like computations to ~n log n for “Stream”-like ones; strategies: improve locality, data reduction/compression, faster methods.) System requirements: high bandwidth (on-chip memory, DRAM, network, IO). Solution: efficiently utilize the available memory bandwidth; algorithmic innovation to avoid corner cases.

87 Applications: Internet and WWW, Scientific computing, Data analysis, Surveillance Earlier work on parallel graph algorithms –PRAM algorithms, list ranking, connected components –graph representations Parallel algorithm case studies –BFS: locality, level-synchronous approach, multicore tuning –Shortest paths: exploiting concurrency, parallel priority queue –Betweenness centrality: importance of locality, atomics Review of lecture

88 Future Research Challenges. New methods/analytics: modeling network dynamics; persistent monitoring of dynamically changing properties. Software: portable, high-performance, extensible routines. Emerging systems: how do we best utilize GPUs and flash disks? (Figure: big data feeding analytics, summarization, visualization, and modeling & simulation.)

89 Questions? Thank you!

