PowerLyra: Differentiated Graph Computation and Partitioning on Skewed Graphs
Rong Chen, Jiaxin Shi, Yanzhe Chen, Haibo Chen
Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University, China
Graphs are Everywhere¹
Traffic network graphs
□ Monitor/react to road incidents
□ Analyze Internet security
Biological networks
□ Predict disease outbreaks
□ Model drug-protein interactions
Social networks
□ Recommend people/products
□ Track information flow
¹ From PPoPP'15 Keynote: Computing in a Persistent (Memory) World, P. Faraboschi
Graph-parallel Processing
Coding graph algorithms as vertex-centric programs that process vertices in parallel and communicate along edges -- the "Think as a Vertex" philosophy
Example: PageRank, a centrality analysis algorithm that iteratively ranks each vertex as a weighted sum of its neighbors' ranks
COMPUTE(v)
  foreach n in v.in_nbrs
    sum += n.rank / n.nedges
  v.rank = 0.15 + 0.85 * sum
  if !converged(v)
    foreach n in v.out_nbrs
      activate(n)
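The pseudocode above can be run as a minimal single-machine simulation. The `Vertex` class and `pagerank()` driver below are illustrative assumptions for this sketch, not PowerLyra's actual API.

```python
# Minimal single-machine sketch of the vertex-centric PageRank above.
# The Vertex class and pagerank() driver are illustrative, not PowerLyra's API.

class Vertex:
    def __init__(self, vid):
        self.vid = vid
        self.rank = 1.0
        self.in_nbrs = []   # neighbors with an edge into this vertex
        self.out_nbrs = []  # neighbors this vertex points to

def compute(v):
    # Gather: weighted sum of in-neighbors' ranks
    total = sum(n.rank / len(n.out_nbrs) for n in v.in_nbrs)
    # Apply: damped update, as in the slide's COMPUTE(v)
    v.rank = 0.15 + 0.85 * total

def pagerank(vertices, iterations=10):
    # A real engine only re-runs activated vertices; here we simply run all.
    for _ in range(iterations):
        for v in vertices:
            compute(v)
    return {v.vid: v.rank for v in vertices}
```

On a 3-vertex directed cycle every vertex has one in- and one out-neighbor, so the ranks stay at the fixed point 1.0.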
The COMPUTE function proceeds in three phases: gather data from neighbors via in-edges, update the vertex with the new data, then scatter data to neighbors via out-edges.
Natural Graphs are Skewed
Graphs in the real world
□ Power-law degree distribution (aka Zipf's law or Pareto)
"most vertices have relatively few neighbors while a few have many neighbors" -- PowerGraph [OSDI'12]
Source: M. E. J. Newman, "Power laws, Pareto distributions and Zipf's law", 2005
Case: Sina Weibo (微博)
□ 167 million active users¹
□ #Followers: #1 has 77,971,093 and #2 has 76,946,782, while #100 still has 14,723,818; typical users have only a few hundred (e.g., 397) or fewer
¹ http://www.chinainternetwatch.com/10735/weibo-q3-2014/
Challenge: LOCALITY vs. PARALLELISM
LOCALITY: make resources locally accessible to hide network latency
For a high-degree vertex (e.g., 77,971,093 followers), a locality-oriented design suffers from:
1. Imbalance
2. High contention
3. Heavy network traffic
Challenge: LOCALITY vs. PARALLELISM
PARALLELISM: evenly parallelize the workload to avoid load imbalance
For a low-degree vertex (e.g., 397 followers), a parallelism-oriented design incurs:
1. Redundant communication, computation & synchronization (not worthwhile)
Challenge: LOCALITY vs. PARALLELISM
The two goals conflict: a skewed graph may have 100 million users with ~100 followers each (favoring LOCALITY) and 100 users with ~100 million followers each (favoring PARALLELISM).
Existing Efforts: One Size Fits All
Pregel [SIGMOD'10] and GraphLab [VLDB'12]
□ Focus on exploiting LOCALITY
□ Partitioning: use edge-cut to evenly assign vertices along with all of their edges
□ Computation: aggregate all resources (i.e., msgs or replicas) of a vertex on the local machine
[Figure: edge-cut example with master vertices and mirror (replica) vertices]
Existing Efforts: One Size Fits All
PowerGraph [OSDI'12] and GraphX [OSDI'14]
□ Focus on exploiting PARALLELISM
□ Partitioning: use vertex-cut to evenly assign edges, replicating vertices
□ Computation: decompose the workload of a vertex across multiple machines
[Figure: vertex-cut example with master and mirror vertices]
Contributions
PowerLyra: differentiated graph computation and partitioning on skewed graphs
□ Hybrid computation and partitioning strategies → embrace locality (low-degree vertices) and parallelism (high-degree vertices)
□ An optimization to improve locality and reduce interference in communication
□ Outperforms PowerGraph (by up to 5.53X) and other systems for both ingress and runtime
Agenda
□ Graph Partitioning
□ Graph Computation
□ Optimization
□ Evaluation
Graph Partitioning: Hybrid-cut
Hybrid-cut: differentiate graph partitioning for low-degree and high-degree vertices
□ Low-cut (inspired by edge-cut): reduce mirrors and exploit locality for low-degree vertices
□ High-cut (inspired by vertex-cut): provide balance and restrict the impact of high-degree vertices
□ Efficiently combine low-cut and high-cut
[Figure: sample graph partitioned with user-defined threshold θ=3, showing high-/low-degree masters and mirrors]
Low-cut: evenly assign low-degree vertices, along with their edges in only one direction (e.g., in-edges), to machines by hashing the (e.g., target) vertex — here hash(X) = X % 3, θ=3
1. Unidirectional access locality
2. No duplicated edges
3. Efficiency at ingress
4. Load balance (edges & vertices)
5. Ready-made heuristics
NOTE: a new heuristic, called Ginger, is provided to further optimize Random hybrid-cut (see paper).
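The low-cut rule above can be sketched in a few lines; `low_cut_assign` is an illustrative name, and the hash function follows the slide's hash(X) = X % 3.

```python
# Sketch of low-cut: every edge goes to the machine chosen by hashing its
# target vertex, so each low-degree vertex and all its in-edges co-locate.

def low_cut_assign(edges, num_machines=3):
    machines = {m: [] for m in range(num_machines)}
    for src, dst in edges:
        machines[dst % num_machines].append((src, dst))  # hash(X) = X % 3
    return machines
```

On the sample graph, all four in-edges of vertex 1 land on machine 1 % 3 = 1, giving unidirectional access locality without duplicating any edge.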
High-cut: distribute one-direction edges (e.g., in-edges) of high-degree vertices to machines by hashing the opposite (e.g., source) vertex — here hash(X) = X % 3, θ=3
1. Avoid adding any mirrors for low-degree vertices (i.e., an upper bound on the mirrors added when assigning a new high-degree vertex)
2. No duplicated edges
3. Efficiency at ingress
4. Load balance (edges & vertices)
Constructing Hybrid-cut (sample graph in edge-list format, θ=3)
1. Load files in parallel
2. Dispatch all edges (low-cut) and count the in-degree of each vertex
3. Re-assign the in-edges of high-degree vertices (high-cut, θ=3)
4. Create mirrors to construct a local graph
NOTE: some formats (e.g., adjacency list) can skip the counting and re-assignment.
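The steps above can be sketched end-to-end. This is a hedged, single-process version (the real system loads and dispatches in parallel, then re-assigns afterward); `hybrid_cut` is an illustrative name.

```python
# Hedged sketch of hybrid-cut construction: count in-degrees, then place every
# edge by its target (low-cut) unless the target is high-degree (in-degree >=
# theta), in which case place it by its source (high-cut).

from collections import Counter, defaultdict

def hybrid_cut(edges, num_machines=3, theta=3):
    in_deg = Counter(dst for _, dst in edges)          # step 2: count
    high = {v for v, d in in_deg.items() if d >= theta}
    placement = defaultdict(list)
    for src, dst in edges:
        if dst in high:                                # step 3: high-cut
            placement[src % num_machines].append((src, dst))
        else:                                          # step 2: low-cut
            placement[dst % num_machines].append((src, dst))
    return dict(placement)
```

On the sample graph only vertex 1 (in-degree 4) is high-degree, and the nine edges end up perfectly balanced, three per machine.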
Agenda
□ Graph Partitioning
□ Graph Computation
□ Optimization
□ Evaluation
Graph Computation: Hybrid-model
Hybrid-model: differentiate the graph computation model for high-degree and low-degree vertices
□ High-degree model: decompose the workload of high-degree vertices for load balance
□ Low-degree model: minimize the communication cost for low-degree vertices
□ Seamlessly run all existing graph algorithms
[Figure: high-degree model (GAS split between master and mirrors) vs. low-degree model (gather local to the master)]
Hybrid-model (High)
High-degree vertex
□ Exploit PARALLELISM
□ Follow the GAS model (inspired by PowerGraph): Gather and Scatter are distributed, Apply is local to the master
□ Comm. cost: ≤ 4 × #mirrors
Example: PageRank in the GAS interface
GATHER(v, n): return n.rank / n.nedges
SUM(a, b): return a + b
GATHER_EDGES = IN
APPLY(v, sum): v.rank = 0.15 + 0.85 * sum
SCATTER(v, n): if !converged(v) activate(n)
SCATTER_EDGES = OUT
Hybrid-model (Low)
Low-degree vertex
□ Exploit LOCALITY
□ Preserve the GAS interface: Gather and Apply are local to the master, only Scatter is distributed
□ Unidirectional access locality
□ Comm. cost: ≤ 1 × #mirrors
OBSERVATION: most algorithms only gather or scatter in one direction (e.g., PageRank: Gather/IN and Scatter/OUT)
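A minimal sketch of the GAS interface from the two model slides above, assuming a toy synchronous engine on one machine; the dict-based vertex representation and the `run_gas` name are illustrative, not PowerLyra's engine.

```python
# Toy synchronous GAS engine for the PageRank program above: Gather over
# in-edges, Sum to combine partial results, Apply to update the vertex.

def gather(v, n):
    return n["rank"] / n["nedges"]

def combine(a, b):          # the slide's SUM(a, b)
    return a + b

def apply_fn(v, acc):
    v["rank"] = 0.15 + 0.85 * acc

def run_gas(vertices, in_nbrs, iterations=10):
    for _ in range(iterations):
        accs = {}
        for vid in vertices:                     # Gather phase (IN edges)
            acc = 0.0
            for n in in_nbrs.get(vid, []):
                acc = combine(acc, gather(vertices[vid], vertices[n]))
            accs[vid] = acc
        for vid in vertices:                     # Apply phase
            apply_fn(vertices[vid], accs[vid])
    return vertices
```

In the hybrid model, the engine would run the Gather phase distributed across mirrors for high-degree vertices, but entirely on the master for low-degree ones.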
Generalization
Seamlessly run all graph algorithms
□ One-direction locality limits expressiveness for some other algorithms (i.e., Gather or Scatter in both IN and OUT)
□ Adaptively downgrade the model (+N msgs, 1 ≤ N ≤ 3)
□ Detect at runtime w/o any changes to applications
e.g., Connected Components (CC) → Gather/NONE and Scatter/ALL → +1 msg in SCATTER (mirror → master) → comm. cost: ≤ 2 × #mirrors
Agenda
□ Graph Partitioning
□ Graph Computation
□ Optimization
□ Evaluation
Challenge: Locality & Interference
Poor locality and high interference in communication:
□ Messages are batched and sent periodically
□ Messages from different nodes are applied to vertices (i.e., masters and mirrors) in parallel
OPT: locality-conscious graph layout
□ Focus on exploiting locality in communication
□ Extend hybrid-cut at ingress (w/o runtime overhead)
Example: Initial State
Both masters and mirrors of all vertices (high- & low-degree) are mixed and stored in a random order on each machine (e.g., hash(X) = (X-1) % 3)
Legend: H = high-master, h = high-mirror, L = low-master, l = low-mirror
Example: Zoning
1. ZONE: divide the vertex space into 4 zones, storing high-masters (Z0), low-masters (Z1), high-mirrors (Z2), and low-mirrors (Z3)
Example: Grouping
2. GROUP: the mirrors (i.e., Z2 and Z3) are further grouped within each zone according to the location of their masters
Example: Sorting
3. SORT: both the masters and the mirrors within a group are sorted by global vertex ID
Example: Rolling
4. ROLL: place mirror groups in a rolling order: the mirror groups on machine n (of p partitions) start from machine (n + 1) % p — e.g., with p = 3, the groups on M1 start from h2, then h0
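The four layout steps can be sketched together. The `layout()` function, its arguments, and the mirror bookkeeping are illustrative assumptions about how a machine might order its local vertex array, not PowerLyra's data structures.

```python
# Sketch of the locality-conscious layout: zone (masters before mirrors, high
# before low), group mirrors by their master's machine, sort each group by
# global vertex ID, and roll mirror groups starting from (n + 1) % p.

def layout(machine_id, num_machines, high_masters, low_masters,
           high_mirrors, low_mirrors):
    """high_mirrors / low_mirrors: dict {owner_machine: [vertex ids]}"""
    order = sorted(high_masters) + sorted(low_masters)   # zones Z0, Z1
    for mirrors in (high_mirrors, low_mirrors):          # zones Z2, Z3
        for step in range(1, num_machines):              # rolling order
            owner = (machine_id + step) % num_machines
            order += sorted(mirrors.get(owner, []))
    return order
```

With p = 3, machine M1's mirror groups start from machine 2 and then machine 0, matching the rolling example above.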
Tradeoff: Ingress vs. Runtime
OPT: locality-conscious graph layout
□ The increase in graph ingress time is modest → it executes independently on each machine → less than 5% (power-law graphs) & around 10% (real-world graphs)
□ Usually more than 10% speedup at runtime → 21% for the Twitter follower graph
Agenda
□ Graph Partitioning
□ Graph Computation
□ Optimization
□ Evaluation
Implementation
PowerLyra
□ Based on the latest PowerGraph (v2.2, Mar. 2014)
□ A separate engine and partitioning algorithms
□ Supports both sync and async execution engines
□ Seamlessly runs all graph toolkits in GraphLab and respects its fault-tolerance model
□ Ports the Random hybrid-cut to GraphX¹
¹ We only port Random hybrid-cut, to preserve the graph partitioning interface of GraphX.
The source code and a brief instruction are available at http://ipads.se.sjtu.edu.cn/projects/powerlyra.html
Evaluation
Baseline: latest PowerGraph (v2.2, Mar. 2014) with various vertex-cuts (Grid/Oblivious/Coordinated)
Platforms¹
1. A 48-node VM-based cluster (each: 4 cores, 12GB RAM)
2. A 6-node physical cluster (each: 24 cores, 64GB RAM)
Algorithms and graphs
□ 3 graph-analytics algorithms (PageRank², CC, DIA) and 2 MLDM algorithms (ALS, SGD)
□ 5 real-world graphs, 5 synthetic graphs³, etc.
¹ Clusters are connected via 1GbE.
² We run 10 iterations for PageRank.
³ Synthetic graphs have a fixed 10 million vertices.
Performance
[Figure: speedup over the default PowerGraph on the 48-node cluster, ranging from 2.0X to 3.3X on power-law (α) graphs and up to 5.5X on real-world graphs]
Scalability
[Figure: execution time vs. #machines (48-node cluster) and vs. #vertices in millions (6-node cluster), compared with the default PowerGraph]
Breakdown
[Figure: performance breakdown on power-law (α) graphs, 48-node cluster, vs. the default PowerGraph]
Graph Partitioning Comparison
[Figure: comparison of partitioning strategies on power-law (α) and real-world graphs]
Threshold
Impact of different thresholds (θ), 48-node cluster:
□ Using high-cut for all vertices or low-cut for all vertices performs poorly
□ Execution time is stable over a large range of θ → the difference is lower than 1 sec for 100 ≤ θ ≤ 500
vs. Other Systems¹
PageRank with 10 iterations on the 6-node cluster, on the Twitter follower graph and a power-law (α=2.0) graph, against both in-memory and out-of-core systems
[Figure: execution times in seconds across systems]
¹ We use Giraph 1.1, CombBLAS 1.4, GraphX 1.1, Galois 2.2.1, X-Stream 1.0, as well as the latest versions of GPS, PowerGraph, Polymer, and GraphChi.
Conclusion
Existing systems use a "ONE SIZE FITS ALL" design for skewed graphs, resulting in suboptimal performance
PowerLyra: a hybrid and adaptive design that differentiates computation and partitioning for low-degree and high-degree vertices
Embrace LOCALITY and PARALLELISM
□ Outperforms PowerGraph by up to 5.53X
□ Also much faster than other systems like GraphX, Giraph, and CombBLAS for both ingress and runtime
□ Random hybrid-cut improves GraphX by 1.33X
Questions & Thanks
http://ipads.se.sjtu.edu.cn/projects/powerlyra.html
Institute of Parallel and Distributed Systems
Related Work
Other graph processing systems
□ GENERAL: Pregel [SIGMOD'10], GraphLab [VLDB'12], PowerGraph [OSDI'12], GraphX [OSDI'14]
□ OPTIMIZATION: Mizan [EuroSys'13], Giraph++ [VLDB'13], PowerSwitch [PPoPP'15]
□ SINGLE MACHINE: GraphChi [OSDI'12], X-Stream [SOSP'13], Galois [SOSP'13], Polymer [PPoPP'15]
□ OTHER PARADIGMS: CombBLAS [IJHPCA'11], SociaLite [VLDB'13]
Graph replication and partitioning
□ EDGE-CUT: Stanton et al. [SIGKDD'12], Fennel [WSDM'14]
□ VERTEX-CUT: Greedy [OSDI'12], GraphBuilder [GRADES'13]
□ HEURISTICS: 2D [SC'05], Degree-Based Hashing [NIPS'14]
Backup
Downgrade
OBSERVATION: most algorithms only gather or scatter in one direction — the low-degree model sits at the sweet spot in #msgs
[Figure: downgrade paths of the low-degree model, adding GATHER and/or SCATTER messages between master and mirror]
Other Algorithms and Graphs (48-node cluster, normalized speedup over the default for ingress and runtime)
□ MLDM algorithms (Netflix: |V|=0.5M, |E|=99M): G/OUT & S/NONE, across latent dimensions
□ Algorithms with G/NONE & S/ALL
□ Non-skewed graph (RoadUS: |V|=23.9M, |E|=58.3M)
Ingress Time vs. Execution Time
Fast CPU & Networking
Testbed: 6-node physical clusters
1. SLOW: 24 cores (AMD 6168, 1.9GHz, L3: 12MB), 64GB RAM, 1GbE
2. FAST: 20 cores (Intel E5-2650, 2.3GHz, L3: 24MB), 64GB RAM, 10GbE
[Figure: execution time (sec) on the Twitter follower graph and a power-law (α=2.0) graph]