
1 Ex-MATE: Data-Intensive Computing with Large Reduction Objects and Its Application to Graph Mining Wei Jiang and Gagan Agrawal

2 Outline
- Background
- System Design of Ex-MATE
- Parallel Graph Mining with Ex-MATE
- Experiments
- Related Work
- Conclusion

3 Outline
- Background
- System Design of Ex-MATE
- Parallel Graph Mining with Ex-MATE
- Experiments
- Related Work
- Conclusion

4 Background (I)
- Map-Reduce
  - Simple API: map and reduce
  - Easy to write parallel programs
  - Fault-tolerant for large-scale data centers
  - Performance? Always a concern for the HPC community
- Generalized Reduction
  - First proposed in FREERIDE, developed at Ohio State (2001-2003)
  - Shares a similar processing structure with Map-Reduce
  - The key difference lies in a programmer-managed reduction object
  - Better performance?

5 Map-Reduce Execution (figure)

6 Comparing Processing Structures
- The reduction object represents the intermediate state of the execution
- The reduce function is commutative and associative
- Sorting and grouping overheads are eliminated by the reduction function and reduction object (see the sketch below)
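
A minimal sketch of what the generalized-reduction structure looks like, using word counting as an illustrative stand-in task (this is not the actual FREERIDE/MATE API): each input element is folded directly into a programmer-managed reduction object, and partial objects from different threads or nodes are merged with a commutative, associative combine, so Map-Reduce's materialization, sorting, and grouping of intermediate (key, value) pairs disappears.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical reduction object: a shared accumulator updated in place,
// representing the intermediate state of the whole computation.
struct ReductionObject {
    std::unordered_map<std::string, long> counts;
};

// User-defined reduction: folds one input element into the object.
// Because it is commutative and associative, elements may arrive in any order.
void local_reduce(ReductionObject& ro, const std::string& word) {
    ro.counts[word] += 1;  // update intermediate state directly; nothing is emitted
}

// Merge two partial reduction objects (e.g., from two threads or nodes).
void combine(ReductionObject& dst, const ReductionObject& src) {
    for (const auto& kv : src.counts) dst.counts[kv.first] += kv.second;
}
```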

7 Our Previous Work
- A comparative study between FREERIDE and Hadoop:
  - FREERIDE outperformed Hadoop by factors of 5 to 10
  - Possible reasons: Java vs. C++? HDFS overheads? Inefficiency of Hadoop? API differences?
- Developed MATE (Map-reduce system with an AlternaTE API) on top of Phoenix from Stanford:
  - Adopted generalized reduction
  - Focused on the API differences
  - MATE improved on Phoenix by an average of 50%
  - Avoids the large set of intermediate pairs between map and reduce
  - Reduces memory requirements

8 Extending MATE
- Main limitations of the original MATE:
  - Works only on a single multi-core machine
  - Datasets must reside in memory
  - Assumes the reduction object fits in memory
- This paper extends MATE to address these limitations
  - Focuses on graph mining, an emerging class of applications
  - These applications require large reduction objects as well as large-scale datasets
  - E.g., PageRank can have an 8 GB reduction object (a double-precision rank vector for one billion pages, at 8 bytes per entry, is 8 GB)
  - Adds support for managing reduction objects of arbitrary size
  - Also reads disk-resident input data
- Evaluated Ex-MATE against PEGASUS, a Hadoop-based graph mining system

9 Outline
- Background
- System Design of Ex-MATE
- Parallel Graph Mining with Ex-MATE
- Experiments
- Related Work
- Conclusion

10 System Design and Implementation
- System design of Ex-MATE:
  - Execution overview
  - Support for distributed environments
- System APIs in Ex-MATE (a sketch follows this list):
  - One set provided by the runtime: operations on reduction objects
  - Another set defined or customized by the users: reduction, combination, etc.
- Ex-MATE runtime:
  - Data partitioning
  - Task scheduling
  - Other low-level details
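
What the two API sets might look like, as a hypothetical sketch (the names and signatures below are our illustration, not the actual Ex-MATE headers):

```cpp
#include <cstddef>

// Set 1: provided by the runtime -- operations on reduction objects,
// which may be larger than memory and paged to/from disk in splits.
struct ReductionObjectHandle;  // opaque handle managed by the runtime
void* ro_read(ReductionObjectHandle* ro, std::size_t offset, std::size_t size);
void  ro_update(ReductionObjectHandle* ro, std::size_t offset,
                const void* value, std::size_t size);

// Set 2: defined or customized by the user.
struct InputSplit { const char* data; std::size_t len; };
void reduction(ReductionObjectHandle* ro, const InputSplit& in);  // fold input into the R.O.
void combination(ReductionObjectHandle* dst,
                 const ReductionObjectHandle* src);               // merge partial R.O.s
void finalize(ReductionObjectHandle* ro);                         // post-process the final R.O.
```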

11 Ex-MATE Runtime Overview
- Basic one-stage execution (figure)

12 Implementation Considerations
- Support for processing very large datasets:
  - Partitioning function: partition and distribute the data to a number of nodes
  - Splitting function: use the multi-core CPU on each node
- Management of a large reduction object (R.O.): reduce disk I/O!
  - Outputs (the R.O.) are updated in a demand-driven way
  - Partition the reduction object into splits
  - Inputs are reorganized based on data access patterns
  - Reuse an R.O. split as much as possible while it is in memory
  - Example: matrix-vector multiplication (see the sketch after the next slide's figure)

13 A MV-Multiplication Example
- (Figure: an output vector, an input vector, and an input matrix partitioned into blocks labeled (1,1), (2,1), (1,2), ...; each matrix block combines one input-vector split into one output-vector split.)
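
A minimal sketch of the I/O-conscious loop structure the figure implies, with hypothetical helpers (load_output_split and friends are our names, and the blocks are dense only for illustration): processing all matrix blocks of one block-row together lets each output split be read and written exactly once per multiplication, which is the "reuse an R.O. split as much as possible" rule from the previous slide.

```cpp
#include <cstddef>
#include <vector>

using Block = std::vector<std::vector<double>>;  // dense block, for illustration only
using Split = std::vector<double>;

// Hypothetical disk I/O helpers; in Ex-MATE the runtime would manage these.
Split load_output_split(int I);
void  write_output_split(int I, const Split& s);
Block load_matrix_block(int I, int J);
Split load_input_split(int J);

void blocked_mv(int num_splits) {
    for (int I = 0; I < num_splits; ++I) {
        Split out = load_output_split(I);        // one read per output split
        for (int J = 0; J < num_splits; ++J) {   // inputs ordered to match this split
            Block m = load_matrix_block(I, J);
            Split v = load_input_split(J);
            for (std::size_t i = 0; i < out.size(); ++i)
                for (std::size_t j = 0; j < v.size(); ++j)
                    out[i] += m[i][j] * v[j];    // accumulate into the in-memory split
        }
        write_output_split(I, out);              // one write per output split
    }
}
```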

14 Outline
- Background
- System Design of Ex-MATE
- Parallel Graph Mining with Ex-MATE
- Experiments
- Related Work
- Conclusion

15 GIM-V for Graph Mining (I)
- Generalized Iterative Matrix-Vector multiplication (GIM-V)
  - First proposed at CMU
  - Generalizes the common MV multiplication, v'(i) = sum_j m(i,j) * v(j)
- Three operations in GIM-V:
  - combine2: combines m(i,j) and v(j); does not have to be a multiplication
  - combineAll: combines the n partial results for element i; does not have to be the sum
  - assign: writes v(new) to v(i); the previous value of v(i) is updated by a new value
- For plain MV multiplication, the three operations are multiplication, sum, and assignment (see the interface sketch below)
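
The abstraction can be pinned down in a short interface; the operation names follow the PEGASUS formulation, while the C++ framing is our sketch. GIM-V computes v'(i) = assign(v(i), combineAll_j({combine2(m(i,j), v(j))})), and plain MV multiplication is the instance shown below.

```cpp
#include <vector>

// The three user-customizable GIM-V operations.
struct GIMV {
    virtual double combine2(double m, double v) const = 0;               // e.g. m * v
    virtual double combineAll(const std::vector<double>& xs) const = 0;  // e.g. sum
    virtual double assign(double v_old, double v_new) const = 0;         // e.g. v_new
    virtual ~GIMV() = default;
};

// Ordinary matrix-vector multiplication as a GIM-V instance:
// multiplication, sum, and assignment.
struct MVMul : GIMV {
    double combine2(double m, double v) const override { return m * v; }
    double combineAll(const std::vector<double>& xs) const override {
        double s = 0;
        for (double x : xs) s += x;
        return s;
    }
    double assign(double /*v_old*/, double v_new) const override { return v_new; }
};
```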

16 GIM-V for Graph Mining (II)
- A set of graph mining applications fits into GIM-V:
  - PageRank, Diameter Estimation, Finding Connected Components, Random Walk with Restart, etc.
- Parallelization of GIM-V:
  - Using Map-Reduce in PEGASUS: a two-stage algorithm (two consecutive map-reduce jobs)
  - Using generalized reduction in Ex-MATE: a one-stage algorithm with simpler code

17 GIM-V Example: PageRank
- PageRank is used by Google to calculate the relative importance of web pages
- Direct implementation in GIM-V: v(j) is the ranking value of page j
- The three customized operations are multiplication, sum, and assignment (a sketch follows)
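
The slide's formula images are not in the transcript; as a hedged illustration, the constants below follow the published PEGASUS formulation of PageRank in GIM-V, with damping factor c (typically 0.85) over n pages:

```cpp
#include <vector>

// PageRank as a GIM-V instance (same shape as the GIMV interface above).
struct PageRankOps {
    double c;  // damping factor, typically 0.85
    long   n;  // total number of pages

    // combine2: damped contribution of page j's rank along the link (i,j).
    double combine2(double m, double v) const { return c * m * v; }

    // combineAll: random-jump term plus the damped in-link mass.
    double combineAll(const std::vector<double>& xs) const {
        double s = (1.0 - c) / static_cast<double>(n);
        for (double x : xs) s += x;
        return s;
    }

    // assign: simply take the newly computed rank.
    double assign(double /*v_old*/, double v_new) const { return v_new; }
};
```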

18 GIM-V: Other Algorithms
- Diameter Estimation: HADI is an algorithm to estimate the diameter of a given graph
  - Its customized operations: multiplication for combine2 and bitwise-OR for combineAll
- Finding Connected Components: HCC is a new algorithm to find the connected components of large graphs
  - Its customized operations: multiplication for combine2 and minimum for combineAll (see the sketch below)
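
A sketch of the HCC instance (HADI is analogous, with bit-strings and bitwise-OR in place of component ids and minimum); the encoding below, with one long id per vertex, is our illustration:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// HCC: each vertex carries a component id; ids flood-fill along edges and
// every vertex keeps the minimum id it has seen, until a fixed point.
struct HCCOps {
    // m(i,j) is 1 for an existing edge, so combine2 just propagates the
    // neighbor's current component id across the edge (m * v with m == 1).
    long combine2(long /*m*/, long v) const { return v; }

    // combineAll: minimum id among all neighbors.
    long combineAll(const std::vector<long>& xs) const {
        long mn = LONG_MAX;
        for (long x : xs) mn = std::min(mn, x);
        return mn;
    }

    // assign: keep the smaller of the old and new ids.
    long assign(long v_old, long v_new) const { return std::min(v_old, v_new); }
};
```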

19 Parallelization of GIM-V (I)
- Using Map-Reduce: Stage I
  - Map: send M(i,j) and V(j) to reducer j

20 Parallelization of GIM-V (II)
- Using Map-Reduce: Stage I (cont.)
  - Reduce: send combine2(M(i,j), V(j)) to reducer i

21 Parallelization of GIM-V (III)
- Using Map-Reduce: Stage II
  - Map: (pseudo-code figure omitted)

22 Parallelization of GIM-V (IV)
- Using Map-Reduce: Stage II (cont.)
  - Reduce: (pseudo-code figure omitted; a schematic sketch of both stages follows)
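
Since the pseudo-code figures did not survive in the transcript, here is a schematic sketch of the two-stage algorithm as the slide captions describe it; the record format, the helper names, and the identity Stage II map are our assumptions, not PEGASUS's actual code. Note that each iteration pays for two full shuffle phases.

```cpp
#include <functional>
#include <vector>

struct Rec { long row; double val; };                 // row == -1 marks a V(j) record
using Emit = std::function<void(long key, Rec rec)>;  // 'emit' stands in for the shuffle

// Stage I map: route M(i,j) and V(j) to reducer j.
void stage1_map_matrix(long i, long j, double m_ij, Emit emit) { emit(j, {i, m_ij}); }
void stage1_map_vector(long j, double v_j, Emit emit)          { emit(j, {-1, v_j}); }

// Stage I reduce (at reducer j): apply combine2 to each matrix entry and
// route the partial result to reducer i.
void stage1_reduce(const std::vector<Rec>& recs, Emit emit) {
    double v_j = 0;
    for (const Rec& r : recs) if (r.row == -1) v_j = r.val;
    for (const Rec& r : recs)
        if (r.row != -1) emit(r.row, {r.row, r.val * v_j});  // combine2 = multiply here
}

// Stage II map is the identity (pass partials through, keyed by row i).
// Stage II reduce (at reducer i): apply combineAll, then assign.
double stage2_reduce(double /*v_i_old*/, const std::vector<Rec>& partials) {
    double v_new = 0;
    for (const Rec& r : partials) v_new += r.val;  // combineAll = sum here
    return v_new;                                  // assign = take the new value
}
```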

23 Parallelization of GIM-V (V)
- Using generalized reduction in Ex-MATE:
  - Reduction: (pseudo-code figure omitted)

24 Parallelization of GIM-V (VI)
- Using generalized reduction in Ex-MATE:
  - Finalize: (pseudo-code figure omitted; a one-stage sketch follows)
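
By contrast, a hypothetical sketch of the one-stage Ex-MATE version (the names are illustrative, not the actual API): each matrix element is folded straight into element i of the reduction object with no intermediate shuffle, and finalize() applies assign() once after all input has been consumed.

```cpp
#include <cstddef>
#include <vector>

// reduction(): called once per matrix element (i, j, m_ij); updates the
// reduction object in place. Here combine2 is a multiply and combineAll
// is a running sum, as in plain MV multiplication or PageRank.
void reduction(std::vector<double>& ro, long i, long j, double m_ij,
               const std::vector<double>& v) {
    ro[i] += m_ij * v[j];  // combine2 folded into a running combineAll
}

// finalize(): applies assign() element-wise once the reduction is complete.
void finalize(std::vector<double>& v, const std::vector<double>& ro) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = ro[i];      // assign = take the new value
}
```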

25 Outline
- Background
- System Design of Ex-MATE
- Parallel Graph Mining with Ex-MATE
- Experiments
- Related Work
- Conclusion

26 Experiments Design
- Applications: three graph mining algorithms
  - PageRank, Diameter Estimation, and Finding Connected Components
- Evaluation:
  - Performance comparison with PEGASUS (which provides a naïve version and an optimized version)
  - Speedups with an increasing number of nodes
  - Scalability with increasing dataset sizes
- Experimental platform:
  - A cluster of multi-core CPU machines
  - Used up to 128 cores (16 nodes)

27 Results: Graph Mining (I)
- PageRank: 16 GB dataset; a graph of 256 million nodes and 1 billion edges
- (Chart: avg. time per iteration (min) vs. # of nodes; annotated speedup: 10.0)

28 Results: Graph Mining (II)
- HADI: 16 GB dataset; a graph of 256 million nodes and 1 billion edges
- (Chart: avg. time per iteration (min) vs. # of nodes; annotated speedup: 11.0)

29 Results: Graph Mining (III)
- HCC: 16 GB dataset; a graph of 256 million nodes and 1 billion edges
- (Chart: avg. time per iteration (min) vs. # of nodes; annotated speedup: 9.0)

30 Scalability: Graph Mining (IV)
- HCC: 8 GB dataset; a graph of 256 million nodes and 0.5 billion edges
- (Chart: avg. time per iteration (min) vs. # of nodes; annotated speedups: 1.7 and 1.9)

31 Scalability: Graph Mining (V)
- HCC: 32 GB dataset; a graph of 256 million nodes and 2 billion edges
- (Chart: avg. time per iteration (min) vs. # of nodes; annotated speedups: 1.9 and 2.7)

32 Scalability: Graph Mining (VI)
- HCC: 64 GB dataset; a graph of 256 million nodes and 4 billion edges
- (Chart: avg. time per iteration (min) vs. # of nodes; annotated speedups: 1.9 and 2.8)

33 Observations
- Performance trends are similar for all three applications
  - Consistent with the fact that all three are implemented using the GIM-V method
- Ex-MATE outperforms PEGASUS significantly for all three graph mining algorithms
- Reasonable speedups across the different datasets
- Better scalability for larger datasets with an increasing number of nodes

34 Outline
- Background
- System Design of Ex-MATE
- Parallel Graph Mining with Ex-MATE
- Experiments
- Related Work
- Conclusion

35 Related Work: Academia
- Evaluation of Map-Reduce-like models in various parallel programming environments:
  - Phoenix-rebirth for large-scale multi-core machines
  - Mars for a single GPU
  - MITHRA for GPGPUs in heterogeneous platforms
  - Recently, IDAV for GPU clusters
- Improvements to the Map-Reduce API:
  - Integrating pre-fetching and pre-shuffling into Hadoop
  - Supporting online queries
  - Enforcing less restrictive synchronization semantics between map and reduce

36 Related Work: Industry
- Google's Pregel system:
  - Map-Reduce may not be well suited to graph operations
  - Pregel was proposed to target graph processing
  - Open-source version: the HAMA project in Apache
- Variants of Map-Reduce:
  - Dryad/DryadLINQ from Microsoft
  - Sawzall from Google
  - Pig/Map-Reduce-Merge from Yahoo!
  - Hive from Facebook

37 Outline
- Background
- System Design of Ex-MATE
- Parallel Graph Mining with Ex-MATE
- Experiments
- Related Work
- Conclusion

38 Conclusion
- Ex-MATE supports the management of reduction objects of arbitrary sizes
  - Deals with disk-resident reduction objects
- Outperforms both the naïve and the optimized PEGASUS implementations for all three graph mining applications
  - With simpler code
- Offers a promising alternative for developing efficient data-intensive applications
- Uses GIM-V for parallelizing graph mining

39 Thank You, and Acknowledgments
- Questions and comments
- Wei Jiang - jiangwei@cse.ohio-state.edu
- Gagan Agrawal - agrawal@cse.ohio-state.edu
- This project was supported by:

