1 Cloud Computing CS: Dryad and GraphLab
Lecture 11, Feb 22, 2012
Majd F. Sakr, Suhail Rehman and Mohammad Hammoud

2 Today…
Last session: Pregel
Today's session: Dryad and GraphLab
Announcement: Project Phases I-A and I-B are due today

3 Objectives
Discussion on programming models: Pregel, Dryad and GraphLab (this session)
Covered over the last 3 sessions:
- MapReduce
- Message Passing Interface (MPI)
- Examples of parallel processing
- Traditional models of parallel programming
- Parallel computer architectures
- Why parallelism?

4 Dryad

5 Dryad
In this part, the following concepts of Dryad will be described:
- Dryad Model
- Dryad Organization
- Dryad Description Language and an Example Program
- Fault Tolerance in Dryad

6 Dryad
In this part, the following concepts of Dryad will be described:
- Dryad Model (up next)
- Dryad Organization
- Dryad Description Language and an Example Program
- Fault Tolerance in Dryad

7 Dryad
Dryad is a general-purpose, high-performance distributed computation engine, designed for:
- High-throughput, data-parallel computation
- Use in a private datacenter
Computation is expressed as a directed acyclic graph (DAG):
- Vertices represent programs
- Edges represent data channels between vertices

8 Unix Pipes vs. Dryad DAG
[Figure: a linear Unix pipeline contrasted with Dryad's two-dimensional DAG of processes]

9 Dryad Job Structure
grep1000 | sed500 | sort1000 | awk500 | perl50
This job illustrates the basic Dryad terminology: vertices (processes) such as grep, sed, sort, awk and perl are replicated (the subscripts give the number of instances per stage), a stage is the set of replicas of one vertex, and channels are the edges connecting vertices. Input files feed the first stage and output files collect the results of the last.

10 Dryad
In this part, the following concepts of Dryad will be described:
- Dryad Model
- Dryad Organization (up next)
- Dryad Description Language and an Example Program
- Fault Tolerance in Dryad

11 Dryad System Organization
There are 3 roles for machines in Dryad:
- Job Manager (JM)
- Name Server (NS)
- Daemon (D)
The JM can run within the cluster or on a user's workstation with network access to the cluster. It contains the application-specific code to construct the job's communication graph, along with library code to schedule the work across the available resources. All data is sent directly between vertices, so the JM is only responsible for control decisions and is not a bottleneck for any data transfers.
The NS can be used to enumerate all the available computers. It also exposes the position of each computer within the network topology, so that scheduling decisions can take account of locality.
Each D is responsible for creating processes on behalf of the JM: the binary of a vertex V is sent from the JM to a D for execution.

12 Program Execution (1)
The Job Manager (JM):
- Creates the job communication graph (job schedule)
- Contacts the NS to determine the number of Ds and the topology
- Assigns Vs to each D for execution, using a simple task scheduler (not described here)
- Coordinates data flow through the data plane
Data is distributed using a distributed storage system that shares some properties with the Google File System (e.g., data are split into chunks and replicated across machines). Dryad also supports the use of NTFS for accessing files locally.
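As an illustration of this control flow, here is a minimal runnable C++ sketch of a job manager querying a name server and handing vertex binaries to daemons. All types and names (NameServer, Daemon, Vertex) are hypothetical, and the round-robin assignment stands in for Dryad's locality-aware task scheduler.

#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

struct Vertex { std::string binary; };              // the program to run
struct Daemon { std::string host; };                // one per cluster machine

struct NameServer {
    // Enumerate the available machines (the real NS also reports topology).
    std::vector<Daemon> Enumerate() { return { {"m0"}, {"m1"}, {"m2"} }; }
};

int main() {
    NameServer ns;
    std::vector<Daemon> daemons = ns.Enumerate();   // query cluster resources
    std::vector<Vertex> graph = { {"grep.exe"}, {"sort.exe"}, {"awk.exe"} };

    // Hand each vertex binary to a daemon; round-robin stands in for the
    // locality-aware scheduling described above.
    for (std::size_t i = 0; i < graph.size(); ++i) {
        const Daemon& d = daemons[i % daemons.size()];
        std::printf("JM: send %s to daemon on %s\n",
                    graph[i].binary.c_str(), d.host.c_str());
    }
}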

13 Program Execution (2)
Staging a Dryad computation proceeds in eight steps, spanning the computation, the cluster services and the daemons:
1. Build the graph
2. Send the .exe (vertex code)
3. Start the JM
4. Query cluster resources
5. Generate the graph (JM code)
6. Initialize vertices
7. Serialize vertices
8. Monitor vertex execution

14 Data Channels in Dryad
Data items can be shuffled between vertices through data channels. Data channels can be:
- Shared-memory FIFOs (intra-machine)
- TCP streams (inter-machine)
- SMB/NTFS local files (temporary)
- Distributed file system (persistent)
The performance and fault tolerance of these mechanisms vary; data channels are abstracted for maximum flexibility.
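The channel abstraction can be pictured as a single read/write interface with interchangeable transports. Here is a C++ sketch under that assumption; the interface is hypothetical, not Dryad's actual API.

#include <deque>
#include <vector>

struct Item { std::vector<char> bytes; };           // an opaque data item

// What a vertex program sees: one interface, many transports.
struct Channel {
    virtual void Write(const Item& it) = 0;
    virtual bool Read(Item* out) = 0;               // false at end of stream
    virtual ~Channel() = default;
};

// Intra-machine transport: fast, but contents are lost if the process dies.
struct SharedMemoryFifo : Channel {
    std::deque<Item> q;
    void Write(const Item& it) override { q.push_back(it); }
    bool Read(Item* out) override {
        if (q.empty()) return false;
        *out = q.front();
        q.pop_front();
        return true;
    }
};
// TcpStream, LocalFile and DfsFile would implement the same interface,
// trading performance for persistence and fault tolerance.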

15 Dryad
In this part, the following concepts of Dryad will be described:
- Dryad Model
- Dryad Organization
- Dryad Description Language and an Example Program (up next)
- Fault Tolerance in Dryad

16 Dryad Graph Description Language
Here are some operators in the Dryad graph description language:
- AS = A^n (cloning: AS is a stage of n copies of vertex A)
- AS >= BS (pointwise composition: output i of AS connects to input i of BS)
- AS >> BS (bipartite composition: every output of AS connects to every input of BS)
- (B>=C) || (B>=D) (merge: graphs are combined, identifying common vertices such as B)
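To make the operator semantics concrete, here is a toy, self-contained C++ re-implementation over a flat edge list. This is an illustrative sketch only; the names and the round-robin rule for pointwise composition of unequal stages are assumptions, not Dryad's actual GraphBuilder code.

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

struct Graph {
    std::vector<std::string> inputs, outputs;     // open ports of the graph
    std::vector<std::pair<std::string, std::string>> edges;
};

Graph Clone(const std::string& a, int n) {        // AS = A^n
    Graph g;
    for (int i = 0; i < n; ++i) {
        std::string v = a + std::to_string(i);
        g.inputs.push_back(v);
        g.outputs.push_back(v);
    }
    return g;
}

Graph Compose(Graph a, const Graph& b, bool bipartite) {
    for (std::size_t i = 0; i < a.outputs.size(); ++i) {
        if (bipartite)                            // AS >> BS: all-to-all
            for (const auto& in : b.inputs)
                a.edges.emplace_back(a.outputs[i], in);
        else                                      // AS >= BS: pointwise
            a.edges.emplace_back(a.outputs[i], b.inputs[i % b.inputs.size()]);
    }
    a.edges.insert(a.edges.end(), b.edges.begin(), b.edges.end());
    a.outputs = b.outputs;                        // B's outputs become the frontier
    return a;
}
// Merge (||) would union the vertex and edge sets of two graphs,
// identifying vertices that appear in both.

With these helpers, the job of slide 9 could be wired as Compose(Clone("grep", 1000), Clone("sed", 500), false), then composed onward through sort, awk and perl in the same way.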

17 Example Program in Dryad (1)
SkyServer SQL Query (Q18): find all the objects in the database that have neighboring objects within 30 arc seconds, such that at least one of the neighbors has a color similar to the primary object's color.
There are two tables involved:
- photoObjAll, with 354,254,163 records
- Neighbors, with 2,803,165,372 records
For the equivalent Dryad computation, the columns of interest were extracted into two binary files, "ugriz.bin" and "neighbors.bin".
[Figure: the query-plan graph, with n X and D vertices, 4n M and S vertices, n Y vertices and one H vertex, reading the inputs U and N]

18 Example Program in Dryad (2)
This is an explanation of the SkyServer query plan: the SQL plan was taken, manually coded in Dryad, and the data manually partitioned.
The two inputs are u: (objid, color) and n: (objid, neighborobjid), each [partition by objid].
The first join,
select u.color, n.neighborobjid from u join n where u.objid = n.objid
produces (u.color, n.neighborobjid) tuples, which are [re-partition by n.neighborobjid] and [order by n.neighborobjid] before the second join,
select u.objid from u join <temp> where u.objid = <temp>.neighborobjid and |u.color - <temp>.color| < d
after which the results are [merge outputs] and [distinct].

19 Example Program in Dryad (3)
Here is the corresponding Dryad code:

GraphBuilder XSet = moduleX^N;
GraphBuilder DSet = moduleD^N;
GraphBuilder MSet = moduleM^(N*4);
GraphBuilder SSet = moduleS^(N*4);
GraphBuilder YSet = moduleY^N;
GraphBuilder HSet = moduleH^1;
GraphBuilder XInputs = (ugriz1 >= XSet) || (neighbor >= XSet);
GraphBuilder YInputs = ugriz2 >= YSet;
GraphBuilder XToY = XSet >= DSet >> MSet >= SSet;
for (i = 0; i < N*4; ++i) {
    XToY = XToY || (SSet.GetVertex(i) >= YSet.GetVertex(i/4));
}
GraphBuilder YToH = YSet >= HSet;
GraphBuilder HOutputs = HSet >= output;
GraphBuilder final = XInputs || YInputs || XToY || YToH || HOutputs;

Each stage from the figure is cloned the appropriate number of times (X, D and Y n times; M and S 4n times), wired up with the >= and >> composition operators, and merged into the final graph with ||.

20 Dryad
In this part, the following concepts of Dryad will be described:
- Dryad Model
- Dryad Organization
- Dryad Description Language and an Example Program
- Fault Tolerance in Dryad (up next)

21 Fault Tolerance in Dryad (1)
Dryad is designed to handle two types of failures:
- Vertex failures: handled by the JM; the failed vertex is re-executed on another machine
- Channel failures: cause the preceding (upstream) vertex to be re-executed
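The two rules can be captured in a few lines of C++. This sketch is illustrative; the persistent-channel special case is an assumption based on the channel types of slide 14, not a statement of Dryad's exact policy.

enum class Failure { Vertex, Channel };

struct VertexRecord {
    int id;                       // the failed vertex
    int upstream;                 // the vertex that produced its input
    bool channelWasPersistent;    // e.g., a distributed-file-system channel
};

// Decide what the job manager re-executes after a failure report.
int ChooseVertexToRerun(const VertexRecord& v, Failure f) {
    if (f == Failure::Vertex)
        return v.id;              // re-run the failed vertex on another machine
    // A channel failure loses in-flight data: unless the channel was a
    // persistent file that can simply be re-read, the preceding vertex
    // must re-run to regenerate it.
    return v.channelWasPersistent ? v.id : v.upstream;
}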

22 Fault Tolerance in Dryad (2)
Apparently very slow vertices are handled by duplication, managed by a stage manager: if X[2] runs slowly while X[0], X[1] and X[3] have completed, a duplicate vertex X'[2] is started, and whichever copy finishes first supplies the output.
Duplication Policy = f(running times, data volumes)

23 GraphLab

24 GraphLab
In this part, the following concepts of GraphLab will be described:
- Motivation for GraphLab
- GraphLab Data Model and Update Mechanisms
- Scheduling in GraphLab
- Consistency Models in GraphLab
- PageRank in GraphLab

25 GraphLab
In this part, the following concepts of GraphLab will be described:
- Motivation for GraphLab (up next)
- GraphLab Data Model and Update Mechanisms
- Scheduling in GraphLab
- Consistency Models in GraphLab
- PageRank in GraphLab

26 Motivation for GraphLab
Shortcomings of MapReduce:
- Interdependent data computation is difficult to perform
- Overheads of running jobs iteratively: disk access and startup overhead
- The communication pattern is not user-definable/flexible
Shortcomings of Pregel:
- The BSP model requires synchronous computation
- One slow machine can slow down the entire computation considerably
Shortcomings of Dryad:
- Very flexible, but a steep learning curve for the programming model

27 GraphLab
GraphLab is a framework for parallel machine learning. Its main components:
- Data Graph
- Shared Data Table
- Update Functions and Scopes
- Scheduling

28 GraphLab
In this part, the following concepts of GraphLab will be described:
- Motivation for GraphLab
- GraphLab Data Model and Update Mechanisms (up next)
- Scheduling in GraphLab
- Consistency Models in GraphLab
- PageRank in GraphLab

29 Data Graph
A graph in GraphLab is associated with data at every vertex and edge; arbitrary blocks of data can be assigned to vertices and edges.

30 Update Functions
The data graph is modified using update functions. An update function can modify a vertex v and its neighborhood, defined as the scope of v (Sv).
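A minimal C++ sketch of a data graph and an update function confined to the scope Sv; the types here are illustrative, not the GraphLab API.

#include <vector>

struct Vertex { double data; std::vector<int> neighbors; };
struct Graph  { std::vector<Vertex> v; };

// The scope Sv: the update function may read and write v's own data and
// the data of adjacent vertices/edges (edge data omitted for brevity).
void UpdateFunction(Graph& g, int vid) {
    double sum = 0;
    for (int n : g.v[vid].neighbors) sum += g.v[n].data;   // read within Sv
    g.v[vid].data = 0.5 * g.v[vid].data + 0.5 * sum;       // write the vertex
}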

31 Shared Data Table
Certain algorithms require global information that is shared among all vertices (algorithm parameters, statistics, etc.). GraphLab exposes a Shared Data Table (SDT):
- The SDT is an associative map between keys and arbitrary blocks of data: T[Key] → Value
- The SDT is updated using the sync mechanism

32 Sync Mechanism
The sync mechanism is similar to Reduce in MapReduce: the user can define fold, merge and apply functions that are triggered during the global sync.
- Fold: sequentially aggregates information across all vertices
- Merge: optionally performs a parallel tree reduction on the data aggregated during the fold
- Apply: finalizes the resulting value from the fold/merge operations (normalization, etc.)
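For instance, computing the mean vertex value into the SDT decomposes naturally into the three functions. A runnable C++ sketch with assumed signatures (the real GraphLab signatures differ):

#include <cstdio>
#include <utility>
#include <vector>

using Acc = std::pair<double, int>;                  // (sum, count)

Acc Fold(Acc a, double vertexValue) {                // sequential, per vertex
    return { a.first + vertexValue, a.second + 1 };
}
Acc Merge(Acc a, Acc b) {                            // parallel tree reduction
    return { a.first + b.first, a.second + b.second };
}
double Apply(Acc a) {                                // finalize: normalize
    return a.second ? a.first / a.second : 0.0;
}

int main() {
    std::vector<double> vertexData = {1.0, 2.0, 4.0, 5.0};
    // Two workers each fold half of the vertices, then merge, then apply.
    Acc left{0.0, 0}, right{0.0, 0};
    left  = Fold(Fold(left,  vertexData[0]), vertexData[1]);
    right = Fold(Fold(right, vertexData[2]), vertexData[3]);
    std::printf("T[avg] = %g\n", Apply(Merge(left, right)));  // prints 3
}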

33 GraphLab
In this part, the following concepts of GraphLab will be described:
- Motivation for GraphLab
- GraphLab Data Model and Update Mechanisms
- Scheduling in GraphLab (up next)
- Consistency Models in GraphLab
- PageRank in GraphLab

34 Scheduling in GraphLab (1)
The scheduler determines the order in which vertices are updated. CPUs (CPU 1 and CPU 2 in the figure) repeatedly pull the next vertex from the scheduler's task list (e.g., b, e, f, i, h, j, over a graph with vertices a through k) and apply the update function to it; updates may add new tasks back to the scheduler. The process repeats until the scheduler is empty.

35 Scheduling in GraphLab (2)
An update schedule defines the order in which update functions are applied to vertices. A parallel data structure called the scheduler represents an abstract list of tasks to be executed in GraphLab; the execute/reschedule loop it drives is sketched below.
Base (vertex) schedulers in GraphLab: synchronous scheduler, round-robin scheduler
Job schedulers in GraphLab: FIFO scheduler, priority scheduler
Custom schedulers can be defined via the set scheduler
Termination assessment: execution stops when the scheduler has no remaining tasks, or a termination function can be defined to check for convergence in the data
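A minimal C++ version of that loop, shown for a FIFO queue; this is illustrative, not GraphLab's scheduler interface.

#include <queue>

using VertexId = int;
// An update may push new tasks (e.g., neighbors) back into the scheduler.
using UpdateFn = void (*)(VertexId, std::queue<VertexId>&);

void RunFifo(std::queue<VertexId> tasks, UpdateFn update) {
    while (!tasks.empty()) {              // terminate when the schedule empties
        VertexId v = tasks.front();
        tasks.pop();
        update(v, tasks);                 // apply the update function to v
    }
    // A priority scheduler would swap std::queue for std::priority_queue;
    // a user termination function could also be checked on every iteration.
}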

36 GraphLab
In this part, the following concepts of GraphLab will be described:
- Motivation for GraphLab
- GraphLab Data Model and Update Mechanisms
- Scheduling in GraphLab
- Consistency Models in GraphLab (up next)
- PageRank in GraphLab

37 Need for Consistency Models
How much can computation overlap?

38 Consistency Models in GraphLab
GraphLab guarantees sequential consistency: parallel execution is guaranteed to give the same result as some sequential execution of the computational steps.
The consistency model is user-definable, from strongest to weakest: Full Consistency, Edge Consistency, Vertex Consistency. A sketch of how such models could be realized follows.
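One way consistency models like these can be realized is with per-vertex read/write locks, stronger models taking more locks over the scope. The C++ sketch below is a simplification and an assumption about implementation strategy, not GraphLab's actual code.

#include <shared_mutex>
#include <vector>

struct LockedVertex { std::shared_mutex m; };

enum class Model { Full, Edge, VertexOnly };

// Acquire the locks an update at vertex v would need under each model.
// NOTE: real code must acquire locks in a canonical order (e.g., by vertex
// id) to avoid deadlock; that ordering is omitted here for clarity.
void AcquireScope(LockedVertex& v, std::vector<LockedVertex*>& nbrs, Model model) {
    switch (model) {
    case Model::Full:                  // read/write the entire scope
        v.m.lock();
        for (auto* n : nbrs) n->m.lock();
        break;
    case Model::Edge:                  // write the vertex, read its neighbors
        v.m.lock();
        for (auto* n : nbrs) n->m.lock_shared();
        break;
    case Model::VertexOnly:            // write only the vertex itself
        v.m.lock();
        break;
    }
}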

39 GraphLab
In this part, the following concepts of GraphLab will be described:
- Motivation for GraphLab
- GraphLab Data Model and Update Mechanisms
- Scheduling in GraphLab
- Consistency Models in GraphLab
- PageRank in GraphLab (up next)

40 PageRank (1)
PageRank is a link-analysis algorithm. It computes a probability distribution representing the likelihood that a person randomly clicking on links will arrive at any particular page:
- The rank value indicates the importance of a particular web page
- A hyperlink to a page counts as a vote of support
- A page that is linked to by many pages with high PageRank receives a high rank itself
- A PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to that document

41 PageRank (2)
Iterate: R[i] = α + (1 − α) · Σ over pages j linking to i of R[j] / L[j]
Where:
- α is the random reset probability
- L[j] is the number of links on page j
[Figure: a small example web graph of six pages, numbered 1 to 6]
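As a quick worked example with made-up numbers: take α = 0.15 and a page i linked to by j1 (R[j1] = 0.5, L[j1] = 2) and j2 (R[j2] = 0.3, L[j2] = 3). Then R[i] = 0.15 + 0.85 × (0.5/2 + 0.3/3) = 0.15 + 0.85 × 0.35 ≈ 0.45.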

42 PageRank Example in GraphLab
The PageRank algorithm is defined as a per-vertex operation working on the scope of the vertex:

pagerank(i, scope) {
    // Get neighborhood data
    (R[i], W[j,i], R[j]) ← scope;
    // Update the vertex data (the iterate formula of slide 41, with W[j,i] = 1/L[j])
    R[i] ← α + (1 − α) × Σ over in-neighbors j of W[j,i] × R[j];
    // Reschedule neighbors if needed
    if R[i] changes then reschedule_neighbors_of(i);
}

Rescheduling only the neighbors of vertices whose ranks changed makes the computation dynamic: work concentrates where ranks are still converging.
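A self-contained C++ rendering of this update on a three-page toy graph, including the dynamic rescheduling. This is plain C++, not the GraphLab API, and the graph and tolerance are made up.

#include <cmath>
#include <cstdio>
#include <queue>
#include <vector>

struct Vertex { double rank = 1.0; std::vector<int> in, out; };

int main() {
    const double alpha = 0.15, tol = 1e-6;
    // Toy graph: 0 -> 1, 0 -> 2, 2 -> 1
    std::vector<Vertex> g(3);
    g[0].out = {1, 2};
    g[1].in  = {0, 2};
    g[2].in  = {0}; g[2].out = {1};

    std::queue<int> sched;                          // a FIFO scheduler
    for (int i = 0; i < (int)g.size(); ++i) sched.push(i);

    while (!sched.empty()) {
        int i = sched.front(); sched.pop();
        double sum = 0;                             // read neighborhood data
        for (int j : g[i].in) sum += g[j].rank / g[j].out.size();
        double next = alpha + (1 - alpha) * sum;    // update the vertex data
        if (std::fabs(next - g[i].rank) > tol) {    // reschedule if changed:
            g[i].rank = next;                       // the pages i links to
            for (int k : g[i].out) sched.push(k);   // depend on R[i]
        }
    }
    for (int i = 0; i < 3; ++i) std::printf("R[%d] = %.3f\n", i, g[i].rank);
}

Because page 0 has no in-links, its rank drops to α on the first update, and that change then propagates to pages 1 and 2 through rescheduling until the queue empties.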

43 How Do MapReduce, Pregel, Dryad and GraphLab Compare Against Each Other?

44 Comparison of the Programming Models

Developed by:
- MapReduce and Pregel: Google
- Dryad: Microsoft
- GraphLab: Carnegie Mellon

Programming Model:
- MapReduce: fixed functions (Map and Reduce)
- Pregel: supersteps over a data graph, with messages passed between vertices
- Dryad: DAG with program vertices and data edges
- GraphLab: data graph with shared data table and update functions

Parallelism:
- MapReduce: concurrent execution of tasks within the map and reduce phases
- Pregel: concurrent execution of user functions over vertices within a superstep
- Dryad: concurrent execution of vertices during a stage
- GraphLab: concurrent execution of non-overlapping scopes, defined by the consistency model

Data Handling:
- MapReduce and Pregel: distributed file system
- Dryad: flexible data channels (memory, files, DFS, etc.)
- GraphLab: undefined; graphs can be in memory or on disk

Task Scheduling:
- MapReduce: fixed phases, with HDFS-locality-based map task assignment
- Pregel: partitioned graph and inputs assigned by assignment functions
- Dryad: job and stage managers assign vertices to available daemons
- GraphLab: pluggable schedulers to schedule update functions

Fault Tolerance:
- MapReduce: DFS replication, task reassignment, and speculative execution of tasks
- Pregel: checkpointing and superstep re-execution
- Dryad: vertex and edge (channel) failure recovery
- GraphLab: synchronous and asynchronous snapshots

45 References
This presentation has elements borrowed from various papers and presentations.
Papers:
- Pregel:
- Dryad:
- GraphLab:
Presentations:
- Dryad Presentation at Berkeley by M. Budiu:
- GraphLab1 Presentation:
- GraphLab2 Presentation:

46 Next Class: Distributed File Systems

