
Slide 1: MapReduce Online
Tyson Condie and Neil Conway, UC Berkeley
Joint work with Peter Alvaro, Rusty Sears, Khaled Elmeleegy (Yahoo! Research), and Joe Hellerstein

Slide 2: MapReduce Programming Model
Programmers think in a data-centric fashion
– Apply transformations to data sets
The MR framework handles the Hard Stuff:
– Fault tolerance
– Distributed execution, scheduling, concurrency
– Coordination
– Network communication
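To make the model concrete, here is the classic word-count example written against Hadoop's original API (org.apache.hadoop.mapred). This is the standard illustration of user-supplied map and reduce functions, not code from HOP itself:

```java
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {
  // Map: for each input record (a line of text), emit (word, 1).
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable offset, Text line,
                    OutputCollector<Text, IntWritable> out, Reporter reporter)
        throws IOException {
      for (String token : line.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          out.collect(word, ONE);
        }
      }
    }
  }

  // Reduce: sum the counts for each word (one record group per key).
  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text word, Iterator<IntWritable> counts,
                       OutputCollector<Text, IntWritable> out, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (counts.hasNext()) {
        sum += counts.next().get();
      }
      out.collect(word, new IntWritable(sum));
    }
  }
}
```

The framework supplies everything else: it runs Map over each input record in parallel, groups the emitted pairs by word, feeds each group to Reduce, and restarts tasks on failure.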

Slide 3: MapReduce System Model
Designed for batch-oriented computations over large data sets
– Each operator runs to completion before producing any output
– Operator output is written to stable storage (map output to local disk, reduce output to HDFS)
Simple, elegant fault tolerance model: operator restart
– Critical for large clusters

Slide 4: Life Beyond Batch Processing
Can we apply the MR programming model outside batch processing? Two domains of interest:
1. Interactive data analysis
– Enabled by high-level MR query languages, e.g. Hive, Pig, Jaql
– Batch processing is a poor fit
2. Continuous analysis of data streams
– Batch processing adds massive latency
– Requires saving and reloading analysis state

Slide 5: MapReduce Online
Pipeline data between operators as it is produced
– Decouple the computation schedule (logical) from the data transfer schedule (physical)
Hadoop Online Prototype (HOP): Hadoop with pipelining support
– Preserves the Hadoop interfaces and APIs
– Challenge: retain the elegant fault tolerance model
Enables approximate answers and stream processing
– Can also reduce the response times of jobs

Slide 6: Outline
1. Hadoop Background
2. HOP Architecture
3. Online Aggregation
4. Stream Processing with MapReduce
5. Future Work and Conclusion

Slide 7: Hadoop Architecture
Hadoop MapReduce
– Single master node, many worker nodes
– Client submits a job to the master node
– Master splits each job into tasks (map/reduce) and assigns tasks to worker nodes
Hadoop Distributed File System (HDFS)
– Single name node, many data nodes
– Files stored as large, fixed-size (e.g. 64MB) blocks
– HDFS typically holds map input and reduce output
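A minimal sketch of the client/master interaction described above, using the classic Hadoop API. The paths and reduce task count are made up, and the WordCount classes refer to the sketch after slide 2:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// The client builds a job description and hands it to the master node,
// which splits the job into map and reduce tasks and assigns them to workers.
public class SubmitJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SubmitJob.class);
    conf.setJobName("wordcount");
    conf.setMapperClass(WordCount.Map.class);
    conf.setReducerClass(WordCount.Reduce.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setNumReduceTasks(4);                       // user-defined (see slide 8)
    FileInputFormat.setInputPaths(conf, new Path("/data/input"));
    FileOutputFormat.setOutputPath(conf, new Path("/data/output"));
    JobClient.runJob(conf);                          // blocks until the job completes
  }
}
```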

Slide 8: Job Scheduling
One map task for each block of the input file
– Applies the user-defined map function to each record in the block
– Record = key/value pair
User-defined number of reduce tasks
– Each reduce task is assigned a set of record groups
– Record group = all records with the same key
– For each group, apply the user-defined reduce function to the record values in that group
Reduce tasks read from every map task
– Each read returns the record groups destined for that reduce task
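The routing of records into record groups is just key hashing. The method body below mirrors Hadoop's default HashPartitioner (the GroupingExample wrapper itself is illustrative, not a Hadoop class):

```java
public class GroupingExample {
  // Mirrors org.apache.hadoop.mapred.lib.HashPartitioner.getPartition():
  // mask off the sign bit so the modulus is non-negative.
  static int getPartition(Object key, int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }

  public static void main(String[] args) {
    // Every record with key "the" lands on the same one of 59 reduce tasks,
    // so that task sees the complete record group for "the".
    System.out.println(getPartition("the", 59));
  }
}
```

Because the partition depends only on the key, all records with the same key reach the same reduce task, which is what makes each record group complete.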

Slide 9: Dataflow in Hadoop
Map tasks write their output to local disk
– Output available only after the map task has completed
Reduce tasks write their output to HDFS
– Once a job is finished, the next job's map tasks can be scheduled, and will read their input from HDFS
Therefore, fault tolerance is simple: simply re-run tasks on failure
– No consumers see partial operator output

Slides 10–14: Dataflow in Hadoop (animated diagram: the client submits a job; the master schedules map and reduce tasks; map tasks read input blocks from HDFS; each finished map writes its output to the local FS and reports the location to the master; reducers fetch map output via HTTP GET; finally, reduce writes the final answer to HDFS)

Slide 15: Hadoop Online Prototype (HOP)

Slide 16: Hadoop Online Prototype
HOP supports pipelining within and between MapReduce jobs: push rather than pull
– Preserves the simple fault tolerance scheme
– Improved job completion time (better cluster utilization)
– Improved detection and handling of stragglers
MapReduce programming model unchanged
– Clients supply the same job parameters
Hadoop client interface backward compatible
– No changes required to existing clients (e.g., Pig, Hive, Sawzall, Jaql)
– Extended to take a series of jobs

Slide 17: Pipelining Batch Size
Initial design: pipeline eagerly (for each row)
– Prevents use of the combiner
– Moves more sorting work to the mapper
– Map function can block on network I/O
Revised design: map writes into a buffer (sketched below)
– Spill thread: sort and combine the buffer, spill it to disk
– Send thread: pipeline spill files to the reducers
– A simple adaptive algorithm chooses the batch size
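A toy sketch of the revised design's division of labor, assuming string records and a fixed batch size (which HOP instead adapts at runtime); the helper methods are placeholders, not HOP's code:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The map thread only appends to a buffer; a spill thread sorts/combines
// full batches and writes them to disk; a send thread ships finished spill
// files to reducers. The map function therefore never blocks on network I/O.
class PipelinedMapOutput {
  private static final int BATCH = 10_000;  // fixed here; HOP adapts this at runtime
  private final BlockingQueue<List<String>> toSpill = new LinkedBlockingQueue<>();
  private final BlockingQueue<File> toSend = new LinkedBlockingQueue<>();
  private List<String> buffer = new ArrayList<>();

  void collect(String record) {                    // called by the map function
    buffer.add(record);
    if (buffer.size() >= BATCH) {
      toSpill.add(buffer);                         // hand off to the spill thread
      buffer = new ArrayList<>();
    }
  }

  void spillLoop() throws InterruptedException {   // runs on the spill thread
    while (true) {
      List<String> batch = toSpill.take();
      Collections.sort(batch);                     // sort (and combine) the batch
      toSend.add(writeSpillFile(batch));           // placeholder helper
    }
  }

  void sendLoop() throws InterruptedException {    // runs on the send thread
    while (true) {
      sendToReducers(toSend.take());               // placeholder helper
    }
  }

  private File writeSpillFile(List<String> batch) { return null; /* elided */ }
  private void sendToReducers(File spill) { /* elided */ }
}
```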

Slide 18: Fault Tolerance
Fault tolerance in MR is simple and elegant
– Simply recompute on failure; no state recovery
Initial design for pipelined fault tolerance:
– Reducers treat in-progress map output as tentative
Revised design:
– Pipelining maps periodically checkpoint their output
– Reducers can safely consume output up to the last checkpoint
– Bonus: improved speculative execution

Slide 19: Dataflow in HOP (diagram: the master sends "schedule" and "schedule + location" messages; reducers send pipeline requests to map tasks, whose output is pipelined directly to the reducers)

Slide 20: Online Aggregation
Traditional MR: a poor UI for data analysis
Pipelining means that data is available at consumers "early"
– Can be used to compute and refine an approximate answer
– Often sufficient for interactive data analysis, developing new MapReduce jobs, ...
Within a single job: periodically invoke the reduce function at each reduce task on the available data
Between jobs: periodically send a "snapshot" to consumer jobs

Slide 21: Intra-Job Online Aggregation
Approximate answers published to HDFS by each reduce task
– Based on job progress: e.g. 10%, 20%, ...
Challenge: providing statistically meaningful approximations
– How close is an approximation to the final answer?
– How do you avoid biased samples?
Challenge: reduce functions are opaque
– Ideally, computing the 20% approximation should reuse the results of the 10% approximation
– Either use combiners, or HOP does redundant work
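A hypothetical sketch of the progress-triggered snapshot logic at a reduce task; the class and helper names are illustrative, not HOP's API:

```java
// Each time job progress crosses another 10% boundary, re-run reduce over
// the map output received so far and publish the result to HDFS as an
// approximate answer.
class SnapshotPolicy {
  private int lastBucket = 0;

  void onProgress(double mapProgress) {            // in [0.0, 1.0]
    int bucket = (int) (mapProgress * 10);         // 0..10 => 0%, 10%, ..., 100%
    if (bucket > lastBucket) {
      lastBucket = bucket;
      Object snapshot = reduceOverReceivedData();  // redundant work unless the
                                                   // reduce is combiner-like
      publishToHdfs(snapshot, bucket * 10);        // e.g. an output dir per 10%
    }
  }

  private Object reduceOverReceivedData() { return null; /* elided */ }
  private void publishToHdfs(Object snapshot, int percent) { /* elided */ }
}
```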

Slide 22: Online Aggregation in HOP (diagram: map tasks read the input file's blocks from HDFS; reduce tasks periodically write snapshot answers to HDFS while the job runs)

Slide 23: Inter-Job Online Aggregation (diagram: Job 1's reducers pipeline their output to Job 2's mappers, and Job 2 writes its answer to HDFS)

Slide 24: Inter-Job Online Aggregation
Like intra-job OA, but approximate answers are pipelined to the map tasks of the next job
– Requires co-scheduling a sequence of jobs
The consumer job computes an approximation
– Can be used to feed an arbitrary chain of consumer jobs with approximate answers
Challenge: how to avoid redundant work
– Output of reduce at 10% progress vs. at 20%

Slide 25: Example Scenario
Top-K most frequent words in a 5.5GB Wikipedia corpus (implemented as 2 MR jobs)
60-node EC2 cluster

Slide 26: Stream Processing
MapReduce is often applied to streams of data that arrive continuously
– Click streams, network traffic, web crawl data, ...
Traditional approach: buffer, then batch process
1. Poor latency
2. Analysis state must be reloaded for each batch
Instead, run MR jobs continuously, and analyze data as it arrives

Slide 27: Why?
Why use MapReduce for stream processing?
1. Many existing MR use cases are a good fit
2. Ability to run user-defined code
– Machine learning, graph analysis, unstructured data
3. Massive scale + low-latency analysis
4. Use existing MapReduce tools and libraries

Slide 28: Stream Processing with HOP
Map and reduce tasks run continuously
Reduce function divides the stream into windows
– "Every 30 seconds, compute the 1-, 5-, and 15-minute average network utilization; trigger an alert if ..."
– Window management is done by the user (in reduce)
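A sketch of what user-managed window logic inside a continuous reduce might look like for the example above; this is illustrative application code (assuming roughly in-order timestamps), not a HOP API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Keep timestamped samples, evict those older than the largest window, and
// emit rolling 1-, 5-, and 15-minute averages on each 30-second tick.
class WindowedAverages {
  private static final long[] WINDOWS_MS = {60_000, 300_000, 900_000};
  private final Deque<long[]> samples = new ArrayDeque<>(); // {timestampMs, value}

  void onRecord(long timestampMs, long value) {
    samples.addLast(new long[] {timestampMs, value});
    // Evict samples outside the largest (15-minute) window.
    while (!samples.isEmpty() && samples.peekFirst()[0] < timestampMs - 900_000) {
      samples.removeFirst();
    }
  }

  // Called every 30 seconds by the reduce task's window-management loop.
  double[] averages(long nowMs) {
    double[] avg = new double[WINDOWS_MS.length];
    for (int i = 0; i < WINDOWS_MS.length; i++) {
      long sum = 0, n = 0;
      for (long[] s : samples) {
        if (s[0] >= nowMs - WINDOWS_MS[i]) { sum += s[1]; n++; }
      }
      avg[i] = (n == 0) ? 0.0 : (double) sum / n;
    }
    return avg;  // caller triggers an alert if a threshold is exceeded
  }
}
```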

Slide 29: Stream Processing Challenges
1. How to store stream input?
– HDFS is not ideal
2. Fault tolerance for long-running tasks
– Operator restart becomes increasingly expensive
3. Elastic scale-up/scale-down during an MR job

Slide 30: #1: Storing Stream Input
Current approach: colocate the map task with the data producer
– Apply the map function, partition output to the reduce tasks
– Fault tolerance: the map task and the producer fate-share
– "Pushdown" of predicates and scalar transforms
– A total order requires a single reduce task
Is user-defined code at the data producer bad?
– Needs a fault-tolerant "buffer" (the map task) and coordination

Slide 31: #2: Fault Tolerance for Streams
Operator restart for long-running reduce tasks is too expensive
Hence, window-oriented fault tolerance (sketched below):
– Reducers label windows with IDs
– Mappers use window IDs to garbage-collect spills
Probably also need a fault-tolerant Job Tracker and HDFS Name Node
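A hypothetical sketch of the map-side garbage collection this enables; the class and method names are invented for illustration, not HOP's API:

```java
import java.io.File;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

// Spill files are tagged with the window ID they belong to. Once a reducer
// acknowledges a window, every spill up to that window can be deleted, so a
// failure replays only unacknowledged windows instead of the whole task.
class WindowGc {
  private final TreeMap<Long, List<File>> spillsByWindow = new TreeMap<>();

  void registerSpills(long windowId, List<File> spillFiles) {
    spillsByWindow.put(windowId, spillFiles);
  }

  void onWindowAck(long ackedWindowId) {
    NavigableMap<Long, List<File>> done = spillsByWindow.headMap(ackedWindowId, true);
    done.values().forEach(files -> files.forEach(File::delete));
    done.clear();  // the view is backed by the map, so entries are removed too
  }
}
```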

Slide 32: #3: Intra-Job Elasticity
Peak load != average load
– Increasingly important as job duration grows
Solution: consistent hashing over the reduce key space (sketched below)
– The Job Tracker manages the reduce key => task mapping
Useful for regular Hadoop as well
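A minimal sketch of consistent hashing over the reduce key space, assuming a toy hash function and an invented ReduceKeyRing class. The point is that adding or removing a reduce task remaps only a small slice of keys, which is what makes elasticity cheap:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Each reduce task owns several points on a hash ring; a key is routed to
// the first task point at or after the key's hash (wrapping around).
class ReduceKeyRing {
  private static final int POINTS_PER_TASK = 32;
  private final SortedMap<Integer, Integer> ring = new TreeMap<>(); // point -> taskId

  void addTask(int taskId) {                 // scale up: remaps few keys
    for (int i = 0; i < POINTS_PER_TASK; i++) {
      ring.put(hash("task-" + taskId + "#" + i), taskId);
    }
  }

  void removeTask(int taskId) {              // scale down: remaps few keys
    ring.values().removeIf(t -> t == taskId);
  }

  int taskFor(Object key) {
    int h = hash(key.toString());
    SortedMap<Integer, Integer> tail = ring.tailMap(h);
    int point = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
    return ring.get(point);
  }

  private static int hash(String s) {
    return s.hashCode() & Integer.MAX_VALUE; // toy hash; use a stronger one in practice
  }
}
```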

Slide 33: Other HOP Benefits
Shorter job completion time via improved cluster utilization: reduce work starts early
– Important for high-priority jobs, interactive jobs
Adaptive load management
– Better detection and handling of "straggler" tasks
– Elastic scale-up/scale-down: better pre-emption
– Decouple the unit of data transfer from the unit of scheduling (e.g. Yahoo! Petasort: 15GB per map task)

Slide 34: Sort Performance: Blocking (graph)
60-node EC2 cluster, 5.5GB input file, 40 map tasks, 59 reduce tasks

Slide 35: Sort Performance: Pipelining (graph)
927 seconds vs. 610 seconds

Slide 36: Future Work
1. Basic pipelining
– Performance analysis at scale (e.g. PetaSort)
– Job scheduling is much harder
2. Online Aggregation
– Statistically-robust estimation
– Better UI for approximate results
3. Stream Processing
– Develop into a full-fledged stream processing engine
– Stream support for high-level query languages
– Online machine learning

Slide 37: Thanks! Questions?
Source code and technical report: http://code.google.com/p/hop/
Contact: nrc@cs.berkeley.edu

Slide 38: Map Task Execution
1. Map phase
– Read the assigned input split from HDFS (split = file block by default)
– Parse the input into records (key/value pairs)
– Apply the map function to each record, returning zero or more new records
2. Commit phase
– Register the final output with the slave node (stored in the local filesystem as a file, sorted first by bucket number and then by key)
– Inform the master node of its completion

Slide 39: Reduce Task Execution
1. Shuffle phase
– Fetch input data from all map tasks (the portion corresponding to the reduce task's bucket)
2. Sort phase
– Merge-sort *all* map outputs into a single run
3. Reduce phase
– Apply the user reduce function to the merged run (arguments: a key and its corresponding list of values)
– Write output to a temp file in HDFS, with an atomic rename when finished

Slide 40: Design Implications
1. Fault tolerance
– Tasks that fail are simply restarted
– No further steps required, since nothing left the task
2. "Straggler" handling
– Job response time is limited by the slowest task
– Slow tasks get executed redundantly; take the result from the first to finish
– Assumes the slowdown is due to physical components (e.g., network, host machine)
Pipelining can support both!

Slide 41: Fault Tolerance in HOP
Traditional fault tolerance algorithms for pipelined dataflow systems are complex
HOP approach: write to disk and pipeline
– Producers write data into an in-memory buffer
– The in-memory buffer is periodically spilled to disk
– Spills are sent to consumers
– Consumers treat pipelined data as "tentative" until the producer is known to have completed
– Fault tolerance via task restart; tentative output is discarded

Slide 42: Refinement: Checkpoints
Problem: treating output as tentative inhibits parallelism
Solution: producers periodically "checkpoint" with the Hadoop master node
– "Output split x corresponds to input offset y"
– Pipelined data up to and including split x is now non-tentative
– Also improves speculation for straggler tasks and reduces redundant work on task failure
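A hypothetical sketch of the bookkeeping this rule implies on the consumer side; the class and method names are illustrative, not HOP's API:

```java
// A reducer may treat pipelined spills as final once the producing map task
// has checkpointed "output split x corresponds to input offset y" with the
// master; spills beyond the last checkpointed split stay tentative.
class CheckpointTracker {
  private long lastCheckpointedSplit = -1;

  void onCheckpoint(long splitId, long inputOffset) {
    lastCheckpointedSplit = Math.max(lastCheckpointedSplit, splitId);
  }

  boolean isTentative(long spillSplitId) {
    return spillSplitId > lastCheckpointedSplit;
  }
}
```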

