Reining in the Outliers in Map-Reduce Clusters using Mantri Ganesh Ananthanarayanan, Srikanth Kandula, Albert Greenberg, Ion Stoica, Yi Lu, Bikas Saha,


1 Reining in the Outliers in Map-Reduce Clusters using Mantri
Ganesh Ananthanarayanan, Srikanth Kandula, Albert Greenberg, Ion Stoica, Yi Lu, Bikas Saha, Edward Harris (Microsoft)
Presenter: Weiyue Xu
OSDI'10: Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation

2 Credits
Modified version of:
– http://www.cs.duke.edu/courses/fall10/cps296.2/lectures/
– research.microsoft.com/en-us/UM/people/srikanth/data/Combating%20Outliers%20in%20Map-Reduce.web.pptx

3 Outline
– Introduction
– Causes of Outliers
– Mantri
– Evaluation
– Discussion and Conclusion

4 MapReduce
(Figure: log dataset size, GB/TB/PB/EB, vs. log cluster size, 1 to 10^5; MapReduce occupies the large-data, large-cluster region beyond HPC and parallel databases.)
MapReduce decouples customized data operations from the mechanisms to scale.
Widely used:
– Cosmos (based on SVC's Dryad) + Scope @ Bing
– MapReduce @ Google
– Hadoop inside Yahoo! and on Amazon's Cloud (AWS)
Input data: e.g., the Internet, click logs, bio/genomic data

5 An Example: How it Works
Goal: find frequent search queries to Bing.
What the user says:
SELECT Query, COUNT(*) AS Freq FROM QueryTable GROUP BY Query HAVING Freq > X
(Figure: a job manager assigns work and tracks progress; Map tasks read file blocks and do a local write of their output; Reduce tasks read that output and produce the output blocks.)
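The slide's query can be sketched as a toy, in-memory map-reduce in Python. This is an illustration only, not Cosmos/Scope code; the block layout and query strings are made up.

```python
from collections import Counter

def map_phase(file_block):
    # Map: emit a (query, 1) pair for each record in the input block.
    return [(line.strip(), 1) for line in file_block]

def reduce_phase(pairs, threshold):
    # Reduce: count occurrences per query, keep those with Freq > threshold.
    counts = Counter()
    for query, one in pairs:
        counts[query] += one
    return {q: c for q, c in counts.items() if c > threshold}

blocks = [["bing maps", "weather"], ["weather", "bing maps"], ["weather"]]
pairs = [p for block in blocks for p in map_phase(block)]
print(reduce_phase(pairs, threshold=2))  # {'weather': 3}
```

In the real system each map task runs on one file block and writes its pairs locally; the reduce tasks then pull and merge them, which is where the barrier and the cross-rack traffic discussed later come from.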

6 Outliers slow down map-reduce jobs
(Figure: task durations across phases — Map.Read 22K, Map.Move 15K, Map 13K, Reduce 51K — with the file-system barrier between map and reduce.)
We find that by tackling outliers we can speed up jobs while using resources efficiently:
– Quicker response improves productivity
– Predictability supports SLAs
– Better resource utilization

7 From a phase to a job
– A job may have many phases
– An outlier in an early phase has a cumulative effect
– Data loss may cause multi-phase recompute → outliers

8 Why outliers?
Due to unavailable input, tasks have to be recomputed.
(Figure: map → sort → reduce pipeline; delay due to a recompute readily cascades downstream.)

9 Frequency of Outliers
– Stragglers: tasks that take ≥ 1.5 times the median task in that phase
– Recomputes: tasks that are re-run because their output was lost
50% of phases have > 10% stragglers and no recomputes; 10% of the stragglers take > 10x longer.
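The straggler definition above is easy to make concrete. A minimal sketch (the durations are hypothetical, and the 1.5x-median threshold is the one the slide states):

```python
from statistics import median

def stragglers(durations, factor=1.5):
    # Flag tasks whose duration is at least `factor` times the
    # median task duration in the phase.
    m = median(durations)
    return [i for i, d in enumerate(durations) if d >= factor * m]

# five ordinary tasks and one outlier
print(stragglers([10, 11, 9, 12, 10, 40]))  # [5]
```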

10 Cost of Outliers
At the median, jobs are slowed down by 35% due to outliers.

11 Previous solutions
– The original MapReduce paper observed the problem but did not solve it in depth
– Current schemes (e.g., Hadoop, LATE) duplicate long-running tasks based on some metrics
– Drawbacks: some duplicates are unnecessary; they use extra resources; placement may be a problem

12 What this Paper is About
– Identify fundamental causes of outliers
– Mantri: a cause- and resource-aware mitigation scheme
  – Case-by-case analysis: takes distinct actions based on cause
  – Considers the opportunity cost of actions
– Results from a production deployment

13 Causes of Outliers: Data Skew
Data size varies across tasks in a phase.

14 Causes of Outliers: Crossrack Traffic
(Figure: reduce tasks on one rack reading map output spread unevenly across racks.)
– Uneven placement of map output is typical in production
– Reduce tasks are placed at the first available slots

15 Causes of Outliers: Bad and Busy Machines
– 50% of recomputes happen on 5% of the machines
– Recomputes increase resource usage

16 Causes of Outliers: Crossrack Traffic (cont.)
– 70% of cross-rack traffic is reduce traffic
– Reduce reads from every map; tasks in a spot with a slow network run slower
– Tasks compete for the network among themselves
– 50% of phases take 62% longer to finish than under ideal placement

17 – Outliers cluster by time: resource contention might be the cause
– Recomputes cluster by machines: data loss may cause multiple recomputes

18 Mantri
– Cause-aware and resource-aware
– Fixes each problem with a different strategy
– Runtime = f(input, network, dataToProcess, ...)

19 Mantri [Avoid Recomputations]
(Figure: map → sort → reduce pipeline, recap of slide 8; delay due to a recompute readily cascades.)

20 Idea: replicate intermediate data; use the copy if the original is unavailable.
Challenge: what data to replicate?
Insight: compare the cost to recompute vs. the cost to replicate.
With t = predicted runtime of a task and r = predicted probability of recomputation at its machine, the cost to recompute task M2 (whose input comes from M1) is
t_redo = r2 × (t2 + t_redo,1)
i.e., it depends on data-loss probabilities and task times, and recurses into prior phases. Mantri replicates output when t_redo > t_rep, preferentially acting on the more costly inputs.
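The replicate-vs-recompute test on this slide can be sketched in a few lines. This is a simplified one-level version of the recursive cost; the probabilities and times below are hypothetical.

```python
def t_redo(r, t, t_input_redo):
    # Expected recompute cost: with probability r (data-loss probability
    # at the task's machine) we pay the task's own runtime t plus the
    # cost of regenerating its input from the previous phase.
    return r * (t + t_input_redo)

def should_replicate(r, t, t_input_redo, t_rep):
    # Replicate the intermediate output iff recomputation is costlier
    # than the one-time cost t_rep of making a copy.
    return t_redo(r, t, t_input_redo) > t_rep

# hypothetical: 5% loss probability, 100 s task, 30 s to redo its input,
# 4 s to replicate its output
print(should_replicate(0.05, 100.0, 30.0, 4.0))  # True
```

Because t_redo grows with the input's own redo cost, expensive multi-phase lineages get replicated first, which matches the slide's "preferentially acts on more costly inputs".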

21 Mantri [Network-Aware Placement]
(Figure: recap of slide 14 — uneven placement of map output is typical in production; reduce tasks are placed at the first available slots.)

22 Idea: avoid hot-spots; keep traffic on a link proportional to its bandwidth.
Challenges: global coordination, congestion detection.
Insights:
– Local control is a good approximation (each job balances its own traffic)
– Link utilizations average out over long tasks and are steady over short ones
If rack i holds d_i of the map output, placing an a_i fraction of the reduce tasks there means it must upload d_i^u and download d_i^v over links with available bandwidths b_i^u and b_i^v. Mantri chooses the a_i that minimize the slowest link's transfer time.
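A simplified two-rack sketch of this optimization, assuming rack i must upload the map output it holds that remote reduces consume, d_i × (1 − a_i), and download the rest of the map output for its own reduces, (D − d_i) × a_i. A grid search stands in for the closed-form solution; all numbers are hypothetical.

```python
def placement(d, bu, bv, steps=1000):
    # Grid-search the reduce split (a_0, a_1) summing to 1 that minimizes
    # the slowest link's transfer time. Two-rack illustration only.
    D = sum(d)
    best_a, best_cost = None, float("inf")
    for k in range(steps + 1):
        a = [k / steps, 1 - k / steps]
        cost = max(
            max(d[i] * (1 - a[i]) / bu[i],   # rack i uploads for remote reduces
                (D - d[i]) * a[i] / bv[i])   # rack i downloads remote map output
            for i in range(2))
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a, best_cost

# rack 0 holds 80 of 100 units of map output; equal link bandwidths
print(placement([80, 20], [1, 1], [1, 1]))
```

With skewed map output and equal bandwidths the search puts most reduces on the data-heavy rack (about 0.8 here), rather than at "first available slots".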

23 Mantri [Data-Aware Task Ordering]
– About 25% of outliers occur due to more dataToProcess (workload imbalance)
– Ignoring these is better than duplicating them (the state-of-the-art)

24 Problem: workload imbalance causes tasks to straggle.
Idea: restarting outliers that are merely lengthy is counter-productive; order them well instead.
Mantri builds an estimator T ~ f(dataToProcess) and schedules tasks in descending order of dataToProcess.
Insight, Theorem [Graham, 1969]: scheduling tasks longest-processing-time first is at most 33% worse than the optimal schedule.
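The longest-processing-time-first rule behind Graham's bound can be sketched directly; task sizes here are hypothetical stand-ins for the dataToProcess estimates.

```python
import heapq

def lpt_schedule(task_sizes, n_slots):
    # Longest-Processing-Time first: take tasks in descending order of
    # work and always assign the next one to the least-loaded slot.
    loads = [0.0] * n_slots
    heapq.heapify(loads)
    for size in sorted(task_sizes, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + size)
    return max(loads)  # makespan: when the phase finishes

print(lpt_schedule([7, 5, 4, 3, 3, 2], 2))  # 12.0
```

Here LPT achieves the optimal makespan (24 units of work split 12/12 across 2 slots); in general Graham's 1969 result bounds LPT at (4/3 − 1/(3m)) × optimal, i.e., at most 33% worse.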

25 Mantri [Resource-Aware Restart]
Problem: 25% of outliers remain, likely due to contention at the machine.
Idea: restart tasks elsewhere in the cluster as soon as possible.
Challenge: restart or duplicate?
(Figure: a running task with remaining time t_rem vs. a potential restart taking t_new; cases (a)-(c) show when acting now pays off.)
With c copies of the task running, a duplicate saves both time and resources iff P(c × t_rem > (c+1) × t_new) > δ.
– If there is pending work, duplicate only when it saves both time and resources
– Else, duplicate if the expected savings are high
– Continuously observe and kill wasteful copies
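The duplicate test above can be sketched by evaluating the probability empirically over a set of remaining-time estimates. The samples and threshold δ below are hypothetical; Mantri's actual estimator is described on the backup slides.

```python
def should_duplicate(t_rem_samples, t_new, c, delta=0.25):
    # Duplicate a task that already has c running copies iff the
    # estimated probability that c * t_rem > (c + 1) * t_new exceeds
    # delta: only then does the extra copy save time AND resources.
    hits = sum(1 for t_rem in t_rem_samples if c * t_rem > (c + 1) * t_new)
    return hits / len(t_rem_samples) > delta

# hypothetical remaining-time estimates (s) for a suspected outlier
samples = [120, 150, 200, 90, 180]
print(should_duplicate(samples, t_new=50, c=1, delta=0.25))  # True
```

Note how c appears on both sides: each additional running copy raises the bar for launching yet another, which is what keeps Mantri from duplicating indiscriminately.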

26 Summary
– Reduce recomputation: preferentially replicate costly-to-recompute tasks
– Poor network: each job locally avoids network hot-spots
– DataToProcess: schedule in descending order of data size
– Bad machines: quarantine persistently faulty machines
– Others: restart or duplicate tasks, cognizant of resource cost

27 Evaluation Methodology
– Mantri run on production clusters
– Baseline is results from Dryad
– Trace-driven simulations to compare with other systems

28 Comparing Jobs in the Wild
With and without Mantri for one month of jobs in Bing's production cluster: 340 jobs that each repeated at least five times during May 25-28 (release) vs. Apr 1-30 (pre-release).

29 In Production, Restarts …

30 In Trace-Replay Simulations, Restarts …
(Figure: CDF vs. % cluster resources.)

31 Protecting Against Recomputes
(Figure: CDF vs. % cluster resources.)

32 Conclusion
– Outliers are a significant problem, arising from many causes
– Mantri: cause- and resource-aware mitigation that outperforms prior schemes

33 Discussion
Mantri does case-by-case analysis for each cause; what if the causes are inter-dependent?

34 Questions or Comments? Thanks!

35 Estimation of t_rem and t_new
d: input data size; d_read: the amount read so far

36 Estimation of t_new
processRate: estimated from all tasks in the phase; locationFactor: machine performance; d: input size
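One simple way to turn the quantities on these two backup slides into estimators, assuming progress is linear in bytes read; this is an illustrative sketch, not Mantri's exact formulas.

```python
def estimate_t_rem(t_elapsed, d, d_read):
    # Remaining time for a running task, assuming linear progress:
    # elapsed time scaled by the fraction of the input still unread.
    return t_elapsed * (d - d_read) / d_read

def estimate_t_new(process_rate, location_factor, d):
    # Runtime of a fresh copy: the phase-wide per-byte process rate,
    # scaled by the candidate machine's performance factor, times the
    # input size d.
    return process_rate * location_factor * d

# hypothetical: 60 s elapsed, 40 of 100 bytes read
print(estimate_t_rem(60.0, 100.0, 40.0))        # 90.0
# hypothetical: 0.5 s/byte phase rate on a 2x-slow machine
print(estimate_t_new(0.5, 2.0, 100.0))          # 100.0
```

Comparing these two numbers is exactly the t_rem-vs-t_new input to the restart test on slide 25.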

