
1 Satisfying Strong Application Requirements in Data-Intensive Clouds
Ph.D. Final Exam, Brian Cho

2 Motivating scenario: Using the data-intensive cloud
Researchers contract with a defense agency to investigate ongoing suspicious activity
– e.g., a botnet attack, worm, etc.
– Other applications: processing click logs, news items, etc.
1. Transfer large logs (TBs-PBs) from possible victim sites
2. Run computations on the logs to find vulnerabilities and the source of the attack
3. Store the data

3 Can today’s data-intensive cloud meet these demands?
The researchers require:
1. Control over the time and $ cost of the transfer, to stay within the contracted budget and deadline
2. Prioritization of this time-sensitive job over other jobs in its cluster
3. Consistent updates and reads at the data store
Current limitation: systems are built to optimize key metrics at large scale, but not to meet these strong user requirements

4 Strong user requirements
Many real-world requirements are too important to relax:
– Time
– $$$
– Priority
– Data consistency
It is essential to treat these strong requirements as problem constraints
– … not just as side effects of resource limitations in the cloud

5 Thesis statement
It is feasible to satisfy strong application requirements for data-intensive cloud computing environments, in spite of resource limitations, while simultaneously optimizing run-time metrics.
– Strong application requirements: real-time deadlines, dollar budgets, data consistency, etc.
– Resource limitations: finite compute nodes, limited bandwidth, high latency, frequent failures, etc.
– Run-time metrics: throughput, latency, $ cost, etc.

6 Contributions: Practical solutions
Solution | Area | Strong user requirement | Key optimized metric
Natjam | Computation | Prioritize production jobs | Job completion time
Vivace [USENIX ATC 2012] | Key-value Storage | Consistency | Low latency
Pandora-A [ICDCS 2010] | Bulk Data Transfer | Deadline | Low $ cost
Pandora-B [ICAC 2011] | Bulk Data Transfer | $ Budget | Short transfer time

7 Pandora-A: Bulk Data Transfer via Internet and Shipping Networks
Minimize $ cost subject to a time deadline
Transfer options
– Internet links with proportional costs but limited bandwidth
– Shipping links with fixed costs and shipping times depending on method (e.g., ground, air)
Solution (see the sketch below)
– Transform into a time-expanded network
– Solve min-cost flow on the network
Trace-driven experiments
– Pandora-A solutions better than direct Internet transfer or shipping alone
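The slide gives only the high-level recipe, so here is a minimal sketch of one way to build a time-expanded network and solve min-cost flow on it with networkx. Everything in it (the function names, the link-tuple format, the discrete time steps) is our illustrative assumption, not Pandora-A's implementation; in particular, fixed shipping costs are approximated here as per-unit costs, which the real system would model differently.

```python
# Illustrative sketch only (not the Pandora-A implementation).
import networkx as nx

def build_time_expanded_network(sites, links, deadline_steps, data_units):
    """sites: site names (must include source and sink).
    links: (src, dst, capacity_per_step, cost_per_unit, lag_steps) tuples.
    networkx's solver expects integer capacities and costs."""
    G = nx.DiGraph()
    for t in range(deadline_steps):
        for s in sites:
            # Holding edge: data may wait at a site until the next step for free.
            G.add_edge((s, t), (s, t + 1), capacity=data_units, weight=0)
    for (src, dst, cap, cost, lag) in links:
        for t in range(deadline_steps - lag + 1):
            # Transfer edge: limited capacity per step, charged per unit moved.
            G.add_edge((src, t), (dst, t + lag), capacity=cap, weight=cost)
    return G

def min_cost_transfer(sites, links, source, sink, deadline_steps, data_units):
    G = build_time_expanded_network(sites, links, deadline_steps, data_units)
    # All data starts at the source at t=0 and must reach the sink by the deadline.
    G.nodes[(source, 0)]["demand"] = -data_units
    G.nodes[(sink, deadline_steps)]["demand"] = data_units
    flow = nx.min_cost_flow(G)  # raises NetworkXUnfeasible if the deadline is too tight
    return nx.cost_of_flow(G, flow), flow
```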

8 Pandora-B: Bulk Data Transfer via Internet and Shipping Networks
Minimize transfer time subject to a $ budget B
– Bounded binary search on Pandora-A solutions
– Bounds (LB, UB) created by transforming time-expanded networks
[Plot: dollar cost ($) vs. transfer time T (hrs), marking the budget B and the LB/UB bounds]
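Here is a minimal sketch of the bounded binary search, assuming a Pandora-A-style oracle `solve_min_cost(deadline_steps)` (a hypothetical name, e.g. a wrapper around the earlier sketch) that returns the cheapest cost meeting a given deadline, or None if infeasible. The search is valid because the minimum cost is non-increasing as the deadline grows.

```python
def shortest_time_within_budget(solve_min_cost, budget, lb_steps, ub_steps):
    """Find the smallest deadline (between LB and UB) whose cheapest
    transfer plan fits within the $ budget."""
    best = None
    lo, hi = lb_steps, ub_steps
    while lo <= hi:
        mid = (lo + hi) // 2
        cost = solve_min_cost(mid)
        if cost is not None and cost <= budget:
            best = mid        # feasible within budget: try a tighter deadline
            hi = mid - 1
        else:
            lo = mid + 1      # too expensive or infeasible: relax the deadline
    return best               # shortest deadline (in steps) within budget, or None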

9 Vivace: Consistent data for congested geo-distributed systems
Strongly consistent key-value store
– Low latency across geo-distributed data centers
– Under congestion
New algorithms
– Prioritize a small amount of critical information
– To avoid delay due to congestion
Evaluated using a practical prioritization infrastructure

10 Natjam: Prioritizing production jobs in MapReduce/Hadoop
Mixed workloads
– Production jobs: time sensitive, directly affect revenue
– Research jobs: e.g., long-term analysis
Example: an ad provider working over ad click-through logs
– Production: count clicks and update ads; slow counts → show old ads → don’t get paid $$$
– Research: is there a better way to place ads? Run machine-learning analysis over lots of historical logs; needs a large cluster
Goal: prioritize production jobs

11 Contributions
Natjam prioritizes production jobs while giving research jobs the spare capacity
Suspend/Resume of tasks in research jobs
– Production jobs can gain resources immediately
– Research jobs can use many resources at a time, without wasting work
Develop eviction policies that choose which tasks to suspend

12 Natjam Outline
Motivation
Contributions
Background: MapReduce/Hadoop
State-of-the-art
Solution: Suspend/Resume Design
Evaluation

13 Background: MapReduce/Hadoop
Distributed computation on a large cluster
Each job consists of Map and Reduce tasks
Job stages
1. Map tasks run computations in parallel
2. Shuffle combines intermediate Map outputs
3. Reduce tasks run computations in parallel
[Diagram: Map tasks (M) feeding Reduce tasks (R)]

14 Background: MapReduce/Hadoop
Distributed computation on a large cluster
Each job consists of Map and Reduce tasks
Job stages
1. Map tasks run computations in parallel
2. Shuffle combines intermediate Map outputs
3. Reduce tasks run computations in parallel
Map input/Reduce output stored in a distributed file system (e.g., HDFS)
Scheduling: which task to run on empty resources (slots)
[Diagram: Map (M) and Reduce (R) tasks of Jobs 1, 2, and 3 spread across the cluster's slots]
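To make the Map/Shuffle/Reduce stages above concrete, here is a toy single-process word count in Python. It only illustrates the programming model; it is not Hadoop code, and all function names are ours.

```python
from collections import defaultdict

def map_task(line):
    # Map: emit (key, value) pairs; in Hadoop these run in parallel across input splits.
    for word in line.split():
        yield word, 1

def shuffle(pairs):
    # Shuffle: group intermediate values by key before the Reduce stage.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_task(key, values):
    # Reduce: aggregate each key independently; these also run in parallel.
    return key, sum(values)

lines = ["the quick brown fox", "the lazy dog"]
pairs = [kv for line in lines for kv in map_task(line)]
counts = dict(reduce_task(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```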

15 State-of-the-art: Separate clusters
Submit production jobs to a production cluster
Submit research jobs to a research cluster

16 State-of-the-art: Separate clusters
Submit production jobs to a production cluster
Submit research jobs to a research cluster
Trace of job submissions to a Yahoo production cluster
Periods of under-utilization, where research jobs could potentially fill in
[Plot: # Reduce slots used vs. time (hours:mins), with the Reduce slot capacity line and under-utilization gaps marked; used with permission from Yahoo]

17 State-of-the-art: Single cluster Hadoop scheduling
Ideally,
– Enough capacity for production jobs
– Run research tasks on all idle production slots
But,
– Killing tasks (e.g., Fair Scheduler) can lead to wasted work
[Plot: # Reduce slots vs. time, marking the Reduce slot capacity, under-utilization, and wasted work; used with permission from Yahoo]

18 State-of-the-art: Single cluster Hadoop scheduling
Ideally,
– Enough capacity for production jobs
– Run research tasks on all idle production slots
But,
– Killing tasks (e.g., Fair Scheduler) can lead to wasted work
– No preemption (e.g., Capacity Scheduler) can lead to production jobs waiting for resources
[Plot: # Reduce slots vs. time, marking the Reduce slot capacity and intervals where production jobs aren’t assigned resources; used with permission from Yahoo]

19 Approach: Suspend/Resume
Suspend/Resume tasks within and across research jobs
– Production jobs can gain resources immediately
– Research jobs can use many resources at a time, without wasting work
Focus on Reduce tasks
– Reduce tasks take longer, so more work to lose (median Map 19 seconds vs. Reduce 231 seconds [Facebook])
[Plot: # Reduce slots vs. time with the Reduce slot capacity line; used with permission from Yahoo]

20 Goals: Prioritize production jobs
Requirement: production jobs should have the same completion time as if they were executed in an exclusive production cluster
– Possibly with a small overhead
Optimization: research jobs should have the shortest completion time possible
Constraint: finite cluster resources

21 Challenges
Avoid Suspend overhead
– Otherwise production jobs would wait for resources
Avoid Resume overhead
– Otherwise research jobs would be delayed from making progress
Optimize task evictions
– Job completion time is the metric that users care about
– Develop eviction policies that have the least impact on job completion times

22 Natjam Design (outline)
Motivation
Contributions
Background: MapReduce/Hadoop
State-of-the-art
Solution: Suspend/Resume Design
Evaluation
Design topics:
– Scheduler: Hadoop → Natjam
– Architecture: Hadoop → Natjam
– Suspend/Resume tasks
– Eviction Policies: Task, Job

23 Background: Capacity Scheduler
Limitation: research jobs cannot scale down
Hadoop capacity is shared using queues
– Guaranteed capacity (G)
– Maximum capacity (M)

24 Background: Capacity Scheduler
Limitation: research jobs cannot scale down
Hadoop capacity is shared using queues
– Guaranteed capacity (G)
– Maximum capacity (M)
Example
– Production (P) queue: G 80% / M 80%
– Research (R) queue: G 20% / M 40%
1. Production job submitted first: P takes 80%; R can only grow to 40% (under-utilization)
2. Research job submitted first: R takes 40% (under-utilization); P cannot grow beyond 60%
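As a concrete, much simplified model of these rules, the sketch below shows why scenario 2 leaves the production queue stuck at 60%: without preemption a queue can only take capacity that is currently free, and never beyond its maximum cap. The classes and numbers are ours, not Hadoop source code.

```python
# Simplified model of guaranteed/maximum queue capacity (illustrative only).
class Queue:
    def __init__(self, name, guaranteed, maximum):
        self.name, self.guaranteed, self.maximum = name, guaranteed, maximum
        self.used = 0.0  # fraction of the cluster currently held by this queue

def grant(queue, wanted, free):
    """Grant `queue` as much of the free capacity as its maximum cap allows.
    Capacity already held by another queue is untouchable without preemption."""
    allowed = max(0.0, min(wanted, free, queue.maximum - queue.used))
    queue.used += allowed
    return allowed

P = Queue("P", guaranteed=0.80, maximum=0.80)
R = Queue("R", guaranteed=0.20, maximum=0.40)

free = 1.0
free -= grant(R, wanted=1.0, free=free)  # scenario 2: R arrives first, takes 0.40
free -= grant(P, wanted=1.0, free=free)  # P then gets only 0.60, below its 0.80 guarantee
```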

25 Natjam Scheduler
Does not require Maximum capacity
Scales down research jobs

26 Natjam Scheduler
Does not require Maximum capacity
Scales down research jobs
1. P/R Guaranteed 80%/20%: R initially takes 100%; when a production job arrives, R is scaled down and P takes 80%
2. P/R Guaranteed 100%/0%: R initially takes 100%; when a production job arrives, R is scaled down and P takes 100%
→ Prioritizes production jobs

27 Background: Hadoop YARN architecture
Resource Manager
Application Master per application
Tasks are launched on containers of memory
– Formerly, slots in Hadoop
[Diagram: the Resource Manager (running the Capacity Scheduler) assigns containers on Nodes A and B; each node runs a Node Manager hosting Application Masters and their tasks; an Application Master asks the Resource Manager for containers, and empty containers hold no task]

28 Suspend/Resume architecture
Preemptor
– Decides when resources should be reclaimed from queues
– Chooses the victim job
Releaser
– Chooses the task to evict
Local Suspender
– Saves state
– Promptly exits
Messaging overheads
[Diagram: the Preemptor at the Resource Manager sends preempt() with the number of containers to release; the victim Application Master's Releaser sends release() to chosen tasks; each task's Local Suspender saves state and exits, freeing its container; the saved state is used later by resume()]
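To make that message path concrete, here is a self-contained toy sketch in Python. The classes and the particular eviction choices (Most Resources for the victim job, Shortest Remaining Time for the task) are our own assumptions for illustration; this is not Natjam source code.

```python
class Task:
    def __init__(self, task_id, remaining_secs):
        self.task_id, self.remaining_secs = task_id, remaining_secs
    def suspend(self):
        # Local Suspender: save a small state record and exit promptly.
        return {"task_id": self.task_id, "remaining_secs": self.remaining_secs}

class ResearchJob:
    """Stands in for an Application Master running research tasks."""
    def __init__(self, tasks):
        self.tasks, self.suspended = list(tasks), []
    def release(self, n):
        # Releaser: choose tasks to evict (here: Shortest Remaining Time).
        freed = 0
        while freed < n and self.tasks:
            victim = min(self.tasks, key=lambda t: t.remaining_secs)
            self.tasks.remove(victim)
            self.suspended.append(victim.suspend())
            freed += 1
        return freed

def preempt(research_jobs, containers_needed):
    # Preemptor: pick the victim job (here: the one holding the most tasks)
    # and ask its Releaser for the containers a production job needs.
    victim = max(research_jobs, key=lambda j: len(j.tasks))
    return victim.release(containers_needed)
```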

29 Suspending and Resuming Tasks
When suspending, we must save enough state to resume the task later.
By reusing existing intermediate data, the saved state stays small
– Simple
– Low overhead

30 Suspending and Resuming Tasks
Existing intermediate data used
– Reduce inputs, stored at the local host
– Reduce outputs, stored on HDFS
Suspend state saved
– Key counter
– Reduce input path
– Hostname
– List of suspended task attempt IDs
[Diagram: task attempt 1 is suspended (container freed, suspend state saved); the resumed task attempt 2 reads the inputs, skips keys up to the saved key counter, and writes to HDFS (tmp/task_att_1, tmp/task_att_2, outdir/)]
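Below is a plain-record sketch of that suspend state and of how a resumed attempt might use it. The field names follow the slide, but the class and the resume helper are our illustration, not Natjam's actual data structures.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable, List, Tuple

@dataclass
class ReduceSuspendState:
    key_counter: int                  # number of input keys already fully reduced
    reduce_input_path: str            # local path to the sorted reduce input
    hostname: str                     # node that holds that local input
    suspended_attempt_ids: List[str] = field(default_factory=list)

def resume_reduce(state: ReduceSuspendState,
                  grouped_input: Iterable[Tuple[str, list]],
                  reduce_fn: Callable[[str, list], Tuple[str, int]]):
    """Skip keys the suspended attempt already finished; their outputs are
    already on HDFS, so no completed work is redone."""
    for i, (key, values) in enumerate(grouped_input):
        if i < state.key_counter:
            continue
        yield reduce_fn(key, values)
```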

31 Two-level Eviction Policies
Job-level eviction
– Chooses the victim job
Task-level eviction
– Chooses the task to evict
[Diagram: same Suspend/Resume architecture as before; the Preemptor applies the job-level policy when sending preempt() with the number of containers to release, and the Releaser applies the task-level policy when sending release()]

32 Task eviction policies
Based on time remaining
– The last task to finish decides job completion time
– A task that finishes earlier releases its container earlier
The Application Master keeps track of time remaining
Shortest Remaining Time (SRT)
+ Shortens the tail
− Holds on to containers that would be released soon
Longest Remaining Time (LRT)
− May lengthen the tail
+ Releases containers as soon as possible
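A minimal sketch of the two selection rules, using the Application Master's per-task time-remaining estimates; the function and argument names are assumptions for illustration.

```python
def pick_victim_task(remaining_secs, policy="SRT"):
    """remaining_secs: {task_attempt_id: estimated seconds left}, as tracked
    by the Application Master. Returns the task attempt to suspend."""
    if policy == "SRT":
        # Shortest Remaining Time: keeps the job's tail short, but the chosen
        # container would have been freed soon anyway.
        return min(remaining_secs, key=remaining_secs.get)
    if policy == "LRT":
        # Longest Remaining Time: frees the longest-held container right away,
        # at the risk of lengthening the job's tail.
        return max(remaining_secs, key=remaining_secs.get)
    raise ValueError(f"unknown policy: {policy}")
```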

33 Job eviction policies
Based on the amount of resources (e.g., memory) held by each job
The Resource Manager holds the resource information
Least Resources (LR)
+ Large jobs benefit
− Starvation even with small production jobs
Most Resources (MR)
+ Small jobs benefit
− Large jobs may be delayed for a long time
Probabilistically-weighted on Resources (PR)
+ Avoids biasing tasks: the chance of eviction is the same for every task across all jobs, assuming a random task eviction policy
− Many jobs may be delayed
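And a companion sketch of the three job-level rules over the Resource Manager's per-job resource usage; again the names are our assumptions, not Natjam's API.

```python
import random

def pick_victim_job(job_resources, policy="PR"):
    """job_resources: {job_id: memory (or containers) currently held},
    as tracked by the Resource Manager."""
    if policy == "LR":                       # Least Resources
        return min(job_resources, key=job_resources.get)
    if policy == "MR":                       # Most Resources
        return max(job_resources, key=job_resources.get)
    if policy == "PR":
        # Probabilistically weighted on resources: a job holding twice the
        # resources is twice as likely to be picked, so (with a random task
        # eviction policy) every task in the cluster faces the same odds.
        jobs = list(job_resources)
        weights = [job_resources[j] for j in jobs]
        return random.choices(jobs, weights=weights, k=1)[0]
    raise ValueError(f"unknown policy: {policy}")
```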

34 Evaluation
Microbenchmarks
Trace-driven experiments
Natjam was implemented based on Hadoop 0.23 (YARN)
7-node cluster in CCT

35 Microbenchmarks: Setup
Average completion times on an empty cluster
– Research job: ~200s
– Production job: ~70s
Job sizes: XL (100% of cluster), L (75%), M (50%), S (25%)
Task workloads within a job chosen uniformly at random from the range (1/2 of the largest task, largest task]

36 Microbenchmark: Comparing Natjam to other techniques
Workload: Research-XL submitted at t=0s, Production-S at t=50s
[Bar chart of job completion times (seconds); annotations: 50% more than ideal, 90% more than ideal, 20% more than ideal, 2% more than ideal, 7% more than ideal, 15% less than Killing, 40% less than Soft cap]

37 Microbenchmark: Suspend overhead
1.25 s (50%) increase due to messaging delays
Task assignments happen in parallel; the 4.7 s increase in job completion time comes from:
i. Assigning the Application Master
ii. Assigning Map tasks
iii. Assigning Reduce tasks

38 Microbenchmark: Task eviction policies
Workload: Research-XL submitted at t=0s, Production-S at t=50s
[Bar chart of job completion times (seconds); annotation: 17% less than Random]
Theorem 1: When production tasks are the same length, SRT results in the shortest job completion time.

39 Microbenchmark: Job eviction policies
Workload: Research-L and Research-S submitted at t=0s, Production-S at t=50s
[Bar chart of job completion times (seconds); annotation: Most Resources + SRT = good fit]
Theorem 2: When tasks within each job are the same length, evicting from the minimum number of jobs results in the shortest average job completion time.

40 Trace-driven evaluation
Yahoo trace: scaled production cluster workload + scaled research cluster workload
Metric: job completion times

41 Trace-driven evaluation: Research jobs only
[Plot of job completion times; annotation: 115 seconds]

42 Trace-driven evaluation: CDF of differences in job completion time (negative is good)

43 Related Work
Single-cluster job scheduling has focused on:
– Locality of Map tasks [Quincy, Delay Scheduling]
– Speculative execution [LATE Scheduler]
– Average fairness between queues [Capacity Scheduler, Fair Scheduler]
– Recent work: elastic queues [Amoeba]
We solve the requirement of prioritizing production jobs

44 Natjam summary
Natjam prioritizes production jobs
Suspend/Resume of tasks in research jobs
Eviction policies that choose which tasks to suspend
Evaluation
– Microbenchmarks
– Trace-driven experiments

45 Conclusion
Solution | Strong user requirement | Key optimized metric
Pandora-A [ICDCS 2010] | Deadline | Low $ cost
Pandora-B [ICAC 2011] | $ Budget | Short transfer time
Natjam | Prioritize production jobs | Job completion time
Vivace [USENIX ATC 2012] | Consistency | Low latency
Thesis: It is feasible to satisfy strong application requirements for data-intensive cloud computing environments, in spite of resource limitations, while simultaneously optimizing run-time metrics.
Contributions: Solutions that reinforce this statement in diverse data-intensive cloud settings.

