Scheduling Jobs with Varying Parallelizability. Ravishankar Krishnaswamy, Carnegie Mellon University.


1 Scheduling Jobs with Varying Parallelizability. Ravishankar Krishnaswamy, Carnegie Mellon University

2 Outline
– Motivation
– Formalization of Problems
– A Concrete Example
– Generalizations

3 Motivation
Consider a scheduler on a multi-processor system:
– Different jobs of varying importance arrive “online”.
– Each job is inherently decomposed into stages, and each stage has some degree of parallelism.
– The scheduler is not aware of any of this; it only learns that a job is done when the job completes!
Can we do anything “good”?

4 A Small Example
Scheduling on a multi-processor system:
– m processors.
– 1 job arrives at t=0; it is fully sequential (it makes no use of parallelism even when provided).
– Several jobs arrive subsequently, all of them completely parallelizable (imagine a single ‘for’ loop whose iterations can all run in parallel).
Bad solution: FCFS, allocating all m machines to the earliest job. The sequential job can use only one machine’s worth of processing, so m−1 machines idle while every later job waits.
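The gap can be checked numerically. Below is a minimal sketch (not from the talk; the job sizes, counts, and function names are made-up illustrations) comparing FCFS against a schedule that reserves one processor for the sequential job:

```python
def fcfs_flowtimes(m, seq_work, par_jobs):
    """FCFS gives all m processors to the earliest unfinished job.
    The sequential job runs at rate 1 regardless, so m-1 processors
    idle until it finishes; each parallel job then runs at rate m."""
    flows = [seq_work]                      # sequential job arrives at t=0
    t = seq_work                            # parallel jobs start after it
    for arrival, work in sorted(par_jobs):  # par_jobs: (arrival, work) pairs
        start = max(t, arrival)
        t = start + work / m
        flows.append(t - arrival)
    return flows

def split_flowtimes(m, seq_work, par_jobs):
    """Reserve 1 processor for the sequential job, m-1 for the rest."""
    flows = [seq_work]                      # rate 1 on its own processor
    t = 0.0
    for arrival, work in sorted(par_jobs):
        start = max(t, arrival)
        t = start + work / (m - 1)
        flows.append(t - arrival)
    return flows

m, seq_work = 10, 100.0
par_jobs = [(i, 9.0) for i in range(1, 11)]   # ten small parallel jobs
bad = sum(fcfs_flowtimes(m, seq_work, par_jobs))
good = sum(split_flowtimes(m, seq_work, par_jobs))
```

Here `good` comes to 110 while `bad` is over 1000: every parallel job is stuck behind the 100-unit sequential job under FCFS.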

5 Formalization
Scheduling model: online, preemptive, non-clairvoyant.
Job types: varying degrees of parallelizability.
Objective: minimize the average flowtime (or some function of the flowtimes).

6 Explaining away the terms
Online: we learn of a new job only at the time it arrives.
Non-clairvoyance: we know “nothing” about a job, such as its runtime or extent of parallelizability.
– The job only tells us “it’s done” once it has been completely scheduled.

7 Explaining away the terms
Varying parallelizability [ECBD97]:
– Each job is composed of different stages.
– Each stage r has a speedup function Γ_r(p): how much work the stage completes per unit time when given p machines.
(Figure: a curve Γ(p) that rises and then flattens; beyond the plateau it makes no sense to allocate more cores.)

8 Special Case: Fully Parallelizable Stage
(Figure: Γ(p) = p, a straight line; the stage speeds up linearly with the number of machines.)

9 Special Case: Sequential Stage
(Figure: Γ(p) = 1 for all p ≥ 1; extra machines give no speedup.)
In general, Γ is assumed to be
1) non-decreasing,
2) sublinear.
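A minimal sketch of these curves in code; the saturation point in `gamma_capped` is an arbitrary illustrative choice, and `is_valid_speedup` checks the two standing assumptions numerically (reading “sublinear” as Γ(p)/p non-increasing):

```python
def gamma_parallel(p):
    """Fully parallelizable stage: Gamma(p) = p."""
    return p

def gamma_sequential(p):
    """Sequential stage: Gamma(p) = 1 once it has a full machine."""
    return min(p, 1.0)

def gamma_capped(p, cap=4.0):
    """A sublinear stage that stops speeding up beyond `cap` machines
    (the plateau in the figure; the value 4 is arbitrary)."""
    return min(p, cap)

def is_valid_speedup(gamma, points):
    """Check that Gamma is non-decreasing and that Gamma(p)/p is
    non-increasing, on a sorted grid of sample allocations."""
    vals = [gamma(p) for p in points]
    nondecreasing = all(a <= b for a, b in zip(vals, vals[1:]))
    sublinear = all(vals[i] / points[i] >= vals[i + 1] / points[i + 1]
                    for i in range(len(points) - 1))
    return nondecreasing and sublinear
```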

10 Explaining away the terms
Objective function:
– Average flowtime: minimize Σ_j (C_j − a_j).
– L2 norm of flowtime: minimize Σ_j (C_j − a_j)².
– etc.
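In code, the two objectives are just different aggregations of the same per-job flowtimes (a minimal sketch):

```python
def flowtimes(arrivals, completions):
    """Flowtime of job j is C_j - a_j."""
    return [c - a for a, c in zip(arrivals, completions)]

def avg_flowtime(arrivals, completions):
    f = flowtimes(arrivals, completions)
    return sum(f) / len(f)

def squared_flowtime(arrivals, completions):
    """Sum of squared flowtimes, as on the slide; the L2 norm is its
    square root, and minimizing one minimizes the other."""
    return sum(x * x for x in flowtimes(arrivals, completions))
```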

11 Can we do anything at all?
Yes, with a little bit of resource augmentation:
– The online algorithm uses s·m machines to schedule an instance on which OPT can only use m machines.
There is an O(1/ε)-competitive algorithm with (2+ε)-augmentation for minimizing average flowtime [E99].

12 Outline
– Motivation
– Formalization of Problems
– A Concrete Example
– Generalizations

13 The Case of Unweighted Flowtimes
– The instance has n jobs that arrive online and m processors (the online algorithm may use s·m machines).
– Each job has several stages, each with its own speedup curve.
– Minimize the average flowtime of the jobs (or, by scaling, the total flowtime).

14 The Case of Unweighted Flowtimes
Algorithm (EQUI):
– At each time t, let N_A(t) be the set of unfinished jobs in our algorithm’s queue.
– Devote an s·m/|N_A(t)| share of the processors to each such job.
This is O(1)-competitive with O(1) augmentation. (In their paper, Edmonds and Pruhs get a (1+ε)-speed O(1/ε²)-competitive algorithm.)
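A discrete-time sketch of the EQUI rule under this model (the step size, the instances below, and the data layout are illustrative choices, not from the paper):

```python
def equi(jobs, m, s=1.0, dt=0.01, horizon=10_000.0):
    """jobs: list of (arrival, stages), where stages is a list of
    (work, gamma) pairs processed in order and gamma(p) is the stage's
    speedup curve. Returns approximate completion times.
    At every step, each alive job gets an equal s*m/|N_A(t)| share."""
    state = [{"arr": a, "stages": [[w, g] for w, g in st], "done": None}
             for a, st in jobs]
    t = 0.0
    while any(j["done"] is None for j in state) and t < horizon:
        alive = [j for j in state if j["arr"] <= t and j["done"] is None]
        if alive:
            p = s * m / len(alive)            # EQUI's equal split
            for j in alive:
                stage = j["stages"][0]
                stage[0] -= stage[1](p) * dt  # work done this step
                if stage[0] <= 1e-9:          # stage finished
                    j["stages"].pop(0)
                    if not j["stages"]:
                        j["done"] = t + dt
        t += dt
    return [j["done"] for j in state]

par = lambda p: p             # fully parallelizable stage
seq = lambda p: min(p, 1.0)   # sequential stage

# two fully parallel unit jobs on m=2: each gets 1 processor, done near t=1
done = equi([(0.0, [(1.0, par)]), (0.0, [(1.0, par)])], m=2)
```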

15 High Level Proof Idea [E99]
General Instance → Restricted Extremal Instance → Solve Extremal Case
(In a restricted extremal instance, each stage is either fully parallelizable or fully sequential.)
– The non-clairvoyant algorithm can’t distinguish the two instances.
– Also ensure OPT_R ≤ OPT_G.
– Show that EQUI (with augmentation) is O(1)-competitive against OPT_R.

16 Reduction to Extremal Case
Consider an infinitesimally small time interval [t, t+dt).
– ALG gives p processors to some job j; OPT gives p* processors to get the same work done (possibly before or after t): Γ(p) dt = Γ(p*) dt*.
– If p ≥ p*, replace this work with dt of “sequential work”.
– If p < p*, replace this work with p·dt of “parallel work”.
Either way: 1) ALG is oblivious to the change; 2) OPT can fit the new work in-place.
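The two replacement cases can be written down directly. A sketch (function names are mine) whose point is the obliviousness claim: ALG takes exactly dt to process the substituted work, just as it did the original:

```python
def extremal_replacement(p_alg, p_opt, dt):
    """Replace the work done in [t, t+dt) by extremal work:
    if ALG's allocation p >= OPT's p*, substitute dt units of fully
    sequential work; otherwise substitute p*dt units of fully
    parallel work. Returns (kind, amount_of_work)."""
    if p_alg >= p_opt:
        return ("sequential", dt)
    return ("parallel", p_alg * dt)

def alg_time(kind, work, p):
    """Time for ALG to finish `work` with p processors:
    rate p if fully parallel, rate 1 if fully sequential."""
    rate = p if kind == "parallel" else 1.0
    return work / rate
```

In both branches, `alg_time(kind, work, p_alg)` equals `dt`, which is why the non-clairvoyant algorithm cannot distinguish the restricted instance from the original one.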

17 High Level Proof Idea [E99]
General Instance → Restricted Extremal Instance → Solve Extremal Case

18 Amortized Competitiveness Analysis
– Each alive job contributes to the objective C_j − a_j at rate 1, so the total objective rises at rate |N_A(t)| at time t.
– We would be done if we could show, for all t, |N_A(t)| ≤ O(1)·|N_O(t)|.
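The identity behind this slide is that total flowtime equals the integral over time of the number of alive jobs, so a pointwise bound |N_A(t)| ≤ c·|N_O(t)| would integrate to c-competitiveness. A small numeric illustration (the queue-size traces are made up):

```python
def total_flowtime(queue_sizes, dt):
    """Total flowtime = integral over time of |N(t)|, approximated
    here as a sum over fixed-width time steps of length dt."""
    return sum(queue_sizes) * dt

# made-up traces where ALG never has more than 2x OPT's alive jobs
n_alg = [1, 2, 4, 4, 2, 1, 0]
n_opt = [1, 1, 2, 3, 2, 1, 0]
c = 2
assert all(a <= c * o for a, o in zip(n_alg, n_opt))

cost_alg = total_flowtime(n_alg, dt=1.0)
cost_opt = total_flowtime(n_opt, dt=1.0)
# the pointwise bound integrates: cost_alg <= c * cost_opt
```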

19 Amortized Competitiveness Analysis
Sadly, we can’t show that. There are situations where |N_A(t)| is 100 while |N_O(t)| is 10 (we’re way behind OPT), and also the reverse (OPT pays a lot more than us).
Way around: use some kind of global accounting.

20 Banking via a Potential Function
Resort to an amortized analysis. Define a potential function Φ(t) which is 0 at t=0 and at t=∞, and show the following:
– At any job arrival, ΔΦ ≤ α·ΔOPT (where ΔOPT is the increase in future OPT cost due to the arrival of the job).
– At all other times, dALG/dt + dΦ/dt ≤ β·dOPT/dt.
This gives an (α+β)-competitive online algorithm.
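The running condition appeared only as a figure in the original deck; the form used above, dALG/dt + dΦ/dt ≤ β·dOPT/dt, is the standard one consistent with the (α+β) conclusion. A sketch that verifies both conditions on a made-up discrete trace and checks that they telescope to the claimed bound:

```python
def check_framework(events, alpha, beta):
    """events is a made-up discrete trace of two kinds of entries:
      ("arrive", d_phi, d_opt): a job arrival; d_opt is the increase
          in OPT's future cost due to this job.
      ("run", d_alg, d_phi, d_opt): a time step; costs accrued.
    Verifies both conditions and the resulting bound
    ALG <= (alpha + beta) * OPT, using Phi(0) = Phi(infinity) = 0."""
    tot_alg = opt_run = opt_arr = tot_phi = 0.0
    for ev in events:
        if ev[0] == "arrive":
            _, d_phi, d_opt = ev
            assert d_phi <= alpha * d_opt + 1e-12     # arrival condition
            opt_arr += d_opt
        else:
            _, d_alg, d_phi, d_opt = ev
            assert d_alg + d_phi <= beta * d_opt + 1e-12  # running condition
            tot_alg += d_alg
            opt_run += d_opt
        tot_phi += d_phi
    assert abs(tot_phi) < 1e-9        # Phi starts and ends at 0
    assert opt_run == opt_arr         # both count OPT's total cost
    assert tot_alg <= (alpha + beta) * opt_run + 1e-9
    return tot_alg, opt_run

events = [
    ("arrive", 2.0, 1.0),       # jump: 2.0 <= alpha * 1.0
    ("run", 2.5, -1.0, 0.5),    # 2.5 - 1.0 <= beta * 0.5
    ("run", 1.0, -1.0, 0.5),    # 0.0 <= beta * 0.5
]
alg_cost, opt_cost = check_framework(events, alpha=2.0, beta=3.0)
```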

21 For our Problem
– rank(j) is the position of job j in sorted order of arrivals (the most recent arrival has the highest rank).
– y_a(j,t) − y_o(j,t) is the ‘lag’ in parallel work the algorithm has relative to the optimal solution.
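The actual formula for Φ on this slide was an image and is not in the transcript. The reconstruction below, Φ(t) = c·Σ_j rank(j)·max(0, y_a(j,t) − y_o(j,t)) with y_* read as remaining parallel work, is an assumption: it matches the ingredients named here (rank and lag) and the behavior claimed on the next slides, but the exact form and constant are not confirmed by the source.

```python
def potential(jobs, c=1.0):
    """jobs: list of (arrival, y_a, y_o), with y_* the remaining
    parallel work of the job under ALG / OPT respectively.
    rank(j) is the job's position in arrival order, most recent
    arrival highest. Plausible reconstruction (assumption):
    Phi(t) = c * sum_j rank(j) * max(0, y_a(j) - y_o(j))."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    rank = {job: r + 1 for r, job in enumerate(order)}
    return c * sum(rank[i] * max(0.0, ya - yo)
                   for i, (_, ya, yo) in enumerate(jobs))

# a job ALG lags on contributes; a just-arrived or ALG-ahead job adds 0
phi = potential([(0.0, 3.0, 1.0), (1.0, 2.0, 2.0)])
```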

22 Arrivals and Departures are OK
Recall the potential. When a new job arrives, y_a(j,t) = y_o(j,t), so its term adds 0; for all other jobs, rank(j) remains unchanged. Also, when our algorithm completes a job, some ranks may decrease, but that causes no increase in potential.

23 Running Condition
Recall the potential. At any time instant, Φ increases due to OPT working and decreases due to ALG working. In the worst case, OPT is working on the most recently arrived job (the one of highest rank), and hence Φ increases at a rate of at most |N_A(t)|.

24 What goes up must come down…
Φ drops as long as the algorithm works on jobs in their parallel phase on which OPT is ahead.
– Jobs in a sequential phase cause no decrease in Φ.
– If ALG is ahead of OPT on a parallel-phase job j, then max(0, y_a(j) − y_o(j)) = 0, so there is no drop either.
Suppose very few jobs (say, |N_A(t)|/100) are ‘bad’ in these ways. Then the algorithm’s work drops Φ for most jobs, and the drop is large enough to counter both ALG’s cost and the increase in Φ due to OPT working.

25 Putting it all together
So in the good case, the drop in Φ pays for the rise in the objective.
Handling bad jobs:
– If ALG is ahead of OPT on at least |N_A(t)|/200 jobs, then we can charge the LHS to 400·|N_O(t)|.
– If more than |N_A(t)|/200 jobs are in a sequential phase, OPT must pay 1 for each of these jobs at some point in the past/future (and observe that no point of OPT is double-charged).
Integrating over time, we get c(ALG) ≤ 400·c(OPT) + 400·c(OPT) = 800·c(OPT).

26 Outline
– Motivation
– Formalization of Problems
– A Concrete Example
– Generalizations

27 Minimizing the L2 Norm of Flowtimes [GIKMP10]
Round-Robin does not work:
– 1 job arrives at t=0 and has some parallel work to do; subsequently, unit-sized sequential jobs arrive every time step.
– The optimal solution just works on the first job.
– Round-Robin wastes a lot of cycles on the subsequent (sequential) jobs and incurs a much larger cost on job 1, because its flowtime is squared in the objective.

28 To Fix the Problem
– Need to consider “weighted” round-robin, where the age of a job is its weight.
– Generalize the earlier potential function to handle ages/weights; sequential parts can no longer be charged directly to optimal (in L1 minimization it didn’t matter at what ages they were executed, but here it does).
– This gives an O(1/ε³)-competitive algorithm with (2+ε)-augmentation.

29 Other Generalizations
– [CEP09] consider the problem of scheduling such jobs on machines whose speeds can be altered, with the objective of minimizing flowtime + energy; they give an O(α² log m)-competitive online algorithm.
– [BKN10] use a similar potential-function-based analysis to get (1+ε)-speed O(1)-competitive algorithms for broadcast scheduling.

30 Conclusion
– Looked at a model where jobs have varying degrees of parallelism and scheduling is non-clairvoyant.
– Outlined an O(1)-augmentation O(1)-competitive analysis.
– Described a couple of recent generalizations.
Open problems:
– Improve the augmentation requirement, or show a lower bound, for L_p norm minimization.
– Close the gap in the flowtime + energy setting.

31 Thank You! Questions?

32 References
[ECBD97] Jeff Edmonds, Donald D. Chinn, Tim Brecht, Xiaotie Deng: Non-clairvoyant Multiprocessor Scheduling of Jobs with Changing Execution Characteristics. STOC 1997.
[E99] Jeff Edmonds: Scheduling in the Dark. STOC 1999.
[EP09] Jeff Edmonds, Kirk Pruhs: Scalably scheduling processes with arbitrary speedup curves. SODA 2009.
[CEP09] Ho-Leung Chan, Jeff Edmonds, Kirk Pruhs: Speed scaling of processes with arbitrary speedup curves on a multiprocessor. SPAA 2009.
[GIKMP10] Anupam Gupta, Sungjin Im, Ravishankar Krishnaswamy, Benjamin Moseley, Kirk Pruhs: Scheduling processes with arbitrary speedup curves to minimize variance. Manuscript, 2010.

