
1 Analysis of cooperation in multi-organization Scheduling. Pierre-François Dutot (Grenoble University), Krzysztof Rzadca (Polish-Japanese school, Warsaw), Fanny Pascual (LIP6, Paris), Denis Trystram (Grenoble University and INRIA). NCST, CIRM, May 13, 2008

2 Goal The evolution of high-performance execution platforms leads to distributed entities (organizations) which have their own « local » rules. Our goal is to investigate the possibility of cooperation for a better use of a global computing system made from a collection of autonomous clusters. Work partially supported by the Coregrid Network of Excellence of the EC.

3 Main result We show in this work that it is always possible to produce an efficient collaborative solution (i.e. with a theoretical performance guarantee) that respects the organizations’ selfish objectives. We give a new algorithm with a theoretical worst-case analysis, plus experiments for a case study (each cluster has the same objective: makespan).

4 Target platforms Our view of computational grids is a collection of independent clusters belonging to distinct organizations.

5 Computational model (one cluster) Independent applications are submitted locally on a cluster. They are represented by a precedence task graph. An application is a parallel rigid job. Let us briefly recall the model (close to the one Andrei Tchernykh presented this morning). See Feitelson for more details and a classification.

6 Cluster (figure: a cluster and its local queue of submitted jobs J1, J2, J3, …)

7 Job (figure-only slides 7–12)

13 Rigid jobs: the number of processors is fixed. (Figure: the computational area of a job and the scheduling overhead.)

14 A job i is described by its number of required processors q_i and its runtime p_i.

15 Jobs are split into high jobs (those which require more than m/2 processors) and low jobs (the others).
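
For later use (the bound LB in the load-balancing algorithm of slide 30), the following standard quantities are convenient; the notation is mine and is consistent with, though not spelled out on, the slides:

    \mathrm{area}(i) = q_i\,p_i, \qquad
    W = \sum_i q_i\,p_i \quad (\text{total work of all submitted jobs}), \qquad
    p_{\max} = \max_i p_i .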

16 Scheduling rigid jobs: packing algorithms. Scheduling independent rigid jobs may be solved as a 2D packing problem (strip packing over a strip of width m). A list algorithm (off-line) is used.
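
As an illustration only (not code from the talk), here is a minimal Python sketch of such an off-line list algorithm for rigid jobs on a single cluster of m processors; every name below is mine. Each job in the list is started at the earliest time at which enough processors remain free for its whole runtime:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Job:
        q: int    # number of required processors (rigid)
        p: float  # runtime

    def _usage(placed, t):
        """Busy processors at time t; `placed` holds (start, job) pairs."""
        return sum(j.q for s, j in placed if s <= t < s + j.p)

    def earliest_start(placed, job, m):
        """Earliest time at which `job` fits on a cluster of m processors,
        given the already placed (start, job) pairs."""
        assert job.q <= m, "a rigid job cannot need more than m processors"
        for t in sorted({0.0} | {s + j.p for s, j in placed}):
            # usage can only change at t or at a start time inside [t, t + job.p)
            checks = [t] + [s for s, _ in placed if t < s < t + job.p]
            if all(_usage(placed, c) + job.q <= m for c in checks):
                return t
        raise RuntimeError("unreachable: the last finish time always fits")

    def list_schedule(jobs, m):
        """Off-line list scheduling: place jobs in list order, each at its
        earliest feasible start.  Returns (start, job) pairs and the makespan."""
        placed = []
        for job in jobs:
            placed.append((earliest_start(placed, job, m), job))
        makespan = max((s + j.p for s, j in placed), default=0.0)
        return placed, makespan

For example, list_schedule([Job(3, 2.0), Job(1, 1.0)], m=4) runs both jobs in parallel and reports a makespan of 2.0.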

17 Organization k: a cluster of m processors with its local queue of submitted jobs J1, J2, J3, … There are n organizations; for the sake of presentation, all clusters are identical (m processors each).


20 There are n organizations (here n = 3: red is 1, green is 2, blue is 3). Each organization k aims at minimizing « its » own makespan Cmax(O_k) = max_i C_{i,k}. We also want the global makespan to be minimized. Notation: Cmax(O_k) is the makespan of organization k's jobs; Cmax(M_k) is the makespan of cluster k.

21 The global makespan is the max over all local makespans. (Figure labels: Cmax(O_2); Cmax(O_3) = Cmax(M_1).)

22 Problem statement MOSP: minimization of the « global » makespan under the constraint that no local makespan is worsened. Consequence: taking the restricted instance with n = 1 (one organization) and m = 2 with sequential jobs, the problem is the classical two-machine scheduling problem, which is NP-hard. Thus, MOSP is NP-hard.
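
In symbols (my formalization of the statement above, not copied from the slides):

    \min \; \max_{k} C_{\max}(M_k)
    \quad \text{subject to} \quad
    C_{\max}(O_k) \;\le\; C_{\max}^{\mathrm{local}}(O_k) \ \text{ for every organization } k,

where C_max^local(O_k) denotes the makespan organization k achieves when it schedules its own jobs alone on its own cluster.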

23 Multi-organizations Motivation: a non-cooperative solution is one in which every organization computes its own local jobs (« my job first » policy). However, such a solution can be arbitrarily far from the global optimum (the ratio grows to infinity with the number of organizations n). See the next example with n = 3 and jobs of unit length. (Figure: no cooperation vs. with cooperation (optimal), organizations O1, O2, O3.)
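
One hypothetical family of instances (mine, not necessarily the one drawn on the slide) makes the gap explicit: let organization 1 own n·m unit jobs and the other n − 1 organizations own none. Without cooperation, organization 1 runs everything on its own m processors; with cooperation, the jobs spread over all n·m processors:

    C_{\max}^{\text{no coop}} = \frac{nm}{m} = n, \qquad
    C_{\max}^{\text{coop}} = \frac{nm}{nm} = 1, \qquad
    \text{ratio} = n \;\xrightarrow{\;n\to\infty\;}\; \infty .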

24 Preliminary results Single-organization scheduling (packing). Resource-constrained list algorithm (Garey and Graham, 1975). List scheduling has a (2 − 1/m) approximation ratio. HF: Highest First schedules (sort the jobs by decreasing number of required processors). HF has the same theoretical guarantee but is better from the practical point of view.
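
Continuing the earlier sketch (again illustrative only), a Highest First schedule is obtained by feeding the list algorithm the jobs sorted by decreasing number of required processors:

    def highest_first(jobs, m):
        """Highest First (HF): list scheduling with jobs sorted by decreasing q."""
        return list_schedule(sorted(jobs, key=lambda j: j.q, reverse=True), m)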

25 Analysis of HF (single cluster) Proposition. All HF schedules have the same structure, consisting of two consecutive zones: a high-utilization zone (I), in which more than 50% of the processors are busy, followed by a low-utilization zone (II). Proof (two steps): by contradiction, no high job appears after zone (II) starts.

26 Collaboration between clusters can substantially improve the results. Algorithms more sophisticated than simple load balancing are also possible: matching certain types of jobs may lead to bilaterally profitable solutions. (Figure: no cooperation vs. with cooperation, organizations O1, O2.)

27 If we cannot worsen any local makespan, the global optimum cannot be reached. (Figure: local schedules vs. the globally optimal schedule for organizations O1, O2, with jobs of lengths 1 and 2.)

28 The same instance, showing in addition the best solution that does not increase Cmax(O_1).

29 Consequence: the lower bound on the approximation ratio of any algorithm that never worsens a local makespan is greater than 3/2.

30 Multi-Organization Load-Balancing
1. Each cluster runs its local jobs with Highest First. Let LB = max(pmax, W/(nm)).
2. Unschedule all jobs that finish after 3·LB.
3. Divide them into two sets (Ljobs and Hjobs).
4. Sort each set according to the Highest First order.
5. Schedule the jobs of Hjobs backwards from 3·LB on all possible clusters.
6. Then fill the gaps with Ljobs in a greedy manner.
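
A rough Python sketch of the procedure above, reusing Job, earliest_start, list_schedule and highest_first from the earlier sketches. The cluster-selection rules in steps 5 and 6 and the interaction with the jobs kept from the local schedules are simplified here (the paper's feasibility proof is what guarantees the real procedure fits everything before 3·LB); treat this as an illustration, not the authors' implementation.

    def multi_org_lb(org_jobs, m):
        """Illustrative sketch of multi-organization load-balancing (slide 30).

        org_jobs -- one list of Jobs per organization (assumed non-empty overall);
        every cluster has m processors.
        Returns one list of (start, job) pairs per cluster.
        """
        n = len(org_jobs)
        every = [j for jobs in org_jobs for j in jobs]
        W = sum(j.q * j.p for j in every)               # total work
        LB = max(max(j.p for j in every), W / (n * m))  # lower bound of slide 30
        horizon = 3 * LB

        # 1. local Highest First schedules on each cluster
        schedules = [highest_first(jobs, m)[0] for jobs in org_jobs]

        # 2. unschedule every job that finishes after 3*LB
        kept = [[(s, j) for s, j in sched if s + j.p <= horizon] for sched in schedules]
        moved = [j for sched in schedules for s, j in sched if s + j.p > horizon]

        # 3.-4. split the unscheduled jobs and sort each set Highest First
        hjobs = sorted((j for j in moved if j.q > m / 2), key=lambda j: j.q, reverse=True)
        ljobs = sorted((j for j in moved if j.q <= m / 2), key=lambda j: j.q, reverse=True)

        # 5. stack the high jobs backwards from 3*LB, spreading them over the clusters
        back = [horizon] * n                             # backward frontier per cluster
        for j in hjobs:
            k = max(range(n), key=lambda i: back[i])     # cluster with most room at the back
            back[k] -= j.p
            kept[k].append((back[k], j))

        # 6. fill the remaining gaps with the low jobs, greedily
        for j in ljobs:
            k = min(range(n), key=lambda i: earliest_start(kept[i], j, m))
            kept[k].append((earliest_start(kept[k], j, m), j))

        return kept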

31–36 (Figure sequence: consider a cluster whose last local job finishes before 3·LB; the Hjobs are placed backwards from 3·LB, then the Ljobs fill the remaining gaps.)

37 Notice that the centralized scheduling mechanism is not work stealing; moreover, the idea is to change the local schedules as little as possible.

38 Feasibility (insight). (Figure: the zones (I) and (II) of the local schedules against the 3·LB bound.)

39 Sketch of analysis. Proof by contradiction: let us assume that the schedule is not feasible, and call x the first job that does not fit in any cluster. Case 1: x is a small (low) job; a global surface argument yields the contradiction. Case 2: x is a high job; much more complicated, see the paper for the technical details.
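
A rough reconstruction of the surface (area) argument for Case 1, based on my reading of the slide rather than quoted from the paper: suppose a low job x (q_x ≤ m/2, p_x ≤ pmax ≤ LB) cannot be started before 3·LB on any cluster. Then at every instant of [0, 3·LB − p_x), each of the n clusters has fewer than q_x free processors, i.e. strictly more than m/2 busy ones. Summing the work done over this interval on all clusters:

    W \;>\; n \cdot \frac{m}{2}\,(3LB - p_x)
      \;\ge\; n \cdot \frac{m}{2}\cdot 2\,LB
      \;=\; n\,m\,LB \;\ge\; W,

using p_x ≤ LB and the definition LB ≥ W/(nm); this is a contradiction, so x must fit.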

40–42 The algorithm is a 3-approximation (by construction), and this bound is tight. (Figure sequence for a tight instance: the local HF schedules, the optimal global schedule, and the schedule produced by multi-organization load-balancing.)

43 Improvement: we add an extra load-balancing procedure followed by a compaction step. (Figure: local schedules → multi-org LB → load balance → compact, organizations O1 … O5.)

44 Some experiments

45–46 Conclusion. We proved in this work that cooperation can improve the global performance (for the makespan). We designed a 3-approximation algorithm. It can be extended to organizations of any size (with an extra hypothesis). Based on this, many interesting open problems remain, including the study of the problem for different local policies or objectives (Daniel Cordeiro).

47 Thanks for your attention. Do you have any questions?

48 Using Game Theory? We propose here a standard approach using combinatorial optimization. Cooperative game theory may also be useful, but it assumes that players (organizations) can communicate and form coalitions, and that the members of a coalition split the sum of their payoffs at the end of the game. We assume here a centralized mechanism and no communication between organizations.

