
Policy-based CPU-scheduling in VOs Catalin Dumitrescu, Mike Wilde, Ian Foster.


1 Policy-based CPU-scheduling in VOs Catalin Dumitrescu, Mike Wilde, Ian Foster

2 Some Background (Grid-style Monitoring)

3 Some Background (Policy-based Sched)

4 Introduction
● For example, in the sciences, there may be hundreds of institutions and thousands of individual investigators that collectively control tens or hundreds of thousands of computers and associated storage systems
● Each individual institution may participate in, and contribute resources to, multiple collaborative projects that can vary widely in scale, lifetime, and formality
(Diagram: Sites 1-3, each with an S-Queue and a V-Queue, shared by VO-A and VO-B)

5 Initial Model
● Assumption: Participants may wish to delegate to one or more VOs the right to use certain resources subject to local policy (and service level agreements), and each VO then wishes to use those resources subject to VO policy
● How are such local and VO policies to be expressed, discovered, interpreted, and enforced?
(Diagram: Sites 1-3 with S-PEPs and S-Queues; VO-A and VO-B with V-PEPs and V-Queues; a Verifier)

6 Talk Overview
● Part I: Model Detailing
  ● Architecture / Model description
  ● Policy language definition (syntax & semantics)
● Part II: Specific Work
  ● Policy case scenarios (research focus)
  ● Algorithms
  ● Simulator & Simulated model
  ● Dimension identification
● Part III: Simulations & Implementation Issues
  ● Simulation results
  ● Related work
  ● "Rolling out in Atlas/Grid3"
  ● Future work & Conclusions

7

8 Simplified Model
● Composed of:
  ● R = compute resource (several individual compute elements)
  ● M = associated manager (designed to control resource R's states)
  ● {P} = policy set (a finite list of intents expressed by administrators)
● Some Rules:
  ● M is authorized and responsible for enforcing {P} with respect to R
  ● {P} is composed only of rules that have a direct correlation with R
● And the Mapping to a Concrete Case Scenario:
  ● a cluster with 1 head node and several worker nodes
  ● compute resources are managed locally by one or several pooling and/or queuing software managers (e.g., Condor, PBS, LSF)
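The (R, M, {P}) triple above can be sketched as plain data structures. This is an illustrative Python sketch, not the authors' implementation; the class and field names are assumptions chosen to mirror the slide's notation.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """One intent in {P}: a share of R granted to an entity over an interval."""
    entity: str      # e.g. a VO name such as "V1" (illustrative)
    interval: str    # e.g. "year" or "5minutes"
    fraction: int    # percentage of R over that interval

@dataclass
class Resource:
    """R: a compute resource made of several individual compute elements."""
    name: str
    cpus: int

@dataclass
class Manager:
    """M: authorized and responsible for enforcing {P} with respect to R."""
    resource: Resource
    policy: list[Rule] = field(default_factory=list)

    def add_rule(self, rule: Rule) -> None:
        # {P} may contain only rules that directly concern R
        self.policy.append(rule)

m = Manager(Resource("cluster-1", 15))
m.add_rule(Rule("V1", "year", 20))
print(len(m.policy))  # 1
```

In the cluster mapping from the slide, `Manager` would correspond to the head node's local RM (Condor, PBS, or LSF) acting on the worker nodes.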

9 Extended Model
● Composed of:
  ● G = several sites, where each of them is of type S
  ● M = associated manager(s) (designed to control S's states by delegation)
  ● {P} = policy set, a finite list of intents expressed by administrators
● Some Rules:
  ● M is authorized and responsible for distributing {P} with respect to R
  ● {P} is composed of rules that have a direct correlation with G
  ● the distributed {PR} is composed only of rules that have a direct correlation with R
● And the Mapping to a Concrete Case Scenario:
  ● a set of clusters of type S
  ● the monitoring and policy mechanisms are VO-Centric Ganglia (as prototype)

10 Refined Prototype Model

11 Policy Language Definition
● Two types of policies:
  ● absolute: its arguments are mapped to VOs and site resources
  ● relative: its arguments are mapped to groups and VOs' resources
● Two types of constraints (open problem regarding enforcement):
  ● long-term hard limitations and short-term soft limitations
  ● identified by position in the presented syntax
● Proposed interpretations (to avoid ambiguities):
  ● long-term hard limitations: averaged over the period (at most); sites provide (if requested) at most the specified fraction over the specified time interval
  ● short-term soft limitations: upper limits over the period (a maximum); sites may provide up to the specified fraction over the specified time interval (but without any guarantee in place)

12 Simple/Proposed Syntax
● Two identifiable forms:
  ● absolute policy: resource (RESOURCE, ENTITY, LIST_POLICY)
    Examples:
      resource (R, V1, [(year, 20), (5minutes, 60)])
      resource (R, V2, [(year, 80), (5minutes, 90)])
  ● relative policy: subset (RESOURCE, ENTITY, GROUP, LIST_POLICY)
    Examples:
      subset (R, V1, G1, [(year, 30), (5minutes, 100)])
      subset (R, V1, G2, [(year, 70), (5minutes, 100)])
● Note: definitions and examples are independent (i.e., R has different interpretations in the two examples)
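The two policy forms above are regular enough to parse mechanically. The following is a hedged sketch of such a parser; the function name, regex, and returned dictionary layout are illustrative assumptions, not part of the proposed language.

```python
import re

# Matches one (interval, fraction) pair from LIST_POLICY, e.g. "(year, 20)"
PAIR = re.compile(r"\((\w+),\s*(\d+)\)")

def parse_policy(text: str) -> dict:
    """Parse either form of the proposed syntax:
       resource (RESOURCE, ENTITY, LIST_POLICY)
       subset (RESOURCE, ENTITY, GROUP, LIST_POLICY)
    """
    head, body = text.split("(", 1)
    kind = head.strip()                       # "resource" or "subset"
    args_part, list_part = body.split("[", 1)
    args = [a.strip() for a in args_part.split(",") if a.strip()]
    limits = [(unit, int(pct)) for unit, pct in PAIR.findall(list_part)]
    return {"kind": kind, "args": args, "limits": limits}

p = parse_policy("resource (R, V1, [(year, 20), (5minutes, 60)])")
print(p)
# {'kind': 'resource', 'args': ['R', 'V1'],
#  'limits': [('year', 20), ('5minutes', 60)]}
```

By the slide's interpretation, the first pair here would be the long-term hard limit (20% averaged over a year) and the second the short-term soft limit (up to 60% in any 5-minute window).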

13 Motivation for the Language ● Users can burn their allocation faster or slower, controlled by the two limits ● Possible to map to site RMs with node allocation policies (e.g., Condor, OpenPBS & LSF)

14 Policy Case Scenarios ● Case 1
(Chart: VO1 and VO2 CPU shares against policy limits of 20%, 60%, 80%, 90%, and 99%)

15 Policy Case Scenarios ● Case 2
(Chart: VO1 and VO2 CPU shares against policy limits of 20%, 60%, 80%, 90%, and 99%)

16 Implemented Algorithms (Site)

for each Gi with EPi, BPi, BEi do
  # Case 1: fill BPi + BEi
  if (Sum(BAj) == 0) & (BAi < BPi) & (Qi has jobs) then
    schedule a job from some Qi to the least loaded site
  # Case 2: BAi < BPi (resources available)
  else if (Sum(BAk) < TOTAL) & (BAi < BPi) & (Qi has jobs) then
    schedule a job from some Qi to the least loaded site
  # Case 3: fill EPi (resource contention)
  else if (Sum(BAk) == TOTAL) & (BAi < EPi) & (Qi exists) then
    if (j exists such that BAj >= EPj) then
      stop scheduling jobs for VOj
    # Need to fill with extra jobs?
    if (BAi < EPi + BEi) then
      schedule a job from some Qi to the least loaded site
  # ??
  if (EAi < EPi) & (Qi has jobs) then
    schedule additional backfill jobs

17 Implemented Algorithms (VO)

for each VOi with EPi do
  # Case 1: fill BPi
  if (Sum(BAj) == 0) & (BAi < BPi) & (Qi has jobs) then
    release a job from some Qi
  # Case 2: BAi < BPi (resources available)
  else if (Sum(BAk) < TOTAL) & (BAi < BPi) & (Qi has jobs) then
    release a job from some Qi
  # Case 3: fill EPi (resource contention)
  else if (Sum(BAk) == TOTAL) & (BAi < EPi) & (Qi has jobs) then
    if (j exists such that BAj >= EPj) then
      stop scheduling jobs for VOj
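The site-level and VO-level loops above share the same three-case analysis. A runnable sketch of that decision logic follows; the reading of the slide's symbols (BP = base policy share, EP = extended policy share, BE = burst/extra share, BA = busy allocation, Q = per-group job queues, TOTAL = total CPUs) and the returned action names are assumptions, not the authors' code.

```python
def site_decision(i, BP, EP, BE, BA, queues, TOTAL):
    """Decide the scheduler's next action for group/VO i.
    BP, EP, BE, BA are per-group lists; queues[i] holds group i's jobs."""
    has_jobs = bool(queues[i])
    # Case 1: nothing allocated anywhere; fill group i up to BPi
    if sum(BA) == 0 and BA[i] < BP[i] and has_jobs:
        return "schedule"            # send a job to the least loaded site
    # Case 2: free resources remain and group i is under its base share
    if sum(BA) < TOTAL and BA[i] < BP[i] and has_jobs:
        return "schedule"
    # Case 3: contention; everything is busy, allow growth up to EPi
    if sum(BA) == TOTAL and BA[i] < EP[i] and has_jobs:
        if any(BA[j] >= EP[j] for j in range(len(BA)) if j != i):
            return "preempt-other"   # stop scheduling for the over-quota VO
        if BA[i] < EP[i] + BE[i]:
            return "schedule-extra"  # fill with extra (burst) jobs
    return "wait"

print(site_decision(0, BP=[4, 4], EP=[6, 6], BE=[2, 2],
                    BA=[0, 0], queues=[["j1"], []], TOTAL=8))  # schedule
```

The VO-level version differs only in its action: it releases a job from a group queue toward the grid rather than placing it on a concrete site.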

18 Simulations
● Structures:
  ● 2 VOs * 2 groups * 1 planner with 3 clusters
  ● 6 VOs * 3 groups * 2 planners with 10 clusters
● Model:
  ● S-PEP:
    ● continuous monitoring
    ● controls jobs by sending high-level commands to RMs
  ● V-PEP:
    ● gatekeeper type (access control point)

19 Talk Overview
● Part I: Model Detailing
  ● Architecture / Model description
  ● Policy language definition (syntax & semantics)
● Part II: Specific Work
  ● Policy case scenarios (research focus)
  ● Algorithms
  ● Simulator & Simulated model
  ● Dimension identification
● Part III: Simulations & Implementation Issues
  ● Simulation results
  ● Related work
  ● "Rolling out in Atlas/Grid3"
  ● Future work & Conclusions

20 Initial Simulations (settings)
2 VOs * 2 groups * 1 planner with 3 clusters (1 * (1+2+4) + 1 * (1+2+4+8) + 1 * (1+2+4+8) CPUs)
(Charts: job statistics and CPU usage & policy for VO0 and VO1)

21 More Simulations 6 VOs * 3 groups * 2 planners with 20 clusters (1 * (1+2+4) +... CPUs)

22 Overall CPU Usage

23 Per Group CPU Usage

24 Simulation Variations / Dimensions ● Algorithms ● Technical solution for mappings ● Site level trust ● Centralized vs. decentralized ● Total information &/vs. inaccurate (stalled) information

25 Technical Approach to Grid03
(Diagram: user/group queues feed a Job Selector and Site Selector; job submission flows to sites, each with its own Policy DB, Policy Translator, RM, and Resources)

26 Future Work ● Mainly, Analysis on Several Dimensions

27 Glimpse into Policy Setup
● Negotiation & Advance Resource Reservation
(Diagram: a User SLA Initiator with policy rules negotiates SLA documents (SN-SLA, VN-SLA, SM-SLA, VM-SLA, VNA-SLA, VMA-SLA) among the VO (V-RM, V`-PEP, V-AP) and Sites A and B (RM, S`-PEP, S-AP), followed by job submission to the resources)

28 Conclusions

