
1 Power Aware Virtual Machine Placement Yefu Wang

2 ECE692 2009 Introduction
Data centers are underutilized
– Provisioned for extreme workloads
– Commonly under 20% utilized
Shutting down unused servers
– Saves more power than DVFS
– Application correctness must be guaranteed
Design choices
– Workload redirection
– VM live migration
– Workload redirection + VM live migration [Meisner'09]

3 Design Choice (1): Workload Redirection
[diagram: web requests redirected across servers]
Example: [Heo'07]

4 Design Choice (2): VM Live Migration
[diagram: VMs migrated between servers]
Examples: [Wang'09, Verma'08]

5 Design Choice (3): Hybrid [Kusic'08]
– 6 servers and 12 VMs
– 2 applications: Gold and Silver
– HTTP requests are dispatched by a dispatcher

6 pMapper: Power and Migration Cost Aware Application Placement in Virtualized Systems
Akshat Verma, Puneet Ahuja, Anindya Neogi
IBM India Research Lab, IIT Delhi

7 Application Placement
Static vs. dynamic placement
– Utilization of each server shows dynamic patterns
– Dynamic placement saves more energy
– A later paper from the same authors advocates static placement: system administrators have more control
[figure: CPU utilization (0%–100%) mapped to grayscale (0–1)]

8 Application Placement Architecture
[diagram: placement architecture, annotated with VM resizing + idling and DVFS]

9 Optimization Formulations
– Cost–performance tradeoff
– Cost minimization with performance constraint
– Performance benefit maximization with power constraint
Terms involved: performance benefit, power, migration cost

10 System Modeling
Migration cost
– Independent of the background workload
– Can be estimated a priori
Performance modeling
– This paper does not design a performance controller
– pMapper can be integrated with other performance controllers or models
Power model
– It is infeasible to have an accurate power model in practice: server power consumption depends on the hosted applications, and the number of potential server–VM mappings can be huge
– This paper relies only on the relative power efficiency of servers

11 Optimization Formulations
– Cost–performance tradeoff
– Cost minimization with performance constraint
– Performance benefit maximization with power constraint
Terms involved: performance benefit, power, migration cost

12 mPP Algorithm
[diagram: a set of VMs to be placed on Servers 1–5]

14 mPP Algorithm
Sort VMs by size

15 mPP Algorithm
Sort servers by slope (power efficiency)

16 mPP Algorithm
Allocate the VMs to servers using First Fit

17 mPP Algorithm
– The mPP algorithm is oblivious to the previous configuration
– It may entail large-scale migrations
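The three steps above (sort VMs by size, sort servers by power efficiency, first-fit allocation) can be sketched as follows. The dictionary-based VM/server model, the `power_slope` field, and all names are illustrative assumptions, not the paper's implementation.

```python
def mpp_place(vms, servers):
    """Sketch of mPP: sort VMs by CPU demand (descending), sort servers
    by power slope (watts per unit utilization; lower = more efficient),
    then first-fit each VM onto the first server with spare capacity."""
    vms = sorted(vms, key=lambda v: v["cpu"], reverse=True)
    servers = sorted(servers, key=lambda s: s["power_slope"])
    placement = {}                                # VM name -> server name
    used = {s["name"]: 0.0 for s in servers}
    for vm in vms:
        for s in servers:
            if used[s["name"]] + vm["cpu"] <= s["capacity"]:
                placement[vm["name"]] = s["name"]
                used[s["name"]] += vm["cpu"]
                break
        else:
            raise RuntimeError(f"no server can host {vm['name']}")
    return placement

vms = [{"name": "vm1", "cpu": 0.6}, {"name": "vm2", "cpu": 0.3},
       {"name": "vm3", "cpu": 0.5}]
servers = [{"name": "s1", "capacity": 1.0, "power_slope": 0.5},
           {"name": "s2", "capacity": 1.0, "power_slope": 0.9}]
print(mpp_place(vms, servers))  # vm1 and vm2 on s1, vm3 on s2
```

Because the most power-efficient servers fill up first, the least efficient ones stay empty and can be shut down.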

18 mPPH Algorithm: mPP with History
[diagram: the placement produced by mPP]

19 mPPH Algorithm: mPP with History
[diagram: each server is compared against a target utilization and classified as a receiver or a donor]

20 mPPH Algorithm: mPP with History
[diagram: a donor server above its target utilization]

21 mPPH Algorithm: mPP with History
Pick the smallest VMs of the donor and add them to a migration list

22 mPPH Algorithm: mPP with History
[diagram: VMs from the migration list moved from donors to receivers]

23 mPPH Algorithm: mPP with History
[diagram: receivers filled with migration-list VMs up to their target utilization]

25 mPPH Algorithm: mPP with History
– The mPPH algorithm tries to minimize migration cost by migrating as few VMs as possible
– The pMaP algorithm additionally weighs the benefit against the migration cost before each migration
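The donor/receiver idea on these slides can be sketched as below: keep the previous placement, evict only the smallest VMs needed to bring each donor to its target utilization, and first-fit the evicted VMs onto receivers. This is an assumed simplification; the data model and names are illustrative.

```python
def mpph_rebalance(placement, loads, targets):
    """Sketch of the mPPH donor/receiver pass: servers above their
    target utilization are donors, the rest receivers; donors evict
    their smallest VMs into a migration list, which is then first-fit
    onto the receivers with the most headroom."""
    placement = {s: list(vs) for s, vs in placement.items()}
    used = {s: sum(loads[v] for v in vs) for s, vs in placement.items()}
    migration = []
    # Donor phase: evict smallest VMs until at or below target.
    for s, vs in placement.items():
        while used[s] > targets[s] and vs:
            vm = min(vs, key=lambda v: loads[v])
            vs.remove(vm)
            used[s] -= loads[vm]
            migration.append(vm)
    # Receiver phase: place evicted VMs where headroom is largest.
    for vm in migration:
        for s in sorted(placement, key=lambda s: targets[s] - used[s],
                        reverse=True):
            if used[s] + loads[vm] <= targets[s]:
                placement[s].append(vm)
                used[s] += loads[vm]
                break
    return placement

placement = {"s1": ["a", "b", "c"], "s2": ["d"]}
loads = {"a": 0.5, "b": 0.2, "c": 0.1, "d": 0.3}
targets = {"s1": 0.7, "s2": 0.7}
print(mpph_rebalance(placement, loads, targets))  # only "c" migrates
```

Unlike mPP, which replans from scratch, this pass touches only the VMs that must move, which is exactly the migration-minimizing behavior the slide describes.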

26 System Implementation
Testbed
– Virtualization platform: VMware ESX 3.0
– Power monitor: IBM Active Energy Manager
– Performance manager: EWLM
– DVFS is not implemented
Simulator
– Replaces the performance manager with trace data
– Simulates 4 blades and 42 VMs
Baselines
– Load balanced
– Static

27 Power Consumption
mPP and mPPH save 25% of power

28 Algorithm Cost
– mPP fails at high utilization
– pMaP has the lowest overall cost most of the time

29 Conclusion
Application placement controller pMapper
– Minimizes power and migration cost
– Meets performance guarantees
Also available in the paper:
– More simulation results
– Proofs of the properties of the algorithms for a given platform

30 Performance-Controlled Power Optimization for Virtualized Clusters with Multi-tier Applications
Yefu Wang and Xiaorui Wang
University of Tennessee, Knoxville

31 Introduction
Power saving techniques
– DVFS has a small overhead
– Server consolidation saves more power
Performance guarantees
– Multi-tier applications may span multiple VMs
– A controller must respond to workload variations quickly
Integrating performance control and server consolidation is important

32 System Architecture
[diagram: each VM runs an application-level response time monitor and controller; the controllers' CPU resource demands feed a power optimizer, which issues VM migration and server sleep/active commands]

33 Response Time Controller
[diagram: HTTP requests flow through an application hosted on VM1 and VM2; a response time monitor feeds an MPC controller, which uses a response time model to compute CPU requirements]

34 CPU Resource Arbitrator
CPU resource arbitrator
– Runs at the server level
– Collects the CPU requirements of all VMs
– Decides the CPU allocations of all VMs and the DVFS level of the server
The CPU resource a VM receives depends on:
– CPU allocation (example: give 20% of the CPU to VM1)
– DVFS level (example: set the CPU frequency of the server)

35 CPU Resource Arbitrator
Two observations
– Performance depends on the product of CPU allocation and CPU frequency
– Keeping that product constant, a lower frequency leads to less power consumption
CPU resource arbitrator
– Uses the lowest possible CPU frequency that still meets the CPU requirements of the hosted VMs
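Under the assumed model that performance depends on the product of allocation and frequency, the arbitrator's frequency choice can be sketched as follows. The function name, data shapes, and frequency values are illustrative, not the paper's code.

```python
def pick_frequency(demands, freq_levels, f_max):
    """Sketch of the arbitrator's frequency choice: each VM's demand is
    a CPU fraction measured at the maximum frequency f_max. At frequency
    f the server delivers f / f_max of that work per allocated share, so
    pick the lowest frequency whose relative speed covers the total
    demand, then scale each VM's allocation up to compensate."""
    total = sum(demands.values())
    for f in sorted(freq_levels):
        if f / f_max >= total:
            allocs = {vm: d * f_max / f for vm, d in demands.items()}
            return f, allocs
    raise RuntimeError("demand exceeds server capacity even at f_max")

# Two VMs needing 20% and 30% of the CPU at 1200 MHz can run at 600 MHz
# if their allocations are scaled up to 40% and 60%.
f, allocs = pick_frequency({"vm1": 0.2, "vm2": 0.3},
                           [600, 800, 1000, 1200], 1200)
print(f, allocs)
```

Because the allocation-frequency product is held constant for every VM, performance is preserved while the lower frequency reduces power.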

36 VM Consolidation for Power Optimization
Problem formulation
– Minimize the total power consumption of all servers
– Meet all CPU resource requirements
[figure: power model]

37 Optimization Algorithm
1. Minimum slack problem for a single server
2. Power-aware consolidation for a list of servers
3. Incremental Power Aware Consolidation (IPAC) algorithm
4. Cost-aware VM migration

38 Minimum slack problem for a single server
[search step: slack = 1, minimum slack = 1]

39 Minimum slack problem for a single server
[search step: slack = 0.8, minimum slack = 0.8]

40 Minimum slack problem for a single server
[search step: slack = 0.2, minimum slack = 0.2]

41 Minimum slack problem for a single server
[search step: slack = 0.2]

42 Minimum slack problem for a single server
[search step: slack = 0.5, minimum slack = 0.2]

43 Minimum slack problem for a single server
[search step: slack = 0.2, minimum slack = 0.2]

44 Minimum slack problem for a single server
[search step: slack = 0]
– The algorithm stops if the minimum slack falls below a threshold
– Sounds like exhaustive search? The worst-case complexity grows with the maximum number of VMs a server can host, but the search is fast in practice [Fleszar'02]
– Gives better solutions than FFD
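The search these slides animate can be sketched as a depth-first search over subsets of VM sizes that remembers the best slack seen so far; the cited algorithm adds pruning not shown here, and all names are assumptions.

```python
def min_slack(capacity, sizes):
    """Sketch of the minimum-slack search: depth-first search over
    subsets of the VM sizes, remembering the smallest slack
    (unused capacity) seen and stopping early on a perfect fit."""
    sizes = sorted(sizes, reverse=True)
    best = {"slack": capacity, "subset": []}

    def dfs(i, load, chosen):
        slack = capacity - load
        if slack < best["slack"]:
            best["slack"], best["subset"] = slack, list(chosen)
        if best["slack"] == 0 or i == len(sizes):
            return                               # perfect fit or no VMs left
        if load + sizes[i] <= capacity:          # branch: take sizes[i]
            chosen.append(sizes[i])
            dfs(i + 1, load + sizes[i], chosen)
            chosen.pop()
        dfs(i + 1, load, chosen)                 # branch: skip sizes[i]

    dfs(0, 0.0, [])
    return best["slack"], best["subset"]

print(min_slack(1.0, [0.8, 0.5, 0.3, 0.2]))  # perfect fit: 0.8 + 0.2
```

FFD would commit to 0.8 then 0.5-fails/0.3-fails/0.2 as well here, but in general the exhaustive subset search can find tighter packings than first-fit decreasing, matching the slide's claim.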

45 Consolidation Algorithm
Power-aware consolidation for a list of servers
– Begin from the most power-efficient server
– Use the minimum slack algorithm to fill the server with VMs
– Repeat with the next server until all VMs are hosted
Incremental Power Aware Consolidation (IPAC) algorithm
– Each time, only consider these VMs for consolidation: selected VMs on overloaded servers, and the VMs on the least power-efficient server
– Repeat until the number of servers does not decrease
Cost-aware VM migration
– Weigh the benefit against the migration cost before each migration
– Benefit: power reduction estimated by the power model, etc.
– Cost: administrator-defined based on their policies
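The outer consolidation loop can be sketched as below; plain first-fit stands in for the minimum-slack fill described above, and the server/VM model and names are illustrative assumptions.

```python
def consolidate(vms, servers):
    """Sketch of power-aware consolidation: visit servers from most to
    least power-efficient and pack each with the largest remaining VMs
    until every VM is hosted; servers never visited can be slept."""
    remaining = sorted(vms, key=lambda v: v["cpu"], reverse=True)
    placement = {}
    for srv in sorted(servers, key=lambda s: s["efficiency"], reverse=True):
        load, hosted = 0.0, []
        for vm in list(remaining):
            if load + vm["cpu"] <= srv["capacity"]:
                hosted.append(vm["name"])
                load += vm["cpu"]
                remaining.remove(vm)
        if hosted:
            placement[srv["name"]] = hosted
        if not remaining:
            break
    if remaining:
        raise RuntimeError("not enough server capacity for all VMs")
    return placement

demo_vms = [{"name": "a", "cpu": 0.5}, {"name": "b", "cpu": 0.4},
            {"name": "c", "cpu": 0.3}]
demo_servers = [{"name": "s1", "capacity": 1.0, "efficiency": 1.0},
                {"name": "s2", "capacity": 1.0, "efficiency": 0.5}]
print(consolidate(demo_vms, demo_servers))
```

IPAC then reruns this loop incrementally, considering only the VMs on overloaded servers and on the least power-efficient server, which keeps each period's work small.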

46 System Implementation
Testbed
– 4 servers, 16 VMs
– 8 applications (RUBBoS)
– Xen 3.2 with DVFS
Simulator
– Uses CPU utilization traces of 5,415 servers to simulate 5,415 VMs
– 400 physical servers with random power models
– 3 different types of CPUs

47 Response Time Control
[figure: our solution vs. the pMapper baseline; annotations mark the baseline's violation of performance requirements and the lower power consumption resulting from DVFS]

48 Server Consolidation
– 69.6% of power is saved
– Response time is still guaranteed after the consolidation

49 Simulation Results
IPAC saves more power than pMapper
– The algorithm gives better consolidation solutions
– Even more power is saved by DVFS
IPAC runs even faster than pMapper
– Only a small number of VMs are considered in each period

50 Conclusion
Performance-controlled power optimization solution for virtualized server clusters
– Application-level performance is guaranteed
– Power savings are provided by DVFS and server consolidation
Compared with pMapper, the solution:
– Provides a performance guarantee
– Consumes less power
– Has less computational overhead

51 Critiques of pMapper
– Too many components are only discussed, not implemented
– Lacks hardware experiments
– Provides only small-scale simulations

52 Critiques of IPAC
– Real-world constraints (e.g., network constraints) are not reflected in the experiments
– The centralized solution incurs a heavy communication overhead at the optimizer
– The settling time of the response time controller is too long

53 Comparisons of the Two Papers

                        pMapper   IPAC
Performance guarantee   No        Yes
DVFS                    No        Yes
Hardware experiments    No        Yes
Base algorithm          FFD       Minimum Slack

