
Slide 1: Server Consolidation in Virtualized Data Centers
Prashant Shenoy, Department of Computer Science, University of Massachusetts Amherst

Slide 2: Networked Applications
- Proliferation of web-enabled and networked applications
- Increased use in the consumer and business worlds: online brokerage/banking, online games, online stores
- Growing significance in personal and business affairs
- Focus: networked server applications

Slide 3: Dynamic Application Workloads
- Networked apps see dynamic workloads
- Multi-time-scale variations: time-of-day and hour-of-day effects, transient spikes, flash crowds, incremental growth
- User threshold for response time: 8-10 seconds
- Key issue: provide good response time under varying workloads
[Figures: request rate vs. time (hours), peaking near 140K req/min; arrivals per minute vs. time (days), showing a flash-crowd spike near 1200]

Slide 4: Data Centers
- Networked apps run in data centers: large clusters of servers plus networked storage devices
- Resources are allocated to meet application SLAs
- Energy costs are a large part of the operating budget
- Modern data centers are increasingly virtualized

Slide 5: Virtualized Data Centers
- Each application runs inside a virtual server
- One or more virtual servers are mapped onto each physical server
- Benefits: application isolation, dynamic resource allocation, VM migration, server consolidation

Slide 6: Data Center Resource Allocation
- Growing complexity
- Static allocation: not suitable for dynamic workloads
- Resource over-provisioning: wastes resources, and estimating peak workloads is hard (e.g., the 1998 World Cup soccer site)
- Manual reallocation: slow allocation time
- Challenge: how to handle dynamic workloads while efficiently utilizing data center resources?

Slide 7: Server Consolidation
- Power is the largest cost for modern data centers
  - Must get as much use from each server as possible
  - Reduces management costs
- Virtualization promises great opportunities for consolidation
  - Easy to run multiple virtual servers on each host
  - The number of processor cores per server is increasing
- Memory becomes the bottleneck for packing VMs
  - Each VM gets a hard allocation of memory, which is not as flexible as CPU time
- How can we fit more VMs per server?

Slide 8: Our Approach: Exploiting Sharing
- Content-based page sharing: if two VMs have identical pages, store only one copy
  - Reduces the total memory required
  - Supported by both Xen and VMware
- Applications with high sharing potential: anything on Windows, thin-client servers, replicated apps
- Sharing in web applications: operating system files, application libraries

  Scenario    Total RAM    % Shared
  10 WinNT    2048         43
  9 Linux     1846         29
  5 Linux     1658         10
  (Source: "Memory Resource Management in VMware ESX Server", Waldspurger, OSDI 2002)

- Idea: place similar VMs together to decrease the total memory footprint

Slide 9: Consolidation: Challenges
- Which VMs are similar?
- Where should each VM be placed?
- How to respond to memory hotspots?

Slide 10: Talk Outline
- Motivation
- Memory Buddies architecture
- VM fingerprinting
- Server consolidation and hotspot mitigation
- Implementation and evaluation
- Other research projects

Slide 11: Memory Buddies Architecture
- Control plane
  - Sharing estimation: determines which VMs have similar memory images
  - Consolidation: uses sharing potential to guide VM placement
  - Hotspot mitigation: detects and resolves memory pressure
- Memory tracer
  - Tracks memory utilization
  - Creates a VM fingerprint

Slide 12: Memory Tracer
- Tracks memory contents to estimate sharing
- Finds the list of actively used pages to determine the RAM allocation
- Monitors the access bit on each memory page: clear the bit; if it has been set again when checked later, the page is in active use
- Can rank pages by importance in LRU order
- Uses this ranking to predict how much memory is needed to meet a target page-fault rate
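The clear-and-recheck loop can be sketched as follows. This is a simplified simulation: the `PageTracker` class and its boolean flags are hypothetical stand-ins for the hardware-maintained access bits that a real tracer would read from page-table entries.

```python
# Sketch of access-bit sampling (hypothetical data structures, not the
# actual Memory Buddies kernel module).

class PageTracker:
    def __init__(self, num_pages):
        # accessed[i] simulates the hardware access bit for page i
        self.accessed = [False] * num_pages
        # how many sampling intervals each page was seen active in
        self.active_count = [0] * num_pages

    def touch(self, page):
        """Simulate the MMU setting the access bit on a memory reference."""
        self.accessed[page] = True

    def sample(self):
        """One tracer pass: record pages whose bit was set, then clear all bits."""
        active = [i for i, bit in enumerate(self.accessed) if bit]
        for i in active:
            self.active_count[i] += 1
            self.accessed[i] = False
        return active

    def working_set(self, min_hits):
        """Pages active in at least min_hits sampling intervals."""
        return [i for i, c in enumerate(self.active_count) if c >= min_hits]

tracker = PageTracker(num_pages=8)
for _ in range(3):                        # three sampling intervals
    tracker.touch(0); tracker.touch(1)    # pages 0 and 1 are hot
    tracker.sample()
tracker.touch(5)                          # page 5 is touched only once
tracker.sample()
print(tracker.working_set(min_hits=3))    # → [0, 1]
```

Counting how often a page's bit reappears after being cleared gives the recency ranking the slide describes; the memory needed for a target fault rate is then the size of the prefix of that ranking that covers most accesses.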

Slide 13: Content-Based Page Sharing
- Detect identical pages used by multiple virtual machines
- Calculate a hash of each page for quick comparison
- Store only one copy of each identical page
- Use copy-on-write mapping to ensure correctness
[Figure: memory contents of VM 1 and VM 2 on Host 1 and VM 3 on Host 2, illustrating self-sharing within a VM and cross-VM sharing]
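A minimal sketch of the dedup step, with pages as byte strings. A real hypervisor would compare full page contents on a hash match before merging, and would map both VMs to the same frame with copy-on-write protection.

```python
import hashlib

def share_pages(vms):
    """Given {vm_name: [page_bytes, ...]}, return (unique_store, mapping).

    unique_store keeps one copy of each distinct page; mapping records,
    for every (vm, page_index), the hash of the single stored copy.
    """
    unique_store = {}   # hash -> page contents (the one retained copy)
    mapping = {}        # (vm, index) -> hash
    for vm, pages in vms.items():
        for i, page in enumerate(pages):
            h = hashlib.sha256(page).hexdigest()
            # On a hash hit a real system verifies the full pages match,
            # then shares the frame under copy-on-write.
            unique_store.setdefault(h, page)
            mapping[(vm, i)] = h
    return unique_store, mapping

vms = {
    "vm1": [b"kernel", b"libc", b"appA"],
    "vm2": [b"kernel", b"libc", b"appB"],
}
store, mapping = share_pages(vms)
print(len(store))   # → 4 unique pages instead of 6
```

The two VMs share their kernel and libc pages, so six logical pages need only four physical copies; this is exactly the saving the ESX numbers on the previous slide quantify.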

Slide 14: VM Fingerprinting
- Need an efficient method to compare VMs' memory contents
- Maintaining full hash lists is expensive: a 32-bit hash per page costs 1 MB of hash list per 1 GB of RAM
- Instead, use a Bloom-filter-based fingerprint
  - A probabilistic data structure for storing keys
  - Page hashes are the keys inserted into the Bloom filter
- Bloom filter benefits: reduced storage requirement, much faster to compare
[Figure: comparing the memory fingerprints of VM 1 and VM 2]

Slide 15: Bloom Filter Overview
- A data structure composed of a bit array and several hash functions
- Insert an element by setting the bit positions given by the hash functions
- Look up an element by checking whether all of those bits are set
- Hash collisions can lead to false positives; the false-positive rate depends on the array size and the number of hash functions
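A minimal Bloom filter sketch. The parameter choices are illustrative, and the k positions are derived from one SHA-256 digest via the standard double-hashing trick.

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, key):
        # Derive k positions from two halves of a SHA-256 digest
        # (double hashing: g_i(x) = h1 + i*h2 mod m).
        d = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def insert(self, key):
        for p in self._positions(key):
            self.bits[p] = True

    def lookup(self, key):
        # True means "possibly present" (false positives are possible);
        # False means "definitely absent" (no false negatives).
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter(m=1024, k=3)
for page_hash in ["page-a", "page-b", "page-c"]:
    bf.insert(page_hash)
print(bf.lookup("page-a"))   # → True: inserted keys are always found
```

For fingerprinting, each inserted key would be a page hash, so a whole VM's memory image collapses into one fixed-size bit vector.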

Slide 16: Bloom Filter Comparison
- Bloom filters are also useful for fast set intersection
- Create a Bloom filter for each set and compare the bits set in each filter's bit vector
- Intuition: VM 1 and VM 2 are the most similar, since their vectors have the most set bits in common
- The magnitude of sharing can be estimated from the probability of certain bits being set in both vectors:

  |A ∩ B| ≈ -(m/k) · ln( z1·z2 / (m·(z1 + z2 - z12)) )

  where
  - z1, z2 = number of zeros in each bit vector
  - z12 = number of zeros in the AND of the two vectors
  - m = bit vector size
  - k = number of hash functions
[Figure: bit vectors for VM 1, VM 2, and VM 3]
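The quantities on this slide define a standard Bloom-filter intersection estimate: apply the usual cardinality estimate n(z) = -(m/k)·ln(z/m) to each filter and to their union (the OR of the vectors, whose zero count is z1 + z2 - z12), then use inclusion-exclusion. A self-contained sketch:

```python
import hashlib
import math

def make_filter(keys, m=8192, k=4):
    """Build a Bloom filter bit vector for a set of string keys."""
    bits = [False] * m
    for key in keys:
        d = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big")
        for i in range(k):
            bits[(h1 + i * h2) % m] = True
    return bits

def estimate_intersection(b1, b2, k):
    m = len(b1)
    z1 = b1.count(False)
    z2 = b2.count(False)
    # z12 = zeros of the AND: positions where at least one vector is zero
    z12 = sum(1 for x, y in zip(b1, b2) if not (x and y))
    z_union = z1 + z2 - z12            # zeros of the OR of the vectors
    # inclusion-exclusion over n(z) = -(m/k) ln(z/m):
    #   |A ∩ B| = n(z1) + n(z2) - n(z_union)
    return -(m / k) * math.log((z1 * z2) / (m * z_union))

a = [f"page-{i}" for i in range(100)]        # 100 pages in VM A
b = [f"page-{i}" for i in range(50, 150)]    # 100 pages in VM B, 50 shared
est = estimate_intersection(make_filter(a), make_filter(b), k=4)
print(round(est))   # close to the true overlap of 50
```

Because the estimate needs only bit counts over two fixed-size vectors, comparing two fingerprints is far cheaper than intersecting two full hash lists.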

Slide 17: Server Consolidation
- Use sharing potential to guide VM placement
- Step 1: Identify servers to consolidate
  - Servers with low utilization over the previous monitoring window
- Step 2: Determine target hosts
  - Find a new candidate host for each VM to be consolidated
  - Place a VM on the host that allows the maximum sharing
  - The placement must also satisfy CPU, memory, and network constraints
- Step 3: Perform migrations
  - Use live migration, transparent to applications
  - Power down the consolidated server after all of its VMs have moved
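Step 2 can be sketched as a greedy loop. The sharing estimates and capacity numbers below are illustrative (in the real system they would come from fingerprint comparisons and the monitoring window), and only the memory constraint is modeled; CPU and network checks would be added alongside it.

```python
def place_vms(vms, hosts, sharing):
    """Greedily place each VM on the feasible host with maximum sharing.

    vms:     {vm: memory_demand_mb}
    hosts:   {host: free_memory_mb}  (mutated as VMs are placed)
    sharing: {(vm, host): estimated_shared_mb}
    """
    placement = {}
    for vm, demand in vms.items():
        best = None
        for host, free in hosts.items():
            saved = sharing.get((vm, host), 0)
            # memory constraint: net footprint after sharing must fit
            if demand - saved <= free:
                if best is None or saved > sharing.get((vm, best), 0):
                    best = host
        if best is None:
            raise RuntimeError(f"no feasible host for {vm}")
        placement[vm] = best
        hosts[best] -= demand - sharing.get((vm, best), 0)
    return placement

vms = {"vm1": 1024, "vm2": 1024}
hosts = {"hostA": 1200, "hostB": 1200}
sharing = {("vm1", "hostA"): 600, ("vm2", "hostA"): 500,
           ("vm1", "hostB"): 100, ("vm2", "hostB"): 0}
print(place_vms(vms, hosts, sharing))
# → {'vm1': 'hostA', 'vm2': 'hostA'}: sharing lets both fit on one host
```

Note that two 1024 MB VMs fit on a single 1200 MB host only because sharing shrinks their combined footprint; that is the consolidation win.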

Slide 18: Hotspot Mitigation
- Memory requirements can change over time
  - Workload or application phase changes increase the need for memory
  - Modifying shared pages causes copy-on-write events
- Step 1: The monitoring system detects hotspot formation
  - Uses disk-paging and sharing statistics
- Step 2: Determine the cause of the hotspot
  - Type 1: swapping occurs without a change in the sharing rate
  - Type 2: swapping occurs and the sharing rate decreases
- Step 3: Hotspot resolution
  - Type 1: increase the memory allocation if memory is available
  - Type 2: attempt to migrate the VM to a host with higher sharing
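The two-type classification reduces to a small decision function. The threshold value here is illustrative, not from the system.

```python
def classify_hotspot(swap_rate, sharing_delta, swap_threshold=10.0):
    """Classify a memory hotspot from monitoring statistics.

    swap_rate:     pages/sec the VM is currently swapping to disk
    sharing_delta: change in the VM's sharing rate over the monitoring
                   window (negative means sharing has decreased)
    """
    if swap_rate <= swap_threshold:
        return "none"      # no significant paging: no hotspot
    if sharing_delta < 0:
        # Type 2: swapping plus lost sharing -> migrate the VM to a
        # host with higher sharing potential
        return "type-2"
    # Type 1: swapping with sharing unchanged -> grow the VM's
    # memory allocation if spare memory is available
    return "type-1"

print(classify_hotspot(swap_rate=50.0, sharing_delta=-0.2))   # → type-2
```

Distinguishing the two causes matters because the remedies differ: only a Type 2 hotspot (lost sharing) is fixed by moving the VM next to better sharing partners.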

Slide 19: Talk Outline
- Motivation
- Memory Buddies architecture
- VM fingerprinting
- Server consolidation and hotspot mitigation
- Implementation and evaluation
- Other research projects

Slide 20: Implementation
- Uses VMware ESX Server 3.0.1
  - Xen currently lacks simultaneous support for migration and page sharing
- Control plane: communicates with VMware's web-services-based API
- Memory tracer: a Linux kernel module that gathers page hashes and access statistics; it could instead be placed in the hypervisor layer
- Testbed: four ESX hosts plus additional control and client machines

Slide 21: Evaluation: VM Placement
- Memory Buddies detects which VMs have similar memory contents and groups them accordingly
- Setup: 4 hosts and 4 applications
- Sharing-aware placement can support two extra VMs

Slide 22: Evaluation: Data Center Simulation
- Simulation of a 100-host data center; each host can fit 8 VMs without significant sharing
- The effectiveness of sharing changes with application diversity
  - With very few app types, random placement does just as well
  - With very many app types, there is little chance of benefit
- At the peak, exploiting sharing fits 33% more VMs

Slide 23: Evaluation: Hotspot Mitigation
- Scenario: Host 1 runs VM 1 and VM 2; Host 2 runs VM 3
- Initially, VM 1 and VM 2 have a high rate of sharing
- At time 10, VM 2 writes to shared pages
- Memory Buddies automatically detects the loss of sharing and migrates VM 1 to Host 2, where it achieves a higher sharing rate

Slide 24: Evaluation: Fingerprinting
- Bloom filter comparison accuracy depends on the bit-vector size
  - For a VM with 384 MB of RAM, the error is below 1% for vectors larger than 64 KB
  - A full hash list requires 384 KB, and its size grows linearly with RAM
- Bloom filters are much faster to compare
  - In 60 seconds: 125 hash-list comparisons vs. 1700 Bloom filter comparisons

Slide 25: Large-Scale System Monitoring
- Scenario: very large data centers (~10K servers), e.g., brokerage firms and banks
- Complex mapping of applications to servers
- Need scalable techniques to model, monitor, understand, and respond in large systems
- Adaptive monitoring: sample at global scale, and increase monitoring resolution only where needed

Slide 26: Complex Application Modeling
- Application models: answer "what if" questions about application performance under hypothetical workloads
- Data mining of application logs: provides clues about application activities and workload characteristics
- Workload-to-utilization models: predict resource utilization from the incoming workload
- Workload-to-workload models: predict the incoming workload at subsequent tiers
- Model composition: combine models to predict resource requirements several tiers away from the monitoring point

Slide 27: Related Work
- Much work on virtualized data centers; the summit speakers represent a good sample
- Content-based page sharing: Waldspurger [OSDI 2002]
- Memory monitoring: Jones, Arpaci-Dusseau, and Arpaci-Dusseau [ASPLOS 2006]; Lu and Shen [USENIX 2007]
- Bloom filters for set intersection: Jain, Dahlin, and Tewari [WebDB 2005]; Luo, Qin, Geng, and Luo [SKG 2006]

Slide 28: Conclusions
- Exploiting page sharing can reduce the number of servers required in a data center
- Memory utilization must be monitored to detect hotspots caused by changes in sharing rates
- Joint work with Tim Wood, Gabriel Tarasuk-Levin, Jim Cipar, Peter Desnoyers, Mark Corner, and Emery Berger
- More at http://lass.cs.umass.edu
- Sandpiper source code available at http://lass.cs.umass.edu/projects/virtualization

