Slide 1: Virtual Machine Memory Access Tracing With Hypervisor Exclusive Cache
USENIX '07
Pin Lu & Kai Shen, Department of Computer Science, University of Rochester

Slide 2: Motivation
Virtual machine (VM) memory allocation:
- The hypervisor lacks OS-level information.
- The existing approaches do not work well:
  - Static allocation → inefficient memory utilization and suboptimal performance.
  - Working set sampling (e.g., VMware ESX Server) → provides limited information to support flexible allocation, and works poorly under workloads with little data reuse.

Slide 3: Miss Ratio Curve
Miss ratio curve (MRC):
- The page miss ratios at different memory allocation sizes; equivalently, the distribution of reuse distances.
- Allows flexible allocation objectives (a sketch of MRC construction follows).
[Figure: a miss ratio curve. Y axis: page miss ratio (0.5, 1.0, 2.0); X axis: memory size, with candidate sizes C1 and C2 around the current allocation size.]
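Below is a minimal sketch, in Python for clarity, of how an MRC can be derived from a page-reference trace via LRU reuse (stack) distances; the function names and the example trace are ours, not the paper's.

```python
def reuse_distances(trace):
    """Yield each reference's LRU stack distance (None on a first touch)."""
    stack = []                         # stack[0] is the most recently used page
    for page in trace:
        if page in stack:
            d = stack.index(page) + 1  # distinct pages touched since last use
            stack.remove(page)
        else:
            d = None                   # cold miss
        stack.insert(0, page)
        yield d

def miss_ratio_curve(trace, sizes):
    """Miss ratio at each candidate size (in pages): a reference hits in an
    LRU memory of size c exactly when its stack distance is <= c."""
    dists = list(reuse_distances(trace))
    return {c: sum(1 for d in dists if d is None or d > c) / len(dists)
            for c in sizes}

# A cyclic trace over three pages misses until all three pages fit:
print(miss_ratio_curve([1, 2, 3, 1, 2, 3, 1, 2, 3], sizes=[1, 2, 3]))
# {1: 1.0, 2: 1.0, 3: 0.333...}
```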

Slide 4: Related Work on MRC Estimation
- Geiger (Jones et al., ASPLOS 2006): appends a ghost buffer in addition to VM memory; reuse distance is tracked through I/O.
- Dynamic tracking of MRC for memory management (Zhou et al., ASPLOS 2004) and CRAMM (Yang et al., OSDI 2006): protect the LRU pages; reuse distance is tracked through memory accesses.
- Transparent contribution of memory (Cipar et al., USENIX 2006): periodically samples the page access bits to approximate the memory access traces.

Slide 5: Estimate VM MRC with Hypervisor Cache
The hypervisor cache approach:
- Part of VM memory becomes a cache managed by the hypervisor.
- Memory accesses are tracked through cache references (see the read-path sketch below).
- Low overhead; requires minimal VM information.
[Figure: data misses flow from the VM's memory through the hypervisor cache to storage.]
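A minimal sketch of the read path this slide implies, assuming plain dictionaries stand in for VM direct memory, the hypervisor cache, and storage (all names are ours): only references that miss in VM direct memory reach the hypervisor cache, so the stream of cache references traces exactly the VM's data misses.

```python
def vm_read(block, vm_memory, hcache, storage, trace):
    """vm_memory, hcache, storage: dicts mapping storage block -> page data."""
    if block in vm_memory:
        return vm_memory[block]     # hit in VM direct memory: unobserved
    trace.append(block)             # every hypervisor cache reference is logged
    data = hcache.pop(block, None)  # exclusive cache: a hit leaves the cache
    if data is None:
        data = storage[block]       # true miss: real storage I/O
    vm_memory[block] = data         # the block now lives only in VM memory
    return data
```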

Slide 6: Outline
- The hypervisor cache: design; transparency & overhead; evaluation.
- MRC-directed multi-VM memory allocation: allocation policy; evaluation.
- Summary & future work.

Slide 7: Design
Track the MRC within the VM's memory allocation: part of VM memory becomes the hypervisor cache.
- Exclusive cache → caching efficiency.
- Comparable miss rate: the hypervisor cache (HCache) incurs no extra misses if LRU is employed.
- Data admission from the VM avoids expensive storage I/O (see the sketch below).
[Figure: VM memory split into VM direct memory plus the hypervisor cache, shown at two different split points.]
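A hypothetical sketch of the exclusive-cache behavior described above: pages enter the hypervisor cache only when the VM evicts them and leave it when the VM re-reads them, so VM direct memory and the cache never overlap. The class and method names are ours.

```python
from collections import OrderedDict

class ExclusiveHCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # storage block -> page data

    def admit(self, block, data):
        """On a VM page eviction, admit the data directly from the VM
        (avoiding storage I/O), placing it at the MRU end."""
        self.pages[block] = data
        self.pages.move_to_end(block)
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict the least recently used

    def lookup(self, block):
        """On a VM read miss, a hit removes the page (exclusivity)."""
        return self.pages.pop(block, None)
```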

Slide 8: Cache Correctness
Cache contents must be correct, i.e., they must match the corresponding storage locations. This is challenging because the hypervisor has very limited information.
- VM data eviction notification: the VM OS notifies the hypervisor about each page eviction/release.
- Two-way mapping tables between pages and storage locations:
  - On each VM I/O request, mappings are inserted into both tables.
  - On each VM page eviction, data is admitted after consulting the mapping tables (sketched below).
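A minimal sketch (all names ours) of the two-way mapping just described: each VM I/O ties a guest page frame to a storage location, so a later eviction of that frame can be admitted under the correct location.

```python
page_to_block = {}   # guest page frame -> storage block
block_to_page = {}   # storage block   -> guest page frame

def on_vm_io(frame, block):
    """Insert mappings into both tables on every VM I/O request."""
    old = page_to_block.pop(frame, None)
    if old is not None:
        block_to_page.pop(old, None)  # the frame was remapped
    page_to_block[frame] = block
    block_to_page[block] = frame

def on_vm_eviction(frame, data, hcache):
    """The VM OS notified us that `frame` is being evicted; consult the
    mapping tables for the storage block before admitting the data."""
    block = page_to_block.get(frame)
    if block is not None:
        hcache.admit(block, data)     # e.g., the ExclusiveHCache sketch above
```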

Slide 9: Design Transparency & Overhead
The current design is not transparent:
- It requires explicit page eviction notification from the VM OS; the changes are small and fit well with para-virtualization.
- Reuse-time inference techniques (as in Geiger) are not appropriate: the page may have already been changed by the time the eviction is inferred, so it is too late to admit it from the VM.
System overhead:
- Cache and mapping table management.
- Minor page faults.
- Page eviction notification.

Slide 10: System Workflow
- A smaller VM direct memory (larger hypervisor cache) yields more complete page miss rate information.
- If the overhead is not tolerable, the cache can be kept permanently (skipping step 3 of the workflow).

Slide 11: Prototype Implementation
- Hypervisor: Xen 3.0.2; VM OS: Linux 2.6.16.
- Page eviction is implemented as a new type of VM I/O request.
- The hypervisor cache is populated through ballooning.
- The HCache and mapping tables are maintained in the Xen0 back-end driver.
- Data is transferred by page copying.
[Figure: the XenU front-end driver sends reads, writes, and evictions to the Xen0 back-end driver, which manages the cache and mapping tables in front of storage.]

Primitive operation    Overhead
Mapping table lookup   0.28 us
Mapping table insert   0.06 us
Mapping table delete   0.28 us
Cache lookup           0.13 us
Cache insert           0.06 us
Cache delete           0.05 us
Cache move to tail     -
Page copying           7.82 us

Slide 12: Hypervisor Cache Evaluation
Goals: evaluate caching performance, overhead, and MRC prediction accuracy.
VM workloads:
- I/O-bound: SPECweb99, keyword searching, TPC-C-like.
- CPU-bound: TPC-H-like.

Slide 13: Throughput Results
- Total VM memory is 512 MB.
- Hypervisor cache sizes: 12.5%, 25%, 50%, and 75% of total VM memory.

Slide 14: CPU Overhead Results
- Total VM memory is 512 MB.
- Hypervisor cache sizes: 12.5%, 25%, 50%, and 75% of total VM memory.

Slide 15: MRC Prediction Results

Slide 16: Outline
- The hypervisor cache: design; transparency and overhead; evaluation.
- MRC-directed multi-VM memory allocation: allocation policy; evaluation.
- Summary & future work.

Slide 17: MRC-directed Multi-VM Memory Allocation
More complete VM MRCs via the hypervisor cache provide detailed miss ratios at different memory sizes, enabling flexible VM memory allocation policies.
Isolated sharing policy (sketched below):
- Maximize system-wide performance, e.g., minimize the geometric mean of all VMs' miss ratios.
- Constrain individual VM performance degradation, e.g., no VM suffers more than an extra α% in miss ratio.
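An illustrative sketch, not the paper's algorithm, of the isolated sharing policy: exhaustively search the allocations that minimize the geometric mean of the VMs' predicted miss ratios while rejecting any split that pushes a VM above its baseline tolerance. Reading the α% constraint multiplicatively is our assumption, as is every name below.

```python
import itertools
import math

def best_allocation(mrcs, baselines, total, step, alpha):
    """mrcs: per-VM dicts mapping allocation size -> predicted miss ratio;
    baselines: each VM's miss ratio at its current base allocation."""
    sizes = range(0, total + 1, step)
    best, best_score = None, float("inf")
    for split in itertools.product(sizes, repeat=len(mrcs)):
        if sum(split) != total:
            continue
        ratios = [mrc[s] for mrc, s in zip(mrcs, split)]
        # Isolation constraint: no VM suffers more than alpha extra misses.
        if any(r > b * (1 + alpha) for r, b in zip(ratios, baselines)):
            continue
        score = math.prod(ratios) ** (1 / len(ratios))  # geometric mean
        if score < best_score:
            best, best_score = split, score
    return best, best_score
```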

Slide 18: Isolated Sharing Experiments
- Base allocation of 512 MB each; objective: minimize the geometric mean of miss ratios.
- Isolation constraint at 5% ⇒ mean miss ratio of 0.85.
- Isolation constraint at 25% ⇒ mean miss ratio of 0.41.

Slide 19: Comparison with VMware ESX Server
ESX Server's policy vs. isolated sharing with 0% tolerance:
- Both work well when the VM's working set (around 330 MB) fits in the VM memory (512 MB).
- After adding a noise background workload that slowly scans through a large dataset:
  - The VM MRC identifies that the VM does not benefit from extra memory.
  - ESX Server estimates the working set size at 800 MB, preventing memory reclamation.

Slide 20: Summary and Future Work
Summary:
- VM MRC estimation via the hypervisor cache: features, design, and implementation.
- MRC-directed multi-VM memory allocation.
Future work:
- Improving the transparency of the HCache.
- Reducing the overhead of the HCache.
- A generic hypervisor buffer cache.

