
1 Diagnosing Performance Overheads in the Xen Virtual Machine Environment
Aravind Menon, Willy Zwaenepoel (EPFL, Lausanne)
Jose Renato Santos, Yoshio Turner, G. (John) Janakiraman (HP Labs, Palo Alto)

2 Virtual Machine Monitors (VMM)
- Increasing adoption for server applications
  - Server consolidation, co-located hosting
- Virtualization can affect application performance in unexpected ways

3 Web server performance in Xen
- 25-66% lower peak throughput than Linux, depending on Xen configuration
- Need VM-aware profiling to diagnose causes of performance degradation

4 Contributions
- Xenoprof – framework for VM-aware profiling in Xen
- Understanding network virtualization overheads in Xen
- Debugging a performance anomaly using Xenoprof

5 Outline
- Motivation
- Xenoprof
- Network virtualization overheads in Xen
- Debugging using Xenoprof
- Conclusions

6 Xenoprof – profiling for VMs
- Profile applications running in VM environments
- Contribution of different domains (VMs) and VMM (Xen) routines to execution cost
- Profile various hardware events
- Example output:

    Function name      %Instructions   Module
    ---------------------------------------------------------
    mmu_update              13         Xen (VMM)
    br_handle_frame          8         driver domain (Dom 0)
    tcp_v4_rcv               5         guest domain (Dom 1)

7 Xenoprof – architecture (brief)
- Extend existing profilers (OProfile) to use Xenoprof
- Xenoprof collects samples and coordinates profilers running in multiple domains
[Diagram: extended OProfile instances in Domain 0, Domain 1, and Domain 2 (the VMs) sit above Xenoprof in the Xen VMM, which drives the hardware performance counters]
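To make the architecture concrete: the sketch below is illustrative C only; the structure and function names (`sample_ring`, `log_sample`, `next_sample`) and the record layout are invented here and are not the real Xenoprof interface. The general idea is that Xen owns the hardware performance counters and handles their overflow interrupts, appending one {PC, event, mode} record per overflow to a buffer shared with the profiled domain; the extended OProfile driver inside that domain drains the buffer and maps each program counter back to a user, kernel, or Xen symbol.

```c
/* Illustrative sketch only -- names and layout are assumptions, not the
 * actual Xenoprof ABI.  Xen (producer) logs one sample per counter
 * overflow into a ring shared with the domain; the domain's extended
 * OProfile driver (consumer) drains it. */

#include <stdint.h>

struct sample {
    uint64_t pc;       /* program counter at counter overflow */
    uint32_t event;    /* which hardware event (instructions, TLB miss, ...) */
    uint32_t cpu_mode; /* 0 = user, 1 = guest kernel, 2 = Xen (VMM) */
};

struct sample_ring {
    volatile uint32_t head;   /* advanced by Xen (producer) */
    volatile uint32_t tail;   /* advanced by the in-domain profiler (consumer) */
    struct sample     buf[1024];
};

/* Called by Xen on a performance-counter overflow interrupt (hypothetical). */
static void log_sample(struct sample_ring *r, uint64_t pc,
                       uint32_t event, uint32_t mode)
{
    uint32_t next = (r->head + 1) % 1024;
    if (next == r->tail)
        return;                    /* ring full: drop the sample */
    r->buf[r->head] = (struct sample){ .pc = pc, .event = event,
                                       .cpu_mode = mode };
    r->head = next;
}

/* Called by the extended OProfile driver inside the domain (hypothetical). */
static int next_sample(struct sample_ring *r, struct sample *out)
{
    if (r->tail == r->head)
        return 0;                  /* nothing pending */
    *out = r->buf[r->tail];
    r->tail = (r->tail + 1) % 1024;
    return 1;
}
```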

8 Outline
- Motivation
- Xenoprof
- Network virtualization overheads in Xen
- Debugging using Xenoprof
- Conclusions

9 Xen network I/O architecture
- Privileged driver domain controls the physical NIC
- Each unprivileged guest domain uses a virtual NIC connected to the driver domain via a Xen I/O channel
  - Control: I/O descriptor ring (shared memory)
  - Data transfer: page remapping (no copying)
[Diagram: guest domain linked to the driver domain over the I/O channel; in the driver domain, virtual interfaces vif1 and vif2 attach to a software bridge together with the physical NIC]
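To make the control/data split concrete, here is a minimal C sketch of the descriptor-ring idea. The type and function names (`io_ring`, `rx_request`, `rx_response`, `post_rx_buffer`) and the ring size are invented for illustration and are not Xen's actual netif definitions; the point is that only small descriptors travel through the shared page, while the packet data stays in pages that Xen remaps between the two domains.

```c
/* Minimal sketch of the I/O-channel idea, not Xen's real ring layout.
 * Control flows through a descriptor ring in a page shared by the guest
 * (frontend) and the driver domain (backend); data is never copied --
 * each descriptor names a page that Xen remaps between the domains. */

#include <stdint.h>

struct rx_request {              /* guest -> driver: "here is an empty page" */
    uint16_t id;
    uint32_t page_ref;           /* reference to a guest-provided page */
};

struct rx_response {             /* driver -> guest: "page now holds a packet" */
    uint16_t id;
    uint16_t offset;             /* where the packet starts in the page */
    int16_t  status;             /* packet length, or < 0 on error */
};

struct io_ring {                 /* lives in a single shared page */
    volatile uint32_t req_prod;  /* advanced by the guest  */
    volatile uint32_t req_cons;  /* advanced by the driver */
    volatile uint32_t rsp_prod;  /* advanced by the driver */
    volatile uint32_t rsp_cons;  /* advanced by the guest  */
    struct rx_request  req[256];
    struct rx_response rsp[256];
};

/* Guest posts an empty page for the driver domain to fill (hypothetical). */
static void post_rx_buffer(struct io_ring *r, uint16_t id, uint32_t page_ref)
{
    r->req[r->req_prod % 256] = (struct rx_request){ .id = id,
                                                     .page_ref = page_ref };
    r->req_prod++;               /* followed by an event notification */
}
```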

10 Evaluated configurations
- Linux: no Xen
- Xen Driver: run application in the privileged driver domain
- Xen Guest: run application in an unprivileged guest domain interfaced to the driver domain via the I/O channel
[Diagram: Xen network I/O architecture, as on slide 9]

11 Networking micro-benchmark
- One streaming TCP connection per NIC (up to 4)
- Driver receive throughput: 75% of Linux throughput
- Guest throughput: 1/3 to 1/5 of Linux throughput
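The benchmark tool itself is not shown on the slides; for reference, a streaming TCP receiver of this general kind (a minimal sketch in the spirit of netperf/ttcp, not the authors' actual tool; the port number and defaults are arbitrary) simply drains the connection into an application buffer and divides bytes received by elapsed time. The size of that application buffer is the knob that reappears in the anomaly on slide 17.

```c
/* Minimal sketch of a streaming TCP receive benchmark (illustrative only).
 * Usage: ./recv [app_buffer_bytes]; throughput = bytes received / elapsed time. */

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    size_t bufsize = (argc > 1) ? (size_t)atol(argv[1]) : 65536;
    char *buf = malloc(bufsize);
    long long total = 0;
    ssize_t n;

    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5001);          /* arbitrary benchmark port */

    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 1);
    int sock = accept(lsock, NULL, NULL);

    /* Drain the stream into the application buffer until the sender closes. */
    while ((n = read(sock, buf, bufsize)) > 0)
        total += n;

    printf("received %lld bytes\n", total);
    close(sock);
    close(lsock);
    free(buf);
    return 0;
}
```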

12 Receive – Xen Driver overhead
- Profiling shows slower instruction execution with Xen Driver than with Linux (both use 100% CPU)
  - Data TLB miss count 13 times higher
  - Instruction TLB miss count 17 times higher
- Xen: 11% more instructions per byte transferred (Xen virtual interrupts, driver hypercall)

13 Receive – Xen Guest overhead
- Xen Guest configuration executes twice as many instructions as the Xen Driver configuration
  - Driver domain (38%): overhead of bridging
  - Xen (27%): overhead of page remapping
[Diagram: Xen network I/O architecture, as on slide 9]

14 Transmit – Xen Guest overhead
- Xen Guest executes 6 times as many instructions as the Xen Driver configuration
  - Factor of 2, as in the receive case
  - Guest instructions increase 2.7 times
- Virtual NIC (vif2) in the guest does not support the TCP offload capabilities of the physical NIC
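For context on why missing offload costs instructions: with checksum offload the physical NIC computes the TCP/IP checksum in hardware, and with segmentation offload it splits large transmit buffers into MTU-sized packets. When the virtual NIC cannot advertise these capabilities, the guest kernel does the equivalent work in software for every packet. The sketch below shows one representative piece of that work, the standard Internet checksum (RFC 1071); it is illustrative C, not the actual Linux routine.

```c
/* Per-byte work that checksum offload would move into the NIC:
 * the standard Internet checksum (RFC 1071), illustrative version. */

#include <stddef.h>
#include <stdint.h>

uint16_t internet_checksum(const void *data, size_t len)
{
    const uint16_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {             /* sum 16-bit words */
        sum += *p++;
        len -= 2;
    }
    if (len == 1)                 /* pad a trailing odd byte */
        sum += *(const uint8_t *)p;

    while (sum >> 16)             /* fold carries back into the low 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;        /* one's-complement of the sum */
}
```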

15 Suggestions for improving Xen
- Enable virtual NICs to utilize the offload capabilities of the physical NIC
- Efficient support for packet demultiplexing in the driver domain

16 Outline
- Motivation
- Xenoprof
- Network virtualization overheads in Xen
- Debugging using Xenoprof
- Conclusions

17 Anomalous network behavior in Xen
- TCP receive throughput in Xen changes with application buffer size (slow Pentium III)

18 Debugging using Xenoprof
- 40% kernel execution overhead incurred in socket buffer de-fragmenting routines

19 De-fragmenting socket buffers
- Linux: insignificant fragmentation with streaming workload
- Xenolinux (Linux on Xen):
  - Received packets: 1500 bytes (MTU) out of a 4 KB socket buffer
  - Page-sized socket buffers support remapping over the I/O channel
[Diagram: MTU-sized data packets sitting in 4 KB socket buffers on the socket receive queue are de-fragmented (copied together)]
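A back-of-the-envelope sketch of why this triggers de-fragmentation: the socket's receive-memory budget is charged per buffer (4 KB) rather than per payload byte (at most 1500), so the receive queue looks full while holding relatively little data, and the kernel responds by collapsing (copying) buffers together. The numbers below, including the assumed receive limit and the macro names, are illustrative only.

```c
/* Why Xenolinux hits de-fragmentation: each received packet carries at
 * most MTU (1500) bytes but occupies a full 4 KB, page-sized socket
 * buffer (page-sized so it can be remapped over the I/O channel).
 * Hypothetical numbers for illustration. */

#include <stdio.h>

#define SOCK_BUF_SIZE   4096   /* page-sized socket buffer in Xenolinux */
#define MTU_PAYLOAD     1500   /* useful bytes per received packet      */
#define RCV_LIMIT      87380   /* an assumed socket receive-memory limit */

int main(void)
{
    /* Packets queueable before the limit trips, counted two ways. */
    int by_truesize = RCV_LIMIT / SOCK_BUF_SIZE;   /* what gets charged      */
    int by_payload  = RCV_LIMIT / MTU_PAYLOAD;     /* what the data would allow */

    printf("packets queueable, charged by buffer size: %d\n", by_truesize);
    printf("packets queueable, charged by payload:     %d\n", by_payload);
    printf("utilization per buffer: %.0f%%\n",
           100.0 * MTU_PAYLOAD / SOCK_BUF_SIZE);
    return 0;
}
```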

20 Conclusions
- Xenoprof useful for identifying major overheads in Xen
- Xenoprof to be included in official Xen and OProfile releases
- Where to get it: http://xenoprof.sourceforge.net

