Xen and the Art of Virtualization


1 Xen and the Art of Virtualization
Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield. Presented by Yutao Tang.

2 Outline: Introduction, Overview of Xen, Detailed Design, Evaluation, Xen Today, Conclusion

3 Why use virtualization?
Operating systems suffer from the following problems: they do not provide adequate isolation, so the scheduling priority, memory demand, network traffic, and disk accesses of one process may impact the performance of others, and system administration becomes difficult.

4 Why use virtualization?
Virtualization can mitigate these problems by performing the multiplexing at the level of whole operating systems rather than processes, so that unintentional or undesired interactions between tasks are minimized.

5 Breaking it Down Virtualization can be broken down into two main categories: full virtualization and paravirtualization.

6 Classic VMM: Full Virtualization
In full virtualization, the virtual hardware is functionally identical to the underlying physical hardware, which allows unmodified operating systems to be hosted. However, support for virtualization is not inherent in the x86 architecture: certain privileged instructions do not trap to the VMM, and virtualizing the MMU efficiently is difficult. In addition, the OS sometimes wants both real and virtual resource information, for example for timers.

7 Xen's Approach: Paravirtualization
Xen presents a virtual machine abstraction that is similar, but not identical, to the underlying hardware. This requires modifications to the guest operating system, but no changes are required to the application binary interface (ABI), so applications run unmodified.

8 Outline: Introduction, Overview of Xen, Detailed Design, Evaluation, Xen Today, Conclusion

9 Overview of Xen A high-performance, paravirtualized virtual machine monitor. It runs on 32-bit x86 processors, can host up to 100 virtual machines, provides isolation between domains, and supports ported guest operating systems such as XenoLinux and XenoXP.

10 Xen's Virtual Machine Interface
The virtual machine interface can be broadly classified into three parts: memory management, CPU, and device I/O.

11 Xen's VMI: Memory Management
Virtualizing the TLB is the challenging part. A software-managed TLB would be easier to virtualize, since the hypervisor could decide which entries to flush; the x86 TLB, however, is hardware-managed and untagged, so address-space switches typically require a complete TLB flush. Xen's solutions: guest OSes are responsible for allocating and managing the hardware page tables, but under the control of the hypervisor; and Xen itself lives in a 64MB section at the top of every VM's address space that is not accessible to the guest (sketched below). The benefits are safety and isolation, with minimal performance overhead.
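The address-space reservation can be pictured with a couple of constants, as in the minimal sketch below. The 64MB figure matches the slide; the boundary value and the helper are illustrative, not the hypervisor's actual definitions.

```c
/* Minimal sketch of the 32-bit address-space split described above.
 * Xen reserves the top 64MB of every virtual address space, so entering
 * the hypervisor never requires a TLB flush, and the guest is prevented
 * (via segment limits and validated page tables) from mapping into it. */
#define HYPERVISOR_VIRT_START 0xFC000000UL   /* 4GB - 64MB (illustrative value) */

static int va_usable_by_guest(unsigned long va)
{
    return va < HYPERVISOR_VIRT_START;       /* guest mappings must stay below Xen's region */
}
```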

12 Xen's VMI: CPU In a non-virtualized system, the OS is the most privileged entity. Many processor architectures provide only two privilege levels; on such architectures the VMM must run in level 0 while the guest OS and its applications both run in level 1. The guest OS then has to run in a separate address space from its applications and must use the VMM to pass control between itself and the applications.

13 Xen's VMI: CPU The x86 is less difficult to virtualize than most systems in this respect, because it supports four distinct privilege levels (rings) in hardware. In a non-virtualized system, the OS runs in ring 0 (the most privileged), applications run in ring 3, and rings 1 and 2 are generally unused. Under Xen, the hypervisor runs in ring 0, the guest OS runs in ring 1, and applications run in ring 3.

14 Xen's VMI: CPU Typically only two types of exception occur frequently enough to affect system performance: system calls and page faults. For system calls, Xen allows the guest OS to register a 'fast' exception handler that the CPU dispatches directly in ring 1, without switching to ring 0; the handler is validated before being installed in the exception table, to make sure nothing executes with ring-0 privilege. This method does not work for page faults, because only code running in ring 0 can read the faulting address from register CR2, so page faults are always delivered via Xen. A sketch of the fast-handler registration follows.
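As a rough illustration of the fast-handler registration, the sketch below shows a guest handing Xen a table of trap handlers at boot. The structure and wrapper name are assumptions in the spirit of Xen's interface of that era, not the exact ABI; the key point is that Xen checks each entry (in particular the target segment and ring) before installing it.

```c
#include <stdint.h>

/* Hypothetical trap-table entry and hypercall wrapper (not the exact Xen ABI). */
struct trap_info {
    uint8_t       vector;   /* e.g. 0x80 for the Linux system-call vector */
    uint8_t       dpl;      /* lowest ring allowed to raise this trap (3 for syscalls) */
    uint16_t      cs;       /* guest code segment selector -- a ring-1 segment */
    unsigned long address;  /* guest handler entry point */
};

extern long hypervisor_set_trap_table(const struct trap_info *table);  /* assumed wrapper */

void guest_register_syscall_handler(unsigned long entry, uint16_t ring1_cs)
{
    struct trap_info traps[] = {
        { .vector = 0x80, .dpl = 3, .cs = ring1_cs, .address = entry },
        { 0 }   /* zero-terminated table; Xen validates every entry before installing it */
    };
    hypervisor_set_trap_table(traps);
}
```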

15 Xen's VMI: Device I/O Xen does not emulate existing hardware devices.
Instead it exposes clean device abstractions, for simplicity and performance. Data is transferred to and from each domain using shared-memory, asynchronous buffer rings, and hardware interrupts are delivered to the respective domains via an event notification mechanism.

16 The Cost of Porting an OS to Xen
The cost is measured in lines of code that must be modified or added, covering privileged instructions, page table access, the network driver, and the block device driver. In total this amounts to less than 2% of the code base.

17 Xen: Control and Management
Xen itself performs only basic control operations such as access control and CPU scheduling between domains. All policy and control decisions are undertaken by management software running in Domain0. This software supports creation and deletion of virtual block devices (VBDs), virtual network interfaces (VIFs), domains, routing rules, and so on.

18 Outline: Introduction, Overview of Xen, Detailed Design, Evaluation, Xen Today, Conclusion

19 Detailed Design: Control Transfer
Hypercalls are synchronous calls from a domain into Xen, analogous to system calls. Events are used by Xen to notify a domain asynchronously, for example that data from an I/O device is ready. The guest OS never sees hardware interrupts, only Xen's event notifications. A sketch of how a hypercall is issued follows.
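To make the analogy with system calls concrete, the sketch below shows roughly how a 32-bit x86 guest of this era trapped into Xen: a software interrupt with the hypercall number in EAX and arguments in the remaining registers. The vector 0x82 is the one used by early Xen, but treat the details as illustrative.

```c
/* Roughly how a 32-bit x86 guest issues a hypercall: a software interrupt
 * into Xen (vector 0x82 in early Xen), hypercall index in EAX, arguments
 * in EBX/ECX -- directly analogous to an application's "int 0x80" system
 * call into a conventional Linux kernel. */
static inline long hypercall2(unsigned int op, unsigned long a1, unsigned long a2)
{
    long ret;
    __asm__ __volatile__ ("int $0x82"
                          : "=a" (ret)
                          : "a" (op), "b" (a1), "c" (a2)
                          : "memory");
    return ret;
}
```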

20 Xen: Detailed Design, Data Transfer
Data transfer between Xen and the domains is done using I/O rings. The memory for device I/O is provided by the respective domain, which minimizes the work Xen must do to demultiplex data to a specific domain.

21 Xen: Data Transfer in Detail
The I/O ring structure: an I/O ring is a circular queue of descriptors. Descriptors do not contain the I/O data themselves; instead they indirectly reference data buffers allocated by the guest OS. Access to each ring is based on two pairs of producer and consumer pointers. The guest OS associates a unique identifier with each request so that responses can be matched to requests even if Xen reorders them; a structural sketch follows.
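The structure below sketches such a ring. Field names are illustrative (Xen's real ring macros differ), but the shape is the one described above: descriptors reference guest buffers rather than carrying data, each party advances only its own index, and the id field lets the guest match responses to requests.

```c
#include <stdint.h>

#define RING_SIZE 64   /* must be a power of two */

struct ring_desc {
    uint64_t id;        /* guest-chosen id, echoed back in the matching response */
    uint64_t buf_addr;  /* guest-allocated buffer the data is read from / written to */
    uint32_t len;
};

struct io_ring {
    volatile uint32_t req_prod, req_cons;  /* requests:  guest produces, driver domain consumes */
    volatile uint32_t rsp_prod, rsp_cons;  /* responses: driver produces, guest consumes */
    struct ring_desc slot[RING_SIZE];      /* shared descriptor slots (no payload data) */
};

/* Guest side: queue one request if there is room, then notify the other
 * end via an event channel (not shown). */
static int ring_put_request(struct io_ring *r, const struct ring_desc *d)
{
    if (r->req_prod - r->rsp_cons >= RING_SIZE)
        return -1;                          /* ring full: too many outstanding requests */
    r->slot[r->req_prod % RING_SIZE] = *d;
    __sync_synchronize();                   /* make the descriptor visible before the index */
    r->req_prod++;
    return 0;
}
```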

22 Xen: Subsystem Virtualization
The virtualized subsystems are: CPU scheduling, time and timers, virtual address translation, physical memory, networking, and disk management.

23 Xen: CPU Scheduling Xen uses the Borrowed Virtual Time (BVT) scheduling algorithm to schedule domains. Per-domain scheduling parameters can be adjusted from Domain0. The core of the algorithm is sketched below.
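The decision BVT makes can be summarised in a few lines; the sketch below is a simplified illustration (real BVT also bounds warp time and adds a context-switch allowance). Each domain accumulates actual virtual time in inverse proportion to its weight, a latency-sensitive domain may temporarily "warp" backwards in virtual time, and the scheduler always runs the domain with the smallest effective virtual time.

```c
struct bvt_dom {
    unsigned long avt;     /* actual virtual time accumulated so far */
    unsigned long warp;    /* how far a warped domain borrows back in virtual time */
    unsigned int  weight;  /* relative CPU share */
    int           warped;  /* currently warping (latency-sensitive wakeup)? */
};

static long evt(const struct bvt_dom *d)                 /* effective virtual time */
{
    return (long)d->avt - (d->warped ? (long)d->warp : 0);
}

static int bvt_pick_next(const struct bvt_dom *dom, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (evt(&dom[i]) < evt(&dom[best]))
            best = i;                                    /* smallest EVT runs next */
    return best;
}

static void bvt_account(struct bvt_dom *d, unsigned long ran_mcu)
{
    d->avt += ran_mcu * 1000 / d->weight;                /* heavier weight => virtual time advances slower */
}
```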

24 Xen: Time and Timers Xen provides three notions of time to satisfy different requirements: real time (maintained accurately with respect to the processor's cycle counter and always advancing, regardless of which domain is executing), virtual time (advancing only while the domain itself is executing), and wall-clock time (real time plus an offset). The relationship is sketched below.
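A tiny sketch of how the three time bases relate; the structure is illustrative, not Xen's actual shared-info layout.

```c
#include <stdint.h>

struct domain_time {
    uint64_t real_ns;          /* maintained by Xen from the processor cycle counter; always advances */
    uint64_t virtual_ns;       /* advances only while this domain is executing */
    int64_t  wallclock_off_ns; /* per-domain offset the guest may adjust, e.g. via NTP */
};

static uint64_t wallclock_ns(const struct domain_time *t)
{
    return t->real_ns + (uint64_t)t->wallclock_off_ns;   /* wall clock = real time + offset */
}
```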

25 Xen: Virtual Address Translation
With paravirtualization, guest OSes allocate and manage their own page tables, using a hypercall to change the page table base. Xen must validate every page table update before it takes effect, applying two rules to each PTE: (1) a guest OS may only map physical pages that it owns, and (2) guest OSes have only read-only access to their page tables. A hypervisor-side sketch of these checks follows.
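Seen from the hypervisor's side, the two rules amount to a check like the one below. The per-frame bookkeeping is invented for illustration; the point is that an update is rejected unless the frame belongs to the requesting domain, and frames that are themselves page tables are never mapped writable into the guest.

```c
#include <stdbool.h>

typedef unsigned long mfn_t;   /* machine frame number */
typedef unsigned int  domid_t;

/* Illustrative per-frame metadata the hypervisor might consult. */
struct frame_info { domid_t owner; bool is_page_table; };
extern struct frame_info frame_table[];

int validate_pte_update(domid_t requester, mfn_t target_mfn, bool maps_writable)
{
    const struct frame_info *f = &frame_table[target_mfn];
    if (f->owner != requester)
        return -1;             /* rule 1: a guest may only map frames it owns */
    if (f->is_page_table && maps_writable)
        return -1;             /* rule 2: page-table frames stay read-only to the guest */
    return 0;
}
```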

26 How the Guest OS Updates a PTE
(Diagram.) The guest OS issues an "update PTE" hypercall carrying the desired virtual-to-machine mapping; Xen (1) runs its validation checks and (2) performs the update on the hardware MMU and page tables. A guest-side sketch follows.
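On the guest side, the flow in the diagram looks roughly like the sketch below: instead of writing the PTE directly, the guest hands Xen the machine address of the PTE and the new value, and Xen validates and applies the update. The structure mirrors the spirit of Xen's mmu_update interface, but the wrapper name and exact layout here are assumptions.

```c
#include <stdint.h>

struct mmu_update {
    uint64_t ptr;   /* machine address of the PTE to change */
    uint64_t val;   /* new PTE value requested by the guest */
};

extern long hypervisor_mmu_update(struct mmu_update *reqs, unsigned int count);  /* assumed wrapper */

static long guest_set_pte(uint64_t pte_machine_addr, uint64_t new_pte)
{
    struct mmu_update req = { .ptr = pte_machine_addr, .val = new_pte };
    return hypervisor_mmu_update(&req, 1);   /* in practice updates are batched to amortise the trap */
}
```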

27 Xen: Physical Memory Physical memory allocations are made at domain-creation time and are statically partitioned between domains, providing strong isolation. Xen does not guarantee contiguous regions of memory; guest OSes themselves create the illusion of contiguous physical memory. Xen supports efficient hardware-to-physical address mapping through a shared translation array that is readable by all domains; updates to it are validated by Xen. The two mapping directions are sketched below.
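The two mapping directions can be pictured as a pair of lookup tables, as in the illustrative sketch below (array names are assumptions): each guest keeps its own pseudo-physical-to-machine table so the rest of its kernel sees contiguous memory, while the machine-to-physical table is shared, readable by all domains, and only updated through Xen-validated operations.

```c
typedef unsigned long pfn_t;   /* guest pseudo-physical frame number (contiguous from 0) */
typedef unsigned long mfn_t;   /* real machine frame number (may be scattered) */

extern mfn_t phys_to_machine[];   /* per-guest table, maintained by the guest OS itself */
extern pfn_t machine_to_phys[];   /* shared table, readable by all domains, writes validated by Xen */

static inline mfn_t pfn_to_mfn(pfn_t pfn) { return phys_to_machine[pfn]; }
static inline pfn_t mfn_to_pfn(mfn_t mfn) { return machine_to_phys[mfn]; }
```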

28 Networking Xen provides a "virtual firewall-router" (VFR).
Domain0 is responsible for installing the firewall rules. Data is transmitted and received using two buffer rings per virtual interface, one for outgoing and one for incoming traffic. Incoming packets are checked by Xen against the VFR rules, and any packet that violates a rule is dropped. A toy rule-matching sketch follows.
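A toy version of the rule check might look like the sketch below. The rule format is invented purely for illustration; the real VFR rules installed by Domain0 are richer, but the control flow is the same: match the packet against the rules and drop it if nothing permits it.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct vfr_rule {
    uint32_t src_ip, src_mask;   /* match when (addr & mask) == ip */
    uint32_t dst_ip, dst_mask;
    bool     allow;              /* first matching rule decides */
};

static bool vfr_permit(const struct vfr_rule *rules, size_t n,
                       uint32_t src_ip, uint32_t dst_ip)
{
    for (size_t i = 0; i < n; i++)
        if ((src_ip & rules[i].src_mask) == rules[i].src_ip &&
            (dst_ip & rules[i].dst_mask) == rules[i].dst_ip)
            return rules[i].allow;
    return false;                /* no rule matched: drop the packet */
}
```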

29 Xen: Disk Management Only Domain0 has direct, unchecked access to the physical disks. Other domains access the disks through virtual block devices (VBDs), which are maintained by Domain0. A VBD comprises a list of extents with associated ownership and access-control information, and is accessed via an I/O ring. Xen services batches of requests from competing domains in a simple round-robin fashion, as sketched below.
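The round-robin batching can be sketched as below (queue layout and batch size are illustrative): Xen visits each domain's outstanding VBD requests in turn and issues a small batch from each, keeping access fair across domains while leaving room for the lower-level scheduler to reorder within a batch for disk efficiency.

```c
#include <stddef.h>

struct blk_request;                          /* opaque descriptor pulled from a domain's I/O ring */

struct vbd_queue {
    struct blk_request *pending[64];         /* simple per-domain FIFO of outstanding requests */
    size_t head, tail;                       /* monotonically increasing; index modulo 64 */
};

void service_round_robin(struct vbd_queue *q, size_t ndoms, size_t batch)
{
    for (size_t d = 0; d < ndoms; d++)                       /* visit every domain in turn */
        for (size_t i = 0; i < batch && q[d].head != q[d].tail; i++) {
            struct blk_request *req = q[d].pending[q[d].head++ % 64];
            (void)req;                                       /* hand off to the real disk driver here */
        }
}
```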

30 Outline: Introduction, Overview of Xen, Detailed Design, Evaluation, Xen Today, Conclusion

31 Relative Performance The authors first performed a set of experiments to evaluate the overhead of the various virtualization approaches. In these experiments, application-level benchmarks were used to characterize performance.

32 Performance Results Cluster 1: the SPEC CPU suite, computationally intensive applications with very little I/O or OS interaction. Cluster 2: the time taken to build a default configuration of the Linux kernel. Cluster 3: the Open Source Database Benchmark suite in its default configuration, information-retrieval workload, reported in tuples per second. Cluster 4: the same suite's online transaction processing (OLTP) workload, also in tuples per second. Cluster 5: the dbench program, emulating the load placed on a file server. Cluster 6: SPEC WEB99, a web server benchmark. Across these benchmarks, Xen's performance is far better than that of VMware and UML.

33 Xen: Operating System Benchmarks
The authors also performed a number of smaller experiments targeting particular subsystems. In Table 3, Xen exhibits slower fork, exec, and sh performance because all of these require large numbers of page table updates. Table 4 shows context-switch times between different numbers of processes with different working-set sizes.

34 Xen: Operating System Benchmarks
Table 5 tests mmap latency and page-fault latency; despite two transitions into Xen per page, the overhead is relatively modest. Table 6 shows TCP performance over a Gigabit Ethernet LAN; XenoLinux's page-flipping technique achieves very low overhead.

35 Xen: Concurrent Virtual Machines
The authors also compared the performance of running multiple applications each in its own guest OS against running them all on a single native OS. In Figure 4, as the number of domains increases, Xen's aggregate performance improves. In Figure 5, further increasing the number of domains reduces throughput, which can be attributed to increased context switching and disk head movement.

36 Additional Experimental Results
Performance isolation: domains running "anti-social" processes were executed alongside the benchmarks, and OSDB-IR and SPEC WEB99 were only slightly affected. Scalability: up to 100 VMs were run concurrently, with only a 7.5% loss of throughput compared to native Linux.

37 Outline: Introduction, Overview of Xen, Detailed Design, Evaluation, Xen Today, Conclusion

38 Xen Today Current version: Xen 4.2.1.
It supports hardware virtualization extensions (Intel VT, AMD-V) and SMP in virtualized guest OSes. Supported guest OSes include Windows, Linux, and BSD; supported architectures include x86, x86-64, IA-64, ARM, and PowerPC.

39 Outline: Introduction, Overview of Xen, Detailed Design, Evaluation, Xen Today, Conclusion

40 Conclusion Xen achieves performance close to that of native Linux. It provides an excellent platform for deploying a wide variety of applications, and it provides the necessary protection and performance isolation between domains.

41 Questions?

