Operating System Support for Virtual Machines
Samuel King, George Dunlap, Peter Chen (University of Michigan)
Ashish Gupta
Overview
–Motivation
–Classification of VMs
–Advantages of Type II VMs
–About UMLinux: exploiting Linux capabilities
–How UMLinux works
–The three bottlenecks and their solutions
–Performance results
–Conclusion: modifying the host OS helps!
Two classifications for VMs (1): by the higher-level interface the VMM exposes.
Examples across the spectrum: VM/370, VMWare, Denali, UMLinux, SimOS, Xen, VMWare with guest tools, VAX VMM security kernel, microkernels, JVM.
Two classifications for VMs (2): by the underlying platform.
–Type I VMMs run directly on the hardware: VM/370, VMWare ESX, Disco, Denali, Xen (favor performance)
–Type II VMMs run on a host OS: VMWare Workstation, VirtualPC, SimOS, UMLinux (favor convenience)
UMLinux
The higher-level interface is slightly different from the hardware, so the guest OS needs to be modified:
–Simple device drivers added
–Emulation of certain instructions (iret and in/out)
–Kernel relinked to a different address
–About 17,000 lines of changes
ptrace-based virtualization (see the sketch below):
–Intercepts guest system calls
–Tracks guest kernel/user transitions
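As background for the ptrace bullet above, here is a minimal sketch of ptrace-based system-call interception on x86-64 Linux. It only illustrates the mechanism a user-level VMM process can rely on; it is not UMLinux's actual code, and /bin/true is just a stand-in for the guest-machine process.

```c
/* Minimal sketch of ptrace-based system-call interception (x86-64 Linux).
 * Illustrates the mechanism a user-level VMM relies on; this is NOT
 * UMLinux code, and /bin/true is only a stand-in for the guest. */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);    /* let the parent trace us */
        execl("/bin/true", "true", (char *)NULL); /* stand-in for the guest */
        _exit(1);
    }

    int status;
    waitpid(child, &status, 0);                   /* child stops at exec */
    while (!WIFEXITED(status)) {
        /* Resume the child until it enters or exits a system call. */
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);
        waitpid(child, &status, 0);
        if (WIFSTOPPED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("guest syscall %lld\n", (long long)regs.orig_rax);
        }
    }
    return 0;
}
```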
Advantage of a Type II VM: each guest hardware abstraction maps onto an existing host OS service.
–Virtual CPU: the guest machine process
–Virtual I/O devices: host files and devices
–Virtual interrupts: host signals
–Virtual MMU: mmap and munmap
A small sketch of the signal-based virtual interrupt idea follows.
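The sketch below illustrates the "host signals as virtual interrupts" mapping using the host's interval timer. It is illustrative only (the handler name and the 100 Hz rate are assumptions); a real Type II VMM would redirect the signal into the guest kernel's interrupt handler rather than just counting ticks.

```c
/* Sketch: a host signal standing in for a virtual interrupt (illustrative,
 * not UMLinux code).  SIGALRM from the host's interval timer plays the
 * role of a 100 Hz timer interrupt delivered to the guest-machine process. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void virtual_timer_interrupt(int sig)
{
    (void)sig;
    ticks++;        /* a real VMM would vector into the guest kernel's handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = virtual_timer_interrupt;
    sigaction(SIGALRM, &sa, NULL);

    /* 100 Hz "virtual timer" driven by the host. */
    struct itimerval it = { { 0, 10000 }, { 0, 10000 } };
    setitimer(ITIMER_REAL, &it, NULL);

    while (ticks < 100)
        pause();    /* the "guest" idles until the next virtual interrupt */
    printf("received %d virtual timer interrupts\n", (int)ticks);
    return 0;
}
```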
The problem
Benchmark: compiling the Linux kernel (+510 lines added to the host OS).
Optimization One: System Calls
The problem: lots of context switches between the VMM and the guest machine process.
Solution: run the VMM as a kernel module in the host. This also requires modifying the host OS…
Optimization Two: Memory Protection
The problem: frequent switching between the guest kernel and guest applications.
Guest Kernel to Guest User
Guest user to guest kernel: protections are switched through mmap, munmap and mprotect. Very expensive… (see the sketch below)
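A minimal sketch of why this is expensive, assuming the guest kernel lives in an mmap'd region of the guest-machine process (sizes and function names are illustrative, not UMLinux's code): every kernel-entry/exit pair costs two mprotect system calls.

```c
/* Sketch of the unoptimized protection switch: guest-kernel memory is
 * hidden with mprotect() on every transition to guest user mode and
 * exposed again on the way back.  Sizes and names are illustrative. */
#include <stdio.h>
#include <sys/mman.h>

#define GUEST_KERNEL_SIZE (16u * 1024 * 1024)   /* illustrative size */

static void *guest_kernel;                      /* guest kernel's memory */

static void enter_guest_user(void)
{
    /* Hide the guest kernel from guest application code. */
    mprotect(guest_kernel, GUEST_KERNEL_SIZE, PROT_NONE);
}

static void enter_guest_kernel(void)
{
    /* Make the guest kernel's memory accessible again. */
    mprotect(guest_kernel, GUEST_KERNEL_SIZE, PROT_READ | PROT_WRITE);
}

int main(void)
{
    guest_kernel = mmap(NULL, GUEST_KERNEL_SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Every guest syscall or interrupt costs two protection changes. */
    for (int i = 0; i < 1000; i++) {
        enter_guest_user();
        enter_guest_kernel();
    }
    puts("1000 guest kernel entries -> 2000 mprotect system calls");
    return 0;
}
```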
Host Linux memory management
–x86 paging provides built-in protection of memory pages
–Linux uses page tables for translation and protection
–Segments are used only to switch between privilege levels
–The supervisor bit disallows ring 3 access to certain pages
–The idea: segment bounds are a relatively unused feature, so they can be repurposed
Solution: change the segment bounds for each mode (a hedged sketch follows).
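As a rough illustration of what "segment bounds" means here, the sketch below installs an LDT data descriptor with a reduced limit using Linux's modify_ldt call. The limit value is an assumption, the paper's real change lives inside the host kernel (switching the bounds on each guest kernel/user transition) rather than in a user-level call, and segment limits are only enforced when running 32-bit code.

```c
/* Sketch: installing an x86 data segment with a reduced limit via
 * modify_ldt, to show what "changing segment bounds" means.  The limit
 * value is illustrative; the paper's actual change is made inside the
 * host kernel on each guest kernel/user transition. */
#include <asm/ldt.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    struct user_desc desc;
    memset(&desc, 0, sizeof(desc));
    desc.entry_number   = 0;
    desc.base_addr      = 0;
    desc.limit          = 0x6ffff;   /* in pages: the segment ends below the
                                        region being walled off (illustrative) */
    desc.limit_in_pages = 1;
    desc.seg_32bit      = 1;
    desc.contents       = 0;         /* ordinary data segment */
    desc.useable        = 1;

    /* modify_ldt(1, ...) writes the descriptor into this process's LDT. */
    if (syscall(SYS_modify_ldt, 1, &desc, sizeof(desc)) != 0)
        perror("modify_ldt");
    else
        puts("installed a data segment with a reduced limit");
    return 0;
}
```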
Optimization Three: Context Switching
The problem with context switching:
–The guest kernel has to remap the user process's virtual memory onto the "virtual" physical memory
–This generates a large number of mmaps, which is costly
The solution:
–Allow one process to maintain multiple address spaces
–Each address space has a different set of page tables
–A new system call, switchguest, is invoked whenever the guest context-switches (hypothetical sketch below)
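These slides do not spell out switchguest's interface, so the sketch below is hypothetical: the syscall number and signature are invented, and it only shows where the call would sit in the guest kernel's context-switch path.

```c
/* Hypothetical sketch of the switchguest host syscall proposed by the
 * paper.  The syscall number and signature are invented for illustration;
 * the real call only exists on a patched host kernel. */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define __NR_switchguest 9999           /* made-up number: ENOSYS on stock kernels */

/* Ask the host to switch this process onto the page tables that back
 * guest address space `guest_asid`. */
static long switchguest(int guest_asid)
{
    return syscall(__NR_switchguest, guest_asid);
}

/* Called by the guest kernel's scheduler (inside the guest-machine
 * process) whenever it switches between guest processes. */
static void guest_context_switch(int next_asid)
{
    /* One syscall replaces the long series of mmap/munmap calls that
     * would otherwise remap the "virtual" physical memory. */
    if (switchguest(next_asid) < 0)
        perror("switchguest (expected to fail on an unpatched host)");
}

int main(void)
{
    guest_context_switch(2);
    return 0;
}
```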
Multiple page table sets (figure): the host OS keeps a page-table pointer per guest address space (the guest OS plus guest processes a and b), and the switchguest syscall selects which set is active.
Conclusion: a Type II VMM CAN be as fast as a Type I VMM by modifying the host OS. Is the title of the paper justified?
Virtualizing I/O Devices on VMware Workstation's Hosted VMM
Jeremy Sugerman, Ganesh Venkitachalam and Beng-Hong Lim (VMware, Inc.)
Introduction
VM definition from IBM:
–A "virtual machine" is a fully protected and isolated copy of the underlying physical machine's hardware.
The choice of a hosted architecture:
–Relies upon the host OS for device support
Primary advantages:
–Copes with the diversity of PC hardware
–Compatible with pre-existing PC software
–Near-native performance for CPU-intensive workloads
The major tradeoff: I/O performance degradation
–I/O emulation is done in the host world
–Requires switching between the host world and the VMM world
How I/O works (figure): the VMApp (application portion) and the VMDriver run in the host world, while the VMM (privileged portion) runs the guest (CPU virtualization). Guest I/O requests trap to the VMM (I/O virtualization), are forwarded through the VMDriver to the VMApp, and hardware interrupts taken by the host are reasserted to the guest.
I/O Virtualization
–The VMM intercepts all I/O operations, usually privileged IN and OUT instructions
–Each operation is emulated either in the VMM or in the VMApp (see the sketch below)
–Host OS drivers understand the semantics of port I/O; the VMM doesn't, so I/O to physical hardware must be handled in the host OS
–World switching adds a lot of overhead
–Which devices get affected? The CPU gets saturated before I/O does…
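A simplified sketch of that split, with everything (port numbers, register roles, function names) illustrative rather than taken from VMware: accesses the VMM can model itself are handled in place, while accesses that need the host's device drivers trigger a world switch to the VMApp.

```c
/* Sketch of port-I/O dispatch in a hosted VMM.  Port numbers, register
 * roles and function names are illustrative; the point is only the split
 * between "emulate in the VMM" and "forward to the VMApp in the host". */
#include <stdint.h>
#include <stdio.h>

#define NIC_PORT_BASE 0x1000            /* illustrative virtual NIC port range */
#define NIC_PORT_LAST 0x100f
#define NIC_PORT_TX   0x1000            /* the port whose OUT really sends a packet */

static void vmapp_emulate_out(uint16_t port, uint32_t val)
{
    /* World switch: the VMApp uses the host OS (e.g. the VMNet driver)
     * to perform the real I/O. */
    printf("world switch: OUT 0x%x to port 0x%x handled by the VMApp\n", val, port);
}

static void vmm_emulate_out(uint16_t port, uint32_t val)
{
    /* No world switch: the VMM updates the virtual device state itself. */
    printf("VMM emulates OUT 0x%x to port 0x%x\n", val, port);
}

/* Entry point when the guest executes a privileged OUT instruction. */
static void handle_guest_out(uint16_t port, uint32_t val)
{
    if (port >= NIC_PORT_BASE && port <= NIC_PORT_LAST && port != NIC_PORT_TX)
        vmm_emulate_out(port, val);     /* cheap path */
    else
        vmapp_emulate_out(port, val);   /* expensive path */
}

int main(void)
{
    handle_guest_out(0x1004, 0x42);     /* stays in the VMM */
    handle_guest_out(0x1000, 0x42);     /* crosses to the host world */
    return 0;
}
```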
The goal of this paper (figure): shift the bottleneck so that I/O, rather than the CPU, is saturated first.
The Network Card
–The virtual NIC appears as a full-fledged PCI Ethernet controller with its own MAC address
–Connectivity is implemented by a VMNet driver loaded in the host OS
–The virtual NIC itself is a combination of code in the VMM and the VMApp
–It is presented to the guest through virtual I/O ports and virtual IRQs
Sending a packet (figure): the path crosses between the host world and the VMM world.
Receiving a packet (figure): the path crosses between the VMM world and the host world.
Experimental setup: nettest throughput tests.
Time profiling: where the extra work goes
–Switching worlds for every I/O instruction: the most expensive part
–An I/O interrupt for every packet sent and received: the VMM, host and guest interrupt handlers are all run!
–Packet transmission goes through two device drivers
–A packet copy on transmit
Optimization One
Primary aim: reduce world switches.
Idea: only a third of the I/O instructions actually trigger packet transmission.
–Emulate the rest in the VMM
–The Lance NIC address-port I/O has memory semantics, so it can be handled like a plain MOV (see the sketch below)
–This strips away several layers of virtualization
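The sketch below shows what "memory semantics" buys: the address port simply holds the last value written, so the VMM can keep a shadow copy and satisfy both the OUT and a later IN without leaving the VMM. The register name and layout are illustrative, not the real Lance programming model.

```c
/* Sketch: "memory semantics" for the Lance address port.  The port just
 * remembers the last value written, so the VMM can keep a shadow copy and
 * answer reads itself, with no world switch.  Names are illustrative. */
#include <stdint.h>
#include <stdio.h>

static uint32_t lance_rap;              /* shadow of the register address port */

static void guest_out_rap(uint32_t val) /* guest executes OUT to the address port */
{
    lance_rap = val;                    /* handled entirely inside the VMM */
}

static uint32_t guest_in_rap(void)      /* guest executes IN from the address port */
{
    return lance_rap;                   /* no VMApp call, no world switch */
}

int main(void)
{
    guest_out_rap(3);                   /* guest selects Lance register 3 */
    printf("guest reads back %u\n", guest_in_rap());
    return 0;
}
```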
Optimization Two
The problem: a very high interrupt rate for data transfer.
When does a world switch occur?
–A packet is to be transmitted
–A real interrupt occurs, e.g. the timer interrupt
The idea: piggyback the packet interrupts on the real interrupts (sketch below)
–Queue the packets in a ring buffer
–Transmit all buffered packets on the next switch
Works well for I/O-intensive workloads.
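A sketch of that queuing idea under illustrative names and sizes (nothing here is VMware's actual code): the guest's transmit request is absorbed in the VMM, and the queued packets are flushed on the next world switch that happens anyway.

```c
/* Sketch of piggybacking packet transmission on world switches that
 * happen anyway.  The ring buffer, its size and all names are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 64                     /* illustrative queue depth */

static uint32_t ring[RING_SIZE];         /* packets waiting to be sent */
static unsigned head, tail;

/* Guest hands the virtual NIC a packet: queue it in the VMM instead of
 * switching worlds immediately. */
static void guest_transmit(uint32_t packet_id)
{
    ring[head % RING_SIZE] = packet_id;
    head++;
}

/* Called when a world switch happens for some other reason (for example
 * a real timer interrupt): flush everything queued since the last switch. */
static void on_world_switch(void)
{
    while (tail != head) {
        printf("VMApp sends packet %u through the host\n", ring[tail % RING_SIZE]);
        tail++;
    }
}

int main(void)
{
    guest_transmit(1);
    guest_transmit(2);
    on_world_switch();                   /* both packets go out in one batch */
    return 0;
}
```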
Packet transmit piggybacked on a real interrupt (figure).
Optimization Three
–Reduce host system calls for packet sends and receives
–Idea: instead of select(), use a shared bit vector to indicate packet availability (sketch below)
–Eliminates the costly select()?
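A sketch of the shared bit-vector idea, with two processes standing in for the VMApp and the VMNet driver (the layout and names are assumptions): one side sets a per-device bit in shared memory, the other tests it with an ordinary load instead of a select() system call.

```c
/* Sketch of the shared bit-vector idea: one process (standing in for the
 * VMNet driver) sets a per-device bit in shared memory, the other
 * (standing in for the VMApp) tests the word with a plain load instead of
 * calling select().  Layout and names are assumptions. */
#include <stdatomic.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One word of memory shared between the two processes. */
    atomic_uint *pending = mmap(NULL, sizeof(*pending),
                                PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    atomic_init(pending, 0);

    if (fork() == 0) {                   /* "driver": a packet arrived on device 0 */
        usleep(1000);
        atomic_fetch_or(pending, 1u << 0);
        _exit(0);
    }

    /* "VMApp": test the bit vector; no select() system call needed. */
    while (atomic_load(pending) == 0)
        ;                                /* spin; a real VMM checks on each world switch */
    printf("packets pending on devices: 0x%x\n", atomic_load(pending));
    wait(NULL);
    return 0;
}
```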
Summary of the three optimizations on a 733 MHz host (figure): native vs. VM version 2.0 vs. the optimized VM; with the optimizations the guest OS idles.
Summary of the three optimizations on a 350 MHz host (figure): native vs. VM version 2.0 vs. the optimized VM.
Most effective optimization?
Emulating IN and OUT to the Lance I/O ports directly in the VMM.
Why?
–It eliminates lots of world switches
–The I/O is changed to a MOV instruction
Further avenues for optimization?
Modify the guest OS:
–Substitute expensive-to-virtualize instructions, e.g. MMU instructions. Example??
–Import some OS functionality into the VMM
–Tradeoff against being able to use off-the-shelf OSes
An idealized virtual NIC (example??):
–Only one I/O operation per packet transmit instead of 12!
–Cost: custom device drivers for every OS
–VMWare server version
Further avenues for optimization?
Modify the host OS (example??):
–Change the Linux networking stack (poor buffer management)
–Cost: requires co-operation from OS vendors
Direct control of hardware: VMWare ESX
–Addresses fundamental limitations of the hosted architecture
–Idea: let the VMM drive I/O directly, with no world switching
–Cost??