KVM/ARM: The Design and Implementation of the Linux ARM Hypervisor Christoffer Dall Department of Computer Science Columbia University cdall@cs.columbia.edu.


1 KVM/ARM: The Design and Implementation of the Linux ARM Hypervisor. Christoffer Dall, Department of Computer Science, Columbia University; Jason Nieh, Department of Computer Science, Columbia University. 김해천

2 ARM ~1.2 billion ~300 million

3 ARM: ARM servers, ARM network infrastructure

4 Key Challenges
The ARM Virtualization Extensions are not the same as Intel VT-x, and there is no PC standard on ARM.

5 Hypervisor Layering in the software stack
Upper part (bare-metal): HyperOne, Xen, PikeOS, OKL4, Hyper-V, VMware ESX
Lower part (hosted): KVM, VirtualBox, Virtual PC, Parallels, BlueStacks

6 ARM Virtualization Extensions
Provides virtualization in 4 key areas: CPU Virtualization Memory Virtualization Interrupt Virtualization Timer Virtualization

7 ARM Virtualization Extensions: CPU Virtualization
Hyp mode was introduced as a trap-and-emulate mechanism to support virtualization: sensitive operations trap from the guest into Hyp mode. To reduce virtualization overhead, system calls and page faults still trap directly from guest user mode to the guest kernel, without involving Hyp mode.

8 ARM Virtualization Extensions: Memory Virtualization
Hardware support to virtualize physical memory: Stage-2 page tables

9 ARM Virtualization Extensions: Interrupt Virtualization
GIC: Generic Interrupt Controller. There is one distributor in a system, but each CPU core has a CPU interface. The distributor is used to configure the GIC; the CPU interface is used to acknowledge (ACK) interrupts and to signal End-Of-Interrupt (EOI).
Interrupts can be configured to trap to either Hyp or kernel mode:
Trap to kernel: avoids the overhead of going through Hyp mode
Trap to Hyp: the hypervisor retains control, but at a high cost
GIC v2.0 includes hardware virtualization support (VGIC): a virtual CPU interface and list registers.
PPI: Private Peripheral Interrupt; SPI: Shared Peripheral Interrupt; SGI: Software Generated Interrupt

10 ARM Virtualization Extensions: Interrupt Virtualization
Generic Interrupt Controller: trapping interrupts in Hyp mode. 1) A hardware interrupt arrives; 2) it traps to the hypervisor; 3) the hypervisor emulates a virtual interrupt for the VM by signaling it. Cumbersome and expensive.

11 ARM Virtualization Extensions: Interrupt Virtualization
Generic Interrupt Controller (v2.0) with virtual GIC: trapping interrupts in kernel mode. 1) A hardware interrupt arrives; 2) it traps to kernel mode without a round trip through the hypervisor. Good.

12 ARM Virtualization Extensions: Timer Virtualization
ARM defines the Generic Timer Architecture. The timers used by the hypervisor cannot be directly configured and manipulated by guest OSes; such timer accesses from a guest OS would need to trap to Hyp mode, incurring additional overhead. ARM therefore provides, alongside the per-CPU physical counter and timers (accessible from Hyp mode), a virtual counter and a virtual timer per CPU that are accessible directly from VMs.

13 Hypervisor Architecture
KVM/ARM builds on KVM and leverages existing infrastructure in the Linux kernel. Bare-metal hypervisors (e.g., Xen) vs. KVM/ARM: ARM platform designs are non-standard, differing across manufacturers (Samsung Exynos, Qualcomm Snapdragon, Apple A series). But Linux is supported across almost all ARM platforms, so KVM/ARM inherits that support by integrating with Linux. Privilege levels: PL0 (User), PL1 (Kernel: the Linux kernel and KVM), PL2 (Hyp).

14 Hypervisor Architecture: Split-mode Virtualization
Running KVM/ARM in Hyp mode implies running the Linux kernel in Hyp mode. This is problematic: low-level architecture-dependent code in Linux is written to work in kernel mode, and running the entire kernel in Hyp mode would adversely affect native performance.

15 Hypervisor Architecture: Split-mode Virtualization
KVM/ARM introduces split-mode virtualization: it runs across different privileged CPU modes to take advantage of the benefits offered by each. It has two components, the lowvisor and the highvisor.
The lowvisor takes advantage of the hardware virtualization support available in Hyp mode:
Sets up the correct execution context by configuring the hardware
Enforces protection and isolation between different execution contexts
Switches from a VM execution context to the host execution context, and vice versa
Provides a virtualization trap handler
The highvisor can directly leverage existing Linux functionality: the scheduler, kernel data structures, locking, and memory allocation functions. The highvisor runs with the OS kernel in kernel mode and handles high-level functionality; the lowvisor runs in Hyp mode and handles low-level functionality.

16 Hypervisor Architecture: Split-mode Virtualization
Switching between a VM and the highvisor: running the VM and trapping out of it both pass through the lowvisor in Hyp mode.

17 Hypervisor Architecture: Split-mode Virtualization
Switching between a VM and the highvisor means trapping from the VM into the lowvisor and then into kernel mode, where the highvisor is reached by a function call, with the reverse path on the way back. As a result, split-mode virtualization incurs a double trap cost in switching to and from the highvisor.

18 Hypervisor Architecture: CPU Virtualization
Registers are context-switched during the world switch: software in the VM must have persistent access to the same register state as software running on the physical CPU, and the physical hardware state associated with the hypervisor and its host kernel must be persistent across runs of the VM. The virtualized CPU is controlled by the hypervisor, which performs trap-and-emulate on sensitive instructions and on accesses to hardware state.

19 Hypervisor Architecture: Memory Virtualization
KVM/ARM provides memory virtualization by enabling Stage-2 translation when running in a VM, completely transparently to the VM. The highvisor manages the Stage-2 translation page tables to allow access only to memory allocated for the VM; other accesses cause Stage-2 page faults, which trap to the hypervisor. Stage-2 translation is disabled when running in the highvisor and lowvisor.

20 Hypervisor Architecture: Memory Virtualization
Configuring page tables is high-level functionality, so the highvisor, in kernel mode, configures the Stage-2 page tables.

21 Hypervisor Architecture: Memory Virtualization
The lowvisor has hardware access because it runs in Hyp mode, so it enables Stage-2 translation.

22 Hypervisor Architecture: Memory Virtualization
On a Stage-2 page fault, the lowvisor disables Stage-2 translation and returns to kernel mode, where the highvisor resolves the fault using get_user_pages().

23 Hypervisor Architecture: Interrupt Virtualization
Whether running in a VM or in the host and highvisor, all hardware interrupt processing is done in the host using Linux's existing interrupt handling functionality. However, VMs must receive notifications in the form of virtual interrupts from emulated devices, for which KVM/ARM uses the VGIC. Multicore guest OSes must also be able to send virtual IPIs to one another.

24 Hypervisor Architecture: Timer Virtualization
KVM/ARM leverages ARM's hardware virtualization features of the generic timer. Unfortunately, due to architectural limitations, virtual timers cannot directly raise virtual interrupts; they always raise hardware interrupts, which trap to the hypervisor. KVM/ARM detects when a virtual timer expires, injects a corresponding virtual interrupt into the VM, and performs all hardware ACK and EOI operations itself.

25 Experimental Setup

26 Experimental Setup

27 Experimental Results
Table 3 presents the costs of virtualization using KVM/ARM on ARM and KVM x86 on x86, measured in cycles. Saving and restoring VGIC state is quite expensive on ARM, whereas x86 provides hardware support for it.
Hypercall: the cost of two world switches
Trap: the cost of switching the hardware mode from the VM into the CPU mode where the hypervisor runs

28 Experimental Results Figures 3 and 4 show normalized performance for running lmbench in a VM versus on the host. lmbench is a suite of simple, portable ANSI C microbenchmarks for UNIX/POSIX; in general, it measures two key features, latency and bandwidth, and is intended to give system developers insight into the basic costs of key operations. (UP: uniprocessor; SMP: symmetric multiprocessing.) KVM/ARM has less overhead than KVM x86 on fork & exec; the remaining overhead is caused by updating the run-queue clock. Repeatedly sending an IPI costs more on KVM x86 than on KVM/ARM, because it requires trapping to the hypervisor on x86 but not on ARM.

29 Experimental Results Figures 5 and 6 show normalized performance for running application workloads. The more mature KVM x86 system has significantly higher virtualization overheads; KVM/ARM's split-mode virtualization design allows it to leverage ARM hardware support with performance comparable to a traditional hypervisor.

30 Experimental Results Figure 7 shows normalized power consumption under virtualization. Like the ARM part, the MacBook Air's i7 is one of Intel's more power-optimized processors. Both workloads are not CPU bound, and power consumption is not significantly affected by the virtualization layer.

31 http://www.linux-kongress.org/2010/slides/KVM-Architecture-LK2010
KVM/ARM: Experiences Building the Linux ARM Hypervisor (esc2014chhypforarmv.pdf)

32

33 VIRQ and ACK flow through the virtual CPU interface

34

