
1 Differentiated I/O services in virtualized environments. Tyler Harter, Salini SK & Anand Krishnamurthy

2 Overview: Provide differentiated I/O services for applications running in guest operating systems in virtual machines. Applications in virtual machines tag their I/O requests, and the hypervisor's I/O scheduler uses these tags to provide differentiated quality of I/O service.

3 Motivation: Varied applications with different I/O requirements are hosted in clouds. I/O scheduling that is agnostic of the semantics of the request is not optimal.

4 Motivation: [diagram: VM 1, VM 2, and VM 3 running on a hypervisor]

5 Motivation: [diagram: VM 2 and VM 3 running on the hypervisor]

6 Motivation: We want high- and low-priority processes to correctly receive differentiated service, both within a VM and between VMs. Can my webserver/DHT log pusher's I/O be served differently from my webserver/DHT's I/O?

7 Existing work & Problems: VMware's ESX server offers Storage I/O Control (SIOC), which provides I/O prioritization for virtual machines that access a shared storage pool. But it supports prioritization only at host granularity!

8 Existing work & Problems: Xen's credit scheduler also works at the domain level. Linux's CFQ I/O scheduler supports I/O prioritization, so priorities can be used in both the guest's and the hypervisor's I/O schedulers.

9 Original Architecture: [diagram: high- and low-priority applications in the guest VMs issue syscalls through the guest I/O scheduler (e.g., CFQ) to a QEMU virtual SCSI disk; on the host, the resulting syscalls pass through the host I/O scheduler (e.g., CFQ)]

10 Original Architecture

11 Problem 1: low- and high-priority requests may get the same service

12 Problem 2: does not utilize host caches

13 Existing work & Problems: Xen's credit scheduler also works at the domain level. Linux's CFQ I/O scheduler supports I/O prioritization, so priorities can be used in both the guest's and the hypervisor's I/O schedulers. The current state of the art does not provide differentiated services at guest-application-level granularity.

14 Solution: Tag I/O and prioritize in the hypervisor.

15 Outline: KVM/Qemu, a brief intro; KVM/Qemu I/O stack; multi-level I/O tagging; I/O scheduling algorithms; evaluation; summary.

16 KVM/Qemu, a brief intro: [diagram: hardware; Linux standard kernel with KVM as the hypervisor] The KVM module has been part of the Linux kernel since version 2.6. Linux has all the mechanisms a VMM needs to operate several VMs. KVM has three modes: kernel, user, and guest. Kernel mode switches into guest mode and handles exits due to I/O operations; user mode performs I/O when the guest needs to access devices; guest mode executes guest code, which is the guest OS except for I/O. KVM relies on a virtualization-capable CPU with either Intel VT or AMD SVM extensions.


18 KVM/Qemu, a brief intro: [diagram: hardware; Linux standard kernel with KVM as the hypervisor] Each virtual machine is a user-space process.

19 KVM/Qemu, a brief intro: [diagram: hardware; Linux standard kernel with KVM as the hypervisor; libvirt and other user-space processes run alongside the VMs]

20 KVM/Qemu I/O stack: [diagram: application in guest OS → system-call layer (read, write, stat, …) → VFS → file system → buffer cache → block layer → SCSI/ATA driver] The application issues an I/O-related system call (e.g., read(), write(), stat()) within a user-space context of the virtual machine. This system call leads to an I/O request being submitted from within the kernel space of the VM. The I/O request eventually reaches a device driver, either an ATA-compliant (IDE) or a SCSI driver.
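
As a concrete (if trivial) illustration, the guest-side call that kicks off this whole path is just an ordinary POSIX read; the file path below is a placeholder, and nothing here is specific to the paper's modifications.

    /* A plain guest-side read: this one call fans out into the VFS -> file
     * system -> buffer cache -> block -> SCSI/ATA path sketched above. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = open("/var/log/app.log", O_RDONLY);   /* placeholder path */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof(buf));        /* enters guest kernel space */
        if (n < 0)
            perror("read");
        close(fd);
        return 0;
    }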

21 KVM/Qemu I/O stack: [same diagram] The device driver issues privileged instructions to read/write the memory regions exported over PCI by the corresponding device.

22 KVM/Qemu I/O stack: [diagram: hardware; Linux standard kernel with KVM as the hypervisor; QEMU emulator] These instructions trigger VM exits, which are handled by the core KVM module within the host's kernel-space context. The privileged I/O-related instructions are passed by the hypervisor to the QEMU machine emulator; a VM exit takes place for each of the privileged instructions resulting from the original I/O request in the VM.

23 KVM/Qemu I/O stack: [same diagram] These instructions are then emulated by device-controller emulation modules within QEMU (either as ATA or as SCSI commands). QEMU generates block-access I/O requests in a special block-device emulation module, so the original I/O request ultimately generates I/O requests to the kernel space of the host. Upon completion of the system calls, QEMU "injects" an interrupt into the VM that originally issued the I/O request.

24 Multi-level I/O tagging modifications

25 Modification 1: pass priorities via syscalls
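
The slides do not reproduce the new syscall interface, so the sketch below only illustrates the general idea of per-process I/O tagging from inside the guest, using Linux's existing ioprio_set() syscall as a stand-in; the calls actually added in this work may look different.

    /*
     * Illustrative only: tag the calling process's I/O priority inside the
     * guest with the stock Linux ioprio_set() syscall. The paper's modified
     * syscalls are not shown in the slides, so this is an assumed analogue.
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Values from linux/ioprio.h, repeated here in case the header is absent. */
    #define IOPRIO_CLASS_SHIFT  13
    #define IOPRIO_PRIO_VALUE(cls, data) (((cls) << IOPRIO_CLASS_SHIFT) | (data))
    #define IOPRIO_CLASS_BE     2   /* best-effort class, levels 0 (high) .. 7 (low) */
    #define IOPRIO_WHO_PROCESS  1

    static int set_io_priority(int level)
    {
        /* pid 0 means "the calling process". */
        return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                       IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, level));
    }

    int main(void)
    {
        if (set_io_priority(0) != 0)   /* mark subsequent I/O as high priority */
            perror("ioprio_set");
        /* ... read()/write() calls issued from here on carry the tag ... */
        return 0;
    }

In this paper's design the tag is presumably not consumed by the guest scheduler (NOOP+, Modification 2) but carried down so the host scheduler can act on it, which is what Modifications 3 and 4 arrange.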

26 Modification 2: NOOP+ at guest I/O scheduler

27 Modification 3: extend SCSI protocol with prio
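
How the tag crosses the virtual SCSI boundary is not spelled out in the transcript; purely as an assumed illustration, a per-command structure on the emulation path could carry an extra prio field copied from the guest tag, as in the hypothetical fragment below (struct and field names invented).

    /*
     * Hypothetical sketch only: a virtual SCSI command carrying a guest-assigned
     * priority tag alongside the usual CDB, so QEMU's SCSI emulation can hand
     * the tag to the host-side scheduler. Names are invented for illustration.
     */
    #include <stdint.h>

    struct vscsi_cmd {
        uint8_t  cdb[16];   /* standard SCSI command descriptor block          */
        uint32_t lun;       /* logical unit addressed by the command           */
        uint8_t  prio;      /* added field: priority tag propagated from guest */
    };

    /* On the emulation path the tag is simply copied through, so the host
     * scheduler can classify the resulting block request. */
    static uint8_t vscsi_cmd_prio(const struct vscsi_cmd *c)
    {
        return c->prio;
    }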


29 Modification 4: share-based prio sched in host

30 Modification 5: use new calls in benchmarks

31 Scheduler algorithm - Stride

32 Scheduler algorithm (contd.)
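
The transcript gives only the algorithm's name, not its details; the sketch below is a generic stride-scheduling loop (in the style of Waldspurger's stride scheduling), assuming one dispatch queue per priority tag with a share value, and is not the authors' exact host scheduler.

    /*
     * Generic stride-scheduling sketch, assuming one I/O dispatch queue per
     * (VM, priority) tag. Queue names, share values, and structure are
     * illustrative, not the scheduler actually implemented in this work.
     */
    #include <stdio.h>

    #define STRIDE1  (1 << 20)          /* large constant; stride = STRIDE1 / shares */
    #define NQUEUES  3

    struct io_queue {
        const char   *name;
        unsigned int  shares;           /* derived from the request's priority tag */
        unsigned long pass;             /* virtual time consumed so far            */
        unsigned long stride;           /* pass increment per dispatched request   */
    };

    /* Dispatch one request from the queue with the smallest pass value (the one
     * that has received the least service relative to its shares), then advance
     * its pass by its stride. */
    static struct io_queue *dispatch_one(struct io_queue q[], int n)
    {
        struct io_queue *min = &q[0];
        for (int i = 1; i < n; i++)
            if (q[i].pass < min->pass)
                min = &q[i];
        min->pass += min->stride;
        return min;
    }

    int main(void)
    {
        struct io_queue q[NQUEUES] = {
            { "vm1-high", 800, 0, 0 },
            { "vm1-low",  100, 0, 0 },
            { "vm2-high", 400, 0, 0 },
        };
        for (int i = 0; i < NQUEUES; i++)
            q[i].stride = STRIDE1 / q[i].shares;

        /* Over many dispatches, service converges to the share ratio 8 : 1 : 4. */
        for (int i = 0; i < 13; i++)
            printf("dispatch %2d -> %s\n", i, dispatch_one(q, NQUEUES)->name);
        return 0;
    }

Lower shares yield a larger stride, so a low-priority queue is visited proportionally less often; this is the share-based behavior the host-side scheduler of Modification 4 is meant to provide.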

33 Evaluation: Tested on HDD and SSD. Configuration:
Guest RAM size: 1 GB
Host RAM size: 8 GB
Hard disk: 7200 RPM
SSD: 35000 IOPS Rd, IOPS Wr
Guest OS: Ubuntu Server, Linux kernel 3.2
Host OS: Kubuntu, Linux kernel 3.2
Filesystem (Host/Guest): Ext4
Virtual disk image format: qcow2

34 Results. Metrics: throughput, latency. Benchmarks: Filebench, Sysbench, Voldemort (distributed key-value store).

35 Shares vs Throughput for different workloads: HDD

36 Shares vs Latency for different workloads: HDD. Priorities are better respected if most of the read requests hit the disk.

37 Effective Throughput for various dispatch numbers: HDD. Priorities are respected only when the disk's dispatch number is lower than the number of read requests generated by the system at a time. Downside: the disk's dispatch number is directly proportional to the effective throughput.

38 Shares vs Throughput for different workloads: SSD

39 Shares vs Latency for different workloads: SSD. Priorities on SSDs are respected only under heavy load, since SSDs are faster.

40 Comparison between different schedulers: Only Noop+LKMS respects priority! (Has to be, since we did it.)

41 Results: [table: hard disk and flash compared across Webserver, Mailserver, Random Reads, Sequential Reads, and Voldemort DHT Reads workloads]

42 Summary: It works! Preferential service is possible only when the disk's dispatch number is lower than the number of read requests generated by the system at a time, but a lower dispatch number reduces the effective throughput of the storage. On SSDs, preferential service is possible only under heavy load. Scheduling at the lowermost layer yields better differentiated services.

43 Future work: Get it working for writes. Run evaluations on VMware ESX SIOC and compare with our results.
