1 OPEN-SOURCE SOFTWARE TOOLKITS FOR CREATING AND MANAGING DISTRIBUTED HETEROGENEOUS CLOUD INFRASTRUCTURES A.V. Pyarn Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics apyarn@gmail.com

2 Agenda  Aim of paper  Virtualization  Hypervisor architecture  IaaS and cloud toolkits  Requirements  Hypervisor toolstacks  Cloud platforms  Comparison

3 Aim of paper  Show use cases of cloud toolkits as virtual polygons (testbeds) for educational purposes.  Compare design aspects, architectural features, functional capabilities, installation how-tos, and extension and support capabilities: the Xen and KVM toolstacks, and the OpenNebula and OpenStack cloud toolkits.

4 Virtualization

5 Types of Virtualization  Emulation: fully emulate the underlying hardware architecture (Oracle VirtualBox, VMware Player/Server)  Full virtualization: simulate the base hardware architecture (VMware ESXi, vSphere, Hyper-V, KVM, Xen)  Paravirtualization: abstract the base architecture (Xen)  OS-level virtualization: shared kernel (and architecture), separate user spaces (OpenVZ)

6 Hypervisor role  Thin, privileged abstraction layer between the hardware and operating systems  Defines the virtual machine that guest domains see instead of physical hardware: −Grants portions of physical resources to each guest −Exports simplified devices to guests −Enforces isolation among guests

7 Hypervisor architecture  Toolstack = standard Linux tools + specific third-party toolkits and API daemons: libvirt, xend, XAPI, etc.
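For illustration, a few toolstack operations through libvirt's virsh CLI (guest name hypothetical):

    virsh list --all     # all defined guests and their states
    virsh start vm1      # boot the guest
    virsh dominfo vm1    # vCPU/memory allocation as seen by the hypervisor
    virsh shutdown vm1   # request a clean guest shutdown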

8 IaaS  IaaS = virtualization (hypervisor features) + "Amazon-style" self-service portal and convenient GUI management + billing + multitenancy  Hypervisor toolstacks and APIs vs. third-party open-source cloud toolkits (OpenNebula, the OpenStack "datacenter virtualization" platform, etc.). What should we use? It depends on the requirements.

9 Requirements For educational polygons:  Open-source software: hypervisor and management subsystem  Independent NFS or iSCSI storage for virtual disks and images  Easy installation and support  GUI: management center; optionally a self-service portal, monitoring and accounting tools

10 Cloud platforms Cloud platforms do not include a hypervisor themselves; they play only a management role. [Architecture diagram: management server(s) running a scheduler, authorization, monitoring, a web interface and a DB, connected over SSH (agentless) to worker nodes (hardware + OS + hypervisor) and to shared NFS/iSCSI storage.]

11 KVM-QEMU

12 SMP hosts  SMP guests (as of kvm-61, max 16 CPUs supported)  Live migration of guests from one host to another  Emulated hardware:
Video card: Cirrus CLGD 5446 PCI VGA card, or dummy VGA card with Bochs VESA extensions
PCI: i440FX host PCI bridge and PIIX3 PCI-to-ISA bridge
Input devices: PS/2 mouse and keyboard
Sound card: Sound Blaster 16, ENSONIQ AudioPCI ES1370, Gravis Ultrasound GF1, CS4231A-compatible
Ethernet network card: AMD Am79C970A (Am7990), E1000 (Intel 82540EM, 82573L, 82544GC), NE2000, Realtek RTL8139
Watchdog timer: Intel 6300ESB or IB700
RAM: 50 MB - 32 TB
CPUs: 1-16
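A sketch of how these emulated devices are selected on the QEMU/KVM command line (the disk image path is hypothetical; device model names as in the list above):

    # 1 GB RAM, 2 vCPUs, Cirrus VGA, E1000 NIC, SB16 sound, IB700 watchdog
    qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
        -vga cirrus \
        -net nic,model=e1000 -net user \
        -soundhw sb16 \
        -watchdog ib700 \
        -hda /var/lib/images/guest01.img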

13 KVM-QEMU  Ease of use +  Shared storage +  Live migration +  Management GUI + (Virtual Machine Manager)
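For example, a live migration with virsh over shared storage (guest and destination names hypothetical):

    # both hosts must mount the same NFS/iSCSI image store
    virsh migrate --live vm1 qemu+ssh://node2.example.org/system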

14 XEN

15 Virtualization in Xen Xen can scale to >255 physical CPUs, 128 vCPUs per PV guest, 1 TB of RAM per host, up to 1 TB of RAM per HVM guest, and 512 GB of RAM per PV guest.
Paravirtualization (PV): uses a modified Linux kernel; the guest boots via Dom0's pygrub or Dom0's kernel; front-end and back-end virtual device model; cannot run Windows; the guest "knows" it's a VM and cooperates with the hypervisor.
Hardware-assisted full virtualization (HVM): uses the same, normal OS kernel; the guest contains its own grub and kernel; normal device drivers; can run Windows; the guest doesn't "know" it's a VM, so the hardware virtualization extensions assist the hypervisor.
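A sketch of the two guest configuration styles in xm/xl config syntax (names, paths, and sizes hypothetical):

    # PV guest: modified kernel, boots via pygrub, PV devices (xvda)
    name       = "pv-guest"
    bootloader = "/usr/bin/pygrub"
    memory     = 1024
    disk       = ["file:/var/xen/pv-guest.img,xvda,w"]
    vif        = ["bridge=xenbr0"]

    # HVM guest: unmodified kernel (e.g. Windows), emulated devices
    name    = "hvm-guest"
    builder = "hvm"
    memory  = 1024
    disk    = ["file:/var/xen/hvm-guest.img,hda,w"]
    vif     = ["bridge=xenbr0,type=ioemu"]
    boot    = "c"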

16 Xen – Cold Relocation Motivation: moving a guest between hosts without shared storage, or between hosts with different architectures or hypervisor versions. Process: 1. Shut down the guest on the source host. 2. Move the guest from one Domain0's file system to another's by manually copying the guest's disk image and configuration files. 3. Start the guest on the destination host.

17 Xen – Cold Relocation Benefits: hardware maintenance with less downtime; shared storage not required; the Domain0s can differ; easy to make multiple copies and duplicates. Limitations: a more manual process; the service is down for the duration of the copy.
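A minimal sketch of the manual process (hostnames and paths hypothetical):

    # on the source host: stop the guest, copy its disk image and config
    xl shutdown guest01
    scp /var/xen/guest01.img root@dest-host:/var/xen/
    scp /etc/xen/guest01.cfg root@dest-host:/etc/xen/
    # on the destination host: start the guest
    ssh root@dest-host "xl create /etc/xen/guest01.cfg"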

18 Xen – Live Migration Motivation: load balancing, hardware maintenance, and power management. Process: begins transferring the guest's state to the new host; repeatedly re-copies guest memory dirtied by continued execution until the remainder is small enough; re-routes network connections; the guest continues executing with execution and network uninterrupted.

19 Xen – Live Migration Benefits: no downtime; network connections to and from the guest often remain active and uninterrupted; the guest and its services remain available. Limitations: requires shared storage; hosts must be on the same layer-2 network; sufficient spare resources are needed on the target machine; hosts must be configured similarly.
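With the xl toolstack this is a single command (destination hostname hypothetical):

    # the guest keeps running while dirtied memory pages are re-copied
    xl migrate guest01 dest-host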

20 Xen Cloud Platform (XCP) XCP includes: Open-source Xen hypervisor Enterprise-level XenAPI (XAPI) management tool stack Support for Open vSwitch (open-source, standards-compliant virtual switch) Features: Fully-signed Windows PV drivers Heterogeneous machine resource pool support Installation by templates for many different guest OSes

22 XCP XenAPI Management Tool Stack VM lifecycle: live snapshots, checkpoint, migration Resource pools: live relocation, auto configuration, disaster recovery Flexible storage, networking, and power management Event tracking: progress, notification Upgrade and patching capabilities Real-time performance monitoring and alerting
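A few of these operations through the xe CLI (guest and host names hypothetical):

    xe vm-list                                            # VMs in the resource pool
    xe vm-snapshot vm=guest01 new-name-label=pre-upgrade  # live snapshot
    xe vm-migrate vm=guest01 host=node2 live=true         # live relocation within the pool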

23 XCP Installation

24 XCP Management Software: XenCenter

25 XCP Toolstack Command-Line Interface (CLI) Tools
Toolstack: xl | XAPI | libvirt | xend
CLI tool:  xl | xe   | virsh   | xm
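The same basic operation in each CLI, e.g. listing running guests:

    xl list       # xl toolstack
    xe vm-list    # XAPI
    virsh list    # libvirt
    xm list       # legacy xend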

26 XCP Toolstack Toolstack Feature Comparison
Feature                                         | xl | xapi | libvirt
Purpose-built for Xen                           | X  | X    |
Basic VM operations                             | X  | X    | X
Managed domains                                 |    | X    | X
Live migration                                  | X  | X    | X
PCI passthrough                                 | X  | X    | X
Host pools                                      |    | X    |
Flexible, advanced storage types                |    | X    |
Built-in advanced performance monitoring (RRDs) |    | X    |
Host plugins (XAPI)                             |    | X    |

27 OpenNebula

28 What are the Main Components?
Interfaces & APIs: OpenNebula provides many different interfaces to manage physical and virtual resources. There are two main ways to manage OpenNebula instances: the command-line interface and the Sunstone GUI. Several cloud interfaces can be used to build public clouds: OCCI and EC2 Query, plus a simple self-service portal for cloud consumers. In addition, OpenNebula features powerful integration APIs to enable easy development of new components (new virtualization drivers for hypervisor support, new information probes, etc.).
Users and Groups.
Hosts: the main hypervisors are supported: Xen, KVM, and VMware.
Networking.
Storage: OpenNebula is flexible enough to support as many different image-storage configurations as possible. Support for multiple datastores in the storage subsystem provides great flexibility in planning the storage backend, plus important performance benefits. The main storage configurations are supported: the filesystem datastore, which stores disk images as files and transfers them over ssh or shared file systems (NFS, GlusterFS, Lustre, ...); iSCSI/LVM, which stores disk images as block devices; and the VMware datastore, specialized for the VMware hypervisor, which handles the VMDK format.
Clusters: pools of hosts that share datastores and virtual networks. Clusters are used for load balancing, high availability, and high-performance computing.

29 OpenNebula - installation
Front-end: executes the OpenNebula services.
Hosts: hypervisor-enabled hosts that provide the resources needed by the VMs.
Datastores: hold the base images of the VMs.
Service network: physical network used to support basic services: interconnection of the storage servers and OpenNebula control operations.
VM networks: physical network that will support VLANs for the VMs.

30 OpenNebula – installation: front-end
The machine that holds the OpenNebula installation is called the front-end. This machine needs access to the datastores (e.g. directly mounted or over the network) and network connectivity to each host. The base installation of OpenNebula takes less than 10 MB.
OpenNebula services include: the management daemon (oned) and scheduler (mm_sched), the monitoring and accounting daemon (oneacctd), the web interface server (sunstone), and the cloud API servers (ec2-query and/or occi). Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons.
Requirements for the front-end: ruby >= 1.8.7. Installation on Ubuntu/Debian: sudo apt-get install opennebula
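Once installed, the services can be started under the oneadmin account; a sketch (service script names vary between OpenNebula versions):

    one start              # oned management daemon + mm_sched scheduler
    sunstone-server start  # Sunstone web interface
    oneacctd start         # accounting daemon (assumed here; version-dependent)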

31 OpenNebula – installation: hosts
The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account to be able to ssh to the hosts and, depending on your hypervisor, allow this account to execute commands with root privileges or make it part of a given group. OpenNebula doesn't need to install any packages on the hosts; the only requirements are: a running ssh server; a properly configured, working hypervisor; Ruby >= 1.8.7.
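A sketch of the per-host preparation from the front-end (hostname hypothetical; the onehost driver arguments depend on the OpenNebula version):

    # passwordless SSH for the oneadmin account
    ssh-keygen -t rsa
    ssh-copy-id oneadmin@host01
    # register the host together with its hypervisor drivers
    onehost create host01 --im kvm --vm kvm --net dummy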

32 OpenNebula – installation: storage
OpenNebula uses datastores to handle VM disk images. VM images are registered, or created (as empty volumes), in a datastore. In general, each datastore has to be accessible from the front-end through any suitable technology: NAS, SAN, or direct-attached storage. When a VM is deployed, its images are transferred from the datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link, or setting up an iSCSI target.
Two configuration steps are needed for a basic setup: first, configure the system datastore to hold images of the running VMs; then set up one or more datastores for the VM disk images (see the Filesystem Datastore documentation). OpenNebula can work without a shared FS, but this forces the deployment to always clone the images, and only cold migrations will be possible.
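For example, registering a filesystem datastore from a short template (attribute values illustrative; the shared TM driver assumes an NFS mount visible to all hosts):

    # ds.conf
    NAME   = nfs_images
    DS_MAD = fs        # filesystem datastore driver
    TM_MAD = shared    # images reach hosts via a shared FS (NFS)

    onedatastore create ds.conf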

33 OpenNebula – installation: networking
The network is needed by the OpenNebula front-end daemons to access the hosts, to manage and monitor the hypervisors, and to move image files. Installing a dedicated network for this purpose is highly recommended. To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine network interface to a bridge in the physical host. You should create bridges with the same name on all the hosts. Depending on the network model, OpenNebula will dynamically create network bridges.
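For example, a minimal (non-persistent) bridge created on each host under the same name (interface names hypothetical):

    brctl addbr br0       # create the bridge
    brctl addif br0 eth0  # attach the physical NIC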

34 OpenNebula – CLI
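Typical commands of this CLI (template file names and the datastore name hypothetical):

    onehost list                              # monitored hosts and capacity
    oneimage create image.tmpl -d nfs_images  # register a disk image in a datastore
    onevm create vm.tmpl                      # submit a VM from a template file
    onevm list                                # VM states (pend, prol, runn, ...)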

35 OpenNebula – Sunstone


38 OpenStack Projects (written in Python): Compute, Storage, Networking, Dashboard (GUI)

39 OpenStack - compute
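For instance, booting an instance with the nova CLI of that era (image and flavor names hypothetical):

    nova boot --image cirros-0.3 --flavor m1.tiny test-vm  # request a VM from the compute service
    nova list                                              # poll until the instance is ACTIVE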

40 OpenStack - installation
1. Install Ubuntu 12.04 (Precise) or Fedora 16. In order to install all the dependencies correctly, we assume a specific version of Ubuntu or Fedora to make it as easy as possible. OpenStack works on other flavors of Linux (and some folks even run it on Windows!). We recommend using a minimal install of Ubuntu Server, or a VM, if this is your first time.
2. Download DevStack: git clone git://github.com/openstack-dev/devstack.git (the devstack repo contains a script that installs OpenStack, plus templates for configuration files).
3. Start the install: cd devstack; ./stack.sh
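A minimal localrc that stack.sh picks up from the devstack directory, so the install runs unattended (all values are placeholders):

    # devstack/localrc: answers stack.sh would otherwise prompt for
    ADMIN_PASSWORD=secret
    MYSQL_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret
    SERVICE_TOKEN=tokentoken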

41 OpenStack - installation

42 OpenStack - dashboard

43 OpenStack - summary 1. Hard to install and maintain 2. Poor logical structure of the software 3. Not stable, many bugs

44 Conclusion: Xen, KVM/QEMU, OpenNebula, OpenStack

