
1 OPEN-SOURCE SOFTWARE TOOLKITS FOR CREATING AND MANAGING DISTRIBUTED HETEROGENEOUS CLOUD INFRASTRUCTURES A.V. Pyarn Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics

2 Agenda
Aim of paper
Virtualization
Hypervisor architecture
IaaS and cloud toolkits
Requirements
Hypervisor toolstacks
Cloud platforms
Comparison

3 Aim of paper
Show use cases of cloud toolkits as virtual educational testbeds. Cover cloud toolkit design aspects, architectural features, functional capabilities, installation how-tos, and extension and support capabilities: the Xen and KVM toolstacks, and a comparison of the OpenNebula and OpenStack toolkits. A study of the cloud platforms was carried out...

4 Virtualization
Introductory remarks on virtualization: its evolution and applications. Virtualization technologies can be used for the following purposes:
Creating educational testbeds for software testing with varying server characteristics (different amounts of RAM, numbers of CPU cores and CPUs, network topologies)
More efficient use of existing hardware: running several fully isolated virtual servers on a single physical server, building a library of virtual machine images, simple management of and switching between virtual machines
Safe use of isolated environments: breaking one virtual server does not affect the rest
In other words, it would be convenient to create a virtualized environment with built-in...

5 Types of Virtualization
Emulation: fully emulate the underlying hardware architecture
Full virtualization: simulate the base hardware architecture
Paravirtualization: abstract the base architecture
OS-level virtualization: shared kernel (and architecture), separate user spaces
Example products: Oracle VirtualBox, VMware Player and Server, VMware ESXi, vSphere, Hyper-V, KVM, Xen, OpenVZ
There are different types of virtualization; this is an overview of the products.

6 Hypervisor role
Thin, privileged abstraction layer between the hardware and the operating systems. Defines the virtual machine that guest domains see instead of the physical hardware:
Grants portions of the physical resources to each guest
Exports simplified devices to guests
Enforces isolation among guests
DomU is a paravirtualized domain; an HVM domain is a hardware-virtualized domain used to run unmodified guest systems. Paravirtual drivers can also be used inside an HVM domain (e.g. for Windows), i.e. drivers that know they are virtualized and run in a virtual environment.

7 Hypervisor architecture
Toolstack = standard Linux tools + hypervisor-specific third-party toolkits and API daemons: libvirt, xend, XAPI, etc.
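Each toolstack ships its own command-line client; a minimal sketch of listing domains through each of them:

    # libvirt toolstack: virsh talks to the libvirtd daemon (KVM shown here)
    virsh --connect qemu:///system list --all

    # xl: the native toolstack shipped with the Xen hypervisor
    xl list

    # XAPI toolstack (XCP/XenServer): the xe client
    xe vm-list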

8 IaaS
IaaS = virtualization (hypervisor features) + "Amazon-style" self-service portal and convenient GUI management + billing + multitenancy
Hypervisor toolstacks and APIs vs. third-party open-source cloud toolkits (OpenNebula, the OpenStack "datacenter virtualization" platform, etc.): what should we use? It depends on the requirements.
One of the goals of this study was to find out when it makes sense to use the capabilities of the hypervisors themselves and their built-in tools, and when to use third-party tools for managing cloud infrastructures.

9 Requirements
For educational testbeds:
Open-source software: hypervisor and management subsystem
NFS or iSCSI independent storage for virtual disks and images (see the sketch below)
Easy installation and support
GUI: management center; optionally a self-service portal, monitoring and accounting tools
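A minimal sketch of the shared-storage requirement: exporting a directory of VM images over NFS from a storage server and mounting it on a worker node. The path, subnet and host name are hypothetical.

    # /etc/exports on the storage server
    /srv/vm-images  192.168.0.0/24(rw,sync,no_subtree_check)

    # apply the export, then mount it on every worker node
    sudo exportfs -ra
    sudo mount -t nfs storage01:/srv/vm-images /srv/vm-images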

10 They don’t consist of hypervisors themselves, only management role
Cloud platforms They don’t consist of hypervisors themselves, only management role Storage NFS/ iSCSI Hardware Hypervisor OS Management server(-s): scheduler authorization monitoring web-interface SSH Worker node Hardware Hypervisor OS Agentless Что из себя представляют популярные облачные платформы Вывод: возможности облачных платформ ограничены возможностями гипервизоров, рассмотрим их более подробно и затем перейдем к рассмотрению облачных платформ DB SSH Worker node

11 KVM-QEMU
KVM is closely associated with Linux because it uses the Linux kernel as a bare-metal hypervisor. A host running KVM is actually running a Linux kernel and the KVM kernel module, which was merged into Linux and has since been maintained as part of the kernel. This approach takes advantage of the fact that modern hypervisors must deal with a wide range of complex hardware and resource-management challenges that have already been solved in operating system kernels. Linux is a modular kernel and is therefore an ideal environment for building a hypervisor.
The KVM software consists of a loadable kernel module (kvm.ko) that provides the core virtualization service, a processor-specific loadable module (kvm-intel.ko or kvm-amd.ko), and user-space components (a modified QEMU). The advantage of KVM is that, being part of the kernel itself, it benefits from kernel development and optimization, which makes this approach more promising than other, stand-alone hypervisor implementations. The two main drawbacks of KVM are that it requires recent processors with hardware virtualization support and that it runs a user-space QEMU process to provide I/O virtualization. Either way, KVM is already in the kernel, which is a huge step forward compared with the existing solutions.
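A quick sketch of verifying these prerequisites on a prospective KVM host: the CPU must expose the hardware virtualization extensions, and the generic plus processor-specific modules must be loaded.

    # >0 means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # load the generic module and the processor-specific one
    sudo modprobe kvm
    sudo modprobe kvm_intel        # or: sudo modprobe kvm_amd

    # confirm the modules are loaded and the /dev/kvm device exists
    lsmod | grep kvm
    ls -l /dev/kvm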

12 KVM-QEMU
SMP hosts and SMP guests (as of kvm-61, up to 16 CPUs supported per guest)
Live migration of guests from one host to another
Emulated hardware:
Video card: Cirrus CLGD 5446 PCI VGA card, or dummy VGA card with Bochs VESA extensions
PCI: i440FX host PCI bridge and PIIX3 PCI-to-ISA bridge
Input devices: PS/2 mouse and keyboard
Sound card: Sound Blaster 16, ENSONIQ AudioPCI ES1370, Gravis Ultrasound GF1, CS4231A compatible
Ethernet network card: AMD Am79C970A (Am7990), E1000 (Intel 82540EM, 82573L, 82544GC), NE2000, Realtek RTL8139
Watchdog timer: Intel 6300ESB or IB700
RAM: 50 MB – 32 TB
CPU: 1–16 CPUs
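A sketch of booting a guest directly with QEMU/KVM using some of the emulated devices listed above; the disk image path is hypothetical, and in practice a toolstack such as libvirt usually builds this command line.

    # 2 vCPUs, 1 GB of RAM, an e1000 NIC behind user-mode networking,
    # and a Cirrus VGA card; KVM accelerates CPU and memory
    qemu-system-x86_64 -enable-kvm -smp 2 -m 1024 \
        -hda /var/lib/images/guest.img \
        -net nic,model=e1000 -net user \
        -vga cirrus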

13 KVM-QEMU
Ease of use: +
Shared storage: +
Live migration: +
Management GUI: + (Virtual Machine Manager, virt-manager)
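With shared storage in place, live migration of a KVM guest is a single libvirt call; a minimal sketch with hypothetical guest and host names:

    # live-migrate vm1 to node2 over SSH; both hosts must see the same image storage
    virsh migrate --live vm1 qemu+ssh://node2/system

    # check the result on the destination host
    virsh --connect qemu+ssh://node2/system list --all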

14 XEN

15 Virtualization in Xen
Xen can scale to more than 255 physical CPUs, 128 vCPUs per PV guest, 1 TB of RAM per host, and up to 1 TB of RAM per HVM guest or 512 GB of RAM per PV guest.
Paravirtualization (PV): uses a modified Linux kernel; the guest is loaded by Dom0's pygrub or Dom0's kernel; front-end and back-end virtual device model; cannot run Windows; the guest "knows" it is a VM and cooperates with the hypervisor.
Hardware-assisted full virtualization (HVM): uses the same, normal OS kernel; the guest contains its own grub and kernel; normal device drivers; can run Windows; the guest doesn't "know" it is a VM, so the hardware manages it.
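The two modes correspond to two styles of guest configuration file. A sketch in the xl/xm configuration format, with hypothetical names and image paths:

    # pv-guest.cfg: paravirtualized guest booted through Dom0's pygrub
    name       = "pv-guest"
    bootloader = "pygrub"
    memory     = 1024
    vcpus      = 2
    disk       = [ 'file:/var/lib/xen/images/pv-guest.img,xvda,w' ]
    vif        = [ 'bridge=xenbr0' ]

    # hvm-guest.cfg: hardware-assisted guest with its own bootloader (can run Windows)
    name    = "hvm-guest"
    builder = "hvm"
    memory  = 2048
    vcpus   = 2
    disk    = [ 'file:/var/lib/xen/images/hvm-guest.img,hda,w',
                'file:/var/lib/xen/images/install.iso,hdc:cdrom,r' ]
    vif     = [ 'bridge=xenbr0' ]
    boot    = "dc"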

16 Xen – Cold Relocation
Motivation: moving a guest between hosts without shared storage, or between hosts with different architectures or hypervisor versions.
Process (sketched below):
Shut down the guest on the source host
Move the guest from one Domain0's file system to the other's by manually copying the guest's disk image and configuration files
Start the guest on the destination host
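A shell sketch of those three steps, with hypothetical guest name, paths and destination host:

    # on the source host: stop the guest, then copy its disk image and config
    xl shutdown guest1
    scp /var/lib/xen/images/guest1.img root@dest-host:/var/lib/xen/images/
    scp /etc/xen/guest1.cfg root@dest-host:/etc/xen/

    # on the destination host: start the relocated guest
    xl create /etc/xen/guest1.cfg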

17 Xen – Cold Relocation
Benefits:
Hardware maintenance with less downtime
Shared storage not required
Domain0s can be different
Multiple copies and duplications are possible
Limitations:
More manual process
The service is down during the copy

18 Xen – Live Migration
Motivation: load balancing, hardware maintenance, and power management.
Process (see the sketch below):
Begins transferring the guest's state to the new host
Repeatedly copies guest memory dirtied by continued execution until the copy is complete
Re-routes network connections; the guest continues executing with execution and network uninterrupted
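With the xl toolstack the whole procedure is one command; a sketch with a hypothetical guest and destination host (both hosts must share the storage backing the guest's disks):

    # live-migrate guest1 to dest-host over SSH
    xl migrate guest1 dest-host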

19 Xen – Live Migration
Benefits:
No downtime
Network connections to and from the guest often remain active and uninterrupted
The guest and its services remain available
Limitations:
Requires shared storage
Hosts must be on the same layer-2 network
Sufficient spare resources are needed on the target machine
Hosts must be configured similarly

20 Xen Cloud Platform (XCP)
XCP includes:
Open-source Xen hypervisor
Enterprise-level XenAPI (XAPI) management tool stack
Support for Open vSwitch (an open-source, standards-compliant virtual switch)
Features:
Fully signed Windows PV drivers
Heterogeneous machine resource pool support
Installation by templates for many different guest OSes


22 XCP XenAPI Management Tool Stack
VM lifecycle: live snapshots, checkpointing, migration
Resource pools: live relocation, auto-configuration, disaster recovery
Flexible storage, networking, and power management
Event tracking: progress, notification
Upgrade and patching capabilities
Real-time performance monitoring and alerting
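A sketch of driving a few of these features from the xe CLI; the VM name, addresses and credentials are hypothetical, and parameter spellings can differ slightly between XCP/XenServer releases.

    # inventory of VMs and hosts known to XAPI
    xe vm-list
    xe host-list

    # basic VM lifecycle by name label
    xe vm-start vm=web01
    xe vm-shutdown vm=web01

    # join this host to an existing resource pool
    xe pool-join master-address=192.168.0.10 master-username=root master-password=secret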

23 XCP Installation

24 XCP Management Software
XenCenter

25 XCP Toolstacks and Command-Line Interface (CLI) Tools
Toolstack → CLI tool:
xl → xl
XAPI → xe
libvirt → virsh
xend → xm

26 XCP Toolstack Feature Comparison
Features compared for xl, xapi, and libvirt: purpose-built for Xen; basic VM operations; managed domains; live migration; PCI passthrough; host pools; flexible, advanced storage types; built-in advanced performance monitoring (RRDs); host plugins (XAPI).

27 OpenNebula
Architecturally, OpenNebula consists of three layers: the various drivers, the core, and the tools. The drivers interact directly with the hypervisors and operating systems; they are responsible for creating, starting and stopping virtual machines as well as for managing storage, and they collect information from the physical and virtual machines. The core is a dispatcher that manages virtual machines, storage systems, and virtual networks. The tools provide user interaction with the system through various interfaces and APIs.
OpenNebula uses shared file systems (such as NFS) to provide access to virtual machine images, so every worker server has access to the same images. When a virtual machine needs to be started or stopped, OpenNebula connects to the worker server over SSH and runs the necessary commands directly on the hypervisor. This mode of operation is usually called agentless, because it does not require modifying or specially configuring the worker servers. The complexity of the system is therefore lower than with the OpenStack software.

28 OpenNebula: What are the Main Components?
Interfaces & APIs: OpenNebula provides many different interfaces for managing physical and virtual resources. There are two main ways to manage an OpenNebula instance: the command-line interface and the Sunstone GUI. There are also several cloud interfaces that can be used to create public clouds (OCCI and EC2 Query) and a simple self-service portal for cloud consumers. In addition, OpenNebula features powerful integration APIs to enable easy development of new components (new virtualization drivers for hypervisor support, new information probes, etc.).
Users and Groups
Hosts: the main hypervisors are supported: Xen, KVM, and VMware.
Networking
Storage: OpenNebula is flexible enough to support as many different image-storage configurations as possible. The support for multiple datastores in the storage subsystem provides extreme flexibility in planning the storage backend and important performance benefits. The main storage configurations are supported: a file-system datastore that stores disk images as files, with image transfer over ssh or a shared file system (NFS, GlusterFS, Lustre, ...); iSCSI/LVM to store disk images as block devices; and a VMware datastore specialized for the VMware hypervisor that handles the vmdk format.
Clusters: clusters are pools of hosts that share datastores and virtual networks. Clusters are used for load balancing, high availability, and high-performance computing.

29 OpenNebula – installation
Front-end: executes the OpenNebula services.
Hosts: hypervisor-enabled hosts that provide the resources needed by the VMs.
Datastores: hold the base images of the VMs.
Service Network: physical network used to support basic services: interconnection of the storage servers and OpenNebula control operations.
VM Networks: physical network that will support the VLANs for the VMs.

30 OpenNebula – installation: front-end
sudo apt-get install opennebula
The machine that holds the OpenNebula installation is called the front-end. This machine needs access to the datastores (e.g. a direct mount or over the network) and network connectivity to each host. The base installation of OpenNebula takes less than 10 MB.
OpenNebula services include:
Management daemon (oned) and scheduler (mm_sched)
Monitoring and accounting daemon (onecctd)
Web interface server (sunstone)
Cloud API servers (ec2-query and/or occi)
Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons.
Requirements for the front-end: ruby >= 1.8.7
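A sketch of bringing the front-end services up on Ubuntu; the package and service names below are those used by the Ubuntu packages and may differ on other distributions.

    # core daemon plus the Sunstone web interface
    sudo apt-get install opennebula opennebula-sunstone
    sudo service opennebula start
    sudo service opennebula-sunstone start

    # verify that oned answers, as the oneadmin user
    su - oneadmin -c 'onevm list'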

31 OpenNebula – installation: hosts
The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account to be able to ssh into the hosts and, depending on your hypervisor, allow this account to execute commands with root privileges or make it part of a given group.
OpenNebula doesn't need to install any packages on the hosts; the only requirements are:
a running ssh server
a properly configured, working hypervisor
ruby >= 1.8.7
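A sketch of preparing a host and registering it with the front-end; the host name node01 is hypothetical, and the driver names passed to onehost vary between OpenNebula versions (e.g. im_kvm/vmm_kvm in older releases).

    # on the front-end, as oneadmin: passwordless SSH to the host
    su - oneadmin
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    ssh-copy-id oneadmin@node01

    # register the KVM host with the core
    onehost create node01 --im kvm --vm kvm --net dummy
    onehost list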

32 OpenNebula – installation: storage
OpenNebula uses datastores to handle the VM disk images. VM images are registered, or created as empty volumes, in a datastore. In general, each datastore has to be accessible from the front-end using any suitable technology: NAS, SAN, or direct-attached storage.
When a VM is deployed, its images are transferred from the datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link, or setting up an iSCSI target.
There are two configuration steps needed for a basic setup: first, configure the system datastore to hold the images of the running VMs; then set up one or more datastores for the VM disk images (see the Filesystem Datastore documentation for details).
OpenNebula can work without a shared FS. This forces the deployment to always clone the images, and only cold migrations are possible.
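A sketch of registering a shared file-system datastore from the CLI; the datastore name is hypothetical, and the DS_MAD/TM_MAD values depend on the chosen storage backend.

    # nfs_ds.conf: file-system datastore using the shared (NFS) transfer driver
    NAME   = nfs_images
    DS_MAD = fs
    TM_MAD = shared

    # register it and check the result (run as oneadmin)
    onedatastore create nfs_ds.conf
    onedatastore list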

33 OpenNebula – installation: networking
The network is needed by the OpenNebula front-end daemons to access the hosts, to manage and monitor the hypervisors, and to move image files. It is highly recommended to use a dedicated network for this purpose.
To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine's network interface to a bridge on the physical host. You should create bridges with the same name on all the hosts. Depending on the network model, OpenNebula will dynamically create the network bridges (an example bridge definition follows).
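A sketch of such a bridge on a Debian/Ubuntu host (requires the bridge-utils package); the interface names and addresses are hypothetical.

    # /etc/network/interfaces: bridge br0 enslaving the physical NIC eth0
    auto br0
    iface br0 inet static
        address 192.168.0.11
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0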

34 OpenNebula – CLI
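A sketch of typical OpenNebula CLI commands, run as the oneadmin user on the front-end; the template ID below is hypothetical.

    onehost list                 # physical hosts and their state
    onevm list                   # virtual machines
    oneimage list                # registered disk images
    onevnet list                 # virtual networks
    onetemplate instantiate 0    # start a VM from the template with ID 0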

35 OpenNebula – Sunstone

36 OpenNebula – Sunstone

37 OpenNebula – Sunstone

38 OpenStack
Projects (written in Python): Compute, Storage, Networking, Dashboard (GUI)

39 OpenStack - compute

40 OpenStack - installation
1. Install Ubuntu (Precise) or Fedora 16. To install all the dependencies correctly, a specific version of Ubuntu or Fedora is assumed, to make it as easy as possible. OpenStack works on other flavors of Linux (and some folks even run it on Windows!). A minimal install of Ubuntu Server, or a VM, is recommended if this is your first time.
2. Download DevStack: git clone git://github.com/openstack-dev/devstack.git (the devstack repo contains a script that installs OpenStack and templates for the configuration files).
3. Start the install: cd devstack; ./stack.sh (a minimal configuration sketch follows below).
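A sketch of the minimal configuration DevStack expects before stack.sh is run; the passwords and token are placeholder values, and the exact variable names have changed across DevStack releases.

    # localrc in the devstack directory, so stack.sh does not prompt interactively
    ADMIN_PASSWORD=secret
    MYSQL_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret
    SERVICE_TOKEN=tokentoken

    # then run the installer from the same directory
    cd devstack
    ./stack.sh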

41 OpenStack - installation

42 OpenStack - dashboard

43 OpenStack - summary
Hard to install and maintain
Poor logical structure of the software
Not stable, many bugs

44 Conclusion
Xen, KVM/QEMU, OpenNebula, OpenStack

