OPEN-SOURCE SOFTWARE TOOLKITS FOR CREATING AND MANAGING DISTRIBUTED HETEROGENEOUS CLOUD INFRASTRUCTURES A.V. Pyarn Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics apyarn@gmail.com

Agenda: Aim of paper, Virtualization, Hypervisor architecture, IaaS and cloud toolkits, Requirements, Hypervisor toolstacks, Cloud platforms, Comparison

Aim of paper Show use cases of cloud toolkits as virtual educational testbeds ("polygons"). Cloud toolkit design aspects, architectural features, functional capabilities, installation how-to's, extension and support capabilities: a comparison of the Xen and KVM toolstacks and of the OpenNebula and OpenStack toolkits. A study of these cloud platforms was carried out...

Virtualization Introductory remarks on virtualization: its evolution and applications. Virtualization technologies can be used for the following purposes: creating educational testbeds for software testing with varying server characteristics (amount of RAM, number of CPU cores and processors, network topologies); more efficient use of existing hardware: running several fully isolated virtual servers on one physical server, building a library of virtual machine images, and simple management of and switching between virtual machines; safe use of isolated environments: compromising one virtual server does not affect the others. In other words, it would be convenient to create a virtualized environment with built-in...

Types of Virtualization Emulation: fully emulate the underlying hardware architecture. Full virtualization: simulate the base hardware architecture. Paravirtualization: abstract the base architecture. OS-level virtualization: shared kernel (and architecture), separate user spaces. Example products: Oracle VirtualBox, VMware Player/Server, VMware ESXi, vSphere, Hyper-V, KVM, Xen, OpenVZ. There are different types of virtualization; an overview of the products follows.

Hypervisor role Thin, privileged abstraction layer between the hardware and operating systems. Defines the virtual machine that guest domains see instead of physical hardware: grants portions of physical resources to each guest, exports simplified devices to guests, enforces isolation among guests. DomU is a paravirtualized domain; an HVM domain is a hardware-virtualized domain for running unmodified guest systems. Paravirtual drivers can also be used inside an HVM domain (e.g. for Windows): drivers that know they are virtualized and run in a virtual environment.

Hypervisor architecture Toolstack = standard Linux tools + specific third-party toolkits and API daemons: libvirt, XEND, XAPI, etc.
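A minimal sketch of talking to these toolstack daemons from the command line; host setup is assumed, and the exact libvirt URI for Xen differs between libvirt versions:

    # List domains through libvirt (libvirtd must be running)
    virsh -c qemu:///system list --all    # KVM/QEMU host
    virsh -c xen:/// list --all           # Xen host managed via libvirt

    # The same query through Xen's own toolstacks
    xl list          # xl, talks to the hypervisor via libxl
    xe vm-list       # XAPI (XCP/XenServer)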

IaaS IaaS = virtualization (hypervisor features) + "Amazon-style" self-service portal and convenient GUI management + billing + multitenancy. Hypervisor toolstacks and APIs vs. third-party open-source cloud toolkits (OpenNebula, the OpenStack "datacenter virtualization" platform, etc.): what should we use? It depends on the requirements. One of the goals of the study was to determine when it makes sense to use the capabilities of the hypervisors themselves and their built-in tools, and when to use third-party tools for managing cloud infrastructures.

Requirements For educational testbeds: open-source software (hypervisor and management subsystem); NFS or iSCSI independent storage for virtual disks and images; easy installation and support; GUI: a management center and, optionally, a self-service portal, monitoring, and accounting tools.

Cloud platforms Cloud platforms do not include hypervisors themselves; they only play the management role. Typical architecture: one or more management servers (scheduler, authorization, monitoring, web interface, database) control the worker nodes (hardware, OS, hypervisor) over SSH in an agentless fashion, with shared NFS/iSCSI storage alongside. This is what the popular cloud platforms look like. Conclusion: the capabilities of the cloud platforms are bounded by the capabilities of the hypervisors, so we first examine the hypervisors in more detail and then return to the cloud platforms.

KVM-QEMU KVM is closely associated with Linux because it uses the Linux kernel as a bare-metal hypervisor. A host running KVM is actually running a Linux kernel and the KVM kernel module, which was merged into Linux 2.6.20 and has since been maintained as part of the kernel. This approach takes advantage of the insight that modern hypervisors must deal with a wide range of complex hardware and resource-management challenges that have already been solved in operating system kernels. Linux is a modular kernel and is therefore an ideal environment for building a hypervisor. The KVM software consists of a loadable kernel module (kvm.ko) that provides the core virtualization service, a processor-specific loadable module (kvm-intel.ko or kvm-amd.ko), and user-space components (a modified QEMU). The advantage of KVM is that, being part of the kernel itself, it benefits from ongoing kernel development and optimization, which makes this approach more promising than other stand-alone hypervisor implementations. KVM's two main drawbacks are that it requires recent virtualization-capable processors and that it relies on a user-space QEMU process for I/O virtualization. Still, KVM is already in the kernel, which is a huge leap forward compared with earlier solutions.
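As a quick illustration of this layout (module names depend on the CPU vendor and distribution), one can verify that the hardware and kernel side of KVM are in place before starting QEMU:

    # CPU must expose VT-x (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo     # non-zero output means supported
    # the generic module plus the vendor-specific one
    lsmod | grep kvm
    sudo modprobe kvm_intel                # or kvm_amd on AMD hardware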

KVM-QEMU SMP hosts; SMP guests (as of kvm-61, a maximum of 16 CPUs is supported); live migration of guests from one host to another. Emulated hardware:
- Video card: Cirrus CLGD 5446 PCI VGA card, or dummy VGA card with Bochs VESA extensions
- PCI: i440FX host PCI bridge and PIIX3 PCI-to-ISA bridge
- Input devices: PS/2 mouse and keyboard
- Sound card: Sound Blaster 16, ENSONIQ AudioPCI ES1370, Gravis Ultrasound GF1, CS4231A-compatible
- Ethernet network card: AMD Am79C970A (Am7990), E1000 (Intel 82540EM, 82573L, 82544GC), NE2000, Realtek RTL8139
- Watchdog timer: Intel 6300ESB or IB700
- RAM: 50 MB - 32 TB
- CPU: 1-16 CPUs
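The device models above are selected through QEMU command-line options; a sketch (option names vary slightly across QEMU versions, and the disk image path is an example):

    qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
        -vga cirrus \
        -net nic,model=e1000 -net user \
        -watchdog ib700 \
        -hda disk.img
    # -vga cirrus      -> Cirrus CLGD 5446 VGA card
    # -net nic,model=  -> choice of emulated NIC (e1000, ne2k_pci, rtl8139, ...)
    # -watchdog ib700  -> IB700 watchdog timer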

KVM-QEMU Ease of use: +. Shared storage: +. Live migration: +. Management GUI: + (Virtual Machine Manager).
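The same ease of use is available from the shell through libvirt's virt-install, which is what Virtual Machine Manager drives underneath; a sketch with example names and paths (flag spellings differ slightly between virt-install releases):

    virt-install --name demo-vm --ram 1024 --vcpus 1 \
        --disk path=/var/lib/libvirt/images/demo-vm.img,size=8 \
        --cdrom /iso/ubuntu-server.iso \
        --graphics vnc
    virsh list --all          # the new guest appears in the libvirt inventory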

XEN

Virtualization in Xen Xen can scale to more than 255 physical CPUs, 128 vCPUs per PV guest, 1 TB of RAM per host, and up to 1 TB of RAM per HVM guest or 512 GB of RAM per PV guest. Paravirtualization (PV): uses a modified Linux kernel; the guest loads Dom0's pygrub or Dom0's kernel; front-end and back-end virtual device model; cannot run Windows; the guest "knows" it is a VM and cooperates with the hypervisor. Hardware-assisted full virtualization (HVM): uses the same, unmodified OS kernel; the guest contains its own grub and kernel; normal device drivers; can run Windows; the guest doesn't "know" it is a VM, so the hardware manages it.
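A minimal xl guest configuration illustrating the PV/HVM split; file paths, names, and sizes are examples, and the exact keywords (builder vs. type) depend on the Xen version:

    # /etc/xen/pv-guest.cfg  (paravirtualized guest, kernel loaded via pygrub)
    name       = "pv-guest"
    memory     = 1024
    vcpus      = 1
    bootloader = "pygrub"
    disk       = [ 'file:/var/lib/xen/images/pv-guest.img,xvda,w' ]
    vif        = [ 'bridge=xenbr0' ]

    # For an HVM guest, the guest boots its own grub/kernel instead:
    #   builder = "hvm"    # newer xl syntax: type = "hvm"
    #   boot    = "c"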

Xen – Cold Relocation Motivation: moving a guest between hosts without shared storage, or between hosts with different architectures or hypervisor versions. Process: shut down the guest on the source host; move the guest from one Domain0's file system to another's by manually copying the guest's disk image and configuration files; start the guest on the destination host.
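A sketch of this procedure for a guest called vm1 (host names and paths are examples; the image location depends on how the guest was created):

    ssh hostA xl shutdown -w vm1                                # stop the guest
    scp hostA:/etc/xen/vm1.cfg            hostB:/etc/xen/       # copy the config
    scp hostA:/var/lib/xen/images/vm1.img hostB:/var/lib/xen/images/
    ssh hostB xl create /etc/xen/vm1.cfg                        # start on the target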

Xen – Cold Relocation Benefits: hardware maintenance with less downtime; shared storage not required; Domain0s can be different; multiple copies and duplications are possible. Limitations: a more manual process; the service is down during the copy.

Xen – Live Migration Motivation: load balancing, hardware maintenance, and power management. Process: begins transferring the guest's state to the new host; repeatedly copies guest memory dirtied by continued execution until the copy is complete; re-routes network connections; the guest continues executing with execution and network uninterrupted.
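In command form this is a single toolstack call; the destination host name is an example, and SSH access plus shared storage between the hosts are assumed:

    xl migrate vm1 dest-host             # xl toolstack; the migration is live
    xm migrate --live vm1 dest-host      # legacy xend/xm equivalent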

Xen – Live Migration Benefits: no downtime; network connections to and from the guest often remain active and uninterrupted; the guest and its services remain available. Limitations: requires shared storage; hosts must be on the same layer-2 network; sufficient spare resources are needed on the target machine; hosts must be configured similarly.

Xen Cloud Platform (XCP) XCP includes: Open-source Xen hypervisor Enterprise-level XenAPI (XAPI) management tool stack Support for Open vSwitch (open-source, standards-compliant virtual switch) Features: Fully-signed Windows PV drivers   Heterogeneous machine resource pool support   Installation by templates for many different guest OSes

XCP XenAPI Management Tool Stack VM lifecycle: live snapshots, checkpoint, migration   Resource pools: live relocation, auto configuration, disaster recovery Flexible storage, networking, and power management Event tracking: progress, notification    Upgrade and patching capabilities Real-time performance monitoring and alerting
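A few of these operations expressed with the xe CLI, as a sketch; VM names, host names, and credentials are examples:

    xe vm-list                                                # pool-wide inventory
    xe vm-snapshot vm=web01 new-name-label=web01-pre-upgrade  # live snapshot
    xe vm-migrate vm=web01 host=xcp-node2 live=true           # live relocation
    xe pool-join master-address=10.0.0.1 master-username=root master-password=secret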

XCP Installation

XCP Management Software: XenCenter

XCP Toolstack Command Line Interface (CLI) Tools. Each toolstack ships its own CLI: xl -> xl, XAPI -> xe, libvirt -> virsh, xend -> xm.
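The mapping is easiest to see by expressing the same two operations, listing domains and starting a guest, in each CLI (guest name and config path are examples):

    xl list;    xl create /etc/xen/vm1.cfg     # xl (default toolstack since Xen 4.1)
    xe vm-list; xe vm-start vm=vm1             # XAPI (XCP / XenServer)
    virsh list; virsh start vm1                # libvirt
    xm list;    xm create /etc/xen/vm1.cfg     # legacy xend (deprecated)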

XCP Toolstack Feature Comparison (xl vs. xapi vs. libvirt). Features compared: purpose-built for Xen; basic VM operations; managed domains; live migration; PCI passthrough; host pools; flexible, advanced storage types; built-in advanced performance monitoring (RRDs); host plugins (XAPI).

OpenNebula Architecturally, OpenNebula consists of three layers: drivers, the core, and tools. The drivers interact directly with the hypervisors and operating systems; they are responsible for creating, starting, and shutting down virtual machines, as well as for managing storage, and they collect information from the physical and virtual machines. The core is a dispatcher that manages virtual machines, storage systems, and virtual networks. The tools provide user interaction with the system through various interfaces and APIs. OpenNebula uses shared file systems (such as NFS) to provide access to virtual machine images, so every worker server has access to the same images. When a virtual machine needs to be started or stopped, OpenNebula connects to the worker server over SSH and runs the necessary commands directly on the hypervisor. This mode of operation is usually called agentless, because it requires no modification or configuration of the worker servers. The complexity of the system is therefore lower than with OpenStack.

OpenNebula What are the main components?
- Interfaces & APIs: OpenNebula provides many different interfaces that can be used to interact with the functionality offered to manage physical and virtual resources. There are two main ways to manage OpenNebula instances: the command line interface and the Sunstone GUI. There are also several cloud interfaces that can be used to create public clouds, OCCI and EC2 Query, and a simple self-service portal for cloud consumers. In addition, OpenNebula features powerful integration APIs to enable easy development of new components (new virtualization drivers for hypervisor support, new information probes, etc.).
- Users and groups.
- Hosts: the main hypervisors are supported: Xen, KVM, and VMware.
- Networking.
- Storage: OpenNebula is flexible enough to support as many different image storage configurations as possible. The support for multiple datastores in the storage subsystem provides extreme flexibility in planning the storage back end and important performance benefits. The main storage configurations are supported: the file system datastore, which stores disk images as files and transfers them over SSH or shared file systems (NFS, GlusterFS, Lustre, ...); iSCSI/LVM, which stores disk images as block devices; and the VMware datastore, specialized for the VMware hypervisor and the vmdk format.
- Clusters: clusters are pools of hosts that share datastores and virtual networks. Clusters are used for load balancing, high availability, and high-performance computing.

OpenNebula - installation Components:
- Front-end: executes the OpenNebula services.
- Hosts: hypervisor-enabled hosts that provide the resources needed by the VMs.
- Datastores: hold the base images of the VMs.
- Service network: physical network used to support basic services: interconnection of the storage servers and OpenNebula control operations.
- VM networks: physical network(s) that will support the VLANs for the VMs.

OpenNebula – installation: front-end (sudo apt-get install opennebula). The machine that holds the OpenNebula installation is called the front-end. This machine needs access to the datastores (e.g. mounted directly or over the network) and network connectivity to each host. The base installation of OpenNebula takes less than 10 MB. OpenNebula services include: the management daemon (oned) and scheduler (mm_sched); the monitoring and accounting daemon (onecctd); the web interface server (sunstone); and the cloud API servers (ec2-query and/or occi). Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons. Requirement for the front-end: Ruby >= 1.8.7.
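Once the package is installed, the services are started as the oneadmin user; a sketch using the service names of the OpenNebula 3.x/4.x series:

    su - oneadmin
    one start                  # starts oned and the mm_sched scheduler
    sunstone-server start      # optional Sunstone web interface
    onevm list                 # quick check that oned answers over XML-RPC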

OpenNebula – installation: hosts The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account to be able to SSH to the hosts and, depending on your hypervisor, allow this account to execute commands with root privileges or make it part of a given group. OpenNebula doesn't need to install any packages on the hosts; the only requirements are: an SSH server running; a properly configured, working hypervisor; Ruby >= 1.8.7.
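A sketch of preparing one KVM host called node01 from the front-end; the driver flags follow the OpenNebula 3.x CLI and differ slightly in other releases:

    # as oneadmin on the front-end: password-less SSH to the host
    ssh-keygen -t rsa
    ssh-copy-id oneadmin@node01
    ssh oneadmin@node01 ruby -v        # sanity-check the Ruby requirement

    # register the host with the KVM information/virtualization drivers
    onehost create node01 --im im_kvm --vm vmm_kvm --net dummy
    onehost list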

OpenNebula – installation: storage OpenNebula uses datastores to handle the VM disk images. VM images are registered, or created (as empty volumes), in a datastore. In general, each datastore has to be accessible from the front-end using any suitable technology: NAS, SAN, or direct-attached storage. When a VM is deployed, the images are transferred from the datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link, or setting up an iSCSI target. Two configuration steps are needed for a basic setup: first, configure the system datastore to hold images of the running VMs; then set up one or more datastores for the disk images of the VMs (see the Filesystem Datastore documentation). OpenNebula can work without a shared file system; this forces the deployment to always clone the images, and only cold migrations are possible.
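A sketch of registering an image datastore backed by a shared file system; the attribute values follow the standard OpenNebula datastore template and are examples:

    cat > ds.conf <<'EOF'
    NAME   = shared_images
    DS_MAD = fs
    TM_MAD = shared
    EOF
    onedatastore create ds.conf
    onedatastore list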

OpenNebula – installation: networking The network is needed by the OpenNebula front-end daemons to access the hosts, to manage and monitor the hypervisors, and to move image files. It is highly recommended to install a dedicated network for this purpose. To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine's network interface to a bridge in the physical host. You should create bridges with the same name on all the hosts. Depending on the network model, OpenNebula will dynamically create network bridges.
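A sketch of the matching host-side bridge and front-end network definition; interface names, addresses, and the RANGED attributes are examples in the OpenNebula 3.x template style:

    # on every host: a bridge named br0 enslaving the physical NIC
    brctl addbr br0
    brctl addif br0 eth0

    # on the front-end: a virtual network bound to that bridge
    cat > net.conf <<'EOF'
    NAME            = "private"
    TYPE            = RANGED
    BRIDGE          = br0
    NETWORK_ADDRESS = 192.168.100.0
    NETWORK_SIZE    = 254
    EOF
    onevnet create net.conf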

OpenNebula – CLI
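A typical command-line session, sketched with example names and IDs (flag spellings follow the newer one* CLI and vary between releases):

    oneimage create --name ubuntu --path /tmp/ubuntu.img --datastore default
    onetemplate create --name small --cpu 1 --memory 512 --disk ubuntu
    onetemplate instantiate small
    onevm list
    onevm shutdown 0           # '0' is the VM ID reported by onevm list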

OpenNebula – Sunstone

OpenStack A set of projects written in Python: Compute, Storage, Networking, Dashboard (GUI).
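For orientation, a few Compute (Nova) CLI calls of the kind used against such a deployment; the flavor and image names are examples and assume credentials have been sourced:

    source openrc                      # credentials file written by DevStack
    nova flavor-list
    nova image-list
    nova boot --flavor m1.tiny --image cirros-0.3.0 test-vm
    nova list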

OpenStack - compute

OpenStack - installation 1. Install Ubuntu 12.04 (Precise) or Fedora 16. In order to correctly install all the dependencies, we assume a specific version of Ubuntu or Fedora to make it as easy as possible. OpenStack works on other flavors of Linux (and some folks even run it on Windows!). We recommend using a minimal install of Ubuntu Server, or a VM, if this is your first time. 2. Download DevStack: git clone git://github.com/openstack-dev/devstack.git. The devstack repo contains a script that installs OpenStack and templates for configuration files. 3. Start the install: cd devstack; ./stack.sh
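Optionally, a small configuration file in the devstack directory sets the passwords so stack.sh does not prompt for them; a sketch (older DevStack reads localrc, newer versions use local.conf):

    cat > localrc <<'EOF'
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret
    EOF
    ./stack.sh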

OpenStack - installation

OpenStack - dashboard

OpenStack - summary Hard to install and maintain; poorly structured software; not stable, many bugs.

Conclusion Xen, KVM/QEMU, OpenNebula, OpenStack