1
SYN613: Explore the new cloud features of XenServer 6.1
Mark Butterly, May 2013
2
Agenda What’s new in XenServer 6.1?
Storage XenMotion
New VMware VM conversion tool
Networking Enhancements
Guest OS Enhancements
StorageLink Enhancements
Performance Enhancements
Other New Things
Major features:
New VMware VM conversion tool – much easier to move VMs from VMware products to XenServer
Storage XenMotion – moving disk images along with VMs when they move
Enhancements in many areas:
Guest OS enhancements – new OS support and a new XenServer Tools installer
Networking enhancements – including four-NIC bonds and LACP
Xen Scheduler enhancements
Performance enhancements – in many areas
StorageLink enhancements – added support for EMC VNX and enhancements for other systems
Storage performance monitor – simplifies the monitoring of storage performance with a dedicated tool
3
Agenda (Continued) XenServer 6.1 Hands-on-Lab
PXE and Web Server install
Storage XenMotion
Rolling Pool Upgrades
LACP Bonding
VLAN Scalability testing
Xen-on-Xen is an environment where XenServer virtualizes itself – running XenServer on XenServer – perfect for lab environments and some proof-of-concept demonstrations. A PXE and web server can be used for installing and/or upgrading XenServers automatically; this is useful not only in the lab, but in production environments. We’ll take a look at Storage XenMotion, moving VMs inside and outside of resource pools, and also moving a VDI without moving the VM. Rolling pool upgrades are a really efficient way to upgrade from one version of XenServer to another without downtime: as each XenServer in a pool is placed in Maintenance mode, its VMs are migrated to other servers in the pool, so access to VMs is assured throughout. We’ll take a short look at NIC bonding, including LACP, which is fully supported with XenServer 6.1. We will test XenServer 6.1’s ability to create VLANs efficiently (compared to previous versions). As VLANs are a very common and accepted way to segregate networks (including tenant networks) within cloud environments, creating or removing large numbers of VLANs efficiently is an important feature of this product.
4
If you have any questions or concerns, please raise your hand
Facilitators: Landon Fraley, Olivier Withoff, Jared Cowart
To help the lab run smoothly, several facilitators are on hand, eager to help you and answer any of your questions.
5
The Lab Environment
6
Xen-on-Xen To explore cloud-based features we need several XenServers
The lab runs XenServer on XenServer: XenServer virtualizing itself
Allows each student to have many virtual XenServers
Virtual XenServers only run paravirtualized operating systems
No Windows or DOS
Xen-on-Xen is unsupported by Citrix
7
The Lab Environment
[Diagram: the student desktop, a PXE & web server, and an NFS server (shared storage) sit on a private network with Internet access, alongside five virtual XenServers (vXS01–vXS05), each with its own local storage.]
8
Lab Tips
9
Lab Tips The lab consists of 14 exercises
Some are recommended / some are optional
Short introduction before each module
Please read the materials in the lab guide – helps understand WHY
Please don’t work ahead
If you have finished the module, please see if your neighbor needs help
There are 14 exercises in this lab. Not all exercises are expected to be completed during the session time; every effort will be made to accommodate anyone who wishes to do some or all of the exercises directly after the official class time. The main exercises, however, provide sufficient practice in the main headline features of XenServer within the context of cloud deployments. We are going to do a number of exercises at a time, with a short introduction before each set. Please read the materials in the lab guide – they have good background info to aid in understanding WHY. Please don’t work ahead. Some of the exercises involve scripts that do not have to be fully understood to complete the exercises; however, your instructor or facilitator is happy to help with any questions relating to the scripts and their use. If you have finished the assignment, please see if your neighbor needs help.
10
Lab Tips Be careful not to miss a step
The effect may not be immediately obvious but it will bite you later
All steps are required
Linux configuration files expect perfection!
No validity checking is done after you’ve edited the file
Any mistakes are typically reported at boot time and you won’t see them!
This is a CLI world and commands can be long and complex to type
Double-check commands before you hit <Enter>
Use copy and paste to minimize errors
Check the expected output
If your results don’t match what the lab doc shows you – STOP and ask for help!
11
Copy and Paste Copying commands from the lab guide saves time
And improves accuracy
Use the mouse to highlight text in the guide
Hit Ctrl-C or right-click the text and select Copy
On the console, right-click and select Paste, then hit the <Enter> key
12
XenServer Unattended Installation
13
Why unattended installation
Avoid manual CD handling
Create reproducible mass XenServer installations
Speed up the installation / upgrade process
Disaster recovery
Ease of cloud-based administration
Perfect for cloud-based environments
Unattended XenServer installations are perfect for efficient administration, particularly when installing or re-installing a significant number of XenServers. They can be highly customized to adapt to the needs of the environment, and cloud environments may have hundreds or even thousands of homogeneous servers. In these cases, unattended installations are highly advisable.
14
Using PXE boot environments
Can boot to an attended / unattended installation
DHCP server: provides IP addresses, plus boot options such as the TFTP server address and boot loader file name
TFTP server: provides the boot loader to the PXE-enabled server
It is usually advisable to make use of existing DHCP servers, as well as installing or making use of a TFTP server, so that the boot loader file can be retrieved. From there, redirection to a web server / FTP server can be made to install the remaining XenServer files.
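As a minimal sketch, an ISC dhcpd.conf pointing PXE clients at a TFTP server might look like the following (all addresses and names here are illustrative placeholders, not values from the lab):

# Hypothetical ISC DHCP configuration; addresses are examples only
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.199;
    option routers 192.168.0.1;
    next-server 192.168.0.10;    # the TFTP server holding the boot loader
    filename "pxelinux.0";       # boot loader file name handed to PXE clients
}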
15
E1 - Attach XenCenter to your XenServers
In this exercise you will: Use your student desktop to launch Citrix XenCenter Attach XenCenter to your physical XenServers Note: Do not use a XenCenter that is local to your laptop for this lab. It will NOT work properly.
16
E2 – Building a PXE Boot and Web Server
A CentOS VM (already created) will be used as the base for the PXE server
In this exercise you will:
Install a TFTP server, syslinux, a web server and a DHCP server
Set up the following files:
DHCP configuration file
PXE configuration file (including the menu)
Answer files (for individual XenServer install options)
XenServer installation files (resident on the web server)
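On a CentOS base, the package installation step might look roughly like this (a sketch; the lab guide’s exact package names and services may differ):

# Hypothetical package set for a CentOS 5.x PXE/web server
yum install -y tftp-server syslinux httpd dhcp
chkconfig httpd on
chkconfig dhcpd on
# the TFTP server runs under xinetd: set "disable = no" in /etc/xinetd.d/tftp, then
service xinetd restart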
17
E3 – Create a Virtual XenServer
In this exercise you will: Create a virtual XenServer using the PXE server Save it as a template for further use Use the template to perform an attended network installation
18
E4 – Create another Virtual XenServer
In this exercise you will: Use the same template to create an unattended installation
19
vi Crash Course vi is a modal editor operating in either
Insert mode (keystrokes become part of the document) or Command mode (keystrokes control the editor)
To enter Insert mode hit the “i” key
To enter Command mode hit the <ESC> key
x – Delete the character under the cursor
dd – Delete the current line
:wq – Write the file and Quit
:q! – Exit without saving the file
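For example, a typical edit in this lab flows like this (the file name is illustrative):

vi /etc/dhcpd.conf     # open the file
# move the cursor to the line to change, press i, type the correction,
# press <ESC> to return to command mode, then type :wq to save and quit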
20
Lab Access Launch your web browser and go to Enter the session code ????????? and your business address Click “Get Started”
21
Lab Access
You are provided with a unique XenServer IP address and logon credentials
Credentials shown are non-working examples
You will need these credentials for exercise #1
Download the Lab Guide by clicking the link
Use your second monitor for the guide
Copy & paste long commands
Click when you are ready
22
Please complete Exercises 1 to 4 now Estimated Time: 60 minutes
when you have completed exercise 4 i – Insert Mode <ESC> - Command Mode x – Delete character dd – Delete current line :wq – Write the file & Quit :q! – Exit without saving
23
E5 – Modifying the PXE server to create more Virtual XenServers
In this exercise you will: Automate the creation of configuration and answer files The files will be generated from information provided in a sample CSV file.
24
E6 – Unattended XenServer Installs
In this exercise you will: Use the configuration files and answer files generated in the previous exercise to perform unattended XenServer network installs. Each host’s MAC address binds directly to a configuration file, which in turn points to an answer file.
25
E7 – Attach XenCenter and control licensing
In this exercise you will: License the XenServers you have created and pool some of the XenServers you have just installed.
26
Please complete Exercises 5, 6 and 7 now Estimated Time: 40 minutes
when you have completed exercise 7 i – Insert Mode <ESC> - Command Mode x – Delete character dd – Delete current line :wq – Write the file & Quit :q! – Exit without saving
27
Setup the TFTP server – In Summary
Use any available TFTP server – usually quick and easy to install
Copy to the TFTP root: mboot.c32, menu.c32, pxelinux.0 (from the tftp files)
Create a /xenserver directory in the TFTP root
Copy to /xenserver
Create a /pxelinux.cfg directory in the TFTP root
Full details are in the XenServer 6.1 installation guide
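Assuming a TFTP root of /tftpboot and the XenServer installation ISO mounted at /mnt (both paths are assumptions for illustration), the copy steps sketch out as:

# Hypothetical paths; adjust to your TFTP root, syslinux location and ISO mount point
cp /usr/share/syslinux/pxelinux.0 /usr/share/syslinux/mboot.c32 /usr/share/syslinux/menu.c32 /tftpboot/
mkdir -p /tftpboot/xenserver /tftpboot/pxelinux.cfg
mount -o loop XenServer-6.1-install-cd.iso /mnt
cp /mnt/boot/xen.gz /mnt/boot/vmlinuz /mnt/install.img /tftpboot/xenserver/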
28
Example default file

default xenserver
label xenserver
  kernel mboot.c32
  append /tftpboot/xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M com1=115200,8n1 \
    console=com1,vga --- /tftpboot/xenserver/vmlinuz \
    xencons=hvc console=hvc0 console=tty0 \
    --- /tftpboot/xenserver/install.img

The label parameter provides the menu label that appears, enabling you to make a selection; the installer then reads the installation parameters for that option. To specify which network adapter should be used for retrieving the answer file, include the answerfile_device=ethX or answerfile_device=MAC parameter and specify either the Ethernet device number or the MAC address of the device. To make Supplemental Packs available during installation, copy the contents of each Supplemental Pack ISO into a separate directory within the main installation repository. You must edit XS-REPOSITORY-LIST to include (on a new line) the name of the directory in which the Supplemental Pack resides; otherwise the Supplemental Pack will not be installed. Refer to the XenServer Supplemental Pack & DDK Guide for more details. The example above shows how to configure the installer to run on the physical console, tty0. To use a different default, ensure that the console you want to use is the rightmost one specified.
29
Specifying an answer file
default xenserver-auto
label xenserver-auto
  kernel mboot.c32
  append /tftpboot/xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M com1=115200,8n1 \
    console=com1,vga --- /tftpboot/xenserver/vmlinuz \
    xencons=hvc console=hvc0 console=tty0 \
    answerfile= \
    install --- /tftpboot/xenserver/install.img

The example above highlights the answerfile parameter, which can point at any answer file served out over HTTP or FTP.
30
Example answer file

<?xml version="1.0"?>
<installation mode="fresh" srtype="lvm">
  <bootloader>extlinux</bootloader>
  <primary-disk gueststorage="yes">sda</primary-disk>
  <keymap>de</keymap>
  <hostname>xen01</hostname>
  <root-password>citrix</root-password>
  <source type="url"> </source>
  <admin-interface name="eth0" proto="static">
    <ip> </ip>
    <subnet-mask> </subnet-mask>
    <gateway> </gateway>
  </admin-interface>
  <name-server> </name-server>
  <name-server> </name-server>
  <timezone>Europe/Berlin</timezone>
  <time-config-method>ntp</time-config-method>
  <ntp-server>0.de.pool.ntp.org</ntp-server>
  <ntp-server>1.de.pool.ntp.org</ntp-server>
  <ntp-server>2.de.pool.ntp.org</ntp-server>
</installation>

For a comprehensive summary of the elements used in a XenServer answer file, consult the installation guide.
31
Additional Scripting …
(1) Read a CSV file and generate the menu, configuration and answer files automatically
(2) Bind the MAC address of a NIC to a dedicated answer file on install!
Scripting can be utilized to generate the configuration files and answer files in an automated fashion, to facilitate mass XenServer installations. Binding the MAC address of a NIC to its answer file is entirely possible, so the installation can proceed by simply booting the server and allowing the install to run without any manual intervention.
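A minimal sketch of such a generator, assuming a CSV of mac,hostname,ip,netmask,gateway and two templates with @TOKENS@ to substitute (every name here is hypothetical, not the lab’s actual script):

#!/bin/bash
# Hypothetical generator: one PXE config + one answer file per host from hosts.csv
while IFS=, read -r mac hostname ip netmask gateway; do
    # pxelinux looks for a config named 01-<mac-with-dashes>, lower case
    cfg="/tftpboot/pxelinux.cfg/01-$(echo $mac | tr ':' '-' | tr 'A-F' 'a-f')"
    sed -e "s/@HOSTNAME@/$hostname/" -e "s/@IP@/$ip/" \
        -e "s/@NETMASK@/$netmask/" -e "s/@GATEWAY@/$gateway/" \
        answerfile.template > "/var/www/html/answers/${hostname}.xml"
    sed -e "s|@ANSWERFILE@|http://192.168.0.10/answers/${hostname}.xml|" \
        pxecfg.template > "$cfg"
done < hosts.csv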
32
Additional installation options
Post installation script
When executing, XAPI is not yet running
Enable execution of the copied script on first server boot
First boot script
Usually stored on a file share
Copied to the server and invoked by the post-install script (e.g. using wget)
Includes the required configuration steps using XAPI / xe commands
Driver update after installation
Unattended installation of updates

A post-installation script is useful for starting a task after a successful installation. The script should be stored on an NFS, HTTP or FTP share; it can also be stored as a local file. The script is executed during installation, not after the server boots. For an unattended installation of drivers, copy the drivers to a subdirectory of the repository and add that subdirectory to the XS-REPOSITORY-LIST file in the installation root. To handle the unattended installation of updates from support.citrix.com: store the .xsupdate files in a subfolder of the unattended installation directory, copy the updates to the local server, apply the updates step by step (deleting each applied update), restart XAPI or the host as required, and re-invoke the script after reboot. A sample script is shown below.

#!/bin/bash
# Wait before start
sleep 60
# Install XenServer Updates
HOSTUUID=$(xe host-list name-label=$HOSTNAME --minimal)
cd /tmp
if [ -a /tmp/secondboot ]; then
    echo "Secondboot"
else
    # first boot: download the updates
    wget ftp:// /ftp/xs60/postinstall.updates/*.xsupdate
    touch /tmp/secondboot
fi
for updatefile in /tmp/*.xsupdate; do
    echo "Installing Update $updatefile ..." >> /var/log/messages
    PATCHUUID=$(xe patch-upload file-name=$updatefile)
    xe patch-apply host-uuid=$HOSTUUID uuid=$PATCHUUID
    rm -f $updatefile
    PATCHACTION=$(xe patch-list uuid=$PATCHUUID params=after-apply-guidance --minimal)
    if [ "$PATCHACTION" == "restartXAPI" ]; then
        xe-toolstack-restart
    else
        # reboot; the first-boot script runs again for any remaining updates
        reboot
        exit
    fi
done
# Disable first boot script for subsequent reboots
rm -f /etc/rc3.d/S99postinstall
rm -f /tmp/secondboot
# Final Reboot
33
Storage XenMotion Storage XenMotion offers the ability not only to move a live, running virtual machine, but also its underlying disks at the same time. This allows for balancing load, upgrading to shared storage, and storage tiering, as described in the following slides.
34
Storage XenMotion Storage XenMotion allows running VMs to be moved from one host to another along with their Virtual Disk Images (VDIs)
Even when the VM’s VDIs are not located on shared storage
Even when hosts are not in the same resource pool
This enables system administrators to:
Rebalance or move VMs between XenServer pools
Perform maintenance on stand-alone servers
Reduce deployment costs by using local storage
You can move a VM from any XenServer to any other, regardless of the storage type. In a cloud deployment you could have a thousand XenServers, and you can move a VM from any one of them to another as long as there’s a network connecting them. You can move a running VM’s VDI from any type of storage to the same or any other type of storage. You can move a VM and its VDI either inside a resource pool, or to other resource pools or stand-alone servers: for example, promoting a VM from a development environment to a production environment, or upgrading, updating, or replacing the hardware of standalone XenServer hosts without VM downtime.
35
Storage XenMotion Live VDI Migration allows relocation of a VM's VDI without shutting down the VM
The VM’s VDI can be moved from any storage type* to any other storage type*
As well as moving VMs, Storage XenMotion allows movement of just a VDI while leaving the VM running in the same place. You can move a VDI from anywhere to anywhere else – as long as the VM CAN STILL ACCESS the VDI. Practically speaking, you will be moving from local to shared storage (or vice-versa), or shared to shared. This allows a VDI to be moved to a higher-performance SAN, or to cheaper (and maybe higher-throughput) local storage. Other use cases:
- Move a VM from cheap, local storage to fast, resilient, array-backed storage
- Move a VM from a development to a production environment
- Move between tiers of storage when a VM is limited by storage capacity
- Move VDIs while performing storage array upgrades
*StorageLink SRs excepted
36
Virtual Machine Disk Storage
A Virtual Machine (VM) usually expects to “see” a hard disk
The VM “thinks” it has a real hard disk (e.g. C:\ or /dev/sda)
An exception to always needing a hard disk would be network (PXE) booting.
37
Virtual Machine Disk Storage
The disk a XenServer VM sees is actually a Virtual Disk Image (VDI)
VDIs are files or volumes on a real disk
If moving to another host, the VM must still be able to access its VDI
If a VM “loses” its hard disk, it will typically crash sooner or later when OS reads or writes fail.
38
Live VM XenMotion If the VDI is on shared storage, there’s no problem
The VM has access to the VDI before, during and after the migration
[Diagram: a live VM migrates between two XenServers while its VDI remains on shared storage, reachable from both hosts.]
While the animation shows the VM losing the link to the VDI during movement, this is just a limitation of the PowerPoint animation; in actuality the link is maintained throughout the migration.
39
Live Storage VM XenMotion
But if the VM moves when the VDI is on local storage
The VDI must also be moved along with the VM
[Diagram: a live VM and its VDI move together from one XenServer’s local storage to another’s.]
If the VM moved from local to local without moving the VDI, the VM would lose its connection to the VDI and would probably crash, so XenServer moves the VDI with the VM.
40
Live Storage VM XenMotion
Migration can be within the same pool
[Diagram: within a XenServer pool, a live VM and its VDI move between hosts using local storage.]
41
Live Storage VM XenMotion
Or between pools … with any combination of storage types
[Diagram: a live VM and its VDI move between two XenServer pools, from local or shared storage to local or shared storage.]
This opens up the movement of VMs, along with their VDIs, to any of hundreds or even thousands of other XenServers in a cloud. In the next release, CloudPlatform (CloudStack) will take advantage of this capability.
42
Live Storage VM XenMotion
The VDI can be moved without the VM moving
[Diagram: the VDI moves from local to shared storage while the VM stays on the same XenServer.]
While the animation shows the VM losing the link to the VDI during movement, this is just a limitation of the PowerPoint animation; in actuality the link is maintained throughout the migration.
43
How do we move a VDI with a running VM attached?
Moving a VDI while a VM is accessing it is not easy – the VM must have read/write access to the VDI at all times: before, during and after the move.
44
Virtual Machine Disk Storage
XenServer VDIs are usually stored in VHD format
Each VDI can be a VHD file (on ext3 or NFS)
Or each VDI can be a logical volume, a dedicated chunk of disk space (LVM, on local disks or iSCSI / FC LUNs)
ext3 is the extended file system created specifically for the Linux kernel. LVM (Logical Volume Manager) is the default for local storage installs (unless thin provisioning is selected). FC = Fibre Channel.
45
A VHD Snapshots Primer
“A” represents the VHD of a VM
This is a quick background to VHD chaining, the method XenServer uses for snapshots, cloning, and now storage migration.
46
VHD Snapshots Primer
“A” represents the VHD of a VM. When a snapshot is taken:
VHD “A” becomes the Parent and is now read-only
VHD “B” is created as the snapshot pointer; “B” is, and will always be, empty as it’s read-only
VHD “C” is created to become the Active node; “C” accepts the writes for the VDI and is initially empty
The VM now points to “C” as the Active node
“A” is the full and complete disk at the moment the snapshot is taken; it becomes read-only and contains the snapshot data. “B” is the pointer a VM would use to access the snapshot. If a VM reverts to a snapshot, a new VDI is created.
47
Storage XenMotion
“A” represents the VHD of a VM
The VHD structure (not contents) of “A” is duplicated on the Destination
Now that the snapshot basics are understood, let’s see how Storage XenMotion works. “A” is the VDI of our VM. When the VDI needs to move, the VHD structure of the VDI is duplicated on the Destination (just the structure, not the contents of “A”).
48
Storage XenMotion
A snapshot is taken on the Source just like a regular snapshot
The snapshot structure is duplicated on the Destination
A snapshot is taken of “A” on the Source, with “C” set to become the Active VDI accepting writes. We ignore the snapshot pointer VDI (was “B” in the snapshot discussion) as it’s not needed. The snapshot structure of the Source is duplicated on the Destination; “C” and “A” on the Destination are still empty.
49
Storage XenMotion
Any VM writes are now synchronous to both Source & Destination Active VHDs
The Parent VHD contents are now background-copied to the Destination
The VM’s VDI pointer is moved to “C” on the Source, and any writes are synchronously copied to “C” on the Destination; this synchronous writing continues throughout the VDI and VM move. The Source Parent VHD (“A”) contents are now copied in the background to “A” on the Destination, which may take a while depending on the size of the drive and how full it is. Tapdisk handles the synchronous mirror writes – note that network performance must match the rate at which the VM is writing. The VDI copy is optimized by trying to start with a similar VDI (another VDI with the most similar blocks); only differing blocks are copied.
50
Storage XenMotion
Once the Parent VHD is copied, the VM is moved using XenMotion
The synchronous writes continue until the XenMotion is complete
Once the contents of “A” have been fully copied, the VM is moved using regular XenMotion (XenServer’s live VM migration). The VM continues synchronous writes to both “C” on the Source and “C” on the Destination until the XenMotion is completed; VM reads come from the Source “C” until then.
51
Storage XenMotion
The VHDs no longer required are removed
The VM and VDI move is complete
Once the XenMotion is complete, the dual writes stop; writes now go only to the Destination “C”. The VHDs on the Source are removed.
52
Using Storage XenMotion
53
Storage XenMotion – In Use
In XenCenter just drag and drop the VM you wish to move Demo VM is dragged from vXS-01 and dropped on vXS-02.
54
Storage XenMotion – In Use
You will be asked to confirm which host you want to move the VM to The host where you dropped the VM will be the default vXS-02 will be the default since that’s where the VM was dropped.
55
Storage XenMotion – In Use
You will be asked to confirm which storage you want to move the VDI to
Local storage on the same host will be the default
Local storage on the destination host is assumed as the default since that’s where the VM was dropped, but the storage can be changed to any shared storage available. Any type of storage is supported except StorageLink SRs. Note you can’t migrate a VM to different CPU hardware.
56
Storage XenMotion – In Use
You will be asked to confirm which network to use to move the VDI This can be any available network Alternate networks can be used to avoid loading the management network The management network is assumed here because it’s the only network available, but any available network can be used including a special network setup just for the purpose of migrating. This will avoid loading up the management network or any other production network.
57
Storage XenMotion – In Use
After the questions, the VM migrates to the chosen host If the VDI is not on shared storage, the VDI moves with the VM Demo VM is dragged from vXS-01 and dropped on vXS-02.
58
Storage XenMotion – In Use
Alternatively use the XenServer CLI:
xe vm-migrate vm="Demo VM" remote-master= \
    remote-username=root remote-password=Citrix123
xe vm-migrate – migrate a VM to another host. This extends the original vm-migrate command with new parameters: remote-address, remote-network, vif, vdi, remote-username, remote-password. The command above moves Demo VM from the current host to the specified remote host. xe vm-migrate is expanded to offer migration inside and outside of a pool, along with the VDI when required.
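As a sketch of a cross-pool move with an explicit VDI-to-SR mapping (the address, credentials and UUID placeholders below are illustrative, not values from the lab):

# Hypothetical values throughout; vdi: maps a source VDI to a destination SR
xe vm-migrate vm="Demo VM" live=true \
    remote-master=192.0.2.10 remote-username=root remote-password=Citrix123 \
    vdi:<vdi-uuid>=<destination-sr-uuid>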
59
Storage XenMotion – In Use
You can also move just a VDI between storage repositories, without moving the VM
Using XenCenter: select the Storage Repository, then select the VDI to move and its destination
Or use the XenServer CLI: xe vdi-pool-migrate (parameters: uuid, sr-uuid)
xe vdi-pool-migrate is a new command to move VDIs to other storage
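For example (a sketch with hypothetical name-labels; the lookups assume exactly one matching VBD and SR):

# --minimal returns bare UUIDs suitable for shell variables
VDI=$(xe vbd-list vm-name-label="Demo VM" params=vdi-uuid --minimal)
SR=$(xe sr-list name-label="NFS virtual disk storage" --minimal)
xe vdi-pool-migrate uuid=$VDI sr-uuid=$SR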
60
E8 – Import a Demo VM In this exercise you will:
Import a Linux Demo VM into vXS01
vXS01 is the pool master
61
E9 – Live migrate the Demo VM
In this exercise you will: Live migrate the Demo VM to another virtual XenServer using local storage only
Live migrate between one pool and another using local storage only
62
E10 – Migrate a VDI In this exercise you will:
Provision an NFS server from a template to be used as shared storage Migrate the Virtual Disk Image (VDI) for the Demo VM between local storage and shared storage (i.e. the NFS server you have created)
63
Please complete Exercises 8, 9 and 10 now Estimated Time: 40 minutes
when you have completed exercise 10 i – Insert Mode <ESC> - Command Mode x – Delete character dd – Delete current line :wq – Write the file & Quit :q! – Exit without saving
64
Storage XenMotion – Good to know
Up to 3 concurrent Storage Migrations per host
Up to 6 VDIs can be moved with each VM
Destination storage must have sufficient available disk space
Otherwise migration will fail and the VM does not move
VM I/O performance will be reduced during migration
Due to synchronous writing of VDIs
Note that, due to the synchronous writes, you can’t write to disk faster than the network can handle, or the disk writes will be delayed. For troubleshooting, check the logs: /var/log/{xensource.log,SMlog,messages}.
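During a migration you can follow those logs live on the source or destination host:

# Watch storage-manager and XAPI activity while the VDI copy runs
tail -f /var/log/SMlog /var/log/xensource.log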
65
Storage XenMotion – Good to know
Storage Migration does not support more than 1 snapshot
VMs with PCI pass-through (NIC, GPU) are not supported
Workload Balancing (WLB) cannot yet utilise Storage Migration
Can’t migrate a VM to different CPU hardware
66
XenServer Conversion Manager
New VMware VM conversion tool XenServer 6.1 offers a new, quicker way to convert virtual machines over from VMware to XenServer. Worth mentioning: we also have the XenConvert utility, which concentrates on physical-to-virtual conversions, and the ability to import VMware VMDKs, VHD or OVF files directly into XenCenter.
67
Converting with XenServer Conversion Manager
Batch converts VMs directly from VMware to XenServer
Process is direct, automatic and requires no intermediate storage
The conversion is performed by a small appliance on the XenServer pool
[Diagram: the Conversion Manager console drives a conversion appliance that pulls VMs from vCenter / ESX into the XenServer pool.]
XenServer Conversion Manager allows you to queue up multiple VMware 4.x VMs and serially import them directly from a VMware server host or vCenter server. The administration workstation is not in the data path. Because the conversion is direct, no extra storage is required and the process for each VM is quicker.
68
XCM - Limitations
Conversion processes only one VM at a time (in order of submission)
Source VM must be off and cannot reference an ISO
Excluded VMware sources: VirtualCenter 2.5, ESX Server 3, ESXi 3.5, vCenter 5, ESXi 5
Configuration requires access to VPX
No Linux “Fixups” yet
69
Network Enhancements
70
Networking Enhancements
Link Aggregation Control Protocol (LACP) support
Enables the use of industry-standard network bonding features to provide fault tolerance and load balancing of network traffic
Multi-Tenancy improvements
Can restrict a VM to send and receive traffic on a specific MAC address and a number of IPv4 or IPv6 addresses, without relying on VLANs and switch management software
VMs cannot impersonate any other VM, or intercept traffic intended for any other VM
Increases security in environments where VMs cannot be fully trusted
LACP will be covered in more detail later on. Keeping network traffic from one company separate from another in a CSP cloud environment can be a challenge; the new XS 6.1 ability to restrict a VM to send and receive traffic only on a specific MAC address or IP address, without relying on VLANs, will be very beneficial. Emergency network reset returns the XenServer to a known good network state. This is primarily intended for use from the XenServer (hardware) XSConsole; it is not available from XenCenter since, if the network were broken, XenCenter would be unlikely to be able to interact with the server.
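As a sketch of the VIF locking side of this (the parameter names reflect my reading of the feature and should be treated as assumptions; check the 6.1 administrator's guide for the exact syntax):

# Hypothetical: restrict a VIF to a single permitted IPv4 address
xe vif-param-set uuid=<vif-uuid> locking-mode=locked
xe vif-param-set uuid=<vif-uuid> ipv4-allowed=192.0.2.55   # placeholder address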
71
Networking Enhancements
Emergency Network Reset
Provides a simple mechanism to recover and reset a host's networking
Allows XenServer hosts to revert to a known good networking state
Source Load Balancing (SLB) improvements
Allows up to 4 NICs to be used in an active-active bond
Improves total network throughput and increases fault tolerance
The SLB balancing algorithm has been modified to reduce load on switches in large deployments
The emergency network reset is an excellent mechanism for providing a ‘last known good’ configuration, resetting the network settings from XenServer’s XAPI database if any adverse changes have been made. XenServer 6.1 also allows up to 4 network adapters / network controllers to be used in an active-active bond for bandwidth improvements; active-passive bonding is also useful for fault tolerance.
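The reset itself is driven by a host-local utility; run it from the physical console, since the operation can drop remote connectivity (the utility name is an assumption from memory; verify against the 6.1 documentation):

# Assumed utility name; follow its prompts, then reboot as directed
xe-reset-networking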
72
Networking Enhancements
IPv6 Guest Support
Enables the use of IPv6 addresses within guests, allowing network administrators to plan for network growth
Open vSwitch
The default XenServer network switch (one per network) since XenServer 6.0
New version 1.4 has LTS (Long Term Support)
IPv6 support is available for XenServer 6.1 guests and will be available for hosts in a future release of XenServer. The Open vSwitch has been the standard internal switch (one for each network) since XenServer 6.0. LTS releases have been through a comprehensive testing process and are suitable for production use; planned releases occur several times a year. If a significant bug is identified in an LTS release, an updated release is provided that includes the fix. Releases that are not LTS may not be fixed and may simply be supplanted by the next major release.
73
Networking Enhancements
VLAN Scalability improvements
In XS 6.0, creating 800 VLANs took hours
In XS 6.1 the time has been reduced to 10 seconds
The number of VLANs needed for a typical enterprise data center is in the tens; rarely would a data center need more than a couple of dozen. But in the cloud, hundreds of VLANs are used to keep customer traffic separate. XenServer 6.1 allows hundreds of VLANs to be deployed in a XenServer pool quickly – hundreds of VLANs are now available almost instantly.
74
LACP Bonding XS 6.1 now supports NIC bonding using the industry-standard LACP protocol
Up to 4 physical NICs bonded together
Traffic from all VMs is load balanced across multiple NICs in the bond
Using one of two balancing modes: tcpudp_ports or src_mac
Distribution is re-evaluated every 10 seconds
LACP groups need to be set up on the physical switch infrastructure prior to enabling LACP on the XenServer host or pool
For some time, we have been able to group several network flows into one bond, providing redundancy for network traffic flows. Link bonding is performed on the switch; we use the term aggregation to refer to the fact that both the switch and the hosts are aware of the bonding. A server running XenServer 6.1 and the switch can now exchange LACP frames. (In the past there was an obscure way of using LACP on a Linux bridge, but it was always unsupported; currently LACP bonding is only supported with the Open vSwitch, which is the default network stack for XenServer 6.0 and later.)
75
LACP Bonding - tcpudp_ports (default mode)
[Diagram: three VMs on a XenServer share an LACP bond of four physical NICs, forming a Link Aggregation Group across stacked switches (IEEE 802.3ad/802.1AX); flows are re-balanced every 10 seconds.]
In tcpudp_ports mode, each different TCP or UDP traffic flow is treated separately
Each flow can therefore travel down an independent physical NIC, so aggregate throughput is enhanced
- We have up to four physical NICs in a XenServer
- Attached to four stacked LACP switches
- Create a Link Aggregation Group
- Set up the LACP bond on the XenServer
- VM traffic flows are distributed across the four NICs
- Flows get re-balanced every 10 seconds
XenServer 6.1 now officially supports NIC bonding using the industry-standard LACP protocol. In addition to supporting active/active and active/passive bonding using the balance-slb method, we can now support up to 4 physical NICs bonded together with LACP. Traffic from all VMs is load balanced across multiple NICs in the bond, a distribution which is re-evaluated every 10 seconds. LACP groups need to be set up on the physical switch infrastructure prior to enabling LACP on the XenServer host or pool. The default hashing algorithm for LACP is tcpudp_ports: traffic is sent down a physical NIC in the bond based on the IP addresses and ports of the source and destination. For a single VM, this means each different TCP or UDP traffic flow is treated separately and can travel down an independent physical NIC, so the aggregate bandwidth used by any single VM can increase beyond what would be possible with a single physical NIC.
76
LACP Bonding – src_mac
[Diagram: the same four-NIC LACP bond; in src_mac mode each VM’s traffic is pinned to one NIC at any given moment.]
In src_mac mode, all traffic from a VM is transmitted via the same physical NIC
Multiple VMs can use all the NICs in the bond, so aggregate throughput is enhanced
The src_mac LACP hashing algorithm can also be used by XenServer. In this configuration, at any given moment all traffic from a VM is transmitted via the same physical NIC. Multiple VMs can of course use all the NICs in the bond, so aggregate throughput is enhanced significantly over a single NIC.
77
Can LACP speed up Storage?
[Diagram: an NFS/iSCSI flow from the host traverses only one NIC of the four-NIC LACP bond on its way to the IP SAN.]
No, it doesn’t speed up storage
Because neither mode offers the ability to distribute a single flow
IP-based storage traffic (iSCSI/NFS) is sent over the network from Domain 0, which behaves just like any other VM with respect to load balancing and traffic flow. Because iSCSI or NFS traffic constitutes a single IP flow, neither LACP hashing method can distribute that flow over more than one physical NIC at any given time. So although multiple NICs and LACP will improve storage performance compared with a single NIC (because fewer VMs contend for the storage NIC), iSCSI or NFS cannot use more bandwidth than a single NIC provides. Isolating IP storage traffic is the best method of ensuring good storage performance.
78
LACP XenServer Configuration
Two new bond modes in the XenCenter wizard: LACP with load balancing based on either IP and port of source and destination, or on source MAC address
CLI commands:
xe bond-param-set uuid=… properties:hashing_algorithm=tcpudp_ports
xe bond-create mode=lacp network-uuid=… pif-uuids=… \
    properties:hashing_algorithm=src_mac
The hashing algorithm can be specified at bond creation or changed later. These are examples of how it is set up in XenCenter and via the CLI.
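Putting it together, creating a four-NIC LACP bond might look like this sketch (the host and network name-labels are placeholders; pif-uuids takes a comma-separated list, which is what --minimal returns):

# Hypothetical names; collect the PIF UUIDs of the NICs to bond on one host
PIFS=$(xe pif-list host-name-label=vXS01 --minimal)
NET=$(xe network-create name-label="LACP bond eth0-eth3")
xe bond-create mode=lacp network-uuid=$NET pif-uuids=$PIFS \
    properties:hashing_algorithm=tcpudp_ports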
79
Upgrades
80
Rolling Pool Upgrade Wizard
Upgrades all servers in a pool … one server at a time
VMs are migrated to other servers while each server is upgraded
All services offered by the pool remain available at all times
Automated pre-upgrade checks
Disable HA and WLB
Remove CDs/DVDs from VMs
Blocks unsupported upgrades
Any pre-upgrade changes are automatically reversed after the server upgrade
Upgrading a hundred XenServers is a daunting task for any IT administrator, and some of our customers have more than a thousand servers, so the Rolling Pool Upgrade Wizard is going to be their best friend. It controls an automated process that upgrades one server at a time while automatically migrating that server’s workload to other servers in the pool; this keeps all the resources of a pool functional during the upgrade. Automatic pre-upgrade checks catch common problems, allowing the upgrade to proceed in a controlled and safe manner, and many of the flagged problems can be resolved with just one click. Changes made by the wizard during the pre-upgrade checks are reversed after the upgrade is complete. The pool master is upgraded first, and upgrades are supported from 5.6 and 5.6 FP1 to 6.0, and from 6.0 to 6.1.
81
E11 – Prepare a XenServer 6.0 pool to perform a rolling pool upgrade
In this exercise you will: Provision XenServer 6.0 Virtual XenServers from templates Create a XenServer pool and create an NFS storage repository for the pool Import the Linux Demo VM and place its VDI on shared storage
82
E12 – Perform a rolling pool upgrade
In this exercise you will: Perform a rolling pool upgrade from your existing XenServer 6.0 pool to a XenServer 6.1 pool without incurring any downtime
83
E13 – VLAN Scalability In this exercise you will:
Test VLAN creation and deletion with XenServer 6.1 Use bash scripting to practise combining XenServer XAPI commands in a loop in order to automate a repetitive process
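A flavor of what such a loop looks like (a hypothetical sketch, not the lab’s script; the device name and VLAN range are placeholders):

#!/bin/bash
# Hypothetical: bulk-create VLANs 100-149 on the eth1 PIF of a single host
PIF=$(xe pif-list device=eth1 --minimal)
for vlan in $(seq 100 149); do
    NET=$(xe network-create name-label="VLAN${vlan}")
    xe vlan-create network-uuid=$NET pif-uuid=$PIF vlan=$vlan
done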
84
E14 – LACP bonding In this exercise you will:
Add four “physical” NICs to a virtual XenServer and bond the NICs into a LACP bond See the status of the connections using Open vSwitch commands.
85
Please complete one of the following options
Option 1: Exercises 11 & 12 – Prepare and perform a rolling pool upgrade
OR
Option 2: Exercise 13 – VLAN Scalability
OR
Option 3: Exercise 14 – LACP bonding
86
Guest Enhancements
87
Guest Enhancements
Newly Supported Guests
Ubuntu 12.04
CentOS 5.7, 6.0, 6.1, 6.2
Red Hat Enterprise Linux 5.7, 6.1, 6.2
Oracle Enterprise Linux 5.7, 6.1, 6.2
Deprecated Guests
SLES 9 SP4
Debian Lenny
88
Guest Enhancements New Installation Mechanism for XenServer Tools
XenServer Tools are now delivered as industry standard Windows Installer MSI files This enables the use of 3rd party tools to deliver and manage the installation and upgrade of the XenServer device drivers
89
StorageLink Enhancements
90
Integrated StorageLink Enhancements
New Support
EMC VNX SMI-S connector
Expanded Support
NetApp
Dell EqualLogic
The Storage Management Initiative Specification (SMI-S) standardizes and streamlines storage management functions and features into a common set of tools that address the day-to-day tasks of the IT environment. Initially providing a foundation for identifying the attributes and properties of storage devices, SMI-S now also delivers services such as discovery, security, virtualization, performance, and fault reporting.
91
Performance Enhancements
92
XenServer Architecture
Minor changes since XenServer 6.0
Kernel: primarily driver updates
Dom0: CentOS 5.7
ethtool updated from the 2009 version, for new driver knowledge
93
XenServer Architecture
hdparm updated and smartmontools added
Primarily for SSD support
ipset added
For cloud users (an iptables enhancement)
94
Performance Enhancements - Pinning
Pinning dom0 vCPUs to physical processors can improve system performance
Pinning commands added
Pinning now configurable:
Unpinned (as with XS 6.0)
Permanently pinned
Dynamically pinned, dependent on VM load (configuration file)
Policy and parameters are set in /etc/sysconfig/tune-vcpus
In the past: xe vm-param-set uuid=<VM UUID> VCPUs-params:mask=1,3,7
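To see where vCPUs actually land, the Xen toolstack’s listing commands are handy (a sketch; xl ships with this generation of Xen, but verify the tool and flags on your build):

# Show the current vCPU-to-pCPU placement, including dom0
xl vcpu-list
# Hypothetical one-off pin of dom0's vCPU 0 to physical CPU 0
xl vcpu-pin Domain-0 0 0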
95
Performance Enhancements – dom0 vCPUs
Number of vCPUs for dom0 has been fixed at 4 since 5.6 FP1
Additional vCPUs during a VM boot storm improve performance
Number of vCPUs now configurable:
Minimum mode: always 4 (same as XS 6.0)
Maximum mode: always the maximum allowable (8 by default)
Dynamic mode: based on VM load (vCPUs added when needed)
Policy and parameters are set in /etc/sysconfig/tune-vcpus
96
Performance Enhancements
True parallelization of XAPI tasks
XenCenter gave the illusion it was parallel, but execution was sequential
Now XAPI is truly parallel and can support more than 200 simultaneous tasks
The limit is configurable – the default is 4 simultaneous tasks
97
Xen Scheduler Time-Slice
A default Time-Slice for a VM can be set
Historically, the Time-Slice was fixed at 30ms
Good for computationally-intensive workloads
Not good for latency-sensitive workloads
Change the Time-Slice via the Xen tslice_ms parameter
Reducing it from 30ms might help latency-sensitive workloads
But this increases context-switching overhead and reduces the effectiveness of caches
98
Xen Scheduler Rate-Limiting
The minimum time that a VM can run without being preempted can be set
Good for workloads where VMs do a few microseconds’ worth of work, go back to sleep, only to be woken up microseconds later
This causes thousands of schedules per second: lots of time spent scheduling, not much time doing actual work!
Default is 1 millisecond; very few network-based workloads require sub-millisecond latency
Override with sched_ratelimit_us on the Xen command line (0 disables) or via “xl sched-credit”
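At runtime that looks something like the following (the -s/-r flag spelling is my recollection of the xl credit-scheduler interface and may differ on this Xen build; treat it as an assumption):

# Show current credit-scheduler parameters, including the rate limit
xl sched-credit
# Hypothetical: set the scheduler-wide rate limit to 500 microseconds
xl sched-credit -s -r 500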
99
Storage Performance Monitoring
100
Problems Monitoring Storage I/O in XenServer
XenServer’s storage subsystem is complex, with many subsystems
Currently, it is necessary to combine the output of many tools
Even then, it is hard to understand what is going on
101
Problems Monitoring Storage I/O in XenServer
tap-ctl stats
/sys/class/blktap2/blktap0/debug
/sys/devices/xen-backend/vbd…/io_ring
iostat -x
/sys/block/td*/inflight
102
Storage Performance Monitor - xsiostat
xsiostat -x
pool: e5ea c db5a1f (2816, 0)
 DOM VBD        BLKBKRING  INFLIGHT  BLKTAP REQS/s  BLKTAP MB/s    TD INFL  TAPDISK REQS/s  TAPDISK MB/s
                TOT USE    RD WR     RD WR          RD WR          RD WR    RD WR           RD WR
 vbd: 1,51776:  ( 32, 0)   ( 0, 0)   (   ,   )      ( 0.00, 0.00)  ( 0, 0)  (   ,   )       ( 0.00, 0.00)
 vbd: 1,51728:  ( 32, 0)   ( 0, 0)   (   ,   )      ( 0.00, 0.00)  ( 0, 0)  (   ,   )       ( 0.00, 0.00)
103
XenServer 6.1 – Other New things
104
What else is in XenServer 6.1?
XenServer Tools
The new installation mechanism enables future updates of the XenServer Tools to be delivered directly to the VM administrators via the standard Windows Update Service; the XS administrator no longer needs to be involved in the process
Supportability
vhostmd support for SAP certification
Hardware support updates (e.g. CPUs, storage/networking adapters)
Windows VM PV drivers are MSI-based
Automated Vendor Self-Certification Kit
105
What else is in XenServer 6.1?
dom0 Logging
Pre-6.x, a mixture of logging methods and time formats made it difficult to follow logs
Some processes got blocked waiting for disk writes
Some fixes in XS 6.0, more fixes in XS 6.1, including XAPI now using syslog
All logging to be fixed by the next release (all processes will use syslog)
Problems with logs filling up the file system are mitigated by intelligent log size analysis
Interface Rename
Significant improvements in maintaining device names in the face of hardware device replacement
106
Any Questions
107
Lab Exercise 1: Attaching to your XenServer
Exercise 2: Building a PXE Boot & Web Server Exercise 3: Create a Virtual XenServer (Attended install) Exercise 4: Create another Virtual XenServer (Automatic Install) Exercise 5: Create more Virtual XenServers (From Templates) Exercise 6: Unattended XenServer Installs Exercise 7: Attach XenCenter and control licensing Exercise 8: Import a Demo VM
108
Lab (Cont) Exercise 9: Live migrate the Demo VM
Exercise 10: Migrate a VDI Exercise 11: Prepare a XenServer 6.0 pool to perform a rolling pool upgrade Exercise 12: Perform a rolling pool upgrade Exercise 13: VLAN Scalability (Optional) Exercise 14: LACP bonding (Optional)
109
Lab (Cont) Appendix 1: Edit a file using vi
Appendix 2: Restore VM from Template
110
Before you leave …
Conference surveys are available online starting Friday, May 24 at 9:00 a.m. PT
Provide your feedback by 4:00 p.m. PT that day and you’ll receive a $30 Amazon.com gift card
Download presentations starting Monday, June 3, from your My Conference Planning tool located within the My Account section