vSphere 4.1: Delta to 4.0 Tech Sharing for Partners


1 vSphere 4.1: Delta to 4.0 Tech Sharing for Partners
Iwan ‘e1’ Rahabok, Senior Systems Consultant | virtual-red-dot.blogspot.com | tinyurl.com/SGP-User-Group | facebook.com/e1ang August 2010

2 Audience Assumption This is a level 200 - 300 presentation.
It assumes:
Good understanding of vCenter 4, ESX 4, ESXi 4 (preferably hands-on)
We will only cover the delta between 4.1 and 4.0
Overview understanding of related products like VUM, Data Recovery, SRM, View, Nexus, Chargeback, CapacityIQ, vShield Zones, etc.
Good understanding of related storage, server and network technology
Target audience: VMware Specialist (SE + Delivery) from partners

3 Agenda New features Server Storage Network Management Upgrade

4 4.1 New Feature (over 4.0, not 3.5): Server
Features (rated against Design, Cost, Scalability, Performance, Availability, Security, Manageability): ESXi scripted install; ESXi SAN boot; Memory compression; Serial Port Concentrator; USB device support; MS Cluster support; HA Health Check; HA: more VMs per cluster; FT enhancements; DRS/HA/FT integration; FT: enhanced logging.
Notes: Isn't cluster supported in 4.0.1? Compared the 2 manuals closely.
Design here can mean better design, or you can fix/propose things that you couldn't before, or it gives you more options to take on larger or more complex designs. Cost here can mean lower product cost, lower services cost (e.g. reduced effort from the partner) or less effort (if internal IT is doing it). Scalability means you can do more, like more VMs per ESX. Performance means you can do the same thing but faster; for example, backing up a VM is faster.
Memory compression reduces cost: more VMs per ESX means fewer ESX hosts, or a smaller RAM expense. Scripted install improves security as it reduces the risk of variance among installations. ESXi SAN boot improves security as the ESXi config is not stored in a hundred places.
vSphere 4.1 introduces an FT-specific versioning-control mechanism that allows the Primary and Secondary VMs to run on FT-compatible hosts at different but compatible patch levels. vSphere 4.1 differentiates between events that are logged for a Primary VM and those that are logged for its Secondary VM, and reports why a host might not support FT. In addition, you can disable VMware HA when FT-enabled VMs are deployed in a cluster, allowing for cluster maintenance operations without turning off FT. Compare with 4.0.
The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster.

5 4.1 New Feature (over 4.0, not 3.5): Server
Features (rated against Design, Cost, Scalability, Performance, Availability, Security, Manageability): vMotion enhancements; Power management & charts; More VMs per host?; Reduced RAM overhead; Host affinity rules; AD integration; Multi-core VMs; Local/remote console; Total Lockdown Mode; VMware Tools scripting.
Notes: Hyper-V import: without it, migration would be more complex and may require longer downtime.
ESX 4.1 takes advantage of deep sleep states to further reduce power consumption during idle periods. The vSphere Client has a simple user interface that allows you to choose one of four host power management policies. In addition, you can view the history of host power consumption and power cap information on the vSphere Client Performance tab on newer platforms with integrated power meters. Need screenshot and new machine.
Faster vMotion improves management as you spend less time waiting for 10 VMs to complete vMotion as you prepare for hardware maintenance. In some cases, you are given a fixed window to do your maintenance, and you want the 5 or 15 VMs on that host to vMotion as fast as possible.
vSphere 4.1 reduces the amount of overhead memory required, especially when running large VMs on systems with CPUs that provide hardware MMU support (AMD RVI or Intel EPT). vSphere 4.1 includes an AMD Opteron Gen. 3 (no 3DNow!™) EVC mode that prepares clusters for vMotion compatibility with future AMD processors. EVC also provides numerous usability improvements, including the display of EVC modes for VMs, more timely error detection, better error messages, and a reduced need to restart VMs.
VMware Tools now has a command-line interface for scripting.

6 4.1 New Feature (over 4.0, not 3.5): Storage
Features (rated against Design, Cost, Scalability, Performance, Availability, Security, Manageability): API for Array Integration; vscsiStats in ESXi; Storage I/O Control; iSCSI hardware offload; VMware Data Recovery; VADP enhancements; Boot from software iSCSI; Pluggable Storage Architecture; VMFS enhancements; Storage statistics; Paravirtualised SCSI; Improved performance; 8 Gb FC support.
Notes: VMware Data Recovery is also available in 4.0, as it's compatible. VMFS enhancements: minor, transparent to users. There have been many algorithm changes between VMFS 3.33 and 3.46. The VMFS-3.46 driver uses hardware-accelerated locking and hardware-accelerated Storage vMotion, virtual machine provisioning and cold migrate functions on such hardware. This improves the performance and scalability of workloads that require the above functions.
Personally, I find not everyone is 100% convinced of the benefit of iSCSI boot. This is because it mixes storage and network, and can make troubleshooting/support complex.
VADP: VSS support on Win08.
NFS performance improvement. Quantified? NFS networking performance has been optimized to improve throughput and reduce CPU usage.

7 4.1 New Feature (over 4.0, not 3.5): Network
Features Design Cost Scalability Performance Availability Security Manageability Network I/O Control IPv6 Enhancements Load-based Teaming vNIC enhancements Nexus 1000V v2.0 Distributed Switch Nexus is not released yet. vDS: scalability vNIC enhancements: E1000 vNIC supports jumbo frames

8 4.1 New Feature: Management
Component: New Features
vMA: AD authentication
Host Profiles: Cisco, AD, Tech Support Mode
vCLI & PowerShell: A set of new vCLI commands
vCO: 64-bit. Improved performance.
VMware Update Manager: 3rd-party patching, provisioning, upgrading. Push update on critical notifications.
Licence Reporting Manager: Centralised licence key and usage reporting (new).
vCenter: Faster performance, 64-bit, more VMs per host, more hosts per vCenter, bigger vCenter, vCenter Linked Mode with 3x more VMs.
Site Recovery Manager 4.1: Per-VM pricing. IP customization for Windows 7 and Win08 R2. Faster recovery time for iSCSI. 64-bit only. vDS support.
Error Reporting: Submit errors to VMware.com.
Partner plug-ins: Updated vCenter plug-ins from partners (server, storage, etc).
Converter: Convert to thin while converting. Hyper-V import.
Performance Charts: New charts, new counters, especially storage related.
Notes: You can use Host Profiles to roll out administrator password changes in vSphere 4.1. Enhancements also include improved Cisco Nexus 1000V support and PCI device ordering configuration.
Unattended authentication in vSphere Management Assistant (vMA): vMA 4.1 offers improved authentication capability, including integration with AD and commands to configure the connection.
Update Manager 4.1 immediately sends critical notifications about recalled ESX and related patches. In addition, Update Manager prevents you from installing a recalled patch that you might have already downloaded. This feature also helps you identify hosts where recalled patches might already be installed.
The License Reporting Manager provides a centralized interface for all license keys for vSphere 4.1 products in a virtual IT infrastructure and their respective usage. You can view and generate reports on license keys and usage for different time periods with the License Reporting Manager. A historical record of the utilization per license key is maintained in the vCenter database.

9 Builds: ESX build, VC build. Some stats: 4000 development weeks were spent to get to FC; 5100 QA weeks were spent to get to FC; 872 beta customers downloaded and tried it out; 2012 servers, 2277 storage arrays, and 2170 IO devices are already on the HCL.

10 Consulting Services: Kit
The vSphere Fundamentals services kit includes core services enablement materials for vSphere Jumpstarts, Upgrades, Converter/P2V and PoCs. The update reflects what's new in vSphere 4.1, including new resource limits, memory compression, Storage IO Control, vNetwork Traffic Management, and vSphere Active Directory Integration. The kit is intended for use by PSO Consultants, TAMs, and SEs to help with delivering services engagements, PoCs, or knowledge transfer sessions with customers. Located at Partner Central, under Services IP Assets. For delivery partners: please download this.

11 4.1 New Features: Server

12 PXE Boot Retry Virtual Machine -> Edit Settings -> Options -> Boot Options. Failed Boot Recovery is disabled by default. Enable it and set the number of seconds after which the VM automatically retries the boot.
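The same dialog ends up as configuration parameters in the VM's .vmx file. As a minimal sketch, assuming the commonly documented bios.bootRetry options (verify the exact names and units against a .vmx saved by your own 4.1 environment), the two relevant lines look like:

bios.bootRetry.enabled = "TRUE"
bios.bootRetry.delay = "10"

Here "10" corresponds to the "automatically retry boot after X seconds" value in the dialog.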

13 Wide NUMA Support Wide VM
Wide-VM is defined as a VM that has more vCPUs than the available cores on a NUMA node; for example, a 5-vCPU VM on a quad-core server. Only the cores count; hyperthreading threads don't. The ESX 4.1 scheduler introduces wide-VM NUMA support, which improves memory locality for memory-intensive workloads. Based on testing with micro-benchmarks, the performance benefit can be up to 11–17%.
How it works: ESX 4.1 allows wide-VMs to take advantage of NUMA management. NUMA management means that a VM is assigned a home node where memory is allocated and vCPUs are scheduled. By scheduling vCPUs on a NUMA node where memory is allocated, the memory accesses become local, which is faster than remote accesses. For example, an 8-vCPU SMP VM is considered wide on an Intel Xeon 55xx system because the processor has only four cores per NUMA node.
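To see whether a wide VM's memory is actually staying local, resxtop/esxtop is the usual tool. A rough, hedged recipe (field names as I recall them from the 4.x esxtop memory view; check your own output):

esxtop            # or resxtop against an ESXi host
# press 'm' for the memory view, then 'f' and enable the NUMA statistics fields
# columns such as NHN (NUMA home node) and N%L (percentage of memory accessed locally)
# show how well a wide VM's memory locality is being maintained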

14 Enhancements to ESXi. Not applicable to ESX
ESXi was released around 2 years ago. Just sharing my experience as an SE: in this short period of 2 years, the discussions that I have with customers or partners have progressed from "what is ESXi" to "why should we use ESXi" to "we are using or planning to use ESXi". For platform software, it is doing very well, since a platform needs time to build its ecosystem.

15 Transitioning to ESXi ESXi is our architecture going forward
We can say that vSphere 4.1 is the release for ESXi. In this release ESXi takes center stage. 4.1 is our strongest message that we are going toward ESXi as the sole hypervisor. A lot of customers, even some of the largest deployments, have decided to go ESXi going forward. If your customers have not, 4.1 is a good opportunity for you to offer migration services or a hardware refresh. As SEs, we also know that there are some features we wished we had in the 4.0 release. For example, while the remote CLI helps, none of the Linux commands work as the execution context is the vMA OS, not the ESXi kernel, and in some troubleshooting scenarios customers do need to issue Linux commands. Another thing we couldn't do was automated installation and booting from the network.

16 Moving toward ESXi (reference: VMware ESX and ESXi 4.1 Comparison)
How the former Service Console (COS) functions map to ESXi:
Management agents: COS agents on "classic" ESX; agentless, vSphere API-based on ESXi.
Hardware agents: COS agents on "classic" ESX; agentless, CIM-based on ESXi.
Commands for configuration and diagnostics: COS on "classic" ESX; vCLI, PowerCLI and the Local Support Console on ESXi.
Infrastructure service agents: COS on "classic" ESX; native agents (NTP, Syslog, SNMP) on ESXi.
ESXi introduces a new and improved management paradigm. In the new model, functions that were formerly provided by the COS are now provided in different and superior ways. Most functionality, including hardware monitoring and systems management, is now done through standardized interfaces, instead of having to install and maintain agents directly on the ESX host. We enhanced the vSphere APIs to cover all the functionality that you'd typically need an agent for. HW monitoring is done via CIM, which is a standards-based way of capturing and exposing HW health information. The Common Information Model (CIM) is an open standard that defines how managed elements in an IT environment are represented as a common set of objects and relationships between them. This is intended to allow consistent management of these managed elements, independent of their manufacturer or provider.
Local commands and scripting are replaced by remote scripting environments, which provide a much greater degree of control and auditing. vCLI has the closest syntax to the COS, but PowerCLI is a more powerful scripting language that I've seen a lot of ESXi customers really enjoy using. For more advanced debugging and troubleshooting, there is a local support console called Tech Support Mode. Infrastructure agents, such as NTP, DNS, Syslog, SNMP, now run natively in ESXi instead of in the COS.
The benefits of this new model are: better controls, since most activities are done remotely through standardized interfaces, so access controls and auditing are much stronger and more consistent; greater consistency, since there are no local agents to install and maintain, so the possibility of configuration drift is greatly reduced; and light-weight hosts, since the amount of software running on a host is much less, allowing for thin, stateless nodes.
I often hear "customized ESXi image"; what are those and should customers be using them? We allow OEM partners to add their own image customizations to ESXi. Usually when you hear about a customized image, it has the OEM inserts such as HP CIM providers or Dell CIM providers. We do recommend that if you have HP HW, use the HP customized ESXi image for better HW monitoring, because the HP providers will collect much more holistic information about HP HW as they are aware of the HP HW intricacies and special features. The other benefit is that most OEMs have a systems management tool that is a CIM client, which means it can consume the HW health info provided by the CIM providers and display it. For example, the HP SIM product is what you'd use for HP images. Dell has a similar solution.

17 Software Inventory - Connected to ESXi/ESX
Before / From vSphere 4.1: enumerate instances of CIM_SoftwareIdentity. The CIM provider is now written in Python. CIM also plays a major role when it comes to esxupdate, but this is discussed elsewhere in the training class. The screenshot above is ESX, not ESXi.
VMware ESXi provides HW instrumentation through CIM providers. Standards-based CIM providers are distributed with all versions of VMware ESXi. VMware partners include their own proprietary CIM providers in customized versions of VMware ESXi. These customized versions are available either from VMware's web site or the partner's web site, depending on the partner. Remote console applications like Dell DRAC, HP iLO, IBM RSA, and FSC iRMC S2 are supported with ESXi. The enhanced CIM provider now displays great detail on installed software bundles.

18 Software Inventory – Connected to vCenter
Before / From vSphere 4.1: enumerate instances of CIM_SoftwareIdentity. The CIM provider is now written in Python. CIM also plays a major role when it comes to esxupdate, but this is discussed elsewhere in the training class. The enhanced CIM provider now displays great detail on installed software bundles.

19 Additional Deployment Option
Boot From SAN: fully supported in ESXi 4.1; it was only experimentally supported in ESXi 4.0. Boot from SAN is supported for FC, iSCSI, and FCoE. ESX and ESXi have different requirements: iBFT (iSCSI Boot Firmware Table) is required. The host must have an iSCSI-boot-capable NIC that supports the iSCSI iBFT format. iBFT is a method of communicating parameters about the iSCSI boot device to an OS.
One of the most popular requests among customers is to improve the deployment and management of ESXi. First in line, boot from SAN is now fully supported in ESXi 4.1. It was only experimentally supported in ESXi 4.0. Boot from SAN will be supported for FC, iSCSI, and FCoE. For iSCSI and FCoE, it will depend upon hardware qualification, so please check the HCL and Release Notes when vSphere 4.1 is released. Dependent hardware iSCSI means the card depends on VMware networking, and on iSCSI configuration and management interfaces provided by VMware, so properties like IP, MAC, and other parameters used for the iSCSI sessions are configured from the VMware GUI/CLI.
For the ESXi text installer we have a screen that warns if the user is trying to install the image onto an existing datastore. It will not prevent the user from installing if he/she desires to do so. For scripted install, unless the user specifies an override VMFS flag, scripted install will not proceed with installation when a user tries to install on an existing datastore.
We will only support booting a host from a unique LUN. This LUN cannot be shared by other hosts. The user is expected to set proper LUN masking to avoid this scenario; if the LUNs were shared it could result in data corruption.
Copied from a 3rd-party site, on iSCSI SW boot: the only currently supported network card is the Broadcom GbE NIC. When booting from software iSCSI, the boot firmware on the network adapter logs into an iSCSI target. The firmware then saves the network and iSCSI boot parameters in the iBFT, which is stored in the host's memory. Before you can use iBFT you need to configure the boot order in your server's BIOS so the iBFT NIC is first before all other devices. You then need to configure the iSCSI configuration and CHAP authentication in the BIOS of the NIC before you can use it to boot ESXi. The ESXi installation media has special iSCSI initialization scripts that use iBFT to connect to the iSCSI target and present it to the BIOS. Once you select the iSCSI target as your boot device, the installer copies the boot image to it. Once the media is removed and the host rebooted, the iSCSI target is used to boot; the initialization script then runs in first-boot mode, which configures the networking, which afterwards is persistent.

20 Additional Deployment Option
Scripted Installation: numerous choices for installation.
Installer booted from: CD-ROM (default), or Preboot Execution Environment (PXE).
ESXi installation image on: CD-ROM (default), HTTP/S, FTP, NFS.
Script can be stored and accessed: within the ESXi installer ramdisk, on the installation CD-ROM, or via HTTP / HTTPS, FTP, NFS.
Config script ("ks.cfg") can include preinstall, postinstall and first-boot sections. Cannot use scripted installation to install to a USB device.
The second feature we have implemented is more choice during install. We can now do PXE boot, and we can script it too. Scripted Installation, the equivalent of Kickstart, is now available. The installer can boot over the network, and at that point you can also do an interactive installation, or else set it up to do a scripted installation. Both the installed image and the config file (called "ks.cfg") can be obtained over the network using a variety of protocols. There is also an ability to specify preinstall, postinstall, and first-boot scripts. For example, the postinstall script can configure all the host settings, and the first-boot script could join the host to vCenter. These three types of scripts run either in the context of the Tech Support Mode or in Python. The Tech Support Mode shell is a highly stripped-down version of bash. You can start the scripted installation with a CD-ROM drive or over the network by using PXE booting. You cannot use scripted installation to install ESXi to a USB device.

21 PXE Boot Requirements PXE-capable NIC.
DHCP Server (IPv4): use an existing one. Media depot + TFTP server + gPXE. The media depot is a server hosting the entire content of the ESXi media. Protocol: HTTP/HTTPS, FTP, or NFS server. OS: Windows/Linux server.
Info: we recommend the method that uses gPXE. If not, you might experience issues while booting the ESXi installer on a heavily loaded network. TFTP is a light-weight version of the FTP service, and is typically used only for network-booting systems or loading firmware on network devices such as routers.
The media depot is a network-accessible location that contains the ESXi installation media. You can use HTTP/HTTPS, FTP, or NFS to access the depot. The depot must be populated with the entire contents of the ESXi installation DVD, preserving the directory structure. If you are performing a scripted installation, you must point to the media depot in the script by including the install command with the nfs or url option. The following line from an ESXi installation script demonstrates how to format the pointer to the media depot if you are using NFS: install nfs --server=example.com --dir=/nfs3/VMware/ESXi/41

22 PXE boot PXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an OS over the network.
How it works: a host makes a DHCP request to configure its NIC, then downloads and executes a kernel and support files. PXE booting the installer provides only the first step to installing ESXi. To complete the installation, you must provide the contents of the ESXi DVD. Once the ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media must be specified.
The preboot execution environment (PXE) is an environment to boot computers using a network interface, independently of available data storage devices or installed OS. These topics discuss the PXELINUX and gPXE methods of PXE-booting the ESXi installer. PXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an operating system (OS) over a network. Network booting with PXE is similar to booting with a DVD, but it requires some network infrastructure and a machine with a PXE-capable network adapter. Once the ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media (the contents of the ESXi DVD) must be specified. A host first makes a DHCP request to configure its network adapter and then downloads and executes a kernel and support files. PXE booting the installer provides only the first step to installing ESXi. To complete the installation, you must provide the contents of the ESXi DVD either locally or on a networked server through HTTP/HTTPS, FTP, or NFS.
TFTP is a light-weight version of the FTP service, and is typically used only for network-booting systems or loading firmware on network devices such as routers. If you do not use gPXE, you might experience issues while booting the ESXi installer on a heavily loaded network. This is because TFTP is not a robust protocol and is sometimes unreliable for transferring large amounts of data. If you use gPXE, only the gpxelinux.0 binary and configuration file are transferred via TFTP. gPXE enables you to use a web server for transferring the kernel and ramdisk required to boot the ESXi installer. If you use PXELINUX without gPXE, the pxelinux.0 binary, the configuration file, and the kernel and ramdisk are transferred via TFTP.
Setting up a new DHCP server is not recommended if your network already has one. If multiple DHCP servers respond to DHCP requests, machines can obtain incorrect or conflicting IP addresses, or can fail to receive the proper boot information. Seek the guidance of a network administrator in your organization before setting up a DHCP server.
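For reference, a minimal sketch of the two configuration pieces involved, with placeholder addresses and file names. The kernel/module list must be copied from the isolinux.cfg on your own ESXi 4.1 DVD, and the gpxelinux.0 binary comes from the SYSLINUX package, so treat the lines below as illustrative only:

# ISC dhcpd.conf fragment: point PXE clients at the TFTP server and the gPXE-enabled PXELINUX binary
subnet 192.168.48.0 netmask 255.255.255.0 {
  range 192.168.48.100 192.168.48.200;
  next-server 192.168.48.10;
  filename "gpxelinux.0";
}

# pxelinux.cfg/default: boot the ESXi installer; a ks= option can point to the scripted-install file
default install
label install
  kernel mboot.c32
  append vmkboot.gz ks=http://192.168.48.10/ks.cfg --- vmkernel.gz --- sys.vgz --- cim.vgz --- ienviron.vgz --- install.vgz

Verify the module names and the ks= syntax against the vSphere 4.1 setup guide before using this in an engagement.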

23 Additional Deployment Option
Scripted Installation, the equivalent of Kickstart, will be supported on ESXi The installer can boot over the network, and at that point you can also do an interactive installation, or else set it up to do a scripted installation. Both the installed image and the config file (called “ks.cfg”) can be obtained over the network using a variety of protocols. There is also an ability to specify preinstall, postinstall, and first-boot scripts. For example, the postinstall script can configure all the host settings, and the first boot script could join the host to vCenter. These three types of scripts run either in the context of the Tech Support Mode shell (which is a highly stripped down version of bash) or in Python.

24 Sample ks.cfg file
# Accept the EULA (End User Licence Agreement)
vmaccepteula
# Set the root password to vmware123
rootpw vmware123
# Install the ESXi image from CD-ROM
install cdrom
# Auto-partition the first disk; if a VMFS exists it will be overwritten
autopart --firstdisk --overwritevmfs
# Create a partition called Foobar on the disk identified as vmhba1:C0:T1:L0, growing to a max size of 4000
partition Foobar --ondisk=mpx.vmhba1:C0:T1:L0 --grow --maxsize=4000
# Set up the management network on vmnic0 using DHCP
network --bootproto=dhcp --device=vmnic0 --addvmportgroup=0
%firstboot --unsupported --interpreter=busybox
# On this first boot, save the current date to a temporary file
date > /tmp/foo
# Mount an NFS share and present it at /vmfs/volumes/www (replace <nfs-server> with your NFS host)
esxcfg-nas --add --host <nfs-server> --share /var/www www
The firstboot scripts are run as initscripts. All initscripts have a numerical part in their filenames. They are sorted by that numerical part to determine the order in which they are run, so a script with "90.1" would run after a script with "90.0" and before a script with "90.2".

25 Full Support of Tech Support Mode
There you go  2 types Remote: SSH Local: Direct Console Finally, the Tech Support Mode is fully supported. We support both the local, when you are in front of the server, or remote, when you are using SSH. In ESXi 4.0, Tech Support Mode usage was ambiguous. We stated that you should only use it with guidance from VMware Support, but VMware also issued several KBs telling customers how to use it. Getting into Tech Support Mode was also not very user-friendly. The warning not to use TSM has been removed from the login screen. However, anytime TSM is enabled (either local or remote), a warning banner will appear in vSphere Client for that host. This is meant to reinforce the recommendation that TSM only be used for fixing problems, not on a routine basis. The SysAdminTools URL in the message above will take you to vMA, PowerCLI, CLI, etc.

26 Full Support of Tech Support Mode
Enter to toggle. That's it!
Disable/Enable: a timeout automatically disables TSM (local and remote); running sessions are not terminated. All commands issued in Tech Support Mode are sent to syslog.
To enable or disable from the console, it's pretty straightforward. By default, after you enable TSM (both local and remote), it will automatically become disabled after 10 minutes. This time is configurable, and the timeout can also be disabled entirely. When TSM times out, running sessions are not terminated, allowing you to continue a debugging session. All commands issued in TSM are logged by hostd and sent to syslog, allowing for an incontrovertible audit trail.
When lockdown mode is enabled, DCUI access is restricted to the root user (so root can still go in), while access to Tech Support Mode is completely disabled for all users. With lockdown mode enabled, access to the host for management or monitoring using CIM is possible only through vCenter. Direct access to the host using the vSphere Client is not permitted.

27 Full Support of Tech Support Mode
Recommended uses Support, troubleshooting, and break-fix Scripted deployment preinstall, postinstall, and first boot scripts Discouraged uses Any other scripts Running commands/scripts periodically (cron jobs) Leaving open for routine access or permanent SSH connection Admin will be notified when active As you know, the tech support mode is not for day to day use. So anytime it is enabled, we will flag it.

28 Full Support of Tech Support Mode
We can also enable it via GUI Can enable in vCenter or DCUI We can also enable it via the GUI. You select the ESXi you want to manage, then click on the “Configuration” tab. From here, click on the “Security Profile”. Clicking on the properties brings up this dialog box. From here, we can stop and start the relevant services. Enable/Disable

29 Security Banner A message that is displayed on the direct console Welcome screen.
Procedure:
1. Log in to the host from the vSphere Client.
2. From the Configuration tab, select Advanced Settings.
3. From the Advanced Settings window, select Annotations.
4. Enter a security message.
The message is displayed on the direct console Welcome screen.
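If you prefer to script this, the same value can be set as an advanced configuration option. A hedged sketch from Tech Support Mode, assuming the option is the WelcomeMessage entry under the Annotations group shown in the dialog (verify the exact option name in Advanced Settings first):

esxcfg-advcfg -s "Authorized administrators only. All activity is logged." /Annotations/WelcomeMessage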

30 Total Lockdown There is now an ability to totally lock down a host.
Lockdown mode in ESXi 4.1 forces all remote access to go through vCenter. So Lockdown mode is only available on ESXi hosts that have been added to vCenter.

31 Total Lockdown Ability to totally control local access via vCenter
DCUI Lockdown Mode (disallows all access except root on the DCUI); Tech Support Mode (local and remote). If all are configured, then no local activity is possible (except pulling the plugs).
The only local access is for root to access the DCUI; this could be used, for example, to turn off lockdown mode in case vCenter is down. However, there is an option to disable the DCUI in vCenter. In this case, with Lockdown Mode turned on, there is no possible way to manage the host directly; everything must be done through vCenter. If vCenter is down, the only recourse in this case is to reimage the box. Of course, Lockdown Mode can be selectively disabled for a host if there is a need to troubleshoot or fix it via TSM, and then enabled again.

32 Additional commands in Tech Support Mode
vscsiStats is now available in the console. The output is raw data for histograms; use a spreadsheet to plot the histogram. Some use cases: identify whether IOs are sequential or random; optimizing for IO sizes; checking for disk mis-alignment; looking at storage latency in more detail.
vscsiStats has also been ported and is now available directly in the ESXi console. It is an advanced command, and can be used to identify IO patterns.
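As a rough sketch of the typical workflow (option names as I recall them from the 4.x vscsiStats; run vscsiStats -h on your host to confirm):

# List world group IDs (one per VM) and their virtual disk handles
vscsiStats -l
# Start collection for one VM, identified by its world group ID
vscsiStats -s -w <worldGroupID>
# After the workload has run, print histograms; paste the output into a spreadsheet to plot
vscsiStats -p ioLength -w <worldGroupID>       # IO size distribution
vscsiStats -p seekDistance -w <worldGroupID>   # sequential vs. random
vscsiStats -p latency -w <worldGroupID>        # latency distribution
# Stop collection when done
vscsiStats -x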

33 Additional commands in Tech Support Mode
Additional commands for troubleshooting: nc (netcat), tcpdump-uw. Other useful utilities for troubleshooting have been added to TSM.
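Two quick examples of how these are typically used; the interface name and IP address below are placeholders:

# Capture management-network traffic on the vmkernel port (vmk0 assumed here), excluding the SSH session itself
tcpdump-uw -i vmk0 -n not port 22
# Check TCP reachability from the host, e.g. to a vCenter server on port 902
nc -z 192.0.2.50 902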

34 More ESXi Services listed
More services are now shown in the GUI, for ease of control. For example, if SSH is not running, you can turn it on from the GUI. ESXi 4.0 ESXi 4.1

35 ESXi Diagnostics and Troubleshooting
During normal operations: If things go wrong: DCUI: misconfigs / restart mgmt agents vCenter vCLI vSphere APIs TSM: Advanced troubleshooting (GSS) ESXi Remote Access Local Access

36 Common Enhancements for both ESX and ESXi
64-bit User World: running VMs with very large memory footprints implies that we need a large address space for the VMX. 32-bit user worlds (VMX32) do not have sufficient address space for VMs with large memory; 64-bit user worlds overcome this limitation.
NFS: the number of NFS volumes supported is increased from 8 to 64.
Fibre Channel: end-to-end support for 8 Gb (HBA, switch & array).
VMFS: version changed to 3.46. No customer-visible changes. Changes relate to algorithms in the vmfs3 driver to handle the new VMware APIs for Array Integration (VAAI).
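The higher NFS limit is not automatic; the per-host maximum is still governed by an advanced setting. A hedged sketch, assuming the NFS.MaxVolumes option (VMware guidance typically pairs it with larger TCP/IP heap settings, so check the configuration-maximums and NFS best-practice documents, and note a host reboot may be needed):

# Raise the NFS datastore limit from the old default of 8 toward the new 4.1 maximum of 64
esxcfg-advcfg -s 64 /NFS/MaxVolumes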

37 Common Enhancements for both ESX and ESXi
VMkernel TCP/IP Stack Upgrade Upgraded to version based on BSD 7.1. Result: improving FT logging, VMotion and NFS client performance. Pluggable Storage Architecture (PSA) New naming convention. New filter plugins to support VAAI (vStorage APIs for Array Integration). New PSPs (Path Selection Policies) for ALUA arrays. New PSP from DELL for the EqualLogic arrays.

38 New Features for both ESX/ESXi
USB pass-through New Features for both ESX/ESXi

39 USB Devices 2 steps: Add USB Controller Add USB Devices

40 USB Devices Only devices listed in the manual are supported.
Mostly ISV licence dongles, plus a few external USB drives. Limited list of devices for now.

41 Example 1 After vMotion, the VM will be on another (remote) ESXi host. Inter-ESXi communication will use the management network (ESXi has no SC network). You cannot multi-select devices at this stage; add them one by one. Source: passthrough-in-vsphere-4-1/

42 Example 1 From the source
"I have tested numerous brands of USB mass storage devices (Kingston, Sandisk, Lexar, Imation) as well as a couple of security dongles and they all work well."

43 Example 2: adding UPS Source: using-usb-pass-through-in-vsphere-4-1/

44 Example 2 Source: using-usb-pass-through-in-vsphere-4-1/

45 USB Devices: Supported Devices
Device Model: Device Display Name
SafeNet Sentinel Software Protection Dongle (purple): Rainbow SafeNet Sentinel
SafeNet Sentinel Software Protection SuperPro Dongle (gray): Rainbow USB UltraPro
SecuTech Unikey Software Protection Dongle: Future Devices HID UNIKEY
MAI KEYLOK II Software Protection Dongle: Microcomputer Applications USB Device MAI KEYLOK
Fortress Software Protection Dongle (designed for Windows): Philips KEYLOK Device. Note: it is not designed for Linux systems. If you connect it to Linux systems, the connection resets frequently and can cause unexpected behavior.
Aladdin HASP HL Drive: Aladdin Knowledge HASP HL 3.21, Kingston drive
Aladdin HASP HL Basic Software Protection Dongle: Aladdin Knowledge HASP HL 3.21
Aladdin HASP HL Pro Software Protection Dongle
Aladdin HASP HL Max Software Protection Dongle
Aladdin HASP HL Net Software Protection Dongle
Aladdin HASP HL NetTime Software Protection Dongle
Kingston DataTraveler 101 II 4GB: Toshiba DT 101 II
Lexar JD FireFly 2GB: Lexar Media JD FireFly
Western Digital My Passport Essential 250GB 2.5 HDD: Western Digital External
Cables To Go USB Port Hub Model# 29560: Not applicable

46 USB Devices Up to 20 devices per VM. Up to 20 devices per ESX host.
1 device can only be owned by 1 VM at a given time; no sharing.
Supported: vMotion (communication via the management network), DRS.
Unsupported: DPM (DPM is not aware of the device and may turn the host off, which may cause loss of data; so disable DRS for this VM so it stays on this host only), Fault Tolerance.
Design consideration: take note of the situation when the ESX host is not available (planned or unplanned downtime).
You can add multiple USB devices, such as security dongles and mass storage devices, to a VM that resides on an ESX/ESXi host to which the devices are physically attached. Knowledge of device components and their behavior, VM requirements, feature support, and ways to avoid data loss can help make USB device passthrough from an ESX/ESXi host to a VM successful. When you attach a USB device to a physical host, the device is available only to VMs that reside on that host. Those VMs cannot connect to a device on another host in the datacenter. A USB device is available to only one VM at a time. When you remove a device from a virtual machine, it becomes available to other VMs that reside on the host.
USB Arbitrator: manages connection requests and routes USB device traffic. The arbitrator is installed and enabled by default on ESX/ESXi hosts. It scans the host for USB devices and manages device connection among VMs that reside on the host. It routes device traffic to the correct VM instance for delivery to the guest OS. The arbitrator monitors the USB device and prevents other VMs from using it until you release it from the VM it is connected to. If vCenter polling is delayed, a device that is connected to one virtual machine might appear as though it is available to add to another virtual machine. In such cases, the arbitrator prevents the second VM from accessing the USB device.
USB Controller: the USB hardware chip that provides USB function to the USB ports that it manages. The virtual USB controller is the software virtualization of the USB host controller function in the VM. USB controller hardware and modules that support USB 2.0 and USB 1.1 devices must exist on the host. Only one virtual USB controller is available to each VM. The controller supports multiple USB 2.0 and USB 1.1 devices in the VM. The controller must be present before you can add USB devices to the VM. The USB arbitrator can monitor a maximum of 15 USB controllers. Devices connected to controllers numbered 16 or greater are not available to the VM.
Before you hot-add memory, CPU, or PCI devices, you must remove any USB devices. Hot-adding these resources disconnects USB devices, which might result in data loss. Before you suspend a VM, make sure that a data transfer is not in progress. During the suspend/resume process, USB devices behave as if they have been disconnected, then reconnected. Also, if you use vMotion to migrate a VM away from the host that the USB device is attached to, it won't be reconnected when the VM is resumed.
For compound devices, the virtualization process filters out the USB hub so that it is not visible to the virtual machine. The remaining USB devices in the compound appear to the VM as separate devices. You can add each device to the same VM or to different VMs if they reside on the same host.

47 New Features for both ESX/ESXi
MS AD integration New Features for both ESX/ESXi

48 AD Service Provides authentication for all local services
vSphere Client; other access based on the vSphere API; DCUI; Tech Support Mode (local and remote).
Has nominal AD groups functionality: members of the "ESX Admins" AD group have Administrative privilege. Administrative privilege includes: full Administrator role in the vSphere Client and vSphere API clients; DCUI access; Tech Support Mode access (local and remote).
Another feature that was requested a lot is integration with MS AD. This further simplifies the management of vSphere as we can now be consistent with vCenter. AD integration provides authentication for all local services. This means access via the Admin Client, via the console, and via remote console are all based on AD. ESX and ESXi should integrate with MS AD for all user authentication. This effectively removes static information from the ESX host and enables the "plug and play" and "stateless appliance" concepts. Customers do not want to manage user accounts on ESX or ESXi because it is additional work compared to what they would do in a physical environment. It lowers the opex of managing a VI environment and also competitively positions our platform against Hyper-V, which can do this today. Customers don't want to rely on VC for these functions due to HA of VC.

49 The Likewise Agent ESX uses an agent from Likewise to connect to MS AD and to authenticate users with their domain credentials. The agent integrates with the VMkernel to implement the mapping for applications such as the logon process (/bin/login) which uses a pluggable authentication module (PAM). As such, the agent acts as an LDAP client for authorization (join domain) and as a Kerberos client for authentication (verify users). The vMA appliance also uses an agent from Likewise. ESX and vMA use different versions of the Likewise agent to connect to the Domain Controller. ESX uses version 5.3 whereas vMA uses version 5.1. Likewise/VMware Press Release: Good likewise reference doc: The Likewise agents are intended to be entirely transparent to the customer. As such, there are no visible logs or events that they generate. Their logs are in /var/log/ though, so it is possible to access them with the ESX Shell if you need to debug issues.

50 Joining AD: Step 1 So how do we do it?
One way is to select the ESX that you want to add to AD, and choose the “Configuration” tab. From this page, choose the “authentication service” link. Click on the properties link, the dialog box shown on the next slide is shown.

51 Joining AD: Step 2 1. Select “AD” 2. Click “Join Domain”
3. Join the domain. Use the full domain name (e.g. @123.com).
From the dialog box that pops up, select "AD" from the drop-down, then specify the domain name and click "Join Domain". The next dialog box will pop up to let you enter an ID which is allowed to join a domain. Click the Join Domain button to join the domain. If there is an error, an error message will be displayed. If not, the ESXi host will join the domain.

52 AD Service A third method for joining ESX/ESXi hosts and enabling Authentication Services to utilize AD is to configure it through Host Profiles I guess a question from customer will be how they can do this automatically, if they have a lot of ESXi and not enough Sys Admin to manage all these things. We have enhanced our host profile. Here is the screen where we can configure the same info in the host profiles.

53 AD Likewise Daemons on ESX
lwiod is the Likewise I/O Manager service: I/O services for communication. Launched from the /etc/init.d/lwiod script.
netlogond is the Likewise Site Affinity service: detects the optimal AD domain controller, global catalog and data caches. Launched from the /etc/init.d/netlogond script.
lsassd is the Likewise Identity & Authentication service. It does authentication, caching and idmap lookups. This daemon depends on the other two daemons running. Launched from the /etc/init.d/lsassd script.
root Dec08 ? :00:00 /sbin/lsassd --start-as-daemon
root Dec08 ? :00:00 /sbin/lwiod --start-as-daemon
root Dec08 ? :00:02 /sbin/netlogond --start-as-daemon
The 'ps' listing above is taken from an ESX server. The listing would be different on an ESXi server, but the daemons are the same. Idmaps are identity mappings (IDMAP) of Windows security identifiers (SIDs) to UNIX UIDs and GIDs; lsassd is responsible for determining this mapping relationship.
What is an AD global catalog? The global catalog is a distributed data repository that contains a searchable, partial representation of every object in every domain in a multidomain AD forest. The global catalog is stored on domain controllers that have been designated as global catalog servers and is distributed through multimaster replication. Searches that are directed to the global catalog are faster because they do not involve referrals to different domain controllers. The Likewise input-output service (lwiod) communicates over SMB with SMB servers.

54 ESX Firewall Requirements for AD
Certain ports in the SC are automatically opened in the Firewall Configuration to facilitate AD. Not applicable to ESXi.
Before
Port 88: Kerberos authentication
Port 123: NTP
Port 137: NetBIOS Name Service
Port 139: NetBIOS Session Service (SMB)
Port 389: LDAP
Port 445: MS-DS AD, Windows shares (SMB over TCP)
Port 464: Kerberos change/password changes
Port 3268: Global Catalog search
After

55 Time Sync Requirement for AD
Time must be in sync between the ESX/ESXi server and the AD server. For the Likewise agent to communicate over Kerberos with the domain controller, the clock of the client must be within the domain controller's maximum clock skew, which is 300 seconds, or 5 minutes, by default. The recommendation is that they share the same NTP server.
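As a quick illustration of that recommendation (host names are placeholders): on classic ESX you can point the Service Console at the same NTP source as the domain controllers, while on ESXi the equivalent is done from the vSphere Client (Configuration > Time Configuration) or with vicfg-ntp from vCLI/vMA.

# Classic ESX Service Console, sketch only
echo "server ntp.example.com" >> /etc/ntp.conf
service ntpd restart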

56 vSphere Client Now when assigning permissions to users/groups, the list of users and groups managed by AD can be browsed by selecting the Domain.

57 Info in AD The host should also be visible on the Domain Controller in the AD Computers objects listing. Looking at the ESX Computer Properties shows a Name of RHEL (as it the Service Console on the ESX) & Service pack of ‘Likewise Identity 5.3.0’ These view are taken from the AD Domain Controller. Note that when an ESX/ESXi leaves domain, status not reflected in Computer Object on DC. When a Windows host leaves the domain, it leaves the host in the AD Computers entry but places a red X against it. When an ESX leaves the domain, the status of the ESX host in the AD Computers doesn't change . A feature request has been filed to change this behaviour to be similar to windows hosts - Also, a utility called dsquery could be used - dsquery computer -inactive 8 -limit 0 - query for ‘computers’ that haven’t logged into AD in 8 weeks (or less if they want).  They could then look for ESX servers showing up and delete them if they want. (

58 New Features for both ESX/ESXi
Memory Compression New Features for both ESX/ESXi

59 Memory Compression The VMkernel implements a per-VM compression cache to store compressed guest pages. When a guest page (4 KB page) needs to be swapped, the VMkernel will first try to compress the page. If the page can be compressed to 2 KB or less, the page will be stored in the per-VM compression cache. Otherwise, the page will be swapped out to disk. If a compressed page is accessed again by the guest, the page will be decompressed online.
The idea of memory compression is very straightforward: if the swapped-out pages can be compressed and stored in a compression cache located in main memory, the next access to the page only causes a page decompression, which can be an order of magnitude faster than a disk access. With memory compression, only a few uncompressible pages need to be swapped out if the compression cache is not full. This means the number of future synchronous swap-in operations will be reduced. Hence, it may improve application performance significantly when the host is under heavy memory pressure. In ESX 4.1, only the swap candidate pages will be compressed. This means ESX will not proactively compress guest pages when host swapping is not necessary. In other words, memory compression does not affect workload performance when host memory is undercommitted.
3.5.1 Reclaiming Memory Through Compression. Figure 8 illustrates how memory compression reclaims host memory compared to host swapping. Assuming ESX needs to reclaim two 4KB physical pages from a VM through host swapping, pages A and B are the selected pages (Figure 8a). With host swapping only, these two pages will be directly swapped to disk and two physical pages are reclaimed (Figure 8b). However, with memory compression, each swap candidate page will be compressed and stored using 2KB of space in a per-VM compression cache. Note that page compression is much faster than the normal page swap-out operation, which involves a disk I/O. Page compression will fail if the compression ratio is less than 50%, and the uncompressible pages will be swapped out. As a result, every successful page compression is accounted for as reclaiming 2KB of physical memory. As illustrated in Figure 8c, pages A and B are compressed and stored as half-pages in the compression cache. Although both pages are removed from VM guest memory, the actual reclaimed memory size is one page. If any of the subsequent memory accesses miss in the VM guest memory, the compression cache will be checked first using the host physical page number. If the page is found in the compression cache, it will be decompressed and pushed back to the guest memory. This page is then removed from the compression cache. Otherwise, the memory request is sent to the host swap device and the VM is blocked. The per-VM compression cache is accounted for by the VM's guest memory usage, which means ESX will not allocate additional host physical memory to store the compressed pages. The compression cache is transparent to the guest OS. Its size starts at zero when host memory is undercommitted and grows when VM memory starts to be swapped out. If the compression cache is full, one compressed page must be replaced in order to make room for a new compressed page. An age-based replacement policy is used to choose the target page. The target page will be decompressed and swapped out. ESX will not swap out compressed pages.
If the pages belonging to compression cache need to be swapped out under severe memory pressure, the compression cache size is reduced and the affected compressed pages are decompressed and swapped out. The maximum compression cache size is important for maintaining good VM performance. If the upper bound is too small, a lot of replaced compressed pages must be decompressed and swapped out. Any following swap-ins of those pages will hurt VM performance. However, since compression cache is accounted for by the VM’s guest memory usage, a very large compression cache may waste VM memory and unnecessarily create VM memory pressure especially when most compressed pages would not be touched in the future. In ESX 4.1, the default maximum compression cache size is conservatively set to 10% Note that this paper is written based on ESX4.0 memory management paper. Besides the new content introduced in ESX4.1, e.g., memory compression, quite a few places have been updated to represent the state of the art of ESX memory management.

60 Changing the value of cache size
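This slide presumably shows where the cache size is changed in the vSphere Client's advanced memory settings. As a hedged pointer, the compression cache ceiling described in the previous slide's notes (default 10% of the VM's memory) is exposed as an advanced option in the Mem group; the option name below is my recollection, so confirm it under Advanced Settings > Mem before changing anything:

# View, then change, the maximum compression cache percentage (default 10)
esxcfg-advcfg -g /Mem/MemZipMaxPct
esxcfg-advcfg -s 20 /Mem/MemZipMaxPct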

61 Virtual Machine Memory Compression
Virtual Machine -> Resource Allocation Per-VM statistic showing compressed memory

62 Monitoring Compression
3 new counters are introduced to monitor compression, at the host level, not the VM level. What does the counter _really_ mean, as it's an _average_ of a _rate_?

63 Power Management

64 Power consumption chart
Per ESX host, not per cluster. Needs hardware integration; different HW makers provide different info.

65 Performance Graphs – Power Consumption
We can now track the power consumption of VMs in real time. Enabled through Software Settings -> Advanced Settings -> Power -> Power.ChargeVMs. Esxtop also has a power view ("p").
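A hedged note on scripting this: since Power.ChargeVMs is an advanced option, it can also be toggled from the command line. The value "1" to enable is my assumption, so check the option's description in Advanced Settings first:

esxcfg-advcfg -s 1 /Power/ChargeVMs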

66 Host power consumption
In some situations, you may need to edit /usr/share/sensors/vmware to get support for the host. Different HW makers have different APIs.
VM power consumption: experimental, off by default. The feature of displaying per-VM power consumption is experimental and off by default. It can be turned on with an advanced config option as described on the previous slide. The per-VM power consumption feature is dependent on the host power consumption feature.

67 Features only for ESX (not ESXi)

68 ESX: Service Console firewall
Changes in ESX 4.1: ESX 4.1 introduces these additional configuration files located in /etc/vmware/firewall/chains: usercustom.xml and userdefault.xml.
Relationship between the files: the "user" files overwrite the defaults. The default files custom.xml and default.xml are overridden by usercustom.xml and userdefault.xml. All configuration is saved in usercustom.xml and userdefault.xml. Copy the original custom.xml and default.xml files and use them as a template for usercustom.xml and userdefault.xml.
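A minimal sketch of that last step from the ESX Service Console (file names from the slide, paths as stated above):

cd /etc/vmware/firewall/chains
cp custom.xml usercustom.xml
cp default.xml userdefault.xml
# edit usercustom.xml / userdefault.xml; these user* files override custom.xml and default.xml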

69 Cluster HA, FT, DRS & DPM HA and DRS have always been the popular features among our customers. I have quite a number of customers who found that HA is good enough for their SLA and moved from MS clustering. In the 4.1, we have a couple of enhancements in these main features.

70 Availability Feature Summary
HA and DRS cluster limitations; High Availability (HA) diagnostic and reliability improvements; FT enhancements; vMotion enhancements (performance, usability, enhanced feature compatibility); VM-host affinity (DRS); DPM enhancements; Data Recovery enhancements.
Notes: Give tips on HA. Types of cluster: prod, DMZ, tier 2, IT cluster, non-prod, desktop; why the minimum host count is 4.
This slide gives a summary of the new enhancements. As customers adopt more and more virtualisation, we are entering the phase where mission-critical workloads are virtualised. With all these enhancements in 4.1, customers may be tempted to create large clusters and put everything there. By large I mean either a large number of nodes, or a lot of VMs in a single cluster. Personally, I still prefer the traditional approach, where a cluster is really the building block, so we have multiple clusters, each with a distinct purpose. From the list above, something that I think customers will appreciate is the

71 DRS: more HA-awareness
vSphere 4.1 adds logic to prevent an imbalance that may not be good from an HA point of view. Example: 20 small VMs and 2 very large VMs, on 2 ESXi hosts, where the 2 large VMs have roughly the same collective workload as the 20 small ones. vSphere 4.0 may put the 20 small VMs on Host A and the 2 very large VMs on Host B. From an HA point of view, this creates risk when Host A fails. vSphere 4.1 will try to balance the number of VMs.
In the past, customers reported that they very occasionally saw DRS "get it wrong" in the sense that DRS would move VMs based on purely performance criteria with scant regard for availability. What this means is that in the past it was possible (if somewhat unlikely) for DRS to place 20 VMs on one ESX host and only 8 VMs on another. While that may have been a good idea from a performance standpoint, it could lead to scenarios where DRS itself created an "eggs in one basket" situation, as DRS didn't distribute VMs to prevent one ESX host from becoming more populated (with a bigger VM count) than another. In this scenario, DRS would have to carry out vMotions to free up resources so HA can power on a VM.

72 HA and DRS Cluster Improvements
Increased cluster limits: cluster limits are now unified for HA and DRS clusters, with increased limits for VMs/host and VMs/cluster. Cluster limits for HA and DRS: 32 hosts/cluster; 320 VMs/host (regardless of the number of hosts/cluster); 3000 VMs/cluster. Note that these limits also apply to post-failover scenarios. Be sure that these limits will not be violated even after the maximum configured number of host failovers.
32 hosts/cluster (same as vSphere 4.0). 320 VMs/host regardless of the number of hosts/cluster (vSphere 4.0 had different VMs/host limits for DRS vs. HA and also for different numbers of hosts in the HA cluster). 3000 VMs/cluster (vSphere 4.0 had a limit of 1280).

73 HA and DRS Cluster Limit
5-host cluster, tolerate 1 host failure. vSphere 4.1 supports 320 VMs/host. Does the cluster support 320x5 VMs? No: the cluster can only support 320x4 VMs. 5-host cluster, tolerate 2 host failures: does it support 320x5 VMs/cluster? No: the cluster can only support 320x3 VMs.
The HA limits are post-failover limits, meaning you have to make sure that you will be running no more than 320 VMs/host even after failover. Take a 5-host cluster set up to tolerate 1 host failure. From the limits you might think that you can run 320x5 = 1600 VMs in the cluster. No, because if one host fails, then that host's 320 VMs will be spread out over the remaining 4 hosts, making them run more than 320 per host, which is not supported. So what you do is take the size of the cluster minus the number of host failures you can tolerate, which is 4 hosts; so you can run 320x4 = 1280 VMs in that cluster. If it's the same cluster and it needs to tolerate 2 host failures, you can only run 320x3 = 960 VMs in the cluster.
Any host that joins the cluster must communicate with an existing primary host to complete its configuration (except when you are adding the first host to the cluster). At least one primary host must be functional for VMware HA to operate correctly. If all primary hosts are unavailable (not responding), no hosts can be successfully configured for VMware HA. This can result from host operations such as exiting standby mode, exiting maintenance mode, or being added to the cluster. You should consider this limit of five primary hosts per cluster when planning the scale of your cluster. Also, if your cluster is implemented in a blade server environment, place no more than four primary hosts in a single blade chassis. One of the primary hosts is also designated as the active primary host and its responsibilities include: deciding where to restart VMs; keeping track of failed restart attempts; determining when it is appropriate to keep trying to restart a VM. If the active primary host fails, another primary host replaces it.

74 HA Diagnostic and Reliability Improvements
HA Healthcheck Status: HA provides an ongoing healthcheck facility to ensure that the required cluster configuration is met at all times. Deviations result in an event or alarm on the cluster.
Improved HA-DRS interoperability during HA failover: DRS will perform vMotion to free up contiguous resources (i.e. on one host) so that HA can place a VM that needs to be restarted.
The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status, which displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster. In addition, we now have improved integration between DRS (which is a load balancing feature) and HA (which is an availability feature). Is DRS now aware of HA slot size? Is slot size not used in %?

75 HA Diagnostic and Reliability Improvements
HA Operational Status: displays more information about the current HA operational status, including the specific status and errors for each host in the HA cluster. It shows if the host is Primary or Secondary!
VMware HA Healthcheck and Operational Status: the VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status, which displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster. What are the things it checks?

76 HA Operational Status Just another example 

77 HA: Application Awareness
Application Monitoring can restart a VM if the heartbeats for an application it is running are not received. APIs are exposed for 3rd-party app developers.
Application Monitoring works much the same way as VM Monitoring: if the heartbeats for an application are not received for a specified time via VMware Tools, its VM is restarted. For Application Monitoring, developers would develop application monitoring agents using the Application Monitoring SDK for specific applications running in the VM. There is support added in VMware Tools for an application to report its heartbeat/status. This gets communicated to vCenter as an "AppHeartbeatStatus" (similar to the "GuestHeartbeatStatus"). HA can respond to that by going red, indicating that the application has died. Thus, Application Monitoring works for those applications that use the new VMware Tools capability along with an application monitoring agent to report application status.
To enable Application Monitoring: obtain the SDK from VMware (this is for the ISV, not end customers) and use it to set up customized heartbeats for the applications you want to monitor.

78 Fault Tolerance

79 FT Enhancements FT fully integrated with DRS
DRS load balances FT Primary and Secondary VMs. EVC is required. Versioning control lifts the requirement for ESX build consistency: the Primary VM can run on a host with a different build number than the Secondary VM. Events for the Primary VM and the Secondary VM are differentiated and logged/stored separately. (Diagram: FT Primary VM, FT Secondary VM, Resource Pool.) DRS Interoperability for VMware HA and FT: FT VMs can take advantage of DRS functionality for load balancing and initial placement. In addition, VMware HA and DRS are tightly integrated, which allows VMware HA to restart VMs in more situations. EVC is required so that DRS can determine where FT VMs can be placed. I recommend that you turn on EVC even if initially your cluster consists of identical hosts; this gives you flexibility when you are adding new hardware in the future. vSphere 4.1 introduces an FT-specific versioning-control mechanism that allows the Primary and Secondary VMs to run on FT-compatible hosts at different but compatible patch levels; before this, they had to be identical. vSphere 4.1 differentiates between events that are logged for a Primary VM and those that are logged for its Secondary VM, so you can tell which FT VM generated the event, and it reports why a host might not support FT. In addition, you can disable VMware HA when FT VMs are deployed in a cluster, allowing for cluster maintenance operations without turning off FT. FT network logging performance has improved in terms of throughput and reduced CPU usage, and you can now use vmxnet3 vNICs in FT-enabled VMs.

80 No data-loss Guarantee
vLockstep: the Secondary VM runs one CPU step behind the Primary. Primary/backup approach: a common approach to implementing fault-tolerant servers is the primary/backup approach, where the execution of a primary server is replicated by a backup server. Given that the primary and backup servers execute identically, the backup server can take over serving client requests without any interruption or loss of state if the primary server fails. Since the hypervisor has full control over the execution of a VM, including delivery of all inputs, it is able to capture all the necessary information about non-deterministic operations on the primary VM and to replay these operations correctly on the backup VM. The tagging scheme does not introduce any significant delay in the replaying VM, since the hypervisor of the recording (primary) VM guarantees that the last log entry of each single instruction emulation or device operation is marked as a go-live point. Since the backup VM cannot be significantly delayed, the primary VM is also not affected by the use of go-live points.

81 New versioning feature
FT now has a version number to determine compatibility. The restriction of requiring identical ESX build numbers has been lifted: FT now checks its own version number to determine compatibility. Future versions might be compatible with older ones, but possibly not vice versa. Additional information in the vSphere Client: the FT version is displayed in the host Summary tab, along with the number of FT-enabled VMs. For hosts prior to ESX/ESXi 4.1, this tab lists the host build number instead. FT versions are included in the vm-support output (/etc/vmware/ft-vmk-version: product-version = , build = , ft-version = 2.0.0). Patches can cause host build numbers to vary between ESX and ESXi installations. To ensure that your hosts are FT compatible, do not mix ESX and ESXi hosts in an FT pair.

82 FT logging improvements
FT traffic was bottlenecked at 2 Gbit/s even on 10 Gbit/s pNICs. This has been improved by implementing a ZeroCopy feature for FT traffic, for sending (Tx) only: instead of copying from the FT buffer into the pNIC/socket buffer, just a link to the memory holding the data is transferred, and the driver accesses the data directly, so no copy is needed.

83 FT: unsupported vSphere features
Snapshots: snapshots must be removed or committed before FT can be enabled on a VM, and it is not possible to take snapshots of VMs on which FT is enabled. Storage vMotion: cannot be invoked for an FT VM; to migrate the storage, temporarily turn off FT, do the Storage vMotion, then turn FT back on. Linked clones: cannot enable FT on a VM that is a linked clone, nor create a linked clone from an FT-enabled VM. Backup: cannot back up an FT VM using VCB, vStorage API for Data Protection, VMware Data Recovery or similar backup products that require the use of a VM snapshot as performed by ESXi; to back up a VM in this manner, first disable FT, then re-enable it after the backup is done. Storage array-based snapshots do not affect FT. Also unsupported: Thin Provisioning, NPIV, IPv6, etc. In summary, FT with vSphere 4.1 still has some incompatibilities: Thin Provisioning and linked clones; hot-plug devices and USB passthrough; IPv6 (since HA does not support it); vSMP; N-Port ID Virtualization (NPIV); Storage vMotion; serial/parallel ports; physical and remote CD/floppy. A rough pre-check script is sketched below.
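Several of these restrictions can be checked from PowerCLI before attempting to turn on FT. A rough sketch under stated assumptions (VM name illustrative, connected PowerCLI session assumed), not an official FT compatibility checker:

```powershell
# Rough pre-checks for a VM before enabling FT (illustrative, not exhaustive)
$vm = Get-VM -Name "MyCriticalVM"

if ($vm | Get-Snapshot) {
    Write-Warning "VM has snapshots; remove or commit them before enabling FT."
}
if ($vm.NumCpu -gt 1) {
    Write-Warning "VM has $($vm.NumCpu) vCPUs; FT in vSphere 4.1 requires a single vCPU."
}
$thin = $vm | Get-HardDisk | Where-Object { $_.StorageFormat -eq "Thin" }
if ($thin) {
    Write-Warning "VM has thin-provisioned disks; FT requires eager-zeroed thick disks."
}
```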

84 FT: performance sample
MS Exchange 2007: 1 core handles 2,000 users with the Heavy Online user profile. VM CPU utilisation is only 45%; ESX host utilisation is only 8%. Based on the previous-generation Xeon 5500, not on vSphere 4.0 or 4.1. Opportunity: higher uptime for the customer's system. Business opportunity: migrate the customer from clustering (running 2 instances) to FT, where we get higher uptime.

85 Integration with HA Improved FT host management
Moving a host out of vCenter; DRS able to vMotion FT VMs; a warning if HA gets disabled, in which case the following operations will be disabled: Turn on FT, Enable FT, Power on an FT VM, Test failover, Test secondary restart. #1: If administrators wanted to move an ESX host from one vCenter instance to a new one (for whatever reason) they usually did not bring the ESX host into maintenance mode, but adding the host to the new vCenter Server without removing it from the previous one caused FT failures. Now the administrator gets a warning, which can be followed or ignored (yes/no), when trying to add an ESX host that is managed by a different vCenter. #2: DRS will vMotion FT-enabled VMs if needed and will place them according to DRS groups and other rules. Storage vMotion is not supported with FT, though. #3: If an administrator disabled HA, he was previously forced to disable FT first. Now he gets a warning and can decide to override and accept that FT will not work as expected; following this decision, several FT-related operations will be disabled while HA is off.

86 VM-to-Host Affinity

87 Background Having different servers in a datacenter is a common scenario
Differences in memory size, CPU generation, or the number or type of pNICs. Best practice up to now: separate different hosts into different clusters. Workarounds: creating affinity/anti-affinity rules, or pinning a VM to a single host by disabling DRS on the VM. Disadvantage: too expensive, as each cluster needs to have its own HA failover capacity. New feature: DRS Groups (host and VM groups). Organize ESX hosts and VMs into groups, for example by similar memory or similar usage profile.

88 VM-Host Affinity (DRS): Required Rules vs. Preferential Rules
Rule enforcement: 2 options. Required: DRS/HA will never violate the rule; an event is generated if it is violated manually. Only advised for enforcing host-based licensing of ISV apps. Preferential: DRS/HA will violate the rule if necessary for failover or for maintaining availability. DRS VM-Host Affinity Rules: DRS provides the ability to set constraints that restrict placement of a VM to a subset of hosts in a cluster. This feature is useful for enforcing host-based ISV licensing models, as well as keeping sets of VMs on different racks or blade systems for availability reasons. Strongly advise customers that these rules are not meant to be used often: the more constraints you put on VM mobility, the harder it is for DRS to balance load and to enforce resource allocation policies. You should only use them if you absolutely have to. Hard affinity rules are only advised for enforcing host-based licensing of ISV apps. Soft affinity rules are meant for availability reasons, like keeping two VMs on different racks or blade chassis. Preferential rules can be violated to allow the proper functioning of DRS, VMware HA, and VMware DPM.

89 Hard Rules DRS will follow the hard rules
With DPM, hosts will get powered on to follow a rule. If DRS cannot follow a rule, vCenter will display an alarm. Hard rules cannot be overridden by the user, and DRS will not generate any recommendations that would violate them. DRS Groups and hard rules with HA: hosts will be tagged as "incompatible" in the case of "Must Not Run On..." rules, so HA will take care of these rules, too.

90 Soft Rules DRS will follow a soft rule if possible
DRS will allow actions that violate a soft rule: user-initiated actions, DRS-mandatory actions, and HA actions. Rules are applied as long as doing so does not impact satisfying current VM CPU or memory demand. DRS will report a warning if the rule is not followed, but it does not produce a move recommendation to follow the rule. Soft VM/host affinity rules are treated by DRS as "reasonable effort".

91 Grouping Hosts with different capabilities
DRS Groups Manager: this is where the groups are defined (VM groups and Host groups).

92 Managing ISV Licensing
Example: a customer has a 4-node cluster. Oracle DB and Oracle BEA are charged for every host that can run them. vSphere 4.1 introduces "hard partitioning"; both DRS and HA will honour this boundary. (Diagram labels: Rest of VMs, Oracle DB, Oracle BEA, DMZ VM, DMZ LAN, Production LAN.)

93 Managing ISV Licensing
Hard partitioning: if a host is in a VM-Host "must" affinity rule, it is considered a compatible host; all the others are tagged as incompatible hosts. DRS, DPM and HA are unable to place the VMs on incompatible hosts. Due to the incompatible-host designation, the mandatory VM-Host rule is a feature that can (undeniably) be described as hard partitioning: you cannot place and run a VM on an incompatible host. Note, however, that Oracle has not acknowledged this as hard partitioning.

94 Example of setting-up: Step 1
In this example, we are adding the "WinXPsp3" VM to the group. The group name is "Desktop VMs". So how do we do it? We can now create two types of group: groups of VMs and groups of ESX hosts. We then map the VM group to the ESX host group.

95 Example of setting-up: Step 2
Just like we can group VMs, we can also group ESX hosts. Can an ESX host belong to multiple groups?

96 Example of setting-up: Step 3
We have grouped the VMs in the cluster into two groups, and the ESX hosts in the cluster into two groups.

97 Example of setting-up: Step 4
This is the screen where we do the mapping. VM Group mapped to Host Group

98 Example of setting-up: Step 5
Mapping is done. The Cluster Settings dialog box now displays the new rule type. A scripted equivalent of this walk-through is sketched below.
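For reference, newer PowerCLI releases (well after the 4.1 timeframe) expose the same DRS group and VM-Host rule constructs as cmdlets, so the walk-through above can be scripted. A hedged sketch with illustrative cluster, VM and host names:

```powershell
# Create a VM group and a host group in the cluster, then tie them together
$cluster = Get-Cluster -Name "Prod-Cluster"

$vmGroup   = New-DrsClusterGroup -Name "Desktop VMs"   -Cluster $cluster -VM (Get-VM "WinXPsp3")
$hostGroup = New-DrsClusterGroup -Name "Desktop Hosts" -Cluster $cluster -VMHost (Get-VMHost "esx01*","esx02*")

# ShouldRunOn = preferential rule; use MustRunOn only for ISV licence enforcement
New-DrsVMHostRule -Name "Desktops-on-Desktop-Hosts" -Cluster $cluster `
    -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldRunOn
```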

99 HA/DRS DRS lists the rules. Switch them on or off; expand to display the DRS Groups.
Rule details Rule policy Involved Groups

100

101 Enhancement for Anti-affinity rules
Anti-affinity rules can now include more than 2 VMs; each rule can contain several VMs, which you can keep together or separate across the cluster. For a "separate" rule, at least one host is needed per VM: if you select a Separate rule and include 5 VMs, you will need at least 5 ESX hosts to accommodate the rule, as each VM must run on a separate host. See the PowerCLI sketch below.
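The multi-VM rule can also be created from PowerCLI with New-DrsRule; a minimal sketch with illustrative VM and cluster names:

```powershell
# Keep these three VMs on different hosts (anti-affinity); the cluster
# therefore needs at least three hosts to satisfy the rule.
New-DrsRule -Cluster (Get-Cluster "Prod-Cluster") `
            -Name "Separate-Web-Tier" `
            -KeepTogether $false `
            -VM (Get-VM "web01","web02","web03")
```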

102 DPM Enhancements Scheduling DPM
Scheduling DPM: turning DPM on or off is now a scheduled task, so DPM can be turned off prior to business hours in anticipation of higher resource demands. Disabling DPM brings hosts out of standby; this eliminates the risk of ESX hosts being stuck in standby mode while DPM is disabled and ensures that, when DPM is disabled, all hosts are powered on and ready to accommodate load increases. A programmatic way to toggle DPM is sketched below.
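The same switch can also be flipped from a script, for example as the body of the scheduled task. A minimal sketch via the cluster reconfigure API; the cluster name is illustrative, and ClusterDpmConfigInfo is the API object that carries the DPM enable flag as far as I recall, so verify against your SDK version:

```powershell
# Disable DPM on a cluster (set Enabled = $true to turn it back on)
$cluster = Get-Cluster -Name "Prod-Cluster" | Get-View

$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DpmConfig = New-Object VMware.Vim.ClusterDpmConfigInfo
$spec.DpmConfig.Enabled = $false

# $true = modify only the fields set in the spec, leaving the rest of the cluster config alone
$cluster.ReconfigureComputeResource($spec, $true)
```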

103 DPM Enhancements

104 vMotion vMotion is not a cluster feature; we can vMotion across clusters. Open questions: can we vMotion between two clusters with different EVC modes? We can try this in the lab. We should be able to vMotion from 4.0 to 4.1, just as we can from 3.5 to 4.1.

105 vMotion Enhancements Significantly decreased the overall migration time (time will vary depending on workload). Increased number of concurrent vMotions: per ESX host, 4 on a 1 Gbps network and 8 on a 10 Gbps network; per datastore, 128 (both VMFS and NFS). Maintenance mode evacuation time is greatly decreased due to the above improvements. vMotion enhancements significantly reduce the overall time for host evacuations, with support for more simultaneous VM migrations and faster individual VM migrations. The result is a performance improvement of up to 8x for an individual VM migration and support for four to eight simultaneous vMotion migrations per host, depending on the vMotion network adapter (1GbE or 10GbE respectively). A host-evacuation example is sketched below. Sharing a test that we performed: sixteen 2GB VMs were migrated by maintenance mode. "All touched" means all memory in all VMs was written to so that the memory would be allocated; this just means that vMotion actually has to copy every byte of memory. If we did not go through and touch all the memory, some pages would be unused and vMotion would skip over them, making it look like we sent 32GB faster than we actually would have. So with the "all touched" pages, we are really sending 32GB of data. "Kernel compile" means the VMs are running a kernel compile test, which mostly just gives the VM something to do: in the previous test, while all the memory was touched, the VM was sitting idle during the vMotion, whereas in this one the VM is actively touching and modifying memory during the vMotion. Thus we expect it to put a little more pressure on vMotion and for the vMotion to take a little longer.
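As a rough illustration of how a host evacuation is typically driven (vCenter then applies the concurrent-vMotion limits on its own), a hedged PowerCLI sketch; host names are made up and a connected PowerCLI session is assumed:

```powershell
# Evacuate a host for maintenance; vCenter schedules the concurrent vMotions
# (4 per host on 1 GbE, 8 on 10 GbE in vSphere 4.1) automatically.
Set-VMHost -VMHost (Get-VMHost "esx01.lab.local") -State Maintenance -Evacuate

# Or migrate selected VMs yourself, asynchronously, so several run in parallel:
Get-VM -Location (Get-VMHost "esx01.lab.local") |
    Move-VM -Destination (Get-VMHost "esx02.lab.local") -RunAsync
```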

106 vMotion Re-write of the previous vMotion code
Sends memory pages bundled together instead of one after the other, with less network/TCP/IP overhead. The destination pre-allocates memory pages. Multiple senders/receivers: no longer only a single world responsible for each vMotion, so the previous limit imposed by a single host CPU no longer applies. Sends a list of changed pages instead of bitmaps. Performance improvement: throughput improved significantly for a single vMotion (ESX 3.5: ~1.0 Gbps; ESX 4.0: ~2.6 Gbps; ESX 4.1: max 8 Gbps). Elapsed time reduced by 50%+ in 10GigE tests. Mixing pNICs of different bandwidth is not supported.

107 vMotion Aggressive Resume Destination VM resumes earlier
The destination VM resumes earlier, as soon as only the workload's memory pages have been received; the remaining pages are transferred in the background. Disk-Backed Operation: the source host creates a circular buffer file on shared storage, and the destination opens this file and reads out of it. This works only on VMFS storage. In case of a network failure during the transfer, vMotion falls back to this disk-based transfer; it works together with the aggressive resume feature above.

108 Enhanced vMotion Compatibility Improvements
Preparation for AMD Next Generation without 3DNow!: future AMD CPUs may not support the 3DNow! set of instructions. To prevent vMotion incompatibilities due to this change, VMware has introduced the AMD Opteron Generation 3 (no 3DNow!) EVC mode. VMs in AMD EVC clusters should be transitioned to this new mode to prevent incompatibilities with future AMD processors. Better handling of powered-on VMs: vCenter now uses a running VM's CPU feature set to determine whether it can be migrated into an EVC cluster; previously, it relied on the host's CPU features. This provides higher granularity in error detection.

109 EVC Improvements Better handling of powered-on VMs
vCenter Server now uses a live VM's CPU feature set to determine whether it can be migrated into an EVC cluster; previously, it relied on the host's CPU features. A VM can be running with a different (older) CPU feature set than the host it runs on, for example if it was initially started on an older ESX host and vMotioned to the current one. The VM is therefore compatible with an older CPU and could possibly be migrated to the EVC cluster even if the ESX host the VM runs on is not compatible. This sounds quite complicated but is easy to understand: assume a VM was powered on under an older EVC mode and migrated (without powering off) to a cluster with a newer mode (and newer features). The VM is "part" of the new EVC mode but does not use the new features, only the old ones. Previously, if you tried to vMotion this VM to an ESX host with the older EVC mode, vCenter complained that they were not compatible, because the ESX host was not compatible with the current EVC mode of the cluster the VM was running in. Now vCenter checks which mode the VM itself uses and accepts vMotioning to an older mode, as the VM does not care and is still not using the new features.

110 Enhanced vMotion Compatibility Improvements
Usability improvements. VM's EVC capability: the Virtual Machines tab for hosts and clusters now displays the EVC mode corresponding to the features used by each VM. This makes it easier to determine which EVC modes the VM can be migrated to, and also provides the opportunity to raise the EVC mode of a cluster and selectively power-cycle only those VMs that would benefit from it. VM Summary: the Summary tab for a VM lists the EVC mode corresponding to the features used by the VM; this information is available only when the VM is inside an EVC cluster. A scripted way to read both values is sketched below.
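Both values are also exposed through the vSphere API, which makes cluster-wide reporting easy. A minimal PowerCLI sketch, assuming the 4.1-era properties currentEVCModeKey (cluster summary) and minRequiredEVCModeKey (VM runtime) are present in your environment; the cluster name is illustrative:

```powershell
# Cluster EVC mode plus the EVC mode each running VM actually requires
$cluster = Get-Cluster -Name "Prod-Cluster"
"Cluster EVC mode: " + ($cluster | Get-View).Summary.CurrentEVCModeKey

Get-VM -Location $cluster | ForEach-Object {
    $vmView = $_ | Get-View
    # minRequiredEVCModeKey was introduced with the vSphere 4.1 API
    "{0}: requires EVC mode '{1}'" -f $_.Name, $vmView.Runtime.MinRequiredEVCModeKey
}
```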

111 EVC (3/3) Earlier Add-Host Error detection
Host-specific incompatibilities are now displayed prior to the Add-Host workflow when adding a host into an EVC cluster. Up to now this error occurred only after all the required steps had been completed by the administrator; now the warning comes earlier.

112 Host-Affinity, Multi-core VM, Licence Reporting Manager
Licensing: Host-Affinity, Multi-core VM, Licence Reporting Manager

113 Multi-core CPU inside a VM
Click this

114 Multi-core CPU inside a VM
2-core, 4-core, or 8-core only; no 3-core, 5-core, 6-core, etc. Type this value manually.

115 Multi-core CPU inside a VM
How to enable (per VM, not in batch): turn off the VM (this cannot be done online), click Configuration Parameters, click Add Row and type cpuid.coresPerSocket in the Name column, then type a value (2, 4, or 8) in the Value column. The number of virtual CPUs must be divisible by the number of cores per socket, and the coresPerSocket setting must be a power of two. Note: when multicore virtual CPUs are configured, CPU Hot Add/remove is disabled. On page 92 of the vSphere VM Administration Guide, VMware writes: "You can verify the CPU settings for the VM on the Resource Allocation tab." But in that menu there is no indication of the multi-core configuration; what do I have to look for? Is it already implemented in the vSphere 4.1 RC? For more information about multicore CPUs, see the vSphere Resource Management Guide; you can also search the VMware KNOVA database for articles about multicore CPUs. Guest-side tools (see the next slides) provide a more detailed view within each guest OS. Need to see if we can use Orchestrator or PowerShell to check this; a PowerCLI way to set the parameter is sketched below.
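The same advanced parameter can be added without the UI by reconfiguring the VM through the vSphere API. A minimal PowerCLI sketch, with the VM name purely illustrative (the VM must be powered off first, as noted above):

```powershell
# Set cpuid.coresPerSocket on a powered-off VM via ReconfigVM/extraConfig
$vmView = Get-VM -Name "MyVM" | Get-View

$spec   = New-Object VMware.Vim.VirtualMachineConfigSpec
$option = New-Object VMware.Vim.OptionValue
$option.Key   = "cpuid.coresPerSocket"
$option.Value = "2"                  # must be a power of two and divide the vCPU count
$spec.ExtraConfig = @($option)

$vmView.ReconfigVM($spec)
```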

116 Multi-core CPU inside a VM
Once enabled, this is not readily visible to the administrator: the VM listing in the vSphere Client does not show cores per socket. It is possible to write scripts that iterate per VM (see the sketch below). Sample guest-side tools: CPU-Z, MS Sysinternals. Need to see the PowerCLI and vSphere API to see if we can do this programmatically.
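One way to do the check programmatically is to read the advanced setting back from each VM's configuration; a minimal PowerCLI sketch (assumes a connected session):

```powershell
# Report cpuid.coresPerSocket for every VM that has it set
Get-VM | ForEach-Object {
    $setting = ($_ | Get-View).Config.ExtraConfig |
               Where-Object { $_.Key -eq "cpuid.coresPerSocket" }
    if ($setting) {
        "{0}: {1} cores per socket ({2} vCPUs)" -f $_.Name, $setting.Value, $_.NumCpu
    }
}
```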

117 Customers Can Self-Enforce Per VM License Compliance
When customers use more than they bought, they are alerted by vCenter but can continue managing the additional VMs, so over-use is possible. Customers are responsible for purchasing additional licenses and any back-dated SNS; Support & Subscription must be back-dated. This is consistent with current vSphere pricing. Note that "Average Capacity" in the report refers to the average capacity of all license keys for that product. Products (e.g. vSphere Enterprise) can have multiple keys, and each key has a capacity and a usage associated with it. In the screen above, current capacity is the total capacity of all the keys and average capacity is the average capacity of the keys. For example, for the product vSphere Enterprise: key xxxx-xxxx-xxxx has capacity 1000 and usage 500; key yyyy-xxxx-xxxx has capacity 2000 and usage 100. For this product we would report: Total Capacity – 3000, Total Usage – 600, Average Usage – 300, Average Capacity – 1500.

118 I'm sure you are tired too
Thank You

119 Useful references

120 vSphere Guest API It provides functions that management agents and other software can use to collect data about the state and performance of a VM. The API provides fast access to resource management information, without the need for authentication. The Guest API provides read‐only access. You can read data using the API, but you cannot send control commands. To issue control commands, use the vSphere Web Services SDK. Some information that you can retrieve through the API: Amount of memory reserved for the VM. Amount of memory being used by the VM. Upper limit of memory available to the VM. Number of memory shares assigned to the VM. Maximum speed to which the VM’s CPU is limited. Reserved rate at which the VM is allowed to execute. An idling VM might consume CPU cycles at a much lower rate. Number of CPU shares assigned to the VM. Elapsed time since the VM was last powered on or reset. CPU time consumed by a particular VM. When combined with other measurements, you can estimate how fast the VM’s CPUs are running compared to the host CPUs
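The Guest API itself is a C library consumed from inside the guest (vmGuestLib); there is no PowerShell binding for it that I know of. As a loose illustration of the same kind of guest-visible resource data, newer VMware Tools builds ship a command-line helper that a script inside the guest can call; treat the path and the stat subcommand names as assumptions to verify against your Tools version:

```powershell
# Inside a Windows guest: query VMware Tools for host/VM resource figures.
# Assumption: vmware-toolbox-cmd.exe ships with this VM's Tools version at this path.
$toolbox = "C:\Program Files\VMware\VMware Tools\vmware-toolbox-cmd.exe"

foreach ($stat in "speed", "balloon", "swap", "memres", "memlimit", "cpures", "cpulimit") {
    "{0,-10}: {1}" -f $stat, (& $toolbox stat $stat)
}
```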

