Enhanced vMotion Compatibility

Presentation transcript:

Enhanced vMotion Compatibility
Product Support Engineering
VMware Confidential

Module 2 Lessons – Agenda Overview
Lesson 1 – vCenter Server High Availability
Lesson 2 – vCenter Server Distributed Resource Scheduler
Lesson 3 – Fault Tolerance Virtual Machines
Lesson 4 – Enhanced vMotion Compatibility
Lesson 5 – DPM - IPMI
Lesson 6 – vApps
Lesson 7 – Host Profiles
Lesson 8 – Reliability, Availability, Serviceability (RAS)
Lesson 9 – Web Access
Lesson 10 – vCenter Server Update Manager
Lesson 11 – Guided Consolidation
Lesson 12 – Health Status
VI4 - Mod 2-4 - Slide

Module 2-4 Lessons – Agenda
Lesson 1 – Overview of Enhanced vMotion Compatibility
Lesson 2 – Compatibility Matrix
Lesson 3 – EVC Baselines
Lesson 4 – Requirements for EVC
Lesson 5 – Enabling EVC on a Cluster
Lesson 6 – Troubleshooting EVC
Overview: This is the same EVC that was introduced in ESX 3.5 U2 and that vSphere 4.0 is compatible with. The improvements in VI4 allow the creation of multiple CPU groups in the datacenter.

OEM VMotion Compatibility Matrix
Source: www.dell.com; similar matrices are available for HP and IBM servers.

Enhanced VMotion Compatibility (EVC)
EVC allows vCenter to enforce VMotion compatibility between all hosts in a cluster by forcing hosts to expose a common set of CPU features (a baseline) to virtual machines.
EVC automatically configures servers whose CPUs feature Intel FlexMigration or AMD-V Extended Migration technology to be VMotion-compatible with servers that use older CPUs.
EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ. This prevents migrations with VMotion from failing due to incompatible CPUs.
Beta 2 what's new: http://communities.vmware.com/viewwebdoc.jspa?documentID=DOC-7291&communityID=2701
EVC uses hardware support developed with Intel and AMD:
EVC leverages Intel FlexMigration technology to present the same feature set as the baseline Intel processor.
EVC leverages AMD-V Extended Migration technology to present the same feature set as the baseline AMD processor.
All hosts in the cluster must either have hardware live migration support (Intel FlexMigration or AMD-V Extended Migration) or have the CPU whose baseline feature set you intend to enable for the cluster.

Detecting CPU Features
The OS or application software executes the CPUID machine instruction.
The CPUID instruction reports many system properties:
Vendor (e.g. Intel or AMD)
CPU family, model, stepping
Supported CPU features, e.g.:
NX/XD (No eXecute; memory protection from malware)
AMD-V/VT-x (virtualization support in hardware)
SSE3 (CPU instructions to optimize streaming applications)
Number of CPU cores, cache size, and many other properties
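As a rough illustration of what software does with CPUID results, the sketch below decodes feature names from raw register values. The bit positions shown (SSE3 and VMX in CPUID leaf 1 ECX, NX in extended leaf 0x80000001 EDX) match the Intel/AMD architecture manuals; the sample register values are invented for the example, not read from a real host.

```python
# Sketch: decoding CPU feature flags from CPUID register values.
# Bit positions are from the Intel/AMD architecture manuals; the sample
# register values below are illustrative, not captured from real hardware.

# A few flags from CPUID leaf 1, ECX register
LEAF1_ECX_FLAGS = {0: "SSE3", 5: "VMX (VT-x)", 19: "SSE4.1", 20: "SSE4.2"}
# A flag from CPUID extended leaf 0x80000001, EDX register
EXT_EDX_FLAGS = {20: "NX/XD"}

def decode_flags(reg_value, flag_map):
    """Return the names of all known feature bits set in a CPUID register."""
    return [name for bit, name in sorted(flag_map.items())
            if reg_value & (1 << bit)]

# Example: a hypothetical ECX value with SSE3 (bit 0) and SSE4.1 (bit 19) set
ecx = (1 << 0) | (1 << 19)
print(decode_flags(ecx, LEAF1_ECX_FLAGS))  # ['SSE3', 'SSE4.1']
```

This is exactly the kind of query EVC interposes on: the hypervisor masks bits in the reported registers so every host in the cluster appears to offer the same feature set.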

EVC Benefits
Enables VMotion across CPU generations.
New CPUs are automatically configured to be compatible with earlier generations, which makes it much easier to add new hardware to existing clusters.
Simple: no manual CPUID masking required.

Intel EVC Cluster With Different Generation CPUs
Diagram: a VM (App/OS, Intel Core 2 level) running in an EVC cluster of ESX hosts with an Intel Core 2 CPU, an Intel Core 2 45nm CPU, and a future Intel CPU.
The VM sees Intel Core 2 level CPU features and can migrate to any host in the EVC cluster.
Notes: Merom chips are Intel Core 2 CPUs; Penryn chips are Intel Core 2 45nm CPUs. One of the difficult aspects of EVC is the naming convention – Intel would not allow the use of codenames like Merom and Penryn, so references like "45nm" are used to distinguish between them.

AMD EVC Cluster With Different Generation CPUs
Diagram: a VM (App/OS, Opteron Rev E level) running in an EVC cluster of ESX hosts with an AMD Rev E CPU, an AMD Barcelona CPU, and a future AMD CPU.
The VM sees AMD Opteron Rev E CPU features and can migrate to any host in the EVC cluster.
Note: AMD Barcelona is the third generation of AMD processors, based on the K10 architecture (K10 supersedes the earlier K8 architecture).

EVC Baselines
Baseline: a set of CPU features that is supported by every host in the cluster.
The baseline is the least common denominator of all hosts' feature sets, or less.
In ESX 3.5 U2, there is one baseline per CPU vendor:
Intel: CPU features supported by Merom cores
AMD: CPU features supported by Opteron Rev E/F
In VI4, it is expected that two or more baselines can be defined, e.g.:
Intel: Merom, Penryn (SSE4.1), Nehalem (SSE4.2)
AMD: Rev E/F and Greyhound (SSE4A, ABM)
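The "least common denominator" idea can be sketched as a simple set intersection across the feature sets of all hosts. The host names and feature lists below are illustrative, not real CPUID output.

```python
# Sketch: an EVC baseline as the least common denominator of the hosts'
# CPU feature sets. Host names and features are illustrative.

def cluster_baseline(host_features):
    """Intersect the feature sets of every host in the cluster."""
    sets = [set(features) for features in host_features.values()]
    return set.intersection(*sets)

hosts = {
    "esx01": {"SSE2", "SSE3", "NX"},            # older Merom-class host
    "esx02": {"SSE2", "SSE3", "SSE4.1", "NX"},  # newer Penryn-class host
}
print(sorted(cluster_baseline(hosts)))  # ['NX', 'SSE2', 'SSE3']
```

Note the "or less" in the slide: an administrator may pick a baseline below the intersection (e.g. Merom on an all-Penryn cluster) to leave room for older hosts to join later.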

EVC Baselines (ctd)
EVC Baseline → Compatible CPUs:
Intel® Core™ 2 (Merom) → Intel® Core™ 2, Intel® 45nm Core™ 2
Intel® 45nm Core™ 2 (Penryn) → Intel® 45nm Core™ 2
AMD Second Generation Opteron™ (Rev. E/F) → AMD Second Generation Opteron™, AMD Third Generation Opteron™
AMD Third Generation Opteron™ (Barcelona) → AMD Third Generation Opteron™
Multiple baselines allow the user to choose between greater compatibility (fewer CPU features) and more features (less CPU compatibility). Additional baselines will be introduced for new CPU generations.
Note: At VMworld 2008, Paul Maritz announced that Intel's Xeon 7400 series chips, code-named Dunnington and released the previous day, can "do heterogeneous VMotion across different rev levels [versions] of the chips, whereas now you have to make sure you stick within the same chip family". Dunnington, the six-core Xeon 7400, is the last of the Penryn generation before Intel's move to the Nehalem architecture.

EVC Baselines & CPU Models
VMware KB article 1003212 lists specific CPU models and the baselines they support: http://kb.vmware.com/kb/1003212
Examples:
Intel Core 2 (Merom baseline): 73xx (Tigerton), 51xx (Woodcrest)
Intel Core 2 45nm (Penryn baseline): 74xx (Dunnington), 54xx (Harpertown)
AMD 2nd Generation Opteron (Rev E/F baseline): 2yy, 8yy, 22yy, 82yy
AMD 3rd Generation Opteron (Greyhound baseline): 23yy, 83yy
See also: http://en.wikipedia.org/wiki/AMD_K10

EVC Cluster With Intel Core 2 45nm (Penryn) Baseline
Diagram: an Intel EVC cluster (Core 2 45nm baseline) containing ESX hosts with an Intel Core 2 45nm CPU and a future Intel CPU; ESX hosts with an Intel Core 2 CPU and an Intel Core 2 45nm CPU sit outside the cluster.
The VM cannot migrate to a host with an Intel Core 2 (Merom) based CPU.
It can migrate to a host with an Intel Core 2 45nm (Penryn) based CPU outside the cluster.

Determine CPU Model
vCenter will display model information for the CPUs of hosts that already have ESX installed.
For new servers that do not have ESX installed, use a freeware utility such as CPU-Z.
Example: the E54xx CPUs are 'Harpertown' processors from the Penryn series.

EVC & CPU Features
EVC does not affect these CPU properties:
Number of cores per CPU. For example, a Greyhound (quad-core) CPU does not lose two cores when it is added to a Rev E/F (dual-core) EVC cluster.
Cache size
Hardware virtualization support (VT-x, AMD-V, nested paging)
Clock speed
Thus, EVC does not cause any performance penalties. The worst case from implementing EVC is that a VM cannot take advantage of new CPU instructions, e.g. SSE4.1.

Requirements For EVC
EVC requires ESX 3.5 Update 2 or later.
EVC requires Intel CPUs with the Core 2 microarchitecture or newer, e.g.:
Merom: 73xx (Tigerton), 51xx (Woodcrest), 53xx (Clovertown)
Penryn: 74xx (Dunnington), 54xx (Harpertown)
EVC requires AMD second generation Opteron CPUs or newer, e.g.:
Rev E/F: models 1yy, 2yy, 8yy, 12yy, 22yy, 82yy
Greyhound: models 13yy, 23yy, 83yy
EVC requires a homogeneous cluster: either all Intel or all AMD hosts.
Applications in VMs must be well-behaved, i.e. written to use the CPUID machine instruction to discover CPU features. ESX intercepts and masks CPUID results to create the appearance of compatibility, but the masked CPU features are still present and functioning if an application tries to use them anyway. Applications that test for a function by executing an instruction and seeing whether it works, rather than querying for it with CPUID, can therefore break: if such an application runs in a cluster whose baseline is lower than the physical hardware hosting the VM, and the VM is migrated to a host that does not support those instructions, the application may fail.
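The "well-behaved application" rule above can be sketched as follows: consult the advertised feature set (as a CPUID query would report it) before taking an optimized code path, rather than executing the instruction and seeing whether it faults. All names here are illustrative; `advertised_features` stands in for a CPUID query and `crc32_sse42` is a hypothetical hardware-accelerated helper.

```python
# Sketch of the "well-behaved application" pattern under EVC: branch on the
# advertised (possibly masked) feature set instead of probing instructions.
# advertised_features() and crc32_sse42() are hypothetical stand-ins.

def advertised_features():
    # Stand-in for a CPUID query; under EVC this reflects the cluster
    # baseline, not the full feature set of the physical CPU.
    return {"SSE2", "SSE3"}

def checksum(data):
    if "SSE4.2" in advertised_features():
        return crc32_sse42(data)   # hardware CRC32 path (hypothetical helper)
    return sum(data) % 65536       # portable fallback path

print(checksum(bytes(range(10))))  # baseline lacks SSE4.2, so falls back: 45
```

An application that skipped the feature check and called the SSE4.2 path directly might happen to work on the host it was powered on, then fail after a VMotion to a host without that instruction, which is exactly the breakage the slide describes.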

Enabling EVC On A Cluster (ctd)

Using EVC
Once EVC is enabled for a cluster:
All hosts already in, or entering, the cluster are automatically configured to match the EVC cluster baseline.
vCenter will not allow hosts that are not capable of exactly matching the cluster EVC requirements to enter the cluster.
VMotion will never fail due to CPU incompatibility, since all hosts present identical features through the EVC baseline.

EVC Maintains Complete VMotion Compatibility
Diagram: an Intel EVC cluster (Merom baseline); ESX hosts with an AMD CPU and an Intel Pentium 4 CPU cannot be added.
Cannot add a host with incompatible hardware:
All hosts' CPUs must be from the same vendor.
CPUs must be on par with the cluster baseline or newer.
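The admission checks described here (same vendor, CPU on par with the baseline) can be sketched as a subset test on feature sets. This is an illustrative model of the behavior, not vCenter's actual implementation; all names and data are invented for the example.

```python
# Sketch of EVC admission checks when a host joins a cluster: same CPU
# vendor, and the host must support every feature in the cluster baseline.
# Illustrative model only, not vCenter's actual code.

def can_join(cluster, host):
    """Return (admitted, reason) for a host joining an EVC cluster."""
    if host["vendor"] != cluster["vendor"]:
        return False, "all hosts' CPUs must be from the same vendor"
    if not cluster["baseline"] <= host["features"]:   # subset test
        return False, "host CPU does not support the cluster baseline"
    return True, "host admitted"

cluster = {"vendor": "Intel", "baseline": {"SSE2", "SSE3"}}

# An AMD host is rejected regardless of its features
print(can_join(cluster, {"vendor": "AMD", "features": {"SSE2", "SSE3"}}))
# A newer Intel host is admitted; its extra features get masked to the baseline
print(can_join(cluster, {"vendor": "Intel",
                         "features": {"SSE2", "SSE3", "SSE4.1"}}))
```

A host with features beyond the baseline is admitted and simply has the extras masked, which is why "on par with the cluster baseline or newer" is the rule.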

EVC Maintains Complete VMotion Compatibility (ctd)
Diagram: an Intel EVC cluster (Core 2 baseline); an ESX host with an Intel Core 2 45nm CPU and a running Core 2 45nm VM cannot be added.
Cannot add a host with running VMs:
A running VM could be using CPU features that are not present on all hosts in the cluster.
You must migrate or power off the VMs on the ESX host that you wish to add to the cluster.
The reason for powering off VMs is that there is no way of knowing whether a VM is using instructions that break the baseline.

EVC Maintains Complete VMotion Compatibility (ctd)
Diagram: an AMD EVC cluster (Rev E/F baseline); an ESX 3.0.x host with an Opteron Rev F CPU cannot be added.
Cannot add a host with an incompatible ESX version:
The host must have ESX 3.5 Update 2 or newer installed; ESX 3.5 U2 introduced this feature.

Troubleshooting EVC
VMware CPUID utility: a bootable CD-ROM that reports raw CPUID data and "interesting" features. http://www.vmware.com/download/shared_utilities.html
Intel processor identification utility: http://support.intel.com/support/processors/tools/piu/
CPU-Z, a freeware utility for displaying CPU features: www.cpuid.com
VMotion Info tool: displays CPU info for the servers in a vCenter deployment. www.run-virtual.com

Troubleshooting EVC
Error: Incompatible CPU: "The following hosts have CPUs that do not support EVC. Remove these hosts from the cluster."
What troubleshooting steps would you take to solve this problem?
You have to unmask the CPU features of all the ESX servers; if any of the bits are masked, you cannot enable EVC.
It was discovered during the delta of the course that if CPU features had been manually disabled in the BIOS of the machine itself, attempting to enable EVC produced odd behaviour: although the machines all had the same configuration, an error was still thrown. The solution was to expose all of the features of the CPU in the BIOS and allow Virtual Center to control the masking and exposure of the CPU features. This anomaly could be fixed for the GA release.

Lesson 2-4 Summary
Learn how to enable EVC on a cluster
Learn how to create EVC baselines
Learn how to troubleshoot EVC

Lesson 2-4 – Lab 4
Module 2-4 Lab 4 – VMware vCenter Enhanced vMotion Compatibility
Enable EVC on a cluster
Check EVC compatibility
EVC settings
Troubleshoot EVC