1 VM-FEX Configuration and Best Practices for Multiple Hypervisors
Timothy Ma, Technical Marketing - UCS
© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential

2 Agenda: What This Session Will Cover
- VM-FEX Overview
- VM-FEX vs. Nexus 1000V
- VM-FEX Operational Model
- UCS VM-FEX General Baseline Configuration for Hypervisors
- VM-FEX Implementation with VMware ESX on UCS
- VM-FEX Implementation with Hyper-V on UCS
- VM-FEX Implementation with KVM on UCS
- Summary

3 Server Virtualization Issues
1. When VMs move across physical ports, the network policy must follow (live migration)
2. Must be able to view and apply network/security policy to locally switched traffic
3. Need to maintain segregation of duties (port profiles split between server, network, and security admins) while ensuring non-disruptive operations

4 Cisco Virtual Networking Options
- Extend networking into the hypervisor: Cisco Nexus 1000V Switch (software switch on a generic adapter)
- Extend the physical network to the VMs: Cisco UCS VM-FEX

5 Agenda: What This Session Will Cover
- VM-FEX Overview
- VM-FEX vs. Nexus 1000V
- VM-FEX Operational Model
- UCS VM-FEX General Baseline Configuration for Hypervisors
- VM-FEX Implementation with VMware ESX on UCS
- VM-FEX Implementation with Hyper-V on UCS
- VM-FEX Implementation with KVM on UCS
- Summary

6 VM-FEX Modes of Operation: Emulation vs. Hypervisor Bypass

Standard (Emulated) Mode
- Each VM gets a dedicated PCIe device
- ~12%-15% CPU performance improvement
- Appears as a distributed virtual switch to the hypervisor
- vMotion / Live Migration supported

High Performance Mode (Hypervisor Bypass)
- Co-exists with Standard mode
- Bypasses the hypervisor layer
- ~30% improvement in I/O performance
- Appears as a distributed virtual switch to the hypervisor
- Currently supported with ESX 5.0+, Hyper-V 2012, RHEL KVM 6.3
- vMotion / Live Migration supported

7 VM-FEX Modes of Operation: Emulation vs. Hypervisor Bypass

Emulation mode:
                        VMware               Hyper-V               KVM
  VM-FEX mode           Pass Through (PTS)   Hyper-V Switch        SR-IOV with MacVTap
  Hypervisor version    ESX 4.0 U1+          Windows Server 2012   RHEL
  vMotion / Live Migration: supported

Hypervisor bypass mode:
                        VMware               Hyper-V               KVM
  VM-FEX mode           VMDirectPath         SR-IOV                SR-IOV with PCI passthrough
  Hypervisor version    ESX 5.0+             Windows Server 2012   RHEL 6.3
  vMotion / Live Migration: not supported

8 VM-FEX Operational Model: vMotion / Live Migration with Hypervisor Bypass
- Temporary transition from VMDirectPath to standard I/O during the move
- Test setup: VM sending a TCP stream (1500 MTU), UCS B200 M2 blades with UCS VIC card, vMotion to secondary host
- ~1 sec silent period during the transition

9 VM-FEX Operational Model: Simplifying the Access Infrastructure
- Unify the virtual and physical network
  - Same port profiles for various hypervisors and bare-metal servers
- Consistent functions, performance, management
(diagram: VMs attach via vNICs to VETH interfaces, joining the virtual network to the physical network)

10 VM-FEX Operational Model: Traffic Forwarding
- Removes performance dependencies from VM location
- Offloads software switching functions from the host CPU
- More on this in upcoming slides

11 UCS VM-FEX System View: Deploying on a UCS B or C Series Infrastructure
(diagram: UCS Fabric Interconnects A/B holding the port profiles connect through chassis IO Modules A/B to the Cisco VIC adapter in a UCS B or C series server; the VIC exposes static vNICs, vHBAs, and virtual interface control logic, with veth interfaces on the fabric interconnects mapping to the vNICs; the hypervisor layer is the ESX kernel module, libvirt, or the Hyper-V extensible switch; supported hosts: ESX 4.0u1+, RHEL KVM 6.1+, MS Windows Server 2012)

12 UCS VM-FEX System View: Deploying on a UCS B or C Series Infrastructure
(same diagram with dynamic vNICs added: d-vNIC1 through d-vNIC4 on the VIC carry individual VM traffic alongside the static vNICs, service console, and kernel interfaces)

13 Agenda: What This Session Will Cover
- VM-FEX Overview
- VM-FEX vs. Nexus 1000V
- VM-FEX Operational Model
- UCS VM-FEX General Baseline Configuration for Hypervisors
- VM-FEX Implementation with VMware ESX on UCS
- VM-FEX Implementation with Hyper-V on UCS
- VM-FEX Implementation with KVM on UCS
- Summary

14 UCS General Baseline #1: Dynamic vNIC Policy
Setting Up a Dynamic Adapter Policy
- Policies automatically provision dynamic vNICs on servers
- The count depends on the number of Fabric Interconnect to IO Module connections:
  - (# IOM-to-FI links * 63) - 2 for Gen 2 hardware (62xx, 22xx, and VIC 12xx)
  - (# IOM-to-FI links * 15) - 2 for Gen 1 hardware (61xx, 21xx, and Palo)
- A detailed VIF calculation spreadsheet is available for ESXi and KVM (savtg wiki)
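The sizing rules above reduce to a simple calculation; the sketch below assumes an illustrative 4 uplinks per IOM (substitute your own link count):

```shell
# Maximum dynamic vNICs available per adapter, from the formulas above.
# "links" is the number of IOM-to-FI uplinks; 2 interfaces are reserved,
# hence the "- 2".
links=4
gen2_max=$(( links * 63 - 2 ))   # Gen 2 hardware: 62xx FI, 22xx IOM, VIC 12xx
gen1_max=$(( links * 15 - 2 ))   # Gen 1 hardware: 61xx FI, 21xx IOM, Palo
echo "Gen 2 max dynamic vNICs: $gen2_max"
echo "Gen 1 max dynamic vNICs: $gen1_max"
```

With 4 uplinks this gives 250 dynamic vNICs on Gen 2 hardware and 58 on Gen 1; the spreadsheet on the savtg wiki covers the full per-platform cases.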

15 VIF Namespace in Port-Channel Mode
- In port-channel mode, when you cable between the FEX and the FI, the available virtual interface (VIF) namespace varies depending on where the uplinks connect to the FI ports:
  - When the port-channel uplinks from the FEX connect only within a set of eight ports managed by a single chip, Cisco UCS Manager maximizes the number of VIFs usable by service profiles deployed on the servers.
  - If the uplink connections are distributed across ports managed by separate chips, the VIF count decreases. For example, if you connect seven members of the port channel to ports 1-7 but the eighth member to port 9, the port channel can only support VIFs as though it had one member.

16 UCS General Baseline #2: Building the Service Profile
Adding the Dynamic Policy and Static Adapters
- 2 static vNICs, 1 to each UCS fabric
- Change the dynamic vNIC connection policy to set up the dynamic vNICs

17 UCS General Baseline #2: Building the Service Profile
Static and Dynamic Adapter Policies

                     Static vNIC    Dynamic vNIC
  VMware ESXi        VMware         VMwarePassThru
  Windows Hyper-V    SR-IOV         Windows
  Red Hat KVM        SR-IOV         Linux

18 UCS General Baseline #3: Building Port Profiles
Creating Folders of Network Access Attributes
- Creating port profiles includes:
  - VLAN(s)
  - Native and/or tagging allowed
  - QoS weights and flow rates
  - Upstream ports to always use

19 UCS General Baseline #4: Building Port Profiles
Enhanced Options like VMDirectPath with VM-FEX
- Selecting High Performance only impacts VMware deployments today
- No problem if it is selected and used on other hypervisors

20 UCS General Baseline #5: Communication with the Manager
Establishing Communication to the Hypervisor Manager
- Same plug-in method used in Nexus 1000V
- A tool discussed later simplifies the whole integration process

21 UCS General Baseline #5: Communication with the Manager
Hypervisor Manager and Distributed Virtual Switch Support (as of UCSM 2.1 release)

             Hypervisor Manager          DVS
  ESXi       1 vCenter per UCS domain    4 DVS per vCenter
  Hyper-V    1 MMC instance              5 DVS per UCS domain
  KVM        N/A                         1 DVS per UCS domain

  Limits: 512 dynamic ports per port profile; 4096 dynamic ports per DVS (per UCS domain)

22 UCS General Baseline #6: Publishing Port Profiles
Exporting Port Profiles to the Hypervisor Manager
- Publish port profiles to the hypervisor managers and the virtual switches within them

23 Agenda: What This Session Will Cover
- VM-FEX Overview
- VM-FEX vs. Nexus 1000V
- VM-FEX Operational Model
- UCS VM-FEX General Baseline Configuration for Hypervisors
- VM-FEX Implementation with VMware ESX on UCS
- VM-FEX Implementation with Hyper-V on UCS
- VM-FEX Implementation with KVM on UCS
- Summary

24 VMware VM-FEX: Infrastructure Requirements
Versions, Licenses, etc.
- Enterprise Plus license required on the host (as for any DVS)
- Standard license or above required for vCenter
- vCenter plug-in download-and-install method (unless the Easy VM-FEX tool is used)
- Hosts then use VUM depots to install the ESX module when brought into the UCS DVS (unless the Easy VM-FEX tool is used)
- vMotion fully supported for both emulated and hypervisor-bypass modes
- VMDirectPath (hypervisor bypass) with VM-FEX is supported with ESXi 5.0+
- VM-FEX upgrade from ESXi 4.x to ESXi 5.x is supported with a customized ISO and VMware Update Manager

25 VM-FEX on VMware Configuration Steps
1. Configure the service profile with static and dynamic adapter policies
2. Create the port profile and cluster in UCSM ***
3. Install the Cisco VEM software bundle and the plug-in in vCenter
4. Run the UCSM VMware Integration wizard
5. Connect Cisco UCSM to VMware vCenter
6. Add the ESXi host into the DVS cluster in vCenter
7. Configure virtual machine settings to enable hypervisor bypass
8. Verify the VM-FEX configuration in both UCSM and vCenter
*** (Data center and folder need to match between UCSM and vCenter)
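As a sketch of the host side of step 8, the VEM installed in step 3 can be inspected from the ESXi shell. These command names come from the Nexus 1000V VEM tooling that VM-FEX reuses; confirm them against the bundle you actually installed:

```shell
# Confirm the VEM module is loaded and healthy on the host
vem status -v
# List the ports the VEM has attached to the UCS DVS (statics plus VM vNICs)
vemcmd show port
```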

26 VMware VM-FEX Configuration: Step 6
Add the ESXi Host into the DVS Cluster in vCenter
- Uplinks from the ESX hosts are shown on the right; these are the statics for overhead traffic
- VM vNICs are shown in port groups on the left; port groups come from port profiles pushed by UCSM or a Nexus 5500
- Normal view of VM vNICs, MACs, port numbers, etc.

27 VMware VM-FEX Configuration: Step 7
VM Settings to Get Hypervisor Bypass with VM-FEX
- Match the memory reservation with the limit
- Only supported guests get DirectPath with VM-FEX:
  - Windows Server 2008 SP2, Windows Server 2008 R2, RHEL 6.x, SLES 11 SP1

28 VMware VM-FEX Configuration: Step 7
VM Settings to Get Hypervisor Bypass with VM-FEX
- Select High Performance I/O in the UCSM port profile
- Define a normal VMXNET3 adapter type
- Select the port group to put the adapter into
- The UI displays whether DirectPath with VM-FEX is active
- Other adapters can remain in emulated mode if desired

29 VMware VM-FEX Configuration: Step 8
Verify the VM-FEX configuration in vCenter

30 VMware VM-FEX Configuration: Step 8
Verify the VM-FEX configuration in UCSM

31 Easy VM-FEX Tool
Tool Usage
- VMware-only solution today with UCS
- Quick system bring-up
- Assumes 1 management interface per ESX host
  - Optional vMotion / FT logging also handled
- Works with all VMware versions that VM-FEX supports (Enterprise Plus or Evaluation license)
- Some defaults can be defined in a text file
- vCenter folders OK
- Server needs dynamic vNICs on the service profile (the tool checks)
- Deployment name limited to 8 characters in the tool
- ESX kernel module pulled from the UCSM repository, or a separate tool pulls it from VMware online into a dedicated local directory

32 Easy VM-FEX Tool: Tool View

33 VMware VM-FEX Demo Topology

34 Agenda: What This Session Will Cover
- VM-FEX Overview
- VM-FEX vs. Nexus 1000V
- VM-FEX Operational Model
- UCS VM-FEX General Baseline Configuration for Hypervisors
- VM-FEX Implementation with VMware ESX on UCS
- VM-FEX Implementation with Hyper-V on UCS
- VM-FEX Implementation with KVM on UCS
- Summary

35 Hyper-V VM-FEX: Infrastructure Requirements
Versions, Licenses, etc.
- At Windows Server 2012 shipment, VM-FEX for Hyper-V is available only on UCS-managed deployments (UCS B series or integrated C series)
- Hyper-V role enabled on Windows Server 2012
- For Live Migration, an MS cluster built with shared storage
  - VM-FEX with Live Migration fully supported
- Hyper-V networks defined:
  1. Through the Hyper-V Manager GUI and MMC
  2. Through PowerShell applets for scripting and troubleshooting
- Cisco extension to the Hyper-V extensible switch infrastructure
- System Center Virtual Machine Manager (SCVMM) 2012 SP1 (the version with Windows Server 2012 support) will be needed for manager integration
  - Fully supported via PowerShell until SCVMM ships
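Enabling the Hyper-V role from an elevated PowerShell prompt might look like the following sketch (feature and parameter names should be confirmed on your Windows Server 2012 build):

```powershell
# Enable the Hyper-V role plus its management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```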

36 Hyper-V Scale Comparison: Microsoft Hyper-V vs. VMware ESXi

                                 VMware vSphere 5.1                       Windows Server 2008 R2             Windows Server 2012
  HW logical processor support   160 LPs                                  64 LPs                             320 LPs
  Physical memory support        2 TB                                     1 TB                               4 TB
  Cluster scale                  32 nodes, up to 4000 VMs                 16 nodes, up to 1000 VMs           64 nodes, up to 4000 VMs
  VM virtual processor support   Up to 64 VPs                             Up to 4 VPs                        Up to 64 VPs
  VM memory                      Up to 1 TB                               Up to 64 GB                        Up to 1 TB
  Live migration                 Concurrent vMotion, 128 per datastore    Yes, one at a time                 Yes, no limits (as many as hardware allows)
  Live storage migration         Concurrent Storage vMotion,              No; Quick Storage Migration        Yes, no limits (as many as hardware allows)
                                 8 per datastore, 2 per host              via SCVMM
  VP:LP ratio                    8:1                                      8:1 for server, 12:1 for           No limits (as many as hardware allows)
                                                                          client (VDI)

37 SR-IOV Overview
SR-IOV bypasses the virtual switch and hypervisor
- Network I/O path without SR-IOV: VM virtual NIC -> Hyper-V switch in the root partition (switching, VLAN filtering, data copy) -> physical NIC
- Network I/O path with SR-IOV: VM virtual function -> SR-IOV physical function on the NIC, skipping the Hyper-V switch

38 Hyper-V Extensible Switch
(diagram: the Hyper-V switch in the root partition connects the host NIC and VM NICs; capture, WFP, filtering, and forwarding extensions sit in the switch's extension miniport/protocol stack above the physical NIC)

39 SCVMM Management of Switch Extensions
(diagram: the VMM service and SCVMM policy database push policy through the VMM agent in the root partition to third-party capture, filtering, and forwarding extensions; a vendor SCVMM plug-in and vendor network management console provide custom vendor management in SCVMM)

40 VM-FEX on Hyper-V Configuration Steps
1. Upgrade UCSM to 2.1+ firmware (the Del Mar release supports SR-IOV)
2. Configure the service profile with static (PF) and dynamic (VF) adapter policies
3. Create the port profile and port profile client (up to 5 DVS supported)
4. Install the Windows Server 2012 host OS and enable the Hyper-V role
5. Install the VM-FEX forwarding extension and PF driver on the host
6. Create the Hyper-V vSwitch and enable the VM-FEX forwarding extension
7. Install the Windows Server 2012 VM and the VF driver
8. Install and configure the Microsoft Management Console (VM-FEX snap-in)
9. Verify the VM-FEX configuration in both UCSM and the MMC tool
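Step 6 can also be done from PowerShell. In this sketch the switch name, uplink adapter name, and extension display name are placeholders; the actual extension name comes from the Cisco installer in step 5:

```powershell
# Create an SR-IOV-capable vSwitch bound to the PF uplink
New-VMSwitch -Name "VMFEX-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true

# List the extensions installed on the switch, then enable the Cisco one
Get-VMSwitchExtension -VMSwitchName "VMFEX-Switch"
Enable-VMSwitchExtension -VMSwitchName "VMFEX-Switch" -Name "Cisco VM-FEX Forwarding Extension"
```

Note that `-EnableIov` can only be set at switch creation time, which is why the SR-IOV decision belongs in step 6 rather than later.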

41 Hyper-V VM-FEX Configuration: Step 5
Cisco VIC PF Driver on Windows
- The Cisco VIC Driver Utility Setup Wizard installs the PF driver
- Verify the PF configuration in the host's Device Manager

42 Hyper-V VM-FEX Configuration: Step 6
Hyper-V Switch and VM-FEX Extension
- The Cisco VIC Driver Utility Setup Wizard installs the VM-FEX extension
- The forwarding extension must be enabled per virtual switch

43 Hyper-V VM-FEX Configuration: Step 7
Install the Windows Server 2012 VM and VF Driver
- The Cisco VIC Driver Utility Setup Wizard installs the VF driver
- Enable SR-IOV per network adapter configuration
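The per-adapter SR-IOV setting can be applied from PowerShell as well; the VM name below is a placeholder:

```powershell
# Request a virtual function (VF) for the VM's network adapter;
# an IovWeight greater than 0 enables SR-IOV on that vNIC
Set-VMNetworkAdapter -VMName "WS2012-VM1" -IovWeight 100

# Confirm the adapter's SR-IOV status
Get-VMNetworkAdapter -VMName "WS2012-VM1" | Select-Object Name, IovWeight, Status
```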

44 Hyper-V VM-FEX Configuration: Step 8
Configure the Microsoft Management Console
- Install the Cisco VM-FEX Port Profile Manager
- Connect the Port Profile Manager to UCSM and pull the port profile configuration into the Hyper-V host

45 Hyper-V VM-FEX Configuration: Step 9
Verify the VM-FEX Configuration in UCS Manager
- Use Virtual Network Manager to configure the switch
- Use PowerShell scripts to configure the Cisco extension to the Hyper-V switch

46 Hyper-V VM-FEX Demo Topology
(diagram: two Microsoft Hyper-V servers with SR-IOV-enabled Cisco UCS VM-FEX adapters connect through virtual Ethernet ports (vEth) on a Cisco UCS Fabric Interconnect; NTTTCP runs as server on one host and as client on the other)
Benefits:
- Simple: one infrastructure for all networking
- Performance: 10 Gb all the way to the VM (SR-IOV)
- Robust: architect, troubleshoot, and traffic-engineer the network holistically

47 Agenda: What This Session Will Cover
- VM-FEX Overview
- VM-FEX vs. Nexus 1000V
- VM-FEX Operational Model
- UCS VM-FEX General Baseline Configuration for Hypervisors
- VM-FEX Implementation with VMware ESX on UCS
- VM-FEX Implementation with Hyper-V on UCS
- VM-FEX Implementation with KVM on UCS
- Summary

48 RHEL KVM VM-FEX: Infrastructure Requirements
Versions, Licenses, etc.
- VM-FEX is available for KVM only on UCS-managed deployments today
- The UCS Manager 2.1 release is required to support SR-IOV in KVM
- Unlike VMware, there is no VEM to load (utilizes libvirt)
- Install Red Hat as a virtualization host:
  - RHEL 6.2 for VM-FEX emulation mode (SR-IOV with MacVTap)
  - RHEL 6.3 for VM-FEX hypervisor-bypass mode (SR-IOV with PCI passthrough)
  - MacVTap direct (private) mode is no longer supported with UCSM release 2.1
- Live migration is only supported in emulation mode
- Guest operating system RHEL 6.3 is required to support SR-IOV with PCI passthrough
  - The RHEL 6.3 inbox driver is sufficient for SR-IOV with PCI passthrough
- Scripted nature of configuration at FCS
  - No current RHEV-M for RHEL KVM 6.x
- Virtual machine interfaces are managed by editing the VM domain XML file

49 VM-FEX on RHEL KVM Overview

50 VM-FEX on KVM Configuration Steps
1. Upgrade UCSM to 2.1+ firmware (the Del Mar release supports SR-IOV)
2. Configure the service profile with static (PF) and dynamic (VF) adapter policies
3. Create the port profile and port profile client (only the single default DVS is supported)
4. Install the VM OS with RHEL 6.3 to support SR-IOV
5. Modify the virtual machine domain XML file to enable VM-FEX
6. Connect to the VM with Virtual Machine Manager (GUI interface to virsh)
7. Verify the VM-FEX configuration in both UCSM and the RHEL host

51 KVM VM-FEX Configuration: Step 5
Domain XML for SR-IOV with MacVTap
- Adapters can be created in the virt-manager wizard and a MAC assigned
- Edit the domain XML file to make the adapters VM-FEX
- Bring in the port profile here
- The VM then operates as normal
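A domain XML fragment of the kind this step describes might look like the following sketch; the MAC address, source device, and profileid are placeholders, and the UCSM port profile is referenced through libvirt's 802.1Qbh virtualport type:

```xml
<!-- SR-IOV with MacVTap (emulation mode): interface type 'direct' -->
<interface type='direct'>
  <mac address='01:23:45:67:89:ab'/>           <!-- placeholder MAC -->
  <source dev='eth2' mode='passthrough'/>      <!-- host netdev backed by the VF -->
  <virtualport type='802.1Qbh'>
    <parameters profileid='my-port-profile'/>  <!-- UCSM port profile name -->
  </virtualport>
  <model type='virtio'/>
</interface>
```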

52 KVM VM-FEX Configuration: Step 5
Domain XML for SR-IOV with PCI Passthrough
- Adapters can be created in the virt-manager wizard and a MAC assigned
- Edit the domain XML file to make the adapters VM-FEX
- Bring in the port profile here
- The VM then operates as normal
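For the hypervisor-bypass variant, the interface is attached as a host device instead. Again a sketch with placeholder values; the PCI address must point at the VF assigned to this VM:

```xml
<!-- SR-IOV with PCI passthrough (hypervisor bypass): interface type 'hostdev' -->
<interface type='hostdev' managed='yes'>
  <mac address='01:23:45:67:89:ab'/>           <!-- placeholder MAC -->
  <source>
    <!-- placeholder PCI address of the dynamic vNIC's VF on the host -->
    <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
  </source>
  <virtualport type='802.1Qbh'>
    <parameters profileid='my-port-profile'/>  <!-- UCSM port profile name -->
  </virtualport>
</interface>
```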

53 KVM VM-FEX Configuration: Step 6
Connect to the VM with virt-manager (GUI interface to virsh)
- RHEL virt-manager handles simple VM operations
- Can start, stop, open, and migrate VMs with VM-FEX connections
- RHEV-M will be able to present VM-FEX port profiles natively with 3.x

54 KVM VM-FEX Configuration: Step 6
Managing VMs with virsh
- The virsh set of commands controls VMs
- Create, start, stop, migrate, etc. VMs with VM-FEX included
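The virsh equivalents of those operations look like the commands below; the domain name and target host are placeholders:

```shell
virsh list --all                    # show defined VMs and their state
virsh start rhel63-vm1              # boot a VM with its VM-FEX interfaces
virsh dumpxml rhel63-vm1            # confirm the interface/virtualport stanzas
# Live migration is only supported in emulation (MacVTap) mode
virsh migrate --live rhel63-vm1 qemu+ssh://host2/system
virsh shutdown rhel63-vm1
```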

55 KVM VM-FEX Configuration: Step 7
Verify the VM-FEX Configuration in UCSM
- The same port profiles can be used for KVM VMs

56 VM-FEX Advantages
Contrasting VM-FEX to Virtualised Switching Layers
- Simpler deployments
  - Unifies the virtual and physical network
  - Consistency in functionality, performance, and management
- Robustness
  - Programmability of the infrastructure
  - Troubleshoot and traffic-engineer virtual and physical together
- Performance
  - Near bare-metal I/O performance
  - Improved jitter, latency, throughput, and CPU utilization

57 Thank you.

