Virtual Machine Fabric Extension (VM-FEX): Bringing the Virtual Machines Directly on the Network. BRKCOM-2005. Dan Hanson, Technical Marketing Manager, Data Center Group.


2 Virtual Machine Fabric Extension (VM-FEX): Bringing the Virtual Machines Directly on the Network. BRKCOM-2005. Dan Hanson, Technical Marketing Manager, Data Center Group, CCIE #4482; Timothy Ma, Technical Marketing Engineer, Data Center Group

3 © 2013 Cisco and/or its affiliates. All rights reserved. BRKCOM-2005 Cisco Public

The Session Will Cover
- FEX Overview & History
- VM-FEX Introduction
- VM-FEX Operational Model
- VM-FEX General Baseline on UCS
- VM-FEX with VMware on UCS
- VM-FEX with Hyper-V on UCS
- VM-FEX with KVM on UCS
- VM-FEX General Details on Nexus 5500
- Summary

4 FEX Overview & History

5 Fabric Extender Evolution: FEX Architecture
- Consolidates network management
- FEX managed as a line card of the parent switch
- Uses pre-standard IEEE 802.1BR
One network: parent switch to top of rack. Many applications require multiple interfaces.

6 Fabric Extender Evolution: Adapter FEX
- Consolidates multiple 1Gb interfaces into a single 10Gb interface
- Extends the network into the server
- Uses pre-standard IEEE 802.1BR
One network: parent switch to adapter.

7 Fabric Extender Evolution: VM-FEX
- Consolidates the virtual and physical network
- Each VM gets a dedicated port on the switch
- Uses pre-standard IEEE 802.1BR
One network, virtual same as physical.

8 Fabric Extender Evolution: Single Point of Management
One network, parent switch to application: manage the network all the way to the OS interface, physical and virtual.
FEX Architecture
- Consolidates network management
- FEX managed as a line card of the parent switch
Adapter FEX
- Consolidates multiple 1Gb interfaces into a single 10Gb interface
- Extends the network into the server
VM-FEX
- Consolidates the virtual and physical network
- Each VM gets a dedicated port on the switch

9 Key Architectural Component #1: VNTAG, the "Intra-Chassis" Bus Header
- VNTAG mimics the forwarding vectors inside a switch
- D: Direction, P: Unicast/Multicast, L: Loop
- Policy is associated with the virtual interface, NOT the port
  – VLAN membership, QoS, MTU, rate limit, etc.
A LAN frame entering the FEX carries a VNTAG (its own EtherType) between the Ethernet header and the payload; the tag fields include the destination virtual interface, source virtual interface, and version.
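The tag described above can be sketched as a pack/unpack routine. This is a minimal sketch, assuming a commonly cited field layout (d/p/14-bit destination VIF in the high half; loop bit, version, and 12-bit source VIF in the low half); exact widths are not given on the slide, so treat the layout as illustrative.

```python
import struct

VNTAG_ETHERTYPE = 0x8926  # EtherType used on Cisco FEX links

def pack_vntag(d, p, dvif, looped, ver, svif):
    """Pack a 6-byte VNTAG (EtherType + 4-byte tag body).
    Assumed bit layout: d(1) p(1) dvif(14) | l(1) r(1) ver(2) svif(12)."""
    word = (d << 31) | (p << 30) | (dvif << 16) | (looped << 15) | (ver << 12) | svif
    return struct.pack("!HI", VNTAG_ETHERTYPE, word)

def unpack_vntag(buf):
    """Reverse of pack_vntag; returns the tag fields as a dict."""
    etype, word = struct.unpack("!HI", buf)
    assert etype == VNTAG_ETHERTYPE
    return {"d": word >> 31 & 1, "p": word >> 30 & 1,
            "dvif": word >> 16 & 0x3FFF, "l": word >> 15 & 1,
            "ver": word >> 12 & 0x3, "svif": word & 0xFFF}
```

The point the slide makes survives any layout detail: policy follows the VIF numbers carried in the tag, not the physical port the frame arrived on.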

10 FEX Data Forwarding: Revisiting Traditional Modular Switches (Example: Catalyst 6500)
- The Constellation Bus had a 32-byte header for fabric switching
  – The vast majority of modular switch vendors have an internal "tag" for fabric communications
- Originally, centralized forwarding ASICs
  – Line cards fed into these ASICs directly
- When we needed higher performance, we added faster switch fabrics and distributed forwarding capabilities to the system
- What this really meant: adding more ASIC forwarding capacity to the system to minimize the number of devices a flow had to traverse

11 FEX Data Forwarding: Decoupling the Modular Switch
- Think of the original C6k Satellite Program for VSL and RSL
- The Constellation Bus is now a smaller, 6-byte VNTAG header
  – Core to FEX technology and being standardized as IEEE 802.1BR
  – This is NOT a 1:1 mapping to VEPA/802.1Qbg, which is designed to offer an enhanced forwarding mechanism between peer devices via a single upstream device
- Keep the ASIC counts for high performance, but put the ASICs on the central controlling switch instead of on all the line cards
  – Latency and bandwidth were more a function of the layers of ASICs to traverse in a tree than of the location of those ASICs (the fiber/copper paths for a packet to propagate)
- Add protocols for configuration and firmware management of these remote cards (Satellite Control Protocol, Satellite Discovery Protocol)
  – Allows us to get away from manual firmware code management per (remote) line card
- Move from store-and-forward behavior to cut-through switching to actually improve latency

12 Fabric Extension (FEX) Concept: Virtualising the Network Port
A switch port is extended over the Fabric Extender: the multi-tier architecture (switch feeding switch) becomes a FEX architecture in which the switch plus FEX act as one logical switch. Collapse network tiers, fewer network management points.

13 FEX Technology for Unified I/O
- Virtual switch ports, cables, and NIC ports
- Mapping of Ethernet and FC wires over Ethernet: individual Ethernets, individual storage (iSCSI, NFS, FC), and blade management channels (KVM, USB, CDROM, adapters) carried over DCB Ethernet
- Service level enforcement
- Multiple data types (jumbo, lossless, FC)
- Individual link states
- Fewer cables: multiple Ethernet traffic types co-exist on the same cable
- Fewer adapters needed; overall less power
- Interoperates with existing models: management remains constant for system admins and LAN/SAN admins
- Possible to take these links further upstream for aggregation

14 Key Architectural Component #2: UCS VIC
- 256 PCIe devices
- Devices can be vNICs or vHBAs
- Each device has a corresponding switch interface
- Bandwidth 2x4x10 Gb
- Uses a 4x10 EtherChannel; hardware is 40Gb capable
- vNICs/vHBAs NOT limited to 10Gb
- PCIe Gen-2 x16
- Mezzanine and PCIe form factors

15 VM-FEX Introduction

16 Server Virtualization Issues
1. When VMs move across physical ports (live migration), the network policy must follow
2. Must be able to view or apply network/security policy to locally switched traffic
3. Need to maintain segregation of duties (server, network, and security administrators) while ensuring non-disruptive operations

17 Cisco Virtual Networking Options
- Extend networking into the hypervisor: Cisco Nexus 1000V Switch (software switch over a generic adapter)
- Extend the physical network to the VMs: Cisco UCS VM-FEX

18 UCS VM-FEX Distributed Modular System: Removing the Virtual Switching Infrastructure to a FEX
VM-FEX creates a single virtual-physical access layer:
- Collapse virtual and physical switching into a single access layer
- VM-FEX is a virtual line card to the parent switch
- The parent switch maintains all management & configuration
- Virtual and physical traffic are treated the same
The distributed modular system = UCS Fabric Interconnect (parent switch) + UCS IOM-FEX + Cisco UCS VIC, sitting in the access layer between the servers and the LAN (N7000/C6500) and SAN (MDS).

19 Extending FEX Architecture to the VMs: Cascading of Fabric Extenders
In a virtualized deployment, the logical switch spans switch + FEX + vSwitch. With VM-FEX, the switch port is extended over cascaded Fabric Extenders all the way to the virtual machine, so switch + FEX + VM-FEX form one logical switch.

20 Nexus 5000/2000 VM-FEX Distributed Modular System: Removing the Virtual Switching Infrastructure to a FEX
VM-FEX creates a single virtual-physical access layer:
- Collapse virtual and physical switching into a single access layer
- VM-FEX is a virtual line card to the parent switch
- The parent switch maintains all management & configuration
- Virtual and physical traffic are treated the same
The distributed modular system = Nexus 5500 (parent switch) + Nexus 2000 FEX + Cisco UCS VIC.

21 Nexus Fabric Extender: Single Access Layer
Distributed modular system = Nexus 5000 parent switch + Cisco Nexus 2000 FEX.
- Nexus 2000 FEX is a virtual line card to the Nexus 5000
- The Nexus 5000 maintains all management & configuration
- No Spanning Tree between the FEX & Nexus 5000
Over 6000 production customers; over 5 million Nexus 2000 ports deployed.

22 IEEE 802.1BR vs. IEEE 802.1Qbg
VEPA, based on IEEE 802.1Qbg:
- Management complexity: each VEPA is an independent point of management
- Doesn't support cascading of Reflective Relay (used in basic VEPA)
- Vulnerable: ACLs based on source MAC (can be spoofed)
- Resource intensive: the hypervisor component consumes CPU cycles
- Inefficient bandwidth: a separate copy of each multicast and broadcast packet on the wire
FEX, based on IEEE 802.1BR:
- Ease of management: one switch manages all Port Extenders (adapters/switches/virtual interfaces)
- Supports cascading of Port Extenders (multi-tier, single point of management)
- Virtual machine aware (VM-FEX)
- Secure: ACLs based on VN-TAG
- Scalable: multicast and broadcast replication performed in hardware at line rate
- Efficient: no impact on server CPU

23 Deployments of Cisco's FEX Technology
- FEX: Nexus 5000/5500 with Nexus 2200, UCS 6100 with IOM 2k, and B22H with Nexus 5500 (HP); the switch/FI manages chassis or rack FEXes in front of blade and rack servers
- Adapter FEX: UCS 6100 with VIC 1 or 2, or Nexus with VIC P81E, extending the FEX into the blade/rack server adapter
- VM-FEX: the same adapters plus a VM management link, with management-plane integration between UCS Manager or the Nexus switch and vCenter/VMM (e.g. Red Hat KVM), extending ports to individual VMs

24 VM-FEX Operational Model: Pre-Boot Configuration
- Step 1: Preboot
  – UCS-defined PCIe devices and enumerations
  – Host discovers PCIe devices

25 VM-FEX Operational Model: Defining "Port Profiles" on the UCS or Nexus
- Step 1: Preboot
  – UCS-defined PCIe devices and enumerations
  – Host discovers PCIe devices
- Step 2: Port Profile
  – A folder of network policy defined on UCSM or the Nexus 5500 (e.g. VLAN Web, VLAN HR, VLAN DB, VLAN Compliance)

26 VM-FEX Operational Model: Pushing Port Profiles to the Hypervisor System
- Step 1: Preboot
  – UCS-defined PCIe devices and enumerations
  – Host discovers PCIe devices
- Step 2: Port Profile
  – A folder of network policy defined on UCS or the Nexus 5500
- Step 3: Port Profile Export
  – The port profile name list is exported to the virtualization manager

27 VM-FEX Operational Model: Mapping of Port Profiles to VM Virtual Adapters
- Step 1: Preboot
  – UCS-defined PCIe devices and enumerations
  – Host discovers PCIe devices
- Step 2: Port Profile
  – A folder of network policy defined on UCS or the Nexus 5500
- Step 3: Port Profile Export
  – The port profile name list is exported to the virtualization manager
- Step 4: VM Definition
  – The named policy is attached to the VM

28 VM-FEX Operational Model: Simplifying the Access Infrastructure
- Unify the virtual and physical network
  – Same port profiles for various hypervisors and bare-metal servers
- Consistent functions, performance, and management
Each VM's vNIC maps to a vEth in the physical network.

29 VM-FEX Operational Model: Traffic Forwarding
- Removes performance dependencies from VM location
- Offloads software switching functions from the host CPU
- More on this in upcoming slides

30 VM-FEX Operational Model

31 VM-FEX Modes of Operation: VMware ESX
Emulated Mode
- Each VM gets a dedicated PCIe device
- Appears as a distributed virtual switch to the hypervisor
- Live migration supported
High Performance Mode (VMDirectPath)
- Co-exists with standard mode
- Bypasses the hypervisor layer
- ~30% improvement in I/O performance
- Appears as a distributed virtual switch to the hypervisor
- Currently supported with ESXi
- Live migration supported

32 VMDirectPath: How it Works
The dynamic VIC device exposes three regions: a config space used by the PCI management layer, a management BAR used by the VMkernel and PTS, and a vmxnet3-compliant data BAR (rings and registers). The guest's vmxnet3 driver and the OS PCI subsystem talk to the device; the Cisco DVS handles control-path emulation, pass-through transitions, port events, and PCIe events, while the data path goes directly to hardware.

33 UCS VM-FEX Modes of Operation: Windows Hyper-V & Red Hat KVM with SR-IOV
Emulated Mode
- Each VM gets a dedicated PCIe device
- Appears as a Virtual Function to the guest OS
- Live migration supported
High Performance Mode (Hypervisor Bypass)
- Co-exists with standard mode
- Bypasses the hypervisor layer
- ~30% improvement in I/O performance
- Appears as a Virtual Function to the guest OS
- Currently supported through SR-IOV with Hyper-V 2012 & RHEL KVM 6.3
- Live migration supported

34 SR-IOV: How it Works
Without SR-IOV, the network I/O path runs from the VM's virtual NIC through the Hyper-V switch in the root partition to the physical NIC, with switching, VLAN filtering, and data copies done in software. With SR-IOV, the VM's Virtual Function talks directly to the physical NIC, which performs switching, VLAN filtering, and the data copy in hardware; the Hyper-V switch and the Physical Function remain in the root partition.

35 VM-FEX Operational Model: Live Migration with Hypervisor Bypass
During live migration to the secondary host there is a temporary transition to standard (emulated) I/O, with roughly a 1-second silent period. (Test setup: a VM sending a TCP stream at 1500 MTU on UCS B200 M2 blades with the UCS VIC card.)

36 VM-FEX Modes of Operation: Emulation vs. Hypervisor Bypass

Emulation mode:
- VMware: Pass Through (PTS); hypervisor version ESX 4.0 U1+
- Hyper-V: Hyper-V Switch; Windows Server 2012
- KVM: SR-IOV with MacVTap; RHEL
- vMotion / live migration: supported

Hypervisor bypass mode:
- VMware: VMDirectPath; hypervisor version ESX 5.0+
- Hyper-V: SR-IOV; Windows Server 2012
- KVM: SR-IOV with PCI Passthrough; RHEL 6.3
- vMotion / live migration: supported, except N/A for KVM

37 UCS VM-FEX System View: Deploying on a UCS B or C Series Infrastructure
(System diagram: a UCS B or C Series server with a Cisco VIC adapter runs the ESX kernel module / libvirt / Hyper-V extensible switch. The VIC's virtual interface control logic presents static vNICs and vHBAs to the host; each maps over the chassis IO modules A and B to a veth or vfc on the UCS 6x00 Fabric Interconnects, which hold the port profiles and provide the Ethernet, Fibre Channel, and management (CIMC/KVM) uplinks.)

38 UCS VM-FEX System View: Deploying on a UCS B or C Series Infrastructure (continued)
(The same system view with dynamic vNICs, d-vNIC1 through d-vNIC4, added alongside the static vNICs and the service console/kernel interfaces; each dynamic vNIC also terminates on a veth on the fabric interconnects.)

39 VM-FEX Scalability: Number of VIFs Supported per Hypervisor

Cisco UCS 6100 / 6200 Series:
- ESX 4.0–4.1 (DirectPath I/O): 56 (54 vNIC + 2 vHBA)
- ESXi 5.0–5.1 (DirectPath I/O): 116 (114 vNIC + 2 vHBA) for both a half-width blade with a single VIC and a full-width blade with dual VICs*
- Windows 2012 (SR-IOV): 116 (114 vNIC + 2 vHBA) with a single VIC; 232 (228 vNIC + 4 vHBA) with dual VICs
- KVM 6.1–6.3 (SR-IOV): 116 (114 vNIC + 2 vHBA) with a single VIC; 232 (228 vNIC + 4 vHBA) with dual VICs
* On ESXi an additional VIC will NOT increase the total VIF count due to an OS limitation; multiple VICs are supported for full-width blades and the B200 M3.

Nexus 5500 Series (Adapter FEX / VM-FEX): ESX 4.1 – ESXi; only one VIC (P81E/VIC1225) is supported per C-Series rack server.

40 VM-FEX Advantages
- Simpler deployments
  – Unifies the virtual and physical network
  – Consistency in functionality, performance, and management
- Robustness
  – Programmability of the infrastructure
  – Troubleshooting and traffic engineering for virtual and physical together
- Performance
  – Near bare-metal I/O performance
  – Improved jitter, latency, throughput, and CPU utilization
- Security
  – Centralized security policy enforcement at the controlling bridge
  – Visibility down to VM-to-VM traffic

41 VM-FEX General Baseline on UCS

42 UCS General Baseline #1: Dynamic vNICs Policy. Setting Up a Dynamic Adapter Policy
- Policies automatically provision dynamic vNICs on servers
- The maximum depends on the number of Fabric Interconnect to IO Module connections:
  – (# IOM-to-FI links * 63) - 2 for Gen 2 hardware (62xx, 22xx, and VIC12xx)
  – (# IOM-to-FI links * 15) - 2 for Gen 1 hardware (61xx, 21xx, and Palo)
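The two formulas above can be captured in a small helper; a sketch, using only the per-generation numbers stated on the slide:

```python
def max_dynamic_vnics(iom_fi_links: int, gen2: bool = True) -> int:
    """Maximum dynamic vNICs per server, per the policy slide:
    Gen 2 hardware (62xx, 22xx, VIC12xx): links * 63 - 2
    Gen 1 hardware (61xx, 21xx, Palo):    links * 15 - 2"""
    per_link = 63 if gen2 else 15
    return iom_fi_links * per_link - 2

# e.g. a Gen 2 chassis with 4 IOM-to-FI links supports 4*63 - 2 = 250
print(max_dynamic_vnics(4))        # 250
print(max_dynamic_vnics(2, False)) # 28
```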

43 UCS General Baseline #1: Creating Dynamic vNICs. Fabric Interconnect VIF Calculation
In port-channel mode, when you cable between the FEX and the FI, the available virtual interface (VIF) namespace varies depending on where the uplinks are connected to the FI ports. When the port-channel uplinks from the FEX are connected only within a set of eight ports managed by a single chip, Cisco UCS Manager maximizes the number of VIFs available to service profiles deployed on the servers. If the uplink connections are distributed across ports managed by separate chips, the VIF count decreases. For example, if you connect seven members of the port channel to ports 1–7 but the eighth member to port 9, the port channel can only support VIFs as though it had one member.
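The cabling rule above can be sketched in code. This follows the slide's example literally (a port channel confined to one eight-port chip group gets the full namespace; one that spans chip groups falls back as though it had a single member) and assumes the Gen 2 numbers from the previous slide (63 VIFs per link, minus 2); it is an illustration, not UCSM's exact algorithm.

```python
def vif_namespace(member_ports, vifs_per_link=63, overhead=2, group_size=8):
    """Estimate the VIF namespace for a FEX-to-FI port channel.
    FI ports are managed in sets of eight by a single chip; crossing
    a set boundary collapses the effective member count to one."""
    chips = {(p - 1) // group_size for p in member_ports}
    effective = len(member_ports) if len(chips) == 1 else 1
    return effective * vifs_per_link - overhead

print(vif_namespace(range(1, 9)))             # ports 1-8, one chip: 502
print(vif_namespace([1, 2, 3, 4, 5, 6, 7, 9]))  # port 9 crosses chips: 61
```

The practical takeaway is unchanged: keep all members of a FEX port channel within one eight-port group when you cable the FI.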

44 UCS General Baseline #2: Building the Service Profile. Adding the Dynamic Policy and Static Adapters
- 2 static vNICs, one to each UCS fabric
- Change the dynamic vNIC connection policy to set up the dynamic vNICs

45 UCS General Baseline #2: Building the Service Profile. Static and Dynamic Adapter Policies
- VMware ESXi: static vNIC adapter policy "VMware"; dynamic vNIC adapter policy "VMwarePassThru"
- Windows Hyper-V: static vNIC adapter policy "SRIOV"; dynamic vNIC adapter policy "Windows"
- Red Hat KVM: static vNIC adapter policy "SRIOV"; dynamic vNIC adapter policy "Linux"

46 UCS General Baseline #3: Building Port Profiles. Creating Folders of Network Access Attributes
- Creating port profiles includes:
  – VLAN(s)
  – Native and/or tagging allowed
  – QoS weights and flow rates
  – Upstream ports to always use

47 UCS General Baseline #4: Building Port Profiles. Enhanced Options like VMDirectPath with VM-FEX
- Selecting High Performance only affects VMware deployments today
- No problem if it is selected and used on other hypervisors

48 UCS General Baseline #5: Communication with the Manager. Establishing Communication to the Hypervisor Manager
- A tool discussed later simplifies the whole integration process

49 UCS General Baseline #5: Communication with the Manager (Scale)
- ESXi: 1 vCenter per UCS domain; 4 DVS per vCenter
- Hyper-V: 1 MMC instance; 5 DVS per MMC instance
- KVM: 1 DVS per UCS domain
- Port profiles per UCS domain: 512; dynamic ports per port profile: 4096; dynamic ports per DVS: 4096

50 UCS General Baseline #6: Publishing Port Profiles. Exporting Port Profiles to the Hypervisor Manager
- Publish port profiles to the hypervisors and the virtual switches within them

51 VM-FEX Implementation with VMware on UCS

52 VMware VM-FEX: Infrastructure Requirements. Versions, Licenses, etc.
- VMware VM-FEX is available on B Series and on integrated and standalone C Series
- Each VIC card supports up to 116 virtual machine interfaces (OS limitation)
- An Enterprise Plus license is required on the host (as for any DVS)
- A Standard license or above is required for vCenter
- Hypervisor features are supported in both emulated and hypervisor-bypass modes
  – vMotion, HA, DRS, snapshots, hot add/remove of virtual devices, suspend/resume
- VMDirectPath (hypervisor bypass) with VM-FEX is supported with ESXi 5.0+
- VM-FEX upgrade from ESXi 4.x to ESXi 5.x is supported with a customized ISO and VMware Update Manager

53 VM-FEX and VMware SR-IOV Comparison
- VM-FEX is the hypervisor-bypass solution with vMotion capability
  – VMware SR-IOV is incompatible with hypervisor features including vMotion, HA, DRS, etc.
- VM-FEX has the highest virtual machine interface count per host
  – With the UCSM 2.1 release, each ESXi host supports up to 116 VM interfaces
  – With ESX 5.1, SR-IOV supports up to 64 VFs with Emulex and 41 VFs with Intel adapters
- VM-FEX is available on both UCS blade and rack servers
  – Blade servers, integrated rack servers with UCSM, and standalone rack servers with Nexus 5500
  – With ESX 5.1, SR-IOV is only available on PCIe adapters and standalone rack servers
- VM-FEX enables centralized network management and policy enforcement
  – Network policy is configured as a port profile in UCSM / N5K and pushed to vCenter as a network label
  – Clean separation between network and server responsibilities
- VM-FEX configuration is fully automated with the Easy VM-FEX tool
- VM-FEX provides inter-VM traffic visibility through network tools

54 VMware VM-FEX Configuration Workflow
1. Configure the service profile and adapter (server administrator)
2. Create the port profile and cluster in UCSM (network administrator)
3. Run the UCSM VMware integration wizard
4. Install the Cisco VEM software bundle on the host and the plugin in vCenter
5. Add the ESX host into the DVS cluster in vCenter
6. Configure the virtual machine settings to enable hypervisor bypass
7. Verify the VM-FEX configuration in both UCSM and vCenter
(UCS exports the port profiles to vCenter.)

55 VMware VM-FEX Demo Topology
Two UCS servers running VMware ESX 5.1 behind a Cisco UCS Fabric Interconnect (virtual Ethernet ports, vEth). Each host runs an NTTTCP sender/receiver pair: one VM on VM-FEX with DirectPath I/O active (VMXNET3 adapter) and one on a standard vSwitch for comparison.

56 VMware VM-FEX Demo

57 VMware VM-FEX Best Practices
- Pre-provision the number of dynamic vNICs for future use
  – Changing the quantity or the adapter policy requires a server reboot
- Select "High Performance Mode" in the port profile to enable hypervisor bypass
- Use the native ESX VMXNET3 driver
  – User-configurable parameters include queues, interrupts, and ring size through policy
  – Recommended: Num(vCPU) = Num(TQ) = Num(RQ) to enable DirectPath I/O
- Other considerations when deploying VM-FEX:
  – ESX heap memory size: MTU size
  – ESX available interrupt vectors: guest OS and adapter policy
  – A dedicated spreadsheet is available for VM-FEX calculation and sizing
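The queue-sizing recommendation above is easy to sanity-check programmatically. A hypothetical helper (the name and interface are not from any Cisco tool), checking the slide's rule that DirectPath I/O engages when the vmxnet3 queue counts match the vCPU count:

```python
def directpath_io_ready(num_vcpu: int, tx_queues: int, rx_queues: int) -> bool:
    """Per the best-practice slide: Num(vCPU) = Num(TQ) = Num(RQ)
    is recommended to enable DirectPath I/O on a VMXNET3 vNIC."""
    return num_vcpu == tx_queues == rx_queues

print(directpath_io_ready(4, 4, 4))  # True
print(directpath_io_ready(4, 2, 2))  # False
```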

58 Easy VM-FEX Tool

59 VM-FEX Implementation with Nexus 5K

60 Nexus VM-FEX System View: Deploying on a UCS C Series with Nexus 5500 Infrastructure
(System diagram: a UCS C Series server with a Cisco VIC adapter runs the ESX kernel pass-through module. Static vNICs and vHBAs map over FEX A and FEX B (Nexus 2232 fabric ports) to veth/vfc interfaces on a pair of Nexus 55xx switches holding the port profiles, connected by vPC, with Fibre Channel and management (CIMC/KVM) uplinks.)

61 Nexus VM-FEX System View: Deploying on a UCS C Series with Nexus 5500 Infrastructure (continued)
(The same system view with dynamic vNICs, d-vNIC1 through d-vNIC4, added for the VMs alongside the service console/kernel interfaces; the veths are not part of a vPC at FCS.)

62 Nexus 5500 VM-FEX Demo Topology
- Nexus 5548 / C22 M3 / VIC 1225 / ESXi 5.1
- Nexus 5548-A is pre-configured; the demo focuses on 5548-B with the same configuration
- Uplink redundancy: each static vNIC is configured as Active/Standby
  – No need for OS teaming software
  – Required for the hypervisor uplink
- Each dynamic vNIC attaches to an uplink in round-robin fashion
- A vPC domain and peer link are configured to synchronize veth numbering for the same VM
  – Not used for the forwarding plane

63 Nexus 5500 VM-FEX Configuration Workflow
1. Provision the VM vNICs and enable VNTag mode (server administrator, UCS C-Series CIMC)
2. Install the N5K feature license (network administrator)
3. Configure static binding interfaces and enable VNTag on the host interfaces
4. Configure the port profiles
5. Configure the DVS & extension
6. Download the extension and register the plugin (in vCenter)
7. Install the VEM on the ESXi host
8. Add the server into the DVS cluster
9. Create the VM and attach the port profile
10. Verify VM-FEX status

64 Nexus 5K VM-FEX Demo

65 Nexus 5500 VM-FEX Best Practices
- VM-FEX supports a single-N5k topology; a vPC topology is recommended
- In a vPC topology, ensure both N5ks have the same port-profile configuration
- In a vPC topology, configure the same SVS connection on both N5ks, but only the primary switch has an active connection to vCenter
  – When the secondary switch takes over the primary role, it seamlessly activates the connection to vCenter
- Enable the "vethernet auto-create" feature
  – Automatically creates vEth ports for dynamic vNICs during server boot-up
  – Auto-created vEths are numbered >=
- DirectPath I/O is activated with the "high-performance host-netio" command in the port profile

66 VM-FEX Implementation with Hyper-V on UCS

67 Hyper-V Scale Comparison (VMware vSphere 5.1 | Windows Server 2008 R2 | Windows Server 2012)
- HW logical processor support: 160 LPs | 64 LPs | 320 LPs
- Physical memory support: 2 TB | 1 TB | 4 TB
- Cluster scale: 32 nodes, up to 4000 VMs | 16 nodes, up to 1000 VMs | 64 nodes, up to 4000 VMs
- VM virtual processor support: up to 64 VPs | up to 4 VPs | up to 64 VPs
- VM memory: up to 1 TB | up to 64 GB | up to 1 TB
- Live migration: concurrent vMotion, 128 per datastore | yes, one at a time | yes, with no limits (as many as hardware will allow)
- Live storage migration: concurrent Storage vMotion, 8 per datastore, 2 per host | no (Quick Storage Migration via SCVMM) | yes, with no limits
- VP:LP ratio: 8:1 | 8:1 for server, 12:1 for client (VDI) | no limits (as many as hardware will allow)

68 SR-IOV Overview
(The same comparison as slide 34: without SR-IOV, VM traffic traverses the Hyper-V switch in the root partition, with switching, VLAN filtering, and data copies in software; with SR-IOV, the VM's Virtual Function bypasses the switch and the physical NIC performs switching, VLAN filtering, and the data copy in hardware.)

69 Hyper-V Extensible Switch Architecture — (diagram: capture, WFP filtering, and VM-FEX forwarding extensions stacked inside the Hyper-V switch between the host/VM NICs and the physical NIC, with the Cisco PF and VF drivers beneath)  The Hyper-V extensible switch architecture is an open API model that enhances the vSwitch feature set  Hyper-V defines three types of extensions: –Capture extensions –Filtering extensions (Windows Filtering Platform) –Forwarding extensions (VM-FEX)  Multiple extensions are allowed –Verify compatibility with each vendor –Some extensions are incompatible with WFP  Extension state is unique to each vSwitch –Leverage SCVMM to configure extensions centrally  Cisco provides both the PF and VF drivers
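Because extension state is kept per vSwitch, the standard Hyper-V cmdlets can be used to inspect and enable the forwarding extension on each switch. The extension display name below is an assumption — check the actual name reported on your host.

```powershell
# List every extension bound to a switch, with its type
# (Capture / Filter / Forwarding) and enabled state
Get-VMSwitchExtension -VMSwitchName "VMFEX-Switch"

# Enable a forwarding extension by display name
# ("Cisco VM-FEX Switch" is a placeholder -- use the name
# shown by the command above)
Enable-VMSwitchExtension -VMSwitchName "VMFEX-Switch" -Name "Cisco VM-FEX Switch"
```

Only one forwarding extension can be active per vSwitch, which is why the VM-FEX extension occupies that slot by itself.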

70 SCVMM Management of Switch Extensions — (diagram) The UCSM provider plugin in SCVMM pulls the network and security policy database from UCS Manager through the UCS Manager API; the VMM agent in the root partition programs the capture, filtering, and VM-FEX forwarding extensions in the Hyper-V switch; the Cisco PF and VF drivers run against the UCS VIC defined by the service profile.

71 Hyper-V VM-FEX: Infrastructure Requirements  VM-FEX is available for Hyper-V on UCS B-Series and integrated C-Series –Standalone C-Series support is on the roadmap  Each VIC card supports up to 116 virtual machine interfaces –Installing an additional VIC doubles the VM interface count (B200 M3 with 2 VICs -> 232 per host)  Windows Server 2012 is required for both the host and guest OS (same kernel level) –Windows Server Core and the standalone Hyper-V Server are not supported  VM-FEX with Live Migration is fully supported –Various options for shared storage – failover cluster, SMB, shared-nothing storage  Full PowerShell library support for automation –PowerShell cmdlets for UCSM, Hyper-V, and SCVMM

72 Hyper-V VM-FEX: Infrastructure Requirements  Two management approaches are supported –Microsoft Management Console (MMC) for standalone Hyper-V deployments –System Center Virtual Machine Manager (SCVMM 2012 SP1) for integrated Hyper-V deployments  UCS Manager fully integrates with System Center Virtual Machine Manager (SCVMM) 2012 SP1 –Expected to release with UCSM 2.2 –The UCS provider plugin includes the VM-FEX switch forwarding extension –SCVMM uses the UCSM provider plugin to pull information from UCSM  Cisco provides both the VIC drivers and the VM-FEX switch forwarding extension –A VIC utility tool is provided (MSI) –The same Windows driver serves both the physical function (host) and the virtual function (guest)

73 Hyper-V VM-FEX: SCVMM Network Definition — (diagram: a logical switch spanning SJ and NYC host groups, with per-site uplink port profiles; the WEB VM network rides VLAN 55 in PUBLIC-SJC and VLAN 155 in PUBLIC-NYC under fabric network PUBLIC, while Gold/Silver/Bronze virtual port profiles classify the vNICs)
Fabric Network (FN) – a network abstraction representing a logical network composed of network segments (VLANs) spanning multiple sites
Fabric Network Definition (FND) – a network abstraction composed of site-specific network segments
VM Network Definition (VMND) – a sub-network abstraction composed of a single network segment (and an IP pool) at a specific site
VM Network (VMN) – a sub-network abstraction composed of network segments spanning multiple sites; used by a tenant's VMs
Uplink Port Profile (UPP) – carries the list of allowed FNDs for a pNIC
Virtual Port Profile (VPP) – defines the QoS/SLA characteristics for a vNIC
Logical Switch – Microsoft's native DVS; defines the Live Migration boundary

74 Hyper-V VM-FEX Configuration Workflow
1. Configure the service profile
2. Set up SCVMM and create port profiles
3. Install the UCSM provider plugin
4. Configure the SCVMM switch extension manager
5. Configure the SCVMM logical switch
6. Associate a native VM network with the externally provided VM network
7. Assign the Hyper-V host to the logical switch and attach the port classification
8. Verify VM-FEX connectivity in UCSM

75 Hyper-V VM-FEX Demo Topology — (diagram) Two Cisco UCS servers running Microsoft Hyper-V with SR-IOV-enabled adapters connect to a Cisco UCS Fabric Interconnect; each VM attaches through a VM-FEX virtual Ethernet port (vEth), and NTTTCP server and client VMs generate the test traffic.

76 Hyper-V VM-FEX Demo

77 Hyper-V VM-FEX Best Practice  Always use SCVMM to configure Hyper-V and virtual machine properties  Use NTTTCP as the performance benchmark tool on the Windows platform –NTTTCP is Microsoft's internal testing tool –Latest version – NTTTCP v5.28 for Windows Server 2012 (April 2013)  Optimize for 10GE interfaces –Enable Receive Side Scaling (RSS) with the PowerShell command Set-VMNetworkAdapter -VMName "Server" -IovQueuePairsRequested 4 –The VM must be shut down for the RSS change to apply  iSCSI boot is NOT supported for a PF with an overlay iSCSI vNIC
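The RSS tuning above can be wrapped in a short PowerShell sequence. This is a sketch under the slide's assumptions: the VM name "Server" and the queue-pair count of 4 are examples, and the VM must be powered off for the setting to take effect.

```powershell
# Request multiple VF queue pairs so RSS can spread receive
# processing across vCPUs; the change only applies while the VM is off.
Stop-VM -Name "Server"
Set-VMNetworkAdapter -VMName "Server" -IovQueuePairsRequested 4
Start-VM -Name "Server"

# Verify what was requested vs. what the adapter actually granted
Get-VMNetworkAdapter -VMName "Server" |
    Format-List Name, IovWeight, IovQueuePairsRequested, IovQueuePairsAssigned
```

If `IovQueuePairsAssigned` comes back lower than requested, the VIC has run out of queue-pair resources for that VF.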

78 VM-FEX Implementation with KVM on UCS

79 RHEL KVM VM-FEX: Infrastructure Requirements  VM-FEX is available for KVM on UCS B-Series and integrated C-Series –Standalone C-Series support is on the roadmap  Each VIC card supports up to 116 virtual machine interfaces –Installing an additional VIC doubles the VM interface count (B200 M3 with 2 VICs -> 232 per host)  UCS Manager release 2.1 is required to support SR-IOV in KVM  Install Red Hat as the virtualization host –RHEL 6.2 for VM-FEX emulation mode (SR-IOV with macvtap) –RHEL 6.3 for VM-FEX hypervisor bypass mode (SR-IOV with PCI passthrough) –Macvtap direct (private) mode is no longer supported with UCSM release 2.1  Live migration is supported only in emulation mode  A guest operating system of RHEL 6.3 is required for SR-IOV with PCI passthrough –The RHEL 6.3 inbox driver supports SR-IOV with PCI passthrough  Configuration is scripted in nature at FCS –No current RHEV-M support for RHEL KVM 6.x –Virtual machine interfaces are managed by editing the VM domain XML file

80 VM-FEX with KVM Architecture — (diagram: guest VMs with virtio-net drivers attach through vhost-net and macvtap interfaces — or bypass them — to VFs on the Cisco VIC adapter; each VF maps to a vEth on the UCS switch that carries a port profile with its QoS and VLAN settings)
Libvirt – an open-source management tool for virtual machines that provides a generic framework supporting various virtualization back ends –A virtual machine in libvirt is represented by a domain XML file stored in the QEMU user space –Virsh is a command-line interface built on top of the libvirt API
Macvtap – the Linux macvtap driver provides a mechanism to connect a VM interface directly to a physical device in the host –Libvirt uses macvtap to directly attach a VM NIC to a host physical device –In hypervisor bypass mode, VM-FEX bypasses the macvtap interface

81 VM-FEX on KVM Configuration Steps
1. Upgrade UCSM to 2.1+ firmware (the Del Mar release supports SR-IOV)
2. Configure the service profile with static (PF) and dynamic (VF) adapter policies
3. Create the port profile and port profile client (only the single default DVS is supported)
4. Install the VM OS with RHEL 6.3 to support SR-IOV
5. Modify the virtual machine domain XML file to enable the VM-FEX function
6. Connect to the VM with Virtual Machine Manager (virt-manager, the graphical front end to libvirt)
7. Verify the VM-FEX configuration in both UCSM and the RHEL host
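Step 5 above edits the domain XML by hand: libvirt expresses a VM-FEX port as a direct-attach interface carrying an 802.1Qbh virtual port whose `profileid` names a port profile defined in UCS Manager. A hedged sketch for emulation mode follows — the MAC address, host device, and profile name are placeholders.

```xml
<!-- Emulation mode (SR-IOV with macvtap): direct attach of the VM NIC
     to a dynamic VF exposed on the host, e.g. via `virsh edit <domain>` -->
<interface type='direct'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source dev='eth2' mode='passthrough'/>
  <virtualport type='802.1Qbh'>
    <!-- must match a port profile created in UCSM (step 3) -->
    <parameters profileid='VM-WEB'/>
  </virtualport>
  <model type='virtio'/>
</interface>
```

On VM start, libvirt signals the profile association upstream, and the matching vEth with that port profile's VLAN and QoS settings appears in UCSM (step 7).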

82 KVM VM-FEX Demo

83 VM-FEX Customer Benefit
Simplicity –One infrastructure for virtual and physical resource provisioning, management, monitoring, and troubleshooting –Consistent features, performance, and management for virtual and physical infrastructure
Robustness –Troubleshooting and traffic engineering of VM traffic holistically from the physical network –Programmability: the ability to renumber VLANs without disruptive changes
Performance –VMDirectPath and SR-IOV enable near bare-metal I/O performance –Line-rate traffic to the virtual machine

84 BRKCOM-2005 Recommended Viewing – UCS Advantage Videos on YouTube
Playlist: UCS Technical Videos
Overview: Cisco UCS Advantage – http://www.youtube.com/watch?v=IW4zHXIjpPU

85 BRKCOM-2005 Recommended Viewing
UCS server – Service Profiles and Templates – http://www.youtube.com/watch?v=JW-YtVN75R0
UCS server – Organizations and Roles – http://www.youtube.com/watch?v=tb-L0zv3If
UCS server – Extended Memory Technology – http://www.youtube.com/watch?v=kS3ehPRcVDo
UCS server – Server Pre-Provisioning – http://www.youtube.com/watch?v=o7BuEE3hNPE
UCS server – BIOS Policies – http://www.youtube.com/watch?v=Pr6EptC9JXQ
UCS server – RAID Policies – http://www.youtube.com/watch?v=Vcs56wjUWuI
UCS server – Firmware Policies – http://www.youtube.com/watch?v=vjj8Xz0NqI4
UCS server – Server Pools and Qualification Policies – http://www.youtube.com/watch?v=KTw7M3T-VOw
UCS server – Maintenance Policies – http://www.youtube.com/watch?v=QQTlm98NgTI
UCS server – High Availability During Upgrades – http://www.youtube.com/watch?v=57HXMGn88HA
UCS server – Monitoring with BMC BPPM – http://www.youtube.com/watch?v=mdoEZf7tM5E
UCS server – Microsoft Hyper-V on UCS – http://www.youtube.com/watch?v=G3x_YOYK-Fo

86 BRKCOM-2005 Recommended Viewing
UCS I/O – Adapter Templates – http://www.youtube.com/watch?v=KpVEn3DhfOM
UCS I/O – Network Interface Virtualization – http://www.youtube.com/watch?v=njjbCEblxVc
UCS I/O – Adapter Fabric Failover – http://www.youtube.com/watch?v=tlu8RSq6T_M
UCS I/O – Extend the Network to the Virtual Machine
UCS I/O – Traffic Analysis of All Servers – http://www.youtube.com/watch?v=PHTdXy_8Zdg
UCS I/O – Ethernet Switching Modes – http://www.youtube.com/watch?v=roX8MRN66UM
UCS I/O – Fibre Channel and Switch Modes – http://www.youtube.com/watch?v=VSetsgOYYCo
UCS I/O – FC Port Channels and Trunking – http://www.youtube.com/watch?v=PpzKPguRTXc

87 BRKCOM-2005 Recommended Viewing
UCS Infrastructure – Lights-Out Management – http://www.youtube.com/watch?v=QEO1d_1vTxs
UCS Infrastructure – Easy VM-FEX Deployment – http://www.youtube.com/watch?v=0aAuj80cNvg
UCS Infrastructure – Server Power Grouping – http://www.youtube.com/watch?v=EgoFe33YoD8
UCS Infrastructure – Blade and Rack-Mount Management – http://www.youtube.com/watch?v=aOsx4YMiOho
UCS Infrastructure – Manager Platform Emulator – http://www.youtube.com/watch?v=ZNNrs2e0wvk
UCS Infrastructure – Cisco Developer Network and Sandbox – http://www.youtube.com/watch?v=Syhl6SAiwew

88 Complete Your Online Session Evaluation  Give us your feedback and you could win fabulous prizes; winners are announced daily  Receive 20 Cisco Daily Challenge points for each session evaluation you complete  Complete your session evaluation online now through the mobile app or an internet kiosk station
Maximize your Cisco Live experience with your free Cisco Live 365 account: download session PDFs, view sessions on demand, and participate in live activities throughout the year. Click the Enter Cisco Live 365 button in your Cisco Live portal to log in.


