Cisco Unified Computing System Nuts & Bolts


1 Cisco Unified Computing System Nuts & Bolts
Sean Hicks
General DataTech, L.P.
Managing Principal - Compute & Storage

2 What the heck is a UCS?
Cisco UCS is a next-generation compute platform designed to address the following concerns:
Data Center Sprawl: Points of Management, Cabling, Power & Cooling
Virtualization: Changing Admin Boundaries, Resource Bottlenecks, Software-based L2 Overhead
Reliability & Scalability: High Availability, Minimal Effort, Rapid Deployment, Stateless Hardware

3 Fabric Interconnects – G1
Cisco UCS 6120XP: 1 RU, Redundant Power, Front-to-Back Air Flow; 20 x 10 Gbps Ethernet ports plus L1, L2, Mgmt0, Mgmt1 (unused) & Console; 1 Expansion Bay
Cisco UCS 6140XP: 2 RU, Redundant Power, Front-to-Back Air Flow; 40 x 10 Gbps Ethernet ports plus L1, L2, Mgmt0, Mgmt1 (unused) & Console; 2 Expansion Bays
Expansion Modules: N10-E0080 (8 x 4 Gbps FC), N10-E0060 (6 x 8 Gbps FC), N10-E0440 (4 x 10 Gbps Ethernet + 4 x 4 Gbps FC), N10-E0600 (6 x 10 Gbps Ethernet)

4 Fabric Interconnects – G2
Cisco UCS 6248UP: 1 RU, Redundant Power, Front-to-Back Air Flow; 32 x Unified Fabric Ports plus L1, L2, Mgmt0 & Console; 1 Expansion Bay
Cisco UCS 6296UP: 2 RU, Redundant Power, Front-to-Back Air Flow; 48 x Unified Fabric Ports plus L1, L2, Mgmt0 & Console; 3 Expansion Bays
Expansion Modules: UCS E16UP (16 x Unified Fabric Ports)

5 End-Host Virtualization
In its default configuration (End-Host mode), a Fabric Interconnect appears to the network as a server with a bunch of NICs, much like a hypervisor. It is not a true Ethernet switch!
It does not participate in Spanning Tree elections, which allows rapid L2 link establishment and multiple active uplinks.
It does not learn northbound MAC addresses; it makes forwarding decisions based on learned (internal server) versus unlearned (external network) MAC addresses.
Traversing frames are forwarded or dropped based on pinning traffic to interfaces (see the sketch below).
The déjà vu check, reverse path forwarding check, inter-uplink port blocking and bcast/mcast link election remove the need for Spanning Tree between the FI and the northbound switch.
Server-to-server traffic in the same VLAN is the only locally switched traffic.
[Diagram: Ethernet switch vs. Fabric Interconnect] You see, Kyle?! Totally different!!!
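The forwarding rules above can be condensed into a short decision function. This is only a conceptual Python sketch of End-Host mode, not Cisco's implementation; the table and port names are assumptions made for illustration.

```python
# Conceptual sketch of the End-Host mode forwarding rules above (illustrative only).
# Assumptions: the MAC table holds only addresses learned from internal server ports,
# each learned MAC is pinned to exactly one LAN uplink, and one uplink is elected
# to receive broadcast/multicast traffic from the network.
BROADCAST = "ff:ff:ff:ff:ff:ff"

def forward(frame, arrival_port, mac_table, pinned_uplink, bcast_uplink, uplinks):
    src_known = frame["src"] in mac_table          # learned = internal server MAC
    dst_known = frame["dst"] in mac_table

    if arrival_port in uplinks:                    # frame arriving from the network
        if src_known:
            return "drop"                          # deja vu check: our own MAC came back in
        if frame["dst"] == BROADCAST:
            # Unlearned bcast/mcast is accepted only on the elected uplink.
            return "flood to server ports" if arrival_port == bcast_uplink else "drop"
        if dst_known:
            # Reverse path check: accept only on the uplink the destination is pinned to.
            return "forward to server port" if pinned_uplink[frame["dst"]] == arrival_port else "drop"
        return "drop"                              # unknown unicast is not flooded back out

    # Frame arriving from a server port.
    if dst_known:
        return "local switch to server port"       # same-VLAN server-to-server traffic
    return f"send out {pinned_uplink[frame['src']]}"   # everything else uses the pinned uplink
```

A frame that loops back in from the network is caught by the first two checks, which is why Spanning Tree is unnecessary between the FI and its upstream switches.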

6 Historical Need For Spanning Tree
A broadcast frame enters Switch 1. Switch 1 learns the Src MAC and floods to every port except the originator.
Switches 2 and 3 do the same, creating a loop. The broadcast storm begins.
Trogdor strikes again! And again, and again, and again…

7 Spanning Tree Behavior
The switches hold a Spanning Tree election; Switch 1 blocks its port to Switch 3.
A broadcast frame enters Switch 1, which learns the Src MAC and then floods. Switch 2 does the same. No loop, but… Trogdor strikes again!
Half of Switch 3’s cabled bandwidth is unused, and redundant link convergence is very slow.

8 EHV Bcast/Mcast Behavior
A broadcast frame enters Switch 1, which learns the Src MAC and floods. The frame then enters Switch 3, which learns the Src MAC and floods.
The FI sees the unlearned Bcast on its elected Bcast/Mcast LAN Uplink and forwards it to the Server Ports.
The FI sees the same unlearned Bcast on any other LAN Uplink and drops the frame.
All LAN Uplinks on the FI are active and utilized for pinned MAC addresses. MACs dynamically and rapidly re-pin on a LAN Uplink state change.

9 EHV Bcast/Mcast Behavior
A learned Bcast enters the FI on a Server Port; the FI forwards it out the source MAC's pinned LAN Uplink.
The broadcast frame enters Switch 1, which learns the Src MAC and floods, then enters Switch 3, which does the same.
If the Bcast gets back to the FI, it is dropped, since the Src is a learned MAC that isn't pinned to that LAN Uplink (see the trace below).
All LAN Uplinks on the FI are still active and utilized for pinned MAC addresses, and MACs still dynamically and rapidly re-pin on a LAN Uplink state change.
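As a quick, self-contained trace of just the return-path rule called out on this slide (the MAC address and uplink names are made up; the déjà vu check and bcast/mcast uplink election from the earlier sketch still apply in addition to this):

```python
# Mini-trace of the return-path check in slide 9 (illustrative only).
mac_table = {"00:25:b5:00:00:01"}                         # blade MAC learned on a server port
pinned_uplink = {"00:25:b5:00:00:01": "Eth1/2"}           # the LAN uplink it is pinned to

def accept_from_uplink(src_mac: str, arrival_uplink: str) -> bool:
    """Broadcast arriving from the network: drop it if its source is one of our
    own learned MACs and it did not arrive on that MAC's pinned uplink."""
    if src_mac in mac_table and pinned_uplink[src_mac] != arrival_uplink:
        return False
    return True

print(accept_from_uplink("00:25:b5:00:00:01", "Eth1/1"))  # False: the blade's own bcast looped back
print(accept_from_uplink("aa:bb:cc:dd:ee:ff", "Eth1/1"))  # True: passes this check (other checks still apply)
```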

10 N-Port Virtualization
In its default configuration (NPV mode), a Fabric Interconnect is also not a true Fibre Channel switch!
It does not have a Fibre Channel Domain ID. This conserves FC Domain IDs, but the FI is unable to assign FCIDs to connected N-ports.
It keeps SAN Uplinks from becoming E-ports: no ISL means no FSPF negotiation.
It acts as an "N-port Proxy" for connected N-ports.
It requires "N-Port ID Virtualization" (NPIV) to be enabled on the upstream Fibre Channel switch. Do not confuse the two! A Cisco MDS or Nexus switch can be enabled for NPV mode, too, but that setting erases the config and reboots the switch, and an additional FC switch farther upstream must then be enabled for NPIV.
[Diagram: Cisco MDS switch vs. Fabric Interconnect] You see, Kyle?! Still different!!!

11 Historical Fibre Channel
[Diagram: server N-port with WWPNA sends a FLOGI to the F-port of an FC switch with Domain ID 0x01 and receives FCID 0x010001; a Storage Array sits on the same fabric]
A server with WWPNA needs an FCID so its HBA can begin forwarding FC frames. It sends a FLOGI to the FC switch, which has FC Domain ID 0x01.
The FC switch responds with an FCID built off its FC Domain ID. The server's HBA is now logged into the fabric and can begin forwarding FC frames.
The same process happens between the FC switch and the FC storage array. It is often likened to DHCP in IP networks.
In traditional FC design, there are dual fabrics such as this one for path redundancy.
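For reference, a 24-bit FCID is conventionally split into Domain, Area and Port bytes, which is how the 0x010001 on this slide derives from Domain ID 0x01. A small sketch of that arithmetic:

```python
# FCID layout: 8-bit Domain | 8-bit Area | 8-bit Port (24 bits total).
def build_fcid(domain: int, area: int, port: int) -> int:
    return (domain << 16) | (area << 8) | port

def split_fcid(fcid: int) -> tuple:
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

assert build_fcid(0x01, 0x00, 0x01) == 0x010001   # the FCID handed to WWPNA
print(split_fcid(0x010002))                        # (1, 0, 2): another login behind Domain 0x01
```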

12 Blade Server Considerations
[Diagram: four blades (WWPNA, WWPNB, WWPNC, WWPND) all send FLOGIs toward a single switchport on the FC switch with Domain ID 0x01; Storage Array on the fabric. WTF?! LOL!!!]
The servers need FCIDs so their HBAs can begin forwarding FC frames. The problem: there is traditionally a 1:1 relationship between N-port and F-port.

13 NPV/NPIV Behavior
[Diagram: blade N-port sends a FLOGI to the FI (NPV, NP-port); the FI sends an FDISC to the upstream NPIV-enabled switch with Domain ID 0x01; FCID 0x010001 is the FI's own login, FCID 0x010002 is issued for the blade; Storage Array on the fabric]
Only one FLOGI is received on the switchport, for the FI's own WWPN.
The FI looks like an F-port to the blade's N-port, but acts as an N-port Proxy.
When a FLOGI is received by the FI from a blade, the FI kicks off an FDISC to the upstream switch.
The switch issues an FCID for the FDISC, and the FI forwards it on to the blade (see the sketch below).
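A conceptual sketch of this proxy behavior, assuming a simplified model in which the upstream NPIV switch allocates FCIDs sequentially under its own Domain ID; the class names and allocation scheme are made up for illustration and are not the MDS/NX-OS implementation.

```python
# Simplified model of the NPV/NPIV login flow on this slide (illustrative only).

class NpivSwitch:
    """Upstream FC switch with NPIV enabled: owns the Domain ID and issues FCIDs."""
    def __init__(self, domain_id: int):
        self.domain_id = domain_id
        self.next_port = 0
        self.logins = {}                      # wwpn -> fcid

    def login(self, wwpn: str) -> int:        # handles both FLOGI and FDISC
        self.next_port += 1
        fcid = (self.domain_id << 16) | self.next_port
        self.logins[wwpn] = fcid
        return fcid

class NpvFabricInterconnect:
    """FI in NPV mode: no Domain ID of its own, proxies blade logins upstream."""
    def __init__(self, wwpn: str, upstream: NpivSwitch):
        self.upstream = upstream
        self.fcid = upstream.login(wwpn)      # the single FLOGI for the FI's NP-port

    def blade_flogi(self, blade_wwpn: str) -> int:
        # The blade's FLOGI is converted into an FDISC toward the upstream switch;
        # the FCID that comes back is handed to the blade.
        return self.upstream.login(blade_wwpn)

mds = NpivSwitch(domain_id=0x01)
fi = NpvFabricInterconnect("20:00:00:25:b5:aa:00:01", mds)   # FI gets 0x010001
print(hex(fi.blade_flogi("20:00:00:25:b5:00:00:01")))        # blade gets 0x10002
```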

14 IO Modules (Fabric Extenders)
Cisco UCS 2104XP (G1): External: 4 x 10 Gbps Ethernet; Internal: 8 x 10 Gbps Ethernet (1 x 10 Gbps to each blade slot); 2:1 Oversubscription; Static Pinning
Cisco UCS 2204XP (G2): External: 4 x 10 Gbps Ethernet; Internal: 16 x 10 Gbps Ethernet (2 x 10 Gbps to each blade slot); 2:1 Oversubscription; Static Pinning or Port Channel
Cisco UCS 2208XP (G2): External: 8 x 10 Gbps Ethernet; Internal: 32 x 10 Gbps Ethernet (4 x 10 Gbps to each blade slot); 1:1 Oversubscription; Static Pinning or Port Channel
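The oversubscription figures above appear to be stated as half-width blade slots per external uplink at full chassis population; a quick check of that arithmetic (that interpretation is my assumption, not from the slide):

```python
# Oversubscription read as: 8 half-width blade slots divided by external 10 Gbps uplinks.
ioms = {
    "2104XP": {"external": 4, "internal": 8},
    "2204XP": {"external": 4, "internal": 16},
    "2208XP": {"external": 8, "internal": 32},
}
for name, ports in ioms.items():
    links_per_slot = ports["internal"] // 8            # 10 Gbps links offered to each slot
    slots_per_uplink = 8 / ports["external"]            # blades sharing one external uplink
    print(f"{name}: {links_per_slot} x 10 Gbps per slot, {slots_per_uplink:.0f}:1 oversubscription")
```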

15 What is a Fabric Extender?
Extends the presence of the Fabric Interconnect into the blade server chassis.
Works exactly like Nexus Fabric Extenders; in fact, that is what makes C-Series integration possible!
Unable to make any L2 decisions on its own; it must rely on its parent switch or FI.
Behaves exactly like a line card in a modular switch. Single point of management!

16 VIFs and VNTags
Each upstream port is a FEX port (the external ports on the IOM). Each downstream port is a Virtual Interface, or VIF (the internal ports on the IOM).
VIFs are identified at the parent switch using VNTags (Virtual Networking Tags). A VNTag is sort of like a virtual cable inside an actual cable (see the sketch below).
[Diagram: one FI port carrying VNTags 1, 2 and 3, mapping to vEth and vFC interfaces]
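One way to picture this is as a lookup keyed on the (physical port, tag) pair, which is what lets one cable carry many virtual interfaces. The mapping below is purely illustrative; the interface names are made up and the actual VNTag wire format is not shown.

```python
# Conceptual only: which virtual interface a given VNTag selects on one FI port.
vif_table = {
    ("Eth1/1", 1): "vEth688",     # a vNIC carried over this physical link
    ("Eth1/1", 2): "vEth689",     # a second vNIC on the same "actual cable"
    ("Eth1/1", 3): "vfc690",      # a vHBA riding the same link
}
print(vif_table[("Eth1/1", 2)])   # the parent switch switches on (port, VNTag), not port alone
```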

17 Fibre Channel Over Ethernet
Creates a virtual SAN fabric over physical LAN links by encapsulating FC frames inside Ethernet frames. The Ethertype is FCoE (0x8906).
Uses Priority Flow Control (PFC) to match the lossless behavior of Fibre Channel.
Uses Jumbo Frames, because SCSI data cannot be fragmented. CoS is on by default in UCS.
Requires FCoE VLAN IDs, which cannot overlap with existing VLANs. The FI requires VSANs to pin FC traffic to SAN Uplinks, and each VSAN needs a VLAN ID to reference.
Allows cabling consolidation between the FIs and the chassis. Inherits SAN fabric separation, which is why each IOM is connected to only one FI.
[Frame layout: Ethernet Header | FCoE Header | FC Header | Fibre Channel Payload (maximum 2112 bytes) | CRC | EOF | FCS]
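A heavily simplified encapsulation sketch: it only shows that a full-size FC frame rides inside an Ethernet frame with EtherType 0x8906 and therefore needs jumbo frames. The FCoE header and trailer here are placeholder byte blocks, not the exact FC-BB-5 wire layout.

```python
import struct

FCOE_ETHERTYPE = 0x8906
FC_MAX_PAYLOAD = 2112                          # SCSI data cannot be fragmented

def encapsulate(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    # Placeholder sizes; real FCoE adds version/reserved/SOF fields and EOF padding.
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(14)                    # simplified stand-in for the FCoE header
    fcoe_trailer = bytes(4)                    # simplified stand-in for EOF/reserved
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

fc_frame = bytes(24) + bytes(FC_MAX_PAYLOAD)   # FC header + maximum payload
frame = encapsulate(fc_frame, b"\x0e\xfc\x00" * 2, b"\x00\x25\xb5\x00\x00\x01")
print(len(frame), "bytes on the wire -> needs jumbo frames (well over a 1500-byte MTU)")
```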

18 UCS 5108 Blade Server Chassis
6 RU, Front-to-Back Air Flow
2 x IO Module Slots
8 x Half-width Blade Slots: maximum 8 half-width blades or 4 full-width blades; half- and full-width blades can be combined
4 x 2500 Watt Power Supplies: 2 required for full population (non-redundant), 3 for N+1 redundancy, 4 for full grid redundancy
Up to 20 chassis per FI cluster, i.e. up to 160 half-width blade servers (current supported limit, not a hard limit)
“Rolled Steel”

19 UCS B-Series Blade Servers
Sub-light Speed: UCS B22 M3, 2 x Intel Xeon E5-2400 (16 Cores Max), 192 GB Maximum, Half-width
Light Speed: UCS B200 M3, 2 x Intel Xeon E5-2600 (16 Cores Max), 768 GB Maximum, Half-width
Ridiculous Speed: UCS B230 M2, 2 x Intel Xeon E7-2800 (20 Cores Max), 512 GB Maximum, Half-width
Ludicrous Speed: UCS B420 M3, 4 x Intel Xeon E5-4600 (32 Cores Max), 1.5 TB Maximum, Full-width
Plaid: UCS B440 M2, 4 x Intel Xeon E7-4800 (40 Cores Max), 1 TB Maximum, Full-width

20 Stateless Hardware
A Service Profile is a logical representation of a server.
It contains the server's unique identifiers (UUID, MACs & WWNs), which makes them portable!
It also contains policies related to network and storage connectivity and management.
Service Profiles can be rapidly deployed from a Service Profile Template.
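To make "logical representation of a server" concrete, here is an illustrative data structure showing why the identity travels with the profile rather than with the blade. This is not the UCS Manager object model or its XML API; the field names and values are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """Illustrative only: the identity and policy that make the hardware stateless."""
    name: str
    uuid: str
    macs: list = field(default_factory=list)      # vNIC identities
    wwpns: list = field(default_factory=list)     # vHBA identities
    policies: dict = field(default_factory=dict)  # boot order, firmware, QoS, ...
    blade: str = ""                               # physical slot currently associated

    def associate(self, blade_slot: str):
        self.blade = blade_slot                   # move the identity, not the cables

esx01 = ServiceProfile("esx01", "c0ffee00-0000-4000-8000-000000000001",
                       macs=["00:25:b5:0a:00:01"], wwpns=["20:00:00:25:b5:0a:00:01"],
                       policies={"boot": "san", "bios": "virtualization"})
esx01.associate("chassis-1/blade-3")
esx01.associate("chassis-2/blade-5")              # failed blade? re-associate, same identity
```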

21 Virtual Interface Cards (VICs)
M81KR “Palo” (G1): 2 x 10 Gbps (1 to each IOM), Maximum 128 Virtual Interfaces*, Mezzanine Adaptor (PCIe x16)
VIC 1240 (G2): 4 x 10 Gbps (2 to each IOM), Maximum 256 Virtual Interfaces*, mLOM (M3 blades only)
VIC 1280 (G2): 8 x 10 Gbps (4 to each IOM), Maximum 256 Virtual Interfaces*, Mezzanine Expansion Module
*Architectural limit. The real limit is based on the number of server ports cabled between FI and IOM.

22 Single Root IO Virtualization
SR-IOV allows a PCIe device to appear as multiple devices of the same type. This is made possible by the distinction between Physical Functions and Virtual Functions:
Physical Functions (PFs) can control the device and move data in or out of it.
Virtual Functions (VFs) can only move data in or out of the device.
SR-IOV normally requires Hardware, Firmware and Software (OS) support to implement, but Cisco VICs have the intelligence to handle all of it on board, making it transparent to the OS (no support needed).
This enables multiple adaptors with different CoS/QoS markings, the "Shadow NIC" capability (aka Fabric Failover), and the VM-FEX capability, which replaces the Hypervisor soft-switch, off-loading L2 overhead to real, dedicated hardware and returning control of the access-layer switching environment to the network administrators. It does not break vMotion!
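For contrast with the VIC (which hides all of this from the OS), a standard SR-IOV NIC on Linux exposes VF creation through the kernel's sysfs interface. A minimal sketch, assuming a hypothetical SR-IOV-capable NIC enumerated as eth0, driver support, and root privileges:

```python
# Standard Linux SR-IOV, shown only to contrast with the VIC's on-board approach.
from pathlib import Path

dev = Path("/sys/class/net/eth0/device")
total = int((dev / "sriov_totalvfs").read_text())   # VFs the Physical Function supports
print(f"eth0 supports up to {total} virtual functions")

(dev / "sriov_numvfs").write_text("4")              # ask the PF driver to spawn 4 VFs
# Each VF then appears as its own PCIe device / netdev that can only move data,
# while the PF keeps control of the physical port.
```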

23 Cable Less, Scale Rapidly
2 Fabric Interconnects
1 Chassis, 8 Blades, 2 Cables
2 Chassis, 16 Blades, 4 Cables
3 Chassis, 24 Blades, 6 Cables
4 Chassis, 32 Blades, 8 Cables
5 Chassis, 40 Blades, 10 Cables
6 Chassis, 48 Blades, 12 Cables
13 Chassis, 104 Blades, 26 Cables
20 Chassis, 160 Blades, 40 Cables
Assumes a bandwidth requirement of 2 x 10 Gbps to each UCS 5108.
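The cable counts above follow directly from one link per IOM per chassis (two IOMs, one cabled to each FI) and eight half-width blades per chassis; a quick check under that same 2 x 10 Gbps-per-chassis assumption:

```python
# 2 x 10 Gbps per UCS 5108 = 1 cable from each of its 2 IOMs; 8 half-width blades per chassis.
LINKS_PER_CHASSIS = 2
BLADES_PER_CHASSIS = 8

for chassis in (1, 2, 3, 4, 5, 6, 13, 20):
    print(f"{chassis} Chassis, {chassis * BLADES_PER_CHASSIS} Blades, {chassis * LINKS_PER_CHASSIS} Cables")
```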

24 Single Point of Management
The pair of Fabric Interconnects represents a single UCS Manager domain. Every connected physical blade server chassis is actually a module in a single logical blade server chassis. Every component in this larger, logical chassis is configured, operated and monitored via the pair of Fabric Interconnects across their Mgmt0 ports. Mgmt0 ports and Cluster IP address must be in the same broadcast domain.

25 High Availability
The Fabric Interconnects negotiate high availability for the cluster IP address of the UCS Manager domain through a primary-subordinate election over their L1 and L2 ports (known as cluster links), and also through the connected chassis as a failsafe mechanism to prevent a management split brain if the cluster links are lost.
The dual-fabric, active-active design provides LAN and SAN connectivity to every blade in every chassis even if one of the Fabric Interconnects malfunctions.
HA for UCS Manager is unavailable until the first connected chassis is acknowledged.

26 Questions & Answers Contact Information: (214)

