QLogic I/O Solutions for IBM System X, Blade and PureFlex


1 QLogic I/O Solutions for IBM System X, Blade and PureFlex

2 Ingrained into the IBM server ecosystem
BladeCenter® System x™ POWER Systems™ PureSystems™ System Storage™

3 QLogic Adapter Value July 2013

4 QLogic Adapter value
Readiness for virtualized environments and cloud deployments:
- Support for VMware ESX, Microsoft Hyper-V, Linux/KVM and PowerVM
- Optimized for virtualized environments
Superior application-level I/O performance:
- Microsoft Exchange: up to 133% higher IOPS than the competition*
- Oracle OLTP: up to 106% higher IOPS than the competition**
- 200,000 IOPS per port, 1,600MB/s throughput
Power efficiency:
- Lowest power consumption of the FC adapters in the IBM Flex portfolio (QLogic StarPower)
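
As a quick sanity check of how those two headline numbers relate, here is our own arithmetic (not QLogic's), assuming the typical 8KB database I/O size cited later in this deck:

    # Our arithmetic, assuming 8KB I/Os: 200,000 IOPS per port
    # works out to the quoted ~1,600MB/s of throughput.
    iops = 200_000
    io_size_kb = 8
    throughput_mb_s = iops * io_size_kb / 1000  # KB/s -> MB/s, decimal units
    print(throughput_mb_s)  # 1600.0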

5 QLogic 8Gb HBAs: Oracle OLTP performance and Exchange workload scalability
In high-performance Oracle database applications, QLogic 2500 Series 8Gb FC Adapters deliver up to 47% higher IOPS than Emulex's LPe12000 series adapters. QLogic 2500 Series 8Gb FC adapters deliver an average of 98% better performance than Brocade's 815 adapters. Outstanding I/O.
(Charts: transactions per second, with a callout of 224% better performance.)

6 Dynamic Power Management
The smart QLogic ASIC senses the PCIe bus and uses the right number of lanes to provide maximum performance, using up to 40% less power.

7 QLogic connectivity solutions for IBM System x
Server connectivity:
- 16Gb single- and dual-port Fibre Channel adapters
- 4 and 8Gb single- and dual-port FC adapters
- 10GbE Converged Network Adapters
- 10GbE Virtual Fabric Adapters

8 16Gb Fibre Channel Technology for IBM System X

9 Introducing QLogic 16Gb HBAs for System x
QLogic 16Gb Fibre Channel adapters: the first PCIe Gen3 FC adapters in the IBM product portfolio. Ideally suited for IBM System x M4 servers, but also compatible with M3 servers.
OS support: Windows 2008 & 2012, RHEL 5 & 6, SLES 10 & 11, VMware ESX & ESXi 4.1, VMware vSphere 5 & 5.1.
A 16Gb SFP+ (P/N 00Y3345) ships with the adapter.
Product name | System x part number | QLogic part number
QLogic 16Gb FC Single-port HBA for System x | 00Y3337 | QLE2660
QLogic 16Gb FC Dual-port HBA for System x | 00Y3341 | QLE2662
Both HBAs are supported in IBM System x M4 and M3 servers.

10 QLogic 16Gb FC performance benefits
What does TRUE PCIe 3.0 mean to your customers? It means 16% better performance than PCIe 2.0 adapters: a Demartek study details the benefits of the QLogic 16Gb PCIe Gen3 adapters compared with 16Gb PCIe Gen2 adapters. Improve your price per performance and lower your power requirements.
- Twice the throughput of 8Gb FC, cutting response time in half, with more IOPS for typical 4 and 8KB I/O block database applications
- Three times the IOPS of 8Gb FC: transactional performance goes from 400K to over 1.2 million IOPS
- 40% faster than 10Gb Ethernet/FCoE
- Backward compatibility ensures investment protection for organizations that continue to invest in Fibre Channel SANs
- More physical functions, which improves PCIe slot conservation
Who benefits? Companies wanting to consolidate hardware by deploying virtualized environments; On-Line Transaction Processing (OLTP) applications such as online banking and travel reservation systems, or any other high-volume, many-user applications; and backup and restore jobs, which complete in nearly half the time they would take in 8Gb FC environments (see the sketch below).
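
A back-of-envelope illustration of the backup/restore claim. This is our own sketch using raw line rates only; it ignores protocol overhead and the storage back end, and the 50TB figure is an arbitrary example:

    # Hypothetical example: time to move a 50TB backup at raw link rate.
    def hours_to_move(tb, link_gbps):
        gigabits = tb * 1000 * 8        # decimal TB -> GB -> Gb
        return gigabits / link_gbps / 3600

    for rate_gbps in (8, 16):           # 8Gb FC vs 16Gb FC
        print(f"{rate_gbps}Gb FC: {hours_to_move(50, rate_gbps):.1f} hours")
    # 8Gb FC: 13.9 hours / 16Gb FC: 6.9 hours -- roughly half the time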

11 QLogic Architectural Advantage: Port Isolation
QLogic's architecture provides independent functionality, higher reliability, simplified manageability, predictable performance and excellent stability. Per-port functionality: each QLogic port has an independent CPU, isolated memory and an independent firmware image.
A shared architecture, with resources shared across ports, creates a higher risk of failure, irregular performance and a risk of instability.
(Diagram: Port 0 and Port 1 each with their own processor, memory and firmware behind the PCIe interface, versus a design sharing one processor, firmware and memory.)

12 Port Isolation Performance Characteristics
At first glance, the reasoning behind Emulex's design would appear to be customers running in an Active/Passive or failover configuration; assuming this were true, a customer might want better single-port performance. But considering that the purpose of a failover configuration is redundancy and reliability, the fact that both ports share memory, CPU and firmware makes this design counterintuitive: if one port fails due to an internal issue, both fail, leaving no port to fail over to. QLogic's Port Isolation ensures this does not happen.

13 QLogic VFA Product Portfolio

14 QLogic Virtual Fabric Adapters for IBM System x
QLogic 8200 VFA: a standard PCIe card, supported in IBM System x M4 servers. QLogic Embedded VFA: a mezzanine board, supported in System x x3550 and x3650 M4 servers, which doesn't use up a PCIe slot.
Two products, same functionality:
- 10GbE dual-port with vNIC capability: 4 virtual ports per physical port
- Licence upgrade with IBM Feature-on-Demand
- Full Converged Network Adapter capability: FCoE & iSCSI offload

15 QLogic Virtual Fabric Adapters for IBM System x
All of the same capabilities:
- PCIe Gen2 x8
- Supports optical and copper DAC cabling (SFPs are NOT included with the Embedded VFA)
- Multi-personality ports (10GbE, iSCSI and FCoE)
- QLogic's switch-agnostic NPAR (NIC Partitioning)
- Multi-protocol hardware offload
- Dual-port removable 10Gb SFP+
- Industry-leading 10Gbps performance
- Flexible networking: upgrade for iSCSI and FCoE functionality
Product name | System x part number | QLogic part number
QLogic Embedded VFA | 90Y6454 | n/a
QLogic Embedded VFA FOD License | 90Y5179 | n/a
QLogic 8200 VFA | 90Y4600 | QLE3262
QLogic 8200 VFA FOD License | 00Y5624 | n/a
All are supported in IBM System x M4 servers.

16 Supported Transceivers / Cables

17 QLogic VFA (8200) vs QLogic 10Gb CNA (8100)
Feature comparison (8100 vs 8200):
- PCIe Gen2 x8: 8100 No, 8200 Yes
- Full iSCSI hardware offload
- NIC Partitioning
- Congestion Notification (802.1Qau)
- VEB (Virtual Ethernet Bridge) and VEPA (Virtual Ethernet Port Aggregator) capable
- Simultaneous multi-protocol support
- Optical cable support
- Active copper DAC support
- Passive copper DAC support
- Progressive functionality (Feature on Demand)

18 QLogic PureFlex Product Line
July 2013

19 IBM PureSystems Product Portfolio
PureFlex compute nodes: x220, x240, x440, p260, p460, p24L.
Server-to-storage interoperability for the IBM PureFlex chassis:
Adapter: IBM Flex System FC3172 2-port 8Gb FC Adapter (69Y1938)
Network: IBM Flex System FC3171 8Gb SAN Switch (69Y1930); IBM Flex System FC3171 8Gb Pass-thru (69Y1934); IBM Flex System Fabric 10GbE Converged Scalable Switch, CN4093 (00D5823)

20 Choices when deploying a new Flex chassis
Connecting to an existing Fibre Channel SAN: use the Pass-thru (transparent mode)
- Quick to deploy, configured in minutes
- Zero interoperability issues with external switches
- Automatic load balancing and failover
- No reconfiguration needed to the existing SAN
Connecting directly to Fibre Channel storage: use the 8Gb SAN Switch
- Fully featured, with no additional licences to buy
- Virtualization-ready with enhanced NPIV support
- Autosensing 2Gb, 4Gb and 8Gb/s operation
Management: the QLogic QuickTools management suite runs from within Flex Systems Manager

21 CN4093 - Flex convergence with QLogic
IBM Flex System Fabric 10GbE Converged Scalable Switch (CN4093): the QLogic module provides Fibre Channel Gateway functionality, similar to the QLogic Virtual Fabric Extension Module in BladeCenter. It allows a Flex chassis to connect directly to a Fibre Channel SAN or storage, with Omniports configurable as 10GbE or Fibre Channel.
Benefits: converged networking in the Flex chassis, with no requirement for an expensive Top-of-Rack switch.

22 IBM PureFlex - reasons to specify QLogic products
Complete end-to-end 8Gb Fibre Channel solution: cost effective, the default choice for today's customer requirements. The only Fibre Channel adapter supporting all compute nodes: x220, x240, x440, p24L, p260, p460. Chassis connectivity options for all deployments: connecting to Fibre Channel storage, or connecting to a Fibre Channel SAN. Proven technology from IBM BladeCenter and System x: the rack/tower adapter of choice for IBM customers today. QLogic Fibre Channel adapters are the industry market leader at 55% share, with more than 13 million adapter ports shipped.
Flex Enterprise Chassis support | System x part number | POWER Systems feature code
IBM Flex System FC3171 8Gb SAN Switch | 69Y1930 | 3595 (default)
IBM Flex System FC3171 8Gb Pass-thru | 69Y1934 | 3591
Both are supported in IBM Flex Systems™, IBM PureFlex™ and IBM PureApplication™.
Flex compute node support | System x part number | POWER Systems feature code
IBM Flex System FC3172 2-port 8Gb FC Adapter | 69Y1938 | 1764
Supported in the x220, x240, x440, p24L, p260 and p460 compute nodes.

23 IBM SmartCloud for PureFlex
Pre-configured: Express, Standard, Enterprise. Intel and Power compute nodes; Storwize V7000 storage array; QLogic FC adapters and switch. Virtualization support: VMware, KVM, Hyper-V.
SmartCloud capabilities: create images (simplify the storage of images); deploy VMs (reduce deployment time); operate a cloud.

24 QLogic BladeCenter Product Line
July 2013

25 IBM - System X Product Portfolio
Server platforms: BladeCenter S, E, HT and H; HS, HX5, JS and PS blades.
Server adapters (Mezz or stand-up):
- 8Gb FC + 1GbE Combo HBA (CFFh): 44X1940
- FCoE 10Gb CNA: 42C1803
- 8Gb HBA (CIOv): 44X1945
- 4Gb HBA: 46M6065
- QLogic 10GbE VFA: 00Y3332
Network modules:
- 20-port 8Gb FC SAN switch module: 44X…
- 4/8Gb SAN Switch: 88Y6406
- 4/8Gb FC IPM: 88Y6410
- 8Gb FC IPM: 44X…
- Virtual Fabric Extension Module (FCoE): 46M6172

26 QLogic VFA for BladeCenter
New product introduction: announced June 25th, GA July 26th. Three part numbers:
Product | Part # | Description
QLogic 10Gb VFA | 00Y3332 | 10GbE NIC with QLogic NIC partitioning (NPAR) and dynamic QoS settings
QLogic 10Gb VF Advanced FoD Upgrade | 00Y5622 | Feature-on-Demand upgrade licence
QLogic 10Gb VF Advanced CNA | 00Y5618 | Full Converged Network Adapter functionality
Two opportunities for I/O consolidation: combine multiple 1GbE connections into fewer 10GbE connections, and converge data and storage networking.

27 NPAR (NIC Partitioning)

28 NPAR Theory of Operation – Partitioning
NPAR provides multiple Ethernet interfaces per physical 10Gb Ethernet port. This is done by partitioning the port's PCIe interface into four independent Ethernet functions. Each NPAR function is presented as a unique Ethernet interface to the server and the OS, with its own unique MAC address. Each function has its own instance of a device driver. NPAR supports functions with NIC, FCoE and iSCSI personalities.
(Diagram: the PCIe configuration bus exposes physical functions PF0-PF7, the even-numbered functions on physical port 0 and the odd-numbered on physical port 1; the personality of each NIC partition can be changed between NIC, iSCSI, FCoE and Disabled.)
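
On Linux, each NPAR function appears as its own PCIe function with its own network device, which you can confirm without any vendor tooling. A minimal sketch of ours (not a QLogic utility), assuming the standard sysfs layout and filtering on QLogic's PCI vendor ID 0x1077:

    # Walk sysfs and list QLogic PCIe functions plus their netdevs.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    for dev in sorted(os.listdir(PCI_ROOT)):
        path = os.path.join(PCI_ROOT, dev)
        try:
            with open(os.path.join(path, "vendor")) as f:
                vendor = f.read().strip()
        except OSError:
            continue
        if vendor != "0x1077":          # QLogic's PCI vendor ID
            continue
        # dev looks like 0000:41:00.3 -- the ".3" is the PCIe function number
        function = dev.rsplit(".", 1)[-1]
        netdir = os.path.join(path, "net")
        netdevs = os.listdir(netdir) if os.path.isdir(netdir) else []
        print(f"{dev}  PF{function}  netdevs={netdevs}")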

29 NPAR: eSwitch
NPAR implements an eSwitch which is similar in function and capability to the hypervisor's vSwitch: VLAN aware, with MAC address lookup. The eSwitch works in conjunction with the hypervisor's vSwitch; the vSwitches cascade into the NPAR eSwitches through the NPAR NIC functions. Each eSwitch is associated with a single physical port: all NIC functions associated with a single physical port are switched in the same eSwitch, and the physical port is the uplink for its associated eSwitch. Switch statistics are kept within the card. NPAR is external-switch agnostic: it does not require any switch-specific features in the external switch to work correctly.
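
To make the eSwitch behavior concrete, here is an illustrative sketch (our own names and logic, not QLogic firmware) of the VLAN-aware MAC learning and forwarding it performs for the NIC functions sharing one physical port:

    class ESwitch:
        UPLINK = "uplink"  # the physical port is the uplink

        def __init__(self, ports):
            self.ports = set(ports) | {self.UPLINK}
            self.fdb = {}  # forwarding database: (vlan, mac) -> port

        def learn(self, vlan, src_mac, in_port):
            self.fdb[(vlan, src_mac)] = in_port

        def forward(self, vlan, src_mac, dst_mac, in_port):
            self.learn(vlan, src_mac, in_port)
            out = self.fdb.get((vlan, dst_mac))
            if out is not None and out != in_port:
                return [out]        # known unicast: switched locally on the card
            # unknown destination: flood to the other ports and the uplink
            return [p for p in self.ports if p != in_port]

    sw = ESwitch(ports=["PF0", "PF2", "PF4", "PF6"])
    sw.learn(10, "aa:bb:cc:00:00:02", "PF2")
    print(sw.forward(10, "aa:bb:cc:00:00:00", "aa:bb:cc:00:00:02", "PF0"))  # ['PF2']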

30 NIC Traffic Flow within the Host
vSwitch to eSwitch: NIC traffic flow within the host.
(Diagram: VM vNICs connect to hypervisor vSwitches; each vSwitch cascades into a NIC function (PF 0, 2, 4 or 6); the functions are switched by the eSwitch on NIC port 0, which transmits and receives through the PHY to the external bridge.)

31 Not all NIC Partitioning implementations are created equal!
The competition can NOT run FCoE and iSCSI at the same time, cannot turn NIC partitioning off, and requires a firmware update and reboot to add iSCSI or FCoE.
NPAR, or NIC Partitioning, lets you divide a single 10GbE adapter into multiple, independent partitions. Each of these partitions can support concurrent networking and storage protocols and appears as an independent interface to the host OS. So, instead of deploying multiple 1GbE NICs in a single server and dedicating each to a particular task, you can consolidate onto a single 10GbE port and provide NPAR in either native or virtualized OS environments.

32 QLogic's NPAR = True Flexibility
QLogic's NPAR CAN run FCoE and iSCSI concurrently. NIC partitioning can be fully disabled. A FoD key is all that is needed to add FCoE and iSCSI capabilities, and individual NPAR functions (NIC, FCoE and iSCSI) can be disabled if desired.
(Diagram: example per-port partition layouts, such as NIC / NIC+iSCSI / NIC+FCoE / Disabled.)

33 NPAR QoS Configuration
Minimum bandwidth setting: a guaranteed amount of bandwidth available through the NIC function, specified as a percentage of the NIC's portion of the physical port's bandwidth. The sum of all the minimum settings for the NIC functions of a physical port must be less than or equal to 100%.
Maximum bandwidth setting: the maximum amount of bandwidth a NIC function is allowed to utilize, also specified as a percentage of the NIC's portion of the physical port's bandwidth. The sum of all the maximum settings for the NIC functions of a physical port may exceed 100%.
QoS settings are tunable on the fly: changes can be implemented seamlessly without a reboot or port reset.
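
A minimal sketch of those rules as a configuration check. This is a hypothetical helper of ours, not a QLogic API:

    def validate_qos(partitions):
        """partitions: list of (name, min_pct, max_pct) for one physical port."""
        total_min = sum(mn for _, mn, _ in partitions)
        if total_min > 100:
            raise ValueError(f"minimum guarantees total {total_min}% (> 100%)")
        for name, mn, mx in partitions:
            if mx < mn:
                raise ValueError(f"{name}: max {mx}% below min {mn}%")
        return True

    # Sums of maximums above 100% are fine -- that is oversubscription.
    validate_qos([("NIC", 20, 100), ("iSCSI", 30, 100), ("FCoE", 40, 100)])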

34 Oversubscription
Oversubscription allows the total maximum bandwidth settings of the NIC functions of a physical port to exceed the port's actual bandwidth. Each NIC function can claim up to 100% of a physical port's bandwidth if no other NIC function is using it. This allows unused bandwidth to be dynamically shifted to where it is needed: oversubscription prevents bandwidth waste.
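
A sketch of the effect, using illustrative scheduling logic only (not QLogic's algorithm): each partition first receives its guaranteed minimum, then idle capacity shifts to partitions that still have demand, capped at their maximums:

    def allocate(port_gbps, partitions):
        """partitions: list of dicts with min_pct, max_pct, demand_gbps."""
        alloc = [min(p["demand_gbps"], port_gbps * p["min_pct"] / 100)
                 for p in partitions]
        spare = port_gbps - sum(alloc)
        for i, p in enumerate(partitions):
            cap = port_gbps * p["max_pct"] / 100
            extra = min(spare, p["demand_gbps"] - alloc[i], cap - alloc[i])
            if extra > 0:
                alloc[i] += extra
                spare -= extra
        return alloc

    # One busy partition can claim the whole 10Gb port while the other is idle.
    print(allocate(10, [{"min_pct": 25, "max_pct": 100, "demand_gbps": 10},
                        {"min_pct": 25, "max_pct": 100, "demand_gbps": 0}]))
    # -> [10.0, 0]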

35 QLogic vs the Competition
Key features compared, The Competition vs the QLogic 8200:
- Divide a physical port into multiple partitions (NIC Partitioning, NPAR)
- Bandwidth guarantee per partition
- Oversubscription capabilities (X for the competition)
- Dynamic QoS configuration (reboot required for the competition; the 8200 changes QoS on the fly)
- Concurrent FCoE & iSCSI traffic
- Operating system integrated tools
- Offload of VM-to-VM traffic (eSwitch)
- Can disable partitioned ports

36 QLogic Tools

37 QLogic Adapter Hardware
Adapter management: QConvergeConsole CLI; pre-boot utilities (Fast!UTIL); QConvergeConsole GUI (QCC); VMware plug-in.
All QLogic adapters can be managed using multiple tools. All adapters have some configuration at pre-boot, with Fast!UTIL used for Fibre Channel and Converged Network Adapter configuration before the OS loads on the server. The QConvergeConsole CLI is available for installation on the server with the QLogic adapter, allowing server administrators to use a text user interface to configure and manage the adapters, or to write scripts for easy configuration. The QConvergeConsole GUI supports all QLogic adapters, giving a network administrator remote access to adapter management. There are multiple modules in the FlexCert programs on the use and operation of these tools.

38 QCC GUI: Simplified Management
Single pane of glass: QConvergeConsole is a unified, web-based console (GUI and CLI) for Fibre Channel, Ethernet and Converged Network adapters, giving customers the freedom to manage all the QLogic adapters in the data center with one tool, across all protocols and operating systems. A QLogic QConvergeConsole plug-in is available for VMware vCenter, and third-party tools can integrate through standards-based APIs for visibility across data center management. Native OS tools can still be used for networking tasks such as VLAN configuration and teaming, which can be accomplished either through QConvergeConsole or through the native tools, minimizing IT training and deployment costs.
Role-based authentication provides separate logins and visibility for LAN and SAN administrators, eliminating the need to converge your organization as you converge the network.
Common driver stack: a single driver, backward compatible with 8Gb and 4Gb FC adapters, across Windows, Linux, ESX, Solaris and XenServer. Diskless boot is supported via BIOS, UEFI and FCode.
Higher uptime: extended Hardware Assisted Firmware Tracing (eHAFT) enables faster resolution.
A single management application that works with QLogic's portfolio of iSCSI, Fibre Channel, Converged Network and Intelligent Ethernet adapters lets end users manage their current QLogic adapters as well as the new 2600 Series. The 2600 Series also supports standards-based APIs, so end users have the flexibility to use third-party management applications or those provided by the operating system, such as VMware's vCenter plug-in.

39 QCC CLI Installation & Overview
QCC CLI: menu driven. The QCC CLI first scans the server for QLogic adapters; once they are found, the Main Menu appears, and the administrator selects numbered options to complete the configuration. This is the menu from a Converged Network Adapter:

    Scanning for QLogic adapters, please wait...
    QConvergeConsole CLI - Version (Build 32)
    Main Menu
    1: Adapter Information
    2: Adapter Configuration
    3: Adapter Updates
    4: Adapter Diagnostics
    5: Adapter Statistics
    6: NIC Partitioning (NPAR) Information
    7: NIC Partitioning (NPAR) Configuration
    8: Refresh
    9: Help
    10: Exit
    Please Enter Selection:

40 QCC CLI Installation & Overview
QCC CLI: scripting. Using QCC CLI commands with options, a script can be created. The most useful feature of the QCC CLI is the ability to script configurations, or commands that retrieve information from the adapters. The QCC CLI uses the qaucli command with options; the commands and options may be written into a file and made executable, which is extremely useful when multiple adapters must be configured for redundant fabrics.
Example script for a CNA (8000 Series), configuring Port 2 on our QLE adapter: enable Wake on LAN on the Port 2 NIC, set the Port 2 FC parameters to factory default, then set the Port 2 FC frame size to 1024:

    qaucli -pr nic -n 1 Port_Wake_On_LAN_Option 1
    qaucli -n 1 default
    qaucli -n 1 FR 1024

41 QLogic QConvergeConsole - vCenter Plug-in
QLogic adapter management integrated with VMware vCenter:
- Dynamic bandwidth provisioning
- Bandwidth allocation charts help map the fabric to business logic
- Color coding simplifies determining the health of a component
- Network and storage maps connect the fabric to the cloud
- Built-in diagnostics and statistics
- Online firmware updates at the click of a button

42 Additional Information

43 Reference material
Videos:
- Enabling NPAR on QLogic 10GbE
- Configuring NPAR under Windows 2008
- NIC teaming: 3200/8200 teaming
- QLogic CNA featuring multi-protocol VMware vMotion

44 QLogic Proof-of-Concept/Demo facilities
QLogic Solutions Lab, Minnesota: customer proof-of-concept and demo facilities, with remote access available.
Additional IBM equipment recently deployed:
- IBM PureFlex: Flex Enterprise Chassis; FC3171 8Gb FC SAN switch; 10GbE switch; x86 and Power compute nodes; Storwize V7000; Flex System Manager node
- Additional IBM servers: 3 x System x M4, 2 x Power Systems 720
To arrange access

45 end

