1 © 2016 BROCADE COMMUNICATIONS SYSTEMS, INC. Shared Memory Communication – RDMA (SMC-R) Utilizing the 10GbE RoCE Express and Brocade VDX

2 Agenda
‒ Introduction: why RoCE on z Systems?
‒ RDMA, RoCE, and SMC-R basics
‒ SMC-R with RoCE on z Systems
‒ Brocade VCS Fabric technology benefits for SMC-R
‒ Examples and performance
‒ Summary

3 CPU Savings (Cost Reduction) with Response Time Improvements
SMC-R can lead to substantial CPU savings (lower costs) while simultaneously improving response times: lower costs with improved performance.

4 Why SMC-R (RoCE) on z Systems?
CPU savings and application response time improvements
‒ Significant response time improvements and CPU savings for CICS transactions using Distributed Program Link (DPL) with SMC-R vs. standard TCP/IP: up to a 48% reduction in response time and 10% CPU savings
‒ Significant overall transaction response time improvement for WebSphere Application Server (WAS) accessing DB2 in another system: a 40% reduction vs. standard TCP/IP
‒ Brocade VCS Fabric technology further enhances this performance

5 Part 1: RDMA, RoCE, SMC-R, and z Systems

6 RDMA: Remote Direct Memory Access
‒ Allows a host to read or write memory on a remote host without involving the remote host's CPU or operating system (OS)
‒ Bypasses OS layers and many communications protocol layers that are otherwise required for communication between applications
‒ Reduces software overhead, providing high-throughput, low-latency networking

7 RoCE: RDMA over Converged Ethernet
RDMA-based technology has been available in the industry for many years, primarily based on InfiniBand (IB)
‒ RDMA technology provides the capability to allow hosts to logically share memory
‒ InfiniBand requires a completely separate network ecosystem (unique hardware such as host adapters, switches, host application software, system management software/firmware, security controls, etc.); IB is common in the HPC market
RDMA technology is now available on Ethernet: RDMA over Converged Ethernet (RoCE)

8 RoCE: RDMA over Converged Ethernet (standard)
‒ RDMA protocol over Ethernet
‒ Uses a RoCE Network Interface Adapter (RNIC) and Layer 2 switches with IEEE Converged Enhanced Ethernet (CEE) capability
‒ Provides low-latency, high-bandwidth, high-throughput, low-processor-utilization data transfer between hosts
‒ Direct RoCE-port-to-RoCE-port cabling (no switch) is possible, but not recommended by IBM
‒ Switches must support the global pause frame (IEEE 802.3x); a configuration sketch follows
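Because RoCE depends on a lossless Ethernet service, link-level flow control must be active on the ports carrying RoCE traffic. As a minimal sketch, reusing the flow-control command shown later in this deck, it can be enabled on a VDX port roughly as follows (the interface number is an example; verify the syntax against your Network OS release):

   configure terminal
   interface tengigabitethernet 1/0/5
   qos flowcontrol tx on rx on
   no shutdown
   exit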

9 Shared Memory Communications – Remote Direct Memory Access (SMC-R): Definition
SMC-R is a new communication protocol aimed at providing transparent acceleration for sockets-based TCP/IP applications and middleware
‒ Remote Direct Memory Access (RDMA) technology provides low-latency, high-bandwidth, high-throughput, low-processor-utilization attachment between hosts
‒ SMC-R utilizes RDMA over Converged Ethernet (RoCE) as the physical transport layer
SMC-R is built on the following concepts:
‒ RDMA enablement of the communications fabric
‒ Partitioning a part of OS host real memory into buffers and using RDMA technology to access this memory
‒ Establishing an 'out of band' connection over which data is passed to the partner peer using RDMA writes and signaling

10 SMC-R: Shared Memory Communications over RDMA
‒ Sockets-over-RDMA communication protocol that allows existing TCP applications to transparently benefit from RoCE; requires no application changes
‒ Provides host-to-host direct memory access without the traditional TCP/IP processing overhead
‒ z/OS V2R1 includes SMC-R support (a profile sketch follows)
‒ SMC-R is used only over 10GbE RoCE Express features to a partner z/OS V2R1 system
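On z/OS, SMC-R is enabled in the TCP/IP profile by adding the SMCR parameter to the GLOBALCONFIG statement, identifying each 10GbE RoCE Express feature by its PCIe Function ID (PFID). A minimal sketch, with illustrative PFID and port values, assuming two features for redundancy:

   GLOBALCONFIG SMCR
      PFID 0018 PORTNUM 1
      PFID 0019 PORTNUM 2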

11 SMC-R Additional Benefits
Load balancing
‒ The first application's data is sent over one RoCE pair between the two hosts
‒ The second application's data is sent over the second RoCE pair between the two hosts
High availability
‒ If the connection between the first RoCE pair fails, all sessions using that pair transparently move to the second RoCE pair

12 SMC-R Additional Benefits
‒ Provides high availability and load balancing when redundant network hardware paths are available
‒ Introduces minimal administrative and operational changes
‒ Provides dynamic discovery of partner RDMA capabilities and dynamic setup of RDMA connections over RoCE

13 The Network
‒ A single Layer 2 (no IP router) 10GbE network is required
‒ RoCE provides low-latency, high-bandwidth, high-throughput, low-processor-utilization data transfer between hosts by avoiding TCP/IP functions such as IP routing
‒ Both partners must be in the same IP subnet (no IP router between them)

14 OSA Channels (Open Systems Adapter)
‒ SMC-R still requires OSA channels: the TCP connection is established over OSA
‒ SMC-R uses the TCP connection to determine eligibility for RoCE and to build the point-to-point SMC-R link; the actual data traffic is then sent over the SMC-R link
‒ The TCP session is also used for keepalives and to terminate both the RDMA and TCP connections when the session ends
‒ OSA channels can be connected to the same switches as the RNICs (verification displays are sketched below)
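To confirm that both the RoCE features and the OSA interfaces are available, z/OS operator displays of the following general form can be used (a hedged sketch: the stack name TCPIP is a placeholder, and output details vary by release):

   D PCIE                            (lists PCIe functions, including 10GbE RoCE Express features)
   D TCPIP,TCPIP,NETSTAT,DEVLINKS    (shows the status of OSA and RNIC interfaces)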

15 Recommended Configuration
‒ Minimum of two RoCE features and a minimum of two switches
‒ Once a session has switched to SMC-R, it cannot fall back to the TCP/IP OSA path
‒ If the RoCE connection fails and an alternate RoCE path exists, the active sessions on the original path transparently move to the alternate path
‒ If the RoCE connection fails and no alternate RoCE path is available, all active sessions fail; new sessions will not switch to SMC-R but will flow over the TCP/IP OSA path instead (see the interface sketch below)
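The OSD interfaces that carry the setup TCP connections are defined with INTERFACE statements, and SMC-R eligibility is controlled per interface: SMCR is the default for IPAQENET interfaces, and NOSMCR turns it off. A minimal sketch with placeholder names and addressing:

   INTERFACE OSD1
      DEFINE IPAQENET
      PORTNAME OSAPRT1
      IPADDR 192.168.10.1/24
      SMCR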

16 SMC-R Link Group
Two SMC-R links between two peers, using different RNICs, are logically grouped into an SMC-R link group, providing redundancy and load balancing.
‒ If one link fails, all active connections are automatically and dynamically moved to the other link without interruption
‒ After the failed link recovers, no connections are moved back; all new connections are set up over the recovered link, restoring load balancing

17 Application Use Cases for SMC-R
‒ Application servers such as the z/OS WebSphere Application Server communicating (via TCP-based communications) with CICS, IMS, or DB2, particularly when the application is network-intensive and transaction-oriented
‒ Transactional workloads that exchange larger messages (e.g., web services such as WAS to DB2 or CICS) will see benefit
‒ Applications that use z/OS-to-z/OS TCP-based communications via Sysplex Distributor

18 z13 SMC-R Advantages
‒ Allows concurrent sharing of a RoCE Express feature by multiple virtual servers (OS instances): up to 31 virtual servers (OS instances, LPARs, or second-level guests under z/VM) can share a single feature
‒ Support for up to 16 RoCE Express features per zCPC
‒ Enables concurrent use of both RoCE Express ports by z/OS (SMC-R)
‒ For high availability, each OS instance requires access to two unique (physical) features (an I/O definition sketch follows)
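In the I/O configuration, each virtual-server share of a RoCE Express feature is represented by a FUNCTION statement with its own virtual function (VF) number, and a physical network ID (PNETID) associates the RoCE port with the OSA port on the same network. An IOCP-style sketch only: the FIDs, PCHID, PNETID, partition names, and exact keywords are illustrative and should be taken from your HCD definitions:

   FUNCTION FID=018,VF=1,PCHID=140,PNETID=NETA,PART=((LPARA)),TYPE=ROCE
   FUNCTION FID=019,VF=2,PCHID=140,PNETID=NETA,PART=((LPARC)),TYPE=ROCE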

19 z13: 10GbE RoCE Express Sample Configuration
‒ This configuration allows redundant SMC-R connectivity among LPAR A, LPAR C, LPAR 1, LPAR 2, and LPAR 3
‒ LPAR-to-LPAR OSD connections are required to establish the SMC-R communications
‒ 1 GbE OSD connections can be used instead of 10 GbE
‒ OSD connections can flow through the same 10 GbE switches or different switches
‒ z13 exclusive: simultaneous use of both 10 GbE ports on 10GbE RoCE Express features
(Diagram: two z13 CPCs hosting z/OS V2.1 LPARs, including z/VM V6.3 with a z/OS V2.1 guest, connected through redundant Brocade VDX switches via OSA/OSD and RoCE ports. On z13, each 10GbE RoCE feature, FC 0411, can support up to 31 logical partitions; two or more features per server are recommended.)

20 Measuring CPU Usage
TCP/IP address space CPU usage in the time interval is shown in:
‒ the RMF report class for TCP/IP
‒ SMF record type 30

21 CPU Usage by Active Jobs

22 Performance Benchmarks of SMC-R at Distance
Performance summary
‒ The technology is viable even at 100 km distances with DWDM
‒ At 10 km: retains significant latency reduction and increased throughput
‒ At 100 km: large savings in latency and significant throughput benefits for larger payloads; modest savings in latency for smaller payloads
‒ CPU benefits of SMC-R for larger payloads are consistent across all distances
Use cases for SMC-R at distance
‒ TCP workloads deployed on a Parallel Sysplex spanning sites
‒ Software-based (i.e., TCP-based) replication across sites for disaster recovery, e.g., the InfoSphere Data Replication suite for z/OS
‒ File transfers across z/OS systems in different sites: FTP, Connect:Direct, SFTP, etc.
‒ Opportunity: lower CPU cost for sending/receiving data while boosting throughput and lowering latency

23 Part 2: Brocade VCS Fabric Technology and SMC-R

24 Brocade VCS Fabrics Evolve Data Centers
Continual evolution with VCS fabrics: lower OpEx, greater network utilization, faster time to application deployment
Efficient
‒ All links fully active
‒ Multipathing at all layers: Layers 1, 2, and 3
‒ IP storage-aware
Automated
‒ Automatic provisioning
‒ Zero-touch VM discovery, Layer 2/Layer 3 configuration, and mobility
‒ Self-forming trunks
Cloud-Optimized
‒ Manage many switches as a single logical device
‒ Multitenancy at scale with the Brocade VCS Virtual Fabric feature
‒ Scale out non-disruptively
‒ Orchestrated via OpenStack; DevOps support

25 Ethernet Fabrics vs. Legacy Networks
Classic hierarchical architecture (core / aggregation / access)
‒ Rigid architecture, north-south optimized
‒ Inefficient link utilization
‒ Individually managed switches
‒ VM-ignorant
‒ No network virtualization
Scale-out Ethernet fabric architecture (leaf/spine plus core)
‒ Automated provisioning
‒ All links active; Layer 1/2/3 multipathing
‒ Fabric managed as one logical switch
‒ VM-aware
‒ Native and overlay network virtualization

26 Key VDX Capabilities for SMC-R
Operational automation and efficiency
‒ Provides an automated RoCE fabric
‒ Single logical chassis for the entire fabric
‒ Scale-out fabric technology
‒ Automatic trunk formation
‒ Per-frame load balancing between switches
‒ Deep on-chip buffering
‒ Simplified automation and visibility

27 Automated RoCE Fabric with Brocade VDX
RDMA over Converged Ethernet: the automatic lossless fabric
‒ RoCE is enabled automatically on all fabric ports
‒ On the VDX's host-connected interfaces:
   Ensure the CEE map is set to its default with the global command: cee-map default
   Disable LLDP with the interface command: lldp disable
   Apply the CEE map with the interface command: cee default
‒ Configure the host's RoCE network adapter to enable DCB Priority Flow Control (DCB PFC) and set it for PFC class 3 (priority 3)
‒ Result: automatic, end-to-end lossless Ethernet throughout the VCS fabric
(Diagram: RoCE hosts attached to a VCS fabric of VDX 6740, VDX 6940, VDX 8770-4, and VDX 8770-8 switches.)
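Assembled into a single sequence, the host-facing preparation above looks roughly like this sketch (the interface number is an example; verify against your Network OS release):

   configure terminal
   cee-map default
   interface tengigabitethernet 1/0/13
   lldp disable
   cee default
   exit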

28 Advanced Flexibility
Address business-critical SLAs with a resilient, high-performance fabric
‒ Deliver predictable performance and unmatched resiliency for business-critical applications
‒ Provision network capacity with minimal intervention and virtually no learning curve
‒ Configure and manage multiple switches in the fabric as a single logical element
‒ Provide storage-class resiliency with non-disruptive failover after a path or link failure
(Diagram: NAS, iSCSI, and flash storage attached to the fabric.)

29 Auto-Configuration with Logical Chassis
Simplifies Brocade VCS fabric deployment, scalability, and management of the network
‒ Enable VCS fabric capabilities on each switch (on by default)
‒ Connect the switches; the fabric forms automatically
‒ Common configuration across all switches; Inter-Switch Link (ISL) trunks auto-form
‒ Managed as a single logical chassis
Others: configuring a LAG (for 2 members) requires executing the following commands on one switch:
   configure terminal
   interface port-channel 1
   switchport
   switchport mode trunk
   switchport trunk allowed vlan all
   qos flowcontrol tx on rx on
   mtu 9208
   no shutdown
   interface tengigabitethernet 1/0/5
   channel-group 1 mode active type standard
   no shutdown
   interface tengigabitethernet 1/0/6
   channel-group 1 mode active type standard
   no shutdown
   exit
...then repeating the same commands on the switch at the other end. Total commands: 30.
Brocade: configuring ISL Trunking (for up to 8 members) requires absolutely no configuration. Total commands: 0.
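As a hedged sketch of the underlying enablement, a Network OS switch is typically assigned to a logical-chassis fabric with one command of this general form, after which the fabric self-forms (the VCS ID and RBridge ID are examples; exact syntax varies by NOS release):

   vcs vcsid 10 rbridge-id 1 logical-chassis enable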

30 Eliminate Protocol Exotica
You don't have to be an IP expert
Traditional shared network (Layer 2+3 IP connectivity) exposes a wall of protocols: BGP, MP-BGP, ISIS-TE, RSVP-TE, IPsec, EIGRP, MSDP, IGMP, TACACS+, AAA, OSPF, IS-IS, PIM-SM, RSTP, MSTP, LLDP, RIP, DVMRP, X.509, DRR, WRED, RED, MPLS, PWE3, VPLS, GRE, MLD, SNTP, DSCP, LDP, RSVP, L2TP, SHA, IKE, DWFQ, LISP, IRDP, MS-CHAP, IPv6, WRR, WFQ, NTP, VRRP, CIDR, RADIUS, HMAC, STP, sFlow
A dedicated IP storage network built on Layer 2 Ethernet fabric connectivity needs only:
‒ Step 1: Configure addressing and VLANs
‒ Step 2: Connect switches and automatically form the fabric
‒ Step 3: Connect hosts and storage

31 Brocade VCS Logical Chassis Architecture
Simple and efficient
‒ Virtual IP management
‒ Configuration management
‒ Centralized software upgrade/downgrade and auto-provisioning
‒ Centralized monitoring and troubleshooting

32 Brocade Trunking
Frame-based, high-throughput ISL trunking using Brocade ASICs
Brocade ISL Trunking provides high link utilization and ease of use; not all 10GE ports are alike
Frame-level, hardware-based trunking at Layer 1
‒ Near 100% link utilization, versus roughly 60-70% link utilization for 802.3ad LAG groups
‒ Spill-and-fill across links in the trunk group; single flows can be split across all links
‒ Built into the Brocade ASIC
ISL trunks form automatically
‒ Once both switches are in VCS mode, multiple ISLs automatically form a trunk; no CLI entries are necessary
(Chart: with 8 links active, frame-based Layer 1 Brocade ISL Trunking sustains about 80 Gbps, while flow-based Layer 2 802.3ad link aggregation sustains about 50 Gbps; link width represents utilization.)

33 VDX 6740 & 6940 Buffering
Results of real-world performance of deep on-chip buffering
‒ The VDX 6740/6940 has twice the on-chip buffering of in-class competitive products
‒ This allows absorption of much longer bursts, resulting in greater throughput without loss
10GbE egress rate without loss
‒ Brocade VDX 6740: 8.4 Gbps
‒ Broadcom Trident-based competitor: 4.0 Gbps
(Test diagram: traffic from ports TE1/0/13 through TE1/0/16 converging on egress port TE1/0/10 of a VDX 6740.)

34 BNA for Storage Stakeholders
Extending operational ownership into IP storage
‒ Builds on the combined dashboard, customizable today
‒ Ongoing strategy: a combined usability focus via a "Policy Configuration Center" across both FOS and NOS, including extending MAPS to include NOS and leveraging dynamic group capabilities from dashboards across MAPS and configuration policies
‒ Consistent usability focus: unified storage dashboard, common troubleshooting navigation, and common configuration policy UI concepts (MAPS, Configuration Policy)

35 BNA MAPS Dashboard

36 Storage Innovation: MAPS
Simplified storage monitoring and alerting
Groups
‒ Monitor a group of similar components as one entity
‒ Use pre-defined groups (SFPs, fans, power supplies)
‒ Filter network scope to view specific port groups
Policies, rules, actions
‒ Reduce errors and laborious manual effort
‒ Apply aggressive, moderate, or conservative policy levels
‒ Base policies on multiple rules with unique actions
‒ Automate policy application across fabrics with Brocade Network Advisor integration
Reporting
‒ Monitor storage health and performance in dashboards and reports
‒ Track the status of each monitored category
‒ View out-of-range conditions and rules
‒ Compare policies to identify drift from defaults

37 Brocade VDX Switches: Data Center Fabric Building Blocks

38 Brocade VDX Switches
Complete breadth of portfolio
‒ Brocade VDX 6740 family (VDX 6740, VDX 6740T, VDX 6740T-1G): leaf; 1 GbE/10 GbE/10GBASE-T optimized with 40 GbE uplinks
‒ Brocade VDX 6940 family (VDX 6940-36Q, VDX 6940-144S): leaf/spine; 10 GbE/40 GbE optimized with 100 GbE uplinks
‒ Brocade VDX 8770 family (VDX 8770-4, VDX 8770-8): spine scalability; 10 GbE/10GBASE-T/40 GbE/100 GbE optimized modular system

39 Brocade VDX 6740 Switches
Product details (VDX 6740, VDX 6740T, VDX 6740T-1G)
Simplicity and automation
‒ Brocade VCS Fabric technology
‒ Automatic Migration of Port Profiles (AMPP)
‒ VCS Logical Chassis
‒ Auto-fabric provisioning
Advanced capabilities
‒ Full IP storage support with DCB capabilities; Auto QoS prioritizes storage traffic
‒ VCS Virtual Fabric feature supports multitenancy
‒ VCS Gateway for NSX unifies virtual and physical networks
‒ SDN-capable (OpenFlow support); IPv6 hardware-ready
Leading scalability and performance
‒ Fixed 48×1/10 GbE SFP+ (VDX 6740) or 48×1/10GBASE-T ports (VDX 6740T) plus 4×40 GbE QSFP+; option for up to 64 10 GbE ports
‒ Fixed 48×1 GbE with a 10 GbE software upgrade option (VDX 6740T-1G)
‒ 4×40 GbE QSFP+ ports; each 40 GbE port can optionally be configured as 4×10 GbE in break-out mode
‒ High-performance Layer 2/Layer 3 switching
‒ Industry-leading deep buffers with dynamic buffering
‒ Increased MACs, VLANs, and port profiles for greater scalability
‒ Single ASIC, non-blocking, cut-through architecture
‒ Up to 160 Gbps Brocade ISL Trunking improves switch capacity
‒ Efficient multilayer multipathing for reliability and elasticity

40 Brocade VDX Flex Ports
Panel layout for the Brocade VDX 6740
‒ Ports highlighted in blue are Flex Ports
‒ Flex Ports can be configured individually as Fibre Channel or Ethernet
‒ The Brocade VDX 6740 is the only converged platform with Gen 5 Fibre Channel support
(Panel diagram: 48×10 GbE ports, split between CEE-only ports and CEE/FC Flex Ports; 4×40 GbE ports; console and 1 GbE management ports.)

41 Brocade VDX 6940-36Q and 6940-144S Switches
Industry's highest-density 10/40/100 GbE compact leaf and spine switches (* 100 GbE will be enabled in a later release)
Industry-leading performance
‒ Industry's highest-density 10/40 GbE switch in a fixed form factor
‒ Line-rate 144×10 GbE in 1RU: 40 percent higher density than the closest competition
‒ Line-rate 36×40 GbE in 1RU: industry's highest 40 GbE density
‒ Line-rate 96×10 GbE and 12×40 GbE (or up to 4×100 GbE) in 2RU*
Optimized buffer and latency
‒ Low-latency, non-blocking, cut-through architecture
‒ High on-chip buffer with dynamic buffering capability
Advanced capabilities
‒ Distributed VXLAN gateway inside the VCS fabric
‒ Virtual Fabric extension (extend Layer 2 over Layer 3) capability
‒ Hardware-optimized ISSU for Layer 2, Layer 3, Fibre Channel, and FCoE protocols
‒ OpenFlow 1.3-capable
(Charts: 10 GbE and 40 GbE port-density and latency/buffer comparisons versus Cisco, Arista, and Juniper.)

42 Brocade VDX 8770 Switch
Product details (Ethernet fabrics; data center access/aggregation)
Simplicity and automation
‒ Brocade VCS Fabric technology
‒ Automatic Migration of Port Profiles (AMPP)
‒ Brocade Fabric Watch provides proactive monitoring and notification of critical switch component failures
Built to last
‒ 100 GbE connectivity
‒ Hardware-enhanced network virtualization*
‒ Cloud management via RESTful Application Programming Interfaces (APIs)*
Leading scalability and performance
‒ Supports 1 GbE/10 GbE/40 GbE/100 GbE
‒ Scales from 12 ports to over 8,000 ports per fabric
‒ Backplane scales to 4 Tbps per slot
‒ Best-in-class 3.5-microsecond any-to-any latency
‒ Efficient multipathing for reliability and elasticity
‒ Best-in-class power efficiency
* Hardware-ready; some features to be enabled post-GA.

43 VDX Mainframe Deployment Example
Use the VDX for:
‒ RoCE/SMC-R
‒ IBM DB2AA
‒ GDPS AA Qrep
‒ TS7700 Grid
‒ FICON SAN: DASD, virtual tape
(Diagram: two IBM z13 CPCs with RoCE and 10GbE OSA channels attached to VDX 6740 switches and a FICON SAN, with Brocade Network Advisor providing SAN + IP management.)

44 Performance Impact of SMC-R on Real z/OS Workloads
‒ 40% reduction in overall transaction response time for the WebSphere Application Server v8.5 Liberty profile TradeLite workload accessing z/OS DB2 in another system, measured in internal benchmarks*
(Diagram: a Linux-on-x workload client simulator (JIBE) drives WAS Liberty/TradeLite on z/OS SYSA over HTTP/REST and TCP/IP; WebSphere-to-DB2 communications to z/OS SYSB use SMC-R over RoCE via JDBC/DRDA.)
‒ Up to 50% CPU savings for FTP binary file transfers across z/OS systems when using SMC-R versus standard TCP/IP**
(Diagram: FTP client on z/OS SYSA transferring to an FTP server on z/OS SYSB using SMC-R over RoCE.)
* Based on projections and measurements completed in a controlled environment. Results may vary by customer based on individual workload, configuration, and software levels.
** Based on internal IBM benchmarks in a controlled environment using the z/OS V2R1 Communications Server FTP client and FTP server, transferring a 1.2 GB binary file using SMC-R (10GbE RoCE Express feature) vs. standard TCP/IP (10GbE OSA-Express4 feature). The actual CPU savings any user will experience may vary.

45 Performance Impact of SMC-R on Real z/OS Workloads (cont.)
‒ Up to 48% reduction in response time and up to 10% CPU savings for CICS transactions using DPL (Distributed Program Link) to invoke programs in remote CICS regions in another z/OS system via CICS IP interconnectivity (IPIC) when using SMC-R vs. standard TCP/IP*
(Diagram: CICS A on z/OS SYSA makes DPL calls over IPIC to program X in CICS B on z/OS SYSB, using SMC-R over RoCE.)
‒ WebSphere MQ for z/OS realizes up to a 200% increase in the messages per second it can deliver across z/OS systems when using SMC-R vs. standard TCP/IP**
(Diagram: WebSphere MQ on z/OS SYSA exchanges MQ messages with WebSphere MQ on z/OS SYSB using SMC-R over RoCE.)
* Based on internal IBM benchmarks using a modeled CICS workload driving a CICS transaction that performs 5 DPL (Distributed Program Link) calls to a CICS region on a remote z/OS system via CICS IP interconnectivity (IPIC), using 32K input/output containers. Response times and CPU savings were measured on the z/OS system initiating the DPL calls. The actual response times and CPU savings any user will experience will vary.
** Based on internal IBM benchmarks using a modeled WebSphere MQ for z/OS workload driving non-persistent messages across z/OS systems in a request/response pattern. The benchmarks included various data sizes and numbers of channel pairs. The actual throughput and CPU savings users will experience may vary based on the user workload and configuration.

46 Network Performance Comparison
Reduced latency, reduced CPU consumption, and improved wall-clock time
‒ Network latency reduced up to 80% for z/OS TCP/IP multi-tier OLTP workloads such as web-based claims and payment systems*
(Diagram: timeline comparing a transaction without SMC-R, traversing the TCP network, to a shorter transaction with SMC-R.)
* Based on internal IBM benchmarks of modeled z/OS TCP sockets-based workloads with request/response traffic patterns using SMC-R vs. TCP/IP. The actual throughput that any user will experience will vary.

47 SMC-R and Brocade VCS: Summary
‒ Optimized network performance (leveraging RDMA technology)
‒ Transparent to (TCP socket-based) application software
‒ More efficient, highly available SMC-R fabric
‒ Simpler to manage and configure
‒ Preserves the existing network security model
‒ Resiliency (dynamic failover to redundant hardware)
‒ Transparent to load balancers
‒ Preserves existing IP topology and network administrative and operational models

