1
High-Performance Routing (HPR)
2
Dave McCowan, Software Engineer, dmccowan@cisco.com
3
Agenda
Why Enterprises Are Using APPN
IBM Networking Models
Role/Benefits of APPN in Each Model
Customer Updates—ISR to HPR
Summary
4
What Is HPR?
Architectural enhancements to APPN that:
Improve performance (faster intermediate node routing)
Improve reliability (nondisruptive session switch)
While providing:
Functional equivalence to APPN
Interoperability with existing APPN nodes
5
What Can HPR Do for My Network?
Better availability: no outages for planned or unplanned network or intermediate node failures
Better intermediate node performance: improved performance, reduced storage requirements, reduced processing requirements
Better congestion control: end-to-end congestion control, adapts to fully utilize links, rate-based algorithm
Full class of service support: prioritization of data, controlled data path selection
Selective end-to-end retransmission
6
HPR Highlights
Establishes a transport pipe
Uses CoS for route calculation
Runs on existing hardware
Transports data at high speeds
Reroutes nondisruptively
Diagram: SNA sessions carried between end users over an RTP pipe
7
HPR Components
Data-link control: standard DLCs, e.g. Frame Relay, LAN
Automatic Network Routing (ANR): no session awareness, improves routing speed, traffic prioritization
Rapid Transport Protocol (RTP): end-to-end error recovery, selective retransmission, nondisruptive rerouting, adaptive rate-based flow control
Diagram: RTP pipe between end users, carried over ANR routers and data links
8
APPN Routing: ISR vs. HPR
ISR intermediate node: transmission priority, forward packet, flow control, segmentation, error recovery, LFSID label swap
HPR RTP (endpoints): congestion control, segmentation, error recovery
HPR ANR (intermediate node): transmission priority, forward packet
9
HPR Data Link Controls
Local-area networks: Token Ring, Ethernet, FDDI
Wide-area networks: Frame Relay, PPP, SMDS, QLLC, SDLC, RSRB, DLSw+
ATM: RFC 1483, LAN Emulation
Channel
10
ANR Transport Characteristics
Connection-oriented: in-order delivery, same route, route setup, reservation, low-overhead addressing, corrects errors, congestion control
Connectionless: out-of-order delivery, different routes, no route setup, no reservation, high-overhead addressing, no error correction, no congestion control
11
RTP Tower: Rapid Transport Protocol (RTP)
Connections that carry session traffic
Protocols executed at the "edge" of the HPR network
Sessions with the same CoS are multiplexed on one connection
Nondisruptive path switch for a failed RTP connection
APPN/HPR boundary function
12
HPR Boundary Function “Translates” ISR traffic to/from HPR traffic
Diagram: CS/2 and VTAM 4.3 connected through network nodes NN A–NN H across an HPR subnet
Seamless migration from ISR to HPR
Continued support for current APPN features: transmission priority, DLUS/DLUR, connection network, network management, end node
HPR features only active inside the HPR subnet
13
HPR Packet Types Between HPR Nodes Three Types of Packets May Flow:
XID3 packet: XID3 I-frame
FID2 PIU packet: FID2 TH, RH, RU
Network Layer Packet (NLP): NHDR, THDR, DATA
14
HPR Capabilities Vector (CV61)
HPR XID3 packets carry the X'61' HPR Capabilities Vector (CV61) in the XID3 I-frame
Its presence indicates the sender is HPR capable
Error recovery mode
Tower support indicators
ANR label information (NCE ID)
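As a rough illustration of CV61 detection, the sketch below scans the control vector area of a received XID3 for key X'61'. The key-length-value layout (1-byte key, 1-byte length covering only the data that follows), the function name, and the sample bytes are assumptions for illustration, not the exact XID3 encoding.

```python
def find_hpr_capabilities_cv(cv_bytes: bytes):
    """Scan XID3 control vectors for the X'61' HPR Capabilities Vector.

    Assumes a key-length-value layout: 1-byte key, then a 1-byte length of
    the data that follows.  Returns the CV61 data if present, else None.
    """
    i = 0
    while i + 2 <= len(cv_bytes):
        key, length = cv_bytes[i], cv_bytes[i + 1]
        data = cv_bytes[i + 2 : i + 2 + length]
        if key == 0x61:
            return data            # presence alone marks the sender as HPR capable
        i += 2 + length
    return None


# Hypothetical CV area: one unrelated vector followed by an (empty) CV61.
cvs = bytes([0x22, 0x02, 0xAA, 0xBB,   # some other control vector
             0x61, 0x00])              # X'61' HPR Capabilities Vector
print(find_hpr_capabilities_cv(cvs) is not None)   # True: partner is HPR capable
```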
15
HPR FID2 PIU Packet
FID2 TH, RH, RU; the first four bits of the TH are B'0010'
Carries SSCP-PU, SSCP-LU, and LU-LU sessions for dependent LUs
CP-CP sessions
Route setup messages
16
Route Setup Request GDS X’12CE’
Layout: FID2 TH, RH, RU, X'10', Route Setup Request GDS X'12CE'
X'12CE': destination info and control vectors
Control vectors: RSCV (X'2B'), FQPCID (X'60'), CoS/TP (X'2C'), Forward Route Information (X'80')
17
HPR Network Layer Packet (NLP)
NHDR, THDR, DATA
The first four bits of the header are B'110x'
Network layer header (NHDR): ANR routing information
RTP transport header (THDR): RTP information
The data portion includes the new FID5 PIU for independent LU-LU sessions
Packets flow interleaved with FID2 packets
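Because NLPs are interleaved with FID2 PIUs on the same links, a receiver must separate them by the leading header bits: B'0010' marks a FID2 TH and B'110x' marks an NLP NHDR. A minimal sketch of that check (the function name and return strings are illustrative):

```python
def classify_hpr_frame(first_byte: int) -> str:
    """Distinguish interleaved FID2 PIUs and NLPs by the leading header bits."""
    top4 = first_byte >> 4
    if top4 == 0b0010:
        return "FID2 PIU"        # FID2 transmission header
    if top4 >> 1 == 0b110:       # B'110x': network layer header
        return "NLP"
    return "other"

print(classify_hpr_frame(0x2E))  # FID2 PIU
print(classify_hpr_frame(0xC5))  # NLP (leading bits B'1100')
```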
18
Network Layer Header (NHDR)
Layout: B'1100', TP (transmission priority), …, ANR routing field ending with X'FF', then THDR and DATA
The ANR routing field contains consecutive labels for each TG in the session path
Labels are 1–8 bytes, assigned in the "outbound direction" and unique to the assigning node
Labels are removed from the ANR field as they are used
The last label is a Network Connection Endpoint (NCE), identifying the component that receives the message in the final node
19
Network Connection Endpoint (NCE)
The last label in the ANR routing field identifies which component receives the message
Types of components identified:
CP NCEs—exchanged with XID3
LU NCEs—returned on LOCATE
Boundary function NCE—carried on route setup
Route setup NCE—exchanged with XID3
20
ANR Routing Example
Forward direction (NN C → NN F): the packet leaves NN C with ANR (92, 73, 80); NN D uses and strips 92, forwarding ANR (73, 80); NN E uses and strips 73, forwarding ANR (80); at NN F the remaining label 80 is the NCE
Reverse direction (NN F → NN C): ANR (75, 96, 80), then ANR (96, 80), then ANR (80), with 80 again the NCE at NN C
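The same example can be restated as a small forwarding sketch: each node reads the first label, strips it, and uses it to pick the outbound TG; when only one label remains it is the NCE and the packet is delivered locally. The label-to-link table below is taken from the figure; the data structures themselves are illustrative.

```python
# Illustrative ANR forwarding using the forward-direction labels above.
links = {
    "NN D": {92: "NN E"},   # label 92 selects NN D's TG toward NN E
    "NN E": {73: "NN F"},   # label 73 selects NN E's TG toward NN F
}

def forward(node: str, anr_labels: list) -> None:
    label, rest = anr_labels[0], anr_labels[1:]
    if not rest:                              # last label is the NCE
        print(f"{node}: deliver to NCE {label}")
        return
    next_hop = links[node][label]             # label picks the outbound TG
    print(f"{node}: strip {label}, forward ANR {rest} to {next_hop}")
    forward(next_hop, rest)

forward("NN D", [92, 73, 80])
# NN D: strip 92, forward ANR [73, 80] to NN E
# NN E: strip 73, forward ANR [80] to NN F
# NN F: deliver to NCE 80
```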
21
LU-LU Session Setup Summary
Diagram: CS/2 and VTAM 4.3 connected through network nodes NN A–NN H
Sequence: BIND; Route Setup (REQ) forwarded hop by hop; Route Setup (Reply) returned; RTP Connection Setup + BIND; BIND delivered; + BIND Rsp; RTP Connection Setup Reply + BIND Rsp returned
22
Route Setup—Detail
Topology: NN C to NN D over TG1, NN D to NN E over TG2, with ANR labels 83 and 84 in the forward direction and 86 and 95 in the reverse direction
Route Setup Request from NN C to NN D: Destination Hop Index (2), RSCV (current hop 1: TG1 to NND, TG2 to NNE), CoS/TPF, FQPCID, Forward ANR (83)
Route Setup Request from NN D to NN E: Destination Hop Index (2), RSCV (current hop 2: TG1 to NND, TG2 to NNE), CoS/TPF, FQPCID, Forward ANR (83, 84)
23
Route Setup—Detail (Cont.)
Topology: NCE 77 at NN C, NCE 99 at NN E; labels 83/86 on TG1, 84/95 on TG2
Route Setup Reply from NN E to NN D: Destination Hop Index (2), CoS/TPF, FQPCID, Forward ANR (83, 84, 99), Reverse ANR (95), Reverse RSCV (NNE to TG2 to NND)
Route Setup Reply from NN D to NN C: Destination Hop Index (2), CoS/TPF, FQPCID, Forward ANR (83, 84, 99), Reverse ANR (95, 86), Reverse RSCV (NNE to TG2 to NND to TG1 to NNC)
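Reading the two route setup slides together, a sketch of how the forward and reverse ANR strings accumulate hop by hop (label values come from the figures; the dictionary layout is illustrative and not the X'12CE' wire format):

```python
# Illustrative accumulation of ANR labels during route setup
# for the NN C -> NN D -> NN E example above.

request = {"forward_anr": []}
request["forward_anr"].append(83)     # NN C sends the request to NN D over TG1
request["forward_anr"].append(84)     # NN D forwards it to NN E over TG2

# The destination (NN E) builds the reply: the forward string ends with its
# NCE, and the reverse string is filled in on the way back.
reply = {
    "forward_anr": request["forward_anr"] + [99],   # NCE 99 at NN E
    "reverse_anr": [95],                            # reverse label for TG2
}
reply["reverse_anr"].append(86)       # NN D adds the reverse label for TG1
# NN C appends its own NCE (77) when it later builds the RTP connection setup.

print(reply)   # {'forward_anr': [83, 84, 99], 'reverse_anr': [95, 86]}
```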
24
RTP Connection Setup and Reply (Including BIND and + Response to BIND)
Topology: NCE 77 at NN C, NCE 99 at NN E; labels 83/86 on TG1, 84/95 on TG2
RTP Connection Setup carries: Connection Setup, Reverse ANR (95, 86, 77), Reverse RSCV (NNE-TG2-NND-TG1-NNC), FID5 header (session address), BIND
The RTP Connection Setup Reply returns with the + response to the BIND
25
Existing RTP Connection
Diagram: NN C, NN D, NN E with an existing RTP connection between NCE 77 and NCE 99; the BIND and + BIND Rsp flow over the existing RTP connection
26
Reliable RTP Transport
The THDR contains a retry indicator: the sender will retransmit lost packets
The THDR contains a status-requested indicator: it requests acknowledgment of data successfully received
A status segment from the receiver identifies the packets received; the sender releases its buffers once they are acknowledged
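A rough sender-side sketch of that buffer handling: sent packets are held until a status segment acknowledges them and are retransmitted if the status shows they are missing. The names and the status format are illustrative, not the real THDR encoding.

```python
# Illustrative RTP sender buffer handling (not the real THDR/status encoding).
sent_buffers = {}                       # BSN -> data held until acknowledged

def send(bsn: int, data: bytes) -> None:
    sent_buffers[bsn] = data            # keep a copy in case retransmission is needed

def on_status(acknowledged, missing) -> None:
    for bsn in acknowledged:
        sent_buffers.pop(bsn, None)     # receiver has it: release the buffer
    for bsn in missing:
        if bsn in sent_buffers:
            print(f"retransmitting packet at BSN {bsn}")

send(1, b"ABC")
send(5, b"DE")
send(8, b"F")
on_status(acknowledged={1, 5}, missing={8})   # retransmitting packet at BSN 8
```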
27
Status Example
MSG 1 sent with status requested; receiver replies Status—Got MSG 1
MSG 2 and MSG 3 sent with no status requested; MSG 4 sent with status requested; receiver replies Status—Got MSG 3, 4
MSG 2 is then retransmitted with status requested; receiver replies Status—Got MSG 2
28
Byte Sequence Numbers: What Is the Next BSN?
Messages from RTP A to RTP B:
THDR (SOM, EOM, BSN 1, Data ABC): 3 bytes data + 1 byte EOM
THDR (SOM, EOM, BSN 5, Data DE): 2 bytes data + 1 byte EOM
THDR (SOM, no EOM, BSN 8, Data F): 1 byte data + 0 bytes EOM
THDR (no SOM, no EOM, BSN 9, Data G): 1 byte data + 0 bytes EOM
THDR (no SOM, EOM, BSN 10, Data H): 1 byte data + 1 byte EOM
THDR (SOM, EOM, BSN 12, Data null): 0 bytes data + 1 byte EOM
THDR (SOM, EOM, BSN 13, Data IJK): 3 bytes data + 1 byte EOM
What is the next BSN? 17
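The answer can be checked with one line of arithmetic per message: each THDR advances the byte sequence number by its data bytes plus one if the EOM indicator is set. A small sketch over the messages above:

```python
# Each THDR advances the BSN by (data bytes + 1 if EOM is set).
messages = [   # (data_bytes, eom), in the order shown above
    (3, True),   # BSN 1:  ABC
    (2, True),   # BSN 5:  DE
    (1, False),  # BSN 8:  F
    (1, False),  # BSN 9:  G
    (1, True),   # BSN 10: H
    (0, True),   # BSN 12: null data, EOM only
    (3, True),   # BSN 13: IJK
]

bsn = 1
for data_bytes, eom in messages:
    bsn += data_bytes + (1 if eom else 0)

print(bsn)   # 17, the next BSN
```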
29
Requesting Retransmission
THDR (SOM, EOM, BSN 1, Data ABC): 1 + 1 (EOM) + 3 (data) = 5
THDR (SOM, EOM, BSN 5, Data DE): 5 + 1 (EOM) + 2 (data) = 8
THDR (no SOM, no EOM, BSN 9, Data G): expected BSN 8—oops! A gap in the BSN indicates a lost message
The receiver sends a status message (X'0E', GAPDETR, received BSN, ABSP) asking the sender to retransmit the one missing message
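On the receiver side the gap check is a comparison of the arriving BSN against the expected one; on a mismatch the receiver builds a status message naming the BSN it actually received so the sender can retransmit the missing piece. A sketch that mirrors the X'0E'/GAPDETR idea without reproducing the wire format:

```python
expected_bsn = 1

def receive(bsn: int, data_bytes: int, eom: bool):
    """Accept an in-sequence THDR or report a BSN gap (illustrative only)."""
    global expected_bsn
    if bsn != expected_bsn:
        # Gap detected: ask the sender to retransmit starting at expected_bsn.
        return {"status": "GAPDETR", "received_bsn": bsn, "expected_bsn": expected_bsn}
    expected_bsn += data_bytes + (1 if eom else 0)
    return {"status": "ok", "next_expected": expected_bsn}

print(receive(1, 3, True))    # ok, next expected 5
print(receive(5, 2, True))    # ok, next expected 8
print(receive(9, 1, False))   # GAPDETR: the message starting at BSN 8 was lost
```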
30
HPR Segmentation
Message segmentation and reassembly only at the end RTP nodes
The NLP message size cannot exceed the smallest link BTU size along the session path; this includes the NHDR, THDR, and data
The route setup sequence carries a maximum packet size field in the X'80' Route Information CV of the route setup GDS variable X'12CE'
Each node replaces the maximum packet size if the next hop's maximum is less than what is in the field
Minimum acceptable packet size is 768 bytes
The NHDR and THDR cannot be segmented
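A sketch of the size rule above: only the data portion is split, every resulting NLP (NHDR + THDR + data slice) must fit the smallest BTU on the path, and only the first and last segments carry SOM and EOM. The header lengths used here are placeholders, not the real encodings.

```python
def segment_piu(piu: bytes, max_packet: int, nhdr_len: int = 8, thdr_len: int = 12):
    """Split a PIU so every NLP (NHDR + THDR + data) fits within max_packet.

    Header lengths are illustrative placeholders; the NHDR and THDR
    themselves are never segmented, only the data is.
    """
    assert max_packet >= 768, "minimum acceptable packet size is 768"
    room = max_packet - nhdr_len - thdr_len          # data bytes allowed per NLP
    chunks = [piu[i:i + room] for i in range(0, len(piu), room)] or [b""]
    return [
        {"SOM": i == 0, "EOM": i == len(chunks) - 1, "data": chunk}
        for i, chunk in enumerate(chunks)
    ]

nlps = segment_piu(b"x" * 2000, max_packet=1033)
print(len(nlps), nlps[0]["SOM"], nlps[-1]["EOM"])    # 2 True True
```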
31
Segmentation and Reassembly
Diagram: a PIU (segments 1–j) enters the HPR subnet and is carried as NHDR, THDR (SOM), DATA (PIU part 1); NHDR, THDR, DATA (PIU part 2); …; NHDR, THDR (EOM), DATA (PIU part k); it leaves the subnet reassembled as a PIU (segments 1–m)
32
Flow/Congestion Control
Adaptive rate-based flow control: provides fairness between RTP connections
Session pacing: provides fairness between sessions within an RTP connection
33
ARB Flow/Congestion Control
Knee: the point where congestion starts; Cliff: where congestion results in packet loss and queuing delays
ARB keeps traffic in the operating region between the knee and the cliff (chart: throughput vs. send rate)
Input traffic is reduced as the network approaches congestion and increased when network capacity increases
34
Properties of ARB Adapts to network conditions to maximize throughput and minimize congestion Smooths input traffic to avoid bursts Provides end-to-end flow control so one endpoint cannot flood the other Requires minimal processor cycles and network bandwidth Provides fairness to all RTP connections
35
How ARB Works
Diagram: each direction of an RTP connection has an ARB sender and an ARB receiver; rate requests flow with the data and rate replies return across the intermediate node
Closed-loop mechanism based on information exchanged between the two RTP endpoints
ARB is set up on RTP connection setup or path switch
36
The ARB Algorithm
Diagram: rate requests and replies plotted against sender time and receiver time (intervals 1–3)
The sender sends the interval since its last request
The receiver measures the interval since the last request arrived
The receiver recommends the action the sender should take
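A highly simplified sketch of that closed loop: the sender reports how long it waited since its last rate request, the receiver compares that with the interval it measured between arrivals, and a stretched receive interval (queuing delay building up) leads it to recommend a slowdown. The thresholds and the mapping to recommendations are assumptions; the real ARB decision logic is more detailed.

```python
def arb_receiver_recommendation(sender_interval_ms: float,
                                measured_interval_ms: float) -> str:
    """Toy version of the ARB receiver decision (thresholds are illustrative)."""
    stretch = measured_interval_ms / sender_interval_ms
    if stretch > 1.25:
        return "slowdown1"     # delay growing: back off
    if stretch > 1.05:
        return "restraint"     # hold the current rate
    return "normal"            # network keeping up: rate may increase

print(arb_receiver_recommendation(100, 98))    # normal
print(arb_receiver_recommendation(100, 140))   # slowdown1
```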
37
ARB Features
A burst timer indicates when the ARB sender may send a "burst size" of data
A rate request and rate reply may be in the same message
Recommendations to the sender:
Normal—increase send rate
Restraint—no rate change
Slowdown1—reduce rate (~12.5%)
Slowdown2—reduce rate (~25%)
Critical—reduce rate (~50%)
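The sender then applies the recommendation to its allowed send rate. A sketch using the percentages listed above; the fixed increment used for "normal" is an assumption:

```python
def adjust_send_rate(rate_kbps: float, recommendation: str,
                     increment_kbps: float = 32.0) -> float:
    """Apply an ARB recommendation to the current send rate (illustrative)."""
    factors = {
        "restraint": 1.0,      # no rate change
        "slowdown1": 0.875,    # reduce rate ~12.5%
        "slowdown2": 0.75,     # reduce rate ~25%
        "critical": 0.5,       # reduce rate ~50%
    }
    if recommendation == "normal":
        return rate_kbps + increment_kbps      # increase send rate
    return rate_kbps * factors[recommendation]

rate = 1000.0
for rec in ["normal", "normal", "slowdown1", "critical"]:
    rate = adjust_send_rate(rate, rec)
print(rate)   # 465.5  (1000 -> 1032 -> 1064 -> 931.0 -> 465.5)
```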
38
Traffic Prioritization
Same CoS/transmission priority as base APPN
The intermediate node processes transmission priority
RTP connections with the same CoS/transmission priority sharing a link get an equal share of the bandwidth
Fairness results because rate incrementing and decrementing are done consistently across connections
39
The Switch
Diagram: an RTP connection rerouted around a failed link (X) between network nodes connecting CS/2 and VTAM 4.3
Automatically reroutes RTP connection traffic around a failed link or node
Only operates within an HPR subnet
The alternate path must be HPR-only and support the same CoS
40
Switch Triggers
RTP connection failure detection: a status-requested message is sent, the timer expires, and the retry limit is exceeded
Operator request: used to switch back if the original path is again optimal
41
Path Switch Sequence
Diagram: network nodes NN A–NN H connecting CS/2 and VTAM 4.3
Sequence: TDU update; calculate a new RSCV; Route Setup Request forwarded hop by hop; Route Setup Reply returned; data resumes on the new path
42
Cisco High-Performance Routing
Evolutionary extension to APPN
Better performance, lower overhead
Higher link utilization, predictable response time
Complements networking trends
Supports class of service and transmission priority
Drop-in migration for current APPN networks
43
Positioning APPN in a Multiprotocol Network
44
Why Use APPN/HPR?
Data center and FEP backbone: support for a high-availability sysplex environment, native SNA routing, reduced FEP dependency, simplified configuration
Branch: peer-to-peer communications, end-to-end class of service
45
APPN—What Our Customers Tell Us
Reason: 1997 / 1998
Native SNA routing: 90% / 100%
Reduced FEP dependency: 80% / 90%
Support for Sysplex: 70% / 70%
Simplified configuration: 50% / 50%
Peer-to-peer communication: 20% / 10%
VTAM currency: NA / 10%
46
Why Use HPR? Required for Sysplex environment
Improved intermediate node throughput Improved availability—routing around failed nodes
47
HPR—What Our Customers Tell Us
Reason: 1998
Data center/network availability: 90%
Sysplex: 70%
Improved performance: 10%
End-to-end availability: 10%
48
APPN in a Parallel Sysplex
Diagram: ICN1/DLUS, MDH1, MDH2, and ICN2/DLUS (backup) hosts behind an ESCON Director and coupling facility, reached through APPN NN/DLUR routers
Generic resources
Multihost persistent sessions (requires HPR)
49
HPR—High Availability Options
Diagram: two channel-attachment options, ANR switching (RTP in the host, CMPC, ANR in the channel-attached router) and SR bridging (RTP in the host, CSNA, SRB in the router), with RTP or ANR/DLUR routers and DLSw+ (optional RTP) or RFC 1490 routers downstream
50
HPR—Option Comparison
Function performed by the channel-attached router: ANR node—ANR switch; XCA node—source route bridge ("duplicate TICs")
Who uses the solution: ANR node—new APPN installations; XCA node—migration from an existing ISR installation
Performance: both—100,000 PPS fast switched
Points of failure: both—application, end user
51
Banking Customer: Migration to HPR Using XCA
Upgrade VTAM to 4.4 for HPR
Upgrade the APPN NNs at aggregation points to HPR
Data center routers and branch routers do not require an upgrade
Diagram: SRB connection network in the data center, NN/DLUR aggregation routers, and DLSw+ over Frame Relay to the branches
52
IBM Networking Paradigms
Quadrants: pure SNA, SNA over an IP transport, IP client with SNA at the data center, and pure IP; APPN can play a role in three of the four models
This chart shows four different paradigms of IBM networks. The first paradigm involves SNA applications and SNA clients and uses a pure SNA network to connect them (typically subarea or APPN over frame or SDLC). The second quadrant also involves both SNA applications and SNA clients but uses a TCP/IP transport; thousands of customers have used this approach to improve availability and to allow consolidation of multiprotocol and SNA. The third quadrant shows a network that has migrated SNA to the data center by using either Web or TN3270 clients. The fourth quadrant shows a network where the mainframe applications have been replaced with TCP/IP applications. Most networks today span multiple quadrants, but the most common trend is to move toward IP solutions. Because end systems play a key role in deciding what you need from your networking equipment, consider your current and future application direction as part of your networking decision.
53
APPN Native
PPP, X.25, Frame Relay, and ATM enable sharing of physical media
Multiprotocol on the same or different virtual circuits
Ships-in-the-night routing
Diagram: ENs or PU 2 devices (dependent or independent LUs) attached to NN and NN/DLUR routers
54
APPN Customer—Government
200 branches, 600 by '99
Each branch has 100–150 CS/2 ENs, 20 sessions each
Network characteristics: public Frame Relay; an NN in each branch, centralized NNs in the data center; a DLUR node in each branch, a DLUS node in the data center; other mainframes are ENs and LEN nodes; border node in VTAM today, branch extender in the future
Why APPN? Pure SNA network; future peer-to-peer requirements
The customer discussed here is a government agency that has asked not to be referenced. Work with this customer began in December of last year. A week of training in San Jose acquainted the customer with the features of the product and provided some hands-on experience. Formal testing began in February, and DLUR was added to the test in April. Merging the test network with the production network began in June, and full production began in July. This will be a very large network within the next two years: APPN will be installed in over 600 routers, supporting 50K–60K logical units.
55
Government Customer—Final
Diagram: 8 mainframes (IBM, Hitachi, NEC) behind Cisco 7XXX NNs, a Frame Relay WAN, and Cisco 4XXX/7XXX NNs at remote sites with backbone Token Rings; 50K–60K workstations total, 30–40 CM/2 ENs per Token Ring, 4 LU 2 and 8 LU 6.2 sessions per workstation
This diagram shows what the final configuration will look like. Cisco 4XXX and 7XXX machines will provide APPN routing at the central site and in remote locations. Each remote location will contain a Token Ring with 30–40 ENs running CM/2. Each workstation will have 12 sessions: 4 LU 2 sessions and 8 LU 6.2 sessions with applications on the mainframes. Additionally, physically close remote Token Rings will be connected by bridges to a remote backbone ring for peer-to-peer traffic. Peer-to-peer traffic is not part of the initial test.
56
Issues/Concerns/Observations
An APPN NN at each branch is not required today, but may be in a future environment
The number of sessions drove the memory requirements
HPR plans: none; the application hosts do not support HPR
57
APPN over IP
DLSw+ provides nondisruptive rerouting, load balancing, and consolidation
The WAN can be any DLC
Single routing protocol across the WAN
Diagram: ENs or PU 2 devices (dependent or independent LUs) attached to NN and optional NN/DLUR routers connected over DLSw+
58
APPN Customer—Banking
1800 branches, multiple data centers
Network characteristics: existing FEP backbone; future direction for the backbone is TCP/IP; new application development will be on TCP/IP
Why APPN? SNA routing to multiple data centers; reduced FEP dependency; configuration ease
59
Banking Customer—Final
APPN over DLSw+ with connection network
NNs only at distribution sites
Multiple DLSw+ peers; DLSw+ "cost" used to define preference
Diagram: multiple data centers, NN/DLUR distribution routers, DLSw+ over SMDS and DLSw+ over Frame Relay to the 1800 branches, with DLSw+ peering carrying the CP-CP sessions
60
Issues/Concerns/Observations
APPN over DLSw+ provides multiple benefits: native SNA routing; simplified management (single-protocol network); eliminates scalability concerns
Connection network simplifies configuration and provides scalability benefits
HPR: would like to do it, but no plans yet
Current project: TCP/IP on every mainframe
61
IP Client, APPN at the Data Center
Isolate SNA to the data center
Use APPN for Parallel Sysplex, configuration simplicity, and SNA routing
The client can be TN3270 or a Web browser
Diagram: TN3270 and Web browser clients reach the ENs and NN in the data center through IP routers and a TN3270 server
62
APPN Customer—Government
Government agency with a bias toward standards-based solutions
Must make tax records available to the public
Telephone operators at green screens
Cost sensitive: must leverage existing investments
Requires access to multiple mainframes
Must have a scalable solution
63
Agenda
HPR Internals
HPR Externals
HPR Architecture and Design
HPR in a Multiprotocol Environment
Customer Case Studies
64
Government Customer—Final
TN3270 server session switching selects the correct mainframe
Distributed Director load balances across the TN3270 servers
Web Access for S/390 simplifies the user interface and reduces the cost of client support
Diagram: World Wide Web clients reach Web Access for S/390 and the TN3270 servers through Distributed Director
65
Issues/Concerns/Observations
Enables a single backbone protocol
Web client reduces desktop costs
Not applicable to LU 6.2 or LU0 applications
Planning for HPR/Sysplex at this time
66
How Are Enterprises Using HPR?
67
Case Study: Service Provider
Something for everyone! Diagram: CTC, APPN and DLSw+, DLUR/APPN, DLSw+ and TN3270, and RSRB/DLSw+ all in one network
68
Service Provider: Final
Consolidated four networks
Single platform for mainframe access
Direct, any-to-any communication
Common access regardless of desktop
Diagram: APPN, DLUR/APPN, and DLSw+/TN3270 access into the consolidated backbone
69
How Are They Using HPR?
Currently adding HPR and Sysplex in the data center
Channel-attached routers use SRB and CSNA to access the mainframes
Native HPR is used for host-to-host
70
Case Study: Insurance
65 branches require peer-to-peer, up to 2000 OS/2 users each
Network characteristics (before): routers installed for TCP/IP; all APPN traffic through VTAM
Why APPN in the branch? Peer-to-peer LU 6.2 sessions between OS/2s; offload directory processing from VTAM
The rationale for deploying remote network nodes in the branch offices was to offload directory services when new 6.2 applications rolled out. This deployment avoided a potential VTAM utilization issue. These are large branches with PCs running OS/2, local 6.2 applications to a server, and any-to-any traffic, so going to VTAM for directory services and local session establishment was avoided.
71
Insurance Customer: Before
Diagram: SNA connectivity and LU-LU session paths over the existing branch-to-branch physical connectivity
72
Insurance Customer: After
Diagram: SNA connectivity with LU-LU sessions flowing branch to branch over the APPN network
73
How Are They Using HPR?
HPR: not considering at this time
Re-evaluating branch strategy: branch-to-branch traffic did not materialize; moving to a TCP/IP branch platform, with DLSw+ as an interim step
74
Case Study: Insurance
Campus network with 12,000 ENs
Network characteristics: Ethernet and ATM switched infrastructure; TCP/IP and SNA; DLUR/DLUS; all APPN NNs in the data center
Why APPN? Native SNA routing between mainframes and, potentially, server to server; reduced FEP dependency
75
Insurance Campus—Final
Diagram: APPN ENs attached at 10/100 Mbps, with 100 Mbps and ATM links to the APPN NNs in the data center
76
How Are They Using HPR?
Considering a sysplex configuration; no decision yet
Considering HPR for the WAN between branch and data center
No plans to migrate the campus to HPR; little value
77
Case Study: Manufacturing
Before: diagram of the data center with 3x74 controllers at the remote locations and on the manufacturing campus
78
Manufacturing Customer: Final
Diagram: sysplex data center with HPR/DLUR nodes, and DLSw+ to the 3x74 controllers at the remote locations and on the manufacturing campus
79
How Are They Using HPR?
HPR in sysplex hosts and channel-attached routers only
Maximum application availability to the manufacturing campus is the key requirement
The campus is dual-backbone switched Token Ring
Remote-site traffic is carried over the TCP/IP network using DLSw+
80
Case Study: Insurance
Before: traditional SNA hierarchical network with 3x74 controllers (diagram)
81
Insurance Customer: Final
Diagram: two sysplex data centers with APPN, and DLSw+ out to the 3x74 controllers
82
How Are They Using HPR?
Sysplex configurations in two data centers
APPN/HPR in the data centers and at aggregation points
Acquired by another company with APPN/HPR only in the data center…
Most enterprises are implementing APPN/HPR in the data center and/or at aggregation points
Business objectives and application strategy usually dictate the backbone
Corporate intranet strategy is driving client changes: TN3270 and Web browsers at the desktop
Sysplex and high availability are the key HPR drivers
83
HPR in the Data Center
CMPC: Cisco MultiPath Channel
Enables HPR over the channel
Enhances host availability/reliability
Replaces IBM 3172, 3745/3746, and 950 channel-attached processors
Allows for a backup router configuration
Allows for a backup host configuration with Parallel Sysplex
84
Cisco High-Performance Routing
The End