1 High Availability Techniques for NAPs (Técnicas de Alta Disponibilidade para NAPs) Marcelo Molinari – Foundry Networks do Brasil marcelo@foundrynet.com © 2002 Foundry Networks, Inc.

2 Agenda
LINX topology overview
AMS-IX topology overview
Metro Ring Protocol
Virtual Switch Redundancy Protocol

3 London Internet Exchange (LINX)

4 LINX Topology (diagram)

5 LINX Topology
The LINX network consists of two separate high-performance Ethernet switching platforms installed across seven locations. Switches from two equipment vendors are deployed in two separate networks to provide an extra level of fault tolerance, the logic being that both systems shouldn't develop the same fault at the same time.

6 LINX Topology
Two switches are installed in every LINX location, and the locations are interconnected by multiple 10 gigabit Ethernet circuits to form two physically separate backbone rings. Most LINX members connect to both switching platforms, which reduces the impact of any downtime on a single network element. Management of the logical redundancy of the network is done using MRP (Metro Ring Protocol). In the event of the loss of a network segment, MRP activates a redundant link within tenths of a second and restores connectivity.

7 LINX Aggregated Traffic Statistics

8 Amsterdam Internet Exchange (AMS-IX)

9 AMS-IX Topology (diagram)

10 AMS-IX Topology
AMS-IX is a distributed exchange, currently present at five independent co-location facilities in Amsterdam. The AMS-IX topology is built around two hub/spoke arrangements. The core switches are Foundry Networks NetIron MLX-32 switches. Members with GigE, 100Base-TX or 10Base-T ports connect to Foundry Networks BigIron and BigIron RX-8 switches. Members with a 10GbE port connect to Glimmerglass Networks photonic cross-connects; these L1 switches connect the member 10GbE ports to BigIron RX-16 or NetIron MLX-16 switches.

11 AMS-IX Topology
The two core switches run VSRP (Virtual Switch Redundancy Protocol) to define the active hub/spoke and to automatically fail over to the other, based on pre-defined triggers (e.g. link failure). All Foundry edge switches follow VSRP automatically; the Glimmerglass switches follow the VSRP failover based on software developed at AMS-IX. Members can connect to the AMS-IX infrastructure at any of the five AMS-IX co-locations, at 100 Mbit/s, 1 Gbit/s or 10 Gbit/s.

12 AMS-IX Traffic Statistics

13 Metro Ring Protocol

14 Metro Ring Protocol (MRP)
Metro Ring Protocol is a Layer 2 protocol designed to provide SONET-like high-speed, fault-tolerant, fast-recovery behavior for Metro Ethernet networks. MRP's SONET-like features provide:
Sub-second failover
Efficient use of bandwidth with topology groups (802.1s based)
Scalable protection for multiple VLANs
Large-scale L2 MANs with multi-ring support
Highly flexible network designs
Works with other L2 features
Runs on all Ethernet and PoS/SDH interfaces, including 10 Gigabit
MRP is also designed to meet the Metro requirements discussed earlier: a ring-topology-specific design, sub-second convergence, efficient link utilization, and scalability. It runs on all Ethernet and PoS interfaces, on existing hardware.

15 How it works
A single node is defined as the Ring Master Node; all other nodes are defined as Ring Member Nodes.
The Master Node prevents loops by blocking its secondary port.
Ring Hello Packets (RHPs) are generated by the Master Node to check ring integrity.
As long as the Master sees its own Hello packets arrive on the secondary port, ring health is verified and the secondary port remains blocked.
[diagram: RHPs circulating around the ring; the Master's primary port is forwarding, its secondary port blocking]
MRP is designed to run in a ring, where all switches daisy-chain into a ring topology. A Master is configured on the ring and blocks one of its ring ports to prevent loops. The Master sends ring health packets out its primary interface and expects to see them enter on the blocked interface; this is how ring integrity is maintained. A minimal sketch of this health check follows.
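To make the Master's health check concrete, here is a minimal, illustrative sketch in Python. This is not Foundry's implementation: the class, the port stub, and the frame format are invented for illustration, and the 300 ms dead interval anticipates the next slide.

```python
import time

class Port:
    """Stub for a ring port; real RHPs are forwarded in hardware."""
    def transmit(self, frame):
        print("tx:", frame)

class MrpMaster:
    """Toy model of the MRP Master's ring-health check."""
    HELLO_INTERVAL = 0.100  # a real node sends an RHP on this configurable timer
    DEAD_INTERVAL = 0.300   # 3 consecutive lost Hellos => ring considered broken

    def __init__(self, primary: Port, secondary: Port):
        self.primary = primary
        self.secondary = secondary
        self.secondary_blocking = True          # loop prevention: block secondary port
        self.last_rhp_seen = time.monotonic()

    def send_rhp(self):
        # The Forwarding flag (used on a later slide) tells members they may
        # leave pre-forwarding; it is set while the secondary port is blocked.
        self.primary.transmit({"type": "RHP", "forwarding": self.secondary_blocking})

    def on_rhp_received_on_secondary(self):
        """Our own RHP made it all the way around: ring integrity verified."""
        self.last_rhp_seen = time.monotonic()

    def ring_healthy(self) -> bool:
        return time.monotonic() - self.last_rhp_seen < self.DEAD_INTERVAL
```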

16 Rapid Failover
Hello packets are hardware-forwarded by the nodes in the ring to ensure the fastest possible failure detection.
The Master considers the ring broken if no Hello packets are received within 300 ms (3 consecutive Hellos lost).
If no Hellos are received, the Master transitions its secondary port into the forwarding state to restore ring connectivity.
To provide reliable flushing of stale MAC entries, the Master sends 3 consecutive TCN (topology change) notifications.
By tuning timers and using messages sent by the node where the ring broke, recovery times of 150 to 200 ms are achievable.
[diagram: a fault on one segment of a NetIron 400 ring; RHPs stop returning, the Master unblocks its secondary port and sends TC notifications around the ring]
Health packets are sent at a configurable rate and forwarded in hardware by the ring nodes. If the Master stops receiving them, it transitions its secondary port from blocking to forwarding and notifies the other nodes with a topology change so they can flush their MAC tables. The failover step is sketched below.
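Extending the sketch above, the failover step could look like the following; the TCN frame and function name are again illustrative, not IronWare's.

```python
def check_ring(master: MrpMaster, member_ports: list[Port]):
    """Dead-interval check: if no RHP has returned within 300 ms, unblock
    the secondary port to restore connectivity, then send 3 consecutive
    TCNs so ring members flush their stale MAC entries."""
    if master.secondary_blocking and not master.ring_healthy():
        master.secondary_blocking = False      # secondary -> forwarding
        for _ in range(3):                     # 3 TCNs for reliable flushing
            for port in member_ports:
                port.transmit({"type": "TCN"})
```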

17 Link Restoration - Preventing Temporary Loops
When an MRP port comes up, it enters pre-forwarding mode to avoid creating a temporary loop. In pre-forwarding mode the port forwards no data, only the ring Hello packets from the Master.
The Master sees its own RHP again, detects that ring integrity has been restored, and puts its secondary port back into blocking mode.
From that point onwards, the Master sends RHPs with the Forwarding flag bit set, indicating that members should transition their ports from pre-forwarding to forwarding. The Forwarding flag bit is always set as long as the Master is blocking its secondary port.
[diagram: the restored link comes up in pre-forwarding (PF); once the Master re-blocks its secondary port, its RHPs carry the Forwarding (F) flag and members move to forwarding]
The port-state transitions are sketched below.
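The pre-forwarding logic amounts to a small per-port state machine. A hedged Python sketch, where the state names follow the slide and everything else is illustrative:

```python
from enum import Enum

class RingPortState(Enum):
    BLOCKING = 1
    PRE_FORWARDING = 2   # passes RHPs only, no data traffic
    FORWARDING = 3

def on_link_up() -> RingPortState:
    """A restored MRP port comes up in pre-forwarding to avoid a temporary loop."""
    return RingPortState.PRE_FORWARDING

def on_rhp(state: RingPortState, forwarding_flag: bool) -> RingPortState:
    """Members leave pre-forwarding only once the Master's RHPs carry the
    Forwarding flag, i.e. the Master has re-blocked its secondary port."""
    if state is RingPortState.PRE_FORWARDING and forwarding_flag:
        return RingPortState.FORWARDING
    return state
```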

18 Topology Groups
Master VLAN: a VLAN running one or more control protocols that control the active topology for the whole topology group. Control protocols: STP, RSTP, MRP, VSRP.
Member VLAN: a VLAN running no control protocol of its own; it follows the active topology of the master VLAN.
Member VLAN Group: a group of VLANs running no control protocol of their own; they follow the active topology of the master VLAN. VLAN groups are defined via the "vlan-group" command.
A small data-model sketch of these relationships follows.
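As an illustrative data model (not device configuration syntax), the relationships above can be pictured like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class TopologyGroup:
    """One master VLAN runs the control protocol; member VLANs simply
    follow its active topology. Illustrative model only."""
    master_vlan: int
    control_protocol: str          # one of: STP, RSTP, MRP, VSRP
    member_vlans: set[int] = field(default_factory=set)

# VLAN 2 runs MRP; VLANs 3-100 inherit its forwarding/blocking decisions
# without running any control protocol of their own.
group1 = TopologyGroup(master_vlan=2, control_protocol="MRP",
                       member_vlans=set(range(3, 101)))
```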

19 Efficient use of Ring Bandwidth
MRP supports multiple topology groups within a ring.
An MRP node can be a Master node for some topology groups and a Member node for others.
Each topology group contains a Master VLAN and Member VLANs.
Master VLANs generate Hello packets and block secondary ports.
4094 VLANs can be divided among up to 255 topology groups.
[diagram: topology groups 1 and 2 sharing the same physical ring]
To achieve efficient link utilization, multiple topology groups can traverse the same physical ring. In this example, the bottom switch is Master for topology group 1 and blocks one of its ports in that group, while the top switch is Master for topology group 2 and blocks one of its ports there. Links can be forwarding for some VLANs and blocking for others; the master placement is sketched below.
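The counter-rotating idea is just a placement decision: put different groups' Masters on different ring nodes so each group blocks a different link. A toy Python sketch, assuming a simple round-robin placement:

```python
def place_masters(topology_groups: list[str], ring_nodes: list[str]) -> dict[str, str]:
    """Round-robin master placement: each group's Master blocks its own
    secondary port, so different VLAN sets travel different ways around
    the same physical ring. Names are illustrative."""
    return {group: ring_nodes[i % len(ring_nodes)]
            for i, group in enumerate(topology_groups)}

print(place_masters(["topo-group-1", "topo-group-2"],
                    ["bottom-switch", "top-switch"]))
# {'topo-group-1': 'bottom-switch', 'topo-group-2': 'top-switch'}
```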

20 Using Multiple Rings
There are 3 ring scenarios: a single ring, rings that don't overlap, and overlapping rings that share links.
Each ring runs its own instance of MRP, independently of the other rings; ring IDs should not overlap.
A ring node can be Master for multiple rings.
[diagram: Single Ring / Non-Overlapping Rings (Phase I) / Overlapping Rings (Phase II)]

21 Example Scenarios – Phase I
High-speed 10 GbE trunks for Metro rings or IXPs.
Provides sub-second fault detection and failover.
Superior scalability: no limit on the maximum number of nodes per ring.
Counter-rotating topology groups provide efficient use of bandwidth.
Phase I supports manually configured Master nodes and non-overlapping rings.
[diagram: four non-overlapping rings (Rings 1-4) of BigIron 4000/8000 switches S1-S6, each ring with its own Master node and marked primary/secondary ports]

22 Example Scenarios – Phase II
Shared ring support provides increased reliability and increased bandwidth.
Phase II adds automatic configuration of Master nodes and support for overlapping rings.
[diagram: overlapping rings sharing links, with Master nodes on the shared segments]

23 Interface Flexibility
Support for mixed interfaces: 10 Gig and Gig, Gig and 10/100, 10 Gig and PoS/SDH.
Support for trunked interfaces: 10 Gig and Gig, PoS/SDH.
Essentially all interfaces can be used (except ATM), and even trunked interfaces for higher-bandwidth links and more redundancy.
[diagram: two rings mixing interface speeds, with a slower link and trunked links between BigIron 4000/8000 switches]

24 MRP – Summary of Benefits
Fast, sub-second, predictable failover functionality
Maximizes ring bandwidth utilization
Cost-effective, scalable solution for MAN resiliency
Attractive alternative to STP
Utilizes standard Ethernet packets and MACs
Can be combined with other Foundry features to provide complete end-to-end MAN designs

25 Virtual Switch Redundancy Protocol

26 Virtual Switch Redundancy Protocol
VSRP provides an alternative to Rapid Spanning Tree Protocol (RSTP) in dual-homed/mesh configurations, providing sub-second failover and recovery. VSRP features provide:
Sub-second failover
Efficient use of mesh bandwidth - no blocked links
Blocks and unblocks ports at the per-VLAN-group level
Large-scale L2 MANs with multi-tier support
Highly flexible network designs
Configurable tracking options
Works with other L2 features
Works with all Ethernet interfaces, including 10 Gigabit
VSRP is based on VRRP-E and can provide L2 and L3 backup
VSRP (like MRP) is an alternative to Spanning Tree; many customers are very biased against STP. VSRP supports mesh topologies, with sub-second failover as a prerequisite for the protocol, and operates per VLAN / topology group. It works with other L2 features and existing Ethernet technologies. Because VSRP is based on VRRP-E, it can provide L2 redundancy at the VLAN link level and gateway redundancy at the IP level at the same time.

27 How it works
VSRP uses an election process to select a Master switch and up to 4 Backup switches for each VLAN: the higher configured priority wins; if priorities are equal, the higher IP address wins.
Only the Master switch forwards data; Backup switches block traffic on all VSRP-configured interfaces within the VLAN (or the topology group).
The Master switch sends Hello packets to all Backup switches.
Edge switches do not have to be VSRP-aware, but VSRP-aware switches fail over faster.
VSRP can track ports and decrease the priority of the active switch if a tracked port goes down.
[diagram: S1 (Master, forwarding) and S2 (Backup, blocking) BigIron 8000 core switches sending Hellos down to VSRP-aware BigIron 4000 edge switches S3-S5]
VSRP consists of Backup switches at the core and VSRP-aware switches at the edge. The Backup switches elect a Master for each VLAN / topology group, and the Master forwards on all ports associated with that group. Master switches send Hello packets at configurable intervals on all VSRP ports, and the Backup switches monitor them. Masters can also track non-VSRP ports, such as upstream router ports, and step down when those fail; this provides multiple levels of redundancy. To keep the topology loop-free, the Backup switches maintain an active-standby relationship: the active switch forwards and sends Hellos on all VSRP interfaces, the standby switches block and monitor, and when a standby stops seeing Hellos for a configurable period it becomes active. The election rule is sketched below.
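The election rule itself is easy to state precisely. A hedged Python sketch of just that comparison (switch names, priorities, and addresses are made up):

```python
from ipaddress import IPv4Address

def elect_master(candidates: dict[str, tuple[int, str]]) -> str:
    """VSRP election as described above: the highest configured priority
    wins; a tie is broken by the higher IP address. Input maps switch
    name to (priority, ip). Illustrative only."""
    return max(candidates,
               key=lambda name: (candidates[name][0],
                                 IPv4Address(candidates[name][1])))

switches = {"S1": (100, "10.0.0.1"), "S2": (100, "10.0.0.2")}
print(elect_master(switches))  # S2 - equal priority, higher IP wins
```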

28 Rapid Failover
A VSRP Backup switch monitors Hellos from the Master.
If no Hellos are received within the Master Dead Interval (default 300 ms), the Backup enters the Hold Down state and starts sending periodic Hellos. The Hold Down interval (default 300 ms) allows for the election of a new Master.
If the switch is elected Master, it puts its ports into the forwarding state and sends 3 TCNs.
A VSRP-aware switch receives the TCN and looks for the new Master; the new Master's Hellos will arrive on a different port.
The VSRP-aware switch shifts the MAC addresses learned on the failed port to the new port.
[diagram: a core fault causes a Backup to take over as Master; the VSRP-aware edge switch sees Hellos on a different uplink and moves its dynamically learned MAC table entries to that port]
The edge switch knows which uplink the active switch is on by monitoring the Hello packets. When a Backup becomes the new Master, its Hellos are seen on a different link, and the edge switch moves the MAC addresses learned on the failed link to the new one. This MAC shift is sketched below.
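The MAC-shift behavior of a VSRP-aware edge switch can be sketched as follows; the table entries loosely echo the (truncated) MAC table shown on the slide and are otherwise hypothetical:

```python
def shift_macs(mac_table: dict[str, str], old_uplink: str, new_uplink: str):
    """When the new Master's Hellos arrive on a different uplink, move
    every MAC learned on the old uplink to the new one instead of
    waiting for the entries to age out. Illustrative only."""
    for mac, port in mac_table.items():
        if port == old_uplink:
            mac_table[mac] = new_uplink

table = {"0060.f320.23a0": "1/1",    # hypothetical entries
         "00d0.b758.88dc": "1/1",
         "0030.1b00.0001": "2/5"}
shift_macs(table, "1/1", "1/2")
print(table)  # both uplink entries now point at port 1/2
```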

29 Link Restoration - Switching Back to the Original Master
When the failed link is restored, the original Master initially remains a Backup.
The original Master receives an inferior Hello from the current Master, so it immediately replies with its own Hello, switches into the Hold Down state (300 ms), and starts sending periodic Hellos.
The current Master receives the superior Hello, so it switches into Backup mode.
If no superior Hellos are received during the Hold Down interval, the original Master considers itself the current Master again and puts its ports into forwarding mode.
The new Master sends out 3 TCNs. A VSRP-aware switch receives the TCN, looks for the new Master, sees its Hellos on a different port, and shifts the MAC addresses to that port.
[diagram: the restored original Master moves from Backup back to Master; the VSRP-aware edge switch again moves its MAC entries to the original uplink]

30 Efficient use of Uplink Bandwidth
VSRP supports topology groups to fully utilize switches and links.
Topology groups are a collection of VLANs; each topology group contains a Master VLAN and Member VLANs.
VSRP-configured switches can be Master for some topology groups while Backup for others.
4094 VLANs can be divided among up to 255 topology groups.
Example: topology group 1 = master VLAN 1, member VLANs 2 to 2048; topology group 2 = master VLAN 2049, member VLANs 2050 to 4094.
[diagram: S1 is Master for topology group 1 and Backup for group 2; S2 is Master for group 2 and Backup for group 1; both send Hellos to VSRP-aware edge switches S3-S5]
Efficient link utilization is achieved by having multiple Master switches: each Master forwards for a unique set of topology groups while blocking on the others. In this example, switch 1 is Master for topology group 1 and Backup for topology group 2, while switch 2 is Master for group 2 and blocking on group 1.

31 VSRP Domains
VSRP can be configured in separate domains within the same VLAN to allow for larger topologies. Topology groups can be designed to use unique paths in each domain.
A TTL value within the VSRP Hello packet controls how deep the packet travels into the network. The TTL is decremented by 1 at each VSRP-aware switch. The default TTL is 2, which allows a Hello to traverse one VSRP-aware switch on its way to another VSRP active switch; this handling is sketched below.
[diagram: three stacked VSRP domains (VRID 1-3), each a pair of VSRP active NetIron 800s over VSRP-aware NetIron 400s]
Foundry switches can be VSRP Backup switches for downstream connectivity and VSRP-aware for upstream connectivity at the same time. Failure detection and convergence are localized to the VSRP Backup switches and do not propagate down the network. A VLAN or VLAN group can span up to 255 VSRP "hops". In this diagram, the top VSRP active switches may be configured with topology group 1 on their faster, more redundant links, and the bottom VSRP active switches with topology group 2 on theirs; high-priority VLANs can then traverse topology group 2 in the bottom domain and topology group 1 in the top domain, giving more control over the data paths.
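The TTL handling is simple enough to show directly. An illustrative Python sketch:

```python
def forward_hello(ttl: int):
    """VSRP-aware switches decrement the Hello TTL and drop it at zero.
    With the default TTL of 2, a Hello crosses one VSRP-aware switch and
    still reaches the next VSRP active switch. Illustrative only."""
    ttl -= 1
    return ttl if ttl > 0 else None   # None: Hello is not forwarded further

print(forward_hello(2))  # 1 -> forwarded one hop deeper
print(forward_hello(1))  # None -> stops here
```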

32 Intelligent Port Level Control
VSRP can be configured to run only on designated ports; only VSRP-configured ports are placed in blocking on Backup switches.
Supports host attachments and dual-port NICs: host-facing ports are configured as VSRP-free.
Works in combination with MRP to provide flexible Metro / Enterprise Ethernet designs.
[diagram: an MRP ring of NetIron 800s feeding a VSRP Master/Backup pair, VSRP-aware NetIron 400s below, and dual-homed servers attached to both core switches]
For tight control over VSRP, you can designate which ports belong to the VSRP domain. This allows interoperability with MRP and simpler configurations. In a redundant-NIC environment, servers can be attached to both the Master and the Backup switch with both ports active.

33 VSRP-Aware Switches
Both VSRP-aware and non-VSRP-aware switches can be used as edge devices.
A VSRP-aware switch recognizes the VSRP Hello packets sent by the Master and builds a table containing the VRID of the VLAN that sent the Hello plus the incoming port on which the Hello was received.
When the VSRP-aware switch sees a Hello packet arrive on a different port, it quickly moves its MAC address table entries to the new port.
[diagram: Master (forwarding) and Backup (blocking) BigIron 8000s over three VSRP-aware BigIron 4000 edge switches]

34 Non VSRP-Aware Switches
Non-VSRP-aware switches can be used as edge devices, but they do not recognize VSRP Hello packets. Their MAC entries only age out or are eventually re-learned on the new port, which results in slow convergence when the Master fails.
The solution is to configure "VSRP Fast Start" on the VSRP Master and Backup nodes. VSRP Fast Start disables and re-enables the ports before the transition from Master to Backup. This causes a MAC address flush in the edge devices, which makes convergence faster; a sketch of the idea follows.
[diagram: Master and Backup BigIron 8000s over three non-VSRP-aware BigIron 4000 edge switches]
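A sketch of the Fast Start idea in Python; the port object and its method names are invented for illustration:

```python
class EdgePort:
    """Stub port; method names are hypothetical."""
    def __init__(self, name: str):
        self.name = name
    def disable(self):
        print(f"{self.name}: link down")  # edge device flushes MACs on link-down
    def enable(self):
        print(f"{self.name}: link up")

def vsrp_fast_start(ports: list[EdgePort]):
    """Flap the VSRP ports before the Master/Backup transition so
    non-VSRP-aware edges see link-down and flush their MAC tables
    instead of waiting for the entries to age out."""
    for port in ports:
        port.disable()
        port.enable()

vsrp_fast_start([EdgePort("1/1"), EdgePort("1/2")])
```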

35 VSRP – Summary of Benefits
Fast, sub-second protection without Spanning Tree
Combines both switching and routing redundancy
Provides default gateway redundancy if needed
Supports topology groups for full link utilization
Can be combined with other Foundry features to provide complete end-to-end MAN designs

36 References
LINX - https://www.linx.net/
AMS-IX - "I can feel your traffic" -
MRP -
VSRP -

37 Thank You! Marcelo Molinari – Foundry Networks do Brasil marcelo@foundrynet.com

