1 MCT Design Options & Best Practices © 2012 Brocade Communications Systems, Inc.

2 Single Level MCT
- Active-active load balancing between servers and access switches
- High availability with both link-level and switch-level redundancy
- Sub-second failover without Spanning Tree Protocol
- Works with existing LACP and server trunks
- Coexists with existing Spanning Tree based networks for migration
- Dual-homed default gateway
A minimal cluster configuration sketch follows below.
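
The sketch below shows a minimal NetIron-style MCT cluster definition for one of the two peers of a single-level MCT pair. All names, rbridge IDs, VLAN IDs, addresses, and port numbers are hypothetical, and the exact syntax should be verified against the NetIron release in use.

  ! Single-level MCT peer (sketch; all values are hypothetical)
  vlan 4090 name mct-session
   tagged ethernet 2/1             ! ICL port must be tagged in the session VLAN
  !
  cluster MCT1 1
   rbridge-id 1                    ! unique per MCT node
   session-vlan 4090
   member-vlan 100 to 200          ! client VLANs carried by the cluster
   icl ICL ethernet 2/1
   peer 10.1.1.2 rbridge-id 2 icl ICL
   deploy                          ! activates the cluster
   client server1
    rbridge-id 100                 ! must match on both MCT peers for this client
    client-interface ethernet 1/5  ! the CCEP toward the dual-homed server
    deploy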

3 Multi-tier MCT – Full Mesh MCT
- Active-active load balancing between access and aggregation/core switches
- Full protection against failures in both the upstream and downstream directions
- Removes Spanning Tree Protocol from the aggregation/core layer
- Works with existing LACP

4 Best Practices for Single Level MCT and Multi-tier MCT
- ICL ports must be tagged members of the session VLAN
- The ICL should be a LAG, for link redundancy and for higher bandwidth for packets crossing the ICL; if the CCEPs are 1 GbE, a 10G LAG is preferred, and if the CCEPs are 10 GbE, a 100G LAG is preferred (see the LAG sketch below)
- All member VLANs, including the MCT client VLANs, must run on the ICL
- On the MLXe, non-MCT VLANs from CEPs can coexist with client VLANs on the ICL
- Slow failover mode prevents port flapping on the ICL
- Keep the topology symmetric to avoid single points of failure
- The number of clients, ports, and VLANs must conform to the scalability formula; exceeding the supported scale can leave the cluster in an unstable forwarding state
- With the scalability enhancement, more clients can be added later without interrupting traffic from existing clients
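
A minimal sketch of an ICL built as a static LAG, with the session VLAN and a client VLAN tagged across it, assuming NetIron-style syntax; the LAG name, ports, and VLAN IDs are hypothetical.

  ! ICL as a 2-port static LAG (sketch; ports and IDs are hypothetical)
  lag ICL static id 1
   ports ethernet 2/1 ethernet 2/2
   primary-port 2/1
   deploy
  !
  vlan 4090 name mct-session
   tagged ethernet 2/1             ! tagging the LAG primary port covers the LAG
  !
  vlan 100 name client-vlan
   tagged ethernet 2/1             ! every MCT member VLAN must run on the ICL
   tagged ethernet 1/5             ! CCEP toward the MCT client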

5 Best Practices for Single Level MCT and Multi-tier MCT
- Fiber ports are preferable where the data center has low-latency requirements: fiber ports converge faster than copper ports on link failure, because copper interfaces need more time to initialize
- Port Loop Detection (PLD) is strongly recommended when configuring MCT for the first time, to catch loops caused by misconfiguration
- If PLD is left enabled after deployment, configure loop-detection shutdown-disable on the ICL ports; this prevents PLD from shutting down the ICL
- CPU protection and VLAN hardware flooding are recommended if MCT runs only in an L2 domain, because the MCT control protocol and FDB synchronization are CPU-intensive
A sketch of these protections follows below.
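
A sketch of the loop-detection and CPU-protection settings named above, assuming NetIron-style syntax; the VLAN and port numbers are hypothetical, and command availability should be checked for the platform and release.

  ! Per-VLAN loop detection during initial MCT bring-up (hypothetical IDs)
  vlan 100 name client-vlan
   loop-detection                  ! PLD catches loops from misconfiguration
   cpu-protection                  ! hardware flooding for L2-only MCT domains
  !
  ! Keep PLD from ever shutting down the ICL itself
  interface ethernet 2/1
   loop-detection shutdown-disable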

6 Best Practices for Single Level MCT and Multi-tier MCT
- Hitless failover and hitless upgrade are not supported, but MCT is compatible with them: during a failover/upgrade the MCT peer takes over forwarding, and afterwards the peers resynchronize the FDB database and re-establish MCT
- PB and PBB are not supported on CCEP ports
- Configure the keep-alive VLAN between the two MCT nodes only, and use that VLAN for no other purpose; only one VLAN can be configured as the keep-alive VLAN (see the sketch below)
- If the cost of dedicating ports on both MCT nodes to the keep-alive VLAN is a concern, the keep-alive VLAN can instead be carried over one of the uplinks to the L3 network
- The MCT control plane can span continents: CCP runs over connection-oriented TCP, and with the default keep-alive timers (300 ms hello, 900 ms down), a routed path between San Francisco and Tokyo at roughly 73 ms each way is well within bounds
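
A sketch of a dedicated keep-alive VLAN on a direct link between the two MCT nodes, assuming NetIron-style syntax; the VLAN ID and port are hypothetical.

  ! Dedicated keep-alive VLAN between the MCT nodes (sketch)
  vlan 4091 name mct-keep-alive
   tagged ethernet 3/1             ! link used only for keep-alive
  !
  cluster MCT1 1
   rbridge-id 1
   session-vlan 4090
   keep-alive-vlan 4091            ! only one VLAN may serve this role
   icl ICL ethernet 2/1
   peer 10.1.1.2 rbridge-id 2 icl ICL
   deploy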

7 Best Practices for Single Level MCT and Multi-tier MCT
Client Isolation Mode
- Loose client isolation mode is recommended so that traffic keeps forwarding when the ICL fails
- With a keep-alive VLAN in loose client isolation mode, the MCT slave blocks its client ports
- A keep-alive VLAN is recommended with loose client isolation mode to prevent an unexpected L2 loop
[Diagram: MCT master (VRRP master) and MCT slave (VRRP backup) with SPF and ICL toward a routed Layer 3 network; without the keep-alive VLAN an ICL failure creates a loop, with it the slave blocks its client ports]

8 Best Practices for Single Level MCT and Multi-tier MCT
Client Isolation Mode (cont'd)
- Strict client isolation mode isolates the client network when the ICL fails: regardless of the keep-alive VLAN, both MCT peers block their client ports
- This prevents packets from a misbehaving client network from propagating into the whole network (see the sketch below)
[Diagram: MCT pair toward a routed Layer 3 network with VRRP master/backup, SPF, and ICL; both client ports blocked on ICL failure]
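
A minimal sketch of enabling strict client isolation, assuming NetIron-style syntax (loose mode is taken here to be the default when this command is absent):

  ! Strict client isolation: both peers block client ports on ICL failure
  cluster MCT1 1
   client-isolation strict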

9 Best Practices for MCT with VRRP/VRRP-E
- VRRP/VRRP-E is recommended as the default gateway for clients; both IPv4 and IPv6 VRRP/VRRP-E (without SPF) are supported with MCT
- VRRP-E Short Path Forwarding (SPF) is recommended to keep Layer 3 uplink traffic from overloading the ICL: a VRRP-E backup with SPF acts as a hidden VRRP-E master for default-gateway traffic
- IPv6 VRRP-E Short Path Forwarding is not supported
A VRRP-E SPF sketch follows below.
[Diagram: MCT pair toward a routed Layer 3 network, shown with and without SPF on the VRRP backup]
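
A minimal sketch of VRRP-E with short-path forwarding on the VE interface of the MCT client VLAN, assuming NetIron-style syntax; the addresses, VRID, and VE number are hypothetical.

  router vrrp-extended
  !
  interface ve 100                 ! VE on the MCT client VLAN (hypothetical)
   ip address 10.10.100.2/24
   ip vrrp-extended vrid 100
    backup priority 110
    ip-address 10.10.100.1         ! virtual gateway address
    short-path-forwarding          ! backup routes locally instead of via the ICL
    activate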

10 Best Practices for MCT with VRRP/VRRP-E
- When a Layer 3 uplink fails and all routes to the Layer 3 network were learned from that uplink, routed traffic can hit a black hole, regardless of whether SPF is enabled
- Configuring the Layer 3 uplink as a VRRP/VRRP-E track port with a track priority can force a VRRP/VRRP-E master failover, so the MCT node that lost its uplink forwards traffic to the new master, which still has routes (see the sketch below)
- The track priority must be large enough to force the master failover when the Layer 3 uplink fails
- Alternatively, adding IP interfaces to propagate routes from the VRRP/VRRP-E backup to the master also avoids the black hole
[Diagram: MCT pair, both VRRP-E nodes with SPF enabled; an L3 uplink failure on the master black-holes traffic until the master role fails over]
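
A sketch of uplink tracking added to the VRID from the previous example, assuming NetIron-style syntax; the track-priority value and ports are hypothetical, and the exact effect of track-priority on the VRID priority should be verified for the release.

  ! Track the L3 uplink so losing it forces a VRRP-E master failover
  interface ve 100
   ip vrrp-extended vrid 100
    backup priority 110 track-priority 60  ! priority drop must exceed the margin
    track-port ethernet 1/1                ! the Layer 3 uplink (hypothetical)
    ip-address 10.10.100.1
    short-path-forwarding
    activate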

11 Best Practices for Routing with MCT
Before R5.4, routing occurs above the default gateway:
- Routing with MCT is not supported on CCEP and ICL ports before R5.4
- Requires a virtual router as the default gateway, adding another layer of routers purely for routing
- Packets cannot take the shortest routed path across the ICL
- Only IPv4 routing is supported if VRRP-E short path forwarding is configured
[Diagram: MCT pair with VRRP master/backup and SPF connecting IP subnets 1-4 to a routed Layer 3 network]

12 Best Practices for Routing with MCT
With R5.4, Layer 3 routing can occur on CCEP and ICL ports:
- Routing with MCT supports IPv4 and IPv6 passive interfaces on CCEPs (see the sketch below)
- Routing happens at the MCT peers themselves, without a virtual router
- Routed packets can take the shortest path across the ICL
- Connects IP-based VMs in different subnets of a data center with less latency, and connects routed customer networks in the WAN
[Diagram: MCT pair connecting IP subnets 1-4 directly to a routed Layer 3 network]
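
A sketch of an IPv4 passive IGP interface on the CCEP-facing VE, assuming NetIron-style syntax and OSPF as the IGP; the IGP choice, VE number, and addressing are hypothetical.

  router ospf
   area 0
  !
  interface ve 100                 ! VE on the MCT client VLAN (hypothetical)
   ip address 10.10.100.2/24
   ip ospf area 0
   ip ospf passive                 ! advertise the subnet, no adjacency to clients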

13 Best Practices for MCT with an L2 Metro Ring
Metro Ring Protocol
- Dual-homing MCT to a metro ring builds large, resilient Layer 2 domains
- Sub-second re-convergence for any failure from the access switches to the border routers
[Diagram: MCT pair with ICL attached to a provider-network metro ring (MRP master), reaching Customer A and Customer B networks via PBB or L2VPN endpoints]

14 Best Practices for MCT with an L2 Metro Ring
Metro Ring Protocol (cont'd)
- The secondary port of the MRP master must not be configured on the ICL; the ICL must never be in blocking state
- Convergence time requires balancing the preforwarding time against the number of MRP instances
- With the default MRP preforwarding timer, a topology group is recommended to reduce the number of MRP instances (see the sketch below)
- G.8032 (ERP) is not recommended
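
A sketch of a single MRP instance on a master VLAN shared by member VLANs through a topology group, assuming NetIron/FastIron-style MRP syntax; the ring ID, VLAN IDs, and ring ports are hypothetical.

  ! One MRP instance, shared by member VLANs via a topology group (sketch)
  vlan 100 name mrp-master
   tagged ethernet 1/1 ethernet 1/2
   metro-ring 1
    master
    ring-interfaces ethernet 1/1 ethernet 1/2  ! neither port is on the ICL
    enable
  !
  topology-group 1
   master-vlan 100
   member-vlan 101 to 110          ! follow the master VLAN's MRP state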

15 Best Practices for MCT for VPLS
- High availability between point-to-multipoint clients
- Active-active paths to the customer edge router; active-standby paths to remote endpoints, with multiple standby paths between local and remote endpoints
- Provides cloud service across an MPLS network between data centers
- Does not require remote PEs to be aware of MCT (see the sketch below)
[Diagram: CE dual-homed via an MCT LAG to an MCT cluster of PEs (client edge ports); across the MPLS network, active and standby pseudowires fan out to the remote PEs]
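
A minimal sketch of a VPLS instance on one MCT PE, assuming NetIron-style syntax; the instance name, VC ID, peer addresses, and ports are hypothetical.

  router mpls
   vpls cust-a 100                 ! instance name and VC ID are hypothetical
    vpls-peer 10.2.2.2 10.3.3.3    ! remote PEs; they need not be MCT-aware
    vlan 200
     tagged ethernet 1/5           ! CCEP toward the CE's MCT LAG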

16 Best Practices for MCT for VLL
- High availability between point-to-point clients
- Active-active path to the customer edge router; active-standby path to the remote endpoint, with multiple standby paths between local and remote endpoints
- Provides cloud service across an MPLS network between data centers
- Does not require remote PEs to be aware of MCT (see the sketch below)
[Diagram: point-to-point VLL between CEs, each dual-homed via an MCT LAG to an MCT cluster of PEs; spoke-PWs run between the MCT PE nodes, with active and standby pseudowires across the MPLS network]
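
The corresponding VLL sketch on one MCT PE, again assuming NetIron-style syntax with hypothetical name, VC ID, peer address, and ports.

  router mpls
   vll cust-a-vll 300              ! VLL name and VC ID are hypothetical
    vll-peer 10.2.2.2              ! remote PE endpoint of the point-to-point VLL
    vlan 300
     tagged ethernet 1/5           ! CCEP toward the CE's MCT LAG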

17 Best Practices for MCT for VPLS/VLL
- The two MCT PE peers must be configured with the same VC mode, the same VPLS MAC table size, and the same set of remote peers; both tagged and raw mode are supported, but MCT will not form if these configurations are not identical
- An MCT spoke-PW between the MCT PE nodes is preferable: in an MPLS network a direct ICL carrying L2 session VLANs may not be achievable, and as long as there is a routed path between the two MCT PEs, the MCT spoke-PW never fails
- An L2 ICL with session VLANs and MCT for VPLS/VLL can be supported simultaneously
- MCT for VPLS/VLL always requires the L2VPN peer to be configured in the cluster; admins familiar with L2 MCT often forget this, and the cluster will not form without it. On a node failure, recovery can then take more than 10 seconds (on the order of 30 seconds). See the sketch below.
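
A sketch of the cluster-side L2VPN peer definition described above, assuming NetIron 5.x-style syntax; the l2vpn-peer keyword and all addresses here are given from memory and should be treated as assumptions to verify against the release documentation.

  cluster MCT-PE 1
   rbridge-id 1
   peer 10.1.1.2 rbridge-id 2 icl ICL
   l2vpn-peer 10.1.1.2 rbridge-id 2  ! assumed keyword; required for VPLS/VLL MCT
   deploy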

18 Best Practices for MCT for VPLS/VLL
- MCT for VPLS/VLL relies on the keep-alive timer to confirm that CCP is down: CCP can run in a Layer 3 domain without a direct-link ICL, and the keep-alive timer is adjustable to accommodate Layer 3 path reroute times
- MAC address withdrawal is recommended in MCT for VPLS to avoid black-holing packets from remote PEs
- VE over VPLS does not support routing over MCT for VPLS/VLL

19 Introduction to Multicast Routing with MCT
PIM-SM over MCT as Last Hop (new in R5.4)
- PIM and IGMP run natively on the MCT VLAN (VE); PIM sets up peering across the MCT chassis via the ICL
- IGMP query messages are sent natively on CCEPs; when an IGMP join is received on a CCEP, the membership is synchronized across the ICL to the MCT peer and installed in the PIM mcache on both peers
- When the source is on the uplink and the receiver is on a CCEP, a hashing algorithm determines whether the actual forwarding port is local or remote; for other combinations of source and receiver, MCT multicast uses the shortest path
- Both MCT peers use PIM to join the RPF path toward the RP; multicast traffic arrives on both peers, but only the peer with the local forwarding port forwards it to the client (a PIM configuration sketch follows below)
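
A minimal sketch of enabling PIM-SM on the MCT VLAN's VE interface, assuming NetIron-style syntax; the VE number and addressing are hypothetical.

  ! PIM-SM on the MCT VLAN's VE interface (sketch; IDs are hypothetical)
  router pim
  !
  interface ve 100                 ! VE on the MCT client VLAN
   ip address 10.10.100.2/24
   ip pim-sparse                   ! PIM peers with the MCT chassis via the ICL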

20 Introduction to Multicast Routing with MCT
Synchronizing IGMP State on CCEPs (new in R5.4)
- The receiver sends IGMP reports for (*, G1) and (*, G2); the MCT client forwards (*, G1) to CCEP1 and (*, G2) to CCEP2
- Each MCT switch synchronizes the IGMP report it receives locally to the peer's CCEPs via MDUP
[Diagram: receiver behind an MCT client (CEP) joining G1 and G2; (*, G1) and (*, G2) state is MDUP-synced between CCEP1 and CCEP2 across the MCT pair toward the routed L3 network]

21 Introduction to Multicast Routing with MCT
Receivers Behind the MCT Client (new in R5.4)
- Streams requested by the receiver are added to the CCEP on both MCT peers and pulled to both peers, but only one MCT switch forwards a stream to its CCEP; the MCT peer drops it
- For streams ingressing from a CEP (S3, S4), the MCT switch connected to the source forwards to its CCEP; if the local CCEP goes down, the peer forwards to its CCEP instead
- A stream ingressing from the ICL (S3, S4) is dropped until the remote CCEP goes down
- For streams ingressing from the uplink (S1, S2), the MCT hash function decides which MCT switch forwards each stream
[Diagram: sources S1/S2 on the routed L3 network and S3/S4 on CEPs; receivers for G1-G4 behind the MCT client on CCEP1/CCEP2; duplicate streams are dropped by the peer]

22 Introduction to Multicast Routing with MCT
Receivers on a CEP (new in R5.4)
- Streams requested by the receiver are added to CEP1
- A stream from the routed L3 network (S2) is pulled by the receiver-side MCT switch via its uplink and forwarded to CEP1
- A stream sourced from CEP2 (S4) is pulled by the receiver-side MCT switch via the ICL and forwarded to CEP1
- Streams sourced behind the MCT client (S1, S3) are load-balanced on the LAG to one of the MCT switches: a stream hashed to the receiver-side switch (S1) is forwarded natively to CEP1, while a stream hashed to the remote switch (S3) is forwarded to CEP1 via the ICL
[Diagram: receiver on CEP1; sources S1-S4 on the uplink, CEP2, and behind the MCT client; streams dropped by the peer are marked]

23 Introduction to Multicast Routing with MCT
Sources Behind the MCT Client (new in R5.4)
- Streams sourced behind the MCT client are load-balanced on the LAG to the MCT switches
- Streams hashed to the MCT switch that holds the OIF (S2, S3) are forwarded to the OIFs locally, whether the OIF is an uplink, a CEP, or another CCEP
- Streams hashed to the MCT switch whose OIFs are on the MCT peer (S1 to R1, S4 to R4) are forwarded to the OIF via the ICL, whether the OIF is an uplink or a CEP
- By the same rule, the stream S4 to R4 is not forwarded to the remote CCEP, since its ingress is the ICL and both CCEPs are up; the MCT client delivers S4 to the local receiver R5 by normal VLAN flooding
[Diagram: sources S1-S4 behind the MCT client; receivers R1-R4 on uplinks and CEPs, R5 local to the client; streams dropped by the peer are marked]

24 Introduction to Multicast Routing with MCT
Sources on a CEP (new in R5.4)
- Streams with OIFs on the local uplink are forwarded locally by the MCT switch (S1 to R1), as are streams with OIFs on the local CCEP (S1 to R5)
- Streams with OIFs on the remote CCEP are forwarded to the MCT peer via the ICL and then on to the MCT client (S3 to R3)
- By the same rule, streams are also forwarded to the MCT peer via the ICL but dropped there until the local CCEP goes down (S1 to the MCT peer)
- Streams with OIFs on the remote uplink (R2) or remote CEP (R4) are forwarded to those OIFs via the ICL (S2 to R2, S4 to R4)
[Diagram: sources S1-S4 on CEP1/CEP2; receivers R1-R4 on uplinks and CEPs, R5 behind the MCT client; streams dropped by the peer are marked]

25 Best Practices for Multicast with MCT
- Consider a source on the L3 uplink and a receiver on the MCT client: without a keep-alive VLAN, an ICL failure leaves the client interfaces of both MCT nodes in forwarding state, so the receiver gets duplicated multicast traffic
- A keep-alive VLAN is therefore strongly recommended for multicast with MCT
[Diagram: two copies of the topology (source in the routed L3 network, receiver behind the MCT client); without the keep-alive VLAN both CCEPs stay in forwarding state after an ICL failure, with it one node's client interface is shut down]

26 Best Practices for Multicast with MCT
- Continuing the previous scenario: if the designated forwarder's uplink fails, the designated-forwarder role does not move to the peer MCT node, even though only the peer can now pull traffic from the source; only a CCEP failure triggers a designated-forwarder change
- To prevent black-holing on uplink failure, configure an extra L3 interface between the two MCT nodes to serve as the next hop for routed multicast traffic; the IP interface can be configured either on the ICL or on an extra link (see the sketch below)
- This extra L3 interface is required for multicast with MCT whenever the source and receivers sit as in this topology
[Diagram: same topology with the keep-alive VLAN, the designated forwarder, and the extra IP interface between the MCT nodes]
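
A sketch of the extra L3 interface between the MCT nodes, placed here on a VLAN carried over the ICL, assuming NetIron-style syntax; the VLAN, VE number, and addressing are hypothetical.

  ! Extra routed interface between the MCT nodes, carried over the ICL (sketch)
  vlan 999 name mct-l3-internal
   tagged ethernet 2/1             ! ICL port
   router-interface ve 999
  !
  interface ve 999
   ip address 10.99.99.1/30        ! the peer would use 10.99.99.2/30
   ip pim-sparse                   ! next hop for routed multicast on uplink failure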

27 Best Practices for Multicast with MCT
- The (S, G) registry is synchronized to both MCT nodes, but only one node in the pair forwards the multicast traffic to clients or the uplink, never both
- Because twice the data traffic flows through the ICL, the ICL needs extra bandwidth in multicast MCT deployments
- Multicast with MCT is recommended only for single-tier MCT

28 Summary
Key takeaways
- High availability: sub-second failover in the event of a link, module, switch fabric, control plane, or node failure
- Active-active links: no idle Ethernet links in the network
- Optimal forwarding: Layer 2 and Layer 3 forwarding regardless of VRRP-E state
- Traffic load balancing: flow-based load balancing rather than VLANs shared across network links
- Simple deployment and operation: minimal configuration and easy troubleshooting
- No rip and replace: provides resiliency regardless of the type/vendor of edge device
- Flexibility: provides resiliency regardless of traffic type (Layer 2, Layer 3, or non-IP)
- Scalable for vMotion: interacts with MRP to build larger resilient Layer 2 domains

