Multi Node Label Routing – A layer 2.5 routing protocol


1 Multi Node Label Routing – A layer 2.5 routing protocol
Addressing routing scalability issues – NSF Ignite Project

2 Routing protocol challenges
All routing protocols face:
- Scalability issues – dependency on network size
- Flooding on information changes
- High churn rates
- Packet looping
- Unreliable and unstable routing
Goal: work with IP, and avoid the impacts of IP forwarding when necessary

3 What should the Solution look like?
Forward the packets transparently to IP at layer 3 – operate at layer 2.5. MPLS-like? But MPLS is not adequate for emergency situations: setup time, failure recovery, dependency on routing protocols, etc.

4 WHAT DO WE HAVE? Every network has a structure
- CORE (backbone (BB) routers), DISTRIBUTION (DB routers), ACCESS (AC routers) – tiers of routers for specific operations
- If a network does not have this structure, it is easy to set up a (virtual) one
- Packets between access networks have to be forwarded via distribution and core
- Either flat routing based on logical IP addresses, OR use the structure to forward packets

5 Using the Structure – the Tier Structure
- Capture the attributes of the structure in LABELS
- Based on the structure and connectivity, routers can have multiple LABELS
- We will use the LABELS to route and forward

6 TIER Structure and LABELS
Let us introduce routers and assign LABELS that capture the structure properties.
[Figure: three-tier topology – Tier 1: BB routers 1.1, 1.2, 1.3; Tier 2: DB router set 1 (2.1:1, 2.3:1) and DB router set 2 (2.2:1, 2.3:2); Tier 3: AC router set 1 (3.3:1:1, 3.1:1:1) and AC router set 2 (3.3:2:1, 3.2:1:1)]
The label structure is TierValue.UniqueID. At tier 2, UniqueID = parentID:childUniqueID; at tier 3, UniqueID = grandparentID:parentID:childUniqueID. The UniqueID carries the parent-child relationship (grandparent : parent : child): for example, 3.3:2:1 is a tier-3 router whose parent is 2.3:2 and whose grandparent is 1.3. The labels are tree-like and can be used for routing and forwarding; TierValue provides a level of aggregation.
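To make the label mechanics concrete, here is a minimal C sketch of parsing and parent derivation under the format above. It assumes labels travel as ASCII strings "TierValue.ID:ID:..."; the names mnlr_label, parse_label and parent_label are illustrative, not taken from the MNLR implementation.

```c
/* A minimal sketch of the MNLR label format described above.
   Assumes labels are ASCII strings "Tier.ID:ID:...:ID"; the names
   are illustrative, not from the actual MNLR code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_DEPTH 8

typedef struct {
    int tier;               /* TierValue: 1 = BB, 2 = DB, 3 = AC      */
    int id[MAX_DEPTH];      /* UniqueID chain: grandparent:parent:child */
    int depth;              /* number of components in the chain       */
} mnlr_label;

/* Parse "3.3:2:1" -> tier 3, chain {3,2,1}. Returns 0 on success. */
static int parse_label(const char *s, mnlr_label *l)
{
    char buf[64];
    strncpy(buf, s, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    char *dot = strchr(buf, '.');
    if (!dot) return -1;
    *dot = '\0';
    l->tier = atoi(buf);
    l->depth = 0;
    for (char *tok = strtok(dot + 1, ":"); tok && l->depth < MAX_DEPTH;
         tok = strtok(NULL, ":"))
        l->id[l->depth++] = atoi(tok);
    return (l->depth > 0) ? 0 : -1;
}

/* The parent's label drops the last chain component and is one tier up:
   the parent of 3.3:2:1 is 2.3:2, whose parent in turn is 1.3. */
static void parent_label(const mnlr_label *l, mnlr_label *p)
{
    p->tier = l->tier - 1;
    p->depth = l->depth - 1;
    memcpy(p->id, l->id, p->depth * sizeof p->id[0]);
}

int main(void)
{
    mnlr_label l, p;
    parse_label("3.3:2:1", &l);
    parent_label(&l, &p);
    printf("tier %d, parent tier %d, parent id %d:%d\n",
           l.tier, p.tier, p.id[0], p.id[1]);   /* -> 2.3:2 */
    return 0;
}
```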

7 Routing and Forwarding in the Structure
Each node records a neighbor table.
Neighbor table of 2.1:1:
  Label    Port
  3.1:1:1  1
  2.3:1    2
  1.1      3
[Figure: same three-tier topology as slide 6]
Frame from source 3.1:1:1 to destination 3.3:1:1. At 3.1:1:1, check my neighbor table: 3.3:1:1 is not my direct child or parent (compare UniqueIDs 1:1:1 with 3:1:1), so send to my parent; the packet reaches 2.1:1.

8 Routing and Forwarding in the Structure
[Figure: same topology, with the neighbor table of 2.1:1 – 3.1:1:1 -> port 1, 2.3:1 -> port 2, 1.1 -> port 3]
Frame from source 3.1:1:1 to destination 3.3:1:1. At 2.1:1, the node checks whether the destination is a relation of any of its neighbors: 3.3:1:1 is a child of 2.3:1, so node 2.1:1 forwards the packet to port 2. If that were not the case, it would send the packet to its parent (see the sketch below).
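A minimal C sketch of this forwarding decision, using the neighbor table of 2.1:1 from the slide. The label representation and the names neighbor_entry, covers and next_port are assumptions for illustration, not the actual MNLR code.

```c
/* A minimal sketch of the forwarding decision on slides 7-8, under an
   illustrative label representation; not the real MNLR API. */
#include <stdio.h>
#include <string.h>

#define MAX_DEPTH 8

typedef struct { int tier, depth, id[MAX_DEPTH]; } mnlr_label;
typedef struct { mnlr_label label; int port; } neighbor_entry;

/* dest is n itself, or a child/grandchild of n, iff n's chain is a
   prefix of dest's chain (1:1 is a prefix of 1:1:1, but not of 3:1:1). */
static int covers(const mnlr_label *n, const mnlr_label *dest)
{
    if (dest->depth < n->depth) return 0;
    return memcmp(n->id, dest->id, n->depth * sizeof n->id[0]) == 0;
}

/* Scan the neighbor table: forward toward any neighbor that is the
   destination or its ancestor; otherwise fall back to my parent. */
static int next_port(const neighbor_entry *tbl, int n,
                     const mnlr_label *me, const mnlr_label *dest)
{
    int parent_port = -1;
    for (int i = 0; i < n; i++) {
        if (covers(&tbl[i].label, dest))
            return tbl[i].port;                 /* match: use this port */
        if (tbl[i].label.tier == me->tier - 1 &&
            covers(&tbl[i].label, me))
            parent_port = tbl[i].port;          /* remember my parent   */
    }
    return parent_port;                         /* no match: go up      */
}

int main(void)
{
    /* Neighbor table of 2.1:1 from the slide:
       3.1:1:1 -> port 1, 2.3:1 -> port 2, 1.1 -> port 3 */
    neighbor_entry tbl[] = {
        { { 3, 3, {1,1,1} }, 1 },
        { { 2, 2, {3,1}   }, 2 },
        { { 1, 1, {1}     }, 3 },
    };
    mnlr_label me   = { 2, 2, {1,1} };          /* I am 2.1:1          */
    mnlr_label dest = { 3, 3, {3,1,1} };        /* destination 3.3:1:1 */
    printf("forward on port %d\n", next_port(tbl, 3, &me, &dest)); /* 2 */
    return 0;
}
```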

9 Routing and Forwarding in the Structure
[Figure: same three-tier topology]
- At each node, comparisons with the neighbor table entries direct the packet to the proper port; on no match, send to my parent
- No routing tables as in OSPF or BGP; no flooding of routing updates – only local neighbor information is exchanged
- Tier 1 may get more traffic – that normally happens
- Tier 1 is a partial mesh – max 2 hops (Seattle POP of the AT&T network: 44 routers)
- The neighbor table can record up to a max of 2 hops (work in progress)

10 Routing and Forwarding in the Structure
[Figure: same three-tier topology]
- On a link or node failure, follow another decision path – no flooding, low churn rates
- Example: the link between 2.3:1 and 2.1:1 fails; only node 2.1:1 records a change
- 2.1:1 would send the packet to 1.1, and the packet takes the path 1.1, 1.3, 2.3:1 (a sketch of the local failure detection follows)
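A minimal sketch of how such a failure could be detected locally through missed hellos, borrowing the 2 s hello interval and 3-hello threshold quoted on slide 26; the structure and names are illustrative, not the real implementation.

```c
/* A minimal sketch of hello-based failure detection for the local
   repair described above; the 2 s hello interval and 3 missed hellos
   come from slide 26, the struct and names are assumptions. */
#include <stdio.h>
#include <time.h>

#define HELLO_INTERVAL 2      /* seconds between hellos (slide 26) */
#define DEAD_HELLOS    3      /* missed hellos => neighbor is down */

typedef struct {
    const char *label;
    int port;
    time_t last_hello;        /* when we last heard this neighbor  */
} neighbor_entry;

/* Only the node adjacent to the failed link notices the change: it
   stops using the stale entry, so the next lookup falls back to
   another decision path (e.g. 2.1:1 falls back to 1.1). No flooding. */
static int neighbor_alive(const neighbor_entry *n, time_t now)
{
    return (now - n->last_hello) < DEAD_HELLOS * HELLO_INTERVAL;
}

int main(void)
{
    time_t now = time(NULL);
    neighbor_entry n = { "2.3:1", 2, now - 7 }; /* 7 s of silence */
    printf("%s alive: %d\n", n.label, neighbor_alive(&n, now)); /* 0 */
    return 0;
}
```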

11 How to implement in current networks?
The solution should be deployable, with an easy/smooth transition:
- Run as a layer 2.5 routing protocol – similar to MPLS
- Forward traffic from IP networks connected at the edge
- Learn the mapping of edge IP networks <-> labels of the nodes connected to those IP networks
- Disseminate this information to all edge nodes (see the sketch below)
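A minimal sketch of the edge mapping this implies: each edge node holds a table from end IP networks to the labels of the MNLR nodes fronting them, and consults it at ingress. The prefixes, labels and linear-scan lookup are illustrative assumptions.

```c
/* A minimal sketch of the edge IP network <-> label map described
   above; entries and names are assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

typedef struct {
    uint32_t net, mask;       /* edge IP network, host byte order    */
    const char *edge_label;   /* label of the MNLR node fronting it  */
} edge_map_entry;

/* Disseminated to all edge nodes per the slide. */
static edge_map_entry edge_map[] = {
    { 0x0A010000, 0xFFFF0000, "3.1:1:1" },  /* 10.1/16 -> 3.1:1:1 */
    { 0x0A020000, 0xFFFF0000, "3.3:1:1" },  /* 10.2/16 -> 3.3:1:1 */
};

static const char *lookup_edge(uint32_t dst_ip)
{
    for (size_t i = 0; i < sizeof edge_map / sizeof edge_map[0]; i++)
        if ((dst_ip & edge_map[i].mask) == edge_map[i].net)
            return edge_map[i].edge_label;
    return NULL;              /* no MNLR edge fronts this network */
}

int main(void)
{
    uint32_t ip = ntohl(inet_addr("10.2.0.7"));
    printf("egress label: %s\n", lookup_edge(ip));  /* 3.3:1:1 */
    return 0;
}
```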

12 Multi Node Label Routing (MNLR)
Implementation details. Demo available with 30 nodes on GENI – SAI.

13 MNLR Domain – Edge IP Networks
[Figure: MNLR domain – MNLR core nodes in the middle, MNLR edge nodes at the boundary, connecting end IP networks 1, 2, 3 and 4]

14 Tasks completed
- MNLR coding in C and implementation over GENI
- Automation of scripts for GENI setup and performance-metric collection
- BGP performance over Emulab

15 Demo: 27 nodes, MNLR vs BGP – MNLR operation in a 27-node scenario / BGP operation

16 Automated Process Flow
- GitHub manifest Rspec.xml
- Parse topology info
- Git update on the local repo
- Copy code to all GENI nodes
- Tier allocation and command formation
- Code compilation / error check
- Trigger MNLR on all nodes
- Trigger performance analysis
- Test cases using iperf
- Notification

17 Software Defined Networks
- SDN can be used for label assignment and connectivity-information dissemination at tier 1 – a kind of management function
- End IP address to label map dissemination to all edge MNLR nodes
- Placement of SDN controllers

18 SDN Controllers
[Figure: five SDN controllers (C1-C5) overlaid on the MNLR domain, with IP NW 1 and IP NW 4 at the edges]

19 MNLR Routers (Edge and Core)
[Figure: MNLR domain – edge and core nodes connecting end IP networks 1-4]
CONFIGURATION: label assignment to all nodes in an MNLR domain – hello messages (neighbor table) and connect messages (label assignment at tiers 2 and 3).
CORE CONNECTIVITY: periodic 'hello' messages by MNLR nodes to neighbors -> neighbor table.
EDGE CONNECTIVITY: end IP network address dissemination to all egress/ingress MNLR nodes.
EDGE ENCAP/DE-ENCAP: encapsulation of incoming IP packets in special MNLR headers; de-encapsulation of MNLR packets to deliver the IP packet to the destination IP networks.
CORE FORWARDING: forwarding of MNLR-encapsulated packets toward egress MNLR nodes based on MNLR labels.
Frame format: MNLR SRC | MNLR DST | IP PACKET
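A minimal sketch of the MNLR SRC | MNLR DST | IP PACKET encapsulation shown on this slide. The fixed 16-byte ASCII label fields and the mnlr_encap name are assumptions; the real header layout is not specified here.

```c
/* A minimal sketch of the edge encapsulation on this slide: the edge
   node prepends an MNLR header carrying source and destination labels
   to the raw IP packet. Field sizes are assumptions. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define LABEL_LEN 16          /* assumed fixed-size ASCII label field */

struct mnlr_hdr {
    char src[LABEL_LEN];      /* MNLR SRC label (ingress node)  */
    char dst[LABEL_LEN];      /* MNLR DST label (egress node)   */
};                            /* followed directly by the IP packet */

/* Encapsulate: write the header, then copy the IP packet after it. */
static size_t mnlr_encap(uint8_t *out, const char *src, const char *dst,
                         const uint8_t *ip_pkt, size_t ip_len)
{
    struct mnlr_hdr h;
    memset(&h, 0, sizeof h);
    strncpy(h.src, src, LABEL_LEN - 1);
    strncpy(h.dst, dst, LABEL_LEN - 1);
    memcpy(out, &h, sizeof h);
    memcpy(out + sizeof h, ip_pkt, ip_len);
    return sizeof h + ip_len;
}

int main(void)
{
    uint8_t frame[1500], fake_ip[20] = {0x45};   /* placeholder packet */
    size_t n = mnlr_encap(frame, "3.1:1:1", "3.3:1:1", fake_ip, 20);
    printf("encapsulated %zu bytes\n", n);       /* 32 + 20 = 52 */
    return 0;
}
```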

20 Optimization with MNLR
- Start with one-hop neighbors in the neighbor table
- How many hops should the neighbor table record? An optimization problem
- Only same-tier neighbors?

21 Implemented at layer 2.5 in the GENI testbeds
Evaluated for several topologies and compared with BGP (Quagga BGP run on Emulab)

22 Setting up the topology and configurations
Manual setup is time consuming; automation scripts can set up large topologies in GENI.

23 Test Cases
IPN1 -> IPN2: N5 – N3 – N4 – N8
IPN1 -> IPN4: N5 – N3 – N0 – N1 – N13 – N16
IPN1 -> IPN6: N5 – N3 – N0 – N2 – N24 – N26
IPN1 -> IPN7: N5 – N3

24 Test Case – Link Failure
IPN1 -> IPN2: N5 – N3 – N4 – N8. After the link between N3 and N4 fails, traffic follows the path N5 – N3 – N0 – N4 – N8.

25 BGP routing table size
The routing table size equals the number of networks in the topology; all nodes have routing information for all networks. For this topology, the routing table size is 29.

26 Link Failure Example
We fail the link between nodes 0 and 1:
- BGP churn rate: 18/27 nodes; convergence time: 156 s (the hello interval is 60 s)
- In comparison, MNLR's churn rate equals the number of nodes that experience the link failure (2 in this case), and its convergence time equals the number of hello intervals needed to detect the failure plus the time to update the affected nodes' neighbor tables (2 s hello interval, 3 hellos to declare a link failure)
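A quick worked comparison from the numbers above (assuming the neighbor-table updates themselves are near-instant): MNLR detects the failure after about 3 hellos x 2 s = 6 s, touching only the 2 adjacent nodes, versus 156 s and 18 of 27 nodes for BGP – roughly 26x faster convergence with one ninth of the churn.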

27 Future work
- Video, images, files
- Round-trip time, reliability
- Demo in early May

28 Demo MNLR vs BGP

29 Discussions
- Truly clean-slate technology: no distance vector, link state, or path vector
- Decouples the dependency of routing and its operations from network size – scalable
- Complements SDN – modularized control-plane operations; MNLR is modular
- Can be used for intra- or inter-AS routing
- Suggestions?

30 BACKUP SLIDES

31 Routing Structure and Modularity
Let us introduce routers and assign IDs that capture the structure properties.
[Figure: ISP A, backbone label 1.2, with Seattle POP 1.2:1, New York POP 1.2:2 and Chicago POP 1.2:3; inside a POP, the three-tier hierarchy – Tier 1 BB routers 1.1, 1.2, 1.3; Tier 2 DB router sets (2.1:1, 2.3:1) and (2.3:2, 2.2:1); Tier 3 AC router sets (3.3:1:1, 3.1:1:1) and (3.3:2:1, 3.2:1:1); devices at tier 4 (4.:::)]
Forward between 3.3:1:1 and 3.3:2:1 via 2.3:1 and 2.3:2. Forward between 3.3:1:1 in the Seattle POP and the NY POP: the packet leaves the Seattle cloud with the address 1.2:1(3.3:1:1); the device in the NY POP will accordingly have an address 1.2:2(3.3:1:1...) – name services.

