
1 DOT – Distributed OpenFlow Testbed

2 Motivation
Mininet is currently the de-facto tool for emulating an OpenFlow-enabled network
However, the size of the network and the amount of traffic are limited by the hardware resources of a single machine
Our recent experiments with Mininet show that it can cause
– Serialization of otherwise parallel flows
– Many flows to co-exist and compete for switch resources, as transmission rates are limited by the CPU
– A non-trivial process for running parallel iperf servers and clients

3 Objective
Run large-scale emulations of an OpenFlow-enabled network and
– Avoid or reduce the flow serialization and contention introduced by the emulation environment
– Enable emulation of large amounts of traffic

4 DOT Emulation
An embedding algorithm partitions the logical network across multiple physical hosts
– Intra-host virtual link: embedded inside a single host
– Cross-host link: connects switches located at different hosts
A Gateway Switch (GS) is added to each active physical host to emulate the link delay of the cross-host links
– The network augmented with GSs is called the physical network
The SDN controller operates on the logical network
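To make the partitioning step concrete, here is a minimal Python sketch (hypothetical names, not the DOT implementation) that classifies logical links as intra-host or cross-host once the embedding has assigned each switch to a physical host:

```python
# Hypothetical sketch: classify logical links after an embedding maps
# each logical switch to a physical host. Not the DOT implementation.

def classify_links(links, placement):
    """links: iterable of (switch_a, switch_b) pairs.
    placement: dict mapping switch name -> physical host id."""
    intra_host, cross_host = [], []
    for a, b in links:
        if placement[a] == placement[b]:
            intra_host.append((a, b))   # embedded inside one host
        else:
            cross_host.append((a, b))   # split into segments via gateway switches
    return intra_host, cross_host

# Example: switches A, B on host 1; switches C, D on host 2.
placement = {"A": 1, "B": 1, "C": 2, "D": 2}
links = [("A", "B"), ("B", "C"), ("C", "D")]
intra, cross = classify_links(links, placement)
print(intra)   # [('A', 'B'), ('C', 'D')]
print(cross)   # [('B', 'C')]  -> a cross-host link
```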

5 Embedding of Logical Network
[Figure: an emulated network embedded onto two physical machines (Physical Host 1 and Physical Host 2); the links spanning the two hosts are the cross-host links]

6 Embedding Cross-host Links
[Figure: physical embedding of two cross-host links a and b; each is split into segments (a', a" and b', b") that terminate at the gateway switches, with a Virtual Switch (VS) on each host]

7 SDN Controller's View
[Figure: the controller's view of the emulated network, showing only the logical network attached to the SDN controller]

8 Software Stack of a DOT Node
[Figure: software stack of a DOT node, showing virtual interfaces, virtual links, OpenFlow switches, and the physical link]

9 Gateway Switch
– A DOT component; one gateway switch per active physical host
– Attached to the physical NIC of the machine
– Facilitates packet transfer between physical hosts
– Enables emulation of delays on cross-host links
– Oblivious to the forwarding protocol used in the emulated network

10 Simulating Delay of the Cross-host Links
[Figure: the emulated network (only the cross-host links shown) and its physical embedding, with the link delay placed on one segment]
Only one of the two segments of a cross-host link simulates the link's delay
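Since only one segment carries the delay, the delay can be attached to that segment's virtual interface with Linux tc/netem (tc is the tool the technology slide names for delays). A minimal sketch; the interface name and delay value are placeholders:

```python
# Minimal sketch: attach the emulated delay of a cross-host link to the
# virtual interface of one of its segments using Linux tc/netem.
# "veth-b1" and the 10 ms value are placeholders, not DOT-defined names.
import subprocess

def add_link_delay(iface: str, delay_ms: int) -> None:
    # Equivalent to: tc qdisc add dev <iface> root netem delay <delay_ms>ms
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root",
         "netem", "delay", f"{delay_ms}ms"],
        check=True,
    )

add_link_delay("veth-b1", 10)
```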

11 Simulating Delay
[Figure: example flows A->F, B->E, and D->E crossing the host boundary]

12 Simulating Delay
[Figure: the same flows A->F, B->E, and D->E]
When a packet is received at a gateway switch through its physical interface, the GS must identify the remote segment through which the packet was forwarded
GS2 then has to forward the packet over that particular link even when the next hop is the same (e.g., for B->E and D->E)

13 Solutions for Traffic Forwarding at the Gateway Switch
– MAC rewriting
– Tagging (tunnel with tag)

14 Approach 1: MAC Rewriting
Each GS maintains the IP-to-MAC address mapping of all VMs
When a packet arrives at a GS through a logical link, the GS replaces
– The source MAC with the MAC of its receiving port
  This enables the remote GS to identify the segment through which the packet was forwarded
– The destination MAC with the MAC of the destination physical host's NIC
  This enables unicasting the packet through the physical switching fabric
When a GS receives a packet from the physical interface
– It checks the source MAC to identify the segment through which it should forward the packet
– Before forwarding, it restores the source and destination MACs by inspecting the packet's IP address fields
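The per-packet behaviour described above can be sketched as follows. This is an illustrative Python model, written from GS1's point of view, with hypothetical table contents; in DOT itself the gateway switch is an Open vSwitch instance programmed with equivalent flow rules (shown on the following slides):

```python
# Illustrative model of the MAC-rewriting logic at a gateway switch (GS1).
# All table contents and example values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Packet:
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str

ip_to_mac = {"10.0.0.1": "mac-VM1", "10.0.0.2": "mac-VM2"}   # IP -> VM MAC, all VMs
segment_mac = {"P_B": "mac-P_B", "P_C": "mac-P_C"}           # local GS port -> its MAC
remote_nic_mac = {"P_B": "mac-P_M2", "P_C": "mac-P_M2"}      # local port -> remote host NIC MAC
mac_to_port = {"mac-P_D": "P_B", "mac-P_E": "P_C"}           # remote segment MAC -> local output port

def outward(pkt: Packet, in_port: str) -> Packet:
    """Packet arriving from a local logical link, about to cross the physical fabric."""
    pkt.src_mac = segment_mac[in_port]      # lets the remote GS identify the segment
    pkt.dst_mac = remote_nic_mac[in_port]   # unicast to the remote host's physical NIC
    return pkt

def inward(pkt: Packet):
    """Packet arriving on the physical interface; returns (packet, output port)."""
    out_port = mac_to_port[pkt.src_mac]     # which remote segment sent it
    pkt.src_mac = ip_to_mac[pkt.src_ip]     # restore the original MACs
    pkt.dst_mac = ip_to_mac[pkt.dst_ip]     #   by inspecting the IP header
    return pkt, out_port
```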

15–20 Approach 1: MAC Rewriting (packet walkthrough)
[Figure sequence: a packet with IP (src, dst) = (VM2, VM1) is forwarded from VM2 towards VM1; the controller's view shows only the logical network, while the gateway switch ports P_B, P_C, P_D, P_E and the physical NIC MACs P_M1, P_M2 are hidden from the SDN controller]

21–26 Approach 1: MAC Rewriting (gateway switch rules)
[Figure sequence: the packet's MAC header is rewritten to (P_D, P_M1) at GS2, carried across the physical fabric, and restored at GS1]
Pre-configured rules at the gateway switches (ports P_B, P_C belong to GS1 on machine 1; ports P_D, P_E belong to GS2 on machine 2; P_M1, P_M2 are the physical NICs of machines 1 and 2):

GS1, outward traffic:
– If receiving port = P_B: srcMac ← P_B, dstMac ← P_M2
– If receiving port = P_C: srcMac ← P_C, dstMac ← P_M2
– Output: P_M1

GS1, inward traffic:
– If srcMac = P_D: output P_B
– If srcMac = P_E: output P_C
– Restore MACs by inspecting the IP header

GS2, outward traffic:
– If receiving port = P_D: srcMac ← P_D, dstMac ← P_M1
– If receiving port = P_E: srcMac ← P_E, dstMac ← P_M1
– Output: P_M2

GS2, inward traffic:
– If srcMac = P_B: output P_D
– If srcMac = P_C: output P_E
– Restore MACs by inspecting the IP header

27–28 Approach 1: MAC Rewriting (delivery)
[Figure sequence: GS1 restores the original MAC addresses and the packet continues through the logical network towards VM1; the controller's view remains unchanged]

29 Approach 1: MAC Rewriting
Advantages
– The packet size remains the same
– No change is required in the physical switching fabric
Limitations
– Each GS has to maintain the IP-to-MAC mapping of all VMs, which is not scalable

30 Approach 2: Tunnel with Tag
A unique id is assigned to each cross-host link
When a packet arrives at a GS through an internal logical link
– The GS encapsulates the packet with a tunneling protocol (e.g., GRE); the destination address is the IP address of the remote physical host
– A tag equal to the id of the cross-host link is attached to the packet (using the tunnel id field of GRE)
When a GS receives a packet from the physical interface
– It checks the tag (tunnel id) to identify the outgoing segment
– It decapsulates the tunnel header and forwards the packet
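One way to realize this with Open vSwitch is to create a GRE tunnel port per cross-host link and carry the link id in the GRE key. A sketch under that assumption; the bridge, port, and IP values are placeholders, and the exact setup may differ from DOT's:

```python
# Sketch (assumption): one GRE tunnel port per cross-host link on the
# gateway switch bridge, with the cross-host link id carried in the GRE
# key. Bridge/port names and IP addresses are placeholders.
import subprocess

def add_gre_port(bridge: str, port: str, remote_ip: str, link_id: int) -> None:
    # Equivalent to:
    #   ovs-vsctl add-port <bridge> <port> -- set interface <port> \
    #       type=gre options:remote_ip=<remote_ip> options:key=<link_id>
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port, "--",
         "set", "interface", port, "type=gre",
         f"options:remote_ip={remote_ip}", f"options:key={link_id}"],
        check=True,
    )

# Cross-host link #1 between this host and the host at 192.168.1.2:
add_gre_port("gs-br", "gre-link1", "192.168.1.2", 1)
```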

31 Approach 2: Tunnel with Tag
[Figure: the same topology with cross-host link ids #1 and #2 assigned; the controller's view is unchanged]

32 Approach 2: Tunnel with Tag (gateway switch rules)
[Figure: controller's view with cross-host link ids #1 and #2]
Pre-configured rules at the gateway switches:

GS1, outward traffic:
– If receiving port = P_B: tunnelID ← 1, use the tunnel to Machine 2
– If receiving port = P_C: tunnelID ← 2, use the tunnel to Machine 2

GS1, inward traffic:
– If tunnelID = 1: output P_B
– If tunnelID = 2: output P_C

GS2, outward traffic:
– If receiving port = P_D: tunnelID ← 1, use the tunnel to Machine 1
– If receiving port = P_E: tunnelID ← 2, use the tunnel to Machine 1

GS2, inward traffic:
– If tunnelID = 1: output P_D
– If tunnelID = 2: output P_E

33–35 Approach 2: Tunnel with Tag (packet walkthrough)
[Figure sequence: the original packet (IP src/dst = VM2, VM1) is encapsulated with an outer tunnel header addressed between the physical NICs (P_M1, P_M2) and carrying tunnel id #1; the remote GS decapsulates it and forwards it on the segment identified by the tunnel id]

36 Approach 2: Tunnel with Tag
Advantages
– No change is required in the physical switching fabric
– No GS needs to know the IP-to-MAC address mapping
– The rule set in a GS is on the order of the number of cross-host links, so the solution is scalable
Limitations
– Lowers the MTU (due to the encapsulation overhead)
Because of approach 1's scalability issue, we choose this solution

37 Emulating Bandwidth
Bandwidth is configured for each logical link
– Using the Linux tc command
The maximum bandwidth of a cross-host link is bounded by the physical switching capacity
The maximum bandwidth of an internal link is capped by the processing capability of the physical host
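A link's bandwidth cap can be attached to the corresponding virtual interface with tc, for example using a token-bucket filter; a minimal sketch with placeholder interface name and burst/latency values:

```python
# Minimal sketch: cap a logical link's rate with a tc token-bucket filter
# on its virtual interface. "veth-a1" and the burst/latency values are
# placeholders, not DOT defaults.
import subprocess

def set_link_rate(iface: str, rate_mbit: int) -> None:
    # Equivalent to: tc qdisc add dev <iface> root tbf rate <N>mbit burst 32kbit latency 50ms
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root", "tbf",
         "rate", f"{rate_mbit}mbit", "burst", "32kbit", "latency", "50ms"],
        check=True,
    )

set_link_rate("veth-a1", 100)   # cap the link at roughly 100 Mbit/s
```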

38 DOT: Summary
DOT can emulate an OpenFlow network with
– Specified link delays
– Specified link bandwidths
Traffic forwarding
– General Open vSwitch instances forward traffic as instructed by the Floodlight controller
– Gateway switches (also Open vSwitch instances) forward traffic based on pre-configured flow rules

39 Technology used so far
Open vSwitch, version 1.8
– A rate limit is configured on each port
Floodlight controller, version 0.9
– Custom modules added: Static Network Loader, ARP Resolver
Hypervisor
– QEMU-KVM
Link delays are simulated using tc (Linux traffic control)
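For the per-port rate limit mentioned above, one Open vSwitch mechanism is ingress policing on the interface; a hedged sketch (the interface name and values are placeholders, and this may not be the exact mechanism DOT uses):

```python
# Sketch: per-port rate limiting via Open vSwitch ingress policing
# (rate in kbps, burst in kb). "vnet0" and the numbers are placeholders.
import subprocess

def police_port(iface: str, rate_kbps: int, burst_kb: int) -> None:
    # Equivalent to:
    #   ovs-vsctl set interface <iface> ingress_policing_rate=<rate> \
    #       ingress_policing_burst=<burst>
    subprocess.run(
        ["ovs-vsctl", "set", "interface", iface,
         f"ingress_policing_rate={rate_kbps}",
         f"ingress_policing_burst={burst_kb}"],
        check=True,
    )

police_port("vnet0", 100000, 10000)   # ~100 Mbit/s with a 10 Mb burst
```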

