Optimizing the ‘One Big Switch’ Abstraction in Software Defined Networks
Nanxi Kang, Princeton University
In collaboration with Zhenming Liu, Jennifer Rexford, and David Walker

Software Defined Networking
Decouples the data and control planes: a logically centralized control plane (the controller) programs the switches through a standard protocol such as OpenFlow, translating network policies into switch rules.

Existing Control Platforms
The same decoupling (a logically centralized controller plus a standard protocol such as OpenFlow) delivers flexible policies ✔ but not yet easy management ✖.

‘One Big Switch’ Abstraction
The operator specifies two high-level policies as if the network were a single switch:
Endpoint policy E (e.g., ACL, load balancer): from H1, dstIP = 0* => go to H2; from H1, dstIP = 1* => go to H3.
Routing policy R (e.g., shortest-path routing).
An automatic rule-placement algorithm compiles E and R into rules on the physical switches.

Challenges of Rule Placement
A compiled policy can exceed 10k rules, while a switch TCAM holds only 1k–2k rules. Automatic rule placement must realize the endpoint policy E and routing policy R within these capacities.

Past Work
Nicira: install endpoint policies on the ingress switches and encapsulate packets to the destination; this only applies when the ingress switches are software switches. Other approaches: DIFANE, Palette.

Contributions
Design a new rule-placement algorithm that realizes high-level network policies, stays within the rule capacity of switches, and handles policy updates incrementally; evaluate it on real and synthetic policies.

Problem Statement
Given an endpoint policy E, a routing policy R, and the topology with per-switch rule capacities (e.g., 1k, 0.5k), automatically place rules so as to (1) stay within every switch's capacity and (2) minimize the total number of installed rules.

Algorithm Flow
Decompose the network into paths → divide rule space across paths → place rules over paths.

Single Path
On a single path the routing policy is trivial; what remains is fitting the endpoint policy into the switches' capacities C1, C2, C3.

Endpoint Policy (running example)
R1: (srcIP = 0*, dstIP = 00), permit
R2: (srcIP = 01, dstIP = 1*), permit
R3: (srcIP = **, dstIP = 11), deny
R4: (srcIP = 11, dstIP = **), permit
R5: (srcIP = 10, dstIP = 0*), permit
R6: (srcIP = **, dstIP = **), deny
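A policy like this is a priority-ordered rule list with first-match semantics. A minimal sketch (not the paper's code; the function names are ours) of how the slide's six rules would classify a packet:

```python
# Endpoint policy as a priority-ordered list of wildcard rules over
# 2-bit srcIP/dstIP fields, evaluated with first-match semantics.

def matches(pattern, value):
    """True if every non-wildcard bit of the pattern equals the value's bit."""
    return all(p == '*' or p == v for p, v in zip(pattern, value))

# R1..R6 from the slide: (srcIP pattern, dstIP pattern, action)
POLICY = [
    ("0*", "00", "permit"),  # R1
    ("01", "1*", "permit"),  # R2
    ("**", "11", "deny"),    # R3
    ("11", "**", "permit"),  # R4
    ("10", "0*", "permit"),  # R5
    ("**", "**", "deny"),    # R6 (default deny)
]

def evaluate(src, dst):
    """Return the action of the first (highest-priority) matching rule."""
    for src_pat, dst_pat, action in POLICY:
        if matches(src_pat, src) and matches(dst_pat, dst):
            return action
```

For instance, a packet (srcIP = 00, dstIP = 11) skips R1 and R2 and is denied by R3, even though R6 would also match.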

Map Rules to Rectangles
Each rule is a rectangle in the two-dimensional (srcIP, dstIP) flow space:
R1: (0*, 00), P; R2: (01, 1*), P; R3: (**, 11), D; R4: (11, **), P; R5: (10, 0*), P; R6: (**, **), D.
R1 through R5 are drawn as rectangles; R6 is the background default. The first switch has capacity C1 = 4.

Pick a Rectangle for Every Switch
With R1–R5 laid out in flow space, the algorithm picks one rectangle of flow space for each switch along the path.

Select a Rectangle
For a candidate rectangle q (with C1 = 4): the overlapped rules are those intersecting q (R2, R3, R4, R6), and the internal rules are those fully contained in q (R2, R3). q is feasible for a switch only if #overlapped rules ≤ C.
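The two rule sets can be computed with pattern intersection and containment tests. A rough sketch (not the paper's implementation; the rectangle q = (**, 1*) is our guess at a rectangle that reproduces the slide's sets):

```python
# Classify rules against a candidate rectangle q in (srcIP, dstIP)
# flow space. Patterns are wildcard strings; q is itself a pattern pair.

def overlaps(p, q):
    """Two wildcard patterns overlap iff they agree on all non-* bits."""
    return all(a == '*' or b == '*' or a == b for a, b in zip(p, q))

def contains(outer, inner):
    """outer contains inner iff every value matching inner matches outer."""
    return all(o == '*' or (i != '*' and o == i) for o, i in zip(outer, inner))

RULES = {  # name -> (srcIP, dstIP) pattern pair from the slide
    "R1": ("0*", "00"), "R2": ("01", "1*"), "R3": ("**", "11"),
    "R4": ("11", "**"), "R5": ("10", "0*"), "R6": ("**", "**"),
}

def classify(q_src, q_dst, capacity):
    overlapped = [n for n, (s, d) in RULES.items()
                  if overlaps(s, q_src) and overlaps(d, q_dst)]
    internal = [n for n, (s, d) in RULES.items()
                if contains(q_src, s) and contains(q_dst, d)]
    feasible = len(overlapped) <= capacity  # fits in the switch's TCAM
    return overlapped, internal, feasible
```

With q = (**, 1*) this yields overlapped = {R2, R3, R4, R6} and internal = {R2, R3}, matching the slide, and 4 overlapped rules fit C1 = 4.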

Install Rules in the First Switch
The rules overlapping q are installed on the first switch, clipped to q where they extend beyond it (e.g., R'4 is R4 restricted to q); internal rules such as R3 are installed unchanged. The result fits within C1 = 4.

Rewrite the Policy
After installing q, the policy for the rest of the path is rewritten: a high-priority rule forwards everything in q, skipping the original policy (those packets were fully processed at the first switch), and only the remaining rules R1, R4, R5 are left for downstream switches.

Overhead of Rules
#installed rules ≥ |endpoint policy|, because non-internal rules are not deleted from the residual policy. The objective when picking rectangles is therefore to maximize (#internal rules) / (#overlapped rules).
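The selection objective can be sketched as follows (illustrative only: the candidate names and counts are hypothetical, and the real algorithm derives them by classifying rules against each rectangle):

```python
# Among candidate rectangles that fit the switch (#overlapped <= capacity),
# prefer the one maximizing #internal / #overlapped, since only internal
# rules disappear from the residual policy.

def pick_rectangle(candidates, capacity):
    """candidates: list of (name, n_internal, n_overlapped) tuples."""
    feasible = [c for c in candidates if c[2] <= capacity]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[1] / c[2])

# Hypothetical candidates: q2 has the most internal rules but does not
# fit; between q1 (ratio 0.5) and q3 (ratio 1.0), q3 wins.
cands = [("q1", 2, 4), ("q2", 3, 8), ("q3", 5, 5)]
best = pick_rectangle(cands, capacity=6)
```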

Algorithm Flow (revisited)
Decompose the network into paths → divide rule space across paths → place rules over paths. With single-path placement solved, the next steps are decomposition and dividing the rule space.

Routing Policy
The routing policy is implemented by installing forwarding rules on the switches. It induces a set of paths (e.g., H1→H2, H1→H3), so the topology is decomposed into {paths}.

Enforce the Endpoint Policy
Project the endpoint policy E onto the paths: each projected policy (E1, E2, E3, E4) only handles the packets that use its path, so the paths can be solved independently.
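One way to realize the projection (a sketch under our assumptions, not the paper's code: the path predicate "packets from hosts 0* heading to hosts 1*" is hypothetical) is to intersect each rule with the path's predicate and drop rules that can never match:

```python
# Project the endpoint policy onto one path by intersecting every rule
# with the path predicate. Patterns are wildcard strings over 2-bit fields.

def intersect(p, q):
    """Bitwise intersection of two wildcard patterns, or None if empty."""
    out = []
    for a, b in zip(p, q):
        if a == '*':
            out.append(b)
        elif b == '*' or a == b:
            out.append(a)
        else:
            return None  # contradictory bits: empty intersection
    return ''.join(out)

POLICY = [  # (srcIP, dstIP, action), from the running example
    ("0*", "00", "permit"), ("01", "1*", "permit"), ("**", "11", "deny"),
    ("11", "**", "permit"), ("10", "0*", "permit"), ("**", "**", "deny"),
]

def project(policy, path_src, path_dst):
    projected = []
    for src, dst, action in policy:
        s, d = intersect(src, path_src), intersect(dst, path_dst)
        if s is not None and d is not None:
            projected.append((s, d, action))
    return projected

# Hypothetical path predicate: srcIP in 0*, dstIP in 1*.
E1 = project(POLICY, "0*", "1*")
```

Only three of the six rules survive on this path (R2, R3, and the default R6, each narrowed to the path's traffic), so each path's sub-problem is smaller than the full policy.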

What Is the Next Step?
Decomposition into paths ✔ and rule placement over a path ✔ are solved. What remains is dividing the rule space across paths: estimate the rules needed by each path, then partition the switches' rule space with a linear program.

Algorithm Flow (with feedback)
Decompose the network into paths → divide rule space across paths → place rules over paths. On success the placement is done; on failure the division of rule space is revised and placement is retried.

Roadmap
Rule-placement algorithm ✔ (stay within switch capacity, minimize total installed rules). Next: handle policy updates incrementally, making changes fast while a new placement is computed in the background; then evaluation on real and synthetic policies.

Insert a Rule into a Path
Consider inserting a new rule into the policy of an already-placed path.

Limited Impact
Inserting rule R requires updating only the subset of switches whose rectangles intersect R. The update respects the original rectangle selection: within each affected rectangle, R is clipped to the corresponding fragment R'.
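The incremental-update idea can be sketched like this (our illustration, not the paper's code; the switch names and ranges are hypothetical):

```python
# When rule R is inserted, only switches whose selected rectangles
# intersect R need new entries, and R is clipped to each such rectangle
# so the original rectangle selection is preserved.
# A rectangle is a dict of field -> (lo, hi) inclusive integer ranges.

def intersect_rect(a, b):
    """Intersection of two axis-aligned rectangles, or None if empty."""
    out = {}
    for field in a:
        lo = max(a[field][0], b[field][0])
        hi = min(a[field][1], b[field][1])
        if lo > hi:
            return None
        out[field] = (lo, hi)
    return out

def insert_rule(rule_rect, switch_rects):
    """Return {switch: clipped rule} covering only the affected switches."""
    updates = {}
    for switch, rect in switch_rects.items():
        clipped = intersect_rect(rule_rect, rect)
        if clipped is not None:
            updates[switch] = clipped
    return updates

# Hypothetical rectangles for a 3-switch path:
path = {
    "s1": {"src": (0, 7),  "dst": (0, 3)},
    "s2": {"src": (0, 7),  "dst": (4, 7)},
    "s3": {"src": (8, 15), "dst": (0, 7)},
}
new_rule = {"src": (0, 3), "dst": (2, 5)}
updates = insert_rule(new_rule, path)
```

Here the new rule touches s1 and s2 but not s3, so only two switches are updated.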

Roadmap
Rule-placement algorithm ✔, incremental updates ✔. Evaluation on real and synthetic policies: ACLs from a campus network and ClassBench rule sets, with shortest-path routing on GT-ITM topologies.

Evaluation on a Single Path
Assume all switches on the path have the same capacity, and find the minimum #rules/switch that yields a feasible rule placement. The reported overhead is the number of extra installed rules, normalized by the policy size:
Overhead = (#rules/switch × #switches − |E|) / |E|
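A worked instance of the metric (the numbers are purely illustrative, not results from the paper):

```python
# Normalized overhead: extra installed rules relative to |E|,
# where |E| is the size of the endpoint policy.

def normalized_overhead(rules_per_switch, n_switches, policy_size):
    """(#rules/switch x #switches - |E|) / |E|"""
    return (rules_per_switch * n_switches - policy_size) / policy_size

# E.g., a 5-switch path where 300 rules/switch suffice for |E| = 1000:
# (300 * 5 - 1000) / 1000 = 0.5, i.e., 50% extra rules installed.
overhead = normalized_overhead(300, 5, 1000)
```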

#Extra Installed Rules vs. Path Length
[Figure: normalized number of extra installed rules as a function of path length]

Data Set Matters
[Figure: normalized #extra rules vs. path length for real ACL policies] Policies with many rule overlaps incur substantially more extra rules than policies with few overlaps.

Place Rules on a Graph
Three metrics: #installed rules (use the rule space on switches efficiently), unwanted traffic (drop it early), and computation time (compute the rule placement quickly).

Carrying Extra Traffic Along the Path
Because rules are installed along the path, not all packets are handled by the first hop, so unwanted packets travel further before being dropped. To quantify this effect, assume traffic matching drop actions is uniformly distributed.

When Unwanted Traffic Is Dropped
[Figure: for an example single path, the fraction of unwanted traffic dropped by each switch vs. the fraction of the path travelled]

Aggregating All Paths
[Figure: minimum #rules/switch for a feasible rule placement, aggregated over all paths]

Give a Bit More Rule Space
Granting slightly more than the minimum capacity lets the algorithm put more rules at the first several switches along the path, dropping unwanted traffic earlier.

Take-aways
Paths incur low overhead in installing rules; rule capacity is shared efficiently across paths; most unwanted traffic is dropped at the edge. The algorithm is fast and easily parallelized: under 8 seconds to compute the placement for all paths.

Summary
Contributions: an efficient rule-placement algorithm, support for incremental updates, and evaluation on real and synthetic data. Future work: integrate with SDN controllers (e.g., Pyretic) and combine rule placement with rule caching.
