
1 Optimizing the ‘One Big Switch’ Abstraction in Software Defined Networks
Nanxi Kang Princeton University in collaboration with Zhenming Liu, Jennifer Rexford, David Walker

2 Software Defined Network
Decouple the data and control planes: a logically centralized control plane (the controller) speaks a standard protocol, e.g., OpenFlow, to the switches. Network policies go into the controller program, which installs the corresponding rules on the switches.

3 Existing control platform
Decouple the data and control planes: a logically centralized control plane (the controller) and a standard protocol, e.g., OpenFlow. The result is flexible policies and easy management.

4 ‘One Big Switch’ Abstraction
The network is programmed as one big switch, e.g.: From H1, dstIP = 0* => go to H2; From H1, dstIP = 1* => go to H3. The inputs are a routing policy R (e.g., shortest-path routing) and an endpoint policy E (e.g., an ACL or load balancer); automatic rule placement realizes both on the physical switches.

5 Challenges of Rule Placement
The endpoint policy E may contain more than 10k rules, but a switch TCAM holds only 1k–2k rules, so automatic rule placement must split the policy across multiple switches.

6 Past work
Nicira: install endpoint policies on the ingress switches and encapsulate packets to the destination; this only applies when the ingress switches are software switches. DIFANE. Palette.

7 Contributions
Design a new rule placement algorithm: realize high-level network policies; stay within the rule capacity of switches; handle policy updates incrementally. Evaluation on real and synthetic policies.

9 Automatic Rule Placement
Problem statement: given the topology (with per-switch rule capacities, e.g., 0.5k or 1k), the endpoint policy E, and the routing policy R, compute a rule placement that stays within every switch's capacity and minimizes the total number of installed rules.

10 Algorithm Flow
1. Decompose the network into paths. 2. Divide rule space across paths. 3. Place rules over paths.

12 Single Path
On a single path the routing policy is trivial; the switches along the path have rule capacities C1, C2, C3.

13 Endpoint policy
R1: (srcIP = 0*, dstIP = 00), permit
R2: (srcIP = 01, dstIP = 1*), permit
R3: (srcIP = **, dstIP = 11), deny
R4: (srcIP = 11, dstIP = **), permit
R5: (srcIP = 10, dstIP = 0*), permit
R6: (srcIP = **, dstIP = **), deny
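This example endpoint policy can be sketched as a minimal Python program, assuming 2-bit src/dst addresses and the usual first-match-wins (TCAM priority) semantics:

```python
# Sketch of the example endpoint policy, assuming 2-bit addresses and
# first-match-wins semantics (as a TCAM would apply them).

def matches(pattern, value):
    """True if a wildcard bit pattern like '0*' matches a bit string like '01'."""
    return all(p in ('*', v) for p, v in zip(pattern, value))

# (srcIP pattern, dstIP pattern, action), in priority order
policy = [
    ("0*", "00", "permit"),  # R1
    ("01", "1*", "permit"),  # R2
    ("**", "11", "deny"),    # R3
    ("11", "**", "permit"),  # R4
    ("10", "0*", "permit"),  # R5
    ("**", "**", "deny"),    # R6 (default)
]

def decide(src, dst):
    """Apply the first matching rule."""
    for s_pat, d_pat, action in policy:
        if matches(s_pat, src) and matches(d_pat, dst):
            return action
    return "deny"
```

For instance, a packet with srcIP 01 and dstIP 11 is permitted by R2 even though R3 would deny it, because R2 has higher priority.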

14 Map rule to rectangle
Each rule maps to a rectangle in the (srcIP, dstIP) space: R1: (0*, 00), P; R2: (01, 1*), P; R3: (**, 11), D; R4: (11, **), P; R5: (10, 0*), P; R6: (**, **), D.

15 Map rule to rectangle
All six rules drawn as rectangles over the 2-bit src/dst space: R1: (0*, 00), P; R2: (01, 1*), P; R3: (**, 11), D; R4: (11, **), P; R5: (10, 0*), P; R6: (**, **), D. The first switch has capacity C1 = 4.
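The wildcard-to-rectangle mapping is mechanical; a small sketch, assuming 2-bit addresses so each axis covers the integers 0–3:

```python
# Sketch: convert a wildcard pattern over 2-bit addresses into the integer
# interval it covers, so each rule becomes an axis-aligned rectangle in
# (srcIP, dstIP) space.

def to_interval(pattern):
    """'0*' -> (0, 1): fix wildcard bits to 0 for the low end, 1 for the high end."""
    lo = int(pattern.replace('*', '0'), 2)
    hi = int(pattern.replace('*', '1'), 2)
    return lo, hi

def to_rectangle(src_pat, dst_pat):
    return to_interval(src_pat), to_interval(dst_pat)

# R1: (0*, 00) covers srcIP in [0, 1] and dstIP in [0, 0]
```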

16 Pick a rectangle for every switch

17 Select a rectangle
Select a rectangle q. Overlapped rules (rectangles intersecting q): R2, R3, R4, R6. Internal rules (rectangles entirely inside q): R2, R3. Constraint: #overlapped rules ≤ C1 (here C1 = 4).
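The overlapped/internal distinction can be sketched as follows, representing each rule's rectangle as ((src_lo, src_hi), (dst_lo, dst_hi)). The exact coordinates of q (srcIP in [0, 3], dstIP in [2, 3]) are inferred from the figure so that the resulting sets match the slide:

```python
# Sketch of the classification on this slide: a rule "overlaps" q if its
# rectangle intersects q, and is "internal" if it lies entirely inside q.
# Rectangles are ((src_lo, src_hi), (dst_lo, dst_hi)).

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def interval_inside(a, b):
    return b[0] <= a[0] and a[1] <= b[1]

def overlaps(rect, q):
    return all(intervals_overlap(rect[i], q[i]) for i in (0, 1))

def internal(rect, q):
    return all(interval_inside(rect[i], q[i]) for i in (0, 1))
```

With q = ((0, 3), (2, 3)), R2 and R3 come out internal, R4 and R6 overlapped but not internal, and R1 and R5 disjoint from q, matching the sets on the slide.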

18 Install rules in first switch
Install the rules overlapping q in the first switch, clipped to q: R2, R3, and R’4 (the part of R4 inside q). This stays within C1 = 4 entries.

19 Rewrite policy
For the rest of the path, rewrite the policy: a rule that forwards everything in q (those packets were already handled by the first switch) lets downstream switches skip the original rules there; the remaining rules are R1, R4, R5.
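A hedged sketch of this rewrite step, representing each rule's rectangle as ((src_lo, src_hi), (dst_lo, dst_hi)): rules internal to q are dropped, and a single high-priority rule forwards everything in q. The helper names are illustrative, not the paper's API:

```python
# Sketch of the policy rewrite: once a switch fully handles rectangle q,
# downstream switches get a high-priority "forward everything in q" rule,
# and rules internal to q are removed from the remaining policy.

def inside(rect, q):
    return all(q[i][0] <= rect[i][0] and rect[i][1] <= q[i][1] for i in (0, 1))

def rewrite(policy, q):
    """policy: list of (name, rect, action); returns the downstream policy."""
    kept = [r for r in policy if not inside(r[1], q)]
    return [("fwd-q", q, "forward")] + kept
```

On the running example (q covering R2 and R3), the downstream policy becomes the forward-in-q rule followed by R1, R4, R5, and the default R6.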

20 Overhead of rules
#Installed rules ≥ |endpoint policy|, because overlapped rules that are not internal are not deleted from the remaining policy. Objective in picking rectangles: maximize (#internal rules) / (#overlapped rules).

21 Algorithm Flow
1. Decompose the network into paths. 2. Divide rule space across paths. 3. Place rules over paths.

22 Topology = {Paths}
The routing policy is implemented by installing forwarding rules on the switches; it decomposes the topology into a set of paths.

23 Project endpoint policy to paths
To enforce the endpoint policy E, project it onto the paths (E1, E2, E3, E4). Each projected policy only handles packets that use its path, so the paths can be solved independently.

24 What is the next step?
Decomposition into paths is done; next, divide the rule space across paths: estimate the rules needed by each path, partition the rule space by linear programming, then solve rule placement over each path.
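The talk partitions the rule space with a linear program; the toy sketch below conveys only the flavor of this step by splitting each switch's capacity evenly among the paths that traverse it. All names are illustrative, and the real LP-based algorithm is more sophisticated:

```python
# Hedged sketch of dividing rule space across paths: split each switch's
# capacity evenly among the paths that use it, then credit each path with
# the space it was allocated. Illustrative only, not the talk's LP.

def divide_capacity(paths, capacity):
    """paths: {path_id: [switch, ...]}; capacity: {switch: #rules}.
    Returns {path_id: total rule space allocated to that path}."""
    users = {}  # switch -> number of paths traversing it
    for switches in paths.values():
        for s in switches:
            users[s] = users.get(s, 0) + 1
    alloc = {}
    for pid, switches in paths.items():
        alloc[pid] = sum(capacity[s] // users[s] for s in switches)
    return alloc
```

For two paths sharing a middle switch of capacity 1000, each path gets 500 rules of that switch plus the full capacity of its private switches.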

25 Algorithm Flow
1. Decompose the network into paths. 2. Divide rule space across paths. 3. Place rules over paths; on failure, go back to step 2, on success the placement is done.

26 Roadmap
Design a new rule placement algorithm: stay within the rule capacity of switches; minimize the total number of installed rules. Handle policy updates incrementally: fast in making changes, compute the new placement in the background. Evaluation on real and synthetic policies.

27 Insert a rule into a path

28 Limited impact
Inserting a rule updates only a subset of the switches on the path.

29 Limited impact
Update only a subset of switches, respecting the original rectangle selection.
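One way to sketch this limited-impact property: a newly inserted rule only touches switches whose chosen rectangle it overlaps, with rectangles as ((src_lo, src_hi), (dst_lo, dst_hi)). The helper below is illustrative, not the paper's actual update procedure:

```python
# Hedged sketch of incremental update: keep the original rectangle
# selection, and only push new TCAM entries to switches whose rectangle
# the inserted rule overlaps.

def switches_to_update(new_rect, chosen):
    """chosen: {switch: rectangle}; returns the switches affected by new_rect."""
    def overlap(a, b):
        return all(a[i][0] <= b[i][1] and b[i][0] <= a[i][1] for i in (0, 1))
    return [s for s, q in chosen.items() if overlap(new_rect, q)]
```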

30 Roadmap
Design a new rule placement algorithm: stay within the rule capacity of switches; minimize the total number of installed rules. Handle policy updates incrementally. Evaluation on real and synthetic policies: ACLs (campus network) and ClassBench; shortest-path routing on GT-ITM topologies.

31 #rules/switch x #switches
For a path, assume all switches have the same capacity and find the minimum #rules/switch that gives a feasible rule placement. A first overhead measure is #rules/switch x #switches. Example: |E| = 13985, #switches = 4, #rules/switch = 3646, #total rules = 14584.

32 #rules/switch x #switches - |E|
Refining the measure: overhead = #rules/switch x #switches - |E|. Example: |E| = 13985, #switches = 4, #rules/switch = 3646, #total rules = 14584, #extra rules = 599.

33 #rules/switch x #switches - |E|
As a fraction of |E|: overhead = (#rules/switch x #switches - |E|) / |E|. Example: |E| = 13985, #switches = 4, #rules/switch = 3646, #total rules = 14584, #extra rules = 599, overhead = 4.3%.
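The arithmetic behind this table can be checked directly:

```python
# Overhead arithmetic from the evaluation slides: |E| = 13985 rules,
# 4 switches, minimum feasible per-switch capacity of 3646 rules.

E = 13985
switches = 4
rules_per_switch = 3646

total_rules = rules_per_switch * switches  # 14584 installed rules
extra_rules = total_rules - E              # 599 rules beyond |E|
overhead = extra_rules / E                 # about 4.3%
```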

34 #Extra installed rules vs. length
Chart: normalized #extra rules vs. path length. For the 4-switch path: |E| = 13985, #rules/switch = 3646, #total rules = 14584, overhead = 4.3%.

35 #Extra installed rules vs. length
Chart: normalized #extra rules vs. path length, with |E| = 13985. 4 switches: 3646 rules/switch, 14584 total, 4.3% overhead; 8 switches: 1895 rules/switch, 15160 total, 8.4% overhead.

36 Data set matters
Chart: normalized #extra rules vs. path length for real ACL policies. Policies with many rule overlaps incur more extra rules than policies with few overlaps.

37 Place rules on a graph
Metrics: #installed rules (use the rule space on switches efficiently), unwanted traffic (drop unwanted traffic early), computation time (compute the rule placement quickly).

39 Carry extra traffic along the path
Rules are installed along the path, so not all packets are handled by the first hop and unwanted packets travel further. To quantify the effect of carrying unwanted traffic, assume a uniform distribution of traffic with the drop action.

40 When unwanted traffic is dropped
An example single path with four hops; fraction of path travelled at each hop: hop 1: 25%, hop 2: 50%, hop 3: 75%, hop 4: 100%.

41 When unwanted traffic is dropped
An example single path. Hop 1 (25% of path): 30% of unwanted traffic dropped at this switch, 30% dropped so far. Hop 2 (50%): 10% dropped, 40% so far. Hop 3 (75%): 5% dropped, 45% so far. Hop 4 (100% of path).
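The "dropped until this switch" column is just a running sum of the per-switch drops:

```python
# Sketch of the cumulative-drop bookkeeping behind this table: given the
# fraction of unwanted traffic dropped at each hop, the "dropped until
# this switch" column is a running sum.
from itertools import accumulate

def cumulative_drops(per_hop):
    return list(accumulate(per_hop))

# The first three hops of the example path drop 30%, 10%, and 5% of the
# unwanted traffic, so 45% has been dropped by hop 3; whatever remains
# travels the full path.
```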

42 Aggregating all paths
Using the minimum #rules/switch that gives a feasible rule placement, fraction of path travelled vs. unwanted traffic dropped: 20% of path travelled: 64% dropped; 75%: 70%; 100%.

43 Give a bit more rule space
Put more rules at the first several switches along the path. With the minimum #rules/switch, 64% of unwanted traffic is dropped within 20% of the path and 70% within 75%; with 10% more rules per switch, those figures rise to 84% and 90%.

44 Take-aways
Paths: low overhead in installing rules; rule capacity is efficiently shared by paths. Most unwanted traffic is dropped at the edge. Fast algorithm, easily parallelized: < 8 seconds to compute all paths.

45 Summary
Contributions: an efficient rule placement algorithm, support for incremental update, and evaluation on real and synthetic data. Future work: integrate with SDN controllers, e.g., Pyretic; combine rule placement with rule caching.

46 Thanks!

