1 BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations
Minlan Yu, Princeton University (minlanyu@cs.princeton.edu)
Joint work with Alex Fabrikant and Jennifer Rexford

2 Large-scale SPAF Networks
Large layer-2 network on flat addresses
– Simple for configuration and management
– Useful for enterprise and data center networks
SPAF: Shortest Path on Addresses that are Flat
– Flat addresses (e.g., MAC addresses)
– Shortest path routing (link state, distance vector, spanning tree protocol)
[Figure: a network of switches (S) connecting hosts (H)]

3 Challenges
Scaling the control plane
– Topology and host information dissemination
– Route computation
– Recent practical solutions (TRILL, SEATTLE)
Scaling the data plane
– Forwarding table growth (in # of hosts and switches)
– Increasing link speed
– Remains a challenge

4 Existing Data Plane Techniques
TCAM (ternary content-addressable memory)
– Expensive, power hungry
Hash table in SRAM
– Crashes or performs poorly when out of memory
– Difficult and expensive to upgrade memory

5 BUFFALO: Bloom Filter Forwarding
Bloom filters in fast memory (SRAM)
– A compact data structure for representing a set of elements
– Compute s hash functions to store element x
– Easy to check membership
– Reduces memory at the expense of false positives
[Figure: a bit vector V_0 … V_{m-1}; hash functions h_1(x), h_2(x), …, h_s(x) set the bits for element x]
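To make the mechanism concrete, here is a minimal Bloom filter sketch in Python. It is illustrative only: the class name and the salted-SHA-256 stand-in for s independent hash functions are assumptions of this sketch, not the BUFFALO implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch (illustrative, not BUFFALO's code)."""
    def __init__(self, m, s):
        self.m = m             # bits in the bit vector V_0 .. V_{m-1}
        self.s = s             # number of hash functions h_1 .. h_s
        self.bits = [0] * m

    def _hashes(self, x):
        # Salted SHA-256 stands in for s independent hash functions.
        for i in range(self.s):
            digest = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, x):
        for idx in self._hashes(x):
            self.bits[idx] = 1

    def __contains__(self, x):
        # May answer True for an element never added (a false positive),
        # but never answers False for an element that was added.
        return all(self.bits[idx] for idx in self._hashes(x))

bf = BloomFilter(m=1024, s=8)
bf.add("00:1a:2b:3c:4d:5e")        # store a MAC address
print("00:1a:2b:3c:4d:5e" in bf)   # True
```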

6 BUFFALO: Bloom Filter Forwarding
One Bloom filter (BF) per next hop
– Stores all the addresses that are forwarded to that next hop
[Figure: a packet's address is queried against the Bloom filters for Nexthop 1, Nexthop 2, …, Nexthop T; the matching filter is a hit]
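Continuing the sketch above (and reusing its BloomFilter class), the per-next-hop layout might look as follows; build_filters, lookup, and the toy FIB are hypothetical names for illustration:

```python
def build_filters(fib, m_per_hop, s):
    """fib maps destination address -> next hop index (0 .. T-1)."""
    filters = {hop: BloomFilter(m_per_hop, s) for hop in set(fib.values())}
    for addr, hop in fib.items():
        filters[hop].add(addr)
    return filters

def lookup(filters, dst):
    # More than one match means at least one filter gave a false positive.
    return [hop for hop, bf in filters.items() if dst in bf]

fib = {"00:1a:2b:3c:4d:5e": 0, "00:aa:bb:cc:dd:ee": 1}
filters = build_filters(fib, m_per_hop=1024, s=8)
print(lookup(filters, "00:1a:2b:3c:4d:5e"))  # [0], barring a false positive
```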

7 BUFFALO Challenges
How to optimize memory usage?
– Minimize the false-positive rate
How to handle false positives quickly?
– No memory or payload overhead
How to handle routing dynamics?
– Make it easy and fast to adapt Bloom filters

8 BUFFALO Challenges
How to optimize memory usage?
– Minimize the false-positive rate
How to handle false positives quickly?
– No memory or payload overhead
How to handle routing dynamics?
– Make it easy and fast to adapt Bloom filters

9 Optimize Memory Usage
Goal: minimize the overall false-positive rate
– The probability that one BF has a false positive
Input:
– Fast memory size M
– Number of destinations per next hop
– The maximum number of hash functions
Output: the size of each Bloom filter
– A larger Bloom filter for a next hop that has more destinations

10 Optimize Memory Usage (cont.)
Constraints
– Memory: the sum of all BF sizes <= fast memory size M
– Bound on the number of hash functions, to bound CPU calculation time (all Bloom filters share the same hash functions)
Proved to be a convex optimization problem (see the sketch below)
– An optimal solution exists
– Solved with IPOPT (Interior Point OPTimizer)
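Using the standard Bloom filter false-positive approximation, the optimization the slide describes can be written roughly as below; this is a sketch of the formulation, and the paper's exact objective may differ in detail.

```latex
% Sketch only. Assumed notation: n_i addresses map to next hop i,
% m_i is the size (in bits) of its Bloom filter, s is the shared
% number of hash functions, T is the number of next hops.
\[
  f_i \approx \left(1 - e^{-s n_i / m_i}\right)^{s},
  \qquad
  \min_{m_1,\dots,m_T} \; 1 - \prod_{i=1}^{T} (1 - f_i)
  \quad \text{s.t.} \quad \sum_{i=1}^{T} m_i \le M .
\]
```

A next hop with more destinations (larger n_i) pulls the optimum toward a larger m_i, which matches the slide's output rule.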

11 Minimize False Positives
[Figure: false-positive rates for a FIB with 200K entries, 10 next hops, and 8 hash functions]

12 Comparing with Hash Table
[Figure: BUFFALO saves 65% of the memory of a hash table at a 0.1% false-positive rate]

13 BUFFALO Challenges
How to optimize memory usage?
– Minimize the false-positive rate
How to handle false positives quickly?
– No memory or payload overhead
How to handle routing dynamics?
– Make it easy and fast to adapt Bloom filters

14 Handle False Positives
Design goals
– Should not modify the packet
– Never go to slow memory
– Ensure timely packet delivery
BUFFALO solution (sketched after this list)
– Exclude the incoming interface: avoids loops in the one-false-positive case
– Random selection from the matching next hops: guarantees reachability with multiple false positives
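A compact sketch of this selection logic in Python; choose_next_hop and its fallback behavior are assumptions of this sketch, not the paper's exact code:

```python
import random

def choose_next_hop(filters, dst, incoming):
    """filters: next hop -> Bloom filter (as in the earlier sketch);
    incoming: the interface the packet arrived on."""
    candidates = [hop for hop, bf in filters.items() if dst in bf]
    # Excluding the incoming interface breaks the 2-hop loop that a
    # single false positive could otherwise create.
    candidates = [hop for hop in candidates if hop != incoming]
    if not candidates:
        return None  # fallback policy (e.g., flooding) not modeled here
    # Picking uniformly at random turns repeated false positives into a
    # random walk, which still reaches the destination eventually.
    return random.choice(candidates)
```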

15 One False Positive
Most common case: one false positive
– Arises when there are multiple matching next hops
– Avoid sending to the incoming interface
We prove the packet takes at most a 2-hop loop, with stretch <= l(AB) + l(BA)
[Figure: a false positive at A detours the packet to B; it returns to A and continues on the shortest path to dst]

16 Multiple False Positives
Handling multiple false positives
– Random selection from the matching next hops
– The packet takes a random walk on the shortest-path tree plus a few false-positive links
– It eventually finds a way to the destination (see the toy simulation below)
[Figure: shortest-path tree for dst with a false-positive link added]
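A toy simulation of that random walk; the topology, the matches table, and the deliver function are all made up for illustration, whereas the real scheme runs distributedly in each switch:

```python
import random

def deliver(matches, src, dst, max_hops=1000):
    """matches[node]: neighbors whose Bloom filter matches dst (the true
    next hop plus any false positives). Returns hop count, or None."""
    node, prev, hops = src, None, 0
    while node != dst:
        if hops >= max_hops:
            return None
        # Exclude the incoming link; if that empties the set, allow backtrack.
        candidates = [n for n in matches[node] if n != prev] or list(matches[node])
        prev, node = node, random.choice(candidates)
        hops += 1
    return hops

# Tiny shortest-path tree toward "D", with one false-positive link at "A":
matches = {"A": ["B", "C"],   # "C" is a false positive; "B" leads to D
           "B": ["D"],
           "C": ["A"]}        # from C the only match points back to A
print(deliver(matches, "A", "D"))   # small hop count; detours via C add stretch
```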

17 Multiple False Positives (cont.)
Provable stretch bound
– Expected stretch with k false positives is at most […]
– Proved by random-walk theory
The stretch bound is actually not bad in practice
– False positives are independent: different switches use different hash functions
– k false positives for one packet are rare: the probability of k false positives decreases exponentially with k
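As a back-of-the-envelope illustration (my assumption, not the paper's derivation): if each of the other T - 1 filters fires a false positive independently with rate f, the number K of spurious matches is binomial:

```latex
% Assumed model: independent filters, per-filter false-positive rate f,
% T next hops at the switch.
\[
  \Pr[K = k] = \binom{T-1}{k} f^{k} (1-f)^{T-1-k}
             \le \binom{T-1}{k} f^{k},
\]
```

so for small f the probability falls off roughly like f^k as k grows, consistent with the claim above.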

18 Stretch in Campus Network
When fp = 0.001%
– 99.9% of the packets have no stretch
– 0.0003% of packets have a stretch equal to the shortest-path length
When fp = 0.5%
– 0.0002% of packets have a stretch 6 times the shortest-path length
[Figure: stretch distribution in a campus network]

19 BUFFALO Challenges
How to optimize memory usage?
– Minimize the false-positive rate
How to handle false positives quickly?
– No memory or payload overhead
How to handle routing dynamics?
– Make it easy and fast to adapt Bloom filters

20 Handle Routing Dynamics
Counting Bloom filters (CBF)
– A CBF can handle adding and deleting elements
– Takes more memory than a Bloom filter (BF)
BUFFALO solution: use the CBF to assist the BF (sketched after this list)
– CBF in slow memory handles FIB updates
– Occasionally re-optimize BF sizes based on the CBF
[Figure: a plain BF is hard to expand to a new size, while a CBF is easy to contract to a BF of size 4]
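A sketch of the CBF-assisted update path. The fold-down in contract_to_bf relies on the small size dividing the large one, so that hashing mod the CBF size and then mod the BF size equals hashing mod the BF size directly; all names here are illustrative, not the paper's exact scheme.

```python
import hashlib

class CountingBloomFilter:
    """Per-slot counters instead of bits, so deletions work (sketch)."""
    def __init__(self, m, s):
        self.m, self.s = m, s
        self.counts = [0] * m

    def _hashes(self, x):
        # Salted SHA-256 stands in for s independent hash functions.
        for i in range(self.s):
            digest = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, x):
        for idx in self._hashes(x):
            self.counts[idx] += 1

    def remove(self, x):
        # Deletion is exactly what a plain BF cannot support.
        for idx in self._hashes(x):
            self.counts[idx] -= 1

def contract_to_bf(cbf, m_small):
    """Fold the CBF's counters down to a smaller bit vector: bit j is set
    iff some nonzero counter maps onto it. Valid because m_small divides
    cbf.m, so (H mod cbf.m) mod m_small == H mod m_small."""
    assert cbf.m % m_small == 0
    bits = [0] * m_small
    for idx, count in enumerate(cbf.counts):
        if count > 0:
            bits[idx % m_small] = 1
    return bits

cbf = CountingBloomFilter(m=2048, s=8)
cbf.add("00:1a:2b:3c:4d:5e")      # FIB update: route added
cbf.remove("00:1a:2b:3c:4d:5e")   # FIB update: route withdrawn
bf_bits = contract_to_bf(cbf, m_small=512)   # re-derive the fast-memory BF
```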

21 BUFFALO Switch Architecture
– Prototype implemented in kernel-level Click

22 Prototype Evaluation
Environment
– 3.0 GHz 64-bit Intel Xeon
– 2 MB L2 data cache, used as the fast memory (size M)
Forwarding table
– 10 next hops
– 200K entries

23 Prototype Evaluation
Peak forwarding rate
– 365 Kpps, i.e., 1.9 μs per packet
– 10% faster than the hash-based EtherSwitch solution
Performance with FIB updates
– 10.7 μs to update a route
– 0.47 s to reconstruct the BFs from the CBFs, done on another core without disrupting packet lookup
– Swapping in the new BFs is fast

24 Conclusion
Improves data-plane scalability in SPAF networks
– Complementary to recent Ethernet advances
Three nice properties of BUFFALO
– Small memory requirement
– Stretch grows gracefully as the forwarding table grows
– Reacts quickly to routing updates

25 Papers and Future Work
Paper in submission to CoNEXT
– www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf
Preliminary work in the WREN workshop
– Uses Bloom filters to reduce memory for enterprise edge routers
Future work
– Reduce transient loops in OSPF using random walks on the shortest-path tree

26 Thanks! Questions?

