
1 Hash, Don’t Cache: Fast Packet Forwarding for Enterprise Edge Routers
Minlan Yu, Princeton University (minlanyu@cs.princeton.edu)
Joint work with Jennifer Rexford
SIGCOMM WREN’09

2 Enterprise Edge Router
Enterprise edge routers
– Connect upstream providers and internal routers
A few outgoing links
– A small data structure for each next hop
[Figure: an enterprise network connected to Provider 1 and Provider 2 through its edge router]

3 Challenges of Packet Forwarding
Full-route forwarding table (FIB)
– Needed for load balancing, fault tolerance, etc.
– More than 250K entries, and growing
Increasing link speed
– Over 10 Gbps
Requires large, expensive memory
– Expensive, complicated high-end routers
Can we find a more cost-efficient, less power-hungry solution?
– Perform fast packet forwarding in a small SRAM

4 Using a Small SRAM
Route caching is not a viable solution
– Store the most frequently used entries in cache
– Bad performance during cache misses
  Low throughput and high packet loss
– Bad performance under worst-case workloads
  Malicious traffic with a wide range of destinations
  Route changes, link failures
Our solution should be workload-independent
– Fit the entire FIB in the small SRAM

5 Bloom Filter
Bloom filters in fast memory (SRAM)
– A compact data structure for a set of elements
– Calculate s hash functions to store element x
– Easy to check membership
– Reduce memory at the expense of false positives
[Figure: element x is hashed by h_1(x), h_2(x), ..., h_s(x) into an m-bit array V_0 ... V_{m-1}]
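
To make the mechanics concrete, here is a minimal Bloom filter sketch in Python. The double-hashing scheme via hashlib and the parameter names m and s are illustrative assumptions, not the talk’s implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array and s hash functions."""

    def __init__(self, m, s):
        self.m = m              # number of bits in the array V
        self.s = s              # number of hash functions
        self.bits = [0] * m

    def _positions(self, x):
        # Derive s positions via double hashing (an illustrative choice):
        # position_i = (h1 + i * h2) mod m
        digest = hashlib.sha256(x.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.s)]

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] = 1

    def query(self, x):
        # True may be a false positive; False is always correct.
        return all(self.bits[p] for p in self._positions(x))
```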

6 Bloom Filter Forwarding
One Bloom filter (BF) per next hop
– Store all addresses forwarded to that next hop
Consider flat addresses in this talk
– See the paper for extensions to longest-prefix match
[Figure: a packet destination is queried against the Bloom filters for next hops 1..T; the filter that hits gives the next hop]
T is small for enterprise edge routers
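
A sketch of the per-next-hop structure, reusing the BloomFilter class above; build_filters and lookup are hypothetical helper names, not the paper’s API.

```python
def build_filters(fib, m, s):
    """fib maps destination address -> next hop index (0..T-1)."""
    num_hops = max(fib.values()) + 1
    filters = [BloomFilter(m, s) for _ in range(num_hops)]
    for dest, hop in fib.items():
        filters[hop].add(dest)
    return filters

def lookup(filters, dest):
    """Return the list of matching next hops.

    Exactly one entry is the true next hop; any extras are
    false positives, handled on later slides."""
    return [hop for hop, bf in enumerate(filters) if bf.query(dest)]
```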

7 Contributions
Make efficient use of limited fast memory
– Formulate and solve an optimization problem to minimize the false-positive rate
Handle false positives
– Leverage properties of enterprise edge routers
Adapt Bloom filters for routing changes
– Leverage a counting Bloom filter in slow memory
– Dynamically adjust Bloom filter sizes

8 Outline
Optimize memory usage
Handle false positives
Handle routing dynamics

9 Outline
Optimize memory usage
Handle false positives
Handle routing dynamics

10 Memory Usage Optimization
Consider a fixed forwarding table
Goal: minimize the overall false-positive rate
– The probability that one or more BFs have a false positive
Input:
– Fast memory size M
– Number of destinations per next hop
– The maximum number of hash functions
Output: the size of each Bloom filter
– Larger BFs for next hops with more destinations

11 Constraints and Solution
Constraints
– Memory constraint: sum of all BF sizes ≤ fast memory size M
– Bound on the number of hash functions, to bound CPU calculation time; the Bloom filters share the same hash functions
Proved to be a convex optimization problem
– An optimal solution exists
– Solved by IPOPT (Interior Point OPTimizer)
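
In LaTeX, a hedged reconstruction of the formulation (our notation, not necessarily the paper’s symbols): BF i holds n_i addresses in m_i bits with k shared hash functions, giving the standard false-positive rate, and the sizes m_i are chosen to minimize the probability that any filter fires falsely.

```latex
f_i \approx \bigl(1 - e^{-k n_i / m_i}\bigr)^{k}
\qquad \text{(false-positive rate of BF } i\text{)}

\min_{m_1,\dots,m_T}\ 1 - \prod_{i=1}^{T}\bigl(1 - f_i\bigr)
\quad \text{s.t.} \quad \sum_{i=1}^{T} m_i \le M,
\qquad k \le k_{\max}
```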

12 Evaluation of False Positives
– A FIB with 200K entries and 10 next hops
– 8 hash functions
– Takes at most 50 msec to solve the optimization

13 Outline
Optimize memory usage
Handle false positives
Handle routing dynamics

14 False Positive Detection
Multiple matches in the Bloom filters
– One of the matches is correct
– The others are caused by false positives
[Figure: a packet destination queried against the Bloom filters for next hops 1..T returns multiple hits]

15 Handle False Positives on the Fast Path
Leverage the multi-homed enterprise edge router
Send to a random matching next hop
– Packets can still reach the destination, occasionally through a less-preferred outgoing link
– No extra traffic, but may cause packet loss
Send duplicate packets
– Send a copy of the packet to all matching next hops
– Guarantees reachability, but introduces extra traffic
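
A minimal sketch of the two fast-path strategies, reusing lookup from the earlier sketch; the send callback and function names are illustrative assumptions.

```python
import random

def forward_random(filters, dest, send):
    """Pick one matching next hop at random: no extra traffic,
    but a false positive may occasionally misdirect the packet."""
    matches = lookup(filters, dest)
    send(dest, random.choice(matches))

def forward_duplicate(filters, dest, send):
    """Send to every matching next hop: reachability is guaranteed
    (the true next hop is always among the matches), at the cost
    of duplicate traffic on false-positive links."""
    for hop in lookup(filters, dest):
        send(dest, hop)
```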

16 Prevent Future False Positives
For a packet that experiences a false positive
– Perform a conventional lookup in the background
– Cache the result
Subsequent packets to that destination
– No longer experience false positives
Compared to a conventional route cache
– Much smaller (holds only false-positive destinations)
– Not easily invalidated by an adversary
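
One way the cache could be wired in, as a sketch reusing lookup and the random import from the sketches above; fp_cache and full_lookup are hypothetical names, and the resolution is shown inline for brevity (the talk performs the conventional lookup in the background).

```python
fp_cache = {}  # destination -> verified next hop (false-positive destinations only)

def forward_with_cache(filters, dest, send, full_lookup):
    """full_lookup(dest) is the slow, conventional FIB lookup."""
    if dest in fp_cache:
        send(dest, fp_cache[dest])
        return
    matches = lookup(filters, dest)
    if len(matches) == 1:
        send(dest, matches[0])              # common case: no false positive
    else:
        send(dest, random.choice(matches))  # fast-path guess for now
        # Resolve and remember, so subsequent packets to this
        # destination no longer experience the false positive.
        fp_cache[dest] = full_lookup(dest)
```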

17 Outline
Optimize memory usage
Handle false positives
Handle routing dynamics

18 Problem of Bloom Filters
Routing changes
– Require adding/deleting entries in the BFs
Problem of Bloom filters (BF)
– Do not allow deleting an element
Counting Bloom filters (CBF)
– Use a counter instead of a bit in each array position
– Can handle adding and deleting elements
– But require more memory than BFs
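
A counting Bloom filter sketch, extending the BloomFilter class above: each position holds a counter rather than a bit, so deletion becomes possible.

```python
class CountingBloomFilter(BloomFilter):
    """Each array position holds a counter instead of a bit,
    so elements can be deleted as well as added."""

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] += 1

    def remove(self, x):
        for p in self._positions(x):
            self.bits[p] -= 1

    def query(self, x):
        return all(self.bits[p] > 0 for p in self._positions(x))
```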

19 Update on Routing Change
Use a CBF in slow memory
– Assists the BF in handling forwarding-table updates
– Easy to add/delete a forwarding-table entry
[Figure: deleting a route updates the CBF in slow memory, which then refreshes the BF in fast memory]
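
A sketch of keeping the fast-memory BF in sync via the slow-memory CBF; for simplicity we assume here that the BF and CBF have the same size and hash functions (the paper’s CBF is larger, which enables the resizing shown on the next slide).

```python
def delete_route(cbf, bf, dest):
    """Delete a FIB entry: decrement counters in the slow-memory
    CBF, then clear any BF bits whose counters dropped to zero."""
    cbf.remove(dest)
    for p in cbf._positions(dest):
        bf.bits[p] = 1 if cbf.bits[p] > 0 else 0

def add_route(cbf, bf, dest):
    """Add a FIB entry: increment CBF counters, set BF bits."""
    cbf.add(dest)
    for p in cbf._positions(dest):
        bf.bits[p] = 1
```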

20 Occasionally Resize BF
Under significant routing changes
– The number of addresses in the BFs changes significantly
– Re-optimize the BF sizes
Use the CBF to assist in resizing the BF
– Keep a large CBF and a small BF
– Easy to obtain a new BF size by contracting the CBF
[Figure: a BF is hard to expand to size 4, but the large CBF is easy to contract to size 4]
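
A sketch of the contraction idea, assuming the CBF size is a multiple of the target BF size m: CBF position p folds onto BF position p mod m.

```python
def contract_to_bf(cbf, m):
    """Fold a large CBF down to an m-bit BF.

    Requires m to divide the CBF size: then (h mod cbf.m) mod m
    equals h mod m, so the folded bits agree with the positions
    the smaller BF would compute with the same hash functions."""
    assert cbf.m % m == 0
    bf = BloomFilter(m, cbf.s)
    for p, count in enumerate(cbf.bits):
        if count > 0:
            bf.bits[p % m] = 1
    return bf
```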

21 BF-based Router Architecture
[Figure: router architecture with Bloom filters in fast memory and a counting Bloom filter plus full FIB in slow memory]

22 Prototype and Evaluation
Prototype in kernel-level Click
Experiment environment
– 3.0 GHz 64-bit Intel Xeon
– 2 MB L2 data cache, used as the fast memory of size M
Forwarding table
– 10 next hops, 200K entries
Peak forwarding rate
– 365 Kpps for 64-byte packets
– 10% faster than conventional lookup

23 Conclusion
Improve packet forwarding for enterprise edge routers
– Use Bloom filters to represent the forwarding table
  Only requires a small SRAM
– Optimize usage of a fixed small memory
Multiple ways to handle false positives
– Leverage properties of enterprise edge routers
React quickly to FIB updates
– Leverage a counting Bloom filter in slow memory

24 Ongoing Work: BUFFALO
Bloom filter forwarding in a large enterprise
– Deploy BF-based switches throughout the network
– Forward all packets on the fast path
Gracefully handle false positives
– Randomly select a matching next hop
– Techniques to avoid loops and bound path stretch
www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf

25 Thanks! Questions?

