1 Fast Incremental Updates on Ternary-CAMs for Routing Lookups and Packet Classification. Devavrat Shah and Pankaj Gupta, Department of Computer Science, Stanford University {devavrat, …}. Hot Interconnects 8, August 17, 2000.

2 Lookup in an IP Router. Unicast, destination-address-based lookup. [Figure: the forwarding engine extracts the destination address from the incoming packet's header and computes the next hop against a forwarding table of (destination prefix, next hop) entries.]

3 IP Lookup = Longest Prefix Matching. [Figure: a forwarding table of (prefix, next-hop) entries with prefixes of varying lengths.] Find the longest prefix matching the incoming destination address.
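
The slide's example prefixes were lost in the transcript, so the sketch below uses made-up entries. It is a minimal linear-scan illustration of longest-prefix matching, not the paper's lookup scheme; the table contents and names are assumptions.

```python
# Minimal longest-prefix-match sketch over a linear table (illustrative entries).
import ipaddress

forwarding_table = [
    (ipaddress.ip_network("192.168.0.0/16"), "eth1"),
    (ipaddress.ip_network("192.168.64.0/19"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),       # default route
]

def longest_prefix_match(dst):
    """Return the next hop of the longest prefix containing dst."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in forwarding_table:
        if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, next_hop)
    return best[1] if best else None

print(longest_prefix_match("192.168.70.5"))   # "eth2": the /19 beats the /16 and /0
```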

4 Requirements of a Route Lookup Scheme. High speed: tens of millions of lookups per second. Low storage: roughly 100K entries. Fast updates: a few thousand per second, but ideally at lookup speed.

5 Route Lookup Schemes. Various algorithms exist (come to the tutorial tomorrow if interested); this paper is about ternary CAMs.

6 Content-Addressable Memory (CAM). A fully associative memory that performs an exact-match (fixed-length) search in a single clock cycle. A ternary CAM (TCAM) stores a 0, 1, or X (don't care) in each cell, which is useful for wildcard matching.
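
As an illustration of the 0/1/X cells, here is a minimal software model of a ternary entry as a value/mask pair; the encoding and names are assumptions for illustration, not a description of actual TCAM hardware.

```python
# Model a ternary pattern as (value, mask): X bits are simply excluded from comparison.

def tcam_entry(pattern):
    """pattern is a string of '0', '1', 'X'; returns (value, mask) integers."""
    value = mask = 0
    for bit in pattern:
        value <<= 1
        mask <<= 1
        if bit == '1':
            value |= 1
            mask |= 1
        elif bit == '0':
            mask |= 1
        # 'X': leave both bits 0, so this position is ignored
    return value, mask

def matches(key, entry):
    value, mask = entry
    return (key & mask) == (value & mask)

e = tcam_entry("10XX")          # matches 1000, 1001, 1010, 1011
print(matches(0b1010, e))       # True
print(matches(0b0010, e))       # False
```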

7 Route Lookup Using TCAM. [Figure: prefixes P1-P4 stored at consecutive TCAM locations with their next hops; a priority encoder reports the lowest matching location.] To find the longest prefix cheaply, the entries must be kept sorted in order of decreasing prefix length.
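
A sketch of that lookup path, under the same value/mask model as above (names are illustrative): with entries kept in decreasing prefix-length order, the lowest-index match returned by the priority encoder is the longest matching prefix.

```python
# tcam: list of (value, mask, next_hop) entries, sorted by decreasing prefix length.
def tcam_lookup(dst, tcam):
    for location, (value, mask, next_hop) in enumerate(tcam):
        if (dst & mask) == (value & mask):
            return location, next_hop      # lowest matching location wins
    return None                            # no entry matched
```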

8 General TCAM Configuration for Longest Prefix Matching. [Figure: contiguous blocks of 32-bit, 31-bit, 30-bit, ..., 10-bit, 9-bit, and 8-bit prefixes stored from top to bottom.] This is the prefix-length ordering (PLO) constraint.
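
The PLO constraint can be stated as a simple invariant check; the sketch below (illustrative names, with the TCAM modeled as a list of prefix lengths) just verifies that lengths never increase from the top of the TCAM downward.

```python
def satisfies_plo(tcam_lengths):
    """tcam_lengths: prefix lengths in storage order, None for empty slots."""
    present = [l for l in tcam_lengths if l is not None]
    return all(a >= b for a, b in zip(present, present[1:]))

print(satisfies_plo([32, 30, 30, 24, None, 8]))   # True
print(satisfies_plo([24, 30, 8]))                 # False: a /30 sits below a /24
```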

9 Incremental Update Problem. Updates: (1) insert a new prefix, (2) delete an old prefix. Problem: how to keep the sorting invariant (e.g., the PLO) under updates.

10 Target Update Rate? Many are happy with a few hundred thousand updates per second; others want (and claim) single-clock-cycle updates. Our goal: make updates as fast as possible, ideally single-cycle.

11 Common Solution: O(N). [Figure: blocks of 32-bit down to 8-bit prefixes with the empty space at the bottom; adding a new 30-bit prefix shifts every entry below it down by one location.] Problem: how to manage the empty space for the best update time and TCAM utilization?
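
A sketch of this O(N) behavior, assuming the TCAM is modeled as a Python list of (prefix length, prefix) pairs sorted by decreasing length with all free space at the bottom; the names are illustrative, and list.insert() hides the per-entry shifts a real TCAM would perform one write at a time.

```python
def insert_naive(tcam, new_len, new_prefix):
    pos = 0
    while pos < len(tcam) and tcam[pos][0] >= new_len:
        pos += 1
    shifted = len(tcam) - pos          # every entry below pos moves down one slot
    tcam.insert(pos, (new_len, new_prefix))
    return shifted                     # O(N) memory writes in the worst case
```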

12 Better Average Update Rate. [Figure: the same blocks of 32-bit down to 8-bit prefixes; adding a new 30-bit prefix.] The worst case is still O(N).

13 An L-Solution (L = 32). [Figure: blocks of 32-bit down to 8-bit prefixes with the empty space at the bottom; adding a new prefix.] Two prefixes of the same length can be stored in any order.
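
Because equal-length prefixes need no mutual order, an insert only has to bubble the free slot past one entry per shorter-length block, giving at most L writes. The sketch below is a hedged illustration of that idea under several assumptions (illustrative names, not the paper's pseudocode): the TCAM is a list whose slots hold (length, prefix) or None, equal-length prefixes form contiguous blocks in decreasing-length order, top_of[l] gives the index of the topmost entry of length l, and the single free slot sits just below the shortest block.

```python
def insert_L_solution(tcam, top_of, new_len, new_prefix, hole):
    writes = 0
    # Bubble the hole upward: one write per block of prefixes shorter than
    # new_len (legal because same-length entries may be in any order).
    for l in sorted(k for k in top_of if k < new_len):
        top = top_of[l]
        tcam[hole] = tcam[top]    # move the block's topmost entry into the hole
        top_of[l] = top + 1       # the block's region slides down by one slot
        hole = top
        writes += 1
    tcam[hole] = (new_len, new_prefix)   # the hole now borders the new_len block
    return writes + 1                    # at most L memory writes in total
```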

14 Routing Table for Simulation (Mae-East). Entries: 43,344; insertions: 34,204; deletions: 4,140. Snapshot plus 3 hours of updates applied to the original table. Source: www.merit.edu, March 1, 2000.

15 Performance of the L-Solution. [Plot: average number of memory writes per update vs. number of entries.]

16 Outline of the Rest of the Talk. Algorithm PLO_OPT: worst case of L/2 memory shifts (provably optimal under the PLO constraint). Algorithm CAO_OPT: even better (conjectured to be optimal).

17 PLO_OPT: Worst Case L/2. [Figure: the empty-space pool is kept in the middle of the length ordering, between the 21-bit and 20-bit prefix blocks; an add bubbles a free slot through at most half of the blocks.]
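
An illustrative cost sketch of why the middle placement halves the worst case: an insert only bubbles a hole through the blocks lying between the pool and the target length. The split point (/20, matching the figure) and all names are assumptions, not the paper's pseudocode.

```python
POOL_SPLIT = 20   # lengths above /20 stored above the pool, the rest below (illustrative)

def plo_opt_insert_writes(lengths_present, new_len):
    if new_len > POOL_SPLIT:
        between = [l for l in lengths_present if POOL_SPLIT < l < new_len]
    else:
        between = [l for l in lengths_present if new_len < l <= POOL_SPLIT]
    return len(between) + 1    # one write per block bubbled past, plus the new entry
```

With a balanced split, no target length has more than about L/2 blocks between it and the pool, which is where the L/2 worst case comes from.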

18 PLO_OPT (Mae-East). [Plot of simulation results on the Mae-East table.]

19 A Better Algorithm? PLO_OPT is optimal under the PLO constraint. Question: can we relax the constraint and still achieve correct lookup operation?

20 Yes: The PLO Constraint Is More Restrictive Than Needed. [Figure: example prefixes P1 = 10/8, P2 = 10.64/15, P3 = .../29, P4 = .../31; P1, P3, and P4 form a maximal chain.] Only nested prefixes need to be ordered: P4 < P3 < P1 and P2 < P1, while P2 has no ordering constraint with P3 or P4. This is the chain-ancestor ordering constraint.
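
A minimal sketch of that chain-ancestor ordering constraint as a check, using Python's ipaddress module for the ancestor test (the function name is illustrative): a layout is valid as long as every prefix sits above, i.e., at a lower location than, each of its ancestors; unrelated prefixes such as P2 and P3/P4 need no mutual ordering.

```python
import ipaddress

def satisfies_cao(tcam):
    """tcam: list of ipaddress.IPv4Network in TCAM order (location 0 first)."""
    for i, p in enumerate(tcam):
        for q in tcam[i + 1:]:                 # q is stored below p
            if q != p and q.subnet_of(p):
                return False                   # a more specific prefix sits below its ancestor
    return True

net = ipaddress.ip_network
print(satisfies_cao([net("10.64.0.0/15"), net("10.0.0.0/8")]))   # True
print(satisfies_cao([net("10.0.0.0/8"), net("10.64.0.0/15")]))   # False
```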

21 Algorithm CAO_OPT. Maintain the free-space pool in the “middle” of the maximal chain. Basic idea: for every prefix, the longest chain that the prefix belongs to should be split as equally as possible around the free-space pool.

22 CAO_OPT: Example. [Figure: the prefixes P1 = 10/8, P2 = 10.64/15, P3 = .../29, P4 = .../31 from the previous example, with ordering constraints P4 < P3 < P1 and P2 < P1.]

23 CAO_OPT: Updates. Insertion: find the maximal chain to which the new entry belongs and insert it so that this chain stays distributed as equally as possible around the free space: D/2 operations. Deletion: the reverse operation, with the update possibly using another chain.
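
A hedged sketch of where that D/2 figure comes from, not the paper's actual algorithm: chain members left on the wrong side of the pool relative to the new prefix have to be brought back across it, and keeping every maximal chain evenly split bounds how many such members there can be. The names are assumptions.

```python
def cao_opt_insert_writes(above, pos):
    """above: chain members currently stored above the free-space pool;
    pos: the new prefix's position in the chain, counted from its most
    specific member. Members on the wrong side each cost one TCAM write."""
    wrong_side = abs(above - pos)
    return wrong_side + 1            # plus one write for the new entry itself

# If the pool keeps every maximal chain of length D split evenly
# (above ~= D/2), then abs(above - pos) <= ceil(D/2) for any pos in 0..D.
```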

24 Auxiliary Data Structure. A trie of the prefixes with two additional fields per node. An update takes L memory writes in software and D/2 writes in the TCAM.
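
The transcript does not say which two fields the paper stores, so the sketch below is an assumption about the kind of bookkeeping involved: a binary trie whose nodes carry the TCAM slot of their prefix and the length of the longest prefix chain in their subtree, updated along the insertion path so that maximal chains can be located with O(L) software steps per update.

```python
class TrieNode:
    def __init__(self):
        self.child = [None, None]     # 0-branch and 1-branch
        self.is_prefix = False        # a prefix ends at this node
        self.tcam_slot = None         # where that prefix lives in the TCAM
        self.max_chain_below = 0      # longest prefix chain in this subtree

def add_prefix(root, bits, slot):
    """bits: the prefix as a '0'/'1' string; slot: its TCAM location."""
    node = root
    path = [root]
    for b in bits:
        i = int(b)
        if node.child[i] is None:
            node.child[i] = TrieNode()
        node = node.child[i]
        path.append(node)
    node.is_prefix = True
    node.tcam_slot = slot
    # recompute the longest-chain field bottom-up along the insertion path
    for n in reversed(path):
        below = max((c.max_chain_below for c in n.child if c), default=0)
        n.max_chain_below = below + (1 if n.is_prefix else 0)
```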

25 Maximal-Chain Length (D) Distribution. [Plot: number of chains vs. chain length.]

26 CAO_OPT (Mae-East). [Plot: average number of memory writes per update vs. number of entries.]

27 Summary of Simulation Results. [Table comparing the mean, variance, and worst-case number of memory writes per update for CAO_OPT, PLO_OPT, and the L-solution.] Hence, updates can be achieved in 1-2 cycles.
