
1 Routing Prefix Caching in Network Processor Design
Huan Liu
Department of Electrical Engineering, Stanford University
huanliu@stanford.edu
http://www.stanford.edu/~huanliu
ICCCN'01, Oct. 15, 2001

2 Outline
Goal: Advocate routing prefix caching instead of IP address caching
–Why prefix caching?
–Prefix cache architecture
–How to guarantee correct lookup results
–Experimental results
–Summary

3 Motivation
Why cache?
–Small enough to be integrated on chip => lower on-chip delay than going off chip
–Could use fast SRAM instead of DRAM
–Smaller capacitance loading => fast circuit
Why cache prefixes instead of IP addresses?
–Solution space is smaller: # of prefixes << # of IP addresses
–# of prefixes at a router is even smaller
–Prefixes can be compacted
–Data centers host thousands of web servers

4 IP address caching
Problem?
–Spatial locality: none
–Temporal locality: limited
Goal: fully exploit temporal locality using a prefix cache
[Figure: a network processor with a micro engine and a fully associative on-chip IP cache (CAM), backed by an off-chip routing table; the example addresses 192.168.10.33 and 192.168.10.53 occupy separate IP cache entries even though both fall under the single routing prefix 192.168.10.x]
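For reference, a minimal sketch (my own illustration, not from the talk) of an exact-match IP address cache: two addresses that share a routing prefix still consume two cache entries, which is exactly the temporal locality a prefix cache can recover. The dictionary-based cache and the toy routing-table lookup are assumptions.

```python
# Hypothetical exact-match IP address cache (illustration only).
ip_cache = {}  # full IP address -> next hop

def lookup(addr, routing_table_lookup):
    """Return the next hop, consulting the exact-match cache first."""
    if addr in ip_cache:
        return ip_cache[addr]                 # hit: exact match only
    next_hop = routing_table_lookup(addr)     # slow off-chip table lookup
    ip_cache[addr] = next_hop
    return next_hop

# Toy routing table: everything under 192.168.10.0/24 goes to port 3.
full_table = lambda addr: 3 if addr.startswith("192.168.10.") else 0

lookup("192.168.10.33", full_table)   # miss, fills one entry
lookup("192.168.10.53", full_table)   # miss again despite the shared prefix
print(len(ip_cache))                  # 2 entries for one routing prefix
```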

5 Prefix cache architecture
[Figure: a network processor ASIC containing a micro engine, an on-chip prefix cache (TCAM) with its prefix memory, and the full routing table (TCAM); a hit on a cached prefix such as 192.168.10.x returns the next hop without consulting the full table]
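A minimal software sketch of the prefix cache idea (not the hardware on the slide): entries are stored as value/mask pairs and searched for the longest matching prefix, roughly the way a TCAM matches ternary entries in parallel. The prefix, length, and next-hop values are illustrative.

```python
# Software model of a TCAM-style prefix cache (illustration only).
def to_int(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

class PrefixCache:
    def __init__(self):
        self.entries = []                     # (value, mask, prefix_len, next_hop)

    def insert(self, prefix, length, next_hop):
        mask = (0xFFFFFFFF << (32 - length)) & 0xFFFFFFFF
        self.entries.append((to_int(prefix) & mask, mask, length, next_hop))

    def lookup(self, addr):
        a, best = to_int(addr), None
        for value, mask, length, nh in self.entries:
            if a & mask == value and (best is None or length > best[0]):
                best = (length, nh)           # keep the longest matching prefix
        return None if best is None else best[1]

cache = PrefixCache()
cache.insert("192.168.10.0", 24, next_hop=3)  # one cached routing prefix
print(cache.lookup("192.168.10.33"))          # 3: hit
print(cache.lookup("192.168.10.53"))          # 3: the same single entry covers it
```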

6 Alternative prefix cache architecture
[Figure: the network processor keeps the micro engine and the on-chip prefix cache (TCAM) with its prefix memory in RAM, while the full routing table is a software trie maintained on the host CPU]

7 Density comparison
IP cache (CAM, 10T SRAM cell):
–32-bit tag
–Tag comparing logic
Prefix cache (TCAM, 16T SRAM cell; one implementation: Mosaid):
–32-bit tag
–32-bit mask
–Tag comparing logic
–Masking logic
Rough estimate: less than 2x density difference (16T vs. 10T per cell is roughly 1.6x)

8 Key problem
Not all prefixes are cacheable.
Example: the prefix memory holds 0* and the more specific 01010*.
–A lookup of 000000 matches 0*, which is placed in the cache.
–A later lookup of 010100 hits the cached 0*. Wrong!! 01010* should be returned.
0* is non-cacheable because a more specific prefix lies beneath it.
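The same failure in code, a small sketch using the slide's 6-bit example; the next-hop labels "A" and "B" are assumptions. Naively caching whatever prefix the table returns makes the second lookup come back wrong.

```python
# Cacheability problem, illustrated (next hops "A"/"B" are made up).
ROUTING_TABLE = {"0": "A", "01010": "B"}      # bit-string prefix -> next hop

def lpm(table, addr_bits):
    """Longest-prefix match over a dict of bit-string prefixes."""
    best = None
    for prefix, nh in table.items():
        if addr_bits.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, nh)
    return best

cache = {}

def cached_lookup(addr_bits):
    hit = lpm(cache, addr_bits)
    if hit:
        return hit[1]                         # cache hit, table not consulted
    prefix, nh = lpm(ROUTING_TABLE, addr_bits)
    cache[prefix] = nh                        # naively cache the matched prefix
    return nh

print(cached_lookup("000000"))                # "A": matches 0*, which gets cached
print(cached_lookup("010100"))                # "A" from the cached 0* -- wrong
print(lpm(ROUTING_TABLE, "010100")[1])        # "B": 01010* is the correct answer
```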

9 Solution #1: Complete Prefix Tree Expansion (CPTE)
Expand 0* until every stored prefix is a leaf: the table becomes 00*, 011*, 0100*, 01011* (each inheriting 0*'s next hop) plus 01010*.
–A lookup of 000000 now matches 00*, which is safe to cache.
–A lookup of 010100 matches 01010* directly.
Problem: routing table explosion.
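A sketch of CPTE on a binary trie, under the assumption that expansion means leaf-pushing every internal prefix until all stored prefixes are disjoint leaves. The prefixes follow the slide's example; the next hops and the trie code are mine.

```python
# Complete prefix tree expansion by leaf-pushing (illustration only).
class Node:
    def __init__(self):
        self.child = [None, None]
        self.next_hop = None                  # set if a prefix ends here

def insert(root, prefix, next_hop):
    n = root
    for bit in prefix:
        i = int(bit)
        if n.child[i] is None:
            n.child[i] = Node()
        n = n.child[i]
    n.next_hop = next_hop

def expand(n, inherited=None):
    """Push internal prefixes down until every stored prefix is a leaf."""
    nh = n.next_hop if n.next_hop is not None else inherited
    if n.child[0] is None and n.child[1] is None:
        n.next_hop = nh
        return
    if nh is not None:                        # create missing children so the
        for i in (0, 1):                      # inherited next hop is not lost
            if n.child[i] is None:
                n.child[i] = Node()
    n.next_hop = None                         # prefix no longer stored internally
    for i in (0, 1):
        if n.child[i] is not None:
            expand(n.child[i], nh)

def leaves(n, path=""):
    if n.next_hop is not None:
        yield path, n.next_hop
    for i in (0, 1):
        if n.child[i] is not None:
            yield from leaves(n.child[i], path + str(i))

root = Node()
insert(root, "0", "A")
insert(root, "01010", "B")
expand(root)
print(sorted(leaves(root)))
# [('00', 'A'), ('0100', 'A'), ('01010', 'B'), ('01011', 'A'), ('011', 'A')]
```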

10 Solution #2: Cache the IP address instead
When a lookup matches a non-cacheable prefix, cache the full IP address rather than the prefix.
–A lookup of 000000 matches the non-cacheable 0*, so the address 000000 itself is cached.
–A lookup of 010100 misses the cache and correctly returns 01010* from the routing table.
Problems:
1. Degrades to an IP cache
2. Extra logic is needed to send the IP address when a non-cacheable prefix is matched
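A sketch of solution #2, again with assumed next hops: the table knows which prefixes have a more specific prefix beneath them, and on a miss the cache stores the full address instead of such a prefix, so no longer prefix can be shadowed.

```python
# Solution #2: cache the IP address for non-cacheable prefixes (illustration).
ROUTING_TABLE = {"0": "A", "01010": "B"}      # bit-string prefix -> next hop

def has_more_specific(prefix):
    return any(p != prefix and p.startswith(prefix) for p in ROUTING_TABLE)

def lpm(table, addr_bits):
    best = None
    for p, nh in table.items():
        if addr_bits.startswith(p) and (best is None or len(p) > len(best[0])):
            best = (p, nh)
    return best

cache = {}

def cached_lookup(addr_bits):
    hit = lpm(cache, addr_bits)
    if hit:
        return hit[1]
    prefix, nh = lpm(ROUTING_TABLE, addr_bits)
    key = addr_bits if has_more_specific(prefix) else prefix   # degrade to IP entry
    cache[key] = nh
    return nh

print(cached_lookup("000000"))   # "A": 0* is non-cacheable, so 000000 is cached
print(cached_lookup("010100"))   # "B": correct, the cached address cannot shadow it
```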

11 Solution #3: Partial Prefix Tree Expansion (PPTE) (#1 + #2)
Expand only partially: the table gains 00* alongside 0* and 01010*.
–A lookup of 000000 now matches 00*, which is safe to cache.
–A lookup of 010100 matches 01010*.
–Lookups that still match a non-cacheable prefix fall back to caching the IP address, as in solution #2.
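A sketch of how I read PPTE from the slide: each prefix that hides a more specific one is expanded by a single trie level, so branches containing no more specific prefix get a new cacheable prefix, while the original prefix remains and is handled as in solution #2. The table and next hops are illustrative, not the paper's data.

```python
# Partial prefix tree expansion, one level only (my reading of the slide).
table = {"0": "A", "01010": "B"}              # bit-string prefix -> next hop

def more_specific_below(t, prefix):
    return any(p != prefix and p.startswith(prefix) for p in t)

def ppte(t):
    expanded = dict(t)
    for prefix, nh in t.items():
        if not more_specific_below(t, prefix):
            continue                          # already cacheable, leave alone
        for bit in "01":                      # expand exactly one trie level
            child = prefix + bit
            if child in expanded:
                continue
            if any(p != child and p.startswith(child) for p in t):
                continue                      # this branch still hides a prefix
            expanded[child] = nh              # new cacheable prefix
    return expanded

print(sorted(ppte(table).items()))
# [('0', 'A'), ('00', 'A'), ('01010', 'B')]
# 00* is now cacheable; addresses under 01* that match only 0* are IP-cached.
```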

12 Explosion factor comparison

13 Simulation result

14 Summary
–Prefix caching outperforms IP address caching because temporal locality is fully exploited
–We show three ways to guarantee correct lookup results
–Experimental result: more than 3x improvement with less than 2x more transistors

