1 CLUE: Achieving Fast Update over Compressed Table for Parallel Lookup with Reduced Dynamic Redundancy Authors: Tong Yang, Ruian Duan, Jianyuan Lu, Shenjiang Zhang, Huichen Dai and Bin Liu Publisher: IEEE ICDCS, 2012 Presenter: Kai-Yang Liu Date: 2013/3/13

2 INTRODUCTION To achieve high performance, backbone routers must gracefully handle three problems: routing table Compression, fast routing Lookup, and fast incremental UpdatE (CLUE). CLUE consists of three parts: a routing table compression algorithm, an improved parallel lookup mechanism, and a new fast incremental update mechanism.

3 Compression Algorithm ONRTC compresses the routing table to 70% of its original size and eliminates prefix overlap.
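
The slides do not show the ONRTC algorithm itself, so the following is only a rough Python sketch of one standard trie-pruning idea in the same spirit, not the paper's actual method: a prefix whose next hop merely repeats that of its nearest enclosing prefix is redundant, and removing it both shrinks the table and removes that overlap. The TrieNode layout is an assumption.

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # bit '0' / '1' -> TrieNode
        self.next_hop = None  # a prefix ends here when not None

def insert(root, prefix, next_hop):
    node = root
    for bit in prefix:
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def prune_redundant(node, inherited=None):
    """Remove prefixes whose next hop duplicates the nearest ancestor's."""
    if node.next_hop is not None:
        if node.next_hop == inherited:
            node.next_hop = None      # overlapping entry adds no information
        else:
            inherited = node.next_hop
    for child in node.children.values():
        prune_redundant(child, inherited)

# Example: "101" -> A is covered by "10" -> A and disappears; "1011" -> B stays,
# so every lookup still returns the same next hop as before pruning.
root = TrieNode()
for p, h in [("10", "A"), ("101", "A"), ("1011", "B")]:
    insert(root, p, h)
prune_redundant(root)
```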

4 ONRTC Algorithm

5 Partition Algorithm To achieve parallel lookup, the prefixes must first be split into partitions. Step 1: compute the partition size. If the routing table holds M prefixes and the partition count is n, each partition holds M/n prefixes. Step 2: traverse the trie in order and put every M/n consecutive prefixes into one bucket.
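
As a minimal Python illustration of the two steps above (the trie layout and the in-order convention of left child, node, right child are assumptions, not details taken from the paper):

```python
import math

class TrieNode:
    def __init__(self):
        self.children = {}    # bit '0' / '1' -> TrieNode
        self.next_hop = None  # a prefix ends here when not None

def partition(root, n):
    """Split the M prefixes stored in the trie into n buckets of about
    M/n prefixes each, filling buckets in in-order traversal order."""
    prefixes = []

    def inorder(node, bits):
        if '0' in node.children:
            inorder(node.children['0'], bits + '0')
        if node.next_hop is not None:
            prefixes.append((bits, node.next_hop))
        if '1' in node.children:
            inorder(node.children['1'], bits + '1')

    inorder(root, '')
    size = max(1, math.ceil(len(prefixes) / n))   # Step 1: partition size M/n
    # Step 2: every consecutive run of M/n prefixes becomes one bucket.
    return [prefixes[i:i + size] for i in range(0, len(prefixes), size)]
```

Because consecutive prefixes in in-order traversal cover adjacent parts of the address space, each bucket corresponds to a contiguous range and can be assigned to its own TCAM chip.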

6 Improved Parallel Lookup Mechanism

7 The DRed update process of CLPL’s mechanism

8 The DRed update process of CLUE’s mechanism

9 The Incremental Update Mechanism The whole update process is divided into three steps: 1) trie update; 2) TCAM update; 3) DRed update. Time to Fresh (TTF) is defined in this paper, including TTF1 (TTF-trie), TTF2 (TTF-TCAM), and TTF3 (TTF-DRed).
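
The slides do not show the update code; the sketch below only illustrates the three-stage structure and how the per-stage Time to Fresh values could be measured. update_trie, update_tcam, and update_dred are hypothetical placeholders, not functions from the paper.

```python
import time

def update_trie(trie, update):
    pass  # placeholder: a real implementation modifies the software trie

def update_tcam(tcam, update):
    pass  # placeholder: a real implementation rewrites the affected TCAM entries

def update_dred(dred, update):
    pass  # placeholder: a real implementation refreshes the dynamic redundancy

def apply_route_update(update, trie=None, tcam=None, dred=None):
    t0 = time.perf_counter()
    update_trie(trie, update)   # step 1: trie update  -> TTF1 (TTF-trie)
    t1 = time.perf_counter()
    update_tcam(tcam, update)   # step 2: TCAM update  -> TTF2 (TTF-TCAM)
    t2 = time.perf_counter()
    update_dred(dred, update)   # step 3: DRed update  -> TTF3 (TTF-DRed)
    t3 = time.perf_counter()
    return {"TTF1": t1 - t0, "TTF2": t2 - t1, "TTF3": t3 - t2}
```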

10 Experiments on Compression by ONRTC

11 Partition comparison among the three algorithms

12 TTF1 comparison between CLPL and CLUE

13 TTF2 comparison between CLPL and CLUE

14 TTF3 comparison between CLPL and CLUE

15 TTF1+TTF2+TTF3 comparison between CLPL and CLUE

16 Workload on different partitions and TCAM chips

17 Load balance of workload distribution by CLUE Each TCAM takes 4 clocks to process a packet, while one packet arrives per clock. The FIFO depth is set to 256 and the redundancy size to 1024 prefixes.
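
A toy simulation can make the clocking assumptions stated above concrete. The number of TCAM chips (4) and the random dispatching used here are assumptions for illustration only; in CLUE the dispatcher is driven by which partition the destination address falls in.

```python
import random
from collections import deque

N_TCAM = 4          # assumption: number of parallel TCAM chips
SERVICE_CLOCKS = 4  # each TCAM takes 4 clocks per lookup (from the slide)
FIFO_DEPTH = 256    # FIFO size (from the slide)
CLOCKS = 100_000

fifos = [deque() for _ in range(N_TCAM)]
busy = [0] * N_TCAM            # remaining clocks of the lookup in progress
served = dropped = 0

for clock in range(CLOCKS):
    # one packet arrives per clock and is dispatched to one TCAM's FIFO
    target = random.randrange(N_TCAM)
    if len(fifos[target]) < FIFO_DEPTH:
        fifos[target].append(clock)
    else:
        dropped += 1
    # every TCAM advances its current lookup by one clock
    for i in range(N_TCAM):
        if busy[i] == 0 and fifos[i]:
            fifos[i].popleft()
            busy[i] = SERVICE_CLOCKS
        if busy[i]:
            busy[i] -= 1
            if busy[i] == 0:
                served += 1

print(f"served={served}, dropped={dropped}, max queue={max(len(f) for f in fifos)}")
```

With 4 TCAMs at 4 clocks per lookup, aggregate capacity matches the 1-packet-per-clock arrival rate, so the FIFOs absorb short-term imbalance in the dispatching.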

18 Speedup factor comparison between CLPL and CLUE

19 Hit rate comparison between CLPL and CLUE

