
1 Name Lookup in Named Data Networking (NDN)
Bin Liu, Dept of Computer Science and Technology, Tsinghua University, Dec 7, 2013

2 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

3 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

4 —— Introduction: NDN Namelookup
Named Data Networking (NDN) is a recently proposed clean-slate network architecture for the future Internet. It no longer concentrates on "where" information is located, but on "what" information (content) is needed. NDN uses names to identify every piece of content, instead of the IP addresses that identify hardware devices attached to an IP network.

5 ——Introduction: NDN Namelookup
Naming in NDN. An NDN name is hierarchically structured and composed of explicitly delimited components, while the delimiters, usually slashes or dots, are not part of the name. For example, /cn/edu/tsinghua/cs/s-router/~liubin/research/papers consists of the components cn, edu, tsinghua, cs, s-router, ~liubin, research, papers. [Figure: Interest and Data packets in NDN]
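A minimal Python sketch (not from the slides) to make this concrete: it splits a name into its components and enumerates the candidate prefixes that a longest-prefix lookup would consider, using the example name above.

```python
# Sketch: NDN name components and candidate prefixes.

def components(name: str):
    """Split an NDN name into components; the delimiters are not kept."""
    return [c for c in name.split("/") if c]

def prefixes(name: str):
    """All candidate prefixes of a name, shortest to longest."""
    comps = components(name)
    return ["/" + "/".join(comps[:i]) for i in range(1, len(comps) + 1)]

if __name__ == "__main__":
    name = "/cn/edu/tsinghua/cs/s-router/~liubin/research/papers"
    print(components(name))    # ['cn', 'edu', 'tsinghua', ...]
    print(prefixes(name)[:3])  # ['/cn', '/cn/edu', '/cn/edu/tsinghua']
```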

6 ——Introduction: NDN Namelookup
Key features of NDN: name-based forwarding/routing, in-network caching, and multi-path transmission/delivery. [Figure: Interest and Data packets flowing between a Client, a Replica (Content Store), and the Content Provider]

7 ——Introduction: NDN Namelookup
A forwarding table consists of name prefixes. A core challenge and enabling technique in implementing NDN is to perform name lookup for packet forwarding at wire speed.

8 ——Introduction: NDN Namelookup
An NDN router maintains three kinds of tables, each with a different function: the Content Store (CS), the Pending Interest Table (PIT), and the Forwarding Information Base (FIB).

9 ——Introduction: NDN Namelookup
Packet Forwarding Process

10 ——Introduction: NDN Namelookup
Two high-level requirements for NDN name lookup: 1) longest name prefix matching (LPM); 2) a strict latency requirement (<100 us). LPM is similar to existing IP lookup. Although there is no formal specification of the lookup delay, in practice the search time should be as small as possible; here we assume it is no more than 100 microseconds.

11 ——Introduction: NDN Namelookup
The detailed challenges of name lookup: a complex name structure (1) names consist of digits and characters; 2) names have variable length; 3) there is no externally imposed upper bound on length) and a large-scale name table.
1. An NDN/CCN name is hierarchically structured and composed of explicitly delimited name components, such as a reversed domain name followed by a directory-style path, so the name table in a CCN router can aggregate name prefixes to reduce the number of entries. Name lookup in CCN is therefore similar to IP lookup in traditional IP networks and must comply with longest name prefix matching.
2. Since a CCN name is composed of a variable number of components, it has variable and unbounded length.
3. Given that names have variable length, a CCN name table can be orders of magnitude larger than current IP routing tables.
4. In addition to network topology changes and routing policy modifications, CCN routers have to handle a new type of FIB update: when contents are published or deleted, name prefixes may need to be inserted into or deleted from FIBs. This makes FIB updates much more frequent than in today's Internet, so fast FIB update must be well supported for large-scale FIBs.
5. Wire speeds keep accelerating: OC-768 (40 Gbps) links have already been deployed in the Internet backbone, and OC-3072 (160 Gbps) technology is on the horizon.

12 ——Introduction: NDN Namelookup
400K prefixes in the backbone IP table vs. roughly 100M active domain names. Without elaborate compression and careful implementation, name tables can far exceed the capacity of today's commodity devices. [Figure: number of active websites worldwide vs. backbone IP table size] Name tables could be 2~3 orders of magnitude larger than IP lookup tables.

13 ——Introduction: NDN Namelookup
The detailed challenges of name lookup: a complex name structure (1) consists of digits and characters; 2) variable-length names; 3) no externally imposed upper bound), a large-scale name table (2~3 orders of magnitude larger), frequent updates, and wire speed (100 Gbps Ethernet, OC-3072). A practical name lookup engine therefore requires elaborate design-level innovation plus implementation-level re-engineering; this is exactly the opportunity addressed by this work.

14 ——Introduction: NDN Namelookup
A quick comparison between an IP lookup table and a name lookup table:
IPv4 Prefixes | Name Prefixes
*/24          | /com/yahoo/news
*.*/16        | /com/google/news
*/24          | /cn/edu/tsinghua/cs/s-router/wy/papers
*/24          | /cn/cae/www/html/main/col66

15 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

16 ——Namelookup Evaluation Criteria
Throughput: wire speed lookup (100 Gbps GE, 160 Gbps OC-3072), measured in Gbps or MSPS (Million Searches Per Second)
Memory efficiency: hundreds of millions of entries, each with tens or even hundreds of characters
Update: network topology changes and routing policy modifications, plus a new type of FIB update when contents are published/deleted
Search latency: < 100 us
Scalability

17 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

18 —NSDI’13: Algorithm & Data Structure
Two-Dimensional State Transition Table (STT). [Figure: name table {/a/, /ab/, /a/bc/, /ab/c}, its character trie, and the corresponding STT]
A name routing table can be represented by a character trie; for example, a name table with 4 names can be represented by the 10-node character trie shown in the figure. For fast lookup, the character trie is usually stored as a two-dimensional state transition table, in which one state transition needs only one memory access. The character trie in the figure can be implemented by the STT on the right. For example, on input character 'a' at node 0, we go to row 0, column 97 of the STT and obtain the location of the next node, row 1. Repeating such transitions yields the longest matching prefix.
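As a minimal illustration (a sketch, not the engine's code), the character trie below stores the four example prefixes and performs longest-prefix matching; the STT is simply this trie laid out as a node-by-character table.

```python
# Sketch: character trie with longest-prefix matching over NDN name prefixes.

class CharTrie:
    def __init__(self):
        self.root = {}  # each node is a dict: character -> child node

    def insert(self, prefix: str):
        node = self.root
        for ch in prefix:
            node = node.setdefault(ch, {})
        node["<end>"] = True  # sentinel marking the end of a stored prefix

    def longest_prefix_match(self, name: str) -> str:
        node, best = self.root, ""
        for i, ch in enumerate(name):
            if ch not in node:
                break
            node = node[ch]
            if "<end>" in node:
                best = name[: i + 1]
        return best

if __name__ == "__main__":
    trie = CharTrie()
    for p in ["/a/", "/ab/", "/a/bc/", "/ab/c"]:
        trie.insert(p)
    print(trie.longest_prefix_match("/a/bc/data"))  # -> /a/bc/
```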

19 —NSDI’13: Algorithm & Data Structure
Two-Dimensional State Transition Table (STT). Advantages: easy to build; fast, since one state transition needs only one memory access. Disadvantage: requires too much memory to be practical.

20 —NSDI’13: Algorithm & Data Structure
Aligned Transition Array (ATA) to compress the STT. [Figure: character trie and its ATA as address/transition pairs, e.g. 1096: (b, 999), 1097: (a, 998), 1098: (/, 997); node offsets 1000, 999, 998; 998+b=1096, 999+b=1097, 1000+a=1097]
1. Andrew Yao proposed the aligned transition array back in 1979 to store two-dimensional state transition tables. 2. The basic idea is to merge different rows so as to compress the storage. In an ATA, every node has an offset indicating its relative position in the whole array; the next node after a transition is found by adding the input character's ASCII code to the offset. For example, node 0 in the figure has offset 1000, and on input 'a' it transitions to node 1, whose offset is 998. 3. When a collision occurs, a node must change its offset to avoid it. 4. For example, node 1 initially has offset 999; on input 'b' this gives 999+b=1097, which collides with node 0's input 'a' (1000+a=1097), so node 1's offset is changed to 998 (998+b=1096) to avoid the collision.
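Below is a hedged sketch of the offset-plus-ASCII idea (offsets here are found by simple linear probing, not by the paper's construction algorithm): each node gets an offset, a transition on character c is stored at offset+ord(c), and the stored (node, character) label validates the hit.

```python
# Sketch: aligned-transition-array style storage of a tiny trie.

def build_ata(trie_edges):
    """trie_edges: {node_id: {char: child_id}}. Returns (array, offsets)."""
    array, offsets = {}, {}              # array: position -> (node, char, child)
    for node, edges in trie_edges.items():
        off = 0
        while any(off + ord(c) in array for c in edges):
            off += 1                     # bump the offset until no collision
        offsets[node] = off
        for c, child in edges.items():
            array[off + ord(c)] = (node, c, child)
    return array, offsets

def ata_step(array, offsets, node, ch):
    """One state transition = one array access plus a validity check."""
    entry = array.get(offsets[node] + ord(ch))
    if entry and entry[0] == node and entry[1] == ch:
        return entry[2]
    return None

if __name__ == "__main__":
    edges = {0: {"/": 1}, 1: {"a": 2}, 2: {"/": 3, "b": 4}}  # tiny trie
    array, offsets = build_ata(edges)
    node = 0
    for ch in "/a/":
        node = ata_step(array, offsets, node, ch)
    print(node)  # -> 3
```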

21 —NSDI’13: Algorithm & Data Structure
Advantages: keeps the fast speed (one state transition needs one memory access) and a low memory footprint. Disadvantages: building is too slow for large-scale name tables, and incremental updates are not supported.

22 —NSDI’13: Algorithm & Data Structure
Multiple-Stride Character Trie and ATA. [Figure: name table {/a/bc/, /ab/c, /a/, /ab/} and its 2-stride character trie; the 2-stride token "a/" has code 24879]
In a 1-stride character trie, one state transition consumes only one character, so we use a multi-stride character trie to reduce the total number of transitions; the 2-stride trie in the figure consumes 2 characters per transition (the code of "a/" is 97*256+47 = 24879). However, in a d-stride character trie every state may have up to 2^(8d) outgoing transitions, which would blow up the ATA's memory. Therefore a plain ATA cannot support a multi-stride character trie.

23 —NSDI’13: Algorithm & Data Structure
Multiple-Stride Character Trie and Multi-ATA (MATA). [Figure: the 2-stride trie and two small ATAs; "a/" mod 7 = 1, "ab" mod 7 = 3, "bc" mod 7 = 1, "bc" mod 3 = 2]
We therefore propose the multi-stride ATA (MATA) to organize the multi-stride character trie. The basic idea is to decompose the original single ATA into multiple small ATAs; the ATA in the figure is split into two small ATAs. In addition, the transition position becomes the string's code modulo a prime: for example, "a/" mod 7 = 1, so "a/" is stored at position 1 of the first ATA. If we then want to store "bc" in the first ATA, "bc" mod 7 is also 1 and position 1 is already occupied by "a/", so "bc" is placed in the second ATA ("bc" mod 3 = 2).
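A minimal sketch of this placement scheme (the primes and token codes come from the slide's example; a real MATA also keeps per-node offsets, which are omitted here):

```python
# Sketch: Multi-ATA placement of 2-stride tokens by "code mod prime".

PRIMES = [7, 3, 11]              # one (illustrative) prime per small ATA

def token_code(token: str) -> int:
    """2-stride code: concatenated ASCII bytes, e.g. 'a/' -> 97*256+47 = 24879."""
    code = 0
    for ch in token:
        code = code * 256 + ord(ch)
    return code

def insert(atas, token, value):
    for prime, ata in zip(PRIMES, atas):
        slot = token_code(token) % prime
        if slot not in ata:                  # free slot in this small ATA
            ata[slot] = (token, value)
            return
    raise RuntimeError("all small ATAs collided; add another ATA/prime")

def lookup(atas, token):
    for prime, ata in zip(PRIMES, atas):
        entry = ata.get(token_code(token) % prime)
        if entry and entry[0] == token:      # validate against the stored label
            return entry[1]
    return None

if __name__ == "__main__":
    atas = [{} for _ in PRIMES]
    insert(atas, "a/", "state-1")   # 24879 % 7 == 1 -> first ATA, slot 1
    insert(atas, "bc", "state-2")   # 25187 % 7 == 1 collides -> 25187 % 3 == 2
    print(lookup(atas, "a/"), lookup(atas, "bc"))
```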

24 —NSDI’13: Algorithm & Data Structure
MATA advantages: improved lookup throughput (one state transition consumes multiple characters, and each state transition still requires only one memory access); further compressed memory space; the small ATAs in a MATA are easier to build and manage; fast incremental updates are supported.

25 —NSDI’13: Name Lookup Engine Implementation
[Figure: name lookup engine framework. Name Table built into a Character Trie, organized as an Aligned Transition Array on the GPU (NVIDIA GeForce GTX590 board), connected to the CPU over PCIe; Name Trace, Update, and Forwarding modules on the CPU]
1. This is the framework of our GPU-based name lookup engine. The name table is first constructed as a character trie; we then organize the trie as an ATA or MATA and load it into GPU memory for name lookup. 2. A name table update first modifies the character trie, and the changed nodes and edges are then mapped onto the MATA. 3. The name table is also used to produce a name trace for testing the lookup performance of the engine. The name trace is transferred from CPU memory to GPU device memory over the PCIe bus; the GPU kernel executes the name lookup operations and returns the results to CPU memory, again over PCIe. Finally, the forwarding module forwards packets according to the lookup results.

26 —NSDI’13: Latency Optimization
Latency optimization. [Figure: the same engine, with measurement points A (start of copying a name batch from CPU to GPU memory) and B (lookup results returned from GPU to CPU memory)]
1. To exploit the parallel processing power of the GPU, the engine executes name lookups in batch mode: many names form a name batch, the batch is transferred from CPU memory to GPU memory over PCIe, and after all names in the batch have been looked up, the results are returned to CPU memory in a batch. Batching alone would introduce too much latency for the engine to be usable in practice, so the lookup latency must be reduced. 2. To measure it, we count the time from point A, the start of copying a name batch from CPU memory to GPU memory, to point B, when the lookup results have been returned from GPU memory to CPU memory.

27 ——NSDI’13: Experimental Results
Platform: a commodity PC. Name tables: 3M entries downloaded from the DMOZ website, and 10M entries crawled from the Internet. Name traces: average workload (random name prefix + suffix) and heavy workload (the longest 10% of name prefixes + suffix).

28 ——NSDI’13: Experimental Results
Memory space. On the 3M name table, ATA compresses storage by 101× and MATA by 130× compared with the STT; on the 10M name table, ATA achieves 102× and MATA 142×.

29 ——NSDI’13: Experimental Results
Lookup speed (Million Searches Per Second, MSPS) on the 10M name table, under average and heavy workload. We obtain similar throughput in both cases: MATA-NW achieves 63 MSPS under the average workload and 55 MSPS under the heavy workload. With an average packet size of 250 bytes, 63 MSPS corresponds to about 127 Gbps wire speed.
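A quick back-of-the-envelope check of that conversion (assuming one lookup per packet and the 250-byte average packet size quoted above):

```python
# Sketch: converting lookup rate (MSPS) to line rate (Gbps).
msps = 63e6                # lookups, i.e. packets, per second
bits_per_packet = 250 * 8  # 250-byte average packet
print(msps * bits_per_packet / 1e9)  # ~126 Gbps, roughly the quoted 127 Gbps
```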

30 ——NSDI’13: Experimental Results
Scalability: lookup speed, memory, and latency. To test the scalability of the name lookup engine, we partition each name table into ten equal-sized subsets and progressively generate ten name tables; the k-th generated table consists of the first k subsets. As the name table grows, lookup throughput stabilizes around 60 MSPS and lookup latency stays below 100 us, so the lookup delay remains stable. The memory requirement grows linearly, which is consistent with intuition.

31 ——NSDI’13: Experimental Results
Update: more than 30K insertions per second and nearly 600K deletions per second. On both name tables we consistently handle more than 30K insertions per second; deletions are much easier to implement than insertions, and we steadily handle around 600K deletions per second. Compared with today's IP networks, which see an average update rate of several thousand per second, our name update mechanism runs at least an order of magnitude faster.

32 ——NSDI'13: Conclusion
MATA is proposed to compress memory space and improve name lookup speed. We implement a wire-speed name lookup engine on a commodity PC equipped with a GTX590 GPU board. Extensive experiments demonstrate that our GPU-based name lookup engine achieves: a lookup speed of about 63 MSPS, i.e. >127 Gbps wire speed; latency < 100 us; memory compression > 140×; and good scalability. Open website: http//

33 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

34 —INFOCOM2013_NameFilter:Algorithm & Data Structure
NameFilter: basic framework. First stage: Bloom filters for determining the prefix length of a name. Second stage: a hash table for determining the forwarding port(s). [Figure: name /com/google/maps/…/, per-length Bloom filters (1st, 2nd, 3rd, …, B-th), match vector, and a prefix hash table mapping prefixes to next-port lists]
Bloom-filter-assisted IP lookup was proposed by Dharmapurikar et al. [5], where IP prefixes of the same length are stored in the same Bloom filter; the same framework can be used for name lookup. Name prefixes are classified into subsets according to their length (in components), and each subset is organized as a hash table. In the first stage, the Bloom filter corresponding to each prefix length is queried to determine whether a matching name prefix of that length exists for the name being looked up. If so, in the second stage an exact match is performed within that subset of name prefixes using the hash table, which yields the forwarding port(s).
[5] Sarang Dharmapurikar, P. Krishnamurthy, and D. E. Taylor, "Longest prefix matching using Bloom filters," in Proceedings of ACM SIGCOMM'03, 2003.
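To make the two-stage flow concrete, here is a small, hedged Python sketch (hash functions, sizes, and the FIB contents are illustrative, not the paper's implementation): per-length Bloom filters suggest which prefix lengths may match, and a hash table resolves the longest candidate, filtering out false positives.

```python
# Sketch: two-stage lookup with per-length Bloom filters plus a hash table.
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, key: str):
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def __contains__(self, key: str):
        return all(self.bits >> pos & 1 for pos in self._positions(key))

def build(fib):
    """fib: {name prefix: next-hop ports}. Returns (per-length filters, hash table)."""
    filters, table = {}, dict(fib)
    for prefix in fib:
        length = len([c for c in prefix.split("/") if c])  # length in components
        filters.setdefault(length, BloomFilter()).add(prefix)
    return filters, table

def lookup(filters, table, name):
    comps = [c for c in name.split("/") if c]
    for length in range(len(comps), 0, -1):                # longest candidate first
        prefix = "/" + "/".join(comps[:length])
        if length in filters and prefix in filters[length]:
            ports = table.get(prefix)                      # exact match removes false positives
            if ports is not None:
                return ports
    return None

if __name__ == "__main__":
    filters, table = build({"/com/google": [1], "/com/google/maps": [2]})
    print(lookup(filters, table, "/com/google/maps/paris"))  # -> [2]
```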

35 —NameFilter:Algorithm & Data Structure
NameFilter: improved framework. First stage: Bloom filters for determining the prefix length of a name. Second stage: merged Bloom filters for determining the forwarding port(s). [Figure: name /com/google/maps/…/, per-length Bloom filters, and merged Bloom filters (1st … w-th) with a next-ports match vector]
Our second improvement replaces the second-stage hash table with Bloom filters, and not standard ones but a merged Bloom filter. Compared with a hash table, a Bloom filter is another way to represent a name set and is essentially a generalized form of hashing that trades memory consumption against collision (false positive) probability: it allocates a relatively small number of bits per element for a bounded false positive rate, decreasing memory consumption. NameFilter is therefore a two-stage Bloom-filter-based name lookup approach. In the first stage, name prefixes are mapped into Bloom filters according to their lengths, and the longest matching prefix of a name is determined by querying these filters. In the second stage, name prefixes are divided into groups according to their associated next-hop port(s), with each group stored in a Bloom filter; the destination port(s) of the matched longest prefix are obtained by searching the second-stage Bloom filters.

36 —NameFilter: Algorithm & Data Structure
NameFilter: merged Bloom filter. [Figure: N per-port Bloom filters merged and transposed into one array of N-bit words; 11..1 & … => 10..0]
The numbers of name prefixes in the per-port Bloom filters are close to one another, so we make the Bloom filters equal-sized and apply the same group of hash functions to all N of them. The merged Bloom filter combines the N Bloom filters bound to the N forwarding ports, aggregating the N bits at the same location into one bit string. For example, if the first location of each Bloom filter is 1 (the result of the first hash function), those bits become one bit string in the first array entry of the merged Bloom filter via merging and transposition. A lookup ANDs the bit strings selected by the hash functions to obtain the port match vector, and the number of memory accesses is reduced from k to 1.
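The sketch below illustrates the merge-and-transpose principle only (hash functions, sizes, and the one-memory-access optimization are simplified away; all names are illustrative):

```python
# Sketch: merged, transposed per-port Bloom filters queried with a bitwise AND.
import hashlib

M, K, N = 1 << 12, 3, 4          # filter size, hash functions per filter, number of ports

def positions(prefix: str):
    """The same K hash functions are shared by all N per-port Bloom filters."""
    for i in range(K):
        digest = hashlib.sha1(f"{i}:{prefix}".encode()).hexdigest()
        yield int(digest, 16) % M

def build(port_prefixes):
    """port_prefixes: {port index: [name prefixes]}. Returns one N-bit word per position."""
    merged = [0] * M
    for port, prefixes in port_prefixes.items():
        for prefix in prefixes:
            for pos in positions(prefix):
                merged[pos] |= 1 << port          # set this port's bit at that position
    return merged

def query(merged, prefix):
    vector = (1 << N) - 1                          # start with every port as a candidate
    for pos in positions(prefix):
        vector &= merged[pos]                      # AND the K selected N-bit words
    return [port for port in range(N) if vector >> port & 1]

if __name__ == "__main__":
    merged = build({0: ["/com/google/maps"], 2: ["/com/yahoo/news"]})
    print(query(merged, "/com/google/maps"))       # -> [0], barring false positives
```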

37 ——NameFilter: Experimental Results
Platform: a commodity PC. Name tables: 3M entries downloaded from the DMOZ website, and 10M entries crawled from the Internet. Name traces: average workload (random name prefix + suffix) and heavy workload (the longest 10% of name prefixes + suffix).

38 ——NameFilter: Experimental Results
Lookup speed (Million Searches Per Second, MSPS), with one processing thread: NameFilter is 16× faster than the character trie and 1.8× faster than BloomHash.

39 ——NameFilter: Experimental Results
Lookup speed (MSPS), with 24 processing threads: NameFilter is 11.7× faster than the character trie.

40 ——NameFilter: Experimental Results
Memory Space Compared with Character Trie, NameFilter saves more than 77% memory space. Compared with BloomHash, NameFilter saves more than 65% memory space.

41 ——NameFilter: Experimental Results
Scalability: lookup speed and memory. To test the scalability of the name lookup engine, we partition each name table into ten equal-sized subsets and progressively generate ten name tables; the k-th generated table consists of the first k subsets. As the name table grows, lookup throughput stabilizes around 30 MSPS, and the memory requirement grows linearly, which is consistent with intuition.

42 ——NameFilter: Experimental Results
Update: more than 3M insertions per second and nearly 3.4M deletions per second. On both name tables we consistently handle more than 3M insertions per second; deletions are easier to implement than insertions, and we steadily handle around 3.4M deletions per second. Compared with today's IP networks, which see an average update rate of several thousand per second, our name update mechanism runs orders of magnitude faster.

43 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

44 —Component Coding : Algorithm & Data Structure
The idea of Name Component Encoding (NCE). [Figure: /com/google/map/china and its encoded form /10/10/0]
1. Names are component-granular, so the components can be encoded to reduce the FIB size and speed up lookup. 2. The basic procedure: first encode the name's components, then perform the lookup with the resulting code string.

45 —Component Coding : Algorithm & Data Structure
[Figure: component trie over the name table /com/yahoo, /com/yahoo/news, /com/yahoo/maps/uk, /com/google, /com/google/maps, /cn/google/maps, /cn/sina, /cn/baidu, /cn/baidu/map, with levels 1-5 and per-node code tables such as <yahoo,1> <google,2> and <baidu,1> <sina,2> <google,3>]
Names are composed of components, so the name table shown in the figure can be built as a component trie, in which one transition consumes one component. To reduce storage and speed up lookup, and because components repeat heavily, we encode the components: for example, yahoo is encoded as 1 and google as 2. Different nodes can be encoded separately to improve code utilization. After the per-node code tables are merged, the same string may be encoded to different values at different nodes; for example, google is encoded as 2 at node 2 and 3 at node 9. To keep the table correct, google is re-encoded, here as 4, giving <baidu,1> <google,4> <sina,3> <yahoo,1>.
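For intuition, here is a simplified sketch of component encoding with a single global code table (the per-node and local allocation schemes discussed on the next slides are more involved; the names and codes below are illustrative):

```python
# Sketch: encode name components to integers and run LPM on a trie of codes.

def build(prefixes):
    codebook, trie = {}, {}
    for prefix in prefixes:
        node = trie
        for comp in prefix.strip("/").split("/"):
            code = codebook.setdefault(comp, len(codebook))  # assign the next free code
            node = node.setdefault(code, {})
        node["<end>"] = True
    return codebook, trie

def longest_prefix_match(codebook, trie, name):
    comps = name.strip("/").split("/")
    node, best = trie, None
    for i, comp in enumerate(comps):
        code = codebook.get(comp)
        if code is None or code not in node:
            break
        node = node[code]
        if "<end>" in node:
            best = "/" + "/".join(comps[: i + 1])
    return best

if __name__ == "__main__":
    codebook, trie = build(["/com/yahoo", "/com/yahoo/news", "/cn/baidu/map"])
    print(codebook)  # e.g. {'com': 0, 'yahoo': 1, 'news': 2, ...}
    print(longest_prefix_match(codebook, trie, "/com/yahoo/news/today"))
```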

46 —Component Coding : Algorithm & Data Structure
Global-Code Allocation Mechanism. [Figure: global component encoding example]
The component-code allocation mechanism directly affects the final storage space. The figure shows global component encoding, where components and codes are in one-to-one correspondence; in this case, storing the nodes as arrays requires 27 slots.

47 —Component Coding : Algorithm & Data Structure
Local-Code Allocation Mechanism. [Figure: local component encoding example]
With local component encoding, the arrays need only 18 slots. Local encoding exploits the locality of components so that different components at different nodes can share the same code; for example, acm at node 8 and org at node 1 can both be encoded as 0.

48 —Component Coding:Implementation
GPU-based Implementation. [Figure: lookup algorithm in the GPU kernel]
The GPU implementation is similar to that of the ATA engine; the figure shows the lookup algorithm in the GPU kernel. A name is first encoded, and the encoded code string is then looked up in the encoded component trie to obtain the final match.

49 —Component Coding:Experimental Results
Experimental Result: Memory. [Table: Name Table | Average Name Length | TCT (MByte) | # of total Comp. | # of unique Comp. | NCE (MByte, CHT/OSTA) | Compression Ratio (TCT vs. NCE); 3M: 21.27, 370.87, 60.14, 60.72, 67.41%; 10M: 24.48, 186.16, 343.47, 59.57%]
Compared with the character trie (TCT), NCE achieves roughly 60% storage compression.

50 —Component Coding:Experimental Results
Experimental Result: Throughput. NCE achieves a lookup speed of around 50 MSPS.

51 —Component Coding:Experimental Results
Experimental Result: Update. Compared to MATA, NCE's insertion speed is 30× higher and its deletion speed is 1.8× higher. Compared with the ATA method or the character trie, NCE has better update performance, reaching about 1M updates per second.

52 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

53 —CCNx/hashing: Algorithm & Data Structure
CCNx applies a hash table to construct the FIB. Lookup process in CCNx: generate all candidate prefixes of a name; sort all candidate prefixes in descending order of length; look up each candidate prefix against the hash table.
CCNx stores the FIB's name prefixes in a hash table. The lookup process, shown in the figure, has three steps: 1) generate all possible prefixes of the name, called candidate prefixes; 2) sort all candidate prefixes from longest to shortest; 3) look up each prefix in the hash table in that order until a matching prefix is found; the first match is the longest prefix.
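A minimal sketch of this longest-first probing (a plain dict stands in for CCNx's hash table; the FIB contents are illustrative):

```python
# Sketch: CCNx-style LPM by probing candidate prefixes, longest first.

def lookup(fib: dict, name: str):
    comps = [c for c in name.split("/") if c]
    candidates = ["/" + "/".join(comps[:i]) for i in range(len(comps), 0, -1)]
    for prefix in candidates:            # longest candidate first
        if prefix in fib:
            return prefix, fib[prefix]   # first hit is the longest match
    return None, None

if __name__ == "__main__":
    fib = {"/com/google": 1, "/com/google/maps": 2}
    print(lookup(fib, "/com/google/maps/traffic"))  # -> ('/com/google/maps', 2)
```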

54 —CCNx/hashing: Comments and Further Work
The name lookup speed in CCNx is quite slow; the name lookup in the CCNx prototype is basically for demonstration purposes. Future improvements: improve hash table lookup speed (e.g., a perfect hash table) and compress the hash table to reduce memory consumption [1].
[1] Yi Wang, Dongzhe Tai, Ting Zhang, Jianyuan Lu, Boyang Xu, Huichen Dai and Bin Liu. Greedy Name Lookup for Named Data Networking. SIGMETRICS 2013 Poster.

55 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

56 —NDN Namelookup TestBench

57 ——Outline
1. Introduction to NDN Name Lookup
2. Name Lookup Evaluation Criteria
3. Name Lookup: Algorithms and Data Structures
   --- NSDI'13
   --- INFOCOM 2013
   --- ICDCS 2012 + Computer Networks'13
   --- CCNx/Hashing
4. NDN Name Lookup TestBench
5. Conclusion and Future Work

58 ——Conclusion
Name lookup is a complex search problem. We have developed several name lookup algorithms and data structures that achieve encouraging results; our results are much better than those of the CCNx prototype. The experimental results demonstrate the feasibility of building high-speed NDN routers. Name lookup is also widely used in a broad range of technological fields.

59 ——Future Work
Experiments with larger-scale datasets/traces; improving the TestBench; extending the lookup core engine to a whole router system to examine end-to-end performance; further improvements to the algorithms and data structures.

60 Thank you! Q & A

