Duo Liu, Bei Hua, Xianghui Hu, and Xinan Tang

1 High-performance Packet Classification Algorithm for Many-core and Multithreaded Network Processor
Duo Liu, Bei Hua, Xianghui Hu, and Xinan Tang
CASES'06, October 23-25, 2006, Seoul, Korea
Presentation: Yaxuan Qi  Date:

2 A good classification algorithm for an NPU
must at least take into account classification characteristics, parallel algorithm design, multithreaded architecture, and compiler optimizations. The authors believe that high performance can only be achieved through close interaction among these interdisciplinary factors.

3 The main contributions
A scalable packet classification algorithm is proposed and efficiently implemented on the IXP2800. Experiments show that its speedup is almost linear and that it can run even faster than the 10 Gbps line rate. Algorithm design, implementation, and performance issues are carefully studied and analyzed. The authors apply a systematic approach to these issues by incorporating architecture awareness into parallel algorithm design.

4 RFC vs. Bitmap-RFC: RFC stores 16 bits per entry; Bitmap-RFC stores 1 bit per entry in the bitmap.

5 W0: 32-bit bitmap; W1: E1:E0; W2: E3:E2; W3: E5:E4; W4: E7:E6. In total, 5 words represent what RFC stores in 16 words (32 16-bit entries), a compression ratio of 5:16. Accessory tables are rarely needed, because the average number of distinct elements per bucket is 3; when one does exist, the worst case costs (32-8)/2 = 12 extra memory accesses.
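The bucket layout on this slide can be sketched in C. This is a minimal illustration under assumed conventions, not the paper's actual IXP2800 code: the type `bucket_t`, the function names, and the LSB-first bit ordering are our assumptions. The idea is that a set bit marks where a new run of equal eqIDs begins, so a population count over the masked bitmap selects the packed 16-bit element.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout of one compressed Bitmap-RFC bucket:
 * word 0     : 32-bit bitmap, bit i set => entry i starts a new run
 * words 1..4 : up to 8 distinct 16-bit eqIDs, packed two per 32-bit word
 * (buckets with more than 8 distinct eqIDs would spill into an
 *  accessory table, which is not modeled here)
 */
typedef struct {
    uint32_t bitmap;
    uint32_t elems[4];   /* E1:E0, E3:E2, E5:E4, E7:E6 */
} bucket_t;

/* Portable software stand-in for the IXP2800's POP_COUNT instruction. */
int pop_count(uint32_t x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

/* Return the eqID stored for entry position pos (0..31) in the bucket. */
uint16_t bucket_lookup(const bucket_t *b, int pos) {
    /* Keep only bits 0..pos; the popcount minus one is the run index. */
    uint32_t mask = (pos == 31) ? 0xFFFFFFFFu : ((1u << (pos + 1)) - 1u);
    int idx = pop_count(b->bitmap & mask) - 1;
    uint32_t w = b->elems[idx / 2];
    return (idx & 1) ? (uint16_t)(w >> 16) : (uint16_t)(w & 0xFFFFu);
}
```

With this layout a lookup touches at most two SRAM words (the bitmap and one element word), which is where the 5-word-per-32-entries footprint on this slide comes from.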

6 SIMULATION AND PERFORMANCE ANALYSIS
Experimental Setup: use minimal packets as the worst-case input [19]. For OC-192 core routers, a minimal packet is 49 bytes (9-byte PPP header + 20-byte IPv4 header + 20-byte TCP header). Thus, a classifying rate of 25.5 Mpps (million packets per second) is required to achieve the OC-192 line rate.
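The 25.5 Mpps figure follows directly from the numbers on this slide. A quick check in C (the function name is ours, and OC-192 is approximated as a flat 10 Gbps, as the slide does):

```c
/* Worst-case packet rate needed to keep up with OC-192 when every
 * packet is a minimal 49-byte PPP/IPv4/TCP packet. */
double worst_case_mpps(void) {
    const double line_rate_bps = 10e9;       /* OC-192 approximated as 10 Gbps */
    const int min_pkt_bytes = 9 + 20 + 20;   /* PPP + IPv4 + TCP headers */
    return line_rate_bps / (min_pkt_bytes * 8.0) / 1e6;  /* ~25.5 Mpps */
}
```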

7 Memory Requirement: measured by (a) the minimum number of rules that makes any single table larger than 64 MB (the size of one SRAM controller), and (b) the minimum number of rules that makes the total size of all tables larger than 256 MB (the total SRAM size). The measured size ratios, 149.9:43.4 and 273.5:86.0, are both close to the theoretical 16:5 compression ratio.

8 Relative Speedups: the classification rate reaches up to 25.65 Mpps with 32 threads. The sub-linear speedup is partially caused by saturation of the command request FIFOs. On average, the classifiers need 20 threads on 4 MEs to reach the OC-192 line rate.

9 GOAL (POP_COUNT vs. FFS): calculate the number of set bits in a (masked) bit-string.
With POP_COUNT, the number of instructions is reduced by more than 97% compared with other RISC/CISC implementations, which is essential for the Bitmap-RFC algorithm to achieve the line rate. FFS is the alternative for NPs without POP_COUNT: ffs( ) = 2 returns the position of the first "1" bit, so counting all set bits requires looping through the 32 bits, continuously looking for the next bit set.
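The contrast on this slide can be sketched in C: a single population count versus an FFS-style loop that repeatedly finds and clears the lowest set bit. Here `__builtin_ffs` is GCC/Clang's find-first-set builtin (1-based position, 0 if no bit is set), standing in for the NPU's FFS instruction; the function names are ours.

```c
#include <stdint.h>

/* POP_COUNT style: what the IXP2800 computes in one instruction
 * (Kernighan's loop is used here as a portable software stand-in). */
int count_bits_popcount(uint32_t x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

/* FFS style, for NPs without POP_COUNT: repeatedly locate the next
 * set bit and clear it, looping once per set bit in the word. */
int count_bits_ffs(uint32_t x) {
    int n = 0;
    while (x) {
        int pos = __builtin_ffs((int)x);  /* 1-based position of lowest set bit */
        x &= ~(1u << (pos - 1));          /* clear that bit, keep scanning */
        n++;
    }
    return n;
}
```

Both functions return the same count; the difference is that the FFS version may iterate up to 32 times per word, which is why a hardware POP_COUNT matters for sustaining the line rate.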

10 SRAM bandwidth is the performance bottleneck
The utilization rate of a single SRAM controller reaches 98% at 4 MEs, so at least 4 SRAM controllers (and 4 MEs) are needed to reach OC-192. Only the 4-SRAM configuration obtains almost linear speedup from 1 to 8 MEs. The utilization rate of the 4 SRAM channels is 96% at 8 MEs, indicating that the potential speedup could be even greater if the system had more SRAM controllers.

11 Task Partition: decided according to the algorithm logic and the number of memory accesses required per ME. Multi-processing is preferable to pipelining for Bitmap-RFC because of the dynamic nature of the workload.

12 Latency Hiding: the MicroengineC compiler provides only one switch to turn latency-hiding optimizations on or off. On average, applying the latency-hiding techniques yields an improvement of approximately 7.13%.

13 Summary
The algorithm: compress data structures and store them in SRAM to reduce memory access latency (algorithm design).
Multi-processing: preferred for parallelizing network applications because it is insensitive to workload balance (multi-processing vs. pipelining).
NPU: has many shared resources, such as command and data buses, which can become bottlenecks (command and SRAM FIFO saturation); it also supports powerful bit-manipulation instructions, so select the appropriate ones (POP_COUNT vs. FFS).
Compiler: use compiler optimizations to hide memory latency whenever possible (compilation parameters).

14 Weakness of this design:
For each chunk, RFC reads 1 word while Bitmap-RFC reads up to 5 words, which may consume too much SRAM bandwidth; e.g., 4 MEs already consume the bandwidth of all 4 SRAM channels.

