An Efficient Programmable 10 Gigabit Ethernet Network Interface Card
Paul Willmann, Hyong-youb Kim, Scott Rixner, and Vijay S. Pai
Slide 1: Designing a 10 Gigabit NIC
- Programmability for performance: computation offloading improves performance
- NICs have power and area concerns, so architectural solutions should be efficient
- Above all, the NIC must support 10 Gb/s links
- Key questions: What are the computation and memory requirements? What architecture meets them efficiently? What firmware organization should be used?
Slide 2: Mechanisms for an Efficient Programmable 10 Gb/s NIC
- A partitioned memory system: low-latency access to control structures; high-bandwidth, high-capacity access to frame data
- A distributed task-queue firmware that exploits frame-level parallelism to scale across many simple, low-frequency processors
- New read-modify-write (RMW) instructions that reduce firmware frame-ordering overheads by 50% and the clock-frequency requirement by 17%
Slide 3: Outline
- Motivation
- How programmable NICs work
- Architecture: requirements and design
- Frame-parallel firmware
- Evaluation
Slide 4: How Programmable NICs Work
- Diagram: processor(s) and memory on an internal bus, between a PCI interface (to the host PCI bus) and an Ethernet interface (to the network)
Slide 5: Per-frame Requirements

              Instructions   Data Accesses
  TX frame        281             101
  RX frame        253              85

- Processing and control-data requirements per frame, as determined by dynamic traces of the relevant NIC functions
Slide 6: Aggregate Requirements (10 Gb/s, maximum-sized frames)

              Instruction   Control Data   Frame Data
              Throughput    Bandwidth      Bandwidth
  TX frame     229 MIPS      2.6 Gb/s      19.75 Gb/s
  RX frame     206 MIPS      2.2 Gb/s      19.75 Gb/s
  Total        435 MIPS      4.8 Gb/s      39.5 Gb/s

- 1514-byte frames at 10 Gb/s = 812,744 frames/s
Slide 7: Meeting 10 Gb/s Requirements with Hardware
- Processor architecture: at least 435 MIPS within an embedded device; does NIC firmware have enough ILP?
- Memory architecture: low-latency control data plus high-bandwidth, high-capacity frame data; how can one design provide both?
Slide 8: ILP Processors for NIC Firmware?
- ILP is limited by data and control dependences; analysis of dynamic traces reveals the dependences

IPC by core configuration and branch prediction:

                  Perfect BP   Perfect 1BP   No BP
  In-order 1         0.87         0.87        0.87
  In-order 2         1.19         1.19        1.13
  In-order 4         1.34         1.33        1.17
  Out-of-order 1     1.00         1.00        0.88
  Out-of-order 2     1.96         1.74        1.21
  Out-of-order 4     2.65         2.00        1.29
Slide 9: Processors: 1-Wide, In-Order
- Doubling performance is costly: branch prediction, a reorder buffer, renaming logic, and wakeup logic translate to more than 2x the core power and area
- Fine for a general-purpose processor; not for an embedded device
- Other opportunities for parallelism? Yes: a frame takes many processing steps (run them simultaneously), and many frames need processing (process them simultaneously)
- Use parallel single-issue cores

                  Perfect 1BP   No BP
  In-order 1         0.87        0.87
  Out-of-order 2     1.74        1.21
Slide 10: Memory Architecture
- Competing demands: frame data needs high bandwidth and high capacity for many offload mechanisms; control data needs low latency and coherence among the processors, the PCI interface, and the Ethernet interface
- The traditional solution: caches. Advantages: low latency, transparent to the programmer. Disadvantages: hardware costs (tag arrays, coherence logic)
- In many applications the advantages outweigh the costs
Slide 11: Are Caches Effective?
- SMPCache trace analysis of a 6-processor NIC architecture
Slide 12: Choosing a Better Organization
- A cache hierarchy vs. a partitioned organization
Slide 13: Putting It All Together
- Diagram: CPU 0 through CPU P-1, each with its own instruction cache (I-Cache 0 through I-Cache P-1) backed by a shared instruction memory
- A (P+4) x S 32-bit crossbar connects the CPUs to Scratchpad 0 through Scratchpad S-1, the PCI interface, the Ethernet interface, and an external memory interface to off-chip DRAM; the PCI interface attaches to the PCI bus
Slide 14: Parallel Firmware
- NIC processing steps are already well defined
- Previous Gigabit NIC firmware divides the steps between 2 processors
- ...but does this mechanism scale?
Slide 15: Task Assignment with an Event Register
- The event register is a bit vector (a PCI read bit, a SW event bit, and other bits) that hardware and software set to signal pending work
- Example flow: the PCI interface finishes work; processor(s) inspect the transactions; processor(s) need to enqueue TX data; processor(s) pass the data to the Ethernet interface
Slide 16: Task-Level Parallel Firmware
- Timeline: hardware raises the PCI read bit; Proc 0 runs "Transfer DMAs 0-4" while Proc 1 sits idle, then Proc 1 runs "Process DMAs 0-4" while Proc 0 runs "Transfer DMAs 5-9", followed by "Process DMAs 5-9"
- With tasks statically split across processors, each processor idles whenever its assigned step has no work
Slide 17: Frame-Level Parallel Firmware
- Timeline: Proc 0 transfers DMAs 0-4, builds the event, and processes DMAs 0-4; Proc 1 does the same for DMAs 5-9 in parallel
- Each processor carries a group of frames through every step, leaving far less idle time
Slide 18: Evaluation Methodology
- Spinach: a library of cycle-accurate LSE simulator modules for network interfaces
- Memory latency, bandwidth, and contention modeled precisely; processors modeled in detail; NIC I/O (PCI and Ethernet interfaces) modeled in detail
- Verified by modeling the Tigon 2 Gigabit NIC (LCTES 2004)
- Idea: model everything inside the NIC; gather performance and trace data
Slide 19: Scaling in Two Dimensions
Slide 20: Processor Performance

  Processor Behavior            IPC Component
  Execution                         0.72
  Miss stalls                       0.01
  Load stalls                       0.12
  Scratchpad conflict stalls        0.05
  Pipeline stalls                   0.10
  Total                             1.00

- Achieves 83% of theoretical peak IPC
- Small I-caches work, but performance is sensitive to memory stalls: half of all loads are part of a load-to-use sequence
- Conflict stalls could be reduced with more ports or more banks
Slide 21: Reducing Frame-Ordering Overheads
- Firmware ordering is costly: 30% of execution
- Synchronization and bitwise checks/updates occupy processors and memory
- Solution: atomic bitwise operations that also update a pointer according to the last set location
Slide 22: Maintaining Frame Ordering
- A frame status array holds one bit per frame index (Index 0, Index 1, and so on)
- CPU A and CPU B prepare frames and set the corresponding status bits; CPU C detects completed frames and notifies the Ethernet interface
- Without hardware support, CPU C must LOCK, iterate over the array, notify hardware, and UNLOCK
Slide 23: RMW Instructions Reduce the Clock-Frequency Requirement
- Performance: 6 processors at 166 MHz (with RMW instructions) match 6 processors at 200 MHz (without)
- Performance is equivalent at all frame sizes, a 17% reduction in the frequency requirement
- Dynamically tasked firmware balances the benefit: send cycles reduced by 28.4%, receive cycles reduced by 4.7%
Slide 24: Conclusions
- A programmable 10 Gb/s NIC is achievable; this NIC architecture relies on:
  - Data memory system: a partitioned organization, not coherent caches
  - Processor architecture: parallel scalar processors
  - Firmware: a frame-level parallel organization
  - RMW instructions: reduced ordering overheads
- A programmable NIC is a substrate for offload services
Slide 25: Comparing Frame Ordering Methods