Designing Packet Buffers for Internet Routers
Nick McKeown
Professor of Electrical Engineering and Computer Science, Stanford University

Slide 2: Contents
1. Motivation
   - A 100 Tb/s router
   - 160 Gb/s packet buffer
2. Theory
   - Generic Packet Buffer Problem
   - Optimal Memory Management
3. Implementation

Slide 3: Motivating Design: 100 Tb/s Optical Router
[Block diagram: 625 electronic linecards (#1 through #625), each performing line termination, IP packet processing, and packet buffering at 160 Gb/s, connected through request/grant arbitration to an optical switch. 100 Tb/s = 625 x 160 Gb/s.]

Slide 4: Load-Balanced Switch
Three stages on a linecard.
[Diagram: 1st stage: segmentation/frame building; 2nd stage: main buffering; 3rd stage: reassembly. Each stage has ports 1..N, with external links at rate R and internal links at rate R/N.]

Slide 5: Advantages
- Load-balanced switch
  - 100% throughput
  - No switch scheduling
- Hybrid optical-electrical switch fabric
  - Low (almost zero) power
  - Can use an optical mesh
  - No reconfiguration of internal switch (MEMS)

Slide 6: 160 Gb/s Linecard
[Diagram: fixed-size packets flow through lookup/processing and segmentation, the 1st-stage load-balancing, the 2nd-stage VOQs (main buffering) and switching, and 3rd-stage reassembly, all at rate R. Memory requirements: 0.4 Gbit at 3.2 ns at the 1st and 3rd stages, 40 Gbit at 3.2 ns for the 2nd-stage VOQ buffer.]

Slide 7: Contents
1. Motivation
   - A 100 Tb/s router
   - 160 Gb/s packet buffer
2. Theory
   - Generic Packet Buffer Problem
   - Optimal Memory Management
3. Implementation

Slide 8: Packet Buffering Problem
Packet buffers for a 160 Gb/s router linecard.
[Diagram: a buffer manager, driven by scheduler requests, sits in front of a 40 Gbit buffer memory; the write rate is R (one 128 B packet every 6.4 ns) and the read rate is R (one 128 B packet every 6.4 ns).]
The problem is solved if a memory can be randomly accessed every 3.2 ns and can store 40 Gbits of data.
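To make the timing budget concrete, here is a small sanity check of the slide's numbers (a quick illustrative script; the 128 B packet size, 160 Gb/s line rate, and 40 Gbit buffer are taken from the slide):

```python
# Sanity-check the packet-buffer timing budget for a 160 Gb/s linecard.
LINE_RATE_BPS = 160e9        # 160 Gb/s
PACKET_BITS = 128 * 8        # 128-byte packets
BUFFER_BITS = 40e9           # 40 Gbit of buffering

packet_time_ns = PACKET_BITS / LINE_RATE_BPS * 1e9
print(f"One packet arrives (and one departs) every {packet_time_ns:.1f} ns")  # 6.4 ns

# One write and one read must both complete in each 6.4 ns interval,
# so a single memory would need a random access every 3.2 ns.
access_time_ns = packet_time_ns / 2
print(f"Required random-access time: {access_time_ns:.1f} ns")                # 3.2 ns
print(f"Required capacity: {BUFFER_BITS / 1e9:.0f} Gbit")
```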

Slide 9: Memory Technology
- Use SRAM?
  + Fast enough random access time, but
  - too low density to store 40 Gbits of data.
- Use DRAM?
  + High density means we can store the data, but
  - can't meet the random access time.
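A rough comparison of the requirement against the two technologies, using the approximate figures quoted later on the technology-assumptions slide (slide 17); the script is only an illustration of the trade-off:

```python
# Compare the buffer requirement against SRAM and DRAM, using the rough
# technology figures quoted on slide 17 of this deck.
required_access_ns = 3.2          # from the previous slide
required_capacity_mbit = 40_000   # 40 Gbit

sram = {"access_ns": 2.5, "size_mbit": 64}      # one on-chip SRAM
dram = {"access_ns": 40.0, "size_mbit": 1_000}  # one DRAM device

print("SRAM fast enough? ", sram["access_ns"] <= required_access_ns)        # True
print("SRAM chips needed:", required_capacity_mbit // sram["size_mbit"])    # 625 -> impractical
print("DRAM fast enough? ", dram["access_ns"] <= required_access_ns)        # False
print("DRAM chips needed:", required_capacity_mbit // dram["size_mbit"])    # 40 -> capacity is easy
```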

10 Can’t we just use lots of DRAMs in parallel? Write Rate, R One 128B packet every 6.4ns Read Rate, R One 128B packet every 6.4ns Buffer Manager Buffer Memory Read/write 1280B every 32ns 1280B Buffer Memory Buffer Memory Buffer Memory Buffer Memory … ……………… Scheduler Requests

Slide 11: Works fine if there is only one FIFO
[Diagram: the buffer manager (on-chip SRAM) collects arriving 128 B packets into a 1280 B block, writes the block to all DRAMs in parallel, and reads 1280 B blocks back to serve departures; write rate R and read rate R, one 128 B packet every 6.4 ns each, driven by scheduler requests.]
Aggregate 1280 B for the queue in fast SRAM and read and write to all DRAMs in parallel.
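A minimal sketch, not the authors' implementation, of the single-FIFO idea: packets are accumulated into a 1280 B block in on-chip SRAM, and whole blocks are moved to and from the (logically striped) DRAMs. Class and method names are invented for illustration:

```python
# Sketch of the single-FIFO case: aggregate packets into 1280 B blocks in
# SRAM, then move whole blocks to/from DRAM. Illustrative only.
from collections import deque

B = 1280      # block size written/read across the parallel DRAMs
PKT = 128     # fixed packet size

class SingleFifoBuffer:
    def __init__(self):
        self.tail_sram = bytearray()   # partial block being assembled
        self.dram_blocks = deque()     # full 1280 B blocks held in DRAM
        self.head_sram = bytearray()   # partial block being drained

    def write(self, packet: bytes):
        self.tail_sram += packet
        if len(self.tail_sram) == B:                    # full block: one wide DRAM write
            self.dram_blocks.append(bytes(self.tail_sram))
            self.tail_sram.clear()

    def read(self) -> bytes:
        if len(self.head_sram) < PKT and self.dram_blocks:
            self.head_sram += self.dram_blocks.popleft()   # one wide DRAM read
        if len(self.head_sram) < PKT:
            # fall back to data still sitting in the tail SRAM
            self.head_sram += self.tail_sram
            self.tail_sram.clear()
        pkt, self.head_sram = self.head_sram[:PKT], self.head_sram[PKT:]
        return bytes(pkt)
```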

Slide 12: In practice, the buffer holds many FIFOs
- e.g. in an IP router, Q might be 200.
- In an ATM switch, Q might be much larger.
[Diagram: Q FIFOs (1, 2, ..., Q) holding different amounts of data (1280 B, 320 B, ? B, ...), fed at write rate R (one 128 B packet every 6.4 ns) and drained at read rate R under scheduler requests.]
How can we write multiple packets into different queues?

Slide 13: Parallel Packet Buffer (Hybrid Memory Hierarchy)
[Diagram: arriving packets at rate R enter a small tail SRAM cache holding the tails of the Q FIFOs; a large DRAM memory holds the body of the FIFOs, written and read b bytes at a time (b = degree of parallelism); a small head SRAM cache holds the FIFO heads, from which departing packets leave at rate R in response to scheduler requests. The caches and buffer manager sit in an ASIC with on-chip SRAM.]
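A simplified model of the data structure in the diagram: per-queue tail and head caches in SRAM with block-granularity DRAM in between. The names are invented for illustration, and the policy that decides which queue's block to move next (the memory-management algorithm analyzed on the next slide) is deliberately left out:

```python
# Simplified model of the hybrid hierarchy: per-queue tail cache (SRAM),
# block-granularity DRAM for the FIFO bodies, per-queue head cache (SRAM).
from collections import deque

class HybridPacketBuffer:
    def __init__(self, num_queues: int, b: int):
        self.b = b                                              # bytes per DRAM access
        self.tails = [bytearray() for _ in range(num_queues)]   # tail SRAM cache
        self.dram  = [deque() for _ in range(num_queues)]       # b-byte blocks in DRAM
        self.heads = [bytearray() for _ in range(num_queues)]   # head SRAM cache

    def enqueue(self, q: int, packet: bytes):
        self.tails[q] += packet

    def tail_to_dram(self, q: int):
        """Move one full b-byte block of queue q from tail SRAM into DRAM."""
        if len(self.tails[q]) >= self.b:
            self.dram[q].append(bytes(self.tails[q][:self.b]))
            del self.tails[q][:self.b]

    def dram_to_head(self, q: int):
        """Move one b-byte block of queue q from DRAM into head SRAM."""
        if self.dram[q]:
            self.heads[q] += self.dram[q].popleft()

    def dequeue(self, q: int, nbytes: int) -> bytes:
        """Serve a scheduler request for nbytes from the head of queue q."""
        data = bytes(self.heads[q][:nbytes])
        del self.heads[q][:nbytes]
        return data
```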

Slide 14: Problem
- Problem: what is the minimum SRAM size needed so that every packet is available immediately, or within a fixed latency?
- Solutions:
  - Qb(2 + ln Q) bytes, for zero latency.
  - Q(b - 1) bytes, for a latency of Q(b - 1) + 1 time slots.
- Examples (160 Gb/s linecard, b = 1280, Q = 625):
  1. Zero latency: SRAM = 52 Mbits.
  2. With pipelining: SRAM = 6.1 Mbits, latency is about 40 µs.
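A quick numerical check of the two bounds with the slide's parameters. It assumes one time slot equals the arrival time of one byte at rate R; the small differences from the slide's quoted figures presumably come from rounding and unit conventions:

```python
import math

Q, b = 625, 1280                 # queues and bytes per DRAM access (from the slide)
R = 160e9                        # line rate, bits/s

# Zero-latency head cache: Q*b*(2 + ln Q) bytes
zero_latency_bits = Q * b * (2 + math.log(Q)) * 8
print(f"Zero-latency SRAM: {zero_latency_bits / 1e6:.0f} Mbit")          # ~54 Mbit (slide: 52 Mbit)

# With pipelining: Q*(b-1) bytes, at a latency of Q*(b-1)+1 time slots
pipelined_bits = Q * (b - 1) * 8
latency_slots = Q * (b - 1) + 1
byte_time_s = 8 / R              # assumption: one time slot = one byte arriving at rate R
print(f"Pipelined SRAM:    {pipelined_bits / 1e6:.1f} Mbit")             # ~6.4 Mbit (slide: 6.1 Mbit)
print(f"Pipeline latency:  {latency_slots * byte_time_s * 1e6:.0f} us")  # ~40 us
```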

Slide 15: Discussion
[Plot: SRAM size versus pipeline latency x, for Q = 1000 and b = 10, ranging from the queue length required for zero latency down to the queue length required at the maximum latency.]

Slide 16: Contents
1. Motivation
   - A 100 Tb/s router
   - 160 Gb/s packet buffer
2. Theory
   - Generic Packet Buffer Problem
   - Optimal Memory Management
3. Implementation

Slide 17: Technology Assumptions in 2005
- DRAM technology
  - Access time ~ 40 ns
  - Size ~ 1 Gbit
  - Memory bandwidth ~ 16 Gb/s (16 data pins)
- On-chip SRAM technology
  - Access time ~ 2.5 ns
  - Size ~ 64 Mbits
- Serial link technology
  - Bandwidth ~ 10 Gb/s
  - 100 serial links per chip

Slide 18: Packet Buffer Chip (x4): Details and Status
- Incoming: 4 x 10 Gb/s
- Outgoing: 4 x 10 Gb/s
- 35 pins/DRAM x 10 DRAMs = 350 pins
- SRAM memory: 3.1 Mbits of 3.2 ns SRAM
- Implementation starts Fall 2003
[Diagram: each chip contains a buffer manager with on-chip SRAM, attached to external DRAM, and handles R/4 of the line rate.]
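A cross-check of these figures against the technology assumptions on the previous slide, assuming the four chips together serve the full 160 Gb/s linecard (as the R/4 label suggests):

```python
# Cross-check the buffer-chip numbers against the slide 17 technology
# assumptions. Per-chip figures are taken from slide 18; the x4 arrangement
# is assumed to cover the whole 160 Gb/s linecard.
CHIPS = 4
DRAMS_PER_CHIP = 10
PINS_PER_DRAM = 35
DRAM_SIZE_GBIT = 1
DRAM_BW_GBPS = 16
PORT_GBPS = 4 * 10            # 4 x 10 Gb/s in, and the same out

print("DRAM pins per chip  :", PINS_PER_DRAM * DRAMS_PER_CHIP)                    # 350
print("Total buffer capacity:", CHIPS * DRAMS_PER_CHIP * DRAM_SIZE_GBIT, "Gbit")  # 40 Gbit
print("DRAM bandwidth/chip :", DRAMS_PER_CHIP * DRAM_BW_GBPS, "Gb/s")             # 160 Gb/s
print("Needed/chip (in+out):", 2 * PORT_GBPS, "Gb/s")                             # 80 Gb/s
```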