EE384Y: Packet Switch Architectures

Presentation transcript:

EE384Y: Packet Switch Architectures
Part II: Sizing Router Buffers (recent work by Guido Appenzeller)
Nick McKeown, Professor of Electrical Engineering and Computer Science, Stanford University
nickm@stanford.edu
http://www.stanford.edu/~nickm

How much buffer does a router need? The universally applied rule-of-thumb: a router needs a buffer of size B = 2T × C, where 2T is the round-trip propagation time (or just 250 ms) and C is the capacity of the outgoing link.
Background: the rule is mandated in backbone and edge routers, appears in RFPs and IETF architectural guidelines, and has major consequences for router design. It comes from the dynamics of TCP congestion control. Villamizar and Song: "High Performance TCP in ANSNET", CCR, 1994, based on 2 to 16 TCP flows at speeds of up to 40 Mb/s.

Example: a 10 Gb/s linecard or router requires 300 MBytes of buffering and must read and write a new packet every 32 ns. Memory technologies: SRAM would require 80 devices, 1 kW and $2000; DRAM would require only 4 devices, but is too slow. The problem gets harder at 40 Gb/s; hence RLDRAM, FCRAM, etc.
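As a quick check of those figures, here is a back-of-the-envelope sketch in Python (the 40-byte minimum packet size is an assumption introduced only to reproduce the 32 ns number):

```python
# Back-of-the-envelope for the 10 Gb/s example (the 40-byte minimum packet
# size is an illustrative assumption used only to reproduce the 32 ns figure).
link_rate_bps = 10e9     # C: capacity of the outgoing link
rtt_s = 0.25             # 2T: round-trip propagation time (250 ms)

buffer_bits = rtt_s * link_rate_bps          # rule-of-thumb B = 2T * C
print(f"Buffer = {buffer_bits/1e9:.2f} Gbit "
      f"= {buffer_bits/8/1e6:.0f} MBytes")   # ~312 MB, rounded to ~300 MB above

min_pkt_bits = 40 * 8                        # assumed minimum-size packet
pkt_time_ns = min_pkt_bits / link_rate_bps * 1e9
print(f"New packet every {pkt_time_ns:.0f} ns")  # 32 ns: one read + one write per packet
```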

TCP: TCP adapts to congestion. The sender sends packets and the receiver sends ACKs. The sending rate is controlled by the window W: at any time, only W unacknowledged packets may be outstanding. W is adjusted for each packet (in congestion-avoidance mode): if an ACK is received, W = W + 1/W (W = W + 1 for every W packets acknowledged); if a packet is lost, W = W/2 (the window is halved on loss). The sending rate of TCP is therefore Rate = W / RTT.
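A minimal sketch of that additive-increase/multiplicative-decrease rule, producing the sawtooth referred to on the next slides (the fixed window ceiling is an illustrative stand-in for the buffer filling up):

```python
# AIMD sawtooth: W += 1 per RTT while ACKs arrive, W /= 2 on loss.
# The loss trigger (a fixed ceiling W_MAX) is an illustrative assumption
# standing in for "packet drop once the buffer overflows".
def aimd_window(w0=10.0, w_max=40.0, rtts=60):
    w, trace = w0, []
    for _ in range(rtts):
        trace.append(w)
        w += 1.0          # congestion avoidance: +1 MSS per RTT
        if w > w_max:     # drop once the (assumed) ceiling is hit
            w /= 2.0      # multiplicative decrease
    return trace

for w in aimd_window():
    print(f"{'#' * int(w):<42} W = {w:.0f}")
```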

Single TCP Flow: a router with large enough buffers for full link utilization. For every W ACKs received, send W+1 packets. [Figure: source sends into a router with buffer B over an input link of rate C' > C; the output link has rate C; plot of window size versus time t, annotated with the buffer size and the RTT.]

Over-buffered Link

Under-buffered Link

Buffer = rule-of-thumb; the marked interval is magnified on the next slide.

Microscopic TCP behavior: when the sender pauses (after a drop), the buffer drains over one RTT.

Origin of the rule-of-thumb: before and after reducing its window, the sending rate of the TCP sender is the same. The RTT is part propagation delay 2T and part queueing delay B/C, and we know that after the window is reduced the queueing delay is zero. Inserting the rate equation and solving gives B = 2T × C (see the worked algebra below).
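A worked version of that algebra (W is the window just before the drop; the RTT before the drop includes a full buffer, and after the drop the buffer is empty):

$$
\frac{W}{2T + B/C} \;=\; \frac{W/2}{2T}
\quad\Longrightarrow\quad
2T \cdot W \;=\; \frac{W}{2}\left(2T + \frac{B}{C}\right)
\quad\Longrightarrow\quad
B \;=\; 2T \cdot C .
$$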

Rule-of-thumb: the rule-of-thumb makes sense for one flow, but a typical backbone link carries > 20,000 flows. Does the rule-of-thumb still hold? Answer: if the flows are perfectly synchronized, yes; if the flows are desynchronized, no.

Buffer size is the height of the sawtooth. [Figure: window sawtooth over time t.]

If flows are synchronized, the aggregate window has the same sawtooth dynamics as a single flow, therefore buffer occupancy has the same dynamics, and the rule-of-thumb still holds. [Figure: aggregate window over time t.]

Two TCP Flows: two TCP flows can synchronize.

If flows are not synchronized, the aggregate window has less variation, therefore buffer occupancy has less variation; the more flows, the smaller the variation, and the rule-of-thumb no longer holds.

If flows are not synchronized. [Figure: probability distribution of buffer occupancy, with the buffer size B marked.]

Quantitative Model: model the congestion window W_i of each flow as a random variable, and the aggregate window as the sum W = Σ_i W_i over the n flows. For many de-synchronized flows we assume the congestion windows are independent, and that all congestion windows have the same probability distribution. The central limit theorem then gives the queue-length distribution: the spread of the aggregate window (and hence of the buffer occupancy) grows only as √n, so the required buffer shrinks from 2T × C to 2T × C / √n.
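A small simulation sketch of the central-limit argument (the uniform window distribution and the W_MAX value are illustrative assumptions; the point is only that the spread of the aggregate window grows like √n rather than n):

```python
import random

# CLT sketch: model each flow's congestion window as an i.i.d. random variable,
# here uniform in [W_MAX/2, W_MAX] as an illustrative stand-in for the
# sawtooth's stationary distribution (an assumption, not the paper's model).
# The spread of the SUM of n windows grows only as sqrt(n), so the buffer
# needed to absorb the fluctuations shrinks as 2T*C / sqrt(n).
W_MAX = 32.0

def aggregate_window_samples(n_flows, samples=500):
    return [sum(random.uniform(W_MAX / 2, W_MAX) for _ in range(n_flows))
            for _ in range(samples)]

for n in (1, 100, 400, 10000):
    xs = aggregate_window_samples(n)
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    print(f"n={n:6d}  mean={mean:10.1f}  stddev={std:8.1f}  stddev/mean={std/mean:.4f}")
```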

Required buffer size. [Plot: simulation results.]

Required buffer size. [Plot: required buffer for 98.0%, 99.5% and 99.9% link utilization; the 2× point is marked.]

Small buffers help short flows. [Plot: average flow completion times of 14-packet flows that share a congested bottleneck link with long-lived flows.]

Experiments with a backbone router (Cisco GSR 12000, OC3 line card).

TCP Flows | Buffer (× 2T×C/√n) | Pkts | RAM  | Link Utilization: Model / Sim / Exp
100       | 0.5×               |  64  | 1 Mb | 96.9% / 94.7% / 94.9%
100       | 1×                 | 129  | 2 Mb | 99.9% / 99.3% / 98.1%
100       | 2×                 | 258  | 4 Mb | 100%  / 99.8% / 99.7%
100       | 3×                 | 387  | 8 Mb | -     / -     / -

A second set of runs used 400 flows and buffers of 32, 128 and 192 packets (512 kb and up); measured utilizations included 99.2% and 99.5%.

Thanks: experiments conducted by Paul Barford and Joel Sommers, University of Wisconsin.

What about short flows? So far we assumed long flows in congestion-avoidance mode. What if traffic is mainly short flows in slow-start? Answer: the behavior is different, but in mixes of flows the long flows drive the buffer requirements, and the buffer required for short flows is independent of line speed and RTT (the same for 1 Mbit/s or 40 Gbit/s).

A single, short-lived TCP flow: flow length 62 packets, RTT ~140 ms. [Plot: the window doubles each RTT through slow-start (2, 4, 8, 16, 32 packets) from the SYN to the final FIN ACK; the span is the flow completion time (FCT).]
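The slow-start arithmetic behind that picture, as a minimal sketch (the initial window of 2 matches the plot):

```python
# Slow start: the window doubles every RTT, so a flow of L packets completes
# after the smallest k with 2 + 4 + ... + 2^k >= L rounds of data,
# plus one RTT for connection setup (SYN / SYN-ACK).
def slow_start_rtts(flow_len_pkts, init_window=2):
    sent, w, rounds = 0, init_window, 0
    while sent < flow_len_pkts:
        sent += w        # one RTT: send a full window, wait for the ACKs
        w *= 2           # window doubles each round in slow start
        rounds += 1
    return rounds + 1    # +1 RTT for the handshake

print(slow_start_rtts(62))   # -> 6: windows 2, 4, 8, 16, 32 (5 RTTs) + handshake
```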

Modelling TCP: flows vs. independent bursts. The inter-burst arrival time is large relative to the buffer, therefore we assume bursts are independent. Instead of Poisson arrivals of flows of length L_flow (the flow length in packets), we model Poisson arrivals of bursts: four different Poisson arrival processes, for bursts of length 2, 4, ...

The M/G/1 Model: TCP traffic is modelled as an M/G/1 queue, i.e. Poisson arrivals of jobs (bursts) at rate λ, each job of random length X packets, served at the line rate. The standard M/G/1 (Pollaczek-Khinchine) result gives the average queue length in jobs, and converting jobs to packets gives an average queue length of E[Q] = (ρ / (1 − ρ)) × (E[X²] / (2 E[X])), where ρ is the load on the link. Let's see if this works in practice...
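A small sketch of that formula (the particular burst-size mix below is an illustrative assumption, not the distribution used in the paper):

```python
# Average queue length in packets for an M/G/1 queue with unit service rate:
#   E[Q] = rho/(1 - rho) * E[X^2] / (2 * E[X])
# where rho is the offered load and X is the burst length in packets.
def mg1_avg_queue_pkts(load, burst_sizes, probs):
    ex = sum(x * p for x, p in zip(burst_sizes, probs))
    ex2 = sum(x * x * p for x, p in zip(burst_sizes, probs))
    return load / (1.0 - load) * ex2 / (2.0 * ex)

# Illustrative burst-size mix: slow-start style bursts of 2, 4, 8, 16 packets.
sizes = [2, 4, 8, 16]
probs = [0.4, 0.3, 0.2, 0.1]
for rho in (0.5, 0.8, 0.9):
    print(f"load {rho:.1f}: average queue ~ {mg1_avg_queue_pkts(rho, sizes, probs):.1f} packets")
```

Note that the result depends only on the load and the burst-length distribution, not on line rate or RTT, consistent with the short-flow claim two slides back.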

Average queue length. [Plot: the model compared against simulation.]

Queue Distribution: to determine the required buffer we need the queue distribution, or at least its tail: packets are lost when the queue length Q exceeds the buffer B, so we need P(Q = x) for large x. For M/G/1 queues there is no general solution for the queue distribution, so we did two things (details are in the paper): use an M/G/1 processor-sharing model (bad), and use Frank Kelly's effective bandwidth approach (good).

In Summary: buffer size is dictated by long TCP flows.
10 Gb/s linecard with 200,000 × 56 kb/s flows:
  Rule-of-thumb: buffer = 2.5 Gbits (requires external, slow DRAM)
  Becomes: buffer = 6 Mbits (can use on-chip, fast SRAM); completion time halved for short flows
40 Gb/s linecard with 40,000 × 1 Mb/s flows:
  Rule-of-thumb: buffer = 10 Gbits
  Becomes: buffer = 50 Mbits
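The arithmetic behind these numbers, as a sketch (2T = 250 ms is assumed, as on the rule-of-thumb slide):

```python
import math

# Compare the rule-of-thumb (2T*C) with the revised sizing (2T*C / sqrt(n)).
def buffer_sizes(link_bps, n_flows, rtt_s=0.25):
    rule_of_thumb = rtt_s * link_bps
    revised = rule_of_thumb / math.sqrt(n_flows)
    return rule_of_thumb, revised

for link, n in ((10e9, 200_000), (40e9, 40_000)):
    old, new = buffer_sizes(link, n)
    print(f"{link/1e9:.0f} Gb/s, {n} flows: "
          f"rule-of-thumb = {old/1e9:.1f} Gbit, revised = {new/1e6:.0f} Mbit")
# -> 10 Gb/s, 200000 flows: 2.5 Gbit vs ~6 Mbit
# -> 40 Gb/s, 40000 flows: 10.0 Gbit vs 50 Mbit
```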