Understanding Buffer Size Requirements in a Router


Thanks to Nick McKeown and John Lockwood for numerous slides.

Buffer Requirements in a Router

Buffer size matters: small queues reduce delay, but large buffers are expensive.
Theoretical tools predict requirements: queueing theory, large-deviations theory, mean-field theory.
Yet there is no direct answer: flows have a closed-loop nature, and it is unclear whether the focus should be on the equilibrium state or the transient state.

Having said that, one might think the buffer-sizing problem must be very well understood. After all, we are equipped with tools like queueing theory, large-deviations theory, and mean-field theory, which are focused on solving exactly this type of problem. You would think it is simply a matter of understanding the random process that describes the queue occupancy over time. Unfortunately, this is not the case. The closed-loop nature of the flows, and the fact that flows react to the state of the system, make it necessary to use control-theoretic tools; but those tools emphasize the equilibrium state of the system and fail to describe transient delays.

Rule-of-thumb

(Diagram: a source and destination connected through a router over a bottleneck link of capacity C, with two-way propagation delay 2T.)

Universally applied rule-of-thumb: a router needs a buffer of size B ≥ 2T × C, where 2T is the two-way propagation delay (or just 250 ms) and C is the capacity of the bottleneck link.

Context:
Mandated in backbone and edge routers.
Appears in RFPs and IETF architectural guidelines.
Already known by the inventors of TCP [Van Jacobson, 1988].
Has major consequences for router design.

So if the problem is not easy, what do people do in practice? Buffer sizes in today's Internet routers are set based on a rule-of-thumb which says that if we want the core routers to have 100% utilization, the buffer size should be greater than or equal to 2T × C. Here 2T is the two-way propagation delay of packets going through the router, and C is the capacity of the target link. This rule is mandated in backbone and edge routers, and appears in RFPs and IETF architectural guidelines. It has been known almost since the time TCP was invented. Note that if the capacity of the network is increased, this rule says the buffer size must grow linearly with capacity. We do not expect the propagation delay to change much over time, but we do expect the capacity to grow very rapidly. Therefore, this rule has major consequences for router design, and that is exactly why today's routers have so much buffering, as I showed you a few moments ago.

The Story So Far

Rule of thumb (assume 100% utilization): B = 2T × C → 1,000,000 packets at 10 Gb/s
Appenzeller, Keslassy, and McKeown (assume a large number of desynchronized flows; 100% utilization): B = 2T × C / √N → 10,000 packets at 10 Gb/s
O(log W) rule (assume a large number of desynchronized flows; <100% utilization): B = O(log W) → 20 packets at 10 Gb/s

After this relatively long introduction, let me give an overview of the rest of my presentation. I'll talk about three different rules for sizing router buffers. The first rule is the rule-of-thumb which I just described; as mentioned, it is based on the assumption that we want 100% link utilization at the core links. The second rule is a more recent result by Appenzeller, Keslassy, and McKeown which challenges the original rule-of-thumb: if we have N flows going through the router, we can reduce the buffer size by a factor of √N. The underlying assumption is that we have a large number of flows and that the flows are desynchronized. Finally, the third rule, which I'll talk about today, says that if we are willing to sacrifice a very small amount of throughput, i.e. if having a throughput of less than 100% is acceptable, we may be able to reduce the buffer size significantly, to just O(log W) packets, where W is the maximum congestion window size. Applying each of these rules to a 10 Gb/s link, we would need to buffer 1,000,000 packets under the first rule, about 10,000 packets under the second, and only 20 packets under the third. For the rest of this presentation I'll show you the intuition behind each of these rules and provide some evidence that validates it. Let's start with the rule-of-thumb.
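The three buffer sizes above can be reproduced with back-of-the-envelope arithmetic. The following is a hedged sketch: the average packet size, the flow count N, and the maximum window W are assumed values chosen to make the 10 Gb/s numbers on this slide come out, not measurements from any real router.

```python
import math

C = 10e9          # bottleneck capacity: 10 Gb/s
two_T = 0.25      # two-way propagation delay: 250 ms
pkt_bits = 2500   # assumed average packet size (~300 bytes)

# Rule 1: rule-of-thumb, B = 2T * C
bdp_pkts = two_T * C / pkt_bits

# Rule 2: B = 2T * C / sqrt(N), assuming N = 10,000 desynchronized flows
N = 10_000
small_pkts = bdp_pkts / math.sqrt(N)

# Rule 3: B = O(log W); constant factors omitted, W assumed to be 10^6 packets
W = 1_000_000
tiny_pkts = math.log2(W)

print(round(bdp_pkts))    # -> 1000000
print(round(small_pkts))  # -> 10000
print(round(tiny_pkts))   # -> 20
```

Note how strongly the answer depends on the assumptions: the factor-of-100 gap between rules 1 and 2 comes entirely from √N.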

Using NetFPGA to Explore Buffer Size

We need to reduce the buffer size and measure the queue occupancy. Alas, this is not possible in commercial routers, so we will use the NetFPGA instead.

Objective: use the NetFPGA to understand how large a buffer we need for a single TCP flow.

Why 2T×C for a Single TCP Flow?

Rule for adjusting W:
If an ACK is received: W ← W + 1/W
If a packet is lost: W ← W/2
Only W packets may be outstanding.
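The two update rules above can be written directly as code. This is a minimal sketch of the slide's AIMD rules only, not a real TCP implementation; the function names `on_ack` and `on_loss` are my own.

```python
def on_ack(w):
    # Additive increase: each ACK adds 1/W, so one full window of
    # ACKs (i.e., one RTT) grows W by roughly one packet.
    return w + 1.0 / w

def on_loss(w):
    # Multiplicative decrease: halve the window on a packet loss.
    return w / 2.0

w = 280.0        # window just before a loss (packets)
w = on_loss(w)   # -> 140.0, the lower bound of the sawtooth
```

This halving is exactly why the rule-of-thumb needs 2T × C of buffering: the buffer must hold enough packets to keep the link busy while the sender's window recovers from W/2 back to W.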

Time Evolution of a Single TCP Flow

(Figures: time evolution of a single TCP flow through a router with buffer = 2T × C, and the same flow with buffer < 2T × C.)

Here is a simulation of a single TCP flow with a buffer size equal to the bandwidth-delay product. As you can see, the congestion window follows a sawtooth, varying between 140 and 280. The bottom graph shows the queue occupancy: when the congestion window is halved, the buffer occupancy drops to zero, and the two curves change similarly. Since the pipe is full at all times, the link utilization remains at 100%. When the buffer size is less than the bandwidth-delay product, on the other hand, the queue occupancy goes to zero when the congestion window is halved and remains at zero for a while, until the congestion window has grown enough to fill the pipe again. During this time, we see a reduction in link utilization.
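The utilization effect described above can be illustrated with a toy per-RTT model: the window grows by one packet per RTT, is halved when the queue (window minus pipe capacity) would exceed the buffer, and the link carries at most one bandwidth-delay product of packets per RTT. All numbers here are assumptions for illustration; this is not the simulation shown on the slide.

```python
def utilization(bdp, buf, rtts=1000):
    """Fraction of link capacity used by one long-lived AIMD flow.

    bdp: bandwidth-delay product (2T*C) in packets
    buf: router buffer size in packets
    """
    w = bdp            # start with the pipe exactly full
    sent = 0
    for _ in range(rtts):
        sent += min(w, bdp)       # link carries at most bdp packets per RTT
        if w >= bdp + buf:        # buffer would overflow -> loss -> halve
            w //= 2
        else:
            w += 1                # additive increase: +1 packet per RTT
    return sent / (rtts * bdp)

# With buf == bdp, the halved window (bdp + buf)/2 == bdp still fills
# the pipe, so the link never idles:
print(utilization(100, 100))  # -> 1.0
# With a smaller buffer, the link idles after each halving:
print(utilization(100, 20))   # -> below 1.0
```

With buf = bdp the model reproduces the slide's claim exactly: after every halving the window equals the bandwidth-delay product, so the pipe never drains; with any smaller buffer, the window dips below the pipe size and utilization drops.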

Observing and Controlling the Queue Size

Instructions for a real evaluation using NetFPGA.

Setup for the Buffer Evaluation

(Diagram: an adjacent web & video server [eth2 ↔ nf2c1] and the local host [eth1 ↔ nf2c2] connected through the NetFPGA router.)

Enhanced Router

Objectives: observe the router with two new modules, rate limiting and event capture.

Execution:
Run the event-capture router.
Look at the routing tables.
Explore the Details pane.
Start a TCP transfer and look at the queue occupancy.
Change the rate and look at the queue occupancy again.

Step 1 - Run the Pre-made Enhanced Router

Start a terminal and cd to "NF2/projects/tutorial_router/sw/".
Type "./tut_adv_router_gui.pl".
A familiar GUI should start.

Step 2 - Explore the Enhanced Router

Click on the Details tab.
A pipeline similar to the one seen previously is shown, with some additions.

Enhanced Router Pipeline

(Pipeline modules: MAC RxQ, CPU, Input Arbiter, Output Port Lookup, TxQ, Output Queues, Rate Limiter, Event Capture.)

Two modules have been added:
Event Capture, to capture output-queue events (writes, reads, drops).
Rate Limiter, to create a bottleneck.

Step 3 - Decrease the Link Rate

To create a bottleneck and show the TCP "sawtooth", the link rate is decreased.
In the Details tab, click the "Rate Limit" module.
Check Enabled.
Set the link rate to 1.953 Mbps.

Step 4 - Decrease the Queue Size

Go back to the Details panel and click on "Output Queues".
Select the "Output Queue 2" tab.
Change the "output queue size in packets" slider to 16.

Step 5 - Start Event Capture

Click on the Event Capture module under the Details tab.
This should start the configuration page.

Step 6 - Configure Event Capture

Check "Send to local host" to receive events on the local host.
Check "Monitor Queue 2" to monitor the output queue of MAC port 1.
Check "Enable Capture" to start event capture.

Step 7 - Start the TCP Transfer

We will use iperf to run a large TCP transfer and look at the queue evolution.
Start a terminal and cd to "NF2/projects/tutorial_router/sw".
Type "./iperf.sh".

Step 8 - Look at the Event Capture Results

Click on the Event Capture module under the Details tab.
The sawtooth pattern should now be visible.

Queue Occupancy Charts

Observe the TCP/IP sawtooth.
Leave the control windows open.