Queueing theory for TCP. Damon Wischik (UCL), Gaurav Raina (Cambridge).



    if (seqno > _last_acked) {
        if (!_in_fast_recovery) {
            // New ACK in normal operation: advance, grow the window, send
            _last_acked = seqno;
            _dupacks = 0;
            inflate_window();
            send_packets(now);
            _last_sent_time = now;
            return;
        }
        if (seqno < _recover) {
            // Partial ACK during fast recovery: deflate the window, retransmit
            uint32_t new_data = seqno - _last_acked;
            _last_acked = seqno;
            if (new_data < _cwnd) _cwnd -= new_data; else _cwnd = 0;
            _cwnd += _mss;
            retransmit_packet(now);
            send_packets(now);
            return;
        }
        // Full ACK: leave fast recovery
        uint32_t flightsize = _highest_sent - seqno;
        _cwnd = min(_ssthresh, flightsize + _mss);
        _last_acked = seqno;
        _dupacks = 0;
        _in_fast_recovery = false;
        send_packets(now);
        return;
    }
    if (_in_fast_recovery) {
        // Duplicate ACK during fast recovery inflates the window
        _cwnd += _mss;
        send_packets(now);
        return;
    }
    _dupacks++;
    if (_dupacks != 3) {
        send_packets(now);
        return;
    }
    // Third duplicate ACK: halve the window and enter fast recovery
    _ssthresh = max(_cwnd / 2, (uint32_t)(2 * _mss));
    retransmit_packet(now);
    _cwnd = _ssthresh + 3 * _mss;
    _in_fast_recovery = true;
    _recover = _highest_sent;

[Figure: TCP transmission rate (0–100 kB/sec) against time (0–8 sec).]

Models for TCP: the single TCP sawtooth. One can explicitly calculate the mean transmission rate, the stationary distribution, etc.

Models for TCP: the billiards model [Baccelli+Hong 2002]. Each flow increases its transmission rate linearly, until the total transmission rate hits capacity, when some flows experience dropped packets, and those flows back off.

Models for TCP: the fluid model [Kelly+Maulloo+Tan 1998; Misra+Gong+Towsley 2000; Baccelli+McDonald+Reynier 2002]. Approximate the aggregate TCP traffic by means of a differential equation. The differential equation has increase and decrease terms, which reflect the TCP sawtooth.

Models for TCP: questions.
– When are these models good?
– What is the formula for the packet drop probability, to plug into the models?
– How does it depend on buffer size, etc.?

Simulation results. [Figure: queue size (0–100%), drop probability (0–30%), and traffic intensity (0–100%) against time (0–5 s), for small, intermediate, and large buffers.]

Simulation results. [Same figure: queue size, drop probability, and traffic intensity against time, for the three buffer sizes.] When we vary the buffer size, the system behaviour changes dramatically. Why? How? Answer: it depends on the timescale and spacescale of fluctuations in the arrival process.

Formulate a maths question

Setup: N TCP flows, service rate NC, buffer size B^(N), round trip time RTT.

What is the limiting queue-length process, as N → ∞, in the three regimes:
– large buffers, B^(N) = BN
– intermediate buffers, B^(N) = B√N
– small buffers, B^(N) = B?

Why are these the interesting limiting regimes? What are the alternatives? [Bain, 2003]

Intermediate buffers B^(N) = B√N

Consider a standalone queue:
– fed by N Poisson flows, each of rate x pkts/sec, where x varies slowly, served at constant rate NC, with buffer size B√N
– i.e. an M_{Nx}/D_{NC}/1/B√N queue
– x_t = 0.95 then 1.05 pkts/sec, C = 1 pkt/sec, B = 3 pkt

Observe:
– queue size fluctuates very rapidly; busy periods have duration O(1/N), so the packet drop probability p_t is a function of the instantaneous arrival rate x_t
– queue size flips quickly from near-empty to near-full, in time O(1/√N)
– the packet drop probability is p_t = (x_t − C)⁺ / x_t

[Figure: queue size and traffic intensity against time, for N = 50, 100, 500, 1000.]

This formula for p_t was proposed for TCP by Srikant (2004).

Closing the loop

In the open-loop queue:
– we assumed Poisson inputs with slowly varying arrival rate
– queue size flips quickly from near-empty to near-full
– queue size fluctuates very rapidly, with busy periods of duration O(1/N), so the drop probability p_t is a function of the instantaneous arrival rate x_t

In the closed-loop TCP system:
– the round trip time means that we can treat inputs over time [t, t+RTT] as an exogenous arrival process to the queue
– the average arrival rate should vary slowly, according to a differential equation
– the aggregate of many point processes converges to a Poisson process, and this carries through to queue size [Cao+Ramanan 2002]

There is thus a separation of timescales:
– queue size fluctuates over short timescales, and we obtain p_t from the open-loop M/D/1/B^(N) model
– TCP adapts its rate over long timescales, according to the fluid model differential equation

Intermediate buffers B^(N) = B√N

[Figure: queue size (0–619 pkt), drop probability (0–30%), and traffic intensity (0–100%) against time (0–5 s), with the queue size (594–619 pkt) also shown over a 50 ms window. Setup: 2 Gb/s link, C = 50 pkt/sec/flow, 5000 TCP flows, RTT 150–200 ms, buffer 619 pkt; simulator htsim by Mark Handley, UCL.]

Queue size fluctuations are O(1), invisible at the O(√N) scale. When the traffic intensity crosses capacity, the buffer can fill or drain in time O(1/√N).

Instability and synchronization

When the queue is not full, there is no packet loss, so transmission rates increase additively. When the queue becomes full, there are short periods of concentrated packet loss. This makes the flows become synchronized. This is the billiards model for TCP.

[Figure: queue size (0–619 pkt) and TCP window sizes (0–25 pkt) against time (0–5 s).]

Large buffers B^(N) = BN

[Figure: queue size (0–BN pkt), drop probability (0–30%), and traffic intensity (0–100%) against time (0–5 s), with the queue size ((B−25)–B pkt) also shown over a 50 ms window.]

Queue size fluctuations are O(1), invisible on the O(N) scale. The total arrival rate is Nx_t and the service rate is NC, so it takes time O(1) for the buffer to fill or drain, and we can calculate the packet drop probability when the queue is full.

Small buffers B^(N) = B

[Figure: queue size, drop probability (0–30%), and traffic intensity (0–100%) against time (0–5 s), with the queue size also shown over a 50 ms window.]

Queue size fluctuations are of size O(1), over timescale O(1/N), enough to fill or drain the buffer. The queue size changes too quickly to be controlled, but the queue-size distribution is controlled by the average arrival rate x_t.

Conclusion

– The single TCP sawtooth model is fine for a single flow.
– The billiards model is good for a small number of flows. It is also an extreme case of the fluid model, when flows are synchronized due to intermediate-sized buffers.
– The fluid model arises as a law-of-large-numbers limit, when there are many TCP flows (proved by McDonald+Reynier).

Instability & buffer size

On the basis of these models, and of a control-theoretic analysis of the limit cycles that ensue, we claimed:
– a buffer of 30–50 packets should be sufficient
– intermediate buffers will make the system become synchronized, and utilization will suffer
– large buffers get high utilization, but cause synchronization and delay

[Figure: queue-size distribution (0–100%) and traffic-intensity distribution (85–110%) for B = 10, 20, 30, 40, 50, 100 pkt, at ρ = 1. Plot by Darrell Newman, Cambridge.]

But when other people ran simulations they found that small buffers lead to terribly low utilization, and that intermediate buffers work much better.

Burstiness

The problem is that the Poisson limit breaks down:
– a key step in the Cao+Ramanan proof is showing that each flow contributes at most one packet in a busy period
– but TCP, on a fast access link, will produce bursty traffic: sources can send an entire window-full of packets back-to-back

For very fast access speeds, i.e. very bursty TCP traffic, a batch-M/D/1/B model is appropriate.

Burstiness & access speeds

Simulations of a system with a single bottleneck link of capacity NC, where each flow is throttled by a slow access link (access speed = 5C or 40C).

With fast access links, TCP traffic is bursty over short timescales, so the queue's stochastic fluctuations are large; TCP gets continual feedback, and can nearly maintain a steady state.

With slow access links, TCP traffic is Poisson-like over short timescales, so the queue's stochastic fluctuations are small; TCP has bang-bang feedback, and is unable to maintain a steady state.

Burstiness & access speeds

[Figure as on the previous slide: access speed = 5C and 40C.]

You have to be careful about short-timescale traffic statistics when you model TCP. Queueing theory still has a role!