TCP Congestion Control


TCP Congestion Control
Jennifer Rexford
Fall 2018 (TTh 1:30-2:50pm in Friend 006)
COS 561: Advanced Computer Networks
http://www.cs.princeton.edu/courses/archive/fall18/cos561/

Holding the Internet Together
Distributed cooperation for resource allocation:
- BGP: what end-to-end paths to take (for ~60K ASes)
- TCP: what rate to send over each path (for ~3B hosts)
[Figure: four interconnected autonomous systems, AS 1 through AS 4]

What Problem Does a Protocol Solve?
BGP path selection:
- Select a path that each AS on the path is willing to use
- Adapt path selection in the presence of failures
TCP congestion control:
- Prevent congestion collapse of the Internet
- Allocate bandwidth fairly and efficiently
But can we be more precise? Define mathematically what problem is being solved:
- To understand the problem and analyze the protocol
- To predict the effects of changes in the system
- To design better protocols from first principles

Fairness

Fair and Efficient Use of a Resource
Suppose n users share a single resource, like the bandwidth on a single link (e.g., 3 users sharing a 30 Gbps link).
What is a fair allocation of bandwidth?
- Suppose user demand is "elastic" (i.e., unlimited): allocate each a 1/n share (e.g., 10 Gbps each)
But "equality" is not enough:
- Which allocation is best: [5, 5, 5] or [18, 6, 6]? [5, 5, 5] is more "fair", but [18, 6, 6] is more efficient.
- What about [5, 5, 5] vs. [22, 4, 4]?

Fair Use of a Single Resource
What if some users have inelastic demand?
- E.g., 3 users where 1 user only wants 6 Gbps, and the total link capacity is 30 Gbps
Should we still do an "equal" allocation?
- E.g., [6, 6, 6], but that leaves 12 Gbps unused
Should we allocate in proportion to demand?
- E.g., 1 user wants 6 Gbps and the other 2 each want 20 Gbps: allocate [4, 13, 13]? (30 Gbps split 6:20:20, i.e., 30·6/46 ≈ 4 and 30·20/46 ≈ 13)
Or, give the least demanding user all he wants?
- E.g., allocate [6, 12, 12]?

Max-Min Fairness
The allocation must be "feasible":
- Total allocation should not exceed link capacity
Protect the less fortunate:
- Any attempt to increase the allocation of one user necessarily decreases the allocation of another user with an equal or lower allocation
Fully utilize a "bottlenecked" resource:
- If demand exceeds capacity, the link is fully used
Progressive filling algorithm (sketched in code below):
- Grow all rates until some users stop having demand
- Continue increasing all remaining rates until the link is full
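A minimal Python sketch of progressive filling for a single shared link, assuming each user has a fixed demand (the demands, capacity, and function name are illustrative, not from the slides):

```python
def max_min_allocation(demands, capacity):
    """Progressive filling: grow all unsatisfied rates equally until
    the link is full or every remaining demand is met."""
    alloc = {user: 0.0 for user in demands}
    active = set(demands)                      # users still wanting more
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)        # equal increment for all active users
        for user in list(active):
            grant = min(share, demands[user] - alloc[user])
            alloc[user] += grant
            remaining -= grant
            if alloc[user] >= demands[user] - 1e-9:
                active.discard(user)           # demand satisfied; stop filling this user
    return alloc

# Example from the slides: capacity 30 Gbps, one user wants only 6 Gbps.
print(max_min_allocation({"A": 6, "B": float("inf"), "C": float("inf")}, 30))
# -> {'A': 6.0, 'B': 12.0, 'C': 12.0}
```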

Resource Allocation Over Paths
Three users A, B, and C share two 30 Gbps links: A uses the first link, B uses the second, and C's path crosses both.
- Maximum throughput: [30, 30, 0]. Total throughput of 60, but user C starves.
- Max-min fairness: [15, 15, 15]. Equal allocation, but throughput of just 45.
- Proportional fairness: [20, 20, 10]. Balances the trade-off between throughput and equality: throughput of 50, and C is penalized for using two busy links.
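A short derivation, as a sketch, of why proportional fairness gives [20, 20, 10] (assuming the topology above: A and C share one link, B and C share the other):

```latex
\max_{x \ge 0}\ \log x_A + \log x_B + \log x_C
\quad \text{s.t.} \quad x_A + x_C \le 30, \qquad x_B + x_C \le 30
```

By symmetry x_A = x_B = a, and both links fill at the optimum, so a + x_C = 30. Maximizing 2 log a + log(30 − a) gives 2/a = 1/(30 − a), hence a = 20 and x_C = 10.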

Distributed Algorithm for Achieving Fairness

Network Utility Maximization (NUM)
Users (indexed by i):
- Rate allocation: x_i
- Utility function: U(x_i)
Network links (indexed by l):
- Link capacity: c_l
- Routes: R_li = 1 if link l is on path i, R_li = 0 otherwise
The optimization problem:
  maximize   Σ_i U(x_i)
  subject to Σ_i R_li x_i ≤ c_l  (for every link l)
  variables  x_i ≥ 0
If the utility function is concave, this is a convex optimization problem, and a locally optimal solution is a globally optimal solution.
[Figure: a concave utility curve U(x_i) as a function of the rate x_i]

Network Utility and Fairness
Utility functions are concave (diminishing returns).
Alpha-fair utility:
- U(x) = x^(1-α) / (1-α) for α ≠ 1
- U(x) = log(x) for α = 1
The parameter α spans a spectrum of objectives, from small α (more elastic demand) to large α (more fair):
- α → 0 (U(x) = x): max throughput
- α = 1: proportional fairness
- α → ∞: max-min fairness
[Figure: concave utility curves U(x) versus rate x]
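A small Python sketch of the alpha-fair utility family (function and parameter names are illustrative):

```python
import math

def alpha_fair_utility(x, alpha):
    """Alpha-fair utility: log(x) when alpha == 1, x^(1-alpha)/(1-alpha) otherwise."""
    if x <= 0:
        raise ValueError("rate must be positive")
    if alpha == 1:
        return math.log(x)
    return x ** (1 - alpha) / (1 - alpha)

# alpha = 0 reduces to U(x) = x (max throughput);
# larger alpha weighs small rates more heavily (approaching max-min fairness).
print(alpha_fair_utility(10, 0), alpha_fair_utility(10, 1), alpha_fair_utility(10, 2))
```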

Solving NUM Problems
  maximize   Σ_i U(x_i)
  subject to Σ_i R_li x_i ≤ c_l  (for every link l)
  variables  x_i ≥ 0
Convex optimization:
- Maximizing a concave objective
- Subject to convex constraints
Benefits:
- A locally optimal solution is globally optimal
- Can be solved efficiently on a centralized computer (see the sketch below)
- "Decomposable" into many smaller problems
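A centralized solve of the two-link example from the earlier slide, as a minimal sketch using scipy (the routing matrix encodes the assumed topology: A on link 1, B on link 2, C on both; proportional fairness, i.e., U(x) = log x):

```python
import numpy as np
from scipy.optimize import minimize

# Routing matrix R_li: rows are links, columns are users A, B, C.
R = np.array([[1, 0, 1],    # link 1 carries A and C
              [0, 1, 1]])   # link 2 carries B and C
c = np.array([30.0, 30.0])  # link capacities (Gbps)

objective = lambda x: -np.sum(np.log(x))                 # maximize sum of log utilities
capacity = {'type': 'ineq', 'fun': lambda x: c - R @ x}  # c_l - sum_i R_li x_i >= 0

result = minimize(objective, x0=np.full(3, 5.0),
                  bounds=[(1e-6, None)] * 3,
                  constraints=[capacity], method='SLSQP')
print(np.round(result.x, 1))   # expect approximately [20. 20. 10.]
```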

Move Constraints to the Objective
The original problem's objective is decoupled across sessions, but its constraints are coupled across sessions:
  maximize   Σ_i U(x_i)
  subject to Σ_i R_li x_i ≤ c_l
  variables  x_i ≥ 0
"Dual decomposition": compute the Lagrangian, where the p_l are link "prices" (Lagrange multipliers):
  L(x, p) = Σ_i U(x_i) + Σ_l p_l (c_l − Σ_{i ∈ S(l)} x_i)
where S(l) is the set of sessions whose paths use link l.

Decouple the Terms
Start from the Lagrangian:
  L(x, p) = Σ_i U(x_i) + Σ_l p_l (c_l − Σ_{i ∈ S(l)} x_i)
Rewrite so the terms are decoupled across sessions and decoupled across links, where L(i) is the set of links on path i:
  L(x, p) = Σ_i [ U(x_i) − (Σ_{l ∈ L(i)} p_l) x_i ] + Σ_l p_l c_l
Define the path price q_i = Σ_{l ∈ L(i)} p_l and rewrite once more:
  L(x, p) = Σ_i [ U(x_i) − q_i x_i ] + Σ_l p_l c_l
Then maximize L over the rates x for a given p, and minimize L over the prices p for a given x.

Decomposition
The dual decomposition yields a distributed algorithm in which users and links solve separate problems, coupled only through rates and prices (see the sketch below):
- User i: given its path cost q_i = Σ_{l ∈ L(i)} p_l, choose the rate x_i that maximizes U(x_i) − q_i x_i
- Link l: observe the offered load y_l = Σ_i R_li x_i and update its price
    p_l[t] = ( p_l[t−1] − β (c_l − y_l) )+
  where β is a step size and (·)+ keeps the price nonnegative
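A minimal simulation of this primal-dual loop for the same two-link example (log utilities, so each user's best response to a path price q_i is x_i = 1/q_i; the step size, iteration count, and variable names are illustrative):

```python
import numpy as np

R = np.array([[1, 0, 1],     # link 1 carries users A and C
              [0, 1, 1]])    # link 2 carries users B and C
c = np.array([30.0, 30.0])   # link capacities
p = np.array([0.1, 0.1])     # initial link prices (arbitrary positive start)
beta = 0.002                 # price step size (chosen small for stability)

for t in range(20000):
    q = R.T @ p              # path price q_i = sum of link prices on path i
    x = 1.0 / q              # user best response for U(x) = log(x): maximize log(x) - q*x
    y = R @ x                # offered load on each link
    p = np.maximum(p + beta * (y - c), 1e-6)   # price update, kept positive

print(np.round(x, 1))        # converges to roughly [20. 20. 10.]
```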

Link Prices and Implicit Feedback
What are the link prices p_l?
- A measure of congestion: the amount of traffic in excess of capacity
- That is, the packet loss!
What are the path costs q_i?
- The sum of the link prices p_l along the path
- If loss is low, the sum of the per-link losses is roughly the path loss
No need for explicit feedback!
- User i can observe the path loss q_i on path i
- Link l can observe the offered load y_l on link l

Coming Back to TCP
Reverse engineering: what NUM problem does an existing TCP variant solve?
- TCP Reno: utilities are arctan(x); prices are end-to-end packet loss
- TCP Vegas: utilities are log(x), i.e., proportional fairness; prices are end-to-end packet delays
Forward engineering:
- Use decomposition to design new variants of TCP, e.g., TCP FAST
Simplifications:
- Fixed set of connections, focus on equilibrium behavior, ignore feedback delays and queuing dynamics

TCP Congestion Control

Congestion in a Drop-Tail FIFO Queue
Access to the bandwidth: first-in first-out queue
- Packets are transmitted in the order they arrive
Access to the buffer space: drop-tail queuing
- If the queue is full, drop the incoming packet

How It Looks to the End Host
- Delay: the packet experiences high delay
- Loss: the packet gets dropped along the path
How can the TCP sender learn this?
- Delay: round-trip time estimate
- Loss: timeout and/or duplicate acknowledgments
- Mark: packets marked by routers with large queues

TCP Congestion Window
Each sender maintains a congestion window:
- The maximum number of bytes to have in transit (not yet ACK'd)
Adapting the congestion window:
- Decrease upon losing a packet: backing off
- Increase upon success: optimistically exploring
- Always struggling to find the right transfer rate
Tradeoff:
- Pro: avoids needing explicit network feedback
- Con: continually under- and over-shoots the "right" rate

Additive Increase, Multiplicative Decrease
How much to adapt?
- Additive increase: on success of the last window of data, increase the window by 1 Maximum Segment Size (MSS)
- Multiplicative decrease: on loss of a packet, divide the congestion window in half
Much quicker to slow down than to speed up!
- Over-sized windows (causing loss) are much worse than under-sized windows (causing lower throughput)
- AIMD is a necessary condition for the stability of TCP
A sketch of the window update logic follows below.
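A minimal Python sketch of the AIMD window update (congestion-avoidance behavior only; the constant and names are illustrative, not TCP's actual implementation):

```python
MSS = 1460  # bytes per segment (typical value, assumed for illustration)

def aimd_update(cwnd, loss_detected):
    """Return the new congestion window after one window's worth of data.

    Additive increase: grow by one MSS per window of acknowledged data.
    Multiplicative decrease: halve the window when a loss is detected.
    """
    if loss_detected:
        return max(cwnd / 2, MSS)   # never shrink below one segment
    return cwnd + MSS

# Example: the window climbs linearly, then halves on loss (the "sawtooth").
cwnd = 10 * MSS
for rtt, loss in enumerate([False] * 5 + [True] + [False] * 3):
    cwnd = aimd_update(cwnd, loss)
    print(f"RTT {rtt}: cwnd = {cwnd / MSS:.1f} segments")
```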

Leads to TCP Sawtooth Behavior
[Figure: congestion window over time t, showing the sawtooth pattern: slow start, additive growth, halving on a triple duplicate ACK, and a return to slow start after a timeout]

Receiver Window vs. Congestion Window
Flow control: keep a fast sender from overwhelming a slow receiver
Congestion control: keep a set of senders from overloading the network
Different concepts, but similar mechanisms:
- TCP flow control: receiver window
- TCP congestion control: congestion window
- Sender TCP window = min { congestion window, receiver window }
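The sender's effective window is simply the smaller of the two limits; a one-line sketch (names assumed):

```python
def effective_window(cwnd, rwnd):
    """Bytes the sender may have outstanding: limited by both the network and the receiver."""
    return min(cwnd, rwnd)

print(effective_window(cwnd=32_000, rwnd=16_000))   # receiver is the bottleneck here: 16000
```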

TCP Tahoe vs. TCP Reno
Two similar versions of TCP:
- TCP Tahoe (SIGCOMM'88 paper)
- TCP Reno (1990)
TCP Tahoe:
- Always repeat slow start after a loss
- Set the slow-start threshold to half of the congestion window
TCP Reno:
- Repeat slow start only after a timeout-based loss
- Divide the congestion window in half after a triple duplicate ACK
A sketch of the two loss responses follows below.
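A simplified Python sketch contrasting the two loss responses (slow-start and fast-recovery details are omitted; names and structure are illustrative):

```python
MSS = 1460  # bytes per segment (assumed for illustration)

def on_loss(version, cwnd, timeout):
    """Return (new_cwnd, new_ssthresh) after a loss event.

    timeout=True  -> loss detected by retransmission timeout
    timeout=False -> loss detected by a triple duplicate ACK
    """
    ssthresh = max(cwnd / 2, 2 * MSS)       # both versions set ssthresh to half the window
    if version == "tahoe" or timeout:
        return MSS, ssthresh                # restart slow start from one segment
    return ssthresh, ssthresh               # Reno: halve cwnd, continue in congestion avoidance

print(on_loss("tahoe", cwnd=20 * MSS, timeout=False))
print(on_loss("reno",  cwnd=20 * MSS, timeout=False))
print(on_loss("reno",  cwnd=20 * MSS, timeout=True))
```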

Discussion