David Wetherall Professor of Computer Science & Engineering Introduction to Computer Networks Fairness of Bandwidth Allocation (§6.3.1)

Topic What’s a “fair” bandwidth allocation? – The max-min fair allocation

Recall We want a good bandwidth allocation to be fair and efficient – Now we learn what fair means Caveat: in practice, efficiency is more important than fairness

Efficiency vs. Fairness Cannot always have both! – Example network with traffic A→B, B→C, and A→C – How much traffic can we carry? [Figure: A–B–C topology, each of the two links with capacity 1]

Efficiency vs. Fairness (2) If we care about fairness: – Give equal bandwidth to each flow – A→B: ½ unit, B→C: ½, and A→C: ½ – Total traffic carried is 1½ units [Figure: same A–B–C topology]

Efficiency vs. Fairness (3) If we care about efficiency: – Maximize total traffic in network – A→B: 1 unit, B→C: 1, and A→C: 0 – Total traffic rises to 2 units! [Figure: same A–B–C topology]
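
As a sanity check on the two allocations above, here is a minimal sketch (not from the slides) that adds up the load each allocation places on the two unit-capacity links A–B and B–C and totals the traffic carried; the flow names and the `check` helper are illustrative.

```python
# Minimal sketch: check the example's two allocations against the two
# unit-capacity links, A-B and B-C.

LINKS = {"A-B": 1.0, "B-C": 1.0}          # link capacities (units)
# Each flow lists the links it crosses.
FLOWS = {"A->B": ["A-B"], "B->C": ["B-C"], "A->C": ["A-B", "B-C"]}

def check(alloc):
    """Return (total carried traffic, True if no link is overloaded)."""
    load = {link: 0.0 for link in LINKS}
    for flow, rate in alloc.items():
        for link in FLOWS[flow]:
            load[link] += rate
    feasible = all(load[l] <= LINKS[l] + 1e-9 for l in LINKS)
    return sum(alloc.values()), feasible

fair = {"A->B": 0.5, "B->C": 0.5, "A->C": 0.5}
efficient = {"A->B": 1.0, "B->C": 1.0, "A->C": 0.0}
print(check(fair))       # (1.5, True)  -> fair allocation carries 1.5 units
print(check(efficient))  # (2.0, True)  -> efficient allocation carries 2 units
```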

The Slippery Notion of Fairness Why is “equal per flow” fair anyway? – A→C uses more network resources (two links) than A→B or B→C – Host A sends two flows, B sends one Not productive to seek exact fairness – More important to avoid starvation – “Equal per flow” is good enough

Generalizing “Equal per Flow” Bottleneck for a flow of traffic is the link that limits its bandwidth – Where congestion occurs for the flow – For A→C, link A–B is the bottleneck [Figure: A–B–C topology; link A–B has capacity 1, link B–C has capacity 10; A–B is the bottleneck]

Generalizing “Equal per Flow” (2) Flows may have different bottlenecks – For A→C, link A–B is the bottleneck – For B→C, link B–C is the bottleneck – Can no longer divide links equally … [Figure: same topology, capacities 1 and 10]

Max-Min Fairness Intuitively, flows bottlenecked on a link get an equal share of that link A max-min fair allocation is one in which increasing the rate of one flow requires decreasing the rate of another flow that is no larger – This “maximizes the minimum” flow

Max-Min Fairness (2) To find it given a network, imagine “pouring water into the network”: 1. Start with all flows at rate 0 2. Increase the flows until there is a new bottleneck in the network 3. Hold fixed the rate of the flows that are bottlenecked 4. Go to step 2 for any remaining flows

Max-Min Example Example: network with 4 flows, links of equal bandwidth – What is the max-min fair allocation?

Max-Min Example (2) When rate = 1/3, flows B, C, and D bottleneck link R4–R5 – Fix B, C, and D; continue to increase A

Max-Min Example (3) When rate = 2/3, flow A bottlenecks link R2–R3. Done.

Max-Min Example (4) End with A = 2/3 and B, C, D = 1/3, with links R2–R3 and R4–R5 full – Other links have extra capacity that can’t be used
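
The “pour water” procedure above can be sketched in a few lines of code. The topology below is an assumption chosen to match the example’s outcome: all links have capacity 1, flows B, C, and D cross link R4–R5, and flows A and B cross link R2–R3; the `max_min` helper is illustrative, not part of the course material.

```python
# Sketch of the "pour water" (water-filling) procedure from the slides,
# applied to an assumed topology that reproduces the example's result.

def max_min(flows, capacity):
    """flows: {flow: set of links it crosses}, capacity: {link: capacity}."""
    rate = {f: 0.0 for f in flows}
    frozen = set()                       # flows already at their bottleneck
    cap = dict(capacity)                 # remaining capacity per link
    while len(frozen) < len(flows):
        # Headroom per still-growing flow on each link it crosses.
        share = {}
        for link, c in cap.items():
            active = [f for f in flows if link in flows[f] and f not in frozen]
            if active:
                share[link] = c / len(active)
        if not share:
            break
        # The link with the smallest per-flow headroom is the next bottleneck.
        bottleneck = min(share, key=share.get)
        inc = share[bottleneck]
        for f in flows:
            if f not in frozen:
                rate[f] += inc
                for link in flows[f]:
                    cap[link] -= inc
        for f in flows:
            if f not in frozen and bottleneck in flows[f]:
                frozen.add(f)            # hold these flows fixed
    return rate

flows = {"A": {"R2-R3"}, "B": {"R2-R3", "R4-R5"}, "C": {"R4-R5"}, "D": {"R4-R5"}}
capacity = {"R2-R3": 1.0, "R4-R5": 1.0}
print(max_min(flows, capacity))   # ~{'A': 0.667, 'B': 0.333, 'C': 0.333, 'D': 0.333}
```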

Adapting over Time Allocation changes as flows start and stop [Figure: flow rates over time]

Adapting over Time (2) [Figure: flow rates over time] Flow 1 slows when Flow 2 starts; Flow 1 speeds up when Flow 2 stops; Flow 3’s limit is elsewhere

Topic Bandwidth allocation models – Additive Increase Multiplicative Decrease (AIMD) control law

Recall Want to allocate capacity to senders – Network layer provides feedback – Transport layer adjusts offered load – A good allocation is efficient and fair How should we perform the allocation? – Several different possibilities …

Bandwidth Allocation Models Open loop versus closed loop – Open: reserve bandwidth before use – Closed: use feedback to adjust rates Host versus Network support – Who sets/enforces allocations? Window versus Rate based – How is allocation expressed? TCP is closed-loop, host-driven, and window-based

Bandwidth Allocation Models (2) We’ll look at closed-loop, host-driven, and window-based allocation too Network layer returns feedback on current allocation to senders – At least tells if there is congestion Transport layer adjusts sender’s behavior via window in response – How senders adapt is a control law

Additive Increase Multiplicative Decrease AIMD is a control law hosts can use to reach a good allocation – Hosts additively increase rate while network is not congested – Hosts multiplicatively decrease rate when congestion occurs – Used by TCP Let’s explore the AIMD game …
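
A minimal sketch of the AIMD rule itself, with illustrative constants (an additive step of 1 unit per round and the factor-of-two decrease TCP uses); the `aimd_update` name is made up for this sketch.

```python
# Minimal sketch of the AIMD control law (constants are illustrative:
# additive step of 1 unit per round, multiplicative factor of 1/2 as in TCP).

ADDITIVE_STEP = 1.0
DECREASE_FACTOR = 0.5

def aimd_update(rate, congested):
    """One AIMD step: add when the network is not congested, halve when it is."""
    if congested:
        return rate * DECREASE_FACTOR     # multiplicative decrease
    return rate + ADDITIVE_STEP           # additive increase
```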

AIMD Game Hosts 1 and 2 share a bottleneck – But do not talk to each other directly Router provides binary feedback – Tells hosts if network is congested [Figure: Host 1 and Host 2 send through a bottleneck router into the rest of the network]

AIMD Game (2) Each point is a possible allocation [Figure: plane of Host 1 rate vs. Host 2 rate, marking the fair line, the efficient line, the optimal allocation, and the congested region]

AIMD Game (3) AI and MD move the allocation [Figure: same plane, with the fair line y = x, the efficient line x + y = 1, and the optimal allocation at their intersection; arrows show the additive increase and multiplicative decrease moves]

AIMD Game (4) Play the game! [Figure: allocation plane with the fair and efficient lines, the congested region, and a starting point]

AIMD Game (5) Always converges to a good allocation! [Figure: the AIMD trajectory from the starting point settles around the fair, efficient allocation]
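
A rough simulation of the game, assuming a bottleneck of capacity 1 that reports congestion whenever the combined rate exceeds it; the starting rates and step sizes are illustrative. Additive increase preserves the gap between the two rates and every multiplicative decrease halves it, so the hosts end up oscillating around the fair share regardless of where they start.

```python
# Sketch of the AIMD "game": two hosts sharing a bottleneck of capacity 1.0,
# each applying AIMD to its own rate using only the binary congestion signal.

CAPACITY = 1.0
STEP = 0.01          # additive increase per round (illustrative)
FACTOR = 0.5         # multiplicative decrease on congestion

def play(x, y, rounds=2000):
    for _ in range(rounds):
        congested = (x + y) > CAPACITY   # binary feedback from the router
        if congested:
            x *= FACTOR
            y *= FACTOR
        else:
            x += STEP
            y += STEP
    return x, y

# Start from an unfair point; the two rates come out nearly equal,
# somewhere inside the sawtooth band around the fair share of 0.5 each.
x, y = play(0.9, 0.1)
print(round(x, 3), round(y, 3))
```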

AIMD Sawtooth Produces a “sawtooth” pattern over time for the rate of each host – This is the TCP sawtooth (later) [Figure: host rate vs. time, climbing with additive increase and dropping with each multiplicative decrease]

AIMD Properties Converges to an allocation that is efficient and fair when hosts run it – Holds for more general topologies Other increase/decrease control laws do not! (Try AIAD, MIAD, or MIMD) Requires only binary feedback from the network

Feedback Signals Several possible signals, with different pros/cons – We’ll look at classic TCP that uses packet loss as a signal

Signal | Example Protocol | Pros / Cons
Packet loss | TCP NewReno, Cubic TCP (Linux) | Hard to get wrong; hear about congestion late
Packet delay | Compound TCP (Windows) | Hear about congestion early; need to infer congestion
Router indication | TCPs with Explicit Congestion Notification | Hear about congestion early; requires router support

Topic The story of TCP congestion control – Collapse, control, and diversification

Congestion Collapse in the 1980s Early TCP used a fixed-size sliding window (e.g., 8 packets) – Initially fine for reliability But something strange happened as the ARPANET grew – Links stayed busy but transfer rates fell by orders of magnitude!

Congestion Collapse (2) Queues became full, retransmissions clogged the network, and goodput fell [Figure: congestion collapse]

Van Jacobson (b. 1950) Widely credited with saving the Internet from congestion collapse in the late 80s – Introduced congestion control principles – Practical solutions (TCP Tahoe/Reno) Much other pioneering work: – Tools like traceroute, tcpdump, pathchar – IP header compression, multicast tools

TCP Tahoe/Reno Avoid congestion collapse without changing routers (or even receivers) Idea is to fix timeouts and introduce a congestion window (cwnd) over the sliding window to limit queues/loss TCP Tahoe/Reno implements AIMD by adapting cwnd using packet loss as the network feedback signal
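
One way to picture cwnd sitting “over” the sliding window: the sender keeps at most the smaller of the congestion window and the receiver’s flow-control window in flight. The sketch below is illustrative (the `can_send` helper and the 1460-byte segment size are assumptions, not TCP code).

```python
# Sketch of how cwnd layers on top of the existing sliding window: the sender
# may have at most min(cwnd, receiver_window) bytes unacknowledged in flight.

def can_send(bytes_in_flight, cwnd, receiver_window, segment_size=1460):
    """True if one more full-sized segment fits under both windows."""
    window = min(cwnd, receiver_window)       # effective send window
    return bytes_in_flight + segment_size <= window

print(can_send(bytes_in_flight=8 * 1460, cwnd=10 * 1460, receiver_window=64 * 1024))   # True
print(can_send(bytes_in_flight=10 * 1460, cwnd=10 * 1460, receiver_window=64 * 1024))  # False
```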

TCP Tahoe/Reno (2) TCP behaviors we will study: – ACK clocking – Adaptive timeout (mean and variance) – Slow-start – Fast Retransmission – Fast Recovery Together, they implement AIMD

TCP Timeline – Pre-history: origins of “TCP” (Cerf & Kahn, ’74), 3-way handshake (Tomlinson, ’75), TCP and IP (RFC 791/793, ’81), TCP/IP “flag day” (BSD Unix 4.2, ’83) – Congestion collapse observed, ’86 – Congestion control: TCP Tahoe (Jacobson, ’88), TCP Reno (Jacobson, ’90)

Topic The self-clocking behavior of sliding windows, and how it is used by TCP – The “ACK clock”

Sliding Window ACK Clock Each in-order ACK advances the sliding window and lets a new segment enter the network – ACKs “clock” data segments [Figure: data segments flow toward the receiver; ACKs flow back]

Benefit of ACK Clocking Consider what happens when sender injects a burst of segments into the network [Figure: fast link from the sender, then a slow (bottleneck) link where a queue builds]

Benefit of ACK Clocking (2) Segments are buffered and spread out on the slow link [Figure: segments “spread out” on the slow (bottleneck) link]

Benefit of ACK Clocking (3) ACKs maintain the spread back to the original sender [Figure: ACKs on the return path keep the spacing set by the slow link]

Benefit of ACK Clocking (4) Sender clocks new segments with the spread – Now sending at the rate of the bottleneck link without queuing! [Figure: new segments keep the spread; the queue no longer builds]

Benefit of ACK Clocking (5) Helps the network run with low levels of loss and delay! The network has smoothed out the burst of data segments; the ACK clock transfers this smooth timing back to the sender; subsequent data segments are not sent in bursts, so they do not queue up in the network
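
A small numeric sketch of this smoothing effect, with illustrative link speeds: a five-segment burst leaves the sender back to back, but the bottleneck serializes it, so the segments (and therefore the ACKs they trigger) end up spaced at the bottleneck rate.

```python
# Sketch of ACK clocking: a burst of equal-sized segments leaves the sender
# back to back, but the slow (bottleneck) link serializes them, so ACKs come
# back spaced at the bottleneck rate. Numbers are illustrative.

SEGMENT_BITS = 1500 * 8
FAST_BPS = 100e6          # sender's access link
SLOW_BPS = 10e6           # bottleneck link

fast_gap = SEGMENT_BITS / FAST_BPS     # spacing of the burst leaving the sender
slow_gap = SEGMENT_BITS / SLOW_BPS     # spacing after the bottleneck

# Departure times of a 5-segment burst at the sender (back to back on fast link)
burst = [i * fast_gap for i in range(5)]
# The bottleneck can only forward one segment every slow_gap seconds.
out = []
free_at = 0.0
for t in burst:
    start = max(t, free_at)            # wait in the queue if the link is busy
    free_at = start + slow_gap
    out.append(free_at)

gaps = [round((b - a) * 1000, 3) for a, b in zip(out, out[1:])]
print(gaps)   # [1.2, 1.2, 1.2, 1.2] ms: the spread the ACKs carry back
```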

TCP Uses ACK Clocking TCP uses a sliding window because of the value of ACK clocking Sliding window controls how many segments are inside the network – Called the congestion window, or cwnd – Rate is roughly cwnd/RTT TCP only sends small bursts of segments to let the network keep the traffic smooth
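
A quick back-of-the-envelope use of rate ≈ cwnd/RTT, with made-up numbers:

```python
# Back-of-the-envelope use of rate ~= cwnd / RTT (illustrative numbers).

cwnd_bytes = 20 * 1460        # 20 full-sized segments
rtt_seconds = 0.1             # 100 ms round-trip time
rate_bps = cwnd_bytes * 8 / rtt_seconds
print(f"{rate_bps / 1e6:.2f} Mbps")   # ~2.34 Mbps
```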