Project 2 – Implement Reliable Transport

Project 2 – Implement Reliable Transport
Bears-TP: a simple reliable transport protocol based on GBN
Receiver code is provided; you only implement the sender
Basic requirements (85%), deal with:
  Loss, corruption and reordering
  Duplication and delay
Performance requirements (15%):
  Fast retransmit
  Selective acknowledgement

Protocol
Packet types: start, data, ack, end, and sack
Sliding window size: 5 packets
Receiver returns cumulative acknowledgements
Example exchange:
  start|0|<data>|<checksum>
  data|1|<data>|<checksum>
  ack|2|<checksum>
  end|2|<data>|<checksum>
  ack|3|<checksum>
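
To make the format concrete, here is a minimal sketch of building and parsing packets in this type|seqno|data|checksum style. The helper names and the use of CRC32 are illustrative assumptions; the provided receiver/starter code defines the actual field layout and checksum.

    import binascii

    def make_packet(msg_type, seqno, data=b""):
        # Build a 'type|seqno|data|checksum' packet; assumes (illustratively)
        # that the checksum is CRC32 over everything before it.
        body = b"%s|%d|%s|" % (msg_type.encode(), seqno, data)
        checksum = str(binascii.crc32(body) & 0xFFFFFFFF).encode()
        return body + checksum

    def parse_ack(packet):
        # Split an 'ack|seqno|checksum' (or 'sack|...') packet into fields.
        fields = packet.split(b"|")
        return fields[0].decode(), int(fields[1]), fields[-1]

    # Example: the start packet and a cumulative ACK it might trigger.
    pkt = make_packet("start", 0, b"first chunk of the file")
    kind, ackno, _ = parse_ack(b"ack|1|12345")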

The sender should be able to send a file to the receiver:
  python Sender.py -f <input file>
Implement a Go-Back-N based sender
It should have a 500 ms retransmission timeout
It must not produce any console output
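
A rough sketch of the Go-Back-N sending loop these bullets describe, written against a plain UDP socket rather than the provided starter classes. The window size of 5, the 500 ms timeout, cumulative ACKs, and the no-console-output rule come from the spec; everything else (the function name, the raw-socket approach, the parse_ack helper from the sketch above) is an assumption for illustration.

    import socket

    WINDOW_SIZE = 5      # sliding window size from the spec
    TIMEOUT = 0.5        # 500 ms retransmission timeout

    def gbn_send(packets, dest, parse_ack):
        # packets: list of already-built packets (start, data..., end), in order
        # dest: (host, port) of the receiver
        # Note: per the spec, the sender must not print anything to the console.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT)
        base = 0          # lowest unacknowledged packet index
        next_seq = 0      # next packet index to transmit

        while base < len(packets):
            # Fill the window.
            while next_seq < len(packets) and next_seq < base + WINDOW_SIZE:
                sock.sendto(packets[next_seq], dest)
                next_seq += 1
            try:
                reply, _ = sock.recvfrom(4096)
                _, ackno, _ = parse_ack(reply)
                if ackno > base:      # cumulative ACK advances the window
                    base = ackno
            except socket.timeout:
                next_seq = base       # go back N: resend everything unACKed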

Test and Grading
We provide TestHarness.py for testing, and a similar version of TestHarness.py is used for grading
Tips:
  Start your project early
  You may start with "Stop-and-Wait"
  Write your own test cases

Logistics
GSIs: Peter, Radhika and Akshay
Additional OH for help with the project will be announced on Piazza
These slides, the spec, and the code will be online by midnight today
Due Nov 2, at noon

TCP: Congestion Control
CS 168, Fall 2014
Sylvia Ratnasamy
http://inst.eecs.berkeley.edu/~cs168
Material thanks to Ion Stoica, Scott Shenker, Jennifer Rexford, Nick McKeown, and many other colleagues

Administrivia
HW2 due at midnight (not noon) on Oct 16
Project 2 due on Nov 2 (not Oct 27)
Next lecture: midterm review
Today's material (CC) on the midterm? Very basic concepts, not details (roughly up to slide 26)

Last lecture
Flow control: adjusting the sending rate to keep from overwhelming a slow receiver
Today
Congestion control: adjusting the sending rate to keep from overloading the network

Statistical Multiplexing → Congestion
If two packets arrive at a router at the same time, the router will transmit one and buffer/drop the other
Internet traffic is bursty
If many packets arrive close in time, the router cannot keep up → it gets congested → packet delays and drops

A few design considerations, if you were starting with TCP:
How do we know the network is congested?
Who takes care of congestion? The network, the end hosts, both, ...?
How do we handle congestion?

TCP’s approach End hosts adjust sending rate Based on implicit feedback from network Not the only approach A consequence of history rather than planning

Some History: TCP in the 1980s
Sending rate only limited by flow control
Dropped packets → senders (repeatedly!) retransmit
Led to "congestion collapse" in Oct. 1986: throughput on the NSF network dropped from 32 Kbits/s to 40 bits/s
"Fixed" by Van Jacobson's development of TCP's congestion control (CC) algorithms

Van Jacobson Leader of the networking research group at LBL Many contributions to the early TCP/IP stack Most notably congestion control Creator of many widely used network tools Traceroute, tcpdump, pathchar, Berkeley Packet Filter Later Chief Scientist at Cisco, now Fellow at PARC

Jacobson's Approach
Extend TCP's existing window-based protocol, but adapt the window size in response to congestion
A pragmatic and effective solution: required no upgrades to routers or applications, just a patch of a few lines of code to TCP implementations
Extensively researched and improved upon, especially now with datacenters and cloud services

Three Issues to Consider Discovering the available (bottleneck) bandwidth Adjusting to variations in bandwidth Sharing bandwidth between flows

Abstract View
Ignore the internal structure of the router and model it as a single queue for a particular input-output pair
(Diagram: sending host A, buffer in router, receiving host B)

Discovering available bandwidth
Pick the sending rate to match the bottleneck bandwidth, without any a priori knowledge
Could be a gigabit link, could be a modem
(Diagram: A sends to B over a 100 Mbps bottleneck)

Adjusting to variations in bandwidth
Adjust rate to match the instantaneous bandwidth, assuming you have a rough idea of the bandwidth
(Diagram: A sends to B over a link of time-varying bandwidth BW(t))

Multiple flows and sharing bandwidth
Two issues:
  Adjust total sending rate to match bandwidth
  Allocation of bandwidth between flows
(Diagram: three flows A1→B1, A2→B2, A3→B3 sharing a link of bandwidth BW(t))

Reality
(Diagram: many flows crossing a network of 1 Gbps and 600 Mbps links)
Congestion control is a resource allocation problem involving many flows, many links, and complicated global dynamics

Possible Approaches (0) Send without care Many packet drops

Possible Approaches (0) Send without care (1) Reservations Pre-arrange bandwidth allocations Requires negotiation before sending packets Low utilization

Possible Approaches (0) Send without care (1) Reservations (2) Pricing Don’t drop packets for the high-bidders Requires payment model

Possible Approaches (0) Send without care (1) Reservations (2) Pricing (3) Dynamic Adjustment Hosts infer level of congestion; adjust Network reports congestion level to hosts; hosts adjust Combinations of the above Simple to implement but suboptimal, messy dynamics

Possible Approaches (0) Send without care (1) Reservations (2) Pricing (3) Dynamic Adjustment All three techniques have their place Generality of dynamic adjustment has proven powerful Doesn’t presume business model, traffic characteristics, application requirements But does assume good citizenship!

TCP’s Approach in a Nutshell TCP connection has window Controls number of packets in flight Sending rate: ~Window/RTT Vary window size to control sending rate

All These Windows…
Congestion Window (CWND): how many bytes can be sent without overflowing routers; computed by the sender using the congestion control algorithm
Flow control window, AdvertisedWindow (RWND): how many bytes can be sent without overflowing the receiver's buffers; determined by the receiver and reported to the sender
Sender-side window = minimum{CWND, RWND}
Assume for this lecture that RWND >> CWND
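
A back-of-the-envelope sketch of how these quantities combine with the rate ~ window/RTT rule from the previous slide; all the numbers are made up for illustration.

    MSS = 1000           # bytes of payload per segment
    rtt = 0.1            # seconds

    cwnd = 20            # congestion window, in segments (set by the CC algorithm)
    rwnd = 1000          # receiver's advertised window, in segments

    window = min(cwnd, rwnd)              # sender-side window
    rate_bps = window * MSS * 8 / rtt     # sending rate ~ window / RTT
    print(rate_bps / 1e6, "Mbps")         # 1.6 Mbps for these numbers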

Note This lecture will talk about CWND in units of MSS (Recall MSS: Maximum Segment Size, the amount of payload data in a TCP packet) This is only for pedagogical purposes Keep in mind that real implementations maintain CWND in bytes

Two Basic Questions How does the sender detect congestion? How does the sender adjust its sending rate? To address three issues Finding available bottleneck bandwidth Adjusting to bandwidth variations Sharing bandwidth

Detecting Congestion
Packet delays: tricky, a noisy signal (delay often varies considerably)
Routers tell end hosts when they're congested
Packet loss: a fail-safe signal that TCP already has to detect; complication: non-congestive loss (e.g., checksum errors)

Not All Losses the Same Duplicate ACKs: isolated loss Still getting ACKs Timeout: much more serious Not enough dupacks Must have suffered several losses Will adjust rate differently for each case

Rate Adjustment Basic structure: Upon receipt of ACK (of new data): increase rate Upon detection of loss: decrease rate How we increase/decrease the rate depends on the phase of congestion control we’re in: Discovering available bottleneck bandwidth vs. Adjusting to bandwidth variations

Bandwidth Discovery with Slow Start
Goal: estimate available bandwidth; start slow (for safety) but ramp up quickly (for efficiency)
Consider RTT = 100 ms, MSS = 1000 bytes:
  Window size to fill 1 Mbps of BW = 12.5 packets
  Window size to fill 1 Gbps = 12,500 packets
Either is possible!
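
The slide's arithmetic reproduced as a sketch: the window needed to fill a pipe is the bandwidth-delay product divided by the segment size.

    MSS = 1000 * 8           # 1000-byte segments, in bits
    RTT = 0.1                # 100 ms

    for bw in (1e6, 1e9):                    # 1 Mbps and 1 Gbps
        window = bw * RTT / MSS              # segments needed to fill the pipe
        print(bw, "bps ->", window, "packets")
    # 1 Mbps -> 12.5 packets, 1 Gbps -> 12500 packets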

“Slow Start” Phase Sender starts at a slow rate but increases exponentially until first loss Start with a small congestion window Initially, CWND = 1 So, initial sending rate is MSS/RTT Double the CWND for each RTT with no loss MSS vs MTU??

Slow Start in Action
For each RTT: double CWND
Simpler implementation: for each ACK, CWND += 1
Linear increase per ACK (CWND + 1) → exponential increase per RTT (2 × CWND)

Slow Start in Action
(Diagram: Src and Dest exchanging data packets and ACKs; each returning ACK triggers CWND += 1, so CWND grows 1, 2, 4, 8 over successive RTTs)
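
A tiny sketch of why the per-ACK rule produces per-RTT doubling, assuming every packet sent in an RTT is ACKed within that RTT and nothing is lost.

    cwnd = 1
    for rtt in range(4):
        acks = cwnd          # one ACK comes back for each packet sent this RTT
        cwnd += acks         # +1 per ACK, so the window doubles
        print("after RTT", rtt + 1, ": cwnd =", cwnd)
    # after RTT 1: cwnd = 2 ... after RTT 4: cwnd = 16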

Adjusting to Varying Bandwidth Slow start gave an estimate of available bandwidth Now, want to track variations in this available bandwidth, oscillating around its current value Repeated probing (rate increase) and backoff (decrease) TCP uses: “Additive Increase Multiplicative Decrease” (AIMD) We’ll see why shortly…

AIMD
Additive increase: window grows by one MSS for every RTT with no loss
  For each successful RTT, CWND = CWND + 1
  Simple implementation: for each ACK, CWND = CWND + 1/CWND
Multiplicative decrease: on loss of a packet, divide the congestion window in half
  On loss, CWND = CWND/2
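
The same rules written out as a small sketch, with CWND in units of MSS as in this lecture (real implementations track bytes); the function names are illustrative.

    def aimd_on_ack(cwnd):
        # Additive increase: +1 MSS per RTT, approximated as +1/CWND per ACK.
        return cwnd + 1.0 / cwnd

    def aimd_on_loss(cwnd):
        # Multiplicative decrease: halve the window.
        return cwnd / 2.0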

Leads to the TCP "Sawtooth"
(Diagram: window vs. time; an exponential "slow start", then a sawtooth as the window ramps up linearly and is halved at each loss)

Slow-Start vs. AIMD When does a sender stop Slow-Start and start Additive Increase? Introduce a “slow start threshold” (ssthresh) Initialized to a large value On timeout, ssthresh = CWND/2 When CWND = ssthresh, sender switches from slow-start to AIMD-style increase

Why AIMD?

Recall: Three Issues
Discovering the available (bottleneck) bandwidth → Slow Start
Adjusting to variations in bandwidth → AIMD
Sharing bandwidth between flows

Goals for bandwidth sharing Efficiency: High utilization of link bandwidth Fairness: Each flow gets equal share

Why AIMD?
Some rate adjustment options: every RTT, we can
  Multiplicative increase or decrease: CWND → a × CWND
  Additive increase or decrease: CWND → CWND + b
Four alternatives:
  AIAD: gentle increase, gentle decrease
  AIMD: gentle increase, drastic decrease
  MIAD: drastic increase, gentle decrease
  MIMD: drastic increase and decrease

Simple Model of Congestion Control
Two users with rates x1 and x2 share a link of capacity 1
Congestion when x1 + x2 > 1; unused capacity when x1 + x2 < 1; fair when x1 = x2
(Diagram: plane of user 1's rate x1 vs. user 2's rate x2, with the efficiency line x1 + x2 = 1 and the fairness line x1 = x2; above the efficiency line is congested, below it is inefficient)

Example
(0.5, 0.5): efficient (x1 + x2 = 1) and fair
(0.7, 0.5): congested (x1 + x2 = 1.2)
(0.2, 0.5): inefficient (x1 + x2 = 0.7)
(0.7, 0.3): efficient (x1 + x2 = 1) but not fair

AIAD
Increase: x → x + aI; Decrease: x → x - aD
Does not converge to fairness: starting from (x1, x2), a decrease followed by an increase gives (x1 - aD + aI, x2 - aD + aI), so the gap x1 - x2 never shrinks
(Diagram: the operating point moves parallel to the fairness line, never approaching it)

AIAD Sharing Dynamics
(Diagram: rates x1 and x2 of two flows, A→B and D→E, over time; the rates oscillate but the difference between them stays constant)

MIMD
Increase: x → bI × x; Decrease: x → bD × x
Does not converge to fairness: starting from (x1, x2), a decrease followed by an increase gives (bI·bD·x1, bI·bD·x2), so the ratio x1/x2 never changes
(Diagram: the operating point moves along a line through the origin, never approaching the fairness line)

AIMD
Increase: x → x + aI; Decrease: x → bD × x
Converges to fairness: starting from (x1, x2), a decrease followed by an increase gives (bD·x1 + aI, bD·x2 + aI); every decrease shrinks the gap between the rates by a factor of bD, so the operating point approaches the fairness line
(Diagram: the trajectory zig-zags toward the intersection of the efficiency and fairness lines)

AIMD Sharing Dynamics
(Diagram: rates x1 and x2 of two flows, A→B and D→E, over time; the rates equalize and oscillate around the fair share of about 50 pkts/sec each)
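
A small simulation sketch of the argument above: two flows sharing a unit-capacity link, starting from very unequal rates, end up with roughly equal rates under AIMD. The increase step aI and decrease factor bD are illustrative values, not anything TCP prescribes.

    aI, bD = 0.01, 0.5                  # illustrative AIMD parameters
    x1, x2 = 0.1, 0.8                   # start from very unequal rates

    for _ in range(2000):
        if x1 + x2 > 1.0:               # congestion: both flows halve
            x1, x2 = bD * x1, bD * x2
        else:                           # spare capacity: both flows add aI
            x1, x2 = x1 + aI, x2 + aI

    print(round(x1, 3), round(x2, 3))   # roughly equal: the gap shrinks by bD at every loss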

TCP Congestion Control Details

Implementation
State at sender:
  CWND (initialized to a small constant)
  ssthresh (initialized to a large constant)
  [Also dupACKcount and timer, as before]
Events:
  ACK (new data)
  dupACK (duplicate ACK for old data)
  Timeout

Event: ACK (new data)
If CWND < ssthresh: CWND += 1
Since CWND packets are sent per RTT, after one RTT with no drops: CWND = 2 × CWND

Event: ACK (new data)
If CWND < ssthresh: CWND += 1 (slow start phase)
Else: CWND = CWND + 1/CWND ("congestion avoidance" phase, additive increase)
Since CWND packets are sent per RTT, after one RTT with no drops: CWND = CWND + 1
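
The two regimes as one small sketch (CWND in MSS units, function name illustrative):

    def on_new_ack(cwnd, ssthresh):
        if cwnd < ssthresh:
            return cwnd + 1             # slow start: doubles CWND every RTT
        return cwnd + 1.0 / cwnd        # congestion avoidance: +1 MSS per RTT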

Event: Timeout
On timeout:
  ssthresh ← CWND/2
  CWND ← 1

Event: dupACK
dupACKcount++
If dupACKcount == 3: /* fast retransmit */
  ssthresh = CWND/2
  CWND = CWND/2
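
A sketch of the timeout and dupACK events from the last two slides, keeping state in a plain dict and taking a retransmit callback for illustration; fast recovery (introduced shortly) is deliberately omitted here.

    def on_timeout(state):
        state["ssthresh"] = state["cwnd"] / 2
        state["cwnd"] = 1               # drop back to slow start
        state["dupACKcount"] = 0

    def on_dup_ack(state, retransmit):
        state["dupACKcount"] += 1
        if state["dupACKcount"] == 3:   # fast retransmit
            state["ssthresh"] = state["cwnd"] / 2
            state["cwnd"] = state["cwnd"] / 2
            retransmit()                # resend the missing segment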

Example
(Diagram: window vs. time. On a timeout, ssthresh is set to half the previous CWND; slow start then runs until CWND reaches ssthresh, i.e. half the previous CWND, after which additive increase takes over. A fast retransmission is also marked.)
Slow-start restart: go back to CWND = 1 MSS, but take advantage of knowing the previous value of CWND

One Final Phase: Fast Recovery The problem: congestion avoidance too slow in recovering from an isolated loss

Example Consider a TCP connection with: CWND=10 packets Last ACK was for packet # 101 i.e., receiver expecting next packet to have seq. no. 101 10 packets [101, 102, 103,…, 110] are in flight Packet 101 is dropped What ACKs do they generate? And how does the sender respond?

Timeline
ACK 101 (due to 102)  cwnd=10  dupACK#1 (no xmit)
ACK 101 (due to 103)  cwnd=10  dupACK#2 (no xmit)
ACK 101 (due to 104)  cwnd=10  dupACK#3 → RETRANSMIT 101, ssthresh=5, cwnd=5
ACK 101 (due to 105)  cwnd=5 + 1/5 (no xmit)
ACK 101 (due to 106)  cwnd=5 + 2/5 (no xmit)
ACK 101 (due to 107)  cwnd=5 + 3/5 (no xmit)
ACK 101 (due to 108)  cwnd=5 + 4/5 (no xmit)
ACK 101 (due to 109)  cwnd=5 + 5/5 (no xmit)
ACK 101 (due to 110)  cwnd=6 + 1/5 (no xmit)
ACK 111 (due to 101)  → only now can we transmit new packets
Plus, no packets in flight, so ACK "clocking" (to increase CWND) stalls for another RTT

Solution: Fast Recovery
Idea: grant the sender temporary "credit" for each dupACK so as to keep packets in flight
If dupACKcount == 3:
  ssthresh = cwnd/2
  cwnd = ssthresh + 3
While in fast recovery:
  cwnd = cwnd + 1 for each additional duplicate ACK
Exit fast recovery after receiving a new ACK:
  set cwnd = ssthresh
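
The earlier dupACK sketch extended with fast recovery as described on this slide, again with state in a plain dict and illustrative names:

    def on_dup_ack(state, retransmit):
        state["dupACKcount"] += 1
        if state["dupACKcount"] == 3:               # enter fast recovery
            state["ssthresh"] = state["cwnd"] / 2
            state["cwnd"] = state["ssthresh"] + 3   # credit for the 3 dupACKs
            state["in_fast_recovery"] = True
            retransmit()
        elif state.get("in_fast_recovery"):
            state["cwnd"] += 1                      # credit for each extra dupACK

    def on_new_ack(state):
        if state.get("in_fast_recovery"):           # exit fast recovery
            state["cwnd"] = state["ssthresh"]
            state["in_fast_recovery"] = False
        state["dupACKcount"] = 0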

Example Consider a TCP connection with: CWND=10 packets Last ACK was for packet # 101 i.e., receiver expecting next packet to have seq. no. 101 10 packets [101, 102, 103,…, 110] are in flight Packet 101 is dropped

Timeline
ACK 101 (due to 102)  cwnd=10  dup#1
ACK 101 (due to 103)  cwnd=10  dup#2
ACK 101 (due to 104)  cwnd=10  dup#3 → REXMIT 101, ssthresh=5, cwnd=8 (5+3)
ACK 101 (due to 105)  cwnd=9  (no xmit)
ACK 101 (due to 106)  cwnd=10 (no xmit)
ACK 101 (due to 107)  cwnd=11 (xmit 111)
ACK 101 (due to 108)  cwnd=12 (xmit 112)
ACK 101 (due to 109)  cwnd=13 (xmit 113)
ACK 101 (due to 110)  cwnd=14 (xmit 114)
ACK 111 (due to 101)  cwnd=5  (xmit 115) → exiting fast recovery; packets 111-114 already in flight
ACK 112 (due to 111)  cwnd=5 + 1/5 → back in congestion avoidance

TCP State Machine
(Diagram: three states, slow start, congestion avoidance, and fast recovery.
From slow start: new ACK stays in slow start; cwnd > ssthresh moves to congestion avoidance; dupACK=3 moves to fast recovery; timeout restarts slow start.
From congestion avoidance: new ACK stays in congestion avoidance; dupACK=3 moves to fast recovery; timeout moves back to slow start.
From fast recovery: dupACK stays in fast recovery; new ACK moves to congestion avoidance; timeout moves back to slow start.)

TCP Flavors
TCP-Tahoe: CWND = 1 on triple dupACK
TCP-Reno: CWND = 1 on timeout; CWND = CWND/2 on triple dupACK
TCP-newReno: TCP-Reno + improved fast recovery
TCP-SACK: incorporates selective acknowledgements
Our default assumption: the Reno-style behavior described in this lecture (CWND = 1 on timeout; fast retransmit and fast recovery on triple dupACK)