TCP and Congestion Control


1 TCP and Congestion Control
Nick Feamster, Computer Networking I, Spring 2013

2 Sequence Numbers How large do sequence numbers need to be?
Must be able to detect wrap-around. Depends on sender/receiver window size. E.g., max seq = 7, send win = recv win = 7: if packets 0..6 are sent successfully and all acks are lost, the receiver expects 7,0..5, but the sender retransmits the old 0..6! Max sequence space must be ≥ send window + recv window.

3 Sequence Numbers 32 bits, unsigned → for bytes, not packets!
Circular comparison. Why so big? For the sliding window, must have |Sequence Space| > |Sending Window| + |Receiving Window|: no problem. Also want to guard against stray packets. With IP, packets have a maximum lifetime of 120 s, and the sequence number would wrap around within that time at roughly 286 Mbit/s (about 36 MB/s), well within modern link speeds (hmm!). (Figure: circular comparison of two sequence numbers a and b modulo the maximum, showing the regions where a < b and where b < a.)
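
The circular comparison can be done with modular arithmetic. A minimal sketch in Python (the names and the exact test are illustrative, not taken from the slides; TCP stacks typically use an equivalent signed 32-bit subtraction):

    # Compare two 32-bit sequence numbers circularly.
    # a is "before" b if the forward distance from a to b, modulo 2**32,
    # is less than half the sequence space.
    SEQ_SPACE = 2 ** 32

    def seq_lt(a: int, b: int) -> bool:
        """True if sequence number a logically precedes b."""
        return a != b and (b - a) % SEQ_SPACE < SEQ_SPACE // 2

    # 0xFFFFFFF0 precedes 0x00000010 even though it is numerically larger,
    # because the sequence space has wrapped around.
    assert seq_lt(0xFFFFFFF0, 0x00000010)
    assert not seq_lt(0x00000010, 0xFFFFFFF0)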

4 TCP Flow Control TCP is a sliding window protocol
For window size n, can send up to n bytes without receiving an acknowledgement. When the data is acknowledged, the window slides forward. Each packet advertises a window size, indicating the number of bytes the receiver has space for. Original TCP always sent the entire advertised window, but receiver buffer space != available network capacity! Congestion control now limits the usable window to min(receiver window, congestion window).
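
A minimal sketch (not actual TCP code) of how a sender might compute how many new bytes it may put in flight once flow control and congestion control are combined; the variable names mirror the usual snd_una/snd_nxt convention but are assumptions here:

    def usable_window(snd_una: int, snd_nxt: int, rwnd: int, cwnd: int) -> int:
        """Bytes the sender may still transmit.

        snd_una: oldest unacknowledged sequence number
        snd_nxt: next sequence number to send
        rwnd:    receiver-advertised window (flow control)
        cwnd:    congestion window (congestion control)
        """
        in_flight = snd_nxt - snd_una     # bytes sent but not yet acked
        window = min(rwnd, cwnd)          # effective window
        return max(0, window - in_flight)

    # Example: 4000 bytes in flight, receiver allows 16000, cwnd allows 8000:
    print(usable_window(snd_una=1000, snd_nxt=5000, rwnd=16000, cwnd=8000))  # 4000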

5 Window Flow Control: Send Side
(Figure: the sender's byte stream divided into four regions: sent and acked, sent but not acked, next to be sent, and not yet sent.)

6 Window Flow Control: Send Side
(Figure: TCP header fields of a packet sent and a packet received: source port, destination port, sequence number, acknowledgment, header length/flags, window, checksum, urgent pointer, options. Below, the send buffer after an application write, divided into acknowledged, sent, to be sent, and outside the window.)

7 Window Flow Control: Receive Side
What should the receiver do? (Figure: the receive buffer divided into data acked but not yet delivered to the user and data not yet acked; the advertised window covers the remaining new buffer space.)

8 Internet Pipes? How should you control the faucet?
Too fast: the sink overflows. Too slow: what happens? Goals: fill the bucket as quickly as possible and avoid overflowing the sink. Solution: watch the sink.

9 The Problem of Congestion
What is congestion? Load is higher than capacity. What do IP routers do? Drop the excess packets. Why is this bad? Wasted bandwidth for retransmissions, leading to "congestion collapse": an increase in load that results in a decrease in useful work done. (Figure: goodput vs. load, with goodput collapsing once load exceeds capacity.)

10 Congestion Different sources compete for resources inside network
(Figure: sources on 10 Mbps and 100 Mbps links sharing a 1.5 Mbps bottleneck link.) Different sources compete for resources inside the network. Why is it a problem? Sources are unaware of the current state of the resource, and sources are unaware of each other. Manifestations: lost packets (buffer overflow at routers) and long delays (queuing in router buffers). Can result in throughput less than the bottleneck link (1.5 Mbps for the above topology), a.k.a. congestion collapse.

11 No Problem Under Circuit Switching
Source establishes a connection to the destination. Nodes reserve resources for the connection. The circuit is rejected if the resources aren't available. Cannot have more than the network can handle.

12 Congestion is Unavoidable
Two packets arrive at the same time: the node can only transmit one, and must either buffer or drop the other. If many packets arrive in a short period of time, the node cannot keep up with the arriving traffic, and the buffer may eventually overflow.

13 The Problem of Congestion
What is congestion? Load is higher than capacity. What do IP routers do? Drop the excess packets. Why is this bad? Wasted bandwidth for retransmissions, leading to "congestion collapse": an increase in load that results in a decrease in useful work done. (Figure: goodput vs. load, with goodput collapsing once load exceeds capacity.)

14 Congestion Collapse Definition: an increase in network load results in a decrease of useful work done. Many possible causes. (1) Spurious retransmissions of packets still in flight ("classical" congestion collapse). How can this happen with packet conservation? RTT increases! Solution: better timers and TCP congestion control. (2) Undelivered packets: packets consume resources and are then dropped elsewhere in the network. Solution: congestion control for ALL traffic.

15 Congestion Control and Avoidance
A mechanism that uses network resources efficiently, preserves fair network resource allocation, and prevents or avoids collapse. Congestion collapse is not just a theory: it has been frequently observed in many networks.

16 Congestion Control Approaches
Two broad approaches. End-to-end congestion control: no explicit feedback from the network; congestion is inferred from end-system observed loss and delay; the approach taken by TCP. Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate the sender should send at. Problem: makes routers complicated.

17 Example: TCP Congestion Control
Very simple mechanisms in the network: FIFO scheduling with a shared buffer pool, and feedback through packet drops. TCP interprets packet drops as signs of congestion and slows down. This is an assumption: packet drops are not a sign of congestion in all networks, e.g. wireless networks. TCP also periodically probes the network to check whether more bandwidth has become available.

18 Inferring From Implicit Feedback
(Figure: the network shown as a cloud marked "?".) What does the end host see? What can the end host change?

19 Where Congestion Happens: Links
Simple resource allocation: FIFO queue & drop-tail. Access to the bandwidth: a first-in first-out queue, with packets transmitted in the order they arrive. Access to the buffer space: drop-tail queuing; if the queue is full, drop the incoming packet.
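
As an illustration (not router code), drop-tail FIFO queuing can be sketched in a few lines of Python; the class name and capacity value are made up for the example:

    from collections import deque

    class DropTailQueue:
        """FIFO queue with drop-tail: arrivals are dropped when the buffer is full."""
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.buffer = deque()

        def enqueue(self, packet) -> bool:
            if len(self.buffer) >= self.capacity:
                return False            # queue full: drop the incoming packet
            self.buffer.append(packet)  # otherwise store it in arrival order
            return True

        def dequeue(self):
            return self.buffer.popleft() if self.buffer else None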

20 How it Looks to the End Host
Packet delay: the packet experiences high delay. Packet loss: the packet gets dropped along the way. How does the TCP sender learn this? Delay: from its round-trip time estimate. Loss: from a timeout or from duplicate acknowledgments.
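
A minimal sketch of the kind of smoothed RTT estimator TCP uses to set its retransmission timeout. The EWMA structure and the 0.125/0.25 weights follow the usual recommendation, but treat the details as illustrative rather than as any particular stack's implementation:

    class RttEstimator:
        ALPHA = 0.125   # weight of a new sample in the smoothed RTT
        BETA = 0.25     # weight of a new sample in the RTT variation

        def __init__(self):
            self.srtt = None
            self.rttvar = None

        def sample(self, rtt: float) -> float:
            """Feed one RTT measurement (seconds); return a retransmission timeout."""
            if self.srtt is None:
                self.srtt, self.rttvar = rtt, rtt / 2
            else:
                self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
                self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
            return self.srtt + 4 * self.rttvar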

21 What Can the End Host Do?
Upon detecting congestion: decrease the sending rate (e.g., divide it in half); the end host does its part to alleviate the congestion. But what if conditions change? Suppose more bandwidth becomes available; it would be a shame to stay at a low sending rate. Upon not detecting congestion: increase the sending rate, a little at a time, and see if the packets are successfully delivered.

22 TCP Congestion Window Each TCP sender maintains a congestion window
The maximum number of bytes to have in transit, i.e., the number of bytes still awaiting acknowledgments. Adapting the congestion window: decrease upon losing a packet (backing off), increase upon success (optimistically exploring), always struggling to find the right transfer rate. Both good and bad. Pro: avoids needing explicit feedback from the network. Con: under-shooting and over-shooting the rate.

23 Additive Increase, Multiplicative Decrease
How much to increase and decrease? Increase linearly, decrease multiplicatively: a necessary condition for the stability of TCP. The consequences of an over-sized window are much worse than those of an under-sized window: an over-sized window means packets dropped and retransmitted, while an under-sized window only means somewhat lower throughput. Multiplicative decrease: on loss of a packet, divide the congestion window in half. Additive increase: on success for the last window of data, increase linearly.
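
A minimal AIMD sketch in segment (MSS) units, purely illustrative: additive increase of one segment per window of acknowledged data, multiplicative decrease of one half on loss.

    def aimd_update(cwnd: float, loss: bool, mss: float = 1.0) -> float:
        if loss:
            return max(mss, cwnd / 2)   # multiplicative decrease, never below 1 MSS
        return cwnd + mss               # additive increase per round trip

    # Example trace: the window climbs linearly and is halved on each loss,
    # producing the sawtooth shown on the next slide.
    cwnd = 1.0
    for loss in [False, False, False, True, False, False]:
        cwnd = aimd_update(cwnd, loss)
        print(cwnd)                     # 2, 3, 4, 2, 3, 4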

24 Leads to the TCP “Sawtooth”
(Figure: congestion window vs. time; the window grows linearly, is halved at each loss, and repeats, giving the sawtooth.)

25 Practical Details
Congestion window: represented in bytes, not in packets (why?); packets have MSS (Maximum Segment Size) bytes. Increasing the congestion window: increase by MSS on success for the last window of data. Decreasing the congestion window: never drop the congestion window below 1 MSS.
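
Because the window is kept in bytes, the "one MSS per window of data" increase is commonly spread across the individual ACKs in that window, so each ACK adds roughly MSS*MSS/cwnd bytes. A sketch of that usual rule (not any specific stack's code; the MSS value is just a typical one):

    MSS = 1460  # bytes

    def on_ack(cwnd: int) -> int:
        # a full window of ACKs adds about one MSS in total
        return cwnd + max(1, MSS * MSS // cwnd)

    def on_loss(cwnd: int) -> int:
        return max(MSS, cwnd // 2)      # halve, but never drop below 1 MSS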

26 Receiver Window vs. Congestion Window
Flow control: keep a fast sender from overwhelming a slow receiver. Congestion control: keep a set of senders from overloading the network. Different concepts, but similar mechanisms. TCP flow control: the receiver window. TCP congestion control: the congestion window. TCP window: min{congestion window, receiver window}.

27 How Should a New Flow Start
Need to start with a small CWND to avoid overloading the network. But it could take a long time to get started! (Figure: window vs. time, growing only linearly from a small initial value.)

28 “Slow Start” Phase Start with a small congestion window
Initially, CWND is 1 Max Segment Size (MSS), so the initial sending rate is MSS/RTT. That could be pretty wasteful: it might be much less than the actual bandwidth, and a linear increase takes a long time to accelerate. Slow-start phase (really "fast start"): the sender starts at a slow rate (hence the name), but increases the rate exponentially until the first loss event.
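
A sketch of the exponential ramp: during slow start the window grows by one MSS per ACK, so it doubles every round-trip time until the first loss (threshold handling is deliberately omitted here, and the loop is only a simulation of ACK arrivals):

    MSS = 1460

    def slow_start_on_ack(cwnd: int) -> int:
        return cwnd + MSS               # +1 MSS per ACK => x2 per RTT

    cwnd = MSS
    for rtt in range(4):
        acks_this_rtt = cwnd // MSS     # one ACK per segment in flight
        for _ in range(acks_this_rtt):
            cwnd = slow_start_on_ack(cwnd)
        print(rtt + 1, cwnd // MSS)     # 2, 4, 8, 16 segments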

29 Slow Start in Action
Double CWND per round-trip time. (Figure: source and destination exchanging data (D) and ACKs (A) over successive round trips, with 1, 2, 4, then 8 segments in flight.)

30 Slow Start and the TCP Sawtooth
(Figure: window vs. time, with an exponential "slow start" ramp followed by the loss-driven sawtooth.) Why is it called slow start? Because TCP originally had no congestion control mechanism: the source would just start by sending a whole receiver window's worth of data, and ramping up from one segment is "slow" by comparison.

31 Two Kinds of Loss in TCP
Timeout: packet n is lost and detected via a timeout, e.g. because all packets in flight were lost. After the timeout, blasting away for the entire CWND would trigger a very large burst of traffic, so it is better to start over with a low CWND. Triple duplicate ACK: packet n is lost, but packets n+1, n+2, etc. arrive; the receiver sends duplicate acknowledgments, and the sender retransmits packet n quickly. Do a multiplicative decrease and keep going.
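
A Reno-style sketch of the two reactions (simplified: no fast-recovery window inflation, no timer backoff), just to contrast their severity; both helpers return (new_cwnd, new_ssthresh):

    MSS = 1460

    def on_triple_dup_ack(cwnd: int, ssthresh: int) -> tuple[int, int]:
        ssthresh = max(2 * MSS, cwnd // 2)
        return ssthresh, ssthresh       # multiplicative decrease, keep going

    def on_timeout(cwnd: int, ssthresh: int) -> tuple[int, int]:
        ssthresh = max(2 * MSS, cwnd // 2)
        return MSS, ssthresh            # start over from 1 MSS and slow-start again

After a timeout, slow start then runs only until the window reaches ssthresh (half the previous cwnd), which is the behaviour described on the next slide.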

32 Repeating Slow Start After Timeout
(Figure: window vs. time around a timeout; slow start operates until the window reaches half of the previous cwnd, then linear growth resumes.) Slow-start restart: go back to a CWND of 1, but take advantage of knowing the previous value of CWND.

33 Repeating Slow Start After Idle Period
Suppose a TCP connection goes idle for a while, e.g. a Telnet session where you don't type for an hour. Eventually the network conditions change: maybe many more flows are traversing the link, e.g. maybe everybody has come back from lunch! It is dangerous to start transmitting at the old rate: the previously-idle TCP sender might blast the network, causing excessive congestion and packet loss. So, some TCP implementations repeat slow start: slow-start restart after an idle period.

34 TCP Achieves Some Notion of Fairness
Effective utilization is not the only goal; we also want to be fair to the various flows, but what the heck does that mean? Simple definition: equal shares of the bandwidth, i.e. N flows each get 1/N of the bandwidth? But what if the flows traverse different paths? E.g., bandwidth ends up shared in a way that depends on RTT, with shorter-RTT flows getting more.

35 What About Cheating? Some folks are more fair than others
Ways to cheat: running multiple TCP connections in parallel, modifying the TCP implementation in the OS, or using the User Datagram Protocol. What is the impact? The good guys slow down to make room for you, and you get an unfair share of the bandwidth. Possible solutions? Routers detect cheating and drop the excess packets? Peer pressure? ???
