12. TCP Flow Control and Congestion Control – Part 1
- Congestion control – general principles
- TCP congestion control overview
- TCP congestion control specifics
Roch Guerin (with adaptations from Jon Turner and John DeHart, and material from Kurose and Ross)
TCP Flow Control
- Receive side of a TCP connection has a receive buffer
- If the receiver's application does not read data fast enough, the buffer may fill up
- The TCP flow control mechanism prevents the sender from sending more data than the receiver can store
  - essentially a speed-matching service that prevents the sender from going too fast for the receiver
How TCP Flow Control Works
- To simplify the discussion, pretend the TCP receiver discards out-of-order segments
- Spare room in buffer: RcvWindow = RcvBuffer – (LastByteRcvd – LastByteRead)
- Receiver sends the value of RcvWindow in the RcvWindow field of each TCP segment
- Sender never allows the number of unacknowledged bytes to exceed RcvWindow
  - EffectiveWindow = RcvWindow – (LastByteSent – LastByteAcked)
  - this guarantees the receive buffer does not overflow
- To avoid deadlock, the sender continues to send one-byte segments when RcvWindow = 0
  - this ensures the sender is informed when RcvWindow > 0
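To make the bookkeeping concrete, here is a minimal Python sketch of the two window computations above; the buffer size and byte counters are illustrative values, not from the slides.

```python
# Minimal sketch of the window computations above; the buffer size and
# byte counters below are illustrative values, not from the slides.

RCV_BUFFER = 4096  # assumed receive buffer size, in bytes

def rcv_window(last_byte_rcvd, last_byte_read):
    # Spare room in the buffer, advertised to the sender
    return RCV_BUFFER - (last_byte_rcvd - last_byte_read)

def effective_window(rcv_win, last_byte_sent, last_byte_acked):
    # How many more bytes the sender may put in flight
    return rcv_win - (last_byte_sent - last_byte_acked)

# Example: 1000 bytes buffered but unread, 500 bytes in flight
w = rcv_window(last_byte_rcvd=3000, last_byte_read=2000)               # 3096
print(effective_window(w, last_byte_sent=5500, last_byte_acked=5000))  # 2596
```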
Silly Window Syndrome
Two possible causes
- Receiver is busy and keeps shrinking the receive window, which forces the sender to use small segments
- Sender greedily transmits as soon as it has some data
Outcome
- Small, inefficient packets keep being used
Silly Window Syndrome Solutions
Receiver-side solution
- Advertised receive window is either zero or at least some minimum size (usually one MSS)
- This delays "acknowledgements" but does not hurt sender efficiency much (e.g., sending 1,500 bytes every 75 ms rather than 20 bytes every 1 ms)
Sender-side solution
- Nagle's algorithm
Nagle's Algorithm
How long should the sender delay sending data?
- Too long: hurts interactive applications
- Too short: poor network utilization
Nagle's strategy, when new data is ready:
- If there is an MSS worth of data (and the window is open): send it
- Else, if there are ACKs pending: buffer the data until an ACK arrives
- Else: send the data immediately
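A short Python sketch of Nagle's decision rule as stated above; the send callback and byte buffer are hypothetical placeholders, not a real socket API.

```python
# Hypothetical sketch of Nagle's decision rule; `send` and the byte
# buffer are placeholders, not a real socket API.

def on_new_data(data, buffered, window, mss, pending_acks, send):
    buffered.extend(data)
    if len(buffered) >= mss and window >= mss:
        send(bytes(buffered[:mss]))   # a full segment fits: send it now
        del buffered[:mss]
    elif not pending_acks:
        send(bytes(buffered))         # idle connection: send what we have
        buffered.clear()
    # else: hold the data until an outstanding ACK returns

buf = bytearray()
on_new_data(b"x" * 100, buf, window=3000, mss=1460,
            pending_acks=True, send=print)  # small data + ACKs pending: buffers
```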
Protection Against Wrap Around
- 32-bit SequenceNum vs. 16-bit RcvWindow (2^32 >> 2^16, so the sequence space easily satisfies the window requirement)
- But packets can stay "alive" for 120 seconds
  - defined by the Maximum Segment Lifetime (MSL)
  - MSL was defined in the early days of TCP, when links were slow

Bandwidth            Time Until Wrap Around
T1 (1.5 Mbps)        6.4 hours
Ethernet (10 Mbps)   57 minutes
T3 (45 Mbps)         13 minutes
FDDI (100 Mbps)      6 minutes
STS-3 (155 Mbps)     4 minutes
STS-12 (622 Mbps)    55 seconds (< 120 seconds)
STS-24 (1.2 Gbps)    28 seconds (< 120 seconds)
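The wrap-around column can be rechecked with a few lines of Python: the time for 2^32 bytes of sequence space to be consumed at each line rate.

```python
# Recomputing the wrap-around column: time for 2**32 bytes of sequence
# space to be consumed at each line rate (compare against the 120 s MSL).

rates_bps = {"T1": 1.5e6, "Ethernet": 10e6, "T3": 45e6, "FDDI": 100e6,
             "STS-3": 155e6, "STS-12": 622e6, "STS-24": 1.2e9}

for name, bps in rates_bps.items():
    seconds = 2**32 * 8 / bps          # sequence space in bits / link rate
    print(f"{name}: {seconds:.0f} s")  # e.g., STS-12 -> ~55 s (< 120 s)
```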
Keeping the Pipe Full
- 16-bit RcvWindow allows at most 64 KB in flight
- Delay x bandwidth product, assuming a 100 ms RTT (1.5 Mbps = 1.5 x 10^6 bits/sec):

Bandwidth            Delay x Bandwidth Product
T1 (1.5 Mbps)        18 KB
Ethernet (10 Mbps)   122 KB
T3 (45 Mbps)         549 KB
FDDI (100 Mbps)      1.2 MB
STS-3 (155 Mbps)     1.8 MB
STS-12 (622 Mbps)    7.4 MB
STS-24 (1.2 Gbps)    14.8 MB
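Similarly, a quick sketch recomputing the delay x bandwidth column for the same 100 ms RTT:

```python
# Recomputing the delay x bandwidth column for a 100 ms RTT; each value
# is how much data must be in flight to keep the pipe full.

RTT = 0.1  # seconds
for name, bps in {"T1": 1.5e6, "Ethernet": 10e6, "T3": 45e6,
                  "FDDI": 100e6, "STS-3": 155e6, "STS-12": 622e6,
                  "STS-24": 1.2e9}.items():
    bdp = bps * RTT / 8                     # bytes
    print(f"{name}: {bdp / 2**10:.0f} KB")  # T1 -> 18 KB, STS-12 -> ~7593 KB (7.4 MB)
```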
Principles of Congestion Control
What is meant by congestion?
- the amount of traffic arriving at a network link exceeds the link rate for an "excessive time period"
- caused by sources sending too much data for the link to handle
  - note that it is the network limiting the traffic flow in this case, not the receiver
- can cause router queues to fill up and overflow
Consequences (why is congestion bad?)
- packets get lost at routers
- network delays get large
- network throughput can actually drop as load increases
  - this happens because packets may be dropped after passing through several routers, wasting the capacity of "upstream" links
  - since some network effort is wasted, throughput drops below its peak
Congestion Scenario 1
- two identical senders, two receivers
- one router with unlimited shared output link buffers (link rate R): no drops
- no retransmission (timeouts turned off)
[Figure: Hosts A and B each send original data at rate λin into the shared output link; λout is the per-connection throughput]
Outcome
- each connection achieves the maximum possible throughput, R/2
- large delays when congested (remember the infinite queue)
Congestion Scenario 2
- one router, finite buffers shared by the output link
- there will be packet drops if/when the buffer fills up
- sender retransmits timed-out packets
[Figure: Hosts A and B share the output link; λin is original data, λ'in is original plus retransmitted data, λout is throughput]
- application-layer input = application-layer output: λin = λout
- transport-layer input includes retransmissions: λ'in ≥ λin
Congestion Scenario 2a
- sender sends only when router buffers are available (λin = λ'in)
- by some magic, the sender knows when buffer space is available, so nothing is ever dropped or retransmitted
[Figure: throughput λout grows linearly with λin up to the maximum of R/2]
Congestion Scenario 2b
- packets may get dropped at the router due to full buffers
- sender resends only if a packet is known to be lost (no lost ACKs, no premature timeouts)
- when sending at R/2, some packets are retransmissions, but the asymptotic goodput is still R/2 (why?)
[Figure: goodput λout vs. offered load λin, approaching the asymptote R/2]
Note (JDD): Why is the asymptote R/2? By the definition of the scenario, the host only resends packets that are actually lost, so whatever comes out of the buffer is always goodput. If we continue to increase the input rate, we should expect the data rate coming out of the buffer to continue to increase, but it can never exceed R/2.
Congestion Scenario 2c
- packets may get dropped at the router due to full buffers
- the sender may also time out prematurely, sending two copies, both of which are delivered
- when sending at R/2, some packets are retransmissions, including duplicates that are delivered!
- delivered duplicates waste link capacity, so goodput λout stays below R/2
[Figure: goodput λout vs. offered load λin, saturating below R/2]
Congestion Scenario 3
- four senders, multihop paths, timeout/retransmit, finite shared buffers
- Q: what happens as λ'in grows and exceeds R/2?
[Figure: Hosts A–D send original data (λin) plus retransmissions (λ'in) over two-hop paths that share routers]
Note: As the input rate increases for each host, the second-hop router on a path receives traffic from two sources, a host and a router. The traffic from the host becomes a larger and larger percentage of the load offered to the router, and the throughput of the transit traffic from the router goes to 0.
Congestion Scenario 3 (continued)
- each source sends at rate X > R; let Y be a flow's rate after one hop and Z its rate after two hops
- at a shared router, the fresh flow arriving at rate X competes with a transit flow arriving at rate Y, so its share of the output is Y = XR/(X+Y) ≤ R < X
- similarly, Z = YR/(X+Y) = R²/(X(1+Y/X)²)
- so Z → 0 as X → ∞: the traffic that has already crossed one hop is starved, and the upstream capacity it consumed is wasted
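A quick numeric check of this limit (a sketch, with R normalized to 1): solving Y = XR/(X+Y) for Y and computing Z shows Z vanishing as X grows.

```python
# Numeric check of the limit above, with R normalized to 1: solve
# Y = X*R/(X+Y) for Y (positive root of Y^2 + X*Y - X*R = 0), then Z.
from math import sqrt

R = 1.0
for X in (1, 10, 100, 1000, 10000):
    Y = (-X + sqrt(X * X + 4 * X * R)) / 2
    Z = Y * R / (X + Y)
    print(f"X={X}: Y={Y:.3f}, Z={Z:.5f}")  # Y -> R while Z -> 0
```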
Congestion Control Options
Two major approaches to congestion control
- End-to-end congestion control – used by TCP
  - no explicit feedback from the network
  - congestion inferred from loss and/or delay observed by the hosts
  - relies on the cooperation of end hosts (but most schemes do)
- Network-assisted congestion control
  - routers provide feedback to end systems
  - more rapid response to traffic changes than the end-to-end approach
  - simplest approach: a single-bit congestion indication
    - TCP and IP support this, but the capability is usually unavailable
  - explicit rate control
    - senders request a sending rate, routers decide the allowable rates
    - can prevent "greedy hosts" from hogging network capacity
    - was used in Asynchronous Transfer Mode (ATM) networks
TCP Congestion Control Big Picture
Objective: send as fast as possible, but not too fast
- when lost segments are detected, the sender reduces its rate (backs off)
- when there are no lost segments, the sender increases its rate (keeps probing the network for more bandwidth)
Key questions
- when it is time to cut the rate, by how much should it be cut?
  - TCP cuts the sending rate in half to reduce congestion quickly
- when it is time to increase the rate, by how much should it increase?
  - TCP makes small incremental increases to avoid going right back into congestion
  - but TCP allows a "new sender" to increase its rate more quickly
Additive increase/multiplicative decrease (AIMD)
- during stable traffic periods, rates oscillate around the ideal rates
- different end-to-end flows get roughly "fair shares" of capacity
- can be slow to respond to traffic changes (requires a packet loss)
AIMD: Additive Increase
Additive (linear) increase: a cautious increase of cwnd
- "linear": cwnd grows by 1 maximum-size segment (MSS) every RTT
- for each current cwnd-worth of segments ACKed within the timeout period: cwnd = cwnd + 1 (incremented by one full MSS)
- actually, each ACKed segment yields a fractional increase: cwnd_increment = MSS x MSS / cwnd bytes
  - e.g., if cwnd = 8 MSS, each ACK increases cwnd by MSS/8
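As a sketch, the per-ACK update can be written as below; byte units and the 1460-byte MSS are assumptions for illustration.

```python
# Sketch of the per-ACK fractional increase; byte units and a 1460-byte
# MSS are assumptions for illustration.

MSS = 1460

def on_ack(cwnd):
    return cwnd + MSS * MSS / cwnd  # ~1 MSS total per cwnd-worth of ACKs

cwnd = 8 * MSS
print(on_ack(cwnd) - cwnd)  # 182.5 bytes, i.e., MSS/8 per ACK
```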
AIMD: Multiplicative Decrease
Sender scrambles to reduce cwnd as soon as congestion is detected
- when a segment's ACK times out: cwnd = cwnd / 2
- this essentially halves the current transmission rate
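Putting the two rules together produces the sawtooth described on the next slide; here is a minimal simulation sketch, with an assumed fixed loss threshold standing in for real congestion detection.

```python
# Minimal AIMD simulation sketch; the fixed loss threshold `cap` is an
# assumption standing in for real congestion detection.

cap = 32             # cwnd (in MSS units) above which a loss occurs
cwnd = 16.0
trace = []
for _ in range(40):  # one iteration per RTT
    if cwnd > cap:
        cwnd /= 2    # multiplicative decrease on loss
    else:
        cwnd += 1    # additive increase: 1 MSS per RTT
    trace.append(cwnd)
print(trace)         # sawtooth oscillating roughly between 16 and 33 MSS
```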
Basic AIMD Behavior
[Figure: sending rate vs. time, a sawtooth oscillating around the ideal sending rate]
Sawtooth pattern
- sending rate oscillates around the "ideal rate"
- when the ideal rate is large, the "cycle time" can also be large, which implies a slow response time
Trends towards "fairness"
- when several TCP connections share a common "bottleneck" link, it is desirable that they each receive a roughly equal share
- AIMD makes things approximately equal in the long run
  - connections tend to oscillate in sync with each other, so when the total rate is too large for the link, all halve their rates together
  - this has a bigger impact on higher-rate senders
- caveat: connections with short RTTs get more capacity (why?)
[Figure: rates of TCP Connection 1 vs. TCP Connection 2 sharing a link of capacity C; additive increase moves the rate pair along a unit-slope line, multiplicative decrease (factor of 2) moves it halfway toward the origin, so the trajectory converges toward the equal-capacity-share line]
TCP Evolution (2 connections – same RTT)
100 Mbps link, small buffer, 80 Mbps / 20 Mbps initial shares
Loss when R1(0) + R2(0) + 2nδ = C, where C is the link capacity and δ is the rate increase per RTT

Connection 1   Connection 2
80             20        <- loss
40             10
65             35        <- loss
32.5           17.5
57.5           42.5      <- loss
28.75          21.25
53.75          46.25     <- loss
26.88          23.13
51.88          48.13     <- loss
25.94          24.06
50.94          49.06     <- loss
25.47          24.53
50.47          49.53     <- loss
25.23          24.77
50.23          49.77     <- loss
25.12          24.88
50.12          49.88
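The table can be regenerated with a short sketch of the same dynamics (C = 100 Mb/s, equal per-RTT increases, halving at each loss):

```python
# Regenerating the table: at each loss both connections halve, then both
# add the same amount per RTT until the 100 Mb/s link saturates again.

C = 100.0
r1, r2 = 80.0, 20.0            # rates (Mb/s) just before the first loss
for _ in range(8):
    r1, r2 = r1 / 2, r2 / 2    # multiplicative decrease at the loss
    inc = (C - r1 - r2) / 2    # equal additive growth until the next loss
    r1, r2 = r1 + inc, r2 + inc
    print(f"{r1:.2f} / {r2:.2f}")  # 65/35, 57.5/42.5, ..., converging to 50/50
```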
TCP Evolution (2 connections – same RTT)
TCP rate increase and decrease progression
- a0 and b0 are the connection rates just before the first loss, i.e., a0 + b0 = C
- a'0 and b'0 are the connection rates just after the first loss, i.e., a'0 = a0/2, b'0 = b0/2 (and a'0 + b'0 = C/2)
- a1 and b1 are the connection rates just before the second loss, i.e., a1 = a0/2 + C/4 and b1 = b0/2 + C/4
- a'1 and b'1 are the connection rates just after the second loss, i.e., a'1 = a0/4 + C/8, b'1 = b0/4 + C/8
- a2 and b2 are the connection rates just before the third loss, i.e., a2 = a0/4 + C/8 + C/4 and b2 = b0/4 + C/8 + C/4
- a'2 and b'2 are the connection rates just after the third loss, i.e., a'2 = a0/8 + C/16 + C/8, b'2 = b0/8 + C/16 + C/8
Continuing the progression yields an = a0/2^n + (C/2)(1 - 1/2^n) and bn = b0/2^n + (C/2)(1 - 1/2^n), so both rates converge to C/2, i.e., to equal shares.
TCP Evolution (2 connections – different RTTs)
100 Mbps link, small buffer, 80 Mbps / 20 Mbps initial shares
Loss when R1(0) + R2(0) + 3nδ = C, where C is the link capacity and δ is the rate increase per RTT of connection 2 (the shorter RTT)
Simulation for Large and Small Buffers
Large buffers: B = 20,000 x MSS, N = 100, RTT = 100 ms, R = 500 Mb/s
- with large buffers, we get large delay variance
- if there is always something in the queue, we are guaranteed to keep the output link busy
Small buffers: B = 1,000 x MSS, N = 100, RTT = 100 ms, R = 500 Mb/s
- with small buffers, we get buffer underflow and lower throughput
Exercise
Consider a scenario with a receiver that suddenly becomes very busy and can only process 100 bytes of data every 10 ms. As a result, the receive buffer fills up and the receiver signals this to the sender by setting RcvWindow = 0. Assuming that the receiver ACKs every packet it receives from the sender and that RTT = 20 ms = 10 ms + 10 ms, what pattern of transmission would this produce at the sender? Why is it undesirable, and what would you suggest to fix it?
Exercise
Consider a scenario with a receiver that suddenly becomes very busy and can only process 100 bytes of data every 10 ms. As a result, the receive buffer fills up and the receiver signals this to the sender by setting RcvWindow = 0. Assuming that the receiver ACKs every packet it receives from the sender and that RTT = 20 ms = 10 ms + 10 ms, what pattern of transmission would this produce at the sender? Why is it undesirable, and what would you suggest to fix it?
Answer: The sender sends a one-byte packet after getting an ACK with RcvWindow = 0. The packet arrives at the receiver 10 ms later, at which point the receiver has cleared 100 bytes (since the last ACK), so the receiver sends an ACK with RcvWindow = 100. When the sender receives it, it transmits a 100-byte packet. The packet arrives at the receiver 10 ms later, when the receiver has cleared 100 more bytes, so it again ACKs the packet with RcvWindow = 100. The pattern repeats, with the sender transmitting a 100-byte packet every 20 ms. This is undesirable because it is highly inefficient and would continue even after the receiver is no longer busy. See the lecture slides for a solution to this "silly window" problem.
Exercises
When TCP detects that a packet has been lost, it assumes that the loss was caused by congestion. Under what circumstances is this a reasonable assumption? In what common situation is it not reasonable? How does TCP's assumption affect performance when packets may be lost for other reasons?
Exercises
When TCP detects that a packet has been lost, it assumes that the loss was caused by congestion. Under what circumstances is this a reasonable assumption? In what common situation is it not reasonable? How does TCP's assumption affect performance when packets may be lost for other reasons?
Answer: The assumption is reasonable when links are very reliable (e.g., fiber links), so that losses are mostly due to buffer overflows that arise because of congestion. It is not reasonable when losses are not indicative of congestion, e.g., when they are caused by transmission errors, as is common on wireless links. TCP's assumption that all losses indicate congestion can result in poor performance (throughput) in cases where the assumption does not hold.
Exercises
Suppose that N TCP connections pass through a "bottleneck link" that has a rate of 1 Gb/s and a buffer capacity of 25 MB. Assume that for all connections, the RTT is 100 ms. Suppose that just before the buffer fills, the input rate is 1.2 Gb/s. Assuming that this causes all of the TCP senders to halve their sending rate, how much will the buffer level drop during the next 2 RTTs?
Exercises
Suppose that N TCP connections pass through a "bottleneck link" that has a rate of 1 Gb/s and a buffer capacity of 25 MB. Assume that for all connections, the RTT is 100 ms. Suppose that just before the buffer fills, the input rate is 1.2 Gb/s. Assuming that this causes all of the TCP senders to halve their sending rate, how much will the buffer level drop during the next 2 RTTs?
Answer: If all connections halve their rates, the aggregate input rate drops to 600 Mb/s. The rate stays at that level for 100 ms (0.1 s of 600 Mb/s in, 1 Gb/s out), which allows the link to drain 40 Mbits, or 5 MB, of buffered data, so the buffer content drops to 20 MB. In the next round (RTT), the aggregate rate actually increases by N x MSS/RTT, but if we assume this increase only takes effect at the end of the round (e.g., because the senders only send full-MSS packets), then the buffer again drops by 5 MB, to 15 MB.