
1. Chapter 3: Transport Layer
Computer Networking: A Top-Down Approach, 6th edition. Jim Kurose, Keith Ross. Addison-Wesley. (Chapter3_3)

2. Chapter 3 outline
- 3.1 Transport-layer services
- 3.2 Multiplexing and demultiplexing
- 3.3 Connectionless transport: UDP
- 3.4 Principles of reliable data transfer
- 3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
- 3.6 Principles of congestion control
- 3.7 TCP congestion control

3. Principles of congestion control
Congestion:
- informally: "too many sources sending too much data too fast for the network to handle"
- different from flow control!
  - flow control: the sender slows its transmission so as not to overwhelm the receiver
- manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)
- a top-10 problem!

4. Causes/costs of congestion: scenario 1
- two senders, two receivers
- one router, infinite buffers
  - no errors/retransmissions
  - no flow control
  - no congestion control
  - both hosts transmitting continuously at the same time
(Figure: Host A and Host B send original data at rate λin into a router with unlimited shared output-link buffers; receiver throughput is λout.)

5. Causes/costs of congestion: scenario 1
- maximum throughput for the two senders: R/2
  - the link cannot deliver packets to a receiver at a rate exceeding R/2
- as the sending rate approaches R/2, the average packet delay increases (as the number of buffered packets increases)
- when the sending rate exceeds R/2, the average packet delay becomes infinite

6. Causes/costs of congestion: scenario 1
- first cost of a congested network:
  - large queueing delays are experienced as the packet arrival rate nears the link capacity
  - why? arriving packets must wait in the router's output buffer until the link is free to transmit them

7. Causes/costs of congestion: scenario 2
- two senders, two receivers
- one router, finite buffers
- packets are dropped when they arrive at a router whose buffer is full
  - sender retransmits lost packets
(Figure: each host sends original data at rate λin and an offered load λ'in = original plus retransmitted data into a router with finite shared output-link buffers; receiver throughput is λout.)

8. Causes/costs of congestion: scenario 2
- Case 1: a host transmits data only when it knows buffer space is free in the router
  - no packet loss
  - throughput up to R/2
  - the sending rate cannot exceed R/2, since packet loss is assumed never to occur
- realistic scenario? No! Why?
(Graph: throughput λout rises with λin up to R/2.)

9. Causes/costs of congestion: scenario 2
- Case 2: the sender retransmits only when a packet is known for certain to be lost
  - how does the sender know for sure that a packet is lost?
  - performance decreases as the sender resends lost packets
  - assume that for every 3 packets transmitted, 1 is a retransmission
  - 33% performance decrease due to packet retransmissions in this case
- second cost of a congested network:
  - the sender must retransmit to compensate for packets lost to router buffer overflow
(Graph: as λ'in approaches R/2, throughput λout reaches only R/3.)

10. Causes/costs of congestion: scenario 2
- Case 3: the sender times out prematurely and retransmits a duplicate of a packet that was delayed in the router buffer (due to large queueing delays) but not lost
  - the destination receives duplicate data
  - the work done by the router to forward the duplicate is wasted
  - assume each packet has to be forwarded twice by the router
  - 50% performance decrease due to packet retransmissions in this case
- third cost of a congested network:
  - duplicate transmissions from the sender, caused by large delays, may make routers forward unneeded copies of data
(Graph: as λ'in approaches R/2, throughput λout reaches only R/4.)
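
A quick arithmetic sketch of the two retransmission cases above. The fractions 1/3 and 1/2 are the assumptions stated on the slides, not derived values:

```python
R = 1.0  # normalized link capacity

# Case 2: for every 3 packets transmitted, 1 is a retransmission,
# so only 2/3 of an offered load of R/2 is new data reaching the receiver.
offered = R / 2
goodput_case2 = offered * (2 / 3)
print(goodput_case2)  # 0.333... = R/3  -> the "33% performance decrease"

# Case 3: every packet is forwarded twice because of premature timeouts,
# so only half of the offered load is useful data.
goodput_case3 = offered * (1 / 2)
print(goodput_case3)  # 0.25 = R/4 -> the "50% performance decrease"
```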

11. Causes/costs of congestion: scenario 3
- four senders, four receivers
- four routers, finite buffers
- packets are dropped when they arrive at a router whose buffer is full
  - sender retransmits lost packets
(Figure: Hosts A–D send original data λin plus retransmissions λ'in over multi-hop paths that share routers with finite output-link buffers; receiver throughput is λout.)

12. Causes/costs of congestion: scenario 3
- fourth cost of a congested network:
  - when a packet is dropped along a path, any "upstream" transmission capacity already used for that packet is wasted!

13. Approaches towards congestion control
Two broad approaches towards congestion control:
Network-assisted congestion control:
- routers provide feedback to end systems about their current buffer state, in two ways:
  - "choke packet": the router obtains the sender's IP address from a packet header and notifies the sender directly about the state of its buffers
  - the router updates a field in a packet header to indicate the state of its buffers; note that this feedback can take up to 1 RTT to reach the sender
- complexity? the router must communicate explicitly with end-host processes
  - flooding, loss of choke packets, etc.

14. Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
- no explicit feedback from the network core
- congestion is "inferred" by the end systems from observed packet loss and delay
- approach taken by TCP
  - segment loss (3 dup ACKs, timeout) indicates congestion
  - TCP adjusts the sender's transmission rate based on congestion

15. Chapter 3 outline
- 3.1 Transport-layer services
- 3.2 Multiplexing and demultiplexing
- 3.3 Connectionless transport: UDP
- 3.4 Principles of reliable data transfer
- 3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
- 3.6 Principles of congestion control
- 3.7 TCP congestion control

16. TCP congestion control
- goal: the TCP sender should transmit as fast as possible, but without congesting the network
  - Q: how to find a rate just below the congestion level?
- decentralized: each TCP sender sets its own rate, based on implicit feedback:
  - ACK: segment received (a good thing!), network not congested, so increase the sending rate
  - lost segment: assume the loss is due to a congested network, so decrease the sending rate

17. TCP congestion control
- the TCP sender's rate is dynamic, based on the current status of the network
- questions:
  - how does a TCP sender limit its transmission rate?
  - how does a TCP sender perceive network congestion?
  - how does TCP react to network congestion?

18. TCP congestion control
- How does a TCP sender limit its transmission rate?
  - ignoring flow control, a TCP sender keeps track of a variable, the congestion window, or cwnd
  - it adheres to the constraint: LastByteSent - LastByteAcked <= min{cwnd, rwnd}
  - the TCP sender modifies the value of cwnd to adjust the rate at which it sends data into the network
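
A minimal sketch of how a sender could enforce this constraint. The class and method names are hypothetical; real TCP stacks implement this inside the kernel:

```python
class TcpSender:
    """Toy model of the send-window constraint (not a real TCP implementation)."""

    def __init__(self, cwnd: int, rwnd: int):
        self.cwnd = cwnd                # congestion window (bytes)
        self.rwnd = rwnd                # receiver's advertised window (bytes)
        self.last_byte_sent = 0
        self.last_byte_acked = 0

    def can_send(self, nbytes: int) -> bool:
        # LastByteSent - LastByteAcked <= min(cwnd, rwnd)
        in_flight = self.last_byte_sent - self.last_byte_acked
        return in_flight + nbytes <= min(self.cwnd, self.rwnd)

    def send(self, nbytes: int) -> None:
        if self.can_send(nbytes):
            self.last_byte_sent += nbytes
```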

19. TCP congestion control
- How does a TCP sender perceive network congestion?
  - excessive congestion in the network causes packets to be dropped or delayed, which creates a "loss" event at the TCP sender
  - TCP considers a loss event to be either:
    - a timeout while waiting for an ACK
    - receipt of 3 duplicate ACKs
  - the TCP sender interprets these loss events as network congestion and adjusts its transmission rate accordingly

20. TCP congestion control
- How does a TCP sender react to network congestion?
  - a lost packet implies congestion: the TCP sender's rate should be decreased
  - successfully ACKed packets imply that the network is delivering packets to the destination: the TCP sender's rate should be increased

21. TCP congestion control: bandwidth probing
- "probing for bandwidth": increase the transmission rate on receipt of ACKs until loss eventually occurs, then decrease the transmission rate
  - continue to increase on ACKs and decrease on loss, since the available bandwidth changes depending on the other connections in the network
- Q: how fast to increase/decrease? details to follow
(Figure: sending rate vs. time shows TCP's "sawtooth" behavior — the rate climbs while ACKs are received and drops at each loss, marked X.)

22. TCP congestion control: details
- the sender limits its rate by limiting the number of unACKed bytes "in the pipeline": LastByteSent - LastByteAcked <= min(cwnd, rwnd)
  - cwnd: differs from rwnd (how, why?)
  - sender limited by min(cwnd, rwnd)
- roughly: rate = cwnd / RTT bytes/sec
- cwnd is dynamic, a function of perceived network congestion
(Figure: cwnd bytes are sent per RTT, and the corresponding ACKs return one RTT later.)
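
A hedged arithmetic sketch of the rate = cwnd/RTT rule; the numbers here are made up purely for illustration:

```python
cwnd = 10 * 1500      # 10 segments of 1500 bytes allowed in flight
rtt = 0.1             # 100 ms round-trip time

rate_bps = cwnd * 8 / rtt
print(rate_bps)       # 1200000.0 bits/sec = 1.2 Mbps
```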

23. TCP congestion control: more details
Segment loss event: reducing cwnd
- timeout: no response from the receiver
  - cut cwnd to 1 MSS
  - an aggressive response to congestion; why reduce cwnd to 1?
- 3 duplicate ACKs: at least some segments are getting through (recall fast retransmit)
  - cut cwnd in half: less aggressive than on a timeout; why?

24. TCP congestion control: more details
ACK received: increasing cwnd
- slow-start phase:
  - increase exponentially fast (despite the name) at connection start, or following a timeout
  - cwnd grows by 1 MSS for each ACK received, which doubles cwnd every RTT, until cwnd reaches a predetermined threshold
- congestion avoidance:
  - cwnd increases linearly
  - generally entered after the slow-start phase reaches the threshold
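
A compact sketch of these two reactions. This is a toy model that assumes cwnd and ssthresh are tracked in bytes; the 2*MSS floor on ssthresh follows common practice rather than anything stated on the slide, and real stacks add fast recovery and many other details:

```python
MSS = 1500  # bytes, assumed segment size

def on_ack(cwnd: int, ssthresh: int) -> int:
    """Grow cwnd on a new ACK: exponentially below ssthresh, linearly above."""
    if cwnd < ssthresh:                  # slow start: +1 MSS per ACK (doubles per RTT)
        return cwnd + MSS
    return cwnd + MSS * MSS // cwnd      # congestion avoidance: ~+1 MSS per RTT

def on_loss(cwnd: int, timeout: bool) -> tuple[int, int]:
    """Shrink cwnd on a loss event; returns (new_cwnd, new_ssthresh)."""
    ssthresh = max(cwnd // 2, 2 * MSS)
    if timeout:                          # no ACKs at all: back off hard
        return MSS, ssthresh
    return ssthresh, ssthresh            # 3 dup ACKs: halve (Reno-style)
```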

25. TCP slow start
- when a connection begins, cwnd = 1 MSS
  - example: MSS = 500 bytes and RTT = 200 msec
  - initial rate = 20 kbps
- the available bandwidth may be >> MSS/RTT
  - desirable to quickly ramp up to a respectable rate
- increase the rate exponentially until the first loss event or until the threshold is reached
  - double cwnd every RTT
  - done by increasing cwnd by 1 MSS for every ACK received
(Figure: Host A sends one segment in the first RTT, two in the second, four in the third, ...)
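
A small arithmetic sketch of the example above and of the per-RTT doubling (illustrative only):

```python
MSS = 500 * 8        # 500-byte segment, in bits
RTT = 0.2            # 200 ms

print(MSS / RTT)     # 20000.0 bits/sec = 20 kbps initial rate (cwnd = 1 MSS)

# cwnd doubles every RTT during slow start: 1, 2, 4, 8, ... segments
cwnd_segments = 1
for rtt_round in range(5):
    print(rtt_round, cwnd_segments, cwnd_segments * MSS / RTT, "bps")
    cwnd_segments *= 2
```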

26. Transitioning into/out of slow start
ssthresh: a cwnd threshold maintained by TCP
- on a loss event: set ssthresh to cwnd/2
  - remembers (half of) the TCP rate when congestion last occurred
- when cwnd >= ssthresh: transition from slow start to the congestion-avoidance phase
(FSM excerpt: initially cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0. In slow start, a new ACK sets cwnd = cwnd + MSS, resets dupACKcount = 0, and transmits new segment(s) as allowed; a duplicate ACK increments dupACKcount; a timeout sets ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0 and retransmits the missing segment; when cwnd >= ssthresh, enter congestion avoidance.)

27. TCP: congestion avoidance
- due to previous loss events, the TCP sender probes for bandwidth less aggressively
- when cwnd > ssthresh, grow cwnd linearly
  - increase cwnd by 1 MSS per RTT
  - approach possible congestion more slowly than in slow start
  - implementation: cwnd = cwnd + MSS*(MSS/cwnd) for each ACK received (cwnd in bytes)
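
A sketch of why this per-ACK rule works out to roughly +1 MSS per RTT (toy numbers, cwnd tracked in bytes):

```python
MSS = 1500
cwnd = 10 * MSS                 # 10 segments currently fit in one RTT

# roughly cwnd/MSS ACKs arrive per RTT; each adds MSS*MSS/cwnd bytes
acks_per_rtt = cwnd // MSS
for _ in range(acks_per_rtt):
    cwnd += MSS * MSS // cwnd

print(cwnd)                     # ~16400: about one extra MSS after a full RTT of ACKs
```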

28. TCP: congestion avoidance
- AIMD: additive increase, multiplicative decrease
  - ACKs: increase cwnd by 1 MSS per RTT: additive increase
  - loss: cut cwnd in half (for non-timeout-detected loss): multiplicative decrease
  - remember: the TCP sender responds to every timeout by reducing cwnd to 1 MSS

29. TCP congestion control FSM: overview
(FSM: three states — slow start, congestion avoidance, fast recovery. Slow start moves to congestion avoidance when cwnd > ssthresh; slow start or congestion avoidance moves to fast recovery on a 3-dup-ACK loss; fast recovery returns to congestion avoidance on a new ACK; a timeout from any state returns to slow start.)

30. TCP: Fast Recovery
- TCP Tahoe (earlier version of TCP) — reaction to loss events:
  - 3 dup ACKs: set ssthresh = cwnd/2, then cwnd to 1 MSS, and enter slow start until cwnd >= ssthresh; then enter congestion avoidance
  - timeout: set ssthresh = cwnd/2, then cwnd to 1 MSS, and enter slow start until cwnd >= ssthresh; then enter congestion avoidance
- TCP Reno — reaction to loss events:
  - timeout: set ssthresh = cwnd/2, then cwnd to 1 MSS, and enter slow start until cwnd >= ssthresh; then enter congestion avoidance
  - 3 dup ACKs: set cwnd = cwnd/2 = ssthresh and enter congestion avoidance
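
A side-by-side sketch of the two loss reactions described above. This is a simplification; real Reno also temporarily inflates cwnd while in fast recovery:

```python
MSS = 1500

def tahoe_on_loss(cwnd: int) -> tuple[int, int]:
    # Tahoe: any loss (timeout or 3 dup ACKs) -> ssthresh = cwnd/2, cwnd = 1 MSS
    ssthresh = cwnd // 2
    return MSS, ssthresh

def reno_on_loss(cwnd: int, timeout: bool) -> tuple[int, int]:
    ssthresh = cwnd // 2
    if timeout:
        return MSS, ssthresh          # like Tahoe: back to slow start
    return ssthresh, ssthresh         # 3 dup ACKs: halve and skip slow start
```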

31. Popular "flavors" of TCP
(Figure: cwnd window size (in segments) vs. transmission round for TCP Tahoe and TCP Reno, with ssthresh marked.)

32. Summary: TCP congestion control
- when cwnd < ssthresh, the TCP sender is in the slow-start phase and the window grows exponentially
- when cwnd >= ssthresh, the TCP sender is in the congestion-avoidance phase and the window grows linearly
- when a triple duplicate ACK occurs, ssthresh is set to cwnd/2 and cwnd is set to ~ssthresh
- when a timeout occurs, ssthresh is set to cwnd/2 and cwnd is set to 1 MSS

33. TCP futures: TCP over "long, fat pipes"
- example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
- requires a window size of W = 83,333 in-flight segments
  - a very large number of unACKed segments!
- throughput in terms of loss rate L: throughput = 1.22 * MSS / (RTT * sqrt(L))
  - to sustain 10 Gbps: L = 2*10^-10 (1 loss in 5 billion packets)
  - TCP is very sensitive to packet loss over very high-speed networks
- new versions of TCP are needed for high-speed networks
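
A quick numerical check of the slide's numbers, using the throughput formula above (units in bits and seconds):

```python
from math import sqrt

MSS = 1500 * 8            # segment size in bits
RTT = 0.1                 # 100 ms
target = 10e9             # 10 Gbps

# window needed: target * RTT bits in flight, expressed in segments
W = target * RTT / MSS
print(W)                  # ~83333 segments

# loss rate L that throughput = 1.22*MSS/(RTT*sqrt(L)) allows at 10 Gbps
L = (1.22 * MSS / (RTT * target)) ** 2
print(L)                  # ~2.1e-10, i.e. about 1 loss in 5 billion packets
```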

34. TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
- note: this goal is fairness per connection, not per host! Why might that be a problem?
(Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.)

35. Why is TCP fair?
Two competing sessions:
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease reduces throughput proportionally
(Figure: plotting connection 1 throughput vs. connection 2 throughput, repeated rounds of additive increase followed by a window decrease by a factor of 2 on loss move the operating point toward the "equal bandwidth share" line, bounded by R on each axis.)
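
A small simulation sketch of this argument. The model assumes both flows increase by one unit per round and both halve whenever their combined rate exceeds the link capacity R; in reality losses are not perfectly synchronized:

```python
R = 100.0                      # bottleneck capacity (arbitrary units)
x1, x2 = 70.0, 10.0            # deliberately unequal starting rates

for _ in range(200):
    x1 += 1.0                  # additive increase for both flows
    x2 += 1.0
    if x1 + x2 > R:            # shared loss event: multiplicative decrease
        x1 /= 2.0
        x2 /= 2.0

print(round(x1, 1), round(x2, 1))   # the two rates converge toward an equal share
```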

36. Fairness (more)
Fairness and UDP:
- multimedia apps often do not use TCP
  - they do not want their rate throttled by congestion control
  - no flow control
- instead they use UDP:
  - pump audio/video at a constant rate
  - tolerate packet loss
- an active research area: congestion-control mechanisms for the Internet that prevent UDP traffic from consuming an unfair share of bandwidth

37. Fairness (more)
Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts
- web browsers do this
- example: a link of rate R currently supporting 9 connections
  - a new app that asks for 1 TCP connection gets rate R/10
  - a new app that instead asks for 11 TCP connections gets R/2!
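
The arithmetic behind that example, assuming the link's bandwidth is shared equally per connection:

```python
R = 1.0                 # link rate (normalized)
existing = 9            # connections already using the link

# new app opens 1 connection: 10 connections total share R equally
print(1 * R / (existing + 1))     # 0.1  -> R/10

# new app opens 11 connections: 20 connections total, the app owns 11 of them
print(11 * R / (existing + 11))   # 0.55 -> slightly more than R/2
```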

38. Chapter 3: Summary
- principles behind transport-layer services:
  - multiplexing, demultiplexing
  - reliable data transfer
  - flow control
  - congestion control
- instantiation and implementation in the Internet:
  - UDP
  - TCP
Next:
- leaving the network "edge" (application, transport layers)
- into the network "core"

