
Slide 1: CS 372 – Introduction to Computer Networks*
Tuesday, July 13
Acknowledgement: slides drawn heavily from Kurose & Ross
* Based in part on slides by Bechir Hamdaoui and Paul D. Paulson

Slide 2: TCP Overview (RFCs 793, 1122, 1323, 2018, 2581)
- point-to-point: one sender, one receiver
- reliable, in-order byte stream: no "message boundaries"
- pipelined: TCP congestion and flow control set the window size
- send & receive buffers
- full-duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
- connection-oriented: handshaking (exchange of control msgs) initializes sender and receiver state before data exchange
- flow controlled: sender will not overwhelm the receiver

Slide 3: TCP Segment Structure
(Fields, laid out 32 bits wide:)
- source port # / dest port #
- sequence number (counts bytes of data, not segments!)
- acknowledgement number
- header length, unused bits, and flags: URG (urgent data, generally not used), ACK (ACK # valid), PSH (push data now, generally not used), RST, SYN, FIN (connection establishment: setup and teardown commands)
- receive window: # bytes the receiver is willing to accept
- checksum (Internet checksum, as in UDP); urgent data pointer
- options (variable length, padded to 32 bits)
- application data (variable length)

Slide 4: TCP Connection Establishment
Three-way handshake:
- Step 1: client host sends a TCP SYN segment to the server (specifies the client's initial seq #; carries no data)
- Step 2: server host receives the SYN, replies with a SYNACK segment (server allocates buffers; specifies the server's initial seq #)
- Step 3: client receives the SYNACK, replies with an ACK segment, which may contain data
(Figure: client → server: SYN, ClientSeq# ["connection request"]; server → client: SYN, ACK, ServerSeq# ["connection granted"]; client → server: ACK, possibly with data.)
In Java, the client side is equivalent to:
Socket clientSocket = new Socket("hostname", port);
and the server side to:
Socket connectionSocket = welcomeSocket.accept();
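The handshake above is visible from the application only through the socket API: `connect()` returns once the SYN / SYNACK / ACK exchange completes, and `accept()` returns on the server after a handshake has finished. A minimal loopback sketch in Python (the `echo_once` helper and all names are illustrative, not from the slides):

```python
import socket
import threading

def echo_once():
    """Open a listening socket, connect to it, and echo one message.
    connect() performs the three-way handshake; accept() returns once
    a handshake has completed. Loopback-only illustrative sketch."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()      # like welcomeSocket.accept() in the Java slide
        conn.sendall(conn.recv(1024))
        conn.close()

    t = threading.Thread(target=server)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))  # like new Socket(host, port)
    cli.sendall(b"hello")
    reply = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return reply
```

Note that the SYN/SYNACK/ACK segments themselves are generated by the kernel; the application never sees them directly.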

Slide 5: TCP Connection: Tear-down
Closing a connection:
- Step 1: client closes its socket (clientSocket.close();) and sends a TCP FIN control segment to the server
- Step 2: server receives the FIN and replies with an ACK; it then closes the connection and sends its own FIN

Slide 6: TCP Connection: Tear-down (cont.)
- Step 3: client receives the FIN and replies with an ACK; it enters a "timed wait", continuing to respond with ACKs to any received FINs
- Step 4: server receives the ACK; the connection is closed

Slide 7: TCP: A Reliable Data Transfer
- TCP creates an rdt service on top of IP's unreliable service
- pipelined segments
- cumulative ACKs
- TCP uses a single retransmission timer
- retransmissions are triggered by: timeout events; duplicate ACKs
- initially, consider a simplified TCP sender: ignore duplicate ACKs; ignore flow control and congestion control

Slide 8: TCP Sender Events
Data received from app:
- create a segment with a seq #
- the seq # is the byte-stream number of the first data byte in the segment
- start the timer if not already running (think of the timer as covering the oldest unACKed segment); expiration interval: TimeOutInterval
Timeout:
- retransmit the segment that caused the timeout
- restart the timer
ACK received:
- if it acknowledges previously unACKed segments: update what is known to be ACKed; start the timer if there are outstanding segments

Slide 9: TCP Seq. #'s and ACKs
Seq. #'s: the byte-stream "number" of the first byte in the segment's data.
ACKs: the seq # of the next byte expected from the other side; cumulative ACK.
Simple telnet scenario:
- Host A → Host B: Seq=42, ACK=79, data='C' (user types 'C')
- Host B → Host A: Seq=79, ACK=43, data='C' (host ACKs receipt of 'C', echoes back 'C')
- Host A → Host B: Seq=43, ACK=80 (host ACKs receipt of the echoed 'C')
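Because an ACK carries the seq # of the next byte expected, it is simply the arriving segment's seq # plus its data length. A one-line sketch checking the telnet scenario's numbers (`next_ack` is a hypothetical helper, not part of any API):

```python
def next_ack(seq, data_len):
    """Cumulative ACK: seq # of the next byte expected from the sender."""
    return seq + data_len

# Telnet scenario from slide 9: A sends Seq=42 carrying 1 byte ('C')
assert next_ack(42, 1) == 43   # B replies with ACK=43
assert next_ack(79, 1) == 80   # A replies with ACK=80 for B's echo (Seq=79, 1 byte)
```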

Slide 10: TCP: Retransmission Scenarios
Lost ACK scenario: Host A sends Seq=92, 8 bytes of data; Host B's ACK=100 is lost; A's timer expires, A retransmits Seq=92, 8 bytes; B re-sends ACK=100; SendBase = 100.
Premature timeout scenario: A sends Seq=92 (8 bytes) and then Seq=100 (20 bytes); A's timer for Seq=92 expires before ACK=100 arrives, so A retransmits Seq=92, 8 bytes; B's cumulative ACK=120 acknowledges both segments; SendBase = 120.

Slide 11: TCP Retransmission Scenarios (more)
Cumulative ACK scenario: Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 is lost, but ACK=120 arrives before the timeout and acknowledges both segments; SendBase = 120, so no retransmission is needed.

Slide 12: TCP ACK Generation [RFC 1122, RFC 2581]
Event at receiver → TCP receiver action:
- Arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed → delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
- Arrival of in-order segment with expected seq #; one other segment has an ACK pending → immediately send a single cumulative ACK, ACKing both in-order segments
- Arrival of out-of-order segment with higher-than-expected seq #; gap detected → immediately send a duplicate ACK, indicating the seq # of the next expected byte
- Arrival of segment that partially or completely fills a gap → immediately send ACK, provided the segment starts at the lower end of the gap

Slide 13: Fast Retransmit
Suppose packet 0 gets lost.
- Q: when will the retransmission of packet 0 happen? Why at that time?
- A: typically at t1: we conclude it is lost when the timer (set at t0) expires.
Can we do better? Think of what it means to receive many duplicate ACKs:
- it means packet 0 is lost
- why wait until the timeout, since we already know packet 0 is lost? => fast retransmit => better performance
Why 3 dup ACKs, and not just 1 or 2?
- think of what happens when pkt0 arrives after pkt1 (delayed, not lost)
- think of what happens when pkt0 arrives after pkt1 & pkt2, etc.
(Figure: client sends packets 0–3; the timer is set at t0 and expires at t1; packets 1–3 each trigger a duplicate ACK0; packet 0 is then retransmitted.)

Slide 14: Fast Retransmit: Recap
- receipt of duplicate ACKs indicates loss of segments
- a sender often sends many segments back-to-back
- if a segment is lost, there will likely be many duplicate ACKs
This is how TCP works:
- if the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost
- fast retransmit: resend the segment before the timer expires => better performance
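The trigger condition above can be sketched as a simple counter over the stream of incoming ACK numbers. A hedged sketch (`fast_retransmit_trigger` and the threshold parameter are illustrative; real TCP also tracks which segment to resend):

```python
def fast_retransmit_trigger(acks, dup_threshold=3):
    """Return the ACK number that triggers fast retransmit, or None.
    Fires on the (dup_threshold+1)-th ACK carrying the same number:
    one original ACK plus 3 duplicates."""
    counts = {}
    for a in acks:
        counts[a] = counts.get(a, 0) + 1
        if counts[a] == dup_threshold + 1:
            return a
    return None

# Slide 13's scenario: packets 1-3 each produce a duplicate ACK0 (here ACK=100)
assert fast_retransmit_trigger([100, 100, 100, 100]) == 100
assert fast_retransmit_trigger([100, 100, 100]) is None   # only 2 duplicates: wait
```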

Slide 15: TCP Flow Control (cont.)
- the receive side of a TCP connection has a receive buffer
- the app process may be slow at reading from the buffer
- speed-matching service: matching the send rate to the receiving app's drain rate
Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast.

Slide 16: TCP Flow Control
- TCP uses a sliding window for flow control
- the receiver specifies the window size (window advertisement): it specifies how many bytes in the data stream can be sent, and is carried in segments along with the ACK
- the sender can transmit any number of bytes, in segments of any size, between the last acknowledged byte and the edge of the advertised window

Slide 17: TCP Flow Control: How It Works
- the receiver advertises spare room by including the value of RcvWindow in the segment header
- the sender limits unACKed data to RcvWindow: this guarantees the receive buffer doesn't overflow (assuming the TCP receiver discards out-of-order segments)
- unused buffer space: rwnd = RcvBuffer - [LastByteRcvd - LastByteRead]
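The rwnd formula on this slide is a direct computation. A one-line sketch with hypothetical numbers (the helper name and values are illustrative):

```python
def rwnd(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room the receiver advertises (slide 17's formula):
    rwnd = RcvBuffer - [LastByteRcvd - LastByteRead]."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# hypothetical: 4096-byte buffer, 1000 bytes received, app has read 200 of them
assert rwnd(4096, 1000, 200) == 3296   # 800 bytes sit unread in the buffer
```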

Slide 18: Sliding Window Problem
Under some circumstances, the sliding window can result in transmission of many small segments:
- if the receiving application consumes only a few data bytes, the receiver will advertise a small window
- the sender will immediately send a small segment to fill the window
- this is inefficient in processing time and network bandwidth (why?)
Solutions:
- the receiver delays advertising a new window
- the sender delays sending data when the window is small

Slide 19: Review Questions
Problem:
- TCP connection between A and B
- B has received up to byte 248
- A sends 2 back-to-back segments to B, carrying 40 and 60 bytes
- B ACKs every packet it receives
Q1: What are the seq #'s in the 1st and 2nd segments from A to B?
Q2: Suppose the 1st segment gets to B first. What is the ACK # in the 1st ACK?
(Figure: Seq=249, 40 bytes; Seq=289, 60 bytes; ACK=289.)

Slide 20: Review Questions (cont.)
Same problem, one more question:
Q3: Suppose the 2nd segment gets to B first. What is the ACK # in the 1st ACK? And in the 2nd ACK?
(Figure: Seq=249, 40 bytes; Seq=289, 60 bytes; ACK=??)

Slide 21: Review Questions (answers)
Q1: Seq=249 and Seq=289 (B has received up to byte 248, so the 1st segment starts at byte 249, and 249 + 40 = 289).
Q2: If the 1st segment arrives first, the 1st ACK is ACK=289.
Q3: If the 2nd segment (Seq=289) arrives first, the 1st ACK is still ACK=249 (a duplicate ACK for the missing byte); once the 1st segment arrives and fills the gap, the 2nd ACK is the cumulative ACK=349.
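Both arrival orders can be checked with a tiny cumulative-ACK receiver: buffer each arriving segment, advance the expected byte past any contiguous data, and ACK the expected byte. A sketch (`cumulative_acks` is a hypothetical helper, not real TCP code):

```python
def cumulative_acks(expected, segments):
    """Simulate a receiver that sends a cumulative ACK for every arrival.
    segments: list of (seq, length) in arrival order.
    Returns the list of ACK numbers sent."""
    buffered = {}
    acks = []
    for seq, length in segments:
        buffered[seq] = length
        # advance 'expected' past any contiguous buffered data
        while expected in buffered:
            expected += buffered.pop(expected)
        acks.append(expected)
    return acks

# Slide 21: B has received up to byte 248, so it expects byte 249
assert cumulative_acks(249, [(249, 40), (289, 60)]) == [289, 349]  # in-order arrival
assert cumulative_acks(249, [(289, 60), (249, 40)]) == [249, 349]  # reordered arrival
```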

Slide 22: Chapter 3 Outline
1. Transport-layer services
2. Multiplexing and demultiplexing
3. Connectionless transport: UDP
4. Principles of reliable data transfer
5. Connection-oriented transport: TCP
6. Principles of congestion control
7. TCP congestion control

Slide 23: Principles of Congestion Control
- cause: end systems are sending too much data, too fast, for the network/routers to handle
- manifestations: lost/dropped packets (buffer overflow at routers); long delays (queueing in router buffers)
- different from flow control!
- a top-10 problem!

Slide 24: Causes/Costs of Congestion: Scenario 1
- two senders, two receivers
- one router with infinite buffers
- no retransmission
- result: large delays when congested
(Figure: Host A sends λin of original data into unlimited shared output-link buffers; Host B receives λout.)

Slide 25: Causes/Costs of Congestion: Scenario 2
- one router, finite buffers
- the sender retransmits lost packets
(Figure: Host A offers λin of original data, and λ'in of original plus retransmitted data, into finite shared output-link buffers; Host B receives λout.)

Slide 26: Causes/Costs of Congestion: Scenario 2 (cont.)
- Case (a): no retransmission (no loss): λ'in = λin and λout = λin, up to the per-connection capacity R/2
- Case (b): "perfect" retransmission, only when a loss actually occurs: λ'in > λin, so the goodput λout saturates below R/2 (around R/3 in the figure)
- Case (c): retransmission of delayed (not lost) packets makes λ'in larger than in the perfect case for the same λout; goodput drops further (around R/4 in the figure)

Slide 27: Approaches Toward Congestion Control
Two broad approaches:
End-end congestion control:
- no explicit feedback from the network
- congestion inferred from end-system observed loss and delay
- approach taken by TCP
Network-assisted congestion control:
- routers provide feedback to end systems: a single bit indicating congestion — ECN (explicit congestion notification); or an explicit rate at which the sender should send — ICMP (Internet Control Message Protocol)

Slide 28: Chapter 3 Outline (repeated)
1. Transport-layer services
2. Multiplexing and demultiplexing
3. Connectionless transport: UDP
4. Principles of reliable data transfer
5. Connection-oriented transport: TCP
6. Principles of congestion control
7. TCP congestion control

Slide 29: TCP Congestion Control
Keep in mind:
- too slow: under-utilization => wasting network resources by not using them
- too fast: over-utilization => wasting network resources by congesting them
The challenge is then: neither too slow nor too fast!
Approach:
- slowly increase the sending rate to probe for usable bandwidth
- decrease the sending rate when congestion is observed
=> additive-increase, multiplicative-decrease (AIMD)

Slide 30: TCP Congestion Control: AIMD
Additive-increase, multiplicative-decrease (also called "congestion avoidance"):
- additive increase: increase CongWin by 1 MSS every RTT until a loss is detected
- multiplicative decrease: cut CongWin in half after a loss
Over time the congestion window traces a saw-tooth: probing for bandwidth.
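The saw-tooth can be reproduced with a few lines: add one MSS per RTT, halve on loss. An illustrative sketch (names, units in MSS, and the fixed loss period are assumptions; real TCP works in bytes and reacts to actual losses):

```python
def aimd(cwnd, rtts_between_losses, cycles, mss=1):
    """Trace CongWin (in MSS) under AIMD: +1 MSS per RTT, halved at each loss."""
    trace = [cwnd]
    for _ in range(cycles):
        for _ in range(rtts_between_losses):
            cwnd += mss           # additive increase, once per RTT
            trace.append(cwnd)
        cwnd = max(1, cwnd // 2)  # multiplicative decrease on loss
        trace.append(cwnd)
    return trace

# one saw-tooth: start at 8 MSS, grow for 4 RTTs, then a loss halves the window
assert aimd(8, 4, 1) == [8, 9, 10, 11, 12, 6]
```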

Slide 31: TCP Congestion Control: Details
- the sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin
- roughly, rate = CongWin / RTT bytes/sec
- CongWin is dynamic, a function of perceived network congestion
How does the sender perceive congestion?
- a loss event is either a timeout or 3 duplicate ACKs
- the TCP sender reduces its rate (CongWin) after a loss event
Improvements: any problem with AIMD? Think of the start of a connection. Solution: start a little faster, and then slow down => "slow-start"

Slide 32: TCP Slow-Start
- when a connection begins, CongWin = 1 MSS
- example: MSS = 500 bytes, RTT = 200 msec => initial rate = 20 kbps
- the available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate
TCP addresses this via the slow-start mechanism:
- when the connection begins, increase the rate exponentially fast
- when a packet loss occurs (indicating the connection has reached the limit), slow down

Slide 33: TCP Slow-Start (more)
How it is done:
- when the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT, implemented by incrementing CongWin for every ACK received
- summary: the initial rate is slow, but it ramps up exponentially fast
(Figure: Host A sends one segment in the first RTT, two in the second, four in the third.)

Slide 34: Refinement: TCP Tahoe
Question: when should the exponential increase (slow-start) switch to linear increase (AIMD)?
Here is how it works:
- define a variable called Threshold
- start in slow-start
- when CongWin = Threshold: switch to AIMD (linear)
- at a loss event: set Threshold = 1/2 CongWin, and start over with CongWin = 1
Question: what should Threshold be set to at first?
Answer: run slow-start until the first packet loss occurs; when loss occurs, set Threshold = 1/2 the current CongWin.

Slide 35: More Refinement: TCP Reno
Loss event: timeout vs. duplicate ACKs
- 3 dup ACKs: fast retransmit
(Figure: client sends packets 0–3; the timer is set at t0 and expires at t1; packets 1–3 each trigger a duplicate ACK0; packet 0 is retransmitted.)

Slide 36: More Refinement: TCP Reno (cont.)
Loss event: timeout vs. duplicate ACKs
- 3 dup ACKs: fast retransmit
- timeout: retransmit
Any difference (think congestion)?
- 3 dup ACKs indicate the network is still capable of delivering some segments after a loss
- a timeout indicates a "more" alarming congestion scenario

Slide 37: More Refinement: TCP Reno (cont.)
TCP Reno treats "3 dup ACKs" differently from "timeout". How does TCP Reno work?
- after 3 dup ACKs: CongWin is cut in half, then congestion avoidance (the window grows linearly)
- after a timeout event: CongWin is instead set to 1 MSS, then slow-start (the window grows exponentially)

Slide 38: Summary: TCP Congestion Control
- when CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially
- when CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly
- when a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold
- when a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS
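The four rules above form a small state machine. A hedged sketch, one event at a time (windows in whole MSS; `reno_step` and the event names are illustrative, and real Reno also includes fast recovery details omitted here):

```python
def reno_step(cwnd, ssthresh, event):
    """Apply one slide-38 rule. event is 'rtt' (an RTT with no loss),
    'dup3' (triple duplicate ACK), or 'timeout'. Returns (cwnd, ssthresh)."""
    if event == 'rtt':
        # slow-start below the threshold (double), congestion avoidance above (+1)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    elif event == 'dup3':
        ssthresh = cwnd // 2
        cwnd = ssthresh          # CongWin set to the new Threshold
    elif event == 'timeout':
        ssthresh = cwnd // 2
        cwnd = 1                 # back to slow-start from 1 MSS
    return cwnd, ssthresh

assert reno_step(4, 8, 'rtt') == (8, 8)        # slow-start: window doubles
assert reno_step(8, 8, 'rtt') == (9, 8)        # congestion avoidance: +1 MSS
assert reno_step(12, 8, 'dup3') == (6, 6)      # triple dup ACK: halve
assert reno_step(12, 8, 'timeout') == (1, 6)   # timeout: restart at 1 MSS
```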

Slide 39: Average Throughput of TCP
Average throughput as a function of window size W and RTT (ignore slow-start; let W, the window size when loss occurs, be constant):
- when the window is W, throughput(high) = W/RTT
- just after a loss, the window drops to W/2, so throughput(low) = W/(2·RTT)
- throughput then increases linearly from W/(2·RTT) back to W/RTT
- hence, average throughput = 0.75·W/RTT
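The 0.75·W/RTT result is just the midpoint of the linear ramp between the low and high points. A sketch verifying the arithmetic (helper name and sample numbers are illustrative):

```python
def avg_tcp_throughput(w, rtt):
    """Average of a linear ramp from W/(2*RTT) to W/RTT is the midpoint,
    i.e. 0.75*W/RTT (W in bytes, RTT in seconds; slow-start ignored)."""
    low, high = w / (2 * rtt), w / rtt
    return (low + high) / 2

# e.g. W = 3000 bytes, RTT = 1 s: ramp from 1500 to 3000 B/s averages 2250 B/s
assert avg_tcp_throughput(3000, 1) == 2250 == 0.75 * 3000 / 1
```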

Slide 40: TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
(Figure: TCP connections 1 and 2 share a bottleneck router of capacity R.)

Slide 41: Fairness (more)
Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this
- example: a link of rate R supporting 9 connections; a new app asking for 1 TCP connection gets rate R/10; another new app asking for 11 TCP connections gets R/2!

Slide 42: Question
Again assume two competing sessions only:
- additive increase gives a slope of 1 as throughput increases
- but use a constant/equal decrease instead of multiplicative decrease (call this scheme AIED)
(Figure: TCP connections 1 and 2 share a bottleneck router of capacity R.)

Slide 43: Question (cont.)
Same AIED scheme as on the previous slide.
(Figure: connection 1 throughput vs. connection 2 throughput, with the equal-bandwidth-share line, the full-utilization line at R, and the operating point (R1, R2).)
Question:
- how would (R1, R2) vary when AIED is used instead of AIMD?
- is it fair? Does it converge to an equal share?

Slide 44: Why Is TCP Fair?
Two competing sessions:
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease reduces throughput proportionally
(Figure: starting from an unequal share, each cycle of congestion avoidance [additive increase] followed by a loss [decrease window by a factor of 2] moves the operating point toward the equal-bandwidth-share line.)

