The University of Adelaide, School of Computer Science


1 The University of Adelaide, School of Computer Science
14 April 2017
Chapter 5: End-to-End Protocols
Copyright © 2010, Elsevier Inc. All Rights Reserved
Chapter 2 — Instructions: Language of the Computer

2 Problem
How do we turn this host-to-host packet delivery service into a process-to-process communication channel?

3 End-to-end Protocols
Application-level processes that use end-to-end protocol services have certain requirements, e.g. common properties that a transport protocol can be expected to provide:
- Guarantees message delivery
- Delivers messages in the same order they were sent
- Delivers at most one copy of each message
- Supports arbitrarily large messages
- Supports synchronization between the sender and the receiver
- Allows the receiver to apply flow control to the sender
- Supports multiple application processes on each host

4 End-to-end Protocols
Typical limitations of the network on which a transport protocol will operate:
- Drops messages
- Reorders messages
- Delivers duplicate copies of a given message
- Limits messages to some finite size
- Delivers messages after an arbitrarily long delay
Such a network is said to provide a best-effort level of service.

5 End-to-end Protocols Challenge for Transport Protocols
Develop algorithms that turn the less-than-desirable properties of the underlying network into the high level of service required by application programs

6 End-to-End Protocols TCP

7 Simple Demultiplexer (UDP)
Extends host-to-host delivery service of the underlying network into a process-to-process communication service Adds a level of demultiplexing which allows multiple application processes on each host to share the network

8 Simple Demultiplexer (UDP)
Sender: multiplexing of UDP datagrams. UDP datagrams are received from multiple application programs, and a single sequence of UDP datagrams is passed to the IP layer.
Receiver: demultiplexing of UDP datagrams. A single sequence of UDP datagrams is received from the IP layer, and each UDP datagram received is passed to the appropriate application program.
(Figure: datagrams arriving at the receiver's IP layer are demultiplexed by UDP, based on port, to Port 1, Port 2, or Port 3.)

9 UDP provides application multiplexing (via port numbers)
(Figure: multiple processes on each host connect through a UDP multiplexer on the sending side and a UDP demultiplexer on the receiving side to the IP layer.)

10 Simple Demultiplexer (UDP)
Format for UDP header
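As an illustrative sketch (not a kernel implementation), the four 16-bit UDP header fields (SrcPort, DstPort, Length, Checksum) can be packed and unpacked with Python's struct module; a checksum of 0 is used here for simplicity, which is legal for UDP over IPv4:

```python
import struct

# Build and parse the 8-byte UDP header. "!HHHH" means four 16-bit
# unsigned fields in network (big-endian) byte order.
def build_udp_header(src_port, dst_port, payload_len):
    length = 8 + payload_len  # the Length field covers header plus data
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

def parse_udp_header(header):
    src, dst, length, checksum = struct.unpack("!HHHH", header[:8])
    return {"SrcPort": src, "DstPort": dst, "Length": length, "Checksum": checksum}

hdr = build_udp_header(1069, 53, payload_len=32)
fields = parse_udp_header(hdr)
```

The port numbers and payload length here are arbitrary example values.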

11 Simple Demultiplexer (UDP)
Multiplexing and demultiplexing are based only on port numbers. A port can be used both for sending UDP datagrams to other hosts and for receiving UDP datagrams from other hosts.

12 Simple Demultiplexer (UDP)
Typically, a port is implemented by a message queue (buffer).
On message arrival: when a message arrives, the protocol (e.g., UDP) appends it to the end of the queue; if the queue is full, the message is discarded.
On message processing: when an application process wants to receive a message, one is removed from the front of the queue, and the application process then processes it.
(Figure: UDP message queue.)
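The queue behavior described above can be sketched in a few lines of Python; the capacity of 4 and the datagram names are hypothetical:

```python
from collections import deque

# A UDP port modeled as a bounded message queue.
class PortQueue:
    def __init__(self, capacity=4):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0

    def on_datagram_arrival(self, msg):
        # If the queue is full, the arriving datagram is silently discarded.
        if len(self.queue) >= self.capacity:
            self.dropped += 1
        else:
            self.queue.append(msg)

    def receive(self):
        # The application removes a message from the front of the queue.
        return self.queue.popleft() if self.queue else None

port = PortQueue()
for i in range(6):                       # six datagrams arrive, capacity is 4
    port.on_datagram_arrival(f"dgram-{i}")
first = port.receive()                   # application reads the oldest message
```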

13 Simple Demultiplexer (UDP)
How does a process learn the port of the process to which it wants to send a message? Typically, a client process initiates a message exchange with a server process. Once a client has contacted a server, the server knows the client's port (from the SrcPrt field contained in the message header) and can reply to it.
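This request/reply pattern can be demonstrated with real UDP sockets over loopback; the server's port number below is an arbitrary choice for the demo, and the client's port is assigned dynamically by the OS:

```python
import socket

# Minimal UDP request/reply over loopback. The server binds a fixed port
# (static binding); the client's port is assigned by the operating system.
SERVER_PORT = 50053  # hypothetical "well-known" port for this demo

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", SERVER_PORT))
server.settimeout(5)

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"query", ("127.0.0.1", SERVER_PORT))

data, client_addr = server.recvfrom(1024)      # server learns the client's port here
server.sendto(b"reply:" + data, client_addr)   # reply to the SrcPrt from the datagram

reply, _ = client.recvfrom(1024)
client_port = client.getsockname()[1]
server.close()
client.close()
```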

14 Simple Demultiplexer (UDP)
How does the client learn the server's port in the first place? A common approach is for the server to accept messages at a well-known port, i.e., each server receives its messages at some fixed port that is widely published (static binding), like the well-known emergency phone number 911. IANA assigns port numbers to specific services, and the list of all assignments is universally published by IANA. Port numbers range from 0 to 65535. Static binding examples: DNS: port 53, SMTP: port 25, FTP: port 21.

15 Part of an IANA port assignment table:

16 Static vs Dynamic Binding
Dynamic binding:
- The server's port is well defined (static binding).
- The client's port number is often chosen from the dynamic port range: the program dynamically receives a port from the operating system.
UDP and TCP use a hybrid approach: some port numbers are assigned to fixed services, while many others are available for dynamic assignment to application programs. Port numbers range from 0 to 65535 (although 0 often has special meaning). In the original BSD TCP implementation, only root can bind to ports 1-1023, and dynamically assigned ports were taken from the range 1024-4999; the remaining ports were available for unprivileged static assignment. These days 1024-4999 often does not provide enough dynamic ports, and IANA has now officially designated the range 49152-65535 for dynamic port assignment. Even that is not enough for some busy servers, so the range is usually configurable by an administrator. On modern Linux and Solaris systems (often used as servers), the default dynamic range now starts at 32768; Mac OS X and Windows Vista default to 49152-65535.

17 Simple Demultiplexer (UDP)
Fixed UDP port numbers on Linux and Unix are listed in /etc/services.

18 Example 1
A client-server application such as DNS uses the services of UDP because a client needs to send a short request to a server and receive a quick response from it. The request and response can each fit in one user datagram. Since only one message is exchanged in each direction, the connectionless feature is not an issue; i.e., the client and server do not worry about messages being delivered out of order.
If you've ever used the Internet, it's a good bet that you've used the Domain Name System (DNS), even without realizing it. DNS is a protocol within the TCP/IP protocol suite, the set of standards for how computers exchange data on the Internet and on many private networks. Its basic job is to turn a user-friendly domain name into the Internet Protocol (IP) address that computers use to identify each other on the network. It's like your computer's GPS for the Internet.

19 Question Is using UDP for sending e-mails appropriate? Why?

20 Answer A client-server application such as SMTP, which is used in electronic mail, CANNOT use the services of UDP because a user can send a long message, which may include multimedia (images, audio, or video). If the application uses UDP and the message does not fit in one single user datagram, the message must be split by the application into different user datagrams. Here the connectionless service may create problems. The user datagrams may arrive and be delivered to the receiver application out of order. The receiver application may not be able to reorder the pieces. This means the connectionless service has a disadvantage for an application program that sends long messages.

21 Question Can downloading a large text file from the Internet use UDP as the transport protocol? Why?

22 Answer Assume we are downloading a very large text file from the Internet. We definitely need to use a transport layer that provides reliable service: we don't want part of the file to be missing or corrupted when we open it. The delay created between the delivery of the parts is not an overriding concern; we wait until the whole file is composed before looking at it. In this case, UDP is not a suitable transport layer.

23 Question Is using UDP for watching a real-time streaming video OK? Why?

24 Answer Assume we are watching a real-time streaming video on our computer. Such a program is considered a long file; it is divided into many small parts and broadcast in real time, with the parts of the message sent one after another. If the transport layer were to resend a corrupted or lost frame, the synchronization of the whole transmission could be lost: the viewer would suddenly see a blank screen and need to wait until the second transmission arrives. This is not tolerable. However, if each small part of the screen is sent using one single user datagram, the receiving UDP can easily ignore the corrupted or lost packet and deliver the rest to the application program. That part of the screen is blank for a very short period of time, which most viewers do not even notice. However, video cannot be viewed out of order, so streaming audio, video, and voice applications that run over UDP must reorder or drop frames that are out of sequence.

25 Reliable Byte Stream (TCP)
IP provides connectionless/unreliable packet delivery to a host; UDP provides delivery to multiple ports within a host. In contrast to UDP, the Transmission Control Protocol (TCP) offers the following services:
- Byte-stream service: application programs see a stream of bytes flowing from sender to receiver.
- Reliable: the sequence of bytes sent is the same as the sequence of bytes received. Technique: the sliding window protocol.
- Connection oriented: a connection between sender and receiver is established before sending data.
- Like UDP: delivery to multiple ports within a host.

26 Reliable Byte Stream (TCP)
TCP also provides:
- Flow control: prevents senders from overrunning the capacity of the receivers, i.e., packets are not sent faster than they can be received. Technique: TCP's variant of the sliding window protocol.
- Congestion control: prevents too much data from being injected into the network, i.e., stops switches or links from becoming overloaded.

27 TCP Segment
TCP is a byte-oriented protocol: the sender writes bytes into a TCP connection, and the receiver reads bytes out of the TCP connection.
Buffered transfer: application software may transmit individual bytes across the stream. However, TCP does NOT transmit individual bytes over the Internet; rather, bytes are aggregated into packets before sending, for efficiency.

28 TCP Segment TCP on the destination host then empties the contents of the packet into a receive buffer The receiving process reads from this buffer. The packets exchanged between TCP peers are called segments.

29 TCP Segment
(Figure: how TCP manages a byte stream, using send and receive buffers.)

30 TCP Header
TCP header format:
- SrcPort and DstPort: identify the source and destination ports, respectively.
- Acknowledgment, SequenceNum, and AdvertisedWindow: involved in TCP's sliding window algorithm.
- SequenceNum: each byte of data has a sequence number; this field contains the sequence number for the first byte of data carried in the segment.
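As a sketch, the fixed 20-byte TCP header (assuming no options) can be parsed with Python's struct module; the segment built below, with its ports, sequence number, and window value, is a hypothetical SYN:

```python
import struct

# Parse the fixed 20-byte TCP header: SrcPort, DstPort, SequenceNum,
# Acknowledgment, data-offset + flags, AdvertisedWindow, Checksum, UrgPtr.
def parse_tcp_header(hdr):
    src, dst, seq, ack, off_flags, window, checksum, urg = \
        struct.unpack("!HHIIHHHH", hdr[:20])
    return {
        "SrcPort": src, "DstPort": dst,
        "SequenceNum": seq, "Acknowledgment": ack,
        "HdrLen": (off_flags >> 12) * 4,   # header length in bytes
        "Flags": off_flags & 0x3F,         # URG|ACK|PSH|RST|SYN|FIN bits
        "AdvertisedWindow": window,
        "Checksum": checksum, "UrgPtr": urg,
    }

# A hypothetical SYN segment: SequenceNum 1000, 20-byte header (offset 5),
# SYN flag (0x02), advertised window 65535.
raw = struct.pack("!HHIIHHHH", 1184, 53, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
fields = parse_tcp_header(raw)
```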

31 TCP Header
Flags: relay control information between TCP peers. The possible flags are SYN, FIN, RESET, PUSH, URG, and ACK.
- The SYN and FIN flags are used when establishing and terminating a TCP connection.
- The ACK flag is set any time the Acknowledgment field is valid.
- The URG flag signifies that this segment contains urgent data.
- The PUSH flag signifies that the sender invoked the push operation.
- The RESET flag signifies that the receiver has become confused: it received a segment it did not expect, so it wants to abort the connection.
Checksum: computed as in UDP.
UrgPtr (16 bits): supports sending out-of-band data, e.g. a keyboard sequence that interrupts the program at the other end. The urgent pointer specifies the position in the segment where the urgent data ends. On receipt of a segment with urgent data, the TCP software tells the application program to go into "urgent mode" and immediately delivers the urgent data.

32 TCP Connections
Use of protocol port numbers: a TCP connection is identified by a pair of endpoints, where an endpoint is a pair (host IP address, port number). E.g., with placeholder host addresses A-D (the original omits the IP addresses):
Connection 1: (A, 1069) and (B, 25)
Connection 2: (C, 1184) and (B, 53)
Connection 3: (D, 1184) and (B, 53)
Different connections may share an endpoint: multiple clients may be connected to the same service.

33 Connection Establishment/Termination in TCP
Connection setup: the client does an active open to a server. The two sides engage in an exchange of messages to establish the connection, and only then send data. Both sides must signal that they are ready to transfer data, and both sides must agree on initial sequence numbers, which are randomly chosen.
(Figure: timeline for the three-way handshake algorithm.)

34 Connection Establishment
Segment1 (ClientServer): State the initial sequence number it plans to use Flags = SYN, SequenceNum = x. Segment2(ServerClient): Acknowledges client’s sequence number Flags = ACK, Ack = x+1 States its own beginning sequence number Flags = SYN, SequenceNum = y Segment3(ClientServer): Acknowledges the server’s sequence number Flags = ACK, Ack = y +1 1 2 3 ACK filed acknowledge next sequence number expected from other side

35 Connection Establishment/Termination in TCP
Connection teardown: when a participant is done sending data, it closes one direction of the connection. Either side can initiate teardown by sending a FIN signal ("I'm not going to send any more data"). The other side can continue sending data (a half-open connection) but must continue to acknowledge. Acknowledging a FIN: acknowledge the last sequence number + 1.
(Timeline: client sends FIN, SequenceNum = a; server replies ACK, Acknowledgement = a+1; the server may continue sending Frame n, acknowledged by ACK n; the server then sends FIN, SequenceNum = b; the client replies ACK, Acknowledgement = b+1.)

36 Connection Establishment/Termination in TCP
Connection setup is asymmetric: one side does a passive open and the other side does an active open. Connection teardown is symmetric: each side has to close the connection independently.

37 End-to-end Issues
At the heart of TCP is the sliding window algorithm. TCP runs over the Internet rather than a point-to-point link, so the following issues need to be addressed by the sliding window algorithm:
- TCP supports logical connections between processes.
- TCP connections are likely to have widely different round-trip times.
- Packets may get reordered in the Internet.

38 Sliding Window Revisited
Simple positive acknowledgement protocol. Normal situation: positive acknowledgement with retransmission.
(Timeline: the sender sends Frame 1 and receives Ack 1, then sends Frame 2 and receives Ack 2.)

39 Error in transmission: packet loss
This under-utilizes the network capacity: there is only one message at a time in the network.
(Timeline: the sender sends Frame 1; when no Ack 1 arrives, a timeout triggers retransmission.)

40 Sliding Window Protocol
A better form of positive acknowledgement and retransmission: the sender may transmit multiple packets before waiting for an acknowledgement.
- Window of packets of small fixed size N.
- All packets within the window may be transmitted.
- When the first packet inside the window is acknowledged, the window slides one element further.
(Sender sliding window over frames 1-9: send frames 1-4; Ack 1 arrives → send frame 5; Ack 2 arrives → send frame 6.)
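The window-sliding behavior described above can be simulated directly; the frame numbers 1-9 and window size N = 4 match the example:

```python
# Toy fixed-size sliding window (window size N = 4) over frames 1..9.
N = 4
frames = list(range(1, 10))
base = 0  # index of the oldest unacknowledged frame

def window():
    return frames[base:base + N]

sent = list(window())        # send frames 1-4
base += 1                    # Ack 1 arrives -> window slides
sent.append(window()[-1])    # send frame 5
base += 1                    # Ack 2 arrives -> window slides
sent.append(window()[-1])    # send frame 6
```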

41 Sliding Window Revisited
Performance depends on the window size N and on the speed of packet delivery.
- N = 1: the simple positive acknowledgement protocol.
- By increasing N, network idle time may be eliminated.
- Steady state: the sender transmits packets as fast as the network can transfer them. A well-tuned sliding window protocol keeps the network saturated.
(Timeline: Frames 1-5 in flight while Acks 1-4 return.)

42 Sliding Window Revisited
Relationship between TCP send buffer (a) and receive buffer (b).

43 TCP Sliding Window
Sending side: LastByteAcked ≤ LastByteSent ≤ LastByteWritten
Receiving side: LastByteRead < NextByteExpected ≤ LastByteRcvd + 1
E.g., at the sender, with the sending window size advertised by the receiver (bytes 1-11, Fig. A):
- Bytes 1-2: acknowledged (up to LastByteAcked)
- Bytes 3-6: sent but not yet acknowledged (up to LastByteSent)
- Bytes 7-9: not yet sent; will send next (up to LastByteWritten)
- Later bytes cannot be sent until the window moves.
Fig. A: Send window.
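The sender-side invariant and the effective window can be checked with the example byte positions from the slide; the advertised window value of 7 is an assumption for illustration:

```python
# Sender-side byte pointers from the slide's example (bytes 1-11):
LastByteAcked, LastByteSent, LastByteWritten = 2, 6, 9
AdvertisedWindow = 7  # hypothetical value from the receiver

# The sending-side invariant must hold:
invariant_holds = LastByteAcked <= LastByteSent <= LastByteWritten

# Effective window: how many more bytes the sender may transmit right now.
EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
```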

44 TCP’s Sliding Window Mechanism
TCP’s variant of the sliding window algorithm, which serves several purposes: (1) it guarantees the reliable delivery of data, (2) it ensures that data is delivered in order, and (3) it enforces flow control between the sender and the receiver.

45 TCP Sliding Window
TCP's sliding window operates at the byte level:
- Bytes in the window are sent from first to last.
- The first bytes have to be acknowledged before the window may slide.
- The size of the window is variable.
The receiver delivers window advertisements to the sender:
- On setup of a connection, the receiver informs the sender about the size of its window (how many bytes the receiver is willing to accept, i.e., the size of the receiver's buffer).
- The sender sends at most as many bytes as determined by the window advertisement.
- Every ACK message from the receiver contains a new window advertisement, and the sender adapts its window size correspondingly.

46 TCP Sliding Window
This solves the problem of flow control: as the window slides, the receiver may adjust the speed of the sender to its own speed by modifying the window size.

47 Triggering Transmission
How does TCP decide to transmit a segment? TCP supports a byte stream abstraction Application programs write bytes into streams It is up to TCP to decide that it has enough bytes to send a segment

48 Triggering Transmission
What factors govern triggering transmission? (Ignore flow control for now: assume the window is wide open.) TCP has three mechanisms to trigger the transmission of a segment:
- TCP maintains a variable MSS (Maximum Segment Size) and sends a segment as soon as it has collected MSS bytes from the sending process.
- The sending process has explicitly asked TCP to send: TCP supports a push operation (immediate send rather than buffering). E.g. a Telnet application is interactive, and each keystroke is immediately sent to the other side.
- A timer fires: the resulting segment contains as many bytes as are currently buffered for transmission.
Ref: TCP Immediate Data Transfer: "Push" Function. The fact that TCP takes incoming data from a process as an unstructured stream of bytes gives it great flexibility in meeting the needs of most applications. There is no need for an application to create blocks or messages; it just sends the data to TCP when it is ready for transmission. For its part, TCP has no knowledge of or interest in the meaning of the bytes of data in this stream. They are "just bytes," and TCP just sends them without any real concern for their structure or purpose. This has a couple of interesting impacts on how applications work. One is that TCP does not provide any natural indication of the dividing point between pieces of data, such as database records or files; the application must take care of this. Another result of TCP's byte orientation is that TCP cannot decide when to form a segment and send bytes between devices based on the contents of the data. TCP will generally accumulate data sent to it by an application process in a buffer. It chooses when and how to send data based solely on the sliding window system discussed in the previous topic, in combination with logic that helps to ensure efficient operation of the protocol.
This means that while an application can control the rate and timing with which it sends data to TCP, it cannot inherently control the timing with which TCP itself sends the data over the internetwork. If we are sending a large file, this isn't a big problem: as long as we keep sending data, TCP will keep forwarding it over the internetwork, and it's generally fine to let TCP fill its internal transmit buffer and form a segment when it sees fit.
Problems with accumulating data for transmission: there are situations where letting TCP accumulate data before transmitting it can cause serious application problems. The classic example is an interactive application such as the Telnet Protocol. When you are using such a program, you want each keystroke to be sent immediately to the other application; you don't want TCP to accumulate hundreds of keystrokes and then send them all at once. The latter may be more "efficient," but it makes the application unusable, which is really putting the cart before the horse. Even with a more mundane protocol that transfers files, there are many situations in which we need to say "send the data now." For example, many protocols begin with a client sending a request to a server, like a request for a Web page sent by a Web browser. In that circumstance, we want the client's request sent immediately; we don't want to wait until enough requests have been accumulated by TCP to fill an "optimal-sized" segment.
Forcing immediate data transfer: the designers of TCP realized that a way was needed to handle these situations. When an application has data that it needs to have sent across the internetwork immediately, it sends the data to TCP and then uses the TCP push function. This tells the sending TCP to immediately "push" all the data it has to the recipient's TCP as soon as it is able to do so, without waiting for more data. When this function is invoked, TCP will create a segment (or segments) containing all the data it has outstanding and will transmit it with the PSH control bit set to 1. The destination device's TCP software, seeing this bit set, will know that it should not just buffer the data in the segment it received, but rather push it through directly to the application.
It's important to realize that the push function only forces immediate delivery of data; it does not change the fact that TCP provides no boundaries between data elements. It may seem that an application could send one record of data and "push" it to the recipient, then send the second record and "push" that, and so on. However, the application cannot assume that because it sets the PSH bit for each piece of data it gives to TCP, each piece of data will be in a single segment. It is possible that the first "push" may contain data given to TCP earlier that wasn't yet transmitted, and it's also possible that two records "pushed" in this manner may end up in the same segment anyway.
Key concept: TCP includes a special "push" function to handle cases where data given to TCP needs to be sent immediately. An application can send data to its TCP software and indicate that it should be pushed. The segment will be sent right away rather than being buffered, and the pushed segment's PSH control bit will be set to one to tell the receiving TCP that it should immediately pass the data up to the receiving application.

49 Silly Window Syndrome
Caused by poorly implemented TCP flow control.
Silly window syndrome: each ACK advertises a small window, and each segment carries a small amount of data (segment size < maximum segment size).
Problems: poor use of network bandwidth and unnecessary computational overhead.
E.g., the server/receiver consumes data slowly, so it requests that the client/sender reduce the sending window size. If the server continues to be unable to process all incoming data, the window becomes smaller and smaller (shrinking to a "silly" value), and data transmission becomes extremely inefficient.

50 Silly Window Syndrome: Created by the Receiver
Problem: the receiving application program consumes data slowly, so the receiver advertises a smaller and smaller window to the sender.
Solution (Clark's solution): send an ACK but announce a window size of zero until there is enough space to accommodate a segment of maximum size, or until half of the buffer is empty; i.e., the receiver consumes data until "enough" space is available to advertise.

51 Silly Window Syndrome: Created by the Sender
Problem: the sending application program creates data slowly.
Goal: do not send small segments; wait and collect data to send in a larger block. But how long should the sending TCP wait before transmitting?
Solution: Nagle's algorithm, which takes into account (1) the speed of the application program that creates the data and (2) the speed of the network that transports the data.

52 Nagle’s Algorithm Solves slow sender problem
If there is data to send but the window open is less than MSS, then we may want to wait some amount of time before sending the available data But how long? If we wait too long, then we hurt interactive applications like Telnet If we don’t wait long enough, then we risk sending a bunch of tiny packets and falling into the silly window syndrome Solution: Introduce a timer and to transmit when the timer expires

53 Nagle’s Algorithm We could use a clock-based timer(e.g. fires every 100 ms) Nagle introduced an elegant self-clocking solution Key Idea: As long as TCP has any data in flight, the sender will eventually receive an ACK This ACK can be treated like a timer firing, triggering the transmission of more data

54 Nagle’s Algorithm When the application produces data to send if both the available data and the window ≥ MSS send a full segment else if there is unACKed data in flight buffer the new data until an ACK arrives send all the new data now

55 Adaptive Retransmission
TCP estimates the round-trip delay for each active connection For each connection: TCP generates a sequence of round-trip estimates and produces a weighted average

56 Question Why would TCP estimate RTT delay per connection?

57 Answer
The goal is to wait long enough to decide that a packet was lost and needs to be retransmitted, without waiting longer than necessary: we don't want the timeout to expire too soon or too late. Ideally, we would like to set the retransmission timer to a value just slightly larger than the round-trip time (RTT) between the two TCP devices.

58 The problem is that there is no such “typical” round-trip time.

59 There are two main reasons for this:
Differences in connection distance: e.g., pinging Georgia State University (GA), Oxford University (UK), and a college in Kharagpur (India) from csbsju in the United States yields very different round-trip times.

60 Transient Delays and Variability:
The amount of time it takes to send data between any two devices will vary over time due to various happenings on the internetwork: fluctuations in traffic, router loads, etc.

61 It is for these reasons that TCP does not attempt to use a static, single number for its retransmission timers (timeout). Instead, TCP uses a dynamic, or adaptive retransmission scheme.

62 Adaptive Retransmission
In addition to round-trip delay estimation, TCP also maintains an estimate of the variance and uses a linear combination of the estimated mean and variance as the value of the timeout.
(Example 1: a connection with a longer RTT; Example 2: a connection with a shorter RTT. In each timeline, RTT estimates are taken from Frame/Ack pairs, and a lost packet is detected when the timeout expires.)

63 Adaptive Retransmission
Adaptive retransmission is based on round-trip time calculations: TCP attempts to determine the approximate round-trip time between the devices and adjusts it over time to compensate for increases or decreases in the average delay. (To learn how this is done, see RFC 2988.)
RTT_estimated = (α × RTT_estimated) + ((1 − α) × RTT_sample)
where α is a smoothing factor between 0 and 1: the new estimate of RTT combines the previous estimate of RTT with the RTT for the most recent segment/ACK pair.

64 Question Which values are appropriate for α?

65 Answer
RTT_estimated = (α × RTT_estimated) + ((1 − α) × RTT_sample)
Higher values of α (closer to 1) provide better smoothing and avoid sudden changes as a result of one very fast or very slow RTT measurement. Conversely, this also slows down how quickly TCP reacts to more sustained changes in round-trip time.
Lower values of α (closer to 0) make the estimated RTT change more quickly in reaction to changes in the measured RTT, but can cause "over-reaction" when RTTs fluctuate wildly.

66 Adaptive Retransmission
Original algorithm:
- Measure SampleRTT for each segment/ACK pair.
- Compute a weighted average of the RTT: EstRTT = α × EstRTT + (1 − α) × SampleRTT, with α between 0.8 and 0.9.
- Set the timeout based on EstRTT: TimeOut = 2 × EstRTT. Why the factor of 2? We need to make room for variance too!
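The original algorithm's EWMA update can be written as a one-line function; the α of 0.875, the starting estimate, and the sample values are hypothetical:

```python
# Original TCP RTT estimation: exponentially weighted moving average with
# alpha in [0.8, 0.9]; timeout = 2 x EstRTT.
def update_est_rtt(est_rtt, sample_rtt, alpha=0.875):
    return alpha * est_rtt + (1 - alpha) * sample_rtt

est = 100.0  # ms, hypothetical starting estimate
for sample in (120.0, 80.0, 110.0):
    est = update_est_rtt(est, sample)
timeout = 2 * est  # original algorithm: timeout = 2 x EstRTT
```

Note how the estimate stays near 100 ms even though the samples swing between 80 and 120 ms: the high α smooths out individual fluctuations.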

67 Original Algorithm Problem (Acknowledgement Ambiguity)
An ACK does not really acknowledge a transmission; it actually acknowledges the receipt of data. When a segment is retransmitted and then an ACK arrives at the sender, it is impossible to decide whether this ACK should be associated with the first or the second transmission for calculating RTTs. Why?
1. The ACK could be a delayed ACK for the original packet: the original data packet was received at the receiver, the receiver sent the ACK, and it was delayed; or
2. The ACK could be for the duplicate packet: the original data packet or the original ACK was lost, and this is the ACK for the receipt of the second packet.
Acknowledgment ambiguity: measuring the round-trip time between two devices is simple in concept: note the time that a segment is sent, note the time that an acknowledgment is received, and subtract the two. The measurement is trickier in actual implementation, however. One of the main potential "gotchas" occurs when a segment is assumed lost and is retransmitted. The retransmitted segment carries nothing that distinguishes it from the original. When an acknowledgment is received for this segment, it's unclear whether it corresponds to the retransmission or the original segment. (Even though we decided the segment was lost and retransmitted it, it's possible the segment eventually got there after taking a long time, or that the segment got there quickly but the acknowledgment took a long time!) This is called acknowledgment ambiguity, and it is not trivial to solve. We can't just assume that an acknowledgment always goes with the oldest copy of the segment sent, because this makes the round-trip time appear too high. We also don't want to assume an acknowledgment always goes with the latest sending of the segment, as that may artificially lower the average round-trip time.

68 Original Algorithm
(Figure: associating the ACK with (a) the original transmission makes the RTT appear too high; associating it with (b) the retransmission makes the RTT appear too low.)

69 Question How do you correct this problem?

70 Answer: Karn/Partridge Algorithm
Surprisingly simple solution! Refine the RTT calculation: do not sample RTT when retransmitting; only measure SampleRTT for segments that have been sent only once.
The algorithm also included a second small change: double the timeout after each retransmission. Do NOT base the timeout on the last EstimatedRTT; use exponential backoff (similar to Ethernet's). Why? The motivation for using exponential backoff is simple: congestion is the most likely cause of lost segments, meaning that the TCP source should not react too aggressively to a timeout. In fact, the more times the connection times out, the more cautious the source should become.
Refinements to RTT calculation and Karn's algorithm: TCP's solution to round-trip time calculation is based on the use of a technique called Karn's algorithm, after its inventor, Phil Karn. The main change this algorithm makes is the separation of the calculation of the average round-trip time from the calculation of the value to use for timers on retransmitted segments. The first change made under Karn's algorithm is to not use measured round-trip times for any retransmitted segments in the calculation of the overall average round-trip time for the connection. This completely eliminates the problem of acknowledgment ambiguity. However, this by itself would not allow increased delays due to retransmissions to affect the average round-trip time. For this, we need the second change: incorporation of a timer backoff scheme for retransmitted segments. We start by setting the retransmission timer for each newly transmitted segment based on the current average round-trip time. When a segment is retransmitted, the timer is not reset to the same value it was set to for the initial transmission; it is "backed off" (increased) using a multiplier (typically 2) to give the retransmission more time to be received. The timer continues to be increased until a retransmission is successful, up to a certain maximum value. This prevents retransmissions from being sent too quickly and further adding to network congestion. Once the retransmission succeeds, the round-trip timer is kept at the longer (backed-off) value until a valid round-trip time can be measured on a segment that is sent and acknowledged without retransmission. This permits a device to respond with longer timers to occasional circumstances that cause delays to persist for a period of time on a connection, while eventually having the round-trip time settle back to a long-term average when normal conditions resume.
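The timer-backoff part of Karn/Partridge can be sketched as follows; the 64-second cap is an assumption (caps vary by implementation):

```python
# Karn/Partridge timer backoff: double the timeout on each retransmission,
# capped at a maximum value.
MAX_TIMEOUT = 64.0  # seconds, assumed cap for this sketch

def next_timeout(current, retransmitted):
    return min(current * 2, MAX_TIMEOUT) if retransmitted else current

t = 1.0
timeouts = []
for _ in range(3):  # three successive retransmissions
    t = next_timeout(t, retransmitted=True)
    timeouts.append(t)
```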

71 Karn/Partridge Algorithm
The Karn/Partridge algorithm was an improvement over the original approach, but it does not eliminate congestion. We need to understand how the timeout is related to congestion: if you time out too soon, you may unnecessarily retransmit a segment, which adds load to the network.

72 Karn/Partridge Algorithm
The main problem with the original computation is that it does not take the variance of the sample RTTs into consideration. If the variance among the sample RTTs is small, then the estimated RTT can be better trusted, and there is no need to multiply it by 2 to compute the timeout.
EstRTT = α × EstRTT + (1 − α) × SampleRTT
TimeOut = 2 × EstRTT

73 Karn/Partridge Algorithm
On the other hand, a large variance in the samples suggests that the timeout value should not be tightly coupled to the estimated RTT. Jacobson/Karels proposed a new scheme for TCP retransmission.

74 Jacobson/Karels Algorithm
Difference = SampleRTT − EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ × Difference)
Deviation = Deviation + δ × (|Difference| − Deviation)
where δ is a fraction between 0 and 1; i.e., we calculate both the mean RTT and the variation in that mean.
TimeOut = μ × EstimatedRTT + φ × Deviation
where μ is typically set to 1 and φ is set to 4. Thus, when the variance is small, TimeOut is close to EstimatedRTT; when the variance is large, the deviation term dominates the calculation.
Notes:
- Difference = SampleRTT − EstimatedRTT can be positive (SampleRTT > EstimatedRTT) or negative (SampleRTT < EstimatedRTT).
- EstimatedRTT = EstimatedRTT + (δ × Difference) adjusts the estimated RTT based on the difference: a positive Difference increases EstimatedRTT, a negative one decreases it.
- Deviation = Deviation + δ × (|Difference| − Deviation) adjusts the deviation: only the magnitude of the difference matters; if |Difference| > Deviation, the deviation grows, and if |Difference| < Deviation, it shrinks. How much weight this carries depends on δ.
- TimeOut = μ × EstimatedRTT + φ × Deviation is a combination of EstimatedRTT and Deviation.
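The full Jacobson/Karels update can be expressed as a small function; μ = 1 and φ = 4 are from the slide, while δ = 0.125 is a commonly used value assumed here, as are the starting values:

```python
# Jacobson/Karels timeout computation: track both the mean RTT and the
# deviation, and combine them into the timeout.
def jacobson_karels(est_rtt, deviation, sample_rtt, delta=0.125, mu=1, phi=4):
    difference = sample_rtt - est_rtt
    est_rtt = est_rtt + delta * difference
    deviation = deviation + delta * (abs(difference) - deviation)
    timeout = mu * est_rtt + phi * deviation
    return est_rtt, deviation, timeout

est, dev = 100.0, 10.0  # hypothetical starting values (ms)
est, dev, timeout = jacobson_karels(est, dev, sample_rtt=140.0)
```

With this one noisy sample (40 ms above the estimate), the deviation term pushes the timeout well above the estimated RTT, exactly the behavior the slide describes for a high-variance connection.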

75 Summary
We have discussed how to turn a host-to-host packet delivery service into a process-to-process communication channel, and we have discussed UDP and TCP.
