The University of Adelaide, School of Computer Science — 14 April 2017

Problem

How to turn this host-to-host packet delivery service into a process-to-process communication channel
End-to-end Protocols

- Application-level processes that use end-to-end protocol services have certain requirements.
- E.g., common properties that a transport protocol can be expected to provide:
  - Guarantees message delivery
  - Delivers messages in the same order they were sent
  - Delivers at most one copy of each message
  - Supports arbitrarily large messages
  - Supports synchronization between the sender and the receiver
  - Allows the receiver to apply flow control to the sender
  - Supports multiple application processes on each host
End-to-end Protocols

Typical limitations of the network on which a transport protocol will operate:
- Drops messages
- Reorders messages
- Delivers duplicate copies of a given message
- Limits messages to some finite size
- Delivers messages after an arbitrarily long delay

Such a network is said to provide a best-effort level of service.
End-to-end Protocols

Challenge for transport protocols: develop algorithms that turn the less-than-desirable properties of the underlying network into the high level of service required by application programs.
Simple Demultiplexer (UDP)

- Extends the host-to-host delivery service of the underlying network into a process-to-process communication service.
- Adds a level of demultiplexing which allows multiple application processes on each host to share the network.
Simple Demultiplexer (UDP)

Sender: multiplexing of UDP datagrams.
- UDP datagrams are received from multiple application programs.
- A single sequence of UDP datagrams is passed to the IP layer.

Receiver: demultiplexing of UDP datagrams.
- A single sequence of UDP datagrams is received from the IP layer.
- Each UDP datagram received is passed to the appropriate application program.

[Figure: UDP datagram arrival at the receiver — demultiplexing based on port (Port 1, Port 2, Port 3) above the IP layer]
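The mux/demux path above can be exercised directly with Python's standard socket API. This is a minimal loopback sketch: one sender socket multiplexes datagrams for two receiving "processes", and the OS demultiplexes by destination port (the ports are chosen by the OS, not assumed).

```python
import socket

# Two receiver sockets bound to different UDP ports on the loopback
# interface; the OS demultiplexes incoming datagrams by destination port.
recv_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_a.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
recv_b.bind(("127.0.0.1", 0))
recv_a.settimeout(5)
recv_b.settimeout(5)
port_a = recv_a.getsockname()[1]
port_b = recv_b.getsockname()[1]

# A single sender socket multiplexes datagrams for both receivers
# onto the same IP layer.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for process A", ("127.0.0.1", port_a))
sender.sendto(b"for process B", ("127.0.0.1", port_b))

msg_a, _ = recv_a.recvfrom(1024)   # delivered to the socket bound to port_a
msg_b, _ = recv_b.recvfrom(1024)   # delivered to the socket bound to port_b
```

Each datagram reaches only the socket bound to its destination port, which is exactly the demultiplexing the slide describes.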
UDP provides application multiplexing (via port numbers).

[Figure: processes feed a UDP multiplexer over IP on the sending host; a UDP demultiplexer over IP delivers to processes on the receiving host]
Simple Demultiplexer (UDP)

Format of the UDP header.
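The UDP header consists of four 16-bit fields — SrcPort, DstPort, Length (header plus data), and Checksum — carried in network byte order (RFC 768). A small sketch packing and unpacking them with Python's struct module; the helper name and port values are illustrative:

```python
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the four 16-bit UDP header fields (RFC 768) in network
    byte order: SrcPort, DstPort, Length (header + data), Checksum."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

# Example: a datagram from ephemeral port 1069 to the DNS port, 32 data bytes.
header = build_udp_header(1069, 53, payload_len=32)
src, dst, length, csum = struct.unpack("!HHHH", header)
```

The fixed 8-byte header is all the framing UDP adds on top of IP.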
Simple Demultiplexer (UDP)

- Multiplexing and demultiplexing are based only on port numbers.
- A port can be used for sending a UDP datagram to other hosts.
- A port can be used for receiving a UDP datagram from other hosts.
Simple Demultiplexer (UDP)

Typically, a port is implemented by a message queue (buffer).
- Upon message receipt: when a message arrives, the protocol (e.g., UDP) appends the message to the end of the queue. If the queue is full, the message is discarded.
- Message processing: when an application process wants to receive a message, one is removed from the front of the queue. The application process then processes the message.
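The queue-per-port behaviour described above can be sketched as a toy class — the class name and capacity are illustrative, not part of any real UDP implementation:

```python
from collections import deque

class UDPPort:
    """Toy model of a UDP port as a bounded message queue: arriving
    datagrams are appended at the tail; if the queue is full they are
    silently discarded, as the slide describes."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def deliver(self, message):
        """Called by the protocol when a datagram arrives."""
        if len(self.queue) >= self.capacity:
            return False                 # queue full: datagram dropped
        self.queue.append(message)
        return True

    def receive(self):
        """Called by the application process; takes from the front."""
        return self.queue.popleft() if self.queue else None

port = UDPPort(capacity=2)
results = [port.deliver(b"m1"), port.deliver(b"m2"), port.deliver(b"m3")]
first = port.receive()               # messages come out in arrival order
```

The third delivery is dropped because the queue is full, mirroring UDP's lack of flow control at the receiving port.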
Simple Demultiplexer (UDP)

- How does a process learn the port for the process to which it wants to send a message?
- Typically, a client process initiates a message exchange with a server process.
- Once a client has contacted a server, the server knows the client's port (from the SrcPrt field contained in the message header) and can reply to it.
Simple Demultiplexer (UDP)

- How does the client learn the server's port in the first place?
- A common approach is for the server to accept messages at a well-known port, i.e., each server receives its messages at some fixed port that is widely published (static binding) — like the well-known emergency phone number 911.
- IANA assigns port numbers to specific services; the list of all assignments is universally published by IANA.
- Port numbers have a range of 0–65535.
- Static binding: the well-known ports 0–1023.
- Examples: DNS: port 53, SMTP: port 25, FTP: port 21.
Static Vs Dynamic Binding

Dynamic binding:
- The server port is well defined (static binding).
- The client's port number is often chosen from the dynamic port range: the program dynamically receives a port from the operating system.
- Original dynamic port range: 1024–4999; IANA later changed this to 49152–65535.

UDP and TCP use a hybrid approach:
- Some port numbers are assigned to fixed services.
- Many port numbers are available for dynamic assignment to application programs.

Port numbers have a range of 0–65535 (although 0 often has special meaning). In the original BSD TCP implementation, only root can bind to ports below 1024, and dynamically assigned ports were assigned from the range 1024–4999; the others were available for unprivileged static assignment. These days that is often not enough dynamic ports, and IANA has now officially designated the range 49152–65535 for dynamic port assignment. However, even that is not enough dynamic ports for some busy servers, so the range is usually configurable (by an administrator). On modern Linux and Solaris systems (often used as servers), the default dynamic range now starts at 32768; Mac OS X and Windows Vista default to 49152–65535.
Simple Demultiplexer (UDP)

Fixed UDP port numbers on Linux and Unix: see /etc/services.
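Python's standard library exposes this services database, so the well-known port assignments from the previous slides can be looked up by name. This assumes a system with a populated services database (e.g. /etc/services on Unix):

```python
import socket

# Look up well-known (statically bound) port numbers by service name,
# as recorded in the system services database (/etc/services on Unix).
dns_port = socket.getservbyname("domain", "udp")
smtp_port = socket.getservbyname("smtp", "tcp")
ftp_port = socket.getservbyname("ftp", "tcp")
```

On a typical system this returns 53, 25, and 21 — the same static bindings listed earlier.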
Example 1

- A client-server application such as DNS uses the services of UDP because a client needs to send a short request to a server and to receive a quick response from it.
- The request and response can each fit in one user datagram.
- Since only one message is exchanged in each direction, the connectionless feature is not an issue; i.e., the client or server does not worry that messages are delivered out of order.

If you've ever used the Internet, it's a good bet that you've used the Domain Name System, or DNS, even without realizing it. DNS is a protocol within the set of standards for how computers exchange data on the Internet and on many private networks, known as the TCP/IP protocol suite. Its basic job is to turn a user-friendly domain name like "howstuffworks.com" into an Internet Protocol (IP) address that computers use to identify each other on the network. It's like your computer's GPS for the Internet.
Question

Is using UDP for sending e-mails appropriate? Why?
Answer

- A client-server application such as SMTP, which is used in electronic mail, CANNOT use the services of UDP because a user can send a long message, which may include multimedia (images, audio, or video).
- If the application uses UDP and the message does not fit in one single user datagram, the message must be split by the application into different user datagrams.
- Here the connectionless service may create problems: the user datagrams may arrive and be delivered to the receiving application out of order, and the receiving application may not be able to reorder the pieces.
- This means the connectionless service is a disadvantage for an application program that sends long messages.
Question

Can downloading a large text file from the Internet use UDP as the transport protocol? Why?
Answer

- Assume we are downloading a very large text file from the Internet.
- We definitely need to use a transport layer that provides reliable service: we don't want part of the file to be missing or corrupted when we open the file.
- The delay created between the delivery of the parts is not an overriding concern for us; we wait until the whole file is composed before looking at it.
- In this case, UDP is not a suitable transport layer.
Question

Is using UDP for watching real-time streaming video OK? Why?
Answer

- Assume we are watching a real-time streaming video on our computer. Such a program is considered a long file; it is divided into many small parts and broadcast in real time, and the parts of the message are sent one after another.
- If the transport layer is supposed to resend a corrupted or lost frame, the synchronization of the whole transmission may be lost: the viewer suddenly sees a blank screen and needs to wait until the second transmission arrives. This is not tolerable.
- However, if each small part of the screen is sent using one single user datagram, the receiving UDP can easily ignore the corrupted or lost packet and deliver the rest to the application program. That part of the screen is blank for a very short period of time, which most viewers do not even notice.
- Video cannot be viewed out of order, though, so streaming audio, video, and voice applications that run over UDP must reorder or drop frames that are out of sequence.
Reliable Byte Stream (TCP)

- IP provides connectionless/unreliable packet delivery to a host.
- UDP provides delivery to multiple ports within a host.
- In contrast to UDP, the Transmission Control Protocol (TCP) offers the following services:
  - Byte-stream service: application programs see a stream of bytes flowing from sender to receiver.
  - Reliable: the sequence of bytes sent is the same as the sequence of bytes received. Technique: sliding window protocol.
  - Connection oriented: a connection between sender and receiver is established before sending data.
  - Like UDP: delivery to multiple ports within a host.
Reliable Byte Stream (TCP)

TCP also provides:
- Flow control: prevents senders from overrunning the capacity of the receivers, i.e., packets are not sent faster than they can be received. Technique: TCP's variant of the sliding window protocol.
- Congestion control: prevents too much data from being injected into the network, i.e., stops switches or links from becoming overloaded.
TCP Segment

TCP is a byte-oriented protocol:
- The sender writes bytes into a TCP connection; the receiver reads bytes out of the TCP connection.
- Buffered transfer: application software may transmit individual bytes across the stream. However, TCP does NOT transmit individual bytes over the Internet; rather, bytes are aggregated into packets before sending, for efficiency.
TCP Segment

- TCP on the destination host then empties the contents of the packet into a receive buffer, and the receiving process reads from this buffer.
- The packets exchanged between TCP peers are called segments.
TCP Segment

[Figure: How TCP manages a byte stream]
TCP Header

TCP header format:
- SrcPort and DstPort: identify the source and destination ports, respectively.
- Acknowledgment, SequenceNum, and AdvertisedWindow: involved in TCP's sliding window algorithm.
- SequenceNum: each byte of data has a sequence number; this field contains the sequence number for the first byte of data carried in that segment.
TCP Header

- Flags: relay control information between TCP peers. Possible flags: SYN, FIN, RESET, PUSH, URG, and ACK.
  - The SYN and FIN flags are used when establishing and terminating a TCP connection.
  - The ACK flag is set any time the Acknowledgment field is valid.
  - The URG flag signifies that this segment contains urgent data.
  - The PUSH flag signifies that the sender invoked the push operation.
  - The RESET flag signifies that the receiver has become confused — it received a segment it did not expect to receive — so the receiver wants to abort the connection.
- Checksum: as in UDP.
- Urgent pointer (16 bits):
  - Supports sending out-of-band data, e.g., a keyboard sequence that interrupts the program at the other end.
  - The urgent pointer specifies the position in the segment where the urgent data ends.
  - On receipt of a segment with urgent data, the TCP software tells the application program to go into "urgent mode" and immediately delivers the urgent data.
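The flag bits occupy fixed positions in the low-order bits of the TCP header's flags byte (RFC 793: FIN, SYN, RST, PSH, ACK, URG from bit 0 upward), so decoding them is a simple bitmask test. A sketch — the function name is illustrative:

```python
# TCP flag bit positions per RFC 793 (low-order bits of the flags byte).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def decode_flags(flags):
    """Return the names of the control bits set in a Flags field value."""
    names = [("FIN", FIN), ("SYN", SYN), ("RST", RST),
             ("PSH", PSH), ("ACK", ACK), ("URG", URG)]
    return [name for name, bit in names if flags & bit]

# The second segment of the three-way handshake carries SYN and ACK together.
syn_ack = decode_flags(SYN | ACK)
```

The same masks work in either direction: OR them together to build a flags byte, AND to test one.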
TCP Connections

Use of protocol port numbers:
- A TCP connection is identified by a pair of endpoints.
- An endpoint is a pair (host IP address, port number).
- E.g., with client addresses IP-1, IP-2, IP-3 and a server address IP-S:
  - Connection 1: (IP-1, 1069) and (IP-S, 25)
  - Connection 2: (IP-2, 1184) and (IP-S, 53)
  - Connection 3: (IP-3, 1184) and (IP-S, 53)
- Different connections may share an endpoint: multiple clients may be connected to the same service.
Connection Establishment/Termination in TCP

Connection setup:
- The client does an active open to a server.
- The two sides engage in an exchange of messages to establish the connection; then they send data.
- Both sides must signal that they are ready to transfer data.
- Both sides must also agree on initial sequence numbers, which are randomly chosen.

[Figure: timeline for the three-way handshake algorithm]
Connection Establishment

- Segment 1 (Client → Server):
  - States the initial sequence number the client plans to use.
  - Flags = SYN, SequenceNum = x.
- Segment 2 (Server → Client):
  - Acknowledges the client's sequence number: Flags = ACK, Ack = x + 1.
  - States the server's own beginning sequence number: Flags = SYN, SequenceNum = y.
- Segment 3 (Client → Server):
  - Acknowledges the server's sequence number: Flags = ACK, Ack = y + 1.
- The Acknowledgment field acknowledges the next sequence number expected from the other side.
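The three segments above can be simulated as plain records to check the sequence-number arithmetic. The dictionaries are illustrative stand-ins, not real TCP segments; sequence numbers wrap modulo 2^32 as in TCP:

```python
import random

def three_way_handshake():
    """Simulate the three segments of TCP connection establishment,
    with each side acknowledging the other's initial sequence
    number plus one."""
    x = random.randrange(2**32)          # client's initial sequence number
    y = random.randrange(2**32)          # server's initial sequence number

    seg1 = {"flags": {"SYN"}, "seq": x}                       # client -> server
    seg2 = {"flags": {"SYN", "ACK"}, "seq": y,
            "ack": (seg1["seq"] + 1) % 2**32}                 # server -> client
    seg3 = {"flags": {"ACK"},
            "ack": (seg2["seq"] + 1) % 2**32}                 # client -> server
    return x, y, seg2, seg3

x, y, seg2, seg3 = three_way_handshake()
```

Whatever initial values x and y are chosen, the ACK fields must come back as x + 1 and y + 1 (mod 2^32) for the handshake to complete.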
Connection Establishment/Termination in TCP

Connection teardown:
- When a participant is done sending data, it closes one direction of the connection; either side can initiate teardown.
- It sends a FIN segment: "I'm not going to send any more data."
- The other side can continue sending data (a half-open connection) but must continue to acknowledge.
- Acknowledging a FIN: acknowledge the last sequence number + 1.

[Timeline: Client sends FIN, SequenceNum = a; Server replies ACK, Acknowledgement = a + 1; Server continues sending data (Frame n, ACK n); Server sends FIN, SequenceNum = b; Client replies ACK, Acknowledgement = b + 1]
Connection Establishment/Termination in TCP

- Connection setup is asymmetric: one side does a passive open, the other side does an active open.
- Connection teardown is symmetric: each side has to close the connection independently.
End-to-end Issues

- At the heart of TCP is the sliding window algorithm, but TCP runs over the Internet rather than a point-to-point link.
- The following issues need to be addressed by the sliding window algorithm:
  - TCP supports logical connections between processes.
  - TCP connections are likely to have widely different RTTs.
  - Packets may get reordered in the Internet.
Error in Transmission: Packet Loss

- Sending one frame and waiting for its acknowledgement under-utilizes the network capacity: there is only one message at a time in the network.

[Figure: sender transmits Frame 1 and waits for Ack 1; when the frame or its ACK is lost, a timeout triggers retransmission]
Sliding Window Protocol

- A better form of positive acknowledgement and retransmission.
- The sender may transmit multiple packets before waiting for an acknowledgement.
- Window of packets of small fixed size N; all packets within the window may be transmitted.
- If the first packet inside the window is acknowledged, the window slides one element further.

[Figure: sender sliding window over packets 1–9 — send frames 1–4; when Ack 1 arrives, send frame 5; when Ack 2 arrives, send frame 6]
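The fixed-size window behaviour above can be sketched as a small class (the names are illustrative): packets in [base, base + N) may be outstanding, and an ACK for the first packet slides the window forward by one.

```python
class SlidingWindowSender:
    """Packet-level sliding window of fixed size N: all packets whose
    numbers fall inside [base, base + N) may be outstanding at once;
    an ACK for the first packet in the window slides it forward."""
    def __init__(self, n):
        self.n = n
        self.base = 1           # lowest unacknowledged packet number

    def sendable(self):
        """Packet numbers currently inside the window."""
        return list(range(self.base, self.base + self.n))

    def ack(self, packet_num):
        """Slide the window when the first packet is acknowledged."""
        if packet_num == self.base:
            self.base += 1

w = SlidingWindowSender(n=4)
first = w.sendable()            # frames 1-4 may be transmitted
w.ack(1)                        # Ack 1 arrives: frame 5 becomes sendable
after = w.sendable()
```

This mirrors the figure: the arrival of Ack 1 admits frame 5 into the window, Ack 2 would admit frame 6, and so on.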
Sliding Window Revisited

- Performance depends on the window size N and on the speed of packet delivery.
- N = 1: simple positive acknowledgement protocol.
- By increasing N, network idle time may be eliminated.
- Steady state: the sender transmits packets as fast as the network can transfer them.
- A well-tuned sliding window protocol keeps the network saturated.

[Figure: sender pipelines Frames 1–5 while Acks 1–4 return]
Sliding Window Revisited

[Figure: relationship between the TCP send buffer (a) and receive buffer (b)]
TCP Sliding Window

Sending side:
- LastByteAcked ≤ LastByteSent ≤ LastByteWritten

Receiving side:
- LastByteRead < NextByteExpected ≤ LastByteRcvd + 1

E.g., at the sender, with the sending window size advertised by the receiver:

[Figure: a send window over bytes 1–11 — bytes 1–2 acknowledged (LastByteAcked), bytes 3–6 sent but not yet acknowledged (LastByteSent), bytes 7–9 not yet sent and to be sent next (LastByteWritten), and the remaining bytes cannot be sent until the window moves]
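The send-side invariant above can be checked mechanically. A toy sketch (class and method names are illustrative) that replays the figure's example — bytes 1–2 acknowledged, 1–6 sent, 1–9 written — asserting LastByteAcked ≤ LastByteSent ≤ LastByteWritten after every update:

```python
class SendBuffer:
    """Track TCP's three send-side byte pointers and check the invariant
    LastByteAcked <= LastByteSent <= LastByteWritten after each update."""
    def __init__(self):
        self.last_byte_acked = 0
        self.last_byte_sent = 0
        self.last_byte_written = 0

    def _check(self):
        assert (self.last_byte_acked <= self.last_byte_sent
                <= self.last_byte_written)

    def write(self, nbytes):
        """Application writes bytes into the stream."""
        self.last_byte_written += nbytes
        self._check()

    def send(self, nbytes):
        """TCP transmits some buffered bytes."""
        self.last_byte_sent += nbytes
        self._check()

    def acked(self, upto):
        """Cumulative ACK from the receiver covers bytes 1..upto."""
        self.last_byte_acked = upto
        self._check()

buf = SendBuffer()
buf.write(9)      # bytes 1-9 written by the application
buf.send(6)       # bytes 1-6 transmitted
buf.acked(2)      # bytes 1-2 acknowledged
```

Any update that violated the ordering (e.g. sending bytes the application has not written) would trip the assertion, which is exactly what the invariant rules out.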
TCP's Sliding Window Mechanism

TCP's variant of the sliding window algorithm serves several purposes:
1. it guarantees the reliable delivery of data,
2. it ensures that data is delivered in order, and
3. it enforces flow control between the sender and the receiver.
TCP Sliding Window

- Operates at the byte level: bytes in the window are sent from first to last, and the first bytes have to be acknowledged before the window may slide.
- The size of the window is variable. The receiver delivers window advertisements to the sender:
  - On setup of a connection, the receiver informs the sender about the size of its window (how many bytes the receiver is willing to accept, i.e., the size of the buffer on the receiver).
  - The sender sends at most as many bytes as determined by the window advertisement.
  - Every ACK message from the receiver contains a new window advertisement, and the sender adapts its window size correspondingly.
TCP Sliding Window

This solves the problem of flow control: as the window slides, the receiver may adjust the speed of the sender to its own speed by modifying the window size.
Triggering Transmission

- How does TCP decide to transmit a segment?
- TCP supports a byte-stream abstraction: application programs write bytes into streams, and it is up to TCP to decide when it has enough bytes to send a segment.
Triggering Transmission

- What factors govern triggering transmission? (Ignore flow control for now: assume the window is wide open.)
- TCP has three mechanisms to trigger the transmission of a segment:
  1. TCP maintains a variable MSS (Maximum Segment Size) and sends a segment as soon as it has collected MSS bytes from the sending process.
  2. The sending process has explicitly asked TCP to send: TCP supports a push operation — send immediately rather than buffer. E.g., the Telnet application is interactive, and each keystroke is immediately sent to the other side.
  3. A timer fires; the resulting segment contains as many bytes as are currently buffered for transmission.

Ref: TCP Immediate Data Transfer: "Push" Function (Page 1 of 2)

The fact that TCP takes incoming data from a process as an unstructured stream of bytes gives it great flexibility in meeting the needs of most applications. There is no need for an application to create blocks or messages; it just sends the data to TCP when it is ready for transmission. For its part, TCP has no knowledge of or interest in the meaning of the bytes of data in this stream. They are "just bytes," and TCP just sends them without any real concern for their structure or purpose.

This has a couple of interesting impacts on how applications work. One is that TCP does not provide any natural indication of the dividing point between pieces of data, such as database records or files. The application must take care of this. Another result of TCP's byte orientation is that TCP cannot decide when to form a segment and send bytes between devices based on the contents of the data. TCP will generally accumulate data sent to it by an application process in a buffer.
It chooses when and how to send data based solely on the sliding window system discussed in the previous topic, in combination with logic that helps to ensure efficient operation of the protocol. This means that while an application can control the rate and timing with which it sends data to TCP, it cannot inherently control the timing with which TCP itself sends the data over the internetwork. Now, if we are sending a large file, for example, this isn't a big problem. As long as we keep sending data, TCP will keep forwarding it over the internetwork. It's generally fine in such a case to let TCP fill its internal transmit buffer with data and form a segment to be sent when TCP feels it is appropriate.

Problems with Accumulating Data for Transmission

However, there are situations where letting TCP accumulate data before transmitting it can cause serious application problems. The classic example of this is an interactive application such as the Telnet Protocol. When you are using such a program, you want each keystroke to be sent immediately to the other application; you don't want TCP to accumulate hundreds of keystrokes and then send them all at once. The latter may be more "efficient," but it makes the application unusable, which is really putting the cart before the horse.

Even with a more mundane protocol that transfers files, there are many situations in which we need to say "send the data now". For example, many protocols begin with a client sending a request to a server — like the hypothetical one in the example in the preceding topic, or a request for a Web page sent by a Web browser. In that circumstance, we want the client's request sent immediately; we don't want to wait until enough requests have been accumulated by TCP to fill an "optimal-sized" segment.

Forcing Immediate Data Transfer

Naturally, the designers of TCP realized that a way was needed to handle these situations.
When an application has data that it needs to have sent across the internetwork immediately, it sends the data to TCP, and then uses the TCP push function. This tells the sending TCP to immediately "push" all the data it has to the recipient's TCP as soon as it is able to do so, without waiting for more data.

When this function is invoked, TCP will create a segment (or segments) that contains all the data it has outstanding, and will transmit it with the PSH control bit set to 1. The destination device's TCP software, seeing this bit set, will know that it should not just take the data in the segment it received and buffer it, but rather push it through directly to the application.

It's important to realize that the push function only forces immediate delivery of data. It does not change the fact that TCP provides no boundaries between data elements. It may seem that an application could send one record of data and then "push" it to the recipient; then send the second record and "push" that, and so on. However, the application cannot assume that because it sets the PSH bit for each piece of data it gives to TCP, each piece of data will be in a single segment. It is possible that the first "push" may contain data given to TCP earlier that wasn't yet transmitted, and it's also possible that two records "pushed" in this manner may end up in the same segment anyway.

Key Concept: TCP includes a special "push" function to handle cases where data given to TCP needs to be sent immediately. An application can send data to its TCP software and indicate that it should be pushed. The segment will be sent right away rather than being buffered. The pushed segment's PSH control bit will be set to one to tell the receiving TCP that it should immediately pass the data up to the receiving application.
Silly Window Syndrome

- Caused by poorly implemented TCP flow control:
  - Each ACK advertises a small window.
  - Each segment carries a small amount of data (segment size < maximum segment size).
- Problems:
  - Poor use of network bandwidth.
  - Unnecessary computational overhead.
- E.g., the server/receiver consumes data slowly, so it requests that the client/sender reduce the sending window size. If the server continues to be unable to process all incoming data, the window becomes smaller and smaller (shrinking to a "silly" value) and data transmission becomes extremely inefficient.
Silly Window Syndrome Created by the Receiver

- Problem: the receiving application program consumes data slowly, so the receiver advertises a smaller and smaller window to the sender.
- Solution: Clark's solution — send an ACK but announce a window size of zero until there is enough space to accommodate a segment of maximum size, or until half of the buffer is empty; i.e., the receiver consumes data until "enough" space is available to advertise.
Silly Window Syndrome Created by the Sender

- Problem: the sending application program creates data slowly.
- Goal: do not send small segments; wait and collect data to send in a larger block.
- How long should the sending TCP wait before transmitting?
- Solution: Nagle's algorithm, which takes into account (1) the speed of the application program that creates the data, and (2) the speed of the network that transports the data.
Nagle's Algorithm

- Solves the slow-sender problem.
- If there is data to send but the open window is less than MSS, then we may want to wait some amount of time before sending the available data — but how long?
  - If we wait too long, we hurt interactive applications like Telnet.
  - If we don't wait long enough, we risk sending a bunch of tiny packets and falling into the silly window syndrome.
- Solution: introduce a timer and transmit when the timer expires.
Nagle's Algorithm

- We could use a clock-based timer (e.g., one that fires every 100 ms).
- Nagle instead introduced an elegant self-clocking solution.
- Key idea: as long as TCP has any data in flight, the sender will eventually receive an ACK. This ACK can be treated like a timer firing, triggering the transmission of more data.
Nagle's Algorithm

When the application produces data to send:
    if both the available data and the window ≥ MSS:
        send a full segment
    else:
        if there is unACKed data in flight:
            buffer the new data until an ACK arrives
        else:
            send all the new data now
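Under simplifying assumptions (byte counts rather than real buffers, and a function name chosen for illustration), the rule can be written as a runnable decision function:

```python
def nagle_decision(available_bytes, window_bytes, mss, unacked_in_flight):
    """Nagle's self-clocking rule: send a full segment if an MSS worth
    of data and window space are available; otherwise hold small data
    back whenever earlier data is still unacknowledged."""
    if available_bytes >= mss and window_bytes >= mss:
        return "send full segment"
    if unacked_in_flight:
        return "buffer until ACK arrives"
    return "send all data now"

# A common Ethernet-era MSS of 1460 bytes is used for illustration.
a = nagle_decision(2000, 3000, mss=1460, unacked_in_flight=True)   # full segment
b = nagle_decision(100, 3000, mss=1460, unacked_in_flight=True)    # held back
c = nagle_decision(100, 3000, mss=1460, unacked_in_flight=False)   # sent at once
```

Case b is the self-clocking behaviour: the small write waits for the in-flight data's ACK, which then triggers its transmission.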
Adaptive Retransmission

- TCP estimates the round-trip delay for each active connection.
- For each connection, TCP generates a sequence of round-trip estimates and produces a weighted average.
Question

Why would TCP estimate the RTT delay per connection?
Answer

- The goal is to wait long enough to decide that a packet was lost and needs to be retransmitted, without waiting longer than necessary: we don't want the timeout to expire too soon or too late.
- Ideally, we would like to set the retransmission timer to a value just slightly larger than the round-trip time (RTT) between the two TCP devices.
The problem is that there is no such "typical" round-trip time. Why?
There are two main reasons for this.

1. Differences in connection distance, e.g.:
- Pinging Georgia State University, GA from csbsju, United States
- Pinging Oxford University in the UK from csbsju, United States
- Pinging Kharagpur college in India from csbsju, United States
2. Transient delays and variability: the amount of time it takes to send data between any two devices will vary over time due to various happenings on the internetwork — fluctuations in traffic, router loads, etc.
It is for these reasons that TCP does not attempt to use a static, single number for its retransmission timers (timeouts). Instead, TCP uses a dynamic, or adaptive, retransmission scheme.
Adaptive Retransmission

In addition to round-trip delay estimation, TCP also maintains an estimate of the variance and uses a linear combination of the estimated mean and variance as the value of the timeout.

[Figure: two timelines. Example 1, a connection with a longer RTT: Frames 1 and 2 are acknowledged, yielding RTT estimates 1 and 2, and the timeout is set accordingly before a lost packet is detected. Example 2, a connection with a shorter RTT: the same exchange with smaller estimates and a correspondingly shorter timeout]
Adaptive Retransmission

Adaptive retransmission is based on round-trip time calculations: TCP attempts to determine the approximate round-trip time between the devices, and adjusts it over time to compensate for increases or decreases in the average delay. (To learn how this is done, look at RFC 2988.)

RTT_estimated = (α × RTT_estimated) + ((1 − α) × RTT_sample)

where:
- RTT_estimated (left-hand side): the new estimate of the RTT
- RTT_estimated (right-hand side): the previous estimate of the RTT
- RTT_sample: the RTT for the most recent segment/ACK pair
- α: a smoothing factor between 0 and 1
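One update step of this weighted average can be computed directly. The value α = 0.875 used here is illustrative (a common choice near the 0.8–0.9 range these slides mention):

```python
def ewma_rtt(estimated, sample, alpha=0.875):
    """One update of the exponentially weighted moving average used
    for RTT estimation: new = alpha * old_estimate + (1 - alpha) * sample."""
    return alpha * estimated + (1 - alpha) * sample

rtt = 100.0                   # ms, previous estimate
rtt = ewma_rtt(rtt, 120.0)    # a single fast/slow sample moves it only a little
```

With α this close to 1, a 20 ms jump in the sample moves the estimate by only 2.5 ms — the smoothing behaviour discussed on the next slides.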
Question

Which values will be appropriate for α?
Answer

RTT_estimated = (α × RTT_estimated) + ((1 − α) × RTT_sample)

- Higher values of α (closer to 1):
  - provide better smoothing and avoid sudden changes as a result of one very fast or very slow RTT measurement;
  - conversely, this also slows down how quickly TCP reacts to more sustained changes in round-trip time.
- Lower values of α (closer to 0):
  - make the estimated RTT change more quickly in reaction to changes in the measured RTT;
  - can cause "over-reaction" when RTTs fluctuate wildly.
Adaptive Retransmission: Original Algorithm

- Measure SampleRTT for each segment/ACK pair.
- Compute a weighted average of the RTT: EstRTT = α × EstRTT + (1 − α) × SampleRTT, with α between 0.8 and 0.9.
- Set the timeout based on EstRTT: TimeOut = 2 × EstRTT.
- Why 2 × EstRTT? We need to make room for variance too!
Original Algorithm Problem (Acknowledgement Ambiguity)

- An ACK does not really acknowledge a transmission; it actually acknowledges the receipt of data.
- When a segment is retransmitted and then an ACK arrives at the sender, it is impossible to decide whether this ACK should be associated with the first or the second transmission for calculating RTTs. Why?
  1. The ACK could be a delayed ACK for the original packet: the original data packet was received at the receiver, the receiver sent an ACK, and the ACK was delayed.
  2. The ACK could be for the duplicate packet: the original data packet or the original ACK was lost, and this is the ACK for receipt of the second packet.

Acknowledgment Ambiguity

Measuring the round-trip time between two devices is simple in concept: note the time that a segment is sent, note the time that an acknowledgment is received, and subtract the two. The measurement is more tricky in actual implementation, however. One of the main potential "gotchas" occurs when a segment is assumed lost and is retransmitted. The retransmitted segment carries nothing that distinguishes it from the original. When an acknowledgment is received for this segment, it's unclear whether this corresponds to the retransmission or the original segment. (Even though we decided the segment was lost and retransmitted it, it's possible the segment eventually got there after taking a long time; or that the segment got there quickly but the acknowledgment took a long time!)

This is called acknowledgment ambiguity, and it is not trivial to solve. We can't just decide to assume that an acknowledgment always goes with the oldest copy of the segment sent, because this makes the round-trip time appear too high. We also don't want to just assume an acknowledgment always goes with the latest sending of the segment, as that may artificially lower the average round-trip time.
Original Algorithm

[Figure: associating the ACK with (a) the original transmission, which makes the RTT appear too high, versus (b) the retransmission, which makes the RTT appear too low]
Answer: Karn/Partridge Algorithm

A surprisingly simple solution! It refines the RTT calculation:
- Do not sample the RTT when retransmitting; only measure SampleRTT for segments that have been sent exactly once.

The algorithm also included a second small change:
- Double the timeout after each retransmission; do NOT base the timeout on the last EstimatedRTT. Use exponential backoff (similar to Ethernet).
- Why? The motivation for using exponential backoff is simple: congestion is the most likely cause of lost segments, meaning that the TCP source should not react too aggressively to a timeout. In fact, the more times the connection times out, the more cautious the source should become.

Refinements to RTT Calculation and Karn's Algorithm

TCP's solution to round-trip time calculation is based on the use of a technique called Karn's algorithm, after its inventor, Phil Karn. The main change this algorithm makes is the separation of the calculation of the average round-trip time from the calculation of the value to use for timers on retransmitted segments.

The first change made under Karn's algorithm is to not use measured round-trip times for any segments that are retransmitted in the calculation of the overall average round-trip time for the connection. This completely eliminates the problem of acknowledgment ambiguity.

However, this by itself would not allow increased delays due to retransmissions to affect the average round-trip time. For this, we need the second change: incorporation of a timer backoff scheme for retransmitted segments. We start by setting the retransmission timer for each newly transmitted segment based on the current average round-trip time. When a segment is retransmitted, the timer is not reset to the same value it was set to for the initial transmission. It is "backed off" (increased) using a multiplier (typically 2) to give the retransmission more time to be received. The timer continues to be increased until a retransmission is successful, up to a certain maximum value.
This prevents retransmissions from being sent too quickly and further adding to network congestion.Once the retransmission succeeds, the round-trip timer is kept at the longer (backed-off) value until a valid round-trip time can be measured on a segment that is sent and acknowledged without retransmission. This permits a device to respond with longer timers to occasional circumstances that cause delays to persist for a period of time on a connection, while eventually having the round-trip time settle back to a long-term average when normal conditions resume.
Karn/Partridge Algorithm

- The Karn/Partridge algorithm was an improvement over the original approach, but it does not eliminate congestion.
- We need to understand how the timeout is related to congestion: if you time out too soon, you may unnecessarily retransmit a segment, which adds load to the network.
Karn/Partridge Algorithm

The main problem with the original computation is that it does not take the variance of the SampleRTTs into consideration:

EstRTT = α × EstRTT + (1 − α) × SampleRTT
TimeOut = 2 × EstRTT

If the variance among the SampleRTTs is small, then the EstimatedRTT can be better trusted, and there is no need to multiply it by 2 to compute the timeout.
Karn/Partridge Algorithm

- On the other hand, a large variance in the samples suggests that the timeout value should not be tightly coupled to the EstimatedRTT.
- Jacobson/Karels proposed a new scheme for TCP retransmission.
Jacobson/Karels Algorithm

Difference = SampleRTT − EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ × Difference)
Deviation = Deviation + δ × (|Difference| − Deviation)

δ is a fraction between 0 and 1; i.e., we calculate both the mean RTT and the variation in that mean.

TimeOut = μ × EstimatedRTT + Φ × Deviation

μ is typically set to 1 and Φ is set to 4. Thus, when the variance is small, TimeOut is close to EstimatedRTT; when the variance is large, the deviation term dominates the calculation.

Walkthrough:
- Difference = SampleRTT − EstimatedRTT can be positive (SampleRTT > EstimatedRTT) or negative (SampleRTT < EstimatedRTT).
- EstimatedRTT = EstimatedRTT + (δ × Difference) adjusts EstimatedRTT based on the difference/variance: a positive Difference increases EstimatedRTT, a negative one decreases it.
- Deviation = Deviation + δ × (|Difference| − Deviation) adjusts the deviation based on the difference/variance: only the magnitude of the difference matters; if |Difference| > Deviation, the deviation grows, and if |Difference| < Deviation, it shrinks. How much weight each sample gets depends on δ.
- TimeOut = μ × EstimatedRTT + Φ × Deviation is a combination of EstimatedRTT and Deviation.
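The four equations above can be combined into a small class. The value δ = 1/8 is an illustrative choice within the stated (0, 1) range, while μ = 1 and Φ = 4 follow the slide:

```python
class JacobsonKarels:
    """Mean-plus-variance retransmission timeout from the four equations
    above, with illustrative delta = 1/8 and the slide's mu = 1, phi = 4."""
    def __init__(self, estimated, deviation, delta=0.125, mu=1.0, phi=4.0):
        self.est, self.dev = estimated, deviation
        self.delta, self.mu, self.phi = delta, mu, phi

    def update(self, sample):
        diff = sample - self.est                       # Difference
        self.est += self.delta * diff                  # EstimatedRTT update
        self.dev += self.delta * (abs(diff) - self.dev)  # Deviation update
        return self.mu * self.est + self.phi * self.dev  # TimeOut

jk = JacobsonKarels(estimated=100.0, deviation=10.0)
timeout = jk.update(sample=120.0)
```

With these inputs, Difference = 20, so EstimatedRTT becomes 102.5, Deviation becomes 11.25, and the timeout of 147.5 ms is dominated by the Φ × Deviation term — exactly the behaviour described above for a noisy connection.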
Summary

- We have discussed how to turn a host-to-host packet delivery service into a process-to-process communication channel.
- We have discussed UDP.
- We have discussed TCP.