1
MITM753: Advanced Computer Networks
Chapter 3 Transport Layer
2
Introduction A transport-layer protocol provides for logical communication between processes running on different hosts. From an application’s perspective, it is as if the source and destination hosts are directly connected. This simplifies the programming task – there is no need to worry about the complicated infrastructure in between. Transport-layer protocols are implemented only in end systems.
3
Introduction Upon receiving a message from the application layer, the transport layer: Breaks the message into smaller chunks. Attaches a transport-layer header to each chunk to create a transport-layer segment. Passes each segment to the network layer. On the receiving side, the transport layer: Receives segments from the network layer. Reads and processes the header. Passes the message to the application layer.
4
Introduction
5
Why Break Message into Smaller Chunks?
6
Transport Layer in the Internet
The Internet provides two transport-layer protocols that can be used by applications: UDP (user datagram protocol) TCP (transmission control protocol) An application needs to choose either one of the two protocols above. Which protocol to choose depends on the application’s requirements. The services provided by UDP and TCP extend the services provided by the Internet’s network layer protocol (IP).
7
Transport Layer in the Internet
The IP service model is a best-effort delivery service. It makes its “best effort” to deliver segments to the receiving host, but it makes no guarantees. IP is said to be an unreliable service. UDP provides two minimal services: Process-to-process delivery Error checking UDP is also an unreliable service.
8
Transport Layer in the Internet
TCP provides the following services: Process-to-process delivery Error checking Reliable data transfer Using flow control, sequence numbers, acknowledgements and timers. Congestion control A service for the general good (most of the time, it is not good for the application itself). Regulates the rate at which sending-side TCPs can send traffic into the network.
9
Process-to-Process Delivery
A host may run one or more network programs. A network program may have one or more sockets. A socket is a “door” between a process and the network. Each socket is tied to a particular receiver. Each socket has a unique identifier One important task of a transport-layer protocol is to make sure the received data is directed to the correct process. Done using the port number.
10
Process-to-Process Delivery
A port number is a 16-bit value (ranging from 0 to 65535) that identifies a socket. Since a socket is tied to an application process, a port number also identifies a particular process. Port numbers from 0 to 1023 are well-known port numbers reserved for well-known application protocols. Well-known port numbers are defined in RFC 1700 and RFC 3232. Example: HTTP – port 80, FTP – port 21, etc. These reserved port numbers only apply on the server side. When we develop a new application, we must assign it a port number that does not conflict with existing applications.
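To make the port-number idea concrete, here is a minimal sketch using Python's standard socket API. The loopback address and the OS-picked ports are just what this sketch uses to stay runnable; a real server binds a fixed, well-known port such as 80 for HTTP.

```python
import socket

# A real server binds a fixed, well-known port (e.g. 80 for HTTP).
# Here we bind port 0 so the OS picks any free port, keeping the sketch runnable.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
server_port = server.getsockname()[1]

# The client does not bind explicitly: the OS assigns an ephemeral
# source port, which the server will use to address its replies.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
src_ip, src_port = client.getsockname()
print(src_port)            # OS-chosen ephemeral port, outside the 0-1023 range

client.close()
server.close()
```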
11
Process-to-Process Delivery
12
Process-to-Process Delivery
Upon receiving a segment from the network, the transport-layer implementation in the operating system will: Check the destination port number Send the segment to the corresponding process What would be the use of the source port number in the header? For replying to the message It is important that the reply goes to the process that sent the message in the first place
13
Process-to-Process Delivery
14
Process-to-Process Delivery
Sometimes a process may spawn another process or create a new thread to service each new client connection. Example: in high-performance Web servers The new process or thread may have the same port number as the main process In this case, the source port number, the destination port number, the source IP address and the destination IP address are all needed to identify the correct receiving process.
15
Process-to-Process Delivery
16
Connectionless Transport: UDP
UDP (RFC 768) does about as little as a transport protocol can do. UDP only does: Process-to-process delivery Error checking Other than that, UDP adds nothing to the functionality provided by IP. Even without a reliability mechanism, UDP may be more suitable for certain applications. This is due to the following reasons:
17
Connectionless Transport: UDP
Finer application-level control over what data is sent, and when Segment transmission will not be delayed by reliability and congestion control mechanisms No connection establishment No delay in establishing a connection before data transfer No connection state Uses less memory and system resources Small packet overhead Smaller segment size
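A minimal sketch of these properties using Python's socket API: the datagram is handed to the network immediately, with no connection setup and no per-connection state. Loopback and an OS-chosen port are assumptions of this sketch.

```python
import socket

# Minimal UDP exchange over loopback: no handshake, no connection state.
# Binding port 0 lets the OS choose a free port for this sketch.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))   # sent immediately, no setup delay

data, addr = receiver.recvfrom(2048)           # one whole datagram per call
print(data)                                    # b'hello'
sender.close()
receiver.close()
```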
18
Connectionless Transport: UDP
19
Connectionless Transport: UDP
It is still possible for applications using UDP to have reliable data transfer. Implement the reliability mechanism inside the application itself. This involves implementing retransmission and acknowledgement mechanisms. Implemented by many streaming applications. Having too many applications using UDP can be dangerous to the network. Applications may send too many packets into the network. Possibly causing severe congestion.
20
UDP Segment Structure
21
UDP Segment Structure Source and destination port numbers: Identify the sending and receiving processes Length: The length of the entire UDP segment Specified in bytes Checksum: Contains error detection codes Used for error checking
22
UDP Checksum The checksum code is generated by the sender.
The receiver uses the code to check whether the received data contains any bit errors. Although UDP provides error checking, it does not do anything to recover from an error. This is because UDP provides no reliable data transfer mechanism. Depending on the implementation, UDP might: Discard the damaged segment. Pass the damaged segment to the application with a warning.
23
Error Detection and Correction Techniques
Many transport, network and link layer protocols provide a mechanism for detecting errors. Some mechanisms can also correct simple errors. Error correction and detection techniques involve attaching an error detection and correction code to the packet. These techniques allow the receiver to sometimes (but not always) detect bit errors. There is always a small probability of having undetected bit errors. With more sophisticated techniques, the probability of having undetected bit errors will be lower. But this may require more computation and overhead.
24
Error Detection and Correction Techniques
25
Parity Checks The simplest form of error detection.
Adds a single parity bit at the end of the data. Two types: Even parity: the parity bit is added such that the total number of 1s is even. Odd parity: the parity bit is added such that the total number of 1s is odd.
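The two parity rules can be sketched in a few lines; the sample bit pattern below is made up for illustration.

```python
def parity_bit(bits, even=True):
    """Return the parity bit to append under even or odd parity."""
    ones = sum(bits)
    if even:
        return 0 if ones % 2 == 0 else 1   # make the total number of 1s even
    return 1 if ones % 2 == 0 else 0       # make the total number of 1s odd

data = [1, 0, 1, 1, 0, 1, 0]               # contains four 1s
print(parity_bit(data, even=True))          # 0: total stays even
print(parity_bit(data, even=False))         # 1: total becomes odd
```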
26
Parity Checks Can only detect errors if an odd number of bits is corrupted. 1, 3, 5, 7, etc. In computer networks, errors are often clustered together in bursts (bursty errors). When an error occurs, a group of bits is affected. If parity checking is used, the probability of undetected bit errors can reach 50%. Only used in simple data transmission, such as transmission over a serial cable.
27
Two-dimensional Parity Scheme
This scheme can not only detect bit errors, but also correct a single bit error. The data bits are divided into i rows and j columns. A parity value is computed for each row and column.
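A small sketch of how the row and column parities pinpoint a single flipped bit (even parity; the 3x3 data block is made up for illustration):

```python
# Two-dimensional parity sketch: compute row and column parities for an
# i x j block, then locate a single flipped bit from the parities that changed.
def parities(block):
    rows = [sum(r) % 2 for r in block]
    cols = [sum(c) % 2 for c in zip(*block)]
    return rows, cols

block = [[1, 0, 1],
         [0, 1, 1],
         [1, 1, 0]]
rows0, cols0 = parities(block)     # parities computed by the sender

block[1][2] ^= 1                   # corrupt one bit during "transmission"
rows1, cols1 = parities(block)     # parities recomputed by the receiver

# The row and column whose parity changed pinpoint the error.
bad_row = [i for i, (a, b) in enumerate(zip(rows0, rows1)) if a != b][0]
bad_col = [j for j, (a, b) in enumerate(zip(cols0, cols1)) if a != b][0]
block[bad_row][bad_col] ^= 1       # flip it back: error corrected
print(bad_row, bad_col)            # 1 2
```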
28
Two-dimensional Parity Scheme
29
Checksum Sum a sequence of k-bit integers, and use the result as error detection bits. Used in TCP, UDP and IPv4. Requires a relatively low packet overhead. In TCP / UDP, only 16 bits are required in the header to carry the error detection bits. Better than a parity check, but not as good as a cyclic redundancy check (CRC). Easier to implement in software (compared to CRC).
30
Checksum The checksum is calculated at the sender side and the result is put in the checksum field in the header. The checksum is calculated as follows: Divide the data into 16-bit words and sum them up Wrap around any overflow 1’s complement the result Example: suppose that we have the following 16-bit words:
31
Checksum The sum of the first two 16-bit words is: Adding the third word to the result above gives:
32
Checksum How does the receiver use the checksum value to check for errors? Sum all the 16-bit words, including the checksum value. The result should be all 1s (0xFFFF). If any bit is 0, that means there are bit errors in the segment. The 16-bit words to be added depend on the protocol. Example: UDP – Header and data (i.e. the whole segment) IP – Header only
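The sender and receiver sides of the calculation can be sketched as follows. The three 16-bit words are made up for this sketch, since the slide's original example values are not shown.

```python
# Internet checksum sketch: sum 16-bit words, wrap around overflow,
# then take the one's complement.
def checksum16(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap around any overflow
    return ~total & 0xFFFF                          # 1's complement the result

words = [0x4500, 0x0073, 0x0000]    # hypothetical 16-bit words
csum = checksum16(words)

# Receiver side: summing the words plus the checksum must give all 1s.
total = 0
for w in words + [csum]:
    total += w
    total = (total & 0xFFFF) + (total >> 16)
print(hex(total))   # 0xffff
```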
33
Cyclic Redundancy Check (CRC)
Link-layer protocols commonly use CRC rather than a checksum. CRC provides stronger protection against errors compared to a checksum. Since link-layer protocols are implemented in hardware, they can rapidly perform the more complex CRC operations. CRC operates as follows: Consider a piece of data D that contains d bits. The sender and receiver need to agree on an r+1 bit pattern, known as a generator, G. The most significant (leftmost) bit of G must be 1.
34
Cyclic Redundancy Check (CRC)
The CRC codes, R, are obtained by dividing D (padded with r number of zeros) by G using modulo-2 arithmetic. The remainder of the division will become R. R will contain r bits. R will be attached to the original data D, and then transmitted to the receiver. At the receiving side, the received bits are divided by G using modulo 2 arithmetic. If the remainder is non-zero, then an error has occurred. Otherwise, the data is accepted as being correct.
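The division described above can be sketched in software. The 6-bit data D and the 4-bit generator G = 1011 (so r = 3) below are hypothetical values chosen just for this sketch.

```python
def mod2div(bits, gen):
    """Modulo-2 (XOR) long division; returns the r-bit remainder."""
    bits = bits[:]                        # do not modify the caller's list
    r = len(gen) - 1
    for i in range(len(bits) - r):
        if bits[i] == 1:                  # leading bit set: "subtract" (XOR) G
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return bits[-r:]

D = [1, 0, 1, 1, 1, 0]                    # hypothetical data bits
G = [1, 0, 1, 1]                          # hypothetical generator, r = 3
R = mod2div(D + [0] * 3, G)               # sender: divide D padded with r zeros
print(R)                                  # [1, 1, 0]

# Receiver: dividing the received D+R by G leaves an all-zero remainder.
print(mod2div(D + R, G))                  # [0, 0, 0]
```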
35
Cyclic Redundancy Check (CRC)
36
Cyclic Redundancy Check (CRC)
International standards have been defined for 8-, 12-, 16- and 32-bit generators, G. Each CRC standard can detect burst errors of fewer than r+1 bits. Under appropriate assumptions, a burst of length greater than r+1 bits can also be detected with probability 1 – 0.5^r. Each CRC standard can also detect any odd number of bit errors. Example: Standard 32-bit CRC generator (used in Ethernet): GCRC-32 = 100000100110000010001110110110111
37
Principles of Reliable Data Transfer
Reliability is one of the most important problems in networking. With a reliable channel: No data bits are corrupted (flipped from 0 to 1 or vice versa). All data are delivered in the order in which they were sent. Here, we will discuss the problem of reliable data transfer in a general context. It applies to computer networks in general, not just the transport layer.
38
Principles of Reliable Data Transfer
39
Building a Reliable Data Transfer Protocol
We will incrementally develop the sender and receiver sides of a reliable data transfer protocol, considering increasingly complex models of the underlying channel. Reliable data transfer over a channel with bit errors Reliable data transfer over a lossy channel with bit errors
40
Reliable Data Transfer over a Channel with Bit Errors
When a packet is transmitted, bits in the packet may be corrupted. However, we will assume that the packets are received in the order in which they were sent. The key to recovering from bit errors is to perform retransmission. Upon receiving a message, the receiver will perform error detection and send an acknowledgement. ACK – positive acknowledgement NAK – negative acknowledgement
41
Reliable Data Transfer over a Channel with Bit Errors
If a NAK is received, the sender needs to retransmit. Protocols based on this mechanism are known as ARQ (Automatic Repeat reQuest) protocols. The sender will not send a new piece of data until it is sure that the receiver has correctly received the current packet. Protocols with this behavior are known as stop-and-wait protocols.
42
Reliable Data Transfer over a Channel with Bit Errors
So far, we have assumed that only the data packets can be corrupted. But in reality, even the ACK and NAK packets can be corrupted. One solution would be to have the sender retransmit the current data packet every time it receives a corrupted ACK or NAK packet. This solution may introduce another problem: The receiver may receive duplicate packets. When a packet arrives, is it a new one or a retransmission?
43
Reliable Data Transfer over a Channel with Bit Errors
A simple solution to this problem would be to put a sequence number inside the data packet. For now, a 1-bit sequence number should be enough. The receiver would then expect packets with alternating (0 and 1) sequence numbers. If the receiver receives two packets with the same sequence number back-to-back, then it knows that the second one is a duplicate. The duplicate packet can be discarded.
44
Reliable Data Transfer over a Channel with Bit Errors
The use of two types of acknowledgement packets (ACK and NAK) is unnecessary. The same effect can be accomplished by using just one type of acknowledgement packet. Only use ACK Each ACK is given a sequence number The receiver replies with an ACK that has the same sequence number as the packet received. For packet 0, send ACK 0 For packet 1, send ACK 1
45
Reliable Data Transfer over a Channel with Bit Errors
Since there is no longer a NAK, what would the receiver do when a corrupted packet is received? Send an ACK but with the sequence number of the last correctly received packet. If the sender receives two ACKs of the same packet, then it knows that the receiver did not correctly receive the packet following the packet being ACKed twice.
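The receiver's side of this ACK-only, alternating-bit scheme can be sketched as a small state machine. The function and its calling sequence are made up for illustration.

```python
# Alternating-bit (stop-and-wait) receiver sketch with ACKs only.
# Returns the ACK sequence number to send back for each arriving packet.
def receiver_step(state, seq, corrupted):
    expected = state["expected"]
    if corrupted or seq != expected:
        # Corrupted or duplicate: re-ACK the last correctly received packet.
        return 1 - expected
    state["expected"] = 1 - expected      # deliver the data, flip expectation
    return seq

state = {"expected": 0}
print(receiver_step(state, 0, False))   # 0: packet 0 accepted, ACK 0
print(receiver_step(state, 0, False))   # 0: duplicate of 0, re-ACK 0
print(receiver_step(state, 1, True))    # 0: corrupted, re-ACK last good packet
print(receiver_step(state, 1, False))   # 1: packet 1 accepted, ACK 1
```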
46
Reliable Data Transfer over a Lossy Channel with Bit Errors
In addition to having bit errors, now the underlying channel can also lose packets. In this condition, there are two issues that need to be addressed by the protocol: How to detect packet loss What to do when packet loss occurs The second issue can already be solved with the techniques discussed so far. Sequence number, ACK packets, retransmission.
47
Reliable Data Transfer over a Lossy Channel with Bit Errors
The task of detecting packet loss will be done by the sender. The sender can “assume” a packet to be lost if no acknowledgement is received from the receiver. But how long should the sender wait? In practice, the length of time for the sender to wait can be implemented with a countdown timer. A timer is set for every packet sent If the timer times out, retransmit the packet
48
Reliable Data Transfer over a Lossy Channel with Bit Errors
Even after retransmission, the sender does not actually know what happened. It could be that: The packet was actually lost The ACK was lost The ACK was simply excessively delayed In the case where the ACK is lost or delayed, the receiver may end up receiving two copies of the same packet. However, our previous mechanism can already take care of this problem.
49
Reliable Data Transfer over a Lossy Channel with Bit Errors
50
Reliable Data Transfer over a Lossy Channel with Bit Errors
51
Pipelined Reliable Data Transfer Protocol
The reliable data transfer mechanism discussed previously works, but unfortunately it is a stop-and-wait protocol. Only one packet is sent at a time. No new packet will be sent until the current one is confirmed to be received correctly. Stop-and-wait behavior gives very poor performance. If we have 1000 bytes to be transmitted over a 1 Gbps link and the RTT is 30 msec, the effective throughput is only about 267 kbps!
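The 267 kbps figure follows directly from the numbers in the slide: one 1000-byte packet is delivered per round trip.

```python
# Stop-and-wait throughput for a 1000-byte packet, 1 Gbps link, 30 ms RTT.
packet_bits = 1000 * 8
rate_bps = 1e9
rtt = 0.030

t_transmit = packet_bits / rate_bps      # 8 microseconds to push the bits out
cycle = rtt + t_transmit                 # one packet delivered per ~30.008 ms
throughput = packet_bits / cycle
print(round(throughput / 1e3))           # ~267 kbps
```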
52
Pipelined Reliable Data Transfer Protocol
Solution: use a pipelining technique. Transmit multiple packets at one time and then wait for acknowledgements. Pipelining has the following consequences for a reliable data transfer protocol: The range of sequence numbers must be increased. The sender and receiver must be able to buffer more than one packet. Two basic approaches for pipelined error recovery: Go-Back-N Selective repeat
53
Pipelined Reliable Data Transfer Protocol
54
Pipelined Reliable Data Transfer Protocol
55
Go-Back-N (GBN) The sender is allowed to transmit multiple packets without waiting for acknowledgement, but is constrained to a maximum allowable number, N. N is referred to as the window size. Packets will be numbered from 0 to N-1. In practice, N is limited by the number of bits allocated for the sequence number in the header. If k bits are allocated, then the sequence number range would be [0, 2^k – 1].
56
Go-Back-N (GBN) This is how the sender views the sequence number space in GBN:
57
Go-Back-N (GBN) The sender keeps track of two variables:
Base – the sequence number of the oldest unacknowledged packet. Nextseqnum – the smallest unused sequence number. As the protocol operates, the window slides forward over the sequence number space. Due to this behavior, GBN is also known as a sliding-window protocol.
58
Go-Back-N (GBN) The GBN sender needs to respond to three events:
Request from upper layer to send data Check whether the window is full or not. If it is full, refuse to accept the data. If it is not full, prepare the packet to be sent. Receive an ACK from the receiver When an ACK is received for a particular sequence number, it is assumed that all the packets up to the ACKed sequence number have been received correctly. This is called a cumulative acknowledgement.
59
Go-Back-N (GBN) When a timeout occurs (the third sender event) Retransmit all packets that have been sent but not yet acknowledged. The GBN receiver needs to do the following: If a packet with sequence number n is received correctly and in order, send an ACK for packet n. Otherwise, discard the packet and send an ACK for the last correctly received, in-order packet. In GBN, the receiver will discard all out-of-order packets.
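The three sender events can be sketched as follows. This is a simplified model (unbounded integer sequence numbers rather than a wrapped k-bit space, and a made-up window size N = 4), not a full GBN implementation.

```python
# GBN sender sketch: handles the three events described above:
# data from above, a cumulative ACK, and a timeout.
N = 4                             # window size (made up for this sketch)

class GBNSender:
    def __init__(self):
        self.base = 0             # oldest unacknowledged sequence number
        self.nextseqnum = 0       # smallest unused sequence number
        self.buffer = {}          # seq -> payload, kept until acknowledged

    def send(self, data):
        if self.nextseqnum >= self.base + N:
            return False          # window full: refuse the data
        self.buffer[self.nextseqnum] = data   # "transmit" and buffer the packet
        self.nextseqnum += 1
        return True

    def ack(self, n):
        # Cumulative ACK: everything up to and including n is confirmed.
        for seq in range(self.base, n + 1):
            self.buffer.pop(seq, None)
        self.base = max(self.base, n + 1)

    def timeout(self):
        # Retransmit every packet sent but not yet acknowledged.
        return [self.buffer[s] for s in range(self.base, self.nextseqnum)]

s = GBNSender()
for i in range(5):
    s.send(f"pkt{i}")             # the 5th send is refused (window full)
s.ack(1)                          # packets 0 and 1 confirmed, window slides
print(s.base, s.nextseqnum)       # 2 4
print(s.timeout())                # ['pkt2', 'pkt3']
```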
60
Go-Back-N (GBN) Although it seems wasteful to discard correctly received but out-of-order packets, this simplifies the implementation. There is no need to buffer out-of-order packets. Easier to implement. Uses less memory. Of course, the disadvantage of doing this is that packets may need to be retransmitted unnecessarily. Uses up more bandwidth than required.
61
Go-Back-N (GBN)
62
Selective Repeat (SR) GBN may suffer from performance problems.
A single packet loss may cause many packets to be retransmitted unnecessarily. SR protocol avoids unnecessary retransmission by only retransmitting packets that are received in error (lost or corrupted). Therefore, each packet should be acknowledged individually (no cumulative acknowledgement). A window of size N is still used to limit the number of unacknowledged packets.
63
Selective Repeat (SR) However, some of the packets in the window may have already been ACKed.
64
Selective Repeat (SR) Each packet sent by the sender will have a timer associated with it. When the timer times out, the packet is assumed to be lost and is retransmitted. The receiver will acknowledge a correctly received packet regardless of whether it is in order or not. Out-of-order packets are buffered until missing packets are received. Any duplicates must also be acknowledged.
65
Selective Repeat (SR) The window size, N, in SR is limited to half the sequence number space: N <= 2^(k-1) (k = number of bits allocated for the sequence number) Although SR seems to be better in terms of performance, it has several disadvantages: More complicated implementation. The sender needs to be able to retransmit out of sequence. The receiver must be able to insert packets into the buffer in the proper sequence. Smaller window size compared to GBN for the same number of sequence number bits. 
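A quick sketch of why the window must be at most half the sequence number space: in the worst case the sender's window holds old packets whose ACKs were all lost, while the receiver's window has already advanced. If the two windows overlap in sequence numbers, a retransmission is indistinguishable from new data. The scenario below uses k = 3 as a made-up example.

```python
# Worst case: sender still holds packets 0..N-1 (all ACKs lost) while the
# receiver's window has advanced to N..2N-1. With k sequence-number bits
# there are 2^k numbers, so the windows overlap whenever N > 2^(k-1).
def windows_overlap(k, N):
    seq_space = 2 ** k
    sender_old = {s % seq_space for s in range(0, N)}
    receiver_new = {s % seq_space for s in range(N, 2 * N)}
    return bool(sender_old & receiver_new)

print(windows_overlap(k=3, N=4))   # False: N = 2^(k-1) is safe
print(windows_overlap(k=3, N=5))   # True: ambiguity becomes possible
```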
66
Selective Repeat (SR)
67
Connection-oriented Transport: TCP
Now that we have covered the principles of reliable data transfer, we are ready to discuss TCP. TCP uses many of the mechanisms that we have discussed before: Error detection, retransmission, cumulative acknowledgements, sequence numbers and timers. TCP is defined in several RFCs: RFC 793, RFC 1122, RFC 1323, RFC 2018 and RFC 2581.
68
TCP Connection TCP is said to be connection-oriented because a “handshake” must be performed before data transfer. A TCP connection does not involve intermediate network elements. The connection is strictly between the two end systems. Characteristics of a TCP connection: Full-duplex Point-to-point
69
TCP Connection The connection establishment procedure is often referred to as a three-way handshake. The client sends a special TCP segment. The server responds with a second special TCP segment. The client responds again with a third special TCP segment. This third segment may include application-layer data. Once a TCP connection is established, the two applications can start sending data.
70
TCP Connection The TCP connection setup is required to prepare both sides for data transfer: Initialize TCP state variables Allocate send and receive buffers When an application sends data through the socket, the data goes to the TCP send buffer. From time to time, TCP will take an amount of data from the send buffer and encapsulate it with a TCP header. This TCP segment is then passed to the network layer.
71
TCP Connection
72
TCP Connection Each side of the connection has its own send and receive buffer. The amount of data that can be taken at one time is limited by the maximum segment size (MSS). Depends on the TCP implementation (determined by the OS). Can be configured by the application. Common values are 1460 bytes, 536 bytes and 512 bytes.
73
TCP Segment Structure
74
TCP Segment Structure Source and destination port numbers are used for process-to-process delivery. The checksum field is used for error detection. The 32-bit sequence number and acknowledgement number fields are used to implement reliable data transfer. The 4-bit length field specifies the length of the TCP header in 32-bit words. The header can be of variable length due to the TCP option field.
75
TCP Segment Structure The 16-bit receive window field indicates the number of bytes that the receiver is willing to accept. It is used for flow control The option field is optional and can be of variable length. Among others, it is used for: Sender and receiver to negotiate MSS Window scaling factor Time-stamping
76
TCP Segment Structure The flag field contains 6 bits:
ACK: Used to indicate whether the value in the acknowledgement field is valid or not. RST, SYN and FIN: Used for connection setup and teardown. PSH: When this bit is set, it indicates that the receiver should pass the data to the upper layer immediately. URG: Used to indicate that there is data in this segment that the upper-layer entity has marked as “urgent”. The location of the last byte of this urgent data is indicated by the 16-bit urgent data pointer field.
77
Sequence Numbers and Acknowledgement Numbers
TCP views data as a stream of bytes. The value that TCP puts in the sequence number field is the byte-stream number of the first byte in the segment. For example, suppose that the data stream consists of a file consisting of 500,000 bytes. Assume that the MSS is 1000 bytes. TCP will then construct 500 segments out of this data stream. The first segment gets assigned sequence number 0, the second segment gets assigned sequence number 1000, the third segment gets assigned sequence number 2000, and so on.
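The byte-stream numbering from the slide's example can be computed directly:

```python
# Sequence numbers in TCP name bytes, not segments. For the slide's
# example: a 500,000-byte file with an MSS of 1000 bytes.
file_size = 500_000
mss = 1000

num_segments = file_size // mss
seqnums = [i * mss for i in range(num_segments)]   # first byte of each segment
print(num_segments)       # 500
print(seqnums[:3])        # [0, 1000, 2000]
```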
78
Sequence Numbers and Acknowledgement Numbers
79
Sequence Numbers and Acknowledgement Numbers
Assuming a communication between A and B, the acknowledgement number that host A puts in its segment is the sequence number of the next byte that host A is expecting from host B. For example, suppose A has received all bytes numbered 0 to 535 from B and that A is about to send a segment to B. A is currently waiting for byte 536 from B. Therefore A will put 536 in the acknowledgement number field.
80
Sequence Numbers and Acknowledgement Numbers
As another example, suppose that A has received one segment from host B containing bytes 0 through 535 and another segment containing bytes 900 through 1000. For some reason, bytes 536 through 899 have not been received. In this case, A’s next segment to B will contain 536 in the acknowledgement number field. TCP only acknowledges bytes up to the first missing byte in the stream (cumulative acknowledgements)
81
Sequence Numbers and Acknowledgement Numbers
In the previous example, A is receiving data out of order. The TCP RFCs do not impose any rules on what to do in this case. It is up to the programmers implementing TCP to decide what to do. Basically there are only 2 choices: Discard the out-of-order bytes. Keep the out-of-order bytes and wait for the missing bytes.
82
Sequence Numbers and Acknowledgement Numbers
The second choice is more efficient in terms of network bandwidth. This is the approach taken in practice. So far, we have assumed that the initial sequence number is zero. In practice, both sides of a TCP connection randomly choose an initial sequence number. This is done to avoid confusion in case segments from an earlier, already-terminated connection are still present in the network.
83
Round-Trip Time Estimation and Timeout
TCP uses a timeout/retransmit mechanism to recover from lost segments. How do we decide the length of the timeout interval? Clearly, it should be larger than the RTT. But it cannot be so large that it causes significant delays before a lost packet is retransmitted. In general, TCP decides the timeout interval by sampling the RTT of each segment sent and estimating the RTT of the next segment.
84
Estimating the Round-Trip Time
The sample RTT (SampleRTT) for a segment is the amount of time from when the segment is sent until an acknowledgment for the segment is received. This value is sampled once every RTT. One segment will be selected to be sampled (this must not be a retransmitted segment). The value of SampleRTT will fluctuate from segment to segment due to congestion at routers and varying load on the end systems.
85
Estimating the Round-Trip Time
TCP maintains an average, called the EstimatedRTT, of the SampleRTT values. Upon receiving an acknowledgement and obtaining a new SampleRTT, TCP updates the EstimatedRTT using the following formula: EstimatedRTT = (1 – α)*EstimatedRTT + α*SampleRTT Recommended value for α: α = 0.125
86
Estimating the Round-Trip Time
87
Estimating the Round-Trip Time
TCP also measures the variation of RTT (DevRTT). DevRTT = (1 – β)*DevRTT + β *| SampleRTT – EstimatedRTT | Recommended value for β: β = 0.25 DevRTT measures the difference between SampleRTT and EstimatedRTT. DevRTT is small if SampleRTT values have little fluctuation (and vice versa).
88
Setting and Managing the Retransmission Timeout Interval
The timeout is set to be equal to the EstimatedRTT plus some margin. This margin should be large if there is a lot of fluctuation in the SampleRTT values. It should be small when there is little fluctuation. The timeout interval is set using the following formula: TimeoutInterval = EstimatedRTT + 4*DevRTT
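The three formulas above can be sketched together. The gains α = 0.125 and β = 0.25 are the standard recommended values; the SampleRTT measurements and the initial estimates are made up for this sketch.

```python
# RTT estimation sketch using the EstimatedRTT, DevRTT and
# TimeoutInterval formulas from the slides (times in seconds).
alpha, beta = 0.125, 0.25
estimated_rtt, dev_rtt = 0.100, 0.0       # initial values, assumed

for sample_rtt in [0.106, 0.120, 0.098]:  # made-up measurements
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    timeout = estimated_rtt + 4 * dev_rtt  # margin grows with fluctuation

print(round(estimated_rtt, 4))
print(round(timeout, 4))
```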
89
Reliable Data Transfer
TCP creates a reliable data transfer service on top of IP’s unreliable best-effort service. The data stream in the TCP receive buffer is uncorrupted, without duplication and in sequence. Our discussion here assumes: A single retransmission timer is used. Data is being sent in only one direction (A to B) and Host A is sending a large file. There are three major events that need to be handled by a TCP sender: Data is received from the application above The timer times out An ACK is received
90
Reliable Data Transfer
91
Reliable Data Transfer
Whenever TCP retransmits, the timeout interval will be set to twice the previous value. The timeout derived from SampleRTT and EstimatedRTT will not be used. If the same segment times out again, the next timeout interval will be doubled again. Provide a limited form of congestion control. However, for new data received from the application above, the TimeoutInterval value will be used.
92
Reliable Data Transfer
When a segment is lost, a long timeout may delay retransmission of the lost segment. Retransmission can be done faster by having the receiver send a duplicate ACK. This happens when the receiver receives data out of order The receiver reacknowledges the last in-order byte of data that it has received If a sender receives 3 duplicate ACKs for the same data, it will assume that the segment following the ACKed segment has been lost. The sender will then retransmit This is called a fast retransmit
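The duplicate-ACK trigger can be sketched as a simple counter. The ACK numbers below are made up; the rule is that the original ACK plus three duplicates (four in total) trigger the retransmission.

```python
# Fast-retransmit sketch: count ACKs per acknowledgement number; three
# duplicates of the same ACK trigger retransmission before the timer expires.
def fast_retransmit(acks):
    dup_count = {}
    for ack in acks:
        dup_count[ack] = dup_count.get(ack, 0) + 1
        if dup_count[ack] == 4:        # original ACK + 3 duplicates
            return ack                  # retransmit the segment starting here
    return None                         # no fast retransmit triggered

print(fast_retransmit([1000, 2000, 2000, 2000, 2000]))   # 2000
print(fast_retransmit([1000, 2000, 3000]))               # None
```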
93
Reliable Data Transfer
94
Reliable Data Transfer
Is TCP a Go-Back-N or a Selective Repeat protocol? In some ways, TCP looks like GBN: TCP performs cumulative acknowledgement Out-of-order segments are not individually ACKed In other ways, TCP looks like SR: Many TCP implementations will buffer correctly received, out-of-order segments Only lost segments are retransmitted Therefore, TCP is best categorized as a hybrid of the GBN and SR protocols.
95
Flow Control Bytes that are correctly received and in sequence by the receiving TCP will be put in a receive buffer. The corresponding application process will then read from this buffer. The application decides when to read the data. Data is not necessarily read at the instant it arrives. It is possible that if the application is slow in reading the data, the receive buffer may overflow. TCP provides a flow control service to prevent the sender from overflowing the receiver’s buffer.
96
Flow Control The TCP sender maintains a variable called the receive window (RcvWindow). This variable tells how much free space is available at the receiver. RcvWindow is actually calculated at the receiver. LastByteRead: The last byte number read by the receiving application. LastByteRcvd: The last byte number arrived from the network and put in the receive buffer. RcvWindow = RcvBuffer – [LastByteRcvd – LastByteRead]
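The formula can be evaluated directly; the buffer size and byte counters below are made-up numbers for illustration.

```python
# Receive-window computation from the slide's formula:
# RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
rcv_buffer = 65_536          # total receive buffer size in bytes (assumed)
last_byte_rcvd = 12_000      # last byte placed into the buffer (assumed)
last_byte_read = 4_000       # last byte the application has read (assumed)

rcv_window = rcv_buffer - (last_byte_rcvd - last_byte_read)
print(rcv_window)            # 57536: 8000 bytes of the buffer are occupied
```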
97
Flow Control
98
Flow Control Consider a communication between host A and B (A sends a file to B). B tells A the current value of its RcvWindow by placing it in the receive window field of every segment it sends to A. Host A keeps track of two variables: LastByteSent and LastByteAcked To ensure that A does not overflow B’s buffer, A must make sure that: LastByteSent – LastByteAcked <= RcvWindow
99
TCP Connection Management
Here, we will look in more detail at how a TCP connection is established and terminated. If a client wants to set up a connection to a server, the client-side TCP first sends a connection request segment (called a SYN segment) to the server-side TCP. It contains no application-layer data. The SYN flag bit is set to 1. The client chooses an initial sequence number (client_isn) and puts this number in the sequence number field.
100
TCP Connection Management
Upon receiving the TCP SYN segment, the server allocates TCP buffers and variables to the connection and sends a connection-granted segment (called a SYNACK segment) to the client. Contains no application-layer data. SYN flag bit is set to 1. The acknowledgement field is set to client_isn+1. The server chooses its own initial sequence number (server_isn) and puts this number in the sequence number field.
101
TCP Connection Management
Upon receiving the TCP SYNACK segment from the server, the client also allocates buffers and variables to the connection. The client then sends a segment to the server to acknowledge the server’s connection-granted segment. May contain application-layer data. SYN flag bit is set to 0. The acknowledgement field is set to server_isn+1. The sequence number field is set to client_isn+1.
102
TCP Connection Management
103
TCP Connection Management
This procedure is called a three-way handshake. After this, the two hosts can start sending segments containing data to each other. In each of these future segments, the SYN bit will be set to 0. The TCP connection can be terminated by either host. When the connection ends, the resources (buffers and connection variables) at both hosts are de-allocated.
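From the application's point of view, the whole three-way handshake is hidden inside the socket calls, as this Python sketch over loopback shows (addresses and the message are made up; port 0 lets the OS pick a free port).

```python
import socket

# The three-way handshake happens inside connect() and accept(); the
# application never handles the SYN / SYNACK / ACK segments itself.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen()
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN, SYNACK and ACK are exchanged here
conn, addr = server.accept()         # connection already established

client.sendall(b"data after handshake")
data = conn.recv(2048)
print(data)                          # b'data after handshake'
conn.close(); client.close(); server.close()
```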
104
TCP Connection Management
Suppose that the client wants to close the connection. It will send a special TCP segment to the server with its FIN bit set to 1. When the server receives the segment, it will send an ACK segment in return. The server then sends its own shutdown segment. This segment has FIN bit set to 1. The client will in turn send an ACK.
105
TCP Connection Management
106
TCP Connection Management
The connection is only really closed after a certain time-wait interval: This allows the TCP client to retransmit the final ACK in case it is lost. The time spent in this state is implementation dependent; typical values are 30 seconds, 1 minute or 2 minutes.
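The client-side close sequence above can be summarized as a list of TCP states. These state names come from the TCP specification; the sketch assumes the common case where the server's FIN arrives after its ACK.

```python
def client_close_states():
    """Client-side TCP state transitions when the client initiates
    the close (simplified: server's FIN arrives after its ACK)."""
    return ["ESTABLISHED",   # connection open, data flowing
            "FIN_WAIT_1",    # client sent FIN, waiting for ACK
            "FIN_WAIT_2",    # FIN acknowledged, waiting for server's FIN
            "TIME_WAIT",     # sent final ACK; wait ~30 s to 2 min
            "CLOSED"]        # buffers and variables de-allocated

print(client_close_states())
```

The TIME_WAIT entry corresponds to the time-wait interval discussed above: if the final ACK is lost, the server retransmits its FIN and the client can resend the ACK.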
107
Principles of Congestion Control
Congestion control is another important problem in networking. The retransmission mechanism discussed earlier treats a symptom of congestion but not its cause. In general, congestion control is performed by slowing down senders when congestion occurs. Here, we will discuss the problem of congestion control in general: Why is congestion a bad thing? How do we know that there is congestion in the network? What can we do to avoid or react to network congestion?
108
The Causes and Costs of Congestion
Congestion is caused by a high packet arrival rate. It causes high queuing delay, and packets may also be dropped. The sender must perform retransmission to compensate for packets dropped due to buffer overflow. If a premature timeout occurs, packets will be unnecessarily retransmitted, wasting router and link resources. When a packet is dropped along a path, the transmission capacity that was used to carry the packet up to the point where it is dropped ends up being wasted.
109
Approaches to Congestion Control
In general, when the network is congested, the sending host should lower its data rate. But how can the sending host know whether the network is congested or not? There are two broad approaches toward congestion control: Network-assisted congestion control End-to-end congestion control Network-assisted congestion control: Network-layer components (routers) provide explicit feedback to the sender regarding the congestion state in the network.
110
Approaches to Congestion Control
Feedback can be provided in one of two ways: The router sends a choke packet to the sender to indicate that it is congested. The router marks a field in a packet flowing from sender to receiver; the receiver then notifies the sender of the congestion indication. The information sent by the router to the sender can take various forms: A single bit indicating congestion. The exact transmission rate that the router can support at that time.
111
Approaches to Congestion Control
This approach is used by a number of networks: IBM SNA DEC DECnet ATM ABR (available bit rate) congestion control. End-to-end congestion control: The network layer provides no explicit support for congestion control. The presence of congestion must be inferred from observed network behavior such as packet loss and delay. Used by the Internet.
112
TCP Congestion Control
TCP uses end-to-end congestion control. TCP limits the rate at which each sender sends traffic into its connection as a function of perceived network congestion. If the sender perceives that there is little or no congestion on the path, the send rate is increased, and vice versa. To perform this, we need to solve three problems: How does the sender know there is congestion in the path? How does the sender limit the send rate? What algorithm should be used to change the send rate?
113
TCP Congestion Control
During congestion, one or more router buffers along the path overflow. Datagrams will be dropped. Datagrams that are not dropped may be overly delayed. Therefore, congestion can be inferred from a “loss event”: Timeout The receipt of three duplicate ACKs
114
TCP Congestion Control
To limit the data rate, TCP keeps track of a variable called the congestion window (CongWin). The congestion window imposes a constraint on the rate at which a TCP sender can send traffic into the network: LastByteSent – LastByteAcked <= min{CongWin, RcvWindow} Assuming RcvWindow is much larger than CongWin, Sender’s rate ≈ CongWin/RTT bytes/sec Adjusting the CongWin value allows the sender to adjust the rate at which data is sent.
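The constraint above can be turned into a one-line rate estimate. The function below is a direct transcription of the two formulas, with illustrative numbers chosen by us.

```python
def tcp_send_rate(cong_win, rcv_window, rtt):
    """Approximate TCP send rate: the window actually usable is
    min(CongWin, RcvWindow), and one window is sent per RTT."""
    return min(cong_win, rcv_window) / rtt   # bytes per second

# CongWin = 10,000 bytes, a large receiver window, RTT = 0.1 s:
print(tcp_send_rate(10_000, 1_000_000, 0.1))  # 100000.0 bytes/sec
# If the receiver window is the smaller of the two, it becomes the limit:
print(tcp_send_rate(10_000, 5_000, 0.1))      # 50000.0 bytes/sec
```

The second call shows why the min{} is needed: flow control (RcvWindow) and congestion control (CongWin) each impose their own cap, and the sender obeys whichever is tighter.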
115
TCP Congestion Control Algorithm
TCP congestion control has evolved over the years (and will continue to evolve). What was good for the Internet several years ago may not be good for the Internet now, due to changes in the types of applications that run on it. There are several versions of TCP congestion control algorithms: TCP Tahoe algorithm TCP Reno algorithm (we will choose this for discussion) TCP Vegas algorithm For each of these versions there may be variants in terms of implementation.
116
TCP Congestion Control Algorithm
When a TCP connection begins, CongWin = 1 MSS. Initial data rate = MSS/RTT bytes/second Example: If MSS = 500 bytes and RTT = 200 msec, data rate = 20 kbps. For each acknowledgement received before a loss event, CongWin is increased by 1 MSS, which doubles the window every RTT: 1 MSS 2 MSS 4 MSS 8 MSS 16 MSS, etc. The CongWin value increases exponentially. This procedure continues until a loss event occurs. This phase is called the slow start (SS) phase.
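The example numbers and the per-RTT doubling can be checked with a few lines of arithmetic. The five-round horizon below is arbitrary, chosen just to show the exponential pattern.

```python
MSS = 500   # bytes, from the example above
RTT = 0.2   # seconds (200 msec)

# Initial data rate = MSS/RTT = 500 * 8 bits / 0.2 s = 20,000 bits/sec = 20 kbps
print(MSS * 8 / RTT)  # 20000.0

# Slow start: CongWin grows by one MSS per ACK, i.e. doubles each RTT
cong_win = MSS
for rtt_round in range(5):
    print(f"after RTT {rtt_round}: {cong_win // MSS} MSS")
    cong_win *= 2   # 1 MSS, 2 MSS, 4 MSS, 8 MSS, 16 MSS, ...
```

Despite its name, "slow start" is the fastest-growing phase of TCP; it is slow only relative to the older practice of starting with a full receiver window.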
117
TCP Congestion Control Algorithm
What happens when a loss event occurs? If the sender receives three duplicate ACKs: The congestion window is cut in half, then increases linearly (no longer exponentially). If a timeout occurs: The congestion window is reset to 1 MSS and the sender re-enters the slow start phase. The congestion window grows exponentially until it reaches half of its size before the timeout occurred, then increases linearly. This linear-increase phase is referred to as the congestion avoidance (CA) phase.
118
TCP Congestion Control Algorithm
To manage the complexity of having a dynamic congestion window size, TCP uses a variable called Threshold. Threshold is initially set to a very high value (65 Kbytes in practice). Whenever a loss event occurs, the Threshold value is set to half the size of the current CongWin. Threshold separates the SS and CA phases: If CongWin < Threshold, we are in the SS phase. If CongWin > Threshold, we are in the CA phase.
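The rules from the last two slides can be combined into one step function. This is a simplified sketch of TCP Reno's reaction per round (it omits fast recovery details and works in units of MSS); the event labels are ours.

```python
MSS = 1  # work in units of MSS for clarity

def reno_step(cong_win, threshold, event):
    """One round of a simplified TCP Reno sender.
    event: None (all data ACKed), 'dup3' (3 dup ACKs), 'timeout'."""
    if event == "dup3":              # halve the window, continue linearly
        threshold = cong_win / 2
        cong_win = threshold
    elif event == "timeout":         # back to slow start from 1 MSS
        threshold = cong_win / 2
        cong_win = MSS
    elif cong_win < threshold:       # slow start: exponential growth
        cong_win *= 2
    else:                            # congestion avoidance: linear growth
        cong_win += MSS
    return cong_win, threshold

cw, th = 1, 64
for ev in [None, None, None, None, None, None, "dup3", None, "timeout", None]:
    cw, th = reno_step(cw, th, ev)
    print(f"event={ev}: CongWin={cw} MSS, Threshold={th}")
```

Running the trace shows the classic sawtooth: exponential growth to 64 MSS, a halving after the triple duplicate ACK, then a collapse to 1 MSS after the timeout with Threshold remembering half the pre-loss window.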
119
TCP Congestion Control Algorithm
120
TCP Congestion Control Algorithm
In general, TCP: Additively increases its rate when the path is congestion-free. Multiplicatively decreases its rate when the path is congested. For this reason, TCP is referred to as an additive-increase, multiplicative-decrease (AIMD) algorithm. Say that W is the congestion window size (in bytes) when a loss event occurs: Average throughput = 0.75 × W / RTT bytes/sec
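The 0.75 factor follows from the AIMD sawtooth: the window oscillates between W/2 and W, averaging 0.75 W. A quick numeric check, with illustrative values of W and RTT chosen by us:

```python
def avg_tcp_throughput(w_bytes, rtt):
    """AIMD sawtooth: the window oscillates between W/2 and W,
    so the long-run average throughput is about 0.75 * W / RTT."""
    return 0.75 * w_bytes / rtt   # bytes per second

# W = 100,000 bytes, RTT = 0.1 s -> 750,000 bytes/sec
print(avg_tcp_throughput(100_000, 0.1))  # 750000.0
```

Note the inverse dependence on RTT: for the same loss behavior, a connection with a shorter round-trip time achieves a proportionally higher average throughput.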
121
TCP Congestion Control Algorithm
122
Secure Transport Layer Protocols
Security at the transport layer is provided by: SSL (Secure Sockets Layer) TLS (Transport Layer Security) TLS is the successor to SSL. SSL got up to version 3.0; the next version is TLS 1.0 (sometimes also called SSL 3.1). The latest version is TLS 1.3. Provides encryption and authentication for application data.
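As a sketch of how an application opts into TLS, the snippet below configures a client-side TLS context using Python's standard ssl module, the way an application would before wrapping a TCP socket with `context.wrap_socket(...)`. The minimum-version choice is an illustrative policy, not a requirement of the protocol.

```python
import ssl

# Build a client-side TLS context with sensible, secure defaults.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSL 3.0 / old TLS

# The context provides both services named on the slide:
# encryption of application data, and authentication of the server.
print(context.verify_mode == ssl.CERT_REQUIRED)    # True: certificate checked
print(context.check_hostname)                      # True: hostname validated
```

In a real client this context would wrap an ordinary TCP socket, so the application still uses TCP's reliable byte stream underneath; TLS sits between the application and the transport layer.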