Transport Layer Process-to-Process Delivery: UDP & TCP
Transport layer (TL)
The transport layer provides process-to-process delivery. (A process is an application program running on a host.) The network layer provides source-to-destination delivery of individual packets, each treated independently, with no relationship between those packets. The transport layer ensures that the packets belonging to an application arrive intact and in order, overseeing both error control and flow control. A transport-layer protocol can be either:
Connectionless: treats each segment as an independent packet and delivers it to the transport layer at the destination host (UDP).
Connection-oriented: the transport layer makes a connection with the destination host prior to packet delivery (TCP; SCTP is not covered here).
A message is usually divided into several segments. UDP treats each segment separately (unrelated), while TCP creates a relationship between the segments using sequence numbers. Flow and error control at the transport layer are performed end to end.
Process-to-process Communication
Position of UDP, TCP, and SCTP in the TCP/IP suite
Transport Layer Addressing using Port Numbers
Client/server paradigm: a process on the client (local host) needs a service from a process running on the server (remote host). Since the local or remote host can run several processes, we need to identify both the local host/local process and the remote host/remote process. The transport layer addresses processes using port numbers (16 bits: 0 to 65535), to choose among the multiple processes running on the destination host. The destination port number is used for delivery; the source port number is used for the reply. The client port number can be randomly assigned (e.g. an ephemeral port number such as 52000), but the server port number must be fixed for a server process (e.g. the well-known port number 13).
IP addresses versus port numbers; socket address
As far as addressing is concerned, the IP address (at the network layer) and the port number (at the transport layer) play different roles in selecting the final destination of data. The destination IP address defines the host among all the hosts in the world. After the host has been selected, the port number defines one of the processes on that particular host. Process-to-process delivery therefore needs two identifiers: the IP address and the port number, which together form the socket address. Client and server socket addresses: a pair of socket addresses, one for each end, identifies each connection.
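As a sketch, the two identifiers can be modelled as a simple (IP, port) tuple in Python; the addresses below are made-up placeholders, not values from the slides:

```python
# A socket address pairs an IP address (network layer) with a port
# number (transport layer) to identify one process on one host.
client_socket_addr = ("192.0.2.10", 52000)   # ephemeral client port
server_socket_addr = ("203.0.113.5", 13)     # well-known server port (daytime)

# A connection is fully identified by the pair of socket addresses.
connection = (client_socket_addr, server_socket_addr)
print(connection)
```

This mirrors how the BSD socket API actually represents addresses for IPv4: a host string plus a 16-bit port.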
Multiplexing / Demultiplexing
At the sender site, several processes may need to send packets, but there is only one transport protocol at any time (many-to-one). Multiplexing at the transport layer accepts packets from different processes, differentiating them by port number. After adding the header, the transport layer passes the packet to the network layer. At the receiver site, the relationship is the opposite (one-to-many), which requires demultiplexing. After error checking and dropping of the header, the transport layer delivers each message to the appropriate process based on the port number.
Reliable service using error control
Reliability at the data link layer is between two nodes (the pink links in the figure); reliability at the transport layer is end to end. The network layer is unreliable (best-effort delivery) because it is concerned only with routing packets to the appointed address.
User Datagram Protocol (UDP)
The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol. It adds nothing to the services of IP except process-to-process communication instead of host-to-host communication, with very limited error checking. Why is UDP needed if it is so unreliable? It is a simple protocol with minimum overhead, hence fast delivery: e.g. a process that wants to send a small message and does not care much about reliability. UDP packets have a fixed-size header of 8 bytes. UDP length = IP length − IP header length.
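A minimal UDP exchange can be sketched with Python's standard socket module. Note there is no connection setup and no acknowledgment: the datagram either arrives or is silently lost. The message text and loopback use are illustrative:

```python
import socket

# Receiver: bind a UDP socket; the OS picks an ephemeral port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()          # (IP, port) socket address

# Sender: no connect(), no handshake - just send a datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello via UDP", addr)

data, peer = recv_sock.recvfrom(1024)   # one datagram, delivered whole
print(data)
recv_sock.close()
send_sock.close()
```

On the loopback interface this works reliably, but over a real network nothing in UDP itself would retransmit a lost datagram.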
Pseudoheader for checksum calculation for UDP
The UDP checksum covers three sections: 1. the pseudoheader, 2. the UDP header, 3. the data from the application layer. The pseudoheader is extracted from part of the header of the IP packet. If the IP header is corrupted, the datagram may be delivered to the wrong host; including the pseudoheader guards against this. The protocol field (UDP = 17) ensures the packet belongs to UDP only; if the value is changed due to an error, the checksum will detect it and the receiver drops the packet. UDP has no flow control, so the receiver may be overwhelmed by incoming messages, and the sender gets no feedback if a message is lost or duplicated.
The figure below shows the checksum calculation for a very small user datagram with only 7 bytes of data. Because the number of data bytes is odd, padding is added for the checksum calculation. The pseudoheader as well as the padding are dropped when the user datagram is passed to IP.
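The calculation above can be sketched in Python as a 16-bit ones' complement sum over pseudoheader, header, and padded data. The field layout follows the pseudoheader description (source/destination IPs, protocol number 17, UDP length); the sample addresses and ports in the usage lines are assumptions for illustration:

```python
def ones_complement_sum16(words):
    """Sum 16-bit words with end-around carry (ones' complement addition)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, data):
    """Checksum over pseudoheader + UDP header + data (teaching sketch)."""
    udp_len = 8 + len(data)            # header (8 bytes) + data, before padding
    if len(data) % 2:                  # pad odd-length data for summation only
        data = data + b"\x00"
    words = []
    for ip in (src_ip, dst_ip):        # pseudoheader: IPs, protocol, length
        a, b, c, d = (int(x) for x in ip.split("."))
        words += [(a << 8) | b, (c << 8) | d]
    words += [17, udp_len]             # protocol number 17 identifies UDP
    words += [src_port, dst_port, udp_len, 0]   # UDP header, checksum field = 0
    for i in range(0, len(data), 2):
        words.append((data[i] << 8) | data[i + 1])
    return (~ones_complement_sum16(words)) & 0xFFFF

c = udp_checksum("153.18.8.105", "171.2.14.10", 1087, 13, b"TESTING")
print(hex(c))
```

Flipping any bit of the addresses, ports, or data changes the sum, which is how the receiver detects corruption, including a corrupted protocol field.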
Encapsulation and decapsulation
Queues in UDP
Multiplexing and demultiplexing
Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP) is a connection-oriented, reliable transport protocol. Like UDP, it is a process-to-process protocol that uses port numbers. Unlike UDP, TCP creates a virtual connection between two TCPs to send data, and it uses flow and error control (hence reliable). TCP is also a stream-oriented protocol: it allows sending and receiving streams of bytes that are related, as if through a 'virtual tube'. The segments are related using sequence numbers.
Sending and receiving buffers handle the disparity of speed
Because the speed of writing and reading data differs between the sending and receiving processes, TCP needs buffers for storage. (In the figure, buffer space is either empty, holding bytes waiting to be sent or read, or holding bytes sent but not yet acknowledged.) Buffers are also needed for the flow and error control mechanisms used in TCP. Buffers store the data and deliver it to the specific program (using port numbers) when that program is ready or finds it convenient to receive.
TCP segments versus IP packets
The IP layer sends data in packets, not as a stream of bytes. At the transport layer, TCP groups a number of bytes together into a packet called a segment. TCP adds a header to each segment (for control) and delivers it to the IP layer. The segment is thus encapsulated in an IP datagram; this encapsulation is transparent to the receiving process, and TCP ensures the bytes are delivered to it in order. TCP segments can differ in size.
TCP
TCP offers a full-duplex service, in which data can flow in both directions at the same time. Each TCP endpoint therefore has a sending and a receiving buffer, and segments move in both directions (largely for flow and congestion control). When site A wants to send data to and receive data from another site B: the two TCPs establish a connection between them; data are exchanged in both directions; the connection is terminated when they finish. The connection is virtual, not physical. TCP keeps track of the segments by sequence and acknowledgment numbers. Some TCP segments can carry a combination of data and control information (piggybacking), using a sequence number and an ACK. Such segments are also used for connection establishment, termination, or abortion.
Sequence Number
The bytes of data being transferred in each connection are numbered by TCP to establish the relationship between the data bytes/segments being sent. The numbering starts from a randomly generated number between 0 and 2^32 − 1 (not necessarily zero). The sequence number effectively covers a range of bytes: the value in the sequence number field of a segment defines the number of the first data byte contained in that segment. The byte numbering is also used for flow and error control.
Example
Suppose a TCP connection is transferring a file of 5000 bytes. The first byte is numbered 10,001. What are the sequence numbers for each segment if the data are sent in five equal segments?
Solution
Each segment carries 1000 bytes:
Segment 1 → sequence number 10,001 (bytes 10,001 to 11,000)
Segment 2 → sequence number 11,001 (bytes 11,001 to 12,000)
Segment 3 → sequence number 12,001 (bytes 12,001 to 13,000)
Segment 4 → sequence number 13,001 (bytes 13,001 to 14,000)
Segment 5 → sequence number 14,001 (bytes 14,001 to 15,000)
Example
Imagine a TCP connection is transferring a file of 6000 bytes. The first byte is numbered 10,010. What are the sequence numbers for each segment if the data are sent in five segments, with the first four segments carrying 1,000 bytes each and the last segment carrying 2,000 bytes?
Solution
Segment 1 → sequence number 10,010 (bytes 10,010 to 11,009)
Segment 2 → sequence number 11,010 (bytes 11,010 to 12,009)
Segment 3 → sequence number 12,010 (bytes 12,010 to 13,009)
Segment 4 → sequence number 13,010 (bytes 13,010 to 14,009)
Segment 5 → sequence number 14,010 (bytes 14,010 to 16,009)
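The arithmetic in both examples can be checked with a small helper; the byte counts below mirror the second example:

```python
def segment_ranges(first_byte, sizes):
    """Return (sequence_number, first_byte, last_byte) per segment.
    The sequence number is the number of the first byte the segment carries."""
    out, start = [], first_byte
    for size in sizes:
        out.append((start, start, start + size - 1))
        start += size
    return out

# four 1000-byte segments followed by one 2000-byte segment:
for seq, first, last in segment_ranges(10_010, [1000, 1000, 1000, 1000, 2000]):
    print(f"seq={seq}  bytes {first} to {last}")
```

Changing `first_byte` to 10_001 and the sizes to five 1000-byte segments reproduces the first example.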
ACK Number
TCP is full-duplex: the two parties can send and receive data at the same time. Each party numbers the bytes it sends (usually with a different starting number). The sequence number in each direction gives the number of the first byte carried by that segment. Each party also uses an acknowledgment number to confirm the bytes it has received: the value of the acknowledgment field in a segment defines the number of the next byte the party expects to receive. The acknowledgment number is cumulative, meaning the party takes the number of the last byte it has received safely, adds 1 to it, and announces this sum as the ACK number.
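A sketch of the cumulative rule: the acknowledgment names the first byte missing from the contiguous run received so far (byte numbers below are illustrative):

```python
def cumulative_ack(first_byte, received):
    """Cumulative ACK: scan the contiguous run of received byte numbers
    starting at first_byte; the ACK names the next byte expected."""
    nxt = first_byte
    while nxt in received:
        nxt += 1
    return nxt

# all of bytes 10,001..12,000 arrived: the ACK announces 12,001
print(cumulative_ack(10_001, set(range(10_001, 12_001))))
# a missing byte 11,001 holds the ACK back, even though later bytes arrived
print(cumulative_ack(10_001, set(range(10_001, 11_001)) | set(range(11_002, 12_001))))
```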
TCP header format
The header is 20 bytes (with no options) up to 60 bytes; the header length field holds a value between 5 and 15, counted in 4-byte words. The header also carries the window size (for the receiving window), an urgent pointer for urgent data, and six different control flag fields.
Control field
Description of the flags in the control field: URG (urgent pointer valid), ACK (acknowledgment valid), PSH (push the data), RST (reset the connection), SYN (synchronise sequence numbers), FIN (terminate the connection).
TCP Connection
TCP is a connection-oriented transport protocol that establishes a virtual path between source and destination. Using a single virtual pathway for the entire message facilitates the acknowledgment process and the retransmission of segments that are damaged, lost, or arrive out of order. Although TCP uses the service of IP (connectionless) to deliver individual segments, it keeps full control of the connection: IP is unaware of lost packets and retransmissions, but TCP is not, and it can hold segments until the missing ones arrive. In TCP, connection-oriented transmission requires three phases: connection establishment, data transfer, and connection termination. All three phases usually involve request and acknowledgment procedures.
1. TCP connection establishment using three-way handshaking
The client program issues a request for an active open, telling its TCP that it needs to be connected to a specific server; the server program issues a passive open, telling its TCP that it is ready to accept a connection. TCP then starts the three-way handshake: the client sends a SYN, to synchronise sequence numbers; the server replies with SYN (for the return path) + ACK (for the previous SYN), where the acknowledgment number is the client's sequence number plus one; finally, the client sends an ACK for the SYN sent by the server.
A SYN segment cannot carry data, but it consumes one sequence number.
A SYN + ACK segment cannot carry data, but does consume one sequence number. An ACK segment, if carrying no data, consumes no sequence number.
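These rules (and the matching FIN rules later) can be captured in one helper that counts how many sequence numbers a segment consumes; this is a simplified model for the slides, not real TCP code:

```python
def seq_consumed(flags, data_len):
    """Sequence numbers consumed by a segment: SYN and FIN each count as
    one 'virtual byte'; a bare ACK with no data consumes none."""
    return data_len + ("SYN" in flags) + ("FIN" in flags)

# the three-way handshake carries no data:
print(seq_consumed({"SYN"}, 0))          # client SYN
print(seq_consumed({"SYN", "ACK"}, 0))   # server SYN + ACK
print(seq_consumed({"ACK"}, 0))          # final ACK
```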
TCP Data Transfer
After the TCP connection is established, bidirectional data transfer can take place: the client and server can both send data and acknowledgments at the same time, with data piggybacked on an ACK or vice versa. Pushing data: the sending TCP may not want to wait for the window to fill, but instead 'push' the data so that it is delivered to the receiving program immediately, e.g. in an interactive messenger service. Urgent data: an application program may need to send urgent bytes (to be treated first), which requires a piece of data to be read out of the normal order by the receiving program, e.g. an abort/interrupt command that must take effect ahead of data already queued after a previous result is discovered to be wrong.
2. TCP data transfer
The client sends 2000 bytes of data in two segments, at the same time acknowledging the server's previous transmission with ACK 15001. The server sends an ACK (10001), piggybacked with another 2000 bytes of data starting at sequence number 15001. The client then replies with ACK 17001. Note that the PSH flag is used so that the server's TCP knows to deliver the data to the server process as soon as it is received.
TCP Connection Termination
Either of the two parties exchanging data (client or server) can close the connection, although it is usually initiated by the client, again using three-way handshaking. Half-close: in TCP, one end can stop sending data while still receiving. Either end can issue a half-close, although it is normally the client. This is useful when the server needs all the data before processing can begin, e.g. a sorting application: the client half-closes (closes its outbound direction) after sending all the necessary data, while the inbound direction remains open to receive the sorted result.
3. Connection termination using three-way handshaking
The client TCP, after receiving a close command from the client process, sends the first segment, a FIN, to initiate the active close. The server TCP, after receiving the FIN segment, informs its process and sends back a FIN + ACK, to confirm the previous FIN and announce the closing of its return path. The client TCP then sends the last ACK segment, to confirm receipt of the FIN from the server.
The FIN segment consumes one sequence number if it does not carry data.
The FIN + ACK segment consumes one sequence number if it does not carry data.
Half-close
TCP Flow Control
Unlike UDP, TCP provides a flow control mechanism: the receiver controls the amount of data the sender may send, to prevent overflow at the destination, by announcing a value in the window size field of the TCP header. Similar to the data link layer, TCP uses a sliding window (SW) together with the byte numbering system to handle flow control and make transmission more efficient; however, this is done on an end-to-end basis. The TCP sliding window protocol is something between Go-Back-N and Selective Repeat. There are two differences from the data link layer: the TCP sliding window is byte-oriented, whereas the data-link window is frame-oriented; and the TCP window is of variable size, whereas the data-link window is of fixed size.
Sliding window
The window has three movements: opening (moving the right wall to the right), closing (moving the left wall to the right), and shrinking (moving the right wall to the left). All three activities are under the control of the receiver (depending on congestion in the network), not the sender; the sender must obey the receiver's commands in this matter. Opening allows more new bytes in the buffer to become eligible for sending. Closing means some bytes have been acknowledged, so the sender need not worry about them anymore. Shrinking means reducing the size of the window for congestion control purposes.
TCP header format
The window size field defines the size of the window, in bytes, that the other party must maintain; 16 bits allow a window of up to 65,535 bytes. This is normally referred to as the receiver window (rwnd) and is determined by the receiver. The sender must obey the dictation of the receiver in this matter.
Example What is the value of the receiver window (rwnd) for host A if the receiver, host B, has a buffer size of 5000 bytes and 1000 bytes of received and unprocessed data? Solution The value of rwnd = 5000 − 1000 = 4000. Host B can receive only 4000 bytes of data before overflowing its buffer. Host B advertises this value in its next segment to A.
Example
What is the size of the window for host A if the value of rwnd is 3000 bytes and the value of cwnd is 3500 bytes?
Solution
The size of the window is the smaller of rwnd and cwnd, which is 3000 bytes. The window size at one end is determined by the lesser of two values: the receiver window (rwnd) and the congestion window (cwnd). The receiver window, advertised by the receiver, tells the number of bytes it can accept before its buffer overflows. The congestion window value is determined by the network.
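Both examples can be reproduced with two one-line helpers:

```python
def receiver_window(buffer_size, unprocessed):
    """rwnd advertised by the receiver: the free space left in its buffer."""
    return buffer_size - unprocessed

def sender_window(rwnd, cwnd):
    """Usable send window: the lesser of the receiver and congestion windows."""
    return min(rwnd, cwnd)

print(receiver_window(5000, 1000))   # first example: 4000 bytes
print(sender_window(3000, 3500))     # second example: 3000 bytes
```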
Example Figure below shows another example of a sliding window. The sender has sent bytes up to 202. We assume that cwnd is 20 (in reality this value is thousands of bytes). The receiver has sent an acknowledgment number of 200 with an rwnd of 9 bytes (in reality this value is thousands of bytes). The size of the sender window is the minimum of rwnd and cwnd, or 9 bytes. Bytes 200 to 202 are sent, but not acknowledged. Bytes 203 to 208 can be sent without worrying about acknowledgment. Bytes 209 and above cannot be sent.
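A sketch of the same window arithmetic; an acknowledgment number of 200 means bytes up to 199 are acknowledged and byte 200 is expected next:

```python
def sliding_window(next_expected, rwnd, cwnd, next_to_send):
    """Return (outstanding, can_send, right_wall) for the sender's window.
    next_expected is the last ACK number received; next_to_send is the
    number of the first byte not yet transmitted."""
    win = min(rwnd, cwnd)                       # 9 bytes in the example
    right_wall = next_expected + win            # first byte outside the window
    outstanding = next_to_send - next_expected  # sent but unacknowledged
    can_send = right_wall - next_to_send        # eligible but not yet sent
    return outstanding, can_send, right_wall

# ACK 200 received, rwnd = 9, cwnd = 20, bytes 200..202 already sent:
out, free, wall = sliding_window(200, 9, 20, 203)
print(out, free, wall)   # 3 outstanding, 6 sendable (203..208), 209 blocked
```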
TCP Error Control
Unlike UDP, TCP ensures reliable delivery using error control: an application program that hands a stream of data to TCP relies on TCP to deliver the entire stream to the other side in an orderly manner, without error, loss, or duplication. Error control includes mechanisms for detecting corrupted, lost, out-of-order, and duplicated segments, and mechanisms for correcting errors after they are detected. It is achieved through the checksum (to detect a corrupted segment and discard it), acknowledgments (ACKs, to confirm the receipt of segments), and time-outs (timers set for the retransmission of segments). The heart of the error control mechanism is retransmission, which happens when the time-out timer expires, meaning no ACK segment was received from the other party. There is no retransmission for, and no timer set on, an ACK segment.
TCP Error Control
Recent implementations of TCP maintain one retransmission time-out (RTO) timer for all outstanding (sent but unacknowledged) segments. When the timer matures, the earliest outstanding segment is retransmitted (even if its ACK arrives later due to delay). The value of the RTO is dynamic in TCP and is updated based on the round-trip time (RTT) of segments; an RTT is the time needed for a segment to reach its destination and for an ACK to be received. Data may arrive out of order and be temporarily stored by the receiving TCP, but TCP guarantees that no out-of-order segment is delivered to the process (it delivers only ordered data). A problem occurs when the RTO value is large and the receiver receives so many out-of-order segments that they cannot all be saved (limited buffer size). The remedy is fast retransmission, applied when the sender receives three duplicate ACKs.
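The slides only say the RTO is updated from measured RTTs; one standard way to do that update is the smoothed estimator of RFC 6298, sketched here as an assumption beyond the slides:

```python
ALPHA, BETA = 1 / 8, 1 / 4   # smoothing factors from RFC 6298

def update_rto(srtt, rttvar, rtt_sample):
    """One update step of the smoothed RTT estimator (RFC 6298).
    Returns the new (srtt, rttvar, rto) given a fresh RTT measurement."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    rto = srtt + 4 * rttvar      # margin of 4 deviations above the mean
    return srtt, rttvar, rto

# a steady 100 ms sample shrinks the variance term, and with it the RTO:
print(update_rto(100, 20, 100))
```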
TCP header format
Error control fields: checksum and acknowledgment number. Lost and corrupted segments are treated the same way by the receiver: both are treated as lost. (Lost segments are discarded somewhere in the network by a router; corrupted segments are discarded by the receiver itself.)
Normal operation
Note that a slight delay of up to 500 ms is allowed before sending off an ACK, to see whether any more segments arrive, so that one cumulative ACK can cover that range of time (e.g. ACK 6001 is skipped).
Lost segment
This example shows unidirectional data transfer, with an RTO timer set for each transmission and ACKs sent instantly or after the 500 ms delay. Segments 1 and 2 are received and an ACK is sent. Segment 3 is lost, but segment 4 arrives out of order and is stored in the buffer, leaving a gap; the receiver sends an ACK indicating the segment it still expects (i.e. 701). After the RTO for segment 3 expires, it is resent; the receiver fills the gap in its buffer and acknowledges.
Fast retransmission
When the segments after a lost one (segments 3, 4, and 5 here) are received, each triggers an ACK because the receiver recognises an out-of-order situation (a gap in the buffer). Three duplicate ACKs are returned; the sender notes this and resends the lost segment immediately: fast retransmission.
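The trigger condition can be modelled as a duplicate-ACK counter; this is a simplified sketch, not a full TCP state machine:

```python
def fast_retransmit_events(acks):
    """Return the ACK values whose third duplicate makes the sender
    retransmit immediately instead of waiting for the RTO timer."""
    events, last, dup = [], None, 0
    for a in acks:
        if a == last:
            dup += 1
            if dup == 3:
                events.append(a)     # third duplicate: retransmit now
        else:
            last, dup = a, 0         # a fresh ACK resets the counter
    return events

# the receiver keeps re-ACKing 301 while segments after the gap arrive:
print(fast_retransmit_events([101, 201, 301, 301, 301, 301]))
```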
Lost segment
Corrupted segment
Lost acknowledgment
Congestion Control & Quality of Service (QoS)
Congestion control and QoS are two issues bound closely together: improving one improves the other, and ignoring one ignores the other. They concern not just the transport layer; three layers are involved: the data link layer, the network layer, and the transport layer, which need to be thought of as cooperating. Both deal directly with managing data traffic.
DATA TRAFFIC The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself. Traffic Descriptor Traffic Profiles
Traffic descriptors
Three traffic profiles
CONGESTION Congestion in a network may occur if the load on the network—the number of packets sent to the network—is greater than the capacity of the network—the number of packets a network can handle. Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity. Congestion happens in any system that involves waiting or queuing.
Queues in a router
Two issues: if the packet arrival rate is higher than the packet processing rate, the input queues become longer and longer; if the packet departure rate is less than the packet processing rate, the output queues become longer and longer.
Packet delay and throughput as functions of load
Congestion control involves two factors that measure the performance of a network: delay (queuing, processing, and propagation delay) and throughput (the number of packets passing through per unit of time).
TCP assumes that the cause of a lost segment is due to congestion in the network.
If the cause of the lost segment is congestion, retransmission of the segment not only does not remove the cause, it aggravates it.
CONGESTION CONTROL Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. In general, we can divide congestion control mechanisms into two broad categories: Open-loop congestion control (prevention) and Closed-loop congestion control (removal).
Congestion control categories
Open-Loop Congestion Control
Retransmission policy: the sender resends a packet that is lost or corrupted. Retransmission in general may increase congestion, so a good retransmission policy and good timers are needed to optimise efficiency and prevent congestion from building up. Window policy: the type of window matters; a Selective Repeat window is better than a Go-Back-N window because it avoids resending correctly received packets. Acknowledgment policy: ACKs also affect congestion because they are part of the load; a receiver may send an ACK only when it has a packet to send or a special timer expires, or may decide to acknowledge only N packets at a time. Discarding policy: a good discarding policy must not harm the integrity of the transmission; e.g. for audio, less sensitive packets are discarded so that the quality of the sound is still preserved. Admission policy: a QoS mechanism to avoid congestion in virtual-circuit networks; switches check the resource requirements of a flow before admitting it to the network, and a router can refuse to establish a virtual-circuit connection if there is severe congestion.
Closed-Loop Congestion Control
Backpressure: a congestion control mechanism in which a congested node stops receiving data from the immediately upstream node or nodes. This may cause those nodes to become congested in turn and reject data from their own upstream nodes. Backpressure is node-to-node congestion control that starts at a node and propagates in the direction opposite to the data flow. It can be applied only to virtual-circuit networks, where each node knows its upstream node. Choke packet: a packet sent by a node directly to the source to inform it of congestion (rather than propagating backward node by node), asking the source to slow down; intermediate nodes are not warned and take no action. Implicit signalling: there is no communication between the congested node and the source; the source guesses that there is congestion somewhere from other symptoms, e.g. delays or missing ACKs (the TCP case). Explicit signalling: unlike the choke packet, the signal is carried within the data packets to inform the source of congestion (the Frame Relay case).
Backpressure method for alleviating congestion
Node III has more input data than it can handle. It drops some packets from its input buffer and informs node II to slow down. Node II, in turn, may become congested because it is slowing down its output flow of data; if so, it informs node I to slow down, which may in turn create a temporary congestion there, and node I then informs the source to slow down. In time this alleviates the congestion: the pressure on node III is moved backward all the way to the source.
Choke packet
Congestion Control in TCP
So far we have assumed that only the receiver can dictate the sender's window size, and the network entity has been ignored. But if the network cannot deliver the data as fast as the sender creates it, it must tell the sender to slow down: in addition to the receiver, the network is a second entity that determines the size of the sender's window. Hence the sender's window size is determined not only by the receiver but also by congestion in the network: window size = min(rwnd, cwnd). TCP's general policy for handling congestion is based on three phases: slow start (exponential increase), congestion avoidance (additive increase), and congestion detection (multiplicative decrease). The sender starts at a slow rate but increases it rapidly until a threshold is reached, then switches to a linear increase to avoid congestion. Finally, if congestion is detected, the sender goes back to the slow-start phase.
1. Slow start, exponential increase
In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold.
2. Congestion avoidance, additive increase
In the congestion avoidance algorithm, the size of the congestion window increases additively until congestion is detected.
3. Congestion detection
An implementation reacts to congestion detection in one of the following ways:
❏ If detection is by time-out, a new slow start phase starts.
❏ If detection is by three duplicate ACKs, a new congestion avoidance phase starts.
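The three phases can be condensed into one transition function; window units are maximum segment sizes (MSS), and the reaction rules follow the summary above in a simplified TCP Reno style:

```python
def next_cwnd(cwnd, ssthresh, event):
    """One reaction step of TCP congestion control; returns (cwnd, ssthresh).
    event: 'ack' (a full window acknowledged), 'timeout', or '3dupacks'."""
    if event == "ack":
        if cwnd < ssthresh:
            return cwnd * 2, ssthresh   # slow start: exponential increase
        return cwnd + 1, ssthresh       # congestion avoidance: additive increase
    ssthresh = max(cwnd // 2, 2)        # multiplicative decrease of the threshold
    if event == "timeout":
        return 1, ssthresh              # restart from slow start
    return ssthresh, ssthresh           # 3 dup ACKs: resume congestion avoidance

cwnd, ss = 1, 8
for _ in range(5):                      # 1 -> 2 -> 4 -> 8 -> 9 -> 10
    cwnd, ss = next_cwnd(cwnd, ss, "ack")
print(cwnd, ss)
```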
Congestion example
Multiplicative decrease
Congestion avoidance strategies
TCP congestion policy summary
QUALITY OF SERVICE (QoS)
The main focus of congestion control and QoS is traffic: in congestion control we try to avoid traffic congestion; in quality of service we try to create an appropriate environment for the traffic. QoS is an internetworking issue that has been discussed more than defined; we can informally define quality of service as something a flow seeks to attain.
Flow Characteristics
Flow Classes
Flow characteristics
Reliability: a vital characteristic that a flow needs. Losing reliability means losing a packet or an ACK, which entails retransmission. Some applications need reliability more than others: e.g. file transfer and Internet access require reliable transmission more than audio conferencing does. Delay: the degree of tolerance for delayed packets. Audio conferencing needs minimum delay, while delay in file transfer is less crucial. Jitter: the variation in delay for packets belonging to the same flow. High jitter means the differences between delays are large; low jitter means low variation in delay between packets. Bandwidth: different applications need different bandwidths. Video transmission needs millions of bits per second to refresh the screen, while sending a file may need very little bandwidth.
Techniques to improve QoS
There are four common methods: scheduling, traffic shaping, admission control, and resource reservation.
Methods to improve QoS
Scheduling: packets from different flows arrive at a router for processing. A good scheduler treats the different flows in a fair and appropriate manner to improve QoS. Scheduling methods include FIFO queuing, priority queuing, and weighted fair queuing. Traffic shaping: a mechanism to control the amount and rate of the traffic sent to the network. Traffic shaping techniques include the leaky bucket and the token bucket; the two can be combined to credit an idle host while regulating the traffic at the same time. Resource reservation: QoS is improved if resources such as buffers and bandwidth are reserved beforehand (Integrated Services). Admission control: the mechanism used by a router or switch to accept or reject a flow based on its flow specification.
Scheduling: First In, First Out (FIFO) queuing
In FIFO queuing, packets wait in a buffer (queue) until the node is ready to process them. If the average arrival rate is higher than the average processing rate, the queue fills up and newly arriving packets are discarded (just like queuing for a bus).
Scheduling: Priority queuing
In priority queuing, packets are first assigned a priority class, and each priority class has its own queue. The packets in the highest-priority queue are processed first, while the packets in the lowest-priority queue are processed last; the scheduler switches to a lower queue only when the higher queue is empty. Priority queuing gives better QoS because higher-priority traffic, such as multimedia, can reach the destination with less delay. A potential drawback is starvation: a continuous flow of high-priority traffic means the packets in the lower-priority queues are never processed, which is less fair to lower-priority traffic.
Scheduling: Weighted fair queuing
In weighted fair queuing, packets are still assigned to different classes and queues, but the queues are weighted based on priority: higher priority means a higher weight. The system processes the packets in each queue in a round-robin fashion, with the number of packets taken from each queue based on its weight. If the system does not impose priority, all queues have equal weights. This is a much fairer form of queuing.
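The round-robin-by-weight idea can be sketched as follows; the queue contents and weights are made up for illustration:

```python
from collections import deque

def weighted_fair_schedule(queues, weights, rounds):
    """Serve each queue in round-robin order, taking up to its weight
    per round - a sketch of weighted fair queuing."""
    qs = [deque(q) for q in queues]
    served = []
    for _ in range(rounds):
        for q, w in zip(qs, weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
    return served

# class A has twice the weight of class B:
order = weighted_fair_schedule([["A1", "A2", "A3"], ["B1", "B2", "B3"]],
                               weights=[2, 1], rounds=2)
print(order)
```

With equal weights the same function degrades to plain round-robin, matching the "no priority" case above.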
Traffic Shaping: Leaky bucket
A bucket with a small hole leaks at a constant rate as long as there is water in the bucket: the input rate can vary, but the output rate remains constant. To smooth out bursty traffic in networking, bursty chunks are likewise stored in a bucket and sent out at an average rate. Without the leaky bucket, the starting burst could have hurt the network by causing congestion. In the example, bursty data of 12 Mbps for 2 s plus 2 Mbps for 3 s is averaged out to 3 Mbps for 10 s.
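A tick-by-tick simulation reproduces the slide's numbers, treating one tick as one second and units as Mbps:

```python
def leaky_bucket_output(arrivals, rate):
    """Leaky bucket: whatever bursts arrive, at most `rate` units leave
    per tick; the remainder waits in the bucket (no drops modelled)."""
    level, out = 0, []
    for a in arrivals:
        level += a                 # burst pours into the bucket
        sent = min(level, rate)    # the hole leaks at a constant rate
        level -= sent
        out.append(sent)
    return out

# 12 Mbps for 2 s, then 2 Mbps for 3 s, then idle; drained at 3 Mbps:
print(leaky_bucket_output([12, 12, 2, 2, 2] + [0] * 5, rate=3))
```

The total of 30 Mbit (12×2 + 2×3) leaves the bucket as a smooth 3 Mbps for 10 s, exactly the averaging described above.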
Traffic Shaping: Token bucket
The leaky bucket is very restrictive: it gives no credit to an idle host. For example, if a host does not send for a while, its bucket becomes empty; if the host then has bursty data, the leaky bucket still allows only the average rate, and the time the host was idle is not taken into account. The token bucket instead allows an idle host to accumulate credit for the future in the form of tokens. Tokens are collected during idle time and can be spent later to send bursts of packets, as long as the bucket allows, so the token bucket permits bursty traffic at a regulated maximum rate.
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate; it may drop packets if the bucket is full. The token bucket allows bursty traffic at a regulated maximum rate.
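For contrast, a token bucket sketch, where an idle host banks tokens it can spend on a later burst; the rate and capacity values are illustrative:

```python
def token_bucket(arrivals, rate, capacity):
    """Token bucket: `rate` tokens accumulate per tick, capped at
    `capacity`; sending one unit of data spends one token, so an idle
    host earns credit but can never burst beyond its saved tokens."""
    tokens, sent = 0, []
    for a in arrivals:
        tokens = min(tokens + rate, capacity)   # earn credit, up to the cap
        s = min(a, tokens)                      # burst limited by the bank
        tokens -= s
        sent.append(s)
    return sent

# host idle for 3 ticks, then bursts 10 units; rate 2, capacity 8:
print(token_bucket([0, 0, 0, 10], rate=2, capacity=8))
```

The idle ticks bank 8 tokens (capped by the bucket capacity), so 8 of the 10 burst units go out at once, something the leaky bucket would never allow.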