
1 Elements of Transport Protocols

2 Transport protocols resemble the data link protocols
The transport service is implemented by a transport protocol used between the two transport entities. Transport protocols resemble data link protocols: both have to deal with error control, sequencing, and flow control, among other issues. The differences are due to major dissimilarities between the environments in which the two protocols operate.

3 Elements of Transport Protocols
1. Addressing
2. Connection establishment
3. Connection release
4. Flow control and buffering
5. Multiplexing
6. Crash recovery

4 1. Addressing
When an application (e.g., a user) process wishes to set up a connection to a remote application process, it must specify which one to connect to. The method normally used is to define transport addresses to which processes can listen for connection requests. In the Internet, these end points are called ports. We will use the generic term TSAP (Transport Service Access Point). The analogous end points in the network layer (i.e., network layer addresses) are called NSAPs. IP addresses are examples of NSAPs.
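To make the TSAP/NSAP idea concrete, here is a minimal Python sketch (not part of the original slides) in which a server process attaches to a TSAP and a client names the remote end point by its (NSAP, TSAP) pair; the loopback address and port 6000 are arbitrary illustrative choices.

```python
import socket
import threading

ready = threading.Event()

def server():
    # The server process attaches to a TSAP (port 6000, an arbitrary
    # illustrative choice) and listens for connection requests.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 6000))
        s.listen()
        ready.set()
        conn, addr = s.accept()     # a connection request arrived
        conn.close()

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client names the remote end point as (NSAP, TSAP) = (IP address, port).
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", 6000))
```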

5 2.Establishing a connection
It would seem sufficient for one transport entity to just send a CONNECTION REQUEST TPDU to the destination and wait for a CONNECTION ACCEPTED reply. The problem occurs when the network can lose, store, and duplicate packets, which causes serious complications. The crux of the problem is the existence of delayed duplicates. It can be attacked in various ways:
Throw-away transport addresses. Each time a transport address is needed, a new one is generated. When a connection is released, the address is discarded and never used again.
Connection identifiers. Give each connection a connection identifier. After each connection is released, each transport entity updates a table listing obsolete connections. Whenever a connection request comes in, it is checked against the table to see if it belongs to a previously released connection. This scheme has a basic flaw: it requires each transport entity to maintain a certain amount of history information indefinitely. If a machine crashes and loses its memory, it will no longer know which connection identifiers have already been used.

6 Packet lifetime can be restricted to a known maximum
Rather than allowing packets to live forever within the subnet, we devise a mechanism to kill off aged packets that are still hobbling about. If no packet lives longer than some known time, the problem becomes somewhat more manageable. Packet lifetime can be restricted to a known maximum using one of the following techniques:
Restricted subnet design.
Putting a hop counter in each packet (a sketch of this technique follows the list).
Timestamping each packet.
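Below is a small, hypothetical sketch of the hop-counter technique; the field names and the hop limit of 3 are illustrative, not taken from any real protocol.

```python
# Sketch of the hop-counter technique: each router decrements the counter
# and discards the packet when it reaches zero, bounding packet lifetime.
def forward(packet):
    packet["hops_left"] -= 1
    if packet["hops_left"] <= 0:
        return None            # aged packet is killed off
    return packet              # otherwise forwarded to the next router

pkt = {"data": b"hello", "hops_left": 3}
for router in range(5):
    pkt = forward(pkt)
    if pkt is None:
        print("packet discarded at router", router + 1)
        break
```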

7 A three-way handshake protocol is used
This protocol does not require both sides to begin sending with the same sequence number:
Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST TPDU containing it to host 2.
Host 2 replies with an ACK TPDU acknowledging x and announcing its own initial sequence number, y.
Finally, host 1 acknowledges host 2's choice of an initial sequence number in the first data TPDU that it sends.
Once both transport entities have agreed on the initial sequence numbers, any sliding window protocol can be used for data flow control.
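The toy sketch below just walks through the three messages with made-up dictionaries; it mirrors the text above and is not a real TCP implementation.

```python
import random

x = random.randrange(2**32)                   # host 1 picks its initial sequence number
cr = {"type": "CR", "seq": x}                 # CONNECTION REQUEST (seq = x)

y = random.randrange(2**32)                   # host 2 picks its own initial number
ack = {"type": "ACK", "seq": y, "ack": x}     # ACK acknowledges x, announces y

data = {"type": "DATA", "seq": x, "ack": y}   # first data TPDU acknowledges y

assert ack["ack"] == cr["seq"] and data["ack"] == ack["seq"]
```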

8 Connection Establishment
Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST. (a) Normal operation. (b) Old CONNECTION REQUEST appearing out of nowhere. (c) Duplicate CONNECTION REQUEST and duplicate ACK.

9 Fig. (b) shows how the three-way handshake works in the presence of delayed duplicate control TPDUs.
The first TPDU is a delayed duplicate CONNECTION REQUEST from an old connection. This TPDU arrives at host 2 without host 1's knowledge. Host 2 reacts to this TPDU by sending host 1 an ACK TPDU, in effect asking for verification that host 1 was indeed trying to set up a new connection. When host 1 rejects host 2's ACK, host 2 realizes that it was tricked by a delayed duplicate and abandons the connection. In this way, a delayed duplicate does no damage. The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating around in the subnet. This case is shown in Fig. (c).

10 In Fig. (c), host 2 gets a delayed CONNECTION REQUEST and replies to it.
Host 2 has proposed using y as the initial sequence number for host 2 to host 1 traffic, knowing full well that no TPDUs containing sequence number y or acknowledgements to y are still in existence. When the second delayed TPDU arrives at host 2, the fact that z has been acknowledged rather than y tells host 2 that this, too, is an old duplicate.

11 3. Releasing a connection
Connection release comes in two types:
Asymmetric release. Like the telephone system: when one party hangs up, the connection is broken.
Symmetric release. Treats the connection as two separate unidirectional connections and requires each one to be released separately.

12 Asymmetric release: abrupt disconnection with loss of data.

13 After the connection is established, host 1 sends a TPDU that arrives properly at host 2. Then host 1 sends another TPDU. Unfortunately, host 2 issues a DISCONNECT before the second TPDU arrives. The result is that the connection is released and the data are lost.

14 Symmetric Release
A more sophisticated release protocol is needed to avoid data loss. One way is to use symmetric release, in which each direction is released independently of the other one. Here, a host can continue to receive data even after it has sent a DISCONNECT TPDU. Symmetric release does the job when each process has a fixed amount of data to send and clearly knows when it has sent it. One can envision a protocol in which host 1 says: ''I am done. Are you done too?'' If host 2 responds: ''I am done too. Goodbye,'' the connection can be safely released.

15 The Two-Army Problem
Imagine that a white army is encamped in a valley, as shown in the figure. On both of the surrounding hillsides are blue armies. The white army is larger than either of the blue armies alone, but together the blue armies are larger than the white army. If either blue army attacks by itself, it will be defeated, but if the two blue armies attack simultaneously, they will be victorious.

16 The two-army problem.

17 The blue armies want to synchronize their attacks.
However, their only communication medium is to send messengers on foot down into the valley, where they might be captured and the message lost (i.e., they have to use an unreliable communication channel). Does a protocol exist that allows the blue armies to win?

18 Suppose that the commander of blue army #1 sends a message reading: ''I propose we attack at dawn on March 29. How about it?'' Now suppose that the message arrives, the commander of blue army #2 agrees, and his reply gets safely back to blue army #1. Will the attack happen? Probably not, because commander #2 does not know if his reply got through. If it did not, blue army #1 will not attack, so it would be foolish for him to charge into battle.

19 Now improve the protocol by making it a three-way handshake.
The initiator of the original proposal must acknowledge the response. Assuming no messages are lost, blue army #2 will get the acknowledgement, but the commander of blue army #1 will now hesitate. After all, he does not know if his acknowledgement got through, and if it did not, he knows that blue army #2 will not attack. We could make a four-way handshake protocol, but that does not help either.

20 To see the relevance of the two-army problem to releasing connections, just substitute ''disconnect'' for ''attack.'' If neither side is prepared to disconnect until it is convinced that the other side is prepared to disconnect too, the disconnection will never happen.

21 Four protocol scenarios for releasing a connection
Four protocol scenarios for releasing a connection. (a) Normal case of a three-way handshake. (b) Final ACK lost.

22 (c) Response lost. (d) Response lost and subsequent DRs lost.

23 This situation results in a half-open connection.
While this protocol usually works, in theory it can fail if the initial DR and all N retransmissions are lost. The sender will give up and release the connection, while the other side knows nothing at all about the attempts to disconnect and is still fully active. This situation results in a half-open connection. We could avoid this problem by not allowing the sender to give up after N retries, but forcing it to go on forever until it gets a response. One way to kill off half-open connections is to have a rule saying that if no TPDUs have arrived for a certain number of seconds, the connection is automatically disconnected.
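A minimal sketch of the two rules just described, assuming placeholder callbacks for sending a DR and waiting for its acknowledgement; the values of N, the timeout, and the idle limit are illustrative only.

```python
import time

N = 3                 # DR retransmission attempts (illustrative)
TIMEOUT = 1.0         # seconds to wait for an ACK of the DR
IDLE_LIMIT = 60.0     # tear down after this much silence

def release(send_dr, wait_for_ack):
    # Give up after N lost DR retransmissions.
    for attempt in range(N):
        send_dr()
        if wait_for_ack(TIMEOUT):
            return "released cleanly"
    return "gave up; peer may be left with a half-open connection"

def check_idle(last_tpdu_time):
    # Receiver side of the rule: kill connections that have been silent too long.
    return (time.time() - last_tpdu_time) > IDLE_LIMIT

# With callbacks that never see an ACK, the sender eventually gives up:
print(release(lambda: None, lambda timeout: False))
```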

24 4. Flow Control and Buffering
Now consider how connections are managed while they are in use, starting with flow control. In some ways the flow control problem in the transport layer is the same as in the data link layer, but in other ways it is different. The basic similarity is that in both layers a sliding window or other scheme is needed on each connection to keep a fast transmitter from overrunning a slow receiver. The main difference is that a router usually has relatively few lines, whereas a host may have numerous connections. This difference makes it impractical to implement the data link buffering strategy in the transport layer.

25 With reliable network service, other trade-offs become possible.
If the network service is unreliable, the sender must buffer all TPDUs sent, just as in the data link layer. With reliable network service, other trade-offs become possible. In particular, if the sender knows that the receiver always has buffer space, it need not retain copies of the TPDUs it sends. If the receiver cannot guarantee that every incoming TPDU will be accepted, the sender will have to buffer anyway. In the latter case, the sender cannot trust the network layer's acknowledgement, because the acknowledgement means only that the TPDU arrived, not that it was accepted.

26 Even if the receiver has agreed to do the buffering, there still remains the question of the buffer size. If most TPDUs are nearly the same size, it is natural to organize the buffers as a pool of identically sized buffers, with one TPDU per buffer, as in Fig. (a). If there is wide variation in TPDU size, a pool of fixed-sized buffers presents problems. If the buffer size is chosen equal to the largest possible TPDU, space will be wasted whenever a short TPDU arrives. If the buffer size is chosen less than the maximum TPDU size, multiple buffers will be needed for long TPDUs, with the attendant complexity.
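As a rough illustration of the fixed-size pool of Fig. (a), the sketch below uses a hypothetical BufferPool class with an arbitrary 1024-byte buffer size; a short TPDU occupies a whole buffer and the rest of the space is wasted.

```python
BUF_SIZE = 1024                                 # illustrative fixed buffer size

class BufferPool:
    def __init__(self, count):
        self.free = [bytearray(BUF_SIZE) for _ in range(count)]

    def acquire(self, tpdu):
        if not self.free or len(tpdu) > BUF_SIZE:
            return None                         # no buffer, or TPDU too big to fit
        buf = self.free.pop()
        buf[:len(tpdu)] = tpdu                  # rest of the buffer is wasted
        return buf

    def release(self, buf):
        self.free.append(buf)

pool = BufferPool(count=8)
b = pool.acquire(b"x" * 100)                    # 100 bytes used, 924 bytes wasted
```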

27 (a) Chained fixed-size buffers. (b) Chained variable-sized buffers
(a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular buffer per connection.

28 Another approach to the buffer size problem is to use variable-sized buffers, as in Fig. (b).
The advantage here is better memory utilization, at the price of more complicated buffer management. A third possibility is to dedicate a single large circular buffer per connection, as in Fig. (c). This system also makes good use of memory, provided that all connections are heavily loaded, but is poor if some connections are lightly loaded.

29 5. Multiplexing
In the transport layer the need for multiplexing can arise in a number of ways. For example, if only one network address is available on a host, all transport connections on that machine have to use it. When a TPDU comes in, some way is needed to tell which process to give it to. This situation, called upward multiplexing, is shown in Fig. (a). In this figure, four distinct transport connections all use the same network connection (e.g., IP address) to the remote host. If a user needs more bandwidth than one virtual circuit can provide, a way out is to open multiple network connections and distribute the traffic among them on a round-robin basis, as indicated in Fig. (b). This modus operandi is called downward multiplexing.
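A minimal sketch of upward multiplexing as a port-to-process lookup; the port numbers and process names are invented for illustration.

```python
# Several transport connections share one network address; an incoming
# TPDU is handed to the right process by its destination port.
listeners = {
    5001: "process A",
    5002: "process B",
    5003: "process C",
    5004: "process D",
}

def deliver(tpdu):
    # tpdu = (destination_port, payload)
    port, payload = tpdu
    process = listeners.get(port)
    if process is None:
        return "no one listening; TPDU discarded"
    return f"{payload!r} delivered to {process}"

print(deliver((5002, b"hello")))
```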

30 (a) Upward multiplexing. (b) Downward multiplexing.

31 THE INTERNET TRANSPORT PROTOCOLS
(TCP and UDP)

32 The Internet has two main protocols in the transport layer:
a connectionless protocol and a connection-oriented protocol. The connectionless protocol is UDP; the connection-oriented protocol is TCP. UDP is basically just IP with a short header added.

33 Introduction to TCP
TCP (Transmission Control Protocol) was designed to provide a reliable end-to-end byte stream over an unreliable internetwork. TCP was designed to dynamically adapt to properties of the internetwork and to be robust in the face of many kinds of failures. Each machine supporting TCP has a TCP transport entity, either a library procedure, a user process, or part of the kernel. The IP layer gives no guarantee that datagrams will be delivered properly, so it is up to TCP to time out and retransmit them as need be. Datagrams that do arrive may do so in the wrong order; it is also up to TCP to reassemble them into messages in the proper sequence. TCP must furnish the reliability that most users want and that IP does not provide.

34 The TCP Service Model
TCP service is obtained by both the sender and receiver creating end points, called sockets. Each socket has a socket number (address) consisting of the IP address of the host and a 16-bit number local to that host, called a port. A port is the TCP name for a TSAP. For TCP service to be obtained, a connection must be explicitly established between a socket on the sending machine and a socket on the receiving machine. A socket may be used for multiple connections at the same time; two or more connections may terminate at the same socket.

35 All TCP connections are full duplex and point-to-point.
Port numbers below 256 are called well-known ports and are reserved for standard services. For example, any process wishing to establish a connection to a host to transfer a file using FTP can connect to the destination host's port 21 to contact its FTP daemon. All TCP connections are full duplex and point-to-point. Full duplex means that traffic can go in both directions at the same time. Point-to-point means that each connection has exactly two end points. TCP does not support multicasting or broadcasting.

36 Some assigned ports
Port   Protocol   Use
21     FTP        File transfer
23     Telnet     Remote login
25     SMTP       E-mail
69     TFTP       Trivial File Transfer Protocol
79     Finger     Lookup info about a user
80     HTTP       World Wide Web
110    POP-3      Remote access
119    NNTP       USENET news

37 A TCP connection is a byte stream, not a message stream.
Message boundaries are not preserved end to end. For example, if the sending process does four 512-byte writes to a TCP stream, these data may be delivered to the receiving process as four 512-byte chunks, two 1024-byte chunks, or one 2048-byte chunk. There is no way for the receiver to detect the units in which the data were written.
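The hedged demonstration below uses a connected stream socket pair to show that write boundaries are not preserved; the chunk sizes actually observed depend on timing and the local stack, so the output may vary.

```python
import socket

sender, receiver = socket.socketpair()     # connected stream pair

for _ in range(4):
    sender.sendall(b"A" * 512)             # four 512-byte writes
sender.close()

chunks = []
while True:
    data = receiver.recv(4096)
    if not data:
        break
    chunks.append(len(data))
receiver.close()

print("received chunk sizes:", chunks)     # e.g. [2048] or [512, 1536], ...
```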

38 (a) Four 512-byte segments sent as separate IP datagrams.
(b) The 2048 bytes of data delivered to the application in a single READ call.

39 The TCP Protocol
Every byte on a TCP connection has its own 32-bit sequence number. Separate 32-bit sequence numbers are used for acknowledgements and for the window mechanism. The sending and receiving TCP entities exchange data in the form of segments. A TCP segment consists of a fixed 20-byte header (plus an optional part) followed by zero or more data bytes. The TCP software decides how big segments should be. Two limits restrict the segment size. First, each segment, including the TCP header, must fit in the 65,515-byte IP payload. Second, each network has a Maximum Transfer Unit, or MTU, and each segment must fit in the MTU. In practice, the MTU is generally 1500 bytes (the Ethernet payload size) and thus defines the upper bound on segment size. A segment that is too large for a network it must transit can be broken into multiple segments by a router.
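The arithmetic behind these limits, written out as a small sketch (the 1500-byte MTU and 20-byte headers assume no IP or TCP options):

```python
# Segment-size arithmetic from the text above (header sizes assume no options).
MTU = 1500                      # typical Ethernet payload size
IP_HEADER = 20                  # bytes
TCP_HEADER = 20                 # bytes

mss = MTU - IP_HEADER - TCP_HEADER
print(mss)                      # 1460 data bytes per segment on a 1500-byte MTU

ip_payload_limit = 65535 - IP_HEADER             # the 65,515-byte IP payload
max_tcp_data = ip_payload_limit - TCP_HEADER     # 65,495 data bytes at most
print(ip_payload_limit, max_tcp_data)
```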

40 When a sender transmits a segment, it also starts a timer.
The basic protocol used by TCP entities is the sliding window protocol. When a sender transmits a segment, it also starts a timer. When the segment arrives at the destination, the receiving TCP entity sends back a segment (with data if any exist, otherwise without data) bearing an acknowledgement number equal to the next sequence number it expects to receive. If the sender's timer goes off before the acknowledgement is received, the sender transmits the segment again.
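A schematic sketch of this timer-and-retransmit logic, with placeholder transmit and wait_for_ack callbacks and an illustrative retransmission timeout; real TCP computes its timeout adaptively.

```python
RTO = 1.0   # retransmission timeout in seconds (illustrative)

def send_reliably(segment, transmit, wait_for_ack, max_tries=5):
    # Send the segment, start the timer, and retransmit on timeout.
    for _ in range(max_tries):
        transmit(segment)
        if wait_for_ack(timeout=RTO):     # ACK for the next expected byte arrived
            return True
    return False                          # give up after repeated timeouts
```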

41 The TCP Segment Header
Every segment begins with a fixed-format 20-byte header. The fixed header may be followed by header options. After the options, if any, up to 65,535 − 20 − 20 = 65,495 data bytes may follow, where the first 20 refers to the IP header and the second to the TCP header. Segments without any data are legal and are commonly used for acknowledgements and control messages.

42 TCP Header.

43 Next comes a 6-bit field that is not used.
The Source port and Destination port fields identify the local end points of the connection. A port plus its host's IP address forms a 48-bit unique end point (TSAP). The Sequence number and Acknowledgement number fields perform their usual functions. The Acknowledgement number specifies the next byte expected, not the last byte correctly received. Both are 32 bits long. The TCP header length tells how many 32-bit words are contained in the TCP header. This information is needed because the Options field is of variable length, so the header is, too. Next comes a 6-bit field that is not used.

44 URG is set to 1 if the Urgent pointer is in use.
Now come six 1-bit flags. URG is set to 1 if the Urgent pointer is in use. The Urgent pointer is used to indicate a byte offset from the current sequence number at which urgent data are to be found. The ACK bit is set to 1 to indicate that the Acknowledgement number is valid. If ACK is 0, the segment does not contain an acknowledgement, so the Acknowledgement number field is ignored. The PSH bit indicates PUSHed data. Applications can use the PUSH flag, which tells TCP not to delay the transmission. The RST bit is used to reset a connection that has become confused due to a host crash or some other reason. It is also used to reject an invalid segment or refuse an attempt to open a connection. If you get a segment with the RST bit on, you have a problem on your hands.

45 The SYN bit is used to establish connections.
The connection request has SYN = 1 and ACK = 0 to indicate that the piggyback acknowledgement field is not in use. The connection reply bears an acknowledgement, so it has SYN = 1 and ACK = 1. In essence the SYN bit is used to denote CONNECTION REQUEST and CONNECTION ACCEPTED, with the ACK bit used to distinguish between those two possibilities. The FIN bit is used to release a connection. It specifies that the sender has no more data to transmit. Both SYN and FIN segments have sequence numbers and are thus guaranteed to be processed in the correct order.

46 Flow control in TCP is handled using a variable-sized sliding window.
The Window size field tells how many bytes may be sent starting at the byte acknowledged. A Checksum is also provided for extra reliability. The Options field provides a way to add extra facilities not covered by the regular header. The most important option is the one that allows each host to specify the maximum TCP payload it is willing to accept.
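To tie the header fields together, here is a hedged sketch that unpacks the fixed 20-byte header from raw bytes; the sample segment is fabricated purely for illustration.

```python
import struct

def parse_tcp_header(raw):
    # Fixed 20-byte TCP header, network byte order.
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window, checksum, urg_ptr) = struct.unpack("!HHIIBBHHH", raw[:20])
    header_len = (offset_reserved >> 4) * 4      # header length in 32-bit words, times 4
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack, "header_len": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urg_ptr,
    }

# A fabricated SYN segment from port 1234 to port 80:
sample = struct.pack("!HHIIBBHHH", 1234, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(sample))
```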

47 ATM AAL LAYER PROTOCOLS

48 The AAL layer in ATM is radically different from TCP.
This is mainly because ATM is primarily used for transmitting voice and video streams, in which rapid delivery is more important than accurate delivery. The ATM layer outputs 53-byte cells one after another. It has no error control, no flow control, and no other control. To bridge this gap, ITU defined an end-to-end layer on top of the ATM layer. This layer is called the AAL (ATM Adaptation Layer). The goal of the AAL is to provide useful services to application programs and to shield them from the mechanics of chopping data up into cells at the source and reassembling them at the destination.

49 Technical requirements for classes C and D were similar
When ITU began defining the AAL, it realized that different applications had different requirements, so it organized the service space along three axes:
Real-time service vs. non-real-time service.
Constant bit rate service vs. variable bit rate service.
Connection-oriented service vs. connectionless service.
ITU felt only four of these combinations were of any use and named them classes A, B, C, and D. To handle these four classes of service, ITU defined four protocols, AAL 1 through AAL 4, respectively. The technical requirements for classes C and D turned out to be similar, so AAL 3 and AAL 4 were combined to form AAL 3/4.

50 Original service classes supported by the AAL (now obsolete)

            Class A               Class B               Class C               Class D
Timing      Real time             Real time             None                  None
Bit rate    Constant              Variable              Variable              Variable
Mode        Connection-oriented   Connection-oriented   Connection-oriented   Connectionless

51 Structure of the AAL
The AAL sits on top of the ATM layer, which in turn sits on the physical layer. From top to bottom:
Convergence sublayer (service-specific part)
Convergence sublayer (common part)
Segmentation and reassembly (SAR) sublayer
ATM layer
Physical layer

52 The AAL is divided into two major parts.
The upper part of the AAL is called the Convergence Sublayer (CS). Its job is to provide the interface to the application. It consists of two subparts: one that is common to all applications and an application-specific part. The functions of each of these parts are protocol dependent but can include message framing and error detection. The CS is also responsible for accepting bit streams and breaking them up into units for transmission. Message boundaries are preserved when present.

53 These payloads are then given to the ATM layer for transmission.
The lower part of the AAL is called the SAR (Segmentation And Reassembly) sublayer. It can add headers and trailers to the data units given to it by the CS to form cell payloads. These payloads are then given to the ATM layer for transmission. At the destination the SAR sublayer reassembles the cells into messages. The SAR sublayer is basically concerned with cells, whereas the CS sublayer is concerned with messages. The SAR sublayer also has additional functions for some service classes; it sometimes handles error detection and multiplexing.

54 AAL 1 is the protocol used for transmitting class A traffic, that is,
real-time, constant bit rate, connection-oriented traffic, e.g., uncompressed audio and video. Bits are fed in by the application at a constant rate and must be delivered at the destination at the same constant rate, with a minimum of delay and overhead. The input is a stream of bits, with no message boundaries. For this traffic, error-correcting protocols such as stop-and-wait are not used because the delays introduced by timeouts and retransmissions are unacceptable. Missing cells are reported to the application.

55 AAL 1 uses a Convergence Sublayer and an SAR sublayer.
The Convergence Sublayer (CS):
detects lost and misinserted cells;
smoothes out incoming traffic to provide delivery of cells at a constant rate;
breaks up the input message or stream into 46- or 47-byte units that are given to the SAR sublayer for transmission;
at the other end, extracts these units and reconstructs the original input.
The AAL 1 CS does not have any protocol headers of its own, but the AAL 1 SAR sublayer does.

56 The AAL 1 cell format (48 bytes in total)
Non-P cell: 1-byte SAR header (SN, SNP with even parity) followed by a 47-byte payload.
P cell: 1-byte SAR header (SN, SNP with even parity), a 1-byte pointer, and a 46-byte payload.

57 AAL 1 SAR
The SAR sublayer adds a 1-byte header containing:
a 3-bit cell sequence number (SN), used to detect missing or misinserted cells;
a 3-bit Sequence Number Protection (SNP) field, a checksum that allows correction of single errors and detection of double errors in the sequence number field;
one bit of even parity (P).
P cells are used when message boundaries must be preserved. They carry a 1-byte pointer field giving the offset of the start of the next message; the high-order bit of the pointer is reserved for future use.

58 AAL 2
For pure uncompressed audio/video, or any other data stream in which having a few garbled bits once in a while is not a problem, AAL 1 is adequate. For compressed audio or video, however, the rate can vary strongly in time. For example, many compression schemes transmit a full video frame periodically and then, for several frames, send only the differences between subsequent frames and the last full frame. When the camera is stationary and nothing is moving, the differences between frames are small, but when the camera is panning rapidly, they are large. Also, message boundaries must be preserved so that the start of the next full frame can be recognized, even in the presence of lost cells or bad data.

59 AAL 2: a 1-byte header, a 45-byte payload, and a 2-byte trailer per cell.
For these reasons another protocol, AAL 2, is used. As in AAL 1, the CS sublayer does not have a protocol of its own, but the SAR sublayer does. Its cell format is:
| SN | IT | 45-byte payload | LI | CRC |
The SN (Sequence Number) field is used for numbering cells in order to detect missing or misinserted cells. The IT (Information Type) field is used to indicate whether the cell is the start, middle, or end of a message. The LI (Length Indicator) field tells how big the payload is, in bytes (at most 45). The CRC is a checksum over the entire cell, so errors can be detected. The exact field sizes are not specified in the standard.

60 AAL 3/4
For classes C and D (connection-oriented and connectionless service), ITU combined AAL 3 and AAL 4 to form AAL 3/4. It can operate in two modes: stream or message. In message mode, each call from the application to AAL 3/4 injects one message into the network. The message is delivered as such and its boundaries are preserved. In stream mode, the boundaries are not preserved. AAL 3/4 provides multiplexing: it allows multiple sessions from a single host to travel along the same virtual circuit and be separated at the destination.

61 AAL 3/4 CS message format
The CS adds a header and a trailer around the payload:
CPI (Common Part Indicator) gives the message type and the counting unit for the BA size and Length fields.
Btag and Etag are used to frame messages. These two bytes must be the same and are incremented by 1 on every new message sent.
BA size is used for buffer allocation: it tells the receiver how much buffer space to allocate for the message in advance of its arrival.
The Length field gives the payload length. In message mode, Length must be equal to BA size.
The trailer also contains one unused byte.

62 AAL 3/4 SAR format
After the CS has constructed the message and added its header and trailer, it passes the message to the SAR sublayer, which chops the message up into 44-byte chunks. The SAR sublayer inserts each 44-byte chunk into the payload of a cell with the following format:
| ST (2 bits) | SN (4 bits) | MID (10 bits) | 44-byte payload | LI (6 bits) | CRC (10 bits) |
LI ranges from 1 to 44; the ST codes are listed on the next slide.

63 ST (Segment Type) is used for message framing:
00 – middle of message (continuation of message, COM)
01 – end of message (EOM)
10 – beginning of message (BOM)
11 – single segment message (SSM)
SN (Sequence Number) is used for detecting missing and misinserted cells. The MID (Multiplexing Identification) field is used to keep track of which cell belongs to which session. The trailer consists of LI (Length Indicator), which indicates the payload length, and the CRC, the cell checksum.
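A small sketch of this segmentation step: it chops a message into 44-byte chunks and tags each with the appropriate ST code; SN wrap-around is simplified and LI/CRC encoding is omitted.

```python
def segment(message, mid):
    # Chop a CS message into 44-byte chunks and tag each with a segment type.
    chunks = [message[i:i + 44] for i in range(0, len(message), 44)]
    cells = []
    for i, chunk in enumerate(chunks):
        if len(chunks) == 1:
            st = "SSM"                      # single segment message (11)
        elif i == 0:
            st = "BOM"                      # beginning of message (10)
        elif i == len(chunks) - 1:
            st = "EOM"                      # end of message (01)
        else:
            st = "COM"                      # continuation of message (00)
        cells.append({"ST": st, "SN": i % 16, "MID": mid,
                      "payload": chunk, "LI": len(chunk)})
    return cells

cells = segment(b"x" * 100, mid=7)          # 100 bytes -> BOM, COM, EOM
print([c["ST"] for c in cells])
```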

64 AAL 5
The AAL 1 through AAL 3/4 protocols were largely designed by the telecommunications industry and standardized by ITU without a lot of input from the computer industry. For the computer industry a new protocol was invented, called SEAL (Simple Efficient Adaptation Layer); it was later renamed AAL 5. AAL 5 offers both reliable and unreliable services. It supports both unicast and multicast; for multicast, guaranteed delivery is not provided.

65 Like AAL 3/4, AAL 5 supports both message mode and stream mode.
In message mode an application can pass a datagram of 1 to 65,535 bytes to the AAL layer and have it delivered to the destination, either on a guaranteed or a best-effort basis. Upon arrival in the CS, a message is padded out and a trailer is added. The amount of padding (0-47 bytes) is chosen to make the entire message a multiple of 48 bytes. AAL 5 does not have a CS header, just an 8-byte trailer.

66 AAL 5 Convergence Sublayer format
| Payload (1-65,535 bytes) | Pad (0-47 bytes) | UU (1 byte) | Reserved (1 byte) | Length (2 bytes) | CRC (4 bytes) |

67 AAL 5 CS format
UU (User to User): this field is not used by the AAL itself. It is available to a higher layer for its own purposes, such as sequencing or multiplexing.
Length: tells how long the true payload is, in bytes (without the padding). A value of 0 is used to abort the current message in midstream.
CRC: a standard 32-bit checksum.
The message is transmitted by passing it to the SAR sublayer, which does not add any headers or trailers. Instead, it breaks the message into 48-byte units and passes each of these to the ATM layer for transmission. The main advantage of AAL 5 over AAL 3/4 is its much greater efficiency.
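A hedged sketch of the AAL 5 CS step: pad the message so that payload plus the 8-byte trailer is a multiple of 48 bytes, then append UU, a reserved byte, Length, and a 32-bit CRC. zlib's CRC-32 is used here only as a stand-in for the AAL 5 polynomial conventions.

```python
import struct, zlib

def aal5_cs_pdu(message, uu=0):
    # Pad so that (message + pad + 8-byte trailer) is a multiple of 48 bytes.
    pad_len = (-(len(message) + 8)) % 48          # 0-47 bytes of padding
    padded = message + b"\x00" * pad_len
    # Trailer: UU (1), reserved (1), Length (2), CRC-32 (4) -- 8 bytes total.
    trailer_wo_crc = struct.pack("!BBH", uu, 0, len(message))
    crc = zlib.crc32(padded + trailer_wo_crc) & 0xFFFFFFFF
    pdu = padded + trailer_wo_crc + struct.pack("!I", crc)
    assert len(pdu) % 48 == 0                     # ready to be cut into 48-byte cells
    return pdu

pdu = aal5_cs_pdu(b"hello world")
cells = [pdu[i:i + 48] for i in range(0, len(pdu), 48)]   # SAR: plain 48-byte units
```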

68 AAL 5 has a slightly larger trailer per message (8 bytes).
The lack of a sequence number is compensated for by the longer checksum, which can detect lost, misinserted, or missing cells without using sequence numbers. Within the Internet community, it is expected that the normal way of interfacing to ATM networks will be to transport IP packets in the AAL 5 payload field.

