Chapter 6 The Transport Layer The Transport Service & Elements of Transport Protocols.


1 Chapter 6 The Transport Layer The Transport Service & Elements of Transport Protocols

2 Transport Service
Services Provided to the Upper Layers
Transport Service Primitives
Berkeley Sockets
Example of Socket Programming: Internet File Server

3 Services Provided to the Upper Layers
Fig.: The network, transport, and application layers.
Transport layer responsibilities: connection establishment, data transfer, and connection release. Make the service more reliable than the underlying network layer provides.

4 Transport Service Primitives (1)
The primitives for a simple transport service

5 Transport Service Primitives (2)
Nesting of segments, packets, and frames.

6 Berkeley Sockets (1) A state diagram for a simple connection management scheme. Transitions labeled in italics are caused by packet arrivals. The solid lines show the client’s state sequence. The dashed lines show the server’s state sequence.

7 Berkeley Sockets (2) The socket primitives for TCP
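The primitive sequence on this slide can be exercised end-to-end on the loopback interface. A minimal sketch in Python (the echo behaviour, message, and OS-chosen port are illustrative, not part of the slide):

```python
import queue
import socket
import threading

def echo_server(port_q):
    # SOCKET, BIND, LISTEN: create the endpoint and wait for one connection.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port_q.put(srv.getsockname()[1])    # tell the client which port was chosen
    conn, _ = srv.accept()              # ACCEPT blocks until a client connects
    data = conn.recv(1024)              # RECEIVE
    conn.sendall(data)                  # SEND the same bytes back
    conn.close()                        # CLOSE releases the connection
    srv.close()

def echo_client(port, msg):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))    # CONNECT performs the active open
    cli.sendall(msg)
    reply = cli.recv(1024)
    cli.close()
    return reply

port_q = queue.Queue()
t = threading.Thread(target=echo_server, args=(port_q,))
t.start()
reply = echo_client(port_q.get(), b"hello transport layer")
t.join()
```

The client's connect() triggers the three-way handshake against the server blocked in accept().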

8 Elements of Transport Protocols (1)
Addressing
Connection establishment
Connection release
Error control and flow control
Multiplexing
Crash recovery

9 Similarity between data link and transport layer
Connection establishment Connection release Error control and flow control

10 Elements of Transport Protocols (2)
Environment of the data link layer. Environment of the transport layer.

11 Addressing (1) TSAPs, NSAPs, and transport connections

12 Addressing (2) How a user process in host 1 establishes a connection with a mail server in host 2 via a process server.

13 Connection Establishment (1)
Techniques to enable the receiver to distinguish retransmitted packets from packets delivered late: Throwaway transport addresses. Assign each connection a different identifier: <peer transport entity, connection identifier>. Limitation: nodes have to maintain history information indefinitely.

14 Connection Establishment (2)
Techniques for restricting packet lifetime: Restricted network design. Putting a hop counter in each packet. Timestamping each packet.

15 Connection Establishment (3)
3 protocol scenarios for establishing a connection using a 3-way handshake. CR denotes CONNECTION REQUEST. (1) Normal operation.

16 Connection Establishment (4)
3 protocol scenarios for establishing a connection using a 3-way handshake. CR denotes CONNECTION REQUEST. (2) Old duplicate CONNECTION REQUEST appearing out of nowhere.

17 Connection Establishment (5)
3 protocol scenarios for establishing a connection using a 3-way handshake. CR denotes CONNECTION REQUEST. (3) Duplicate CONNECTION REQUEST and duplicate ACK.

18 Connection Release (1) Abrupt disconnection with loss of data – release may be one-way (asymmetric) or two-way (symmetric)

19 Connection Release (2) The two-army problem – a synchronization attempt that would go on indefinitely

20 Connection Release (3) Four protocol scenarios for releasing a connection. (1) Normal case of three-way handshake

21 Connection Release (4) Four protocol scenarios for releasing a connection. (2) Final ACK lost.

22 Connection Release (5) Four protocol scenarios for releasing a connection. (3) Response lost.

23 Connection Release (6) Four protocol scenarios for releasing a connection. (4) Response lost and subsequent DRs lost.

24 Error Control and Flow Control (1)
(a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular buffer per connection.

25 Error Control and Flow Control (2)
For low-bandwidth bursty traffic, it is better not to allocate buffers at connection establishment. Dynamic buffer allocation is the better strategy: decouple buffering from the sliding window protocol, carrying acknowledgements and buffer allocations separately.

26 Error Control and Flow Control (3)
Dynamic buffer allocation. The arrows show the direction of transmission. An ellipsis (...) indicates a lost segment

27 Error Control and Flow Control (4)
Assuming buffer space is infinite, the network's carrying capacity becomes the bottleneck. If adjacent routers can exchange at most x packets/sec and there are k disjoint paths between a pair of hosts, the maximum rate of segment exchange between the hosts is kx segments/sec.

28 (a) Multiplexing. (b) Inverse multiplexing.

29 Crash Recovery Different combinations of client and server strategies

30 Congestion Control Desirable bandwidth allocation
Regulating the sending rate

31 Desirable Bandwidth Allocation (1)
(a) Goodput and (b) Delay as a function of offered load

32 Desirable Bandwidth Allocation (2)
Network performance is best when bandwidth is allocated just up to the point where delay shoots up. The metric used is power: Power = Load / Delay. Power rises initially with load, reaches a maximum, and then falls as delay grows rapidly. The load with the highest power is an efficient load.
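The power curve can be illustrated numerically. This sketch assumes an M/M/1-style delay model, delay = 1 / (1 − load) in normalized units; the delay model is an illustrative assumption, not from the slide:

```python
# Power = load / delay. With the assumed delay curve delay = 1 / (1 - load),
# power simplifies to load * (1 - load), which peaks well below full load.
def power(load):
    delay = 1.0 / (1.0 - load)
    return load / delay

loads = [i / 100 for i in range(1, 100)]      # offered loads 0.01 .. 0.99
efficient_load = max(loads, key=power)        # the load with the highest power
```

Under this model power is maximized at half the saturation load; real networks have different delay curves, but the rise-peak-fall shape is the same.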

33 Desirable Bandwidth Allocation (3)
Notion of max-min fairness: the bandwidth given to one flow cannot be increased without decreasing the bandwidth given to another flow with an equal or smaller allocation. Fig.: Max-min bandwidth allocation for four flows.
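A max-min allocation can be computed by progressive filling. The sketch below handles the single-shared-link case (the function name, capacity, and demands are illustrative assumptions; the slide's figure involves multiple links):

```python
def max_min_allocation(capacity, demands):
    # Progressive filling: flows whose demand fits within the current equal
    # share get exactly their demand; the leftover capacity is then split
    # equally among the still-unsatisfied flows.
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    cap = float(capacity)
    while active:
        share = cap / len(active)
        satisfied = [i for i in active if demands[i] <= share]
        if not satisfied:
            for i in active:            # no small demands left: equal split
                alloc[i] = share
            return alloc
        for i in satisfied:
            alloc[i] = float(demands[i])
            cap -= demands[i]
            active.remove(i)
    return alloc
```

For capacity 10 and demands [2, 8, 8], the small flow gets its full 2 and the other two split the remaining 8 equally.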

34 Desirable Bandwidth Allocation (4)
Convergence: Good congestion control algorithm should rapidly converge to ideal point. It should track ideal operating point over time. Changing bandwidth allocation over time

35 Regulating the Sending Rate (1)
A fast network feeding a low-capacity receiver. A slow network feeding a high-capacity receiver.

36 Regulating the Sending Rate (2)
Signals of some congestion control protocols

37 Regulating the Sending Rate (3)
Additive and multiplicative bandwidth adjustments

38 Regulating the Sending Rate (4)
Additive Increase Multiplicative Decrease (AIMD) control law.

39 Congestion Control in Wireless
Congestion control over a path with a wireless link

40 The Internet Transport Protocols: UDP (User Datagram Protocol)
Introduction to UDP Remote Procedure Call Real-Time Transport

41 Introduction to UDP (1) The UDP header. The source port of an incoming segment is copied to the destination port field of the reply segment. The UDP length field covers the 8-byte header and the data. The optional checksum field adds extra reliability: it checksums the header, the data, and an IP pseudo-header.

42 Introduction to UDP (2) The IPv4 pseudo-header included in the UDP checksum. UDP does not do flow control, congestion control, or retransmission. It demultiplexes among multiple processes using ports and provides optional end-to-end error detection. UDP is useful in client-server situations: the client sends a short request to the server and expects a short reply; on timeout, it retransmits. Use case: sending a host name to a DNS server.
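The request/reply pattern described above can be sketched with UDP sockets on the loopback interface. The toy server, the 1-second timeout, and the payloads are illustrative assumptions, not a real DNS exchange:

```python
import socket
import threading

def name_server(sock):
    # Toy server: answer exactly one request (the reply format is made up).
    data, addr = sock.recvfrom(512)
    sock.sendto(b"reply:" + data, addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
threading.Thread(target=name_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(1.0)                     # short request, short reply, timeout
reply = None
for attempt in range(3):                # on timeout, simply retransmit
    cli.sendto(b"lookup example.com", srv.getsockname())
    try:
        reply, _ = cli.recvfrom(512)
        break
    except socket.timeout:
        continue
cli.close()
```

Note that the retransmission loop is safe only because the lookup is idempotent, which is exactly the property the RPC slides call out.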

43 Remote Procedure Call (RPC)
Steps in making a remote procedure call. The stubs are shaded. Packing the parameters is called marshalling. Example: get_IP_address (host name)

44 Call-by-reference becomes call-by-copy-restore
Doesn't work for complicated data structures (e.g., pointer-linked structures).
Fails if the needed value is not known in advance – e.g., the sizes of the vectors whose inner product is to be computed.
Weakly typed parameters (e.g., printf with a variable mix of parameters) are hard to marshal.
Global variables cannot be shared.
RPC is commonly run over UDP.

45 Operations need to be idempotent (i.e., safe to repeat), like DNS lookups.
If the reply can be larger than the largest possible UDP packet, or multiple requests and replies can overlap, proper sequencing and synchronization are required.

46 Real-Time Transport Protocol (1)
(a) The position of RTP in the protocol stack. (b) Packet nesting.

47 Real-Time Transport Protocol (2)
It is difficult to say which layer RTP is in – it is generic and application independent. Best description: a transport protocol implemented in the application layer. Basic function of RTP: multiplex several real-time data streams onto a single stream of UDP packets. Each packet is given a number one higher than its predecessor. This allows the destination to determine whether any packets are missing and, if so, to interpolate. Another facility: timestamping – timestamps are relative to the start of the stream; this reduces jitter effects and allows multiple streams (e.g., audio and video) to be synchronized.

48 The multimedia application consists of multiple audio, video, text, and possibly other streams.
These are fed into the RTP library, which is in user space along with the application. This library multiplexes the streams and encodes them in RTP packets, which it stuffs into a socket.

49 RTP defines several profiles (e.g., a single audio stream), and for each profile, multiple encoding formats may be allowed. For example, a single audio stream may be encoded as 8-bit PCM samples at 8 kHz using delta encoding, predictive encoding, GSM encoding, MP3 encoding, and so on. Timestamping not only reduces the effects of variation in network delay, but also allows multiple streams to be synchronized with each other.

50 Real-Time Transport Protocol (3)
The RTP header

The CC field tells how many contributing sources are present, from 0 to 15. The M bit is an application-specific marker bit. It can be used to mark the start of a video frame, the start of a word in an audio channel, or something else that the application understands. The Payload type field tells which encoding algorithm has been used (e.g., uncompressed 8-bit audio, MP3, etc.). Since every packet carries this field, the encoding can change during transmission. The Timestamp is produced by the stream's source to note when the first sample in the packet was made. The Synchronization source identifier tells which stream the packet belongs to. It is the method used to multiplex and demultiplex multiple data streams onto a single stream of UDP packets. Finally, the Contributing source identifiers, if any, are used when mixers are present in the studio. In that case, the mixer is the synchronizing source, and the streams being mixed are listed here.

52 RTCP—The Real-time Transport Control Protocol
It is defined along with RTP in RFC 3550 and handles feedback, synchronization, and the user interface. The first function can be used to provide feedback on delay, variation in delay or jitter, bandwidth, congestion, and other network properties to the sources. This information can be used by the encoding process to increase the data rate (and give better quality) when the network is functioning well and to cut back the data rate when the network is congested.

53 An issue with providing feedback is that the RTCP reports are sent to all participants. For a multicast application with a large group, the bandwidth used by RTCP would quickly grow large. RTCP also handles interstream synchronization. The problem is that different streams may use different clocks, with different granularities and different drift rates. RTCP can be used to keep them in sync.

54 Real-Time Transport Protocol (4)
Playout with Buffering and Jitter Control Smoothing the output stream by buffering packets (Playback Point)

55 Real-Time Transport Protocol (5)
(a) High jitter (b) Low jitter

56 TCP: Introduction (1) Introduction to TCP The TCP service model
The TCP protocol The TCP segment header TCP connection establishment TCP connection release

57 TCP: Introduction (2) TCP connection management modeling
TCP sliding window TCP timer management TCP congestion control The future of TCP

58 The TCP Service Model (1)
Fig.: Some assigned ports. The Internet daemon (inetd) attaches itself to multiple ports and waits for incoming connections; on each connection request it forks off the appropriate service.

59 The TCP Service Model (2)
All TCP connections are full duplex and point-to-point. Each connection has exactly two ends. TCP doesn’t support multicasting or broadcasting.

60 The TCP Service Model (3)
TCP provides a byte stream, not a message stream, so message boundaries are not preserved. For example, four 512-byte segments sent as separate IP datagrams may be delivered to the application as 2048 bytes in a single READ call.

61 The TCP Service Model (4)
To force data out, use the PUSH flag. If too many PUSHes accumulate, TCP may collect them and send the data together. URGENT flag – when the user hits Ctrl-C to break off a remote computation, the sending application puts control information in the data stream and hands it to TCP with the URGENT flag set.

62 Urgent data: for example, if an interactive user hits the CTRL-C key to break off a remote computation that has already begun, the sending application can put some control information in the data stream and give it to TCP along with the URGENT flag. This event causes TCP to stop accumulating data and transmit everything it has for that connection immediately. When the urgent data are received at the destination, the receiving application is interrupted (e.g., given a signal in UNIX terms) so it can stop whatever it was doing and read the data stream to find the urgent data. However, while urgent data is potentially useful, it found no compelling application early on and fell into disuse. Its use is now discouraged because of implementation differences, leaving applications to handle their own signaling. Perhaps future transport protocols will provide better signaling.

63 The TCP Protocol: Overview
Every byte on a TCP connection has its own 32-bit sequence number. Separate 32-bit sequence numbers are carried on packets for the sliding window position in one direction and for acknowledgements in the reverse direction. The sending and receiving TCP entities exchange data in the form of segments. Two limits restrict the segment size: each segment, including the TCP header, must fit in the 65,515-byte IP payload (IP packets are at most 65,535 bytes, 20 of which are the IP header), and each link has an MTU (Maximum Transfer Unit).

64 The TCP Segment Header (1)
ACK number = next byte expected.
TCP header length – the header size in 32-bit words.
One-bit flags: CWR/ECE – congestion signaling bits (ECE signals ECN-Echo; CWR signals Congestion Window Reduced). URG – urgent; the Urgent pointer gives the byte offset where the urgent data is found. ACK – acknowledgement number is valid; PSH – pushed data; RST – reset, some problem has occurred. SYN = 1 with ACK = 0 is a connection request; SYN = 1 with ACK = 1 is a connection accepted. FIN – release connection.
Window size = how many bytes may be sent beyond the acknowledged byte; may be zero.

65 The TCP Segment Header (2)
Checksum – covers the TCP header, the data, and a pseudo-header containing the IP addresses. Add all the 16-bit words in one's complement and then take the one's complement of the sum. Options field – extra facilities not covered by the regular header. Example: the MSS (maximum segment size) a host is willing to accept.
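The one's-complement checksum rule can be written in a few lines. A sketch of the arithmetic (the function name is illustrative; padding odd-length input with a zero byte follows RFC 1071-style practice):

```python
def internet_checksum(data: bytes) -> int:
    # Sum all 16-bit words in one's-complement arithmetic (each carry out
    # of the top bit is folded back into the low 16 bits), then take the
    # one's complement of the sum.
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

A useful property: if the computed checksum is placed into the segment, checksumming the whole segment again yields 0, which is how the receiver verifies it.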

66 The TCP Segment Header (3)
Options: Window scale: sender and receiver negotiate a window scale factor. Timestamp: included in every packet – useful in estimating the RTT. PAWS (Protection Against Wrapped Sequence numbers) uses the timestamp to discard old duplicate segments, solving the sequence-number wrap-around problem. SACK (Selective ACK): ranges of sequence numbers received by the receiver.

67 TCP Connection Management Modeling (2)
TCP connection management finite state machine. The heavy solid line is the normal path for a client. The heavy dashed line is the normal path for a server. The light lines are unusual events. Each transition is labeled by the event causing it and the action resulting from it, separated by a slash.

68 TCP Sliding Window (1) Fig: Window management in TCP. When the window is 0, the sender does not send segments, with two exceptions: urgent data (e.g., to kill a remote process), and a window probe – a 1-byte segment that makes the receiver re-announce the next expected byte and window size.

69 Consider a connection to a remote terminal, for example using SSH or telnet, that reacts on every keystroke. In the worst case, whenever a character arrives at the sending TCP entity, TCP creates a 21-byte TCP segment, which it gives to IP to send as a 41-byte IP datagram. At the receiving side, TCP immediately sends a 40-byte acknowledgement (20 bytes of TCP header and 20 bytes of IP header). Later, when the remote terminal has read the byte, TCP sends a window update, moving the window 1 byte to the right. This packet is also 40 bytes. Finally, when the remote terminal has processed the character, it echoes the character for local display using a 41-byte packet. In all, 162 bytes of bandwidth are used and four segments are sent for each character typed. When bandwidth is scarce, this method of doing business is not desirable.

70 Nagle’s Algorithm Interactive editor – sending 1 byte involves 162 bytes (41 to send the character, 40 to ACK it, 40 for the window update, 41 to echo it). Delayed acknowledgements: delay acknowledgements and window updates for up to 500 msec in the hope of acquiring some data on which to hitch a free ride. Nagle’s algorithm – when data come into the sender one byte at a time, just send the first byte and buffer the rest until the outstanding byte is acknowledged. Once acknowledged, send all of the buffered data in one segment. (Can deadlock with delayed acknowledgements.)
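Nagle's rule itself is only a few lines of state. A toy sketch (the class and the in-memory `segments` list are illustrative; real TCP applies this per connection inside the kernel):

```python
class NagleSender:
    """Sketch of Nagle's rule: send the first small piece immediately;
    while it is unacknowledged, coalesce later bytes into one buffer."""

    def __init__(self, send):
        self.send = send            # callback that "transmits" one segment
        self.buffer = b""
        self.outstanding = False    # is a segment awaiting acknowledgement?

    def write(self, data):
        if not self.outstanding:
            self.outstanding = True
            self.send(data)         # first small piece goes out right away
        else:
            self.buffer += data     # later bytes wait for the ACK

    def on_ack(self):
        self.outstanding = False
        if self.buffer:
            self.outstanding = True
            self.send(self.buffer)  # everything buffered goes in one segment
            self.buffer = b""

segments = []                       # records what was "transmitted"
sender = NagleSender(segments.append)
for ch in b"hello":                 # the user types one character at a time
    sender.write(bytes([ch]))
sender.on_ack()                     # the ACK for the first segment arrives
```

Typing five characters produces only two segments instead of five, which is the whole point of the algorithm.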

71 TCP Sliding Window (2) Fig.: Silly window syndrome. Clark’s solution – prevent the receiver from sending a window update for 1 byte: the receiver should not send a window update until it can handle the maximum segment size it advertised at connection setup, or its buffer is half empty.

72 TCP Sliding Window (3) Receiver’s side:
Receiver – block READ calls from the application until a large chunk of data arrives. This reduces the number of calls to TCP but increases response time (for non-interactive applications, efficiency matters more than response time). Out-of-order segments are buffered and delivered to the application in order. Cumulative acknowledgement: if bytes 0, 1, 4, 5, 6, 7 are received, acknowledge only up to byte 1 (next expected = 2). The sender retransmits the missing bytes 2 and 3; on their receipt, the receiver acknowledges up to byte 7.
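The cumulative-ACK example can be made concrete. A sketch (the function name is illustrative; small integers stand in for byte sequence numbers):

```python
def next_expected(received):
    # Cumulative ACK: acknowledge the longest in-order prefix. With bytes
    # 0, 1, 4, 5, 6, 7 buffered, the next byte expected (the ACK sent) is 2;
    # the out-of-order bytes 4-7 are held but not yet acknowledged.
    got = set(received)
    n = 0
    while n in got:
        n += 1
    return n
```

Once the retransmitted bytes 2 and 3 arrive, the same rule acknowledges everything through 7 in one step.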

73 TCP Timer Management RTO (Retransmission TimeOut). (a) Probability density of acknowledgement arrival times in the data link layer. (b) Probability density of acknowledgement arrival times for TCP.

74 Retransmission Timer (1)
TCP maintains a variable, SRTT (Smoothed Round-Trip Time) TCP measures how long an acknowledgement took (say, R), and updates SRTT as: SRTT = α SRTT + (1 – α) R Where α is the smoothing factor; typically α = 7/8. RTTVAR (Round-Trip Time VARiation) is updated using: RTTVAR = β RTTVAR + (1 – β) | SRTT – R | Typically, β = 3/4. Retransmission TimeOut (RTO) is set to: RTO = SRTT + 4 x RTTVAR
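The estimator on this slide is easy to run with concrete numbers. A sketch (the sample values in milliseconds are illustrative; the deviation is computed against the old SRTT, as in RFC 6298):

```python
ALPHA, BETA = 7 / 8, 3 / 4              # smoothing factors from the slide

def update_rto(srtt, rttvar, r):
    # One RTT sample r updates the smoothed mean (SRTT) and the smoothed
    # deviation (RTTVAR); the timeout is set four deviations above the mean.
    rttvar = BETA * rttvar + (1 - BETA) * abs(srtt - r)
    srtt = ALPHA * srtt + (1 - ALPHA) * r
    return srtt, rttvar, srtt + 4 * rttvar

# Illustrative starting values: SRTT = 100 ms, RTTVAR = 20 ms, sample = 120 ms.
srtt, rttvar, rto = update_rto(100.0, 20.0, 120.0)
```

One sample of 120 ms pulls SRTT up to 102.5 ms and leaves RTO at 182.5 ms, comfortably above the observed round-trip times.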

75 Retransmission Timer (2)
Karn’s algorithm – when a segment times out and is resent, it is not clear whether a subsequent acknowledgement is for the original or the retransmission, so the RTT sample for a retransmitted segment is not used to update the estimate. Instead, each time a segment times out, the timeout value is doubled (exponential backoff).

76 Persistence Timer and Keep-alive Timer
Persistence timer: when it goes off, the sender probes the receiver to learn whether buffer space has become available.
Keep-alive timer: if the connection has been idle for a long time, check whether it is still alive, and close it if not. CONTROVERSIAL – it may terminate a healthy connection during a transient network partition.

77 TCP Congestion Control (1)
Fig: A burst of packets from a sender and the returning ack clock. Two windows: (1) Flow control window: number of bytes the receiver can buffer (2) Congestion window: number of bytes the sender may have in the network at any time

78 A second consideration is that the AIMD rule will take a very long time to reach a good operating point on fast networks if the congestion window is started from a small size. Consider a modest network path that can support 10 Mbps with an RTT of 100 msec. The appropriate congestion window is the bandwidth-delay product, which is 1 Mbit or 100 packets of 1250 bytes each. If the congestion window starts at 1 packet and increases by 1 packet every RTT, it will be 100 RTTs or 10 seconds before the connection is running at about the right rate.

79 TCP Congestion Control (2)
Fig: Slow start from an initial congestion window of one segment. Slow start Start with small window, grow exponentially Grow until a timeout occurs or the congestion window exceeds slow start threshold

80 TCP Congestion Control (3)
Fig: Additive increase from an initial congestion window of one segment. When slow start threshold is crossed, TCP switches from slow start to additive increase. Congestion window is increased by 1 segment every RTT.
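The two growth phases can be simulated in a few lines, counting the congestion window in segments. The threshold and round count below are illustrative:

```python
def congestion_window_trace(ssthresh, rounds):
    # Slow start doubles cwnd every RTT until it reaches the slow start
    # threshold; after that, additive increase adds one segment per RTT.
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2               # slow start: exponential growth
        else:
            cwnd += 1               # congestion avoidance: linear growth
    return trace
```

With a threshold of 8 segments, the window doubles for the first few RTTs and then climbs by one segment per RTT, matching the shape of the two figures.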

81 TCP Congestion Control (4)
Slow start followed by additive increase in TCP Tahoe.

82 TCP Congestion Control (5)
Fast recovery and the sawtooth pattern of TCP Reno.

83 TCP Congestion Control (6)
Selective acknowledgements

84 Future of TCP TCP does not provide the transport semantics that all applications want; with its standard sockets interface it does not meet those needs well. Congestion control that uses packet loss as the congestion indicator is difficult on fast networks.

85 Continued … Chapter 6

