
1 Chapter 6 The Transport Layer

2 The transport layer is not just another layer
The transport layer is not just another layer. It is the heart of the whole protocol hierarchy. Its task is to provide reliable, cost-effective data transport from the source machine to the destination machine, independently of the physical network or networks currently in use. Without the transport layer, the whole concept of layered protocols would make little sense. In this chapter we will study the transport layer in detail, including its services, design, protocols, and performance.

3 6.1 The Transport Service
Services Provided to the Upper Layers
Transport Service Primitives
Berkeley Sockets
An Example of Socket Programming: An Internet File Server

4 6.1.1 Services Provided to the Upper Layers
The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective service to its users, normally processes in the application layer. To achieve this goal, the transport layer makes use of the services provided by the network layer. The hardware and/or software within the transport layer that does the work is called the transport entity. The (logical) relationship of the network, transport, and application layers is illustrated in Fig. 6-1.
Fig. 6-1. The network, transport, and application layers.

5 Connection-oriented and connectionless transport services
Why do we need a transport layer? Just as there are two types of network service, connection-oriented and connectionless, there are also two types of transport service. The transport service is similar to the network service in many ways.
The transport code runs entirely on the users' machines, but the network layer mostly runs on the routers, which are operated by the carrier (at least for a wide area network). What happens if the network layer offers inadequate service? Suppose that it frequently loses packets? What happens if routers crash from time to time? Problems occur, that's what. The users have no real control over the network layer, so they cannot solve the problem of poor service by using better routers or putting more error handling in the data link layer. The only possibility is to put another layer on top of the network layer that improves the quality of the service.
In essence, the existence of the transport layer makes it possible for the transport service to be more reliable than the underlying network service. Lost packets and mangled data can be detected and compensated for by the transport layer. Furthermore, the transport service primitives can be implemented as calls to library procedures in order to make them independent of the network service primitives. Thanks to the transport layer, application programmers can write code according to a standard set of primitives and have these programs work on a wide variety of networks, without having to worry about dealing with different subnet interfaces and unreliable transmission.
For this reason, many people have traditionally made a distinction between layers 1 through 4 on the one hand and the layer(s) above 4 on the other. The bottom four layers can be seen as the transport service provider, whereas the upper layer(s) are the transport service user. This distinction of provider versus user has a considerable impact on the design of the layers and puts the transport layer in a key position, since it forms the major boundary between the provider and user of the reliable data transmission service.

6 6.1.2 Transport Service Primitives
Let us review the primitives. Transport primitives are very important, because many programs (and thus programmers) see them. Consequently, the transport service must be convenient and easy to use.
Fig. 6-2. The primitives for a simple transport service.

7 Figure 6-3. The nesting of TPDUs, packets, and frames.
TPDU (Transport Protocol Data Unit) is the term used for messages sent from transport entity to transport entity. Thus, TPDUs (exchanged by the transport layer) are contained in packets (exchanged by the network layer). In turn, packets are contained in frames (exchanged by the data link layer). When a frame arrives, the data link layer processes the frame header and passes the contents of the frame payload field up to the network entity. The network entity processes the packet header and passes the contents of the packet payload up to the transport entity. This nesting is illustrated in Fig. 6-3. The book illustrates the use of these primitives with a client/server example.
Figure 6-3. The nesting of TPDUs, packets, and frames.
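To make the nesting concrete, here is a minimal C sketch; the struct layouts and field names are invented for illustration and are far simpler than any real header format:

#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified headers: real protocols carry many more fields. */
struct tpdu   { int src_tsap, dst_tsap; char payload[64]; }; /* transport layer */
struct packet { int src_nsap, dst_nsap; struct tpdu t; };    /* network layer   */
struct frame  { int checksum;           struct packet p; };  /* data link layer */

int main(void)
{
    struct frame f;
    memset(&f, 0, sizeof f);

    /* The application's message ends up innermost. */
    strcpy(f.p.t.payload, "hello");
    f.p.t.src_tsap = 1208;  f.p.t.dst_tsap = 1522;  /* TSAPs (ports)     */
    f.p.src_nsap   = 10;    f.p.dst_nsap   = 20;    /* NSAPs (addresses) */
    f.checksum     = 0;                             /* set by the link layer */

    /* Each layer strips its own header and hands the rest upward. */
    printf("frame -> packet -> TPDU -> \"%s\"\n", f.p.t.payload);
    return 0;
}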

8 State Diagrams Figure 6-4. A state diagram for a simple connection management scheme. Transitions labeled in italics are caused by packet arrivals. The solid lines show the client's state sequence. The dashed lines show the server's state sequence.

9 Figure 6-5. The socket primitives for TCP.
6.1.3 Berkeley Sockets
Figure 6-5. The socket primitives for TCP.

10 Figure 6-5-1. The working principle of a Berkeley socket.
Server side: BIND, LISTEN, ACCEPT (which returns a new socket for each incoming connection). Client side: CONNECT. Both sides then use SEND, RECEIVE, and CLOSE.
Figure 6-5-1. The working principle of a Berkeley socket.
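As a rough illustration of this sequence on a POSIX system, the following minimal C echo server uses the primitives in the order shown in the figure; the port number 12345 is an arbitrary choice and error handling is abbreviated (this is not the textbook's listing):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define SERVER_PORT 12345                              /* arbitrary example TSAP */

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);           /* SOCKET */
    if (s < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(SERVER_PORT);

    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {  /* BIND */
        perror("bind"); exit(1);
    }
    listen(s, 5);                                      /* LISTEN: queue up to 5 requests */

    for (;;) {
        int sa = accept(s, 0, 0);                      /* ACCEPT returns a new socket */
        if (sa < 0) { perror("accept"); continue; }

        char buf[128];
        ssize_t n = read(sa, buf, sizeof buf);         /* RECEIVE */
        if (n > 0)
            write(sa, buf, n);                         /* SEND (echo the data back) */
        close(sa);                                     /* CLOSE the connection socket */
    }
}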

11 6.1.4 Socket Programming Example: Internet File Server
Figure 6-6. Client code using sockets.

12 Socket Programming Example: Internet File Server (2)
Figure 6-6 (continued). Client code using sockets.
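The figure's code is not reproduced in this transcript, so here is a sketch in the same spirit: a client that connects to a file server, sends a file name, and copies whatever comes back to standard output. It assumes a POSIX system; SERVER_PORT and BUF_SIZE are illustrative values, and the wire protocol (name terminated by a zero byte, reply is the raw file contents) is an assumption, not the book's exact specification.

/* Usage sketch: fileclient <server-host> <remote-file-name> */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define SERVER_PORT 12345   /* must match whatever port the file server listens on */
#define BUF_SIZE    4096

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s host filename\n", argv[0]); exit(1); }

    struct hostent *h = gethostbyname(argv[1]);        /* look up the server's IP address */
    if (!h) { fprintf(stderr, "unknown host %s\n", argv[1]); exit(1); }

    int s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); /* SOCKET */
    if (s < 0) { perror("socket"); exit(1); }

    struct sockaddr_in chan;
    memset(&chan, 0, sizeof chan);
    chan.sin_family = AF_INET;
    memcpy(&chan.sin_addr.s_addr, h->h_addr_list[0], h->h_length);
    chan.sin_port = htons(SERVER_PORT);

    if (connect(s, (struct sockaddr *)&chan, sizeof chan) < 0) {  /* CONNECT */
        perror("connect"); exit(1);
    }

    write(s, argv[2], strlen(argv[2]) + 1);            /* send the file name, including '\0' */

    char buf[BUF_SIZE];
    ssize_t n;
    while ((n = read(s, buf, BUF_SIZE)) > 0)           /* read the file contents ...        */
        write(1, buf, n);                              /* ... and copy them to standard output */

    close(s);
    return 0;
}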

13 6.2 Elements of Transport Protocols
Addressing
Connection Establishment
Connection Release
Flow Control and Buffering
Multiplexing
Crash Recovery

14 Difference Between Transport Protocols and Data Link Protocols
Figure 6-7. (a) Environment of the data link layer. (b) Environment of the transport layer.

15 Figure 6-8. TSAPs, NSAPs and transport connections.
6.2.1 Addressing
When an application process wishes to set up a connection to a remote application process, it must specify which one to connect to. The method normally used is to define transport addresses to which processes can listen for connection requests. In the Internet, these end points are called ports. In ATM networks, they are called AAL-SAPs. We will use the generic term TSAP (Transport Service Access Point). The analogous end points in the network layer (i.e., network layer addresses) are then called NSAPs. IP addresses are examples of NSAPs.
Figure 6-8. TSAPs, NSAPs and transport connections.

16 How does the client process know the server’s TSAP?
There are two ways. One is for the server's TSAP to be a well-known TSAP. The other is for the server host to run a special process called a name server, or sometimes a directory server. A better scheme is needed for rarely used servers; it is known as the initial connection protocol. Each machine that wishes to offer services to remote users has a special process server that acts as a proxy for less heavily used servers. This scheme is illustrated in Fig. 6-9.
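On POSIX systems, well-known TSAPs for TCP and UDP are listed in /etc/services and can be looked up with getservbyname(); a small sketch (the service name "ftp" is just an example):

#include <stdio.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void)
{
    /* Look up the well-known TCP port for the FTP control connection. */
    struct servent *sp = getservbyname("ftp", "tcp");
    if (sp == NULL) {
        fprintf(stderr, "service not found\n");
        return 1;
    }
    /* s_port is stored in network byte order, so convert it for printing. */
    printf("%s uses TCP port %d\n", sp->s_name, ntohs(sp->s_port));
    return 0;
}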

17 Process Server
Figure 6-9. How a user process in host 1 establishes a connection with a time-of-day server in host 2.

18 6.2.2 Connection Establishment
Establishing a connection sounds easy, but it is actually surprisingly tricky. At first glance, it would seem sufficient for one transport entity to just send a CONNECTION REQUEST TPDU to the destination and wait for a CONNECTION ACCEPTED reply. The problem occurs when the network can lose, store, and duplicate packets. This behavior causes serious complications. Possible remedies:
Restricting packet lifetime
Equipping each host with a time-of-day clock
Using a three-way handshake

19 Three-way handshake
Figure: Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST. (a) Normal operation. (b) Old CONNECTION REQUEST appearing out of nowhere. (c) Duplicate CONNECTION REQUEST and duplicate ACK.

20 Figure 6-12. Abrupt disconnection with loss of data.
6.2.3 Connection Release
Asymmetric release
Symmetric release
Figure 6-12. Abrupt disconnection with loss of data.

21 Connection Release (2) The two-army problem.

22 Release Connection Using a Three-way Handshake
Figure 6-14. Four protocol scenarios for releasing a connection. (a) Normal case of a three-way handshake. (b) Final ACK lost.

23 Figure 6-14 (continued). (c) Response lost. (d) Response lost and subsequent DRs lost.

24 6.2.4 Flow Control and Buffering
In the transport layer, just as in the data link layer, a sliding window or some other scheme is needed on each connection to keep a fast transmitter from overrunning a slow receiver. The main issues concern buffering:
Sender buffering versus receiver buffering. For low-bandwidth bursty traffic, it is better to buffer at the sender; for high-bandwidth smooth traffic, it is better to buffer at the receiver.
Organization of the receiver's buffers, see Fig. 6-15: (a) chained fixed-size buffers, (b) chained variable-sized buffers, (c) one large circular buffer per connection.
Dynamic buffer allocation, see Fig. 6-16.
How fast a host may send depends on two factors: the receiver's capacity and the network's capacity. The former is the subject of flow control, the latter of congestion control. Unlike the data link layer, where buffers can be dedicated to each line, the transport layer may have a great many connections open between hosts, so the usual strategy is to share a pool of buffers among multiple connections.

25 The buffer size of the receiver
Figure 6-15. (a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular buffer per connection.

26 Dynamic buffer allocation
Figure 6-16. Dynamic buffer allocation. The arrows show the direction of transmission. An ellipsis (…) indicates a lost TPDU.
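The following toy C sketch illustrates the credit idea behind dynamic buffer allocation: the receiver grants buffer credits, and the sender consumes one credit per TPDU and stalls when none are left. The structure and the particular credit counts are invented for illustration.

#include <stdio.h>

/* Sender-side view of credit-based (dynamic window) flow control. */
struct credit_state {
    int credits;          /* buffers granted by the receiver and not yet used */
    int next_seq;         /* sequence number of the next TPDU to send */
};

/* The receiver grants more buffers, e.g. piggybacked on an acknowledgement. */
static void grant_credits(struct credit_state *cs, int n) { cs->credits += n; }

/* Try to send one TPDU; returns 0 if we must stall for lack of credit. */
static int try_send(struct credit_state *cs)
{
    if (cs->credits == 0) {
        printf("seq %d: blocked, no credit\n", cs->next_seq);
        return 0;
    }
    printf("send TPDU seq %d (credits left: %d)\n", cs->next_seq, cs->credits - 1);
    cs->credits--;
    cs->next_seq++;
    return 1;
}

int main(void)
{
    struct credit_state cs = { 0, 0 };
    grant_credits(&cs, 4);                       /* receiver initially grants 4 buffers */
    for (int i = 0; i < 5; i++) try_send(&cs);   /* the fifth send stalls               */
    grant_credits(&cs, 2);                       /* an ack arrives carrying 2 more credits */
    try_send(&cs);
    return 0;
}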

27 Figure 6-17. (a) Upward multiplexing. (b) Downward multiplexing.
If multiple transport connections share one network connection, this is called upward multiplexing. If one transport connection is spread over multiple network connections, this is called downward multiplexing.
Figure 6-17. (a) Upward multiplexing. (b) Downward multiplexing.

28 6.2.6 Crash Recovery
If hosts and routers are subject to crashes, recovery from them becomes an issue. If the transport entity is entirely within the hosts, recovery from network and router crashes is straightforward. A more troublesome problem is how to recover from host crashes. After recovering from a crash, the host must decide whether to retransmit the most recent TPDU. For example, a client and a server are communicating when the server crashes; see Fig. 6-18. No matter how the transport entity is programmed, there are always situations where the protocol fails to recover properly, because writing the output and sending the acknowledgement cannot be done as a single atomic action.
Conclusion: recovery from a crash in layer N can only be done by layer N + 1.

29 Figure 6-18. Different combinations of client and server strategy.

30 6.3 A Simple Transport Protocol
The Example Service Primitives
The Example Transport Entity
The Example as a Finite State Machine

31 6.4 The Internet Transport Protocols: UDP
Introduction to UDP
Remote Procedure Call
The Real-Time Transport Protocol

32 6.4.1 Introduction to UDP
UDP (User Datagram Protocol) provides a way for applications to send encapsulated IP datagrams without having to establish a connection. UDP transmits segments consisting of an 8-byte header followed by the payload.
Figure: The UDP header.
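A minimal sketch of sending a single UDP datagram on a POSIX system; the destination address 127.0.0.1 and port 9999 are arbitrary. Note that the 8-byte UDP header (source port, destination port, length, checksum) is filled in by the kernel, not by the application.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);        /* connectionless UDP socket */
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                    /* arbitrary destination port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    const char msg[] = "hello via UDP";
    /* One sendto() produces one datagram: 8-byte UDP header plus the payload. */
    if (sendto(s, msg, sizeof msg, 0, (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(s);
    return 0;
}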

33 6.4.2 Remote Procedure Call
The idea behind RPC is to make a remote procedure call look as much as possible like a local one. To call a remote procedure, the client program must be bound with a small library procedure, called the client stub, that represents the server procedure in the client's address space. Similarly, the server is bound with a procedure called the server stub. These procedures hide the fact that the procedure call from the client to the server is not local.
Figure: Steps in making a remote procedure call. The stubs are shaded.
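A very rough sketch of what a client stub does for a hypothetical remote_add(a, b): marshal the parameters into a request message, send it to the server (here over UDP), and block until the marshalled result comes back. The message layout, the port number, and the operation itself are all invented; real RPC systems add transaction identifiers, retransmission, and standardized marshalling.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define RPC_PORT 20000                 /* hypothetical port of the RPC server */

/* Client stub: looks like a local procedure, but really marshals the
 * parameters, sends them over UDP, and blocks until the reply arrives
 * (it will wait forever if no matching server stub is running). */
int remote_add(int a, int b)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in srv;
    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port = htons(RPC_PORT);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    /* Marshal the parameters into a network-byte-order request message. */
    uint32_t req[2] = { htonl((uint32_t)a), htonl((uint32_t)b) };
    sendto(s, req, sizeof req, 0, (struct sockaddr *)&srv, sizeof srv);

    /* Block until the server stub sends back the marshalled result. */
    uint32_t reply = 0;
    recvfrom(s, &reply, sizeof reply, 0, NULL, NULL);
    close(s);
    return (int)ntohl(reply);          /* unmarshal and return, as if local */
}

int main(void)
{
    printf("remote_add(2, 3) = %d\n", remote_add(2, 3));
    return 0;
}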

34 6.4.3 The Real-Time Transport Protocol
(a) The position of RTP in the protocol stack. (b) Packet nesting.

35 6.5 The Internet Transport Protocols: TCP
Introduction to TCP
The TCP Service Model
The TCP Protocol
The TCP Segment Header
TCP Connection Establishment
TCP Connection Release
TCP Connection Management Modeling
TCP Transmission Policy
TCP Congestion Control
TCP Timer Management
Wireless TCP and UDP
Transactional TCP

36 6.5.1 Introduction to TCP TCP (Transmission Control Protocol) was specifically designed to provide a reliable end-to-end byte stream over an unreliable internetwork. An internetwork differs from a single network because different parts may have wildly different topologies, bandwidths, delays, packet sizes, and other parameters. TCP was designed to dynamically adapt to properties of the internetwork and to be robust in the face of many kinds of failures.

37 6.5.2 The TCP Service Model
TCP service is obtained by both the sender and receiver creating end points, called sockets. Each socket has a socket number (address) consisting of the IP address of the host and a 16-bit number local to that host, called a port. A port is the TCP name for a TSAP. For TCP service to be obtained, a connection must be explicitly established between a socket on the sending machine and a socket on the receiving machine.
Properties of TCP connections:
Full duplex and point-to-point
Byte stream
Immediate data
Urgent data
Port   Protocol   Use
21     FTP        File transfer
23     Telnet     Remote login
25     SMTP       E-mail
69     TFTP       Trivial File Transfer Protocol
79     Finger     Lookup info about a user
80     HTTP       World Wide Web
110    POP-3      Remote access
119    NNTP       USENET news
Figure 6-27. Some assigned ports.

38 A TCP connection is a byte stream
Figure 6-28. (a) Four 512-byte segments sent as separate IP datagrams. (b) The 2048 bytes of data delivered to the application in a single READ call.

39 6.5.4 The TCP Segment Header
Figure 6-29. The TCP header.

40 Figure 6-30. The pseudoheader included in the TCP checksum.
A checksum is also provided for extra reliability. It checksums the header, the data, and the conceptual pseudoheader shown in Fig. 6-30.
Figure 6-30. The pseudoheader included in the TCP checksum.
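As an illustration, the following sketch computes an Internet-style checksum (one's complement sum of 16-bit words) over a pseudoheader laid out as in Fig. 6-30, followed by a zeroed 20-byte TCP header; the addresses are made-up examples, and a real implementation would of course checksum the actual header and data.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* One's complement sum over 16-bit words (Internet checksum, RFC 1071 style). */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) { sum += (data[0] << 8) | data[1]; data += 2; len -= 2; }
    if (len == 1) sum += data[0] << 8;                     /* pad the odd byte with zero */
    while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);  /* fold the carries back in   */
    return (uint16_t)~sum;
}

int main(void)
{
    /* Conceptual pseudoheader: source IP, destination IP, zero, protocol, TCP length. */
    uint8_t buf[12 + 20];                         /* pseudoheader + a minimal 20-byte TCP header */
    memset(buf, 0, sizeof buf);

    inet_pton(AF_INET, "10.0.0.1", buf);          /* source address (example)      */
    inet_pton(AF_INET, "10.0.0.2", buf + 4);      /* destination address (example) */
    buf[8]  = 0;                                  /* zero byte                     */
    buf[9]  = 6;                                  /* protocol number for TCP       */
    buf[10] = 0; buf[11] = 20;                    /* TCP segment length = 20 bytes */

    /* buf[12..31] would hold the real TCP header (with its checksum field set to 0)
     * plus the data; here it is left zeroed purely for illustration. */
    printf("checksum = 0x%04x\n", inet_checksum(buf, sizeof buf));
    return 0;
}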

41 6.5.5 TCP Connection Establishment
Figure 6-31. (a) TCP connection establishment in the normal case. (b) Call collision.

42 6.5.6 TCP Connection Release
Figure: (a) Normally, four TCP segments (FIN, ACK, FIN, ACK) are needed to release a connection. (b) It is possible for the first ACK and the second FIN to be contained in the same segment (FIN + ACK).

43 6.5.8 TCP Transmission Policy
Figure: Window management in TCP.

44 Nagle's Algorithm and Silly Window Syndrome
Figure: Silly window syndrome.

45 6.5.9 TCP Congestion Control
Two factors limit how fast a host may send: the network's capacity and the receiver's capacity.
Figure: (a) A fast network feeding a low-capacity receiver. (b) A slow network feeding a high-capacity receiver.

46 Figure 6-37. An example of the Internet congestion algorithm.
Slow Start Algorithm
The number of bytes that may be sent is the minimum of two windows: the window granted by the receiver and the congestion window. A threshold determines when the congestion window stops doubling (slow start) and starts growing linearly.
Figure 6-37. An example of the Internet congestion algorithm.
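A simplified sketch of this logic: the congestion window starts at one maximum segment, doubles each round-trip until the threshold is reached, then grows linearly; a timeout halves the threshold and restarts slow start. The constants are illustrative, and real TCP implementations differ in many details.

#include <stdio.h>

#define MSS 1024                       /* maximum segment size, illustrative */

struct tcp_cc {
    int cwnd;        /* congestion window, in bytes */
    int threshold;   /* slow start threshold, in bytes */
    int rcv_window;  /* window advertised by the receiver */
};

/* Bytes that may actually be sent: the minimum of the two windows. */
static int send_window(const struct tcp_cc *c)
{
    return c->cwnd < c->rcv_window ? c->cwnd : c->rcv_window;
}

/* Called once per round-trip in which the whole window was acknowledged. */
static void on_round_trip_ok(struct tcp_cc *c)
{
    if (c->cwnd < c->threshold)
        c->cwnd *= 2;                  /* slow start: exponential growth      */
    else
        c->cwnd += MSS;                /* congestion avoidance: linear growth */
}

/* Called when a retransmission timer expires (congestion is assumed). */
static void on_timeout(struct tcp_cc *c)
{
    c->threshold = c->cwnd / 2;        /* remember half the current window */
    c->cwnd = MSS;                     /* and restart slow start           */
}

int main(void)
{
    struct tcp_cc c = { MSS, 32 * MSS, 64 * MSS };
    for (int rtt = 0; rtt < 8; rtt++) {
        printf("rtt %d: cwnd=%d, usable=%d\n", rtt, c.cwnd, send_window(&c));
        on_round_trip_ok(&c);
    }
    on_timeout(&c);
    printf("after timeout: cwnd=%d, threshold=%d\n", c.cwnd, c.threshold);
    return 0;
}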

47 TCP Timer Management
The solution to setting the retransmission timeout is to use a highly dynamic algorithm that constantly adjusts the timeout interval, based on continuous measurements of network performance.
Figure 6-38. (a) Probability density of ACK arrival times in the data link layer. (b) Probability density of ACK arrival times for TCP.

48 Timeout
For each connection, TCP maintains a variable, RTT, that is the best current estimate of the round-trip time to the destination in question. When a segment is sent, a timer is started, both to see how long the acknowledgement takes and to trigger a retransmission if it takes too long. If the acknowledgement gets back before the timer expires, TCP measures how long it took, say, M. It then updates RTT according to the formula
RTT = αRTT + (1 − α)M, where typically α = 7/8.
A second smoothed variable, D, estimates the deviation; whenever an acknowledgement comes in, it is updated from the difference |RTT − M|:
D = αD + (1 − α)|RTT − M|
The retransmission timeout is then set to
Timeout = RTT + 4D
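These formulas translate directly into code. A sketch, using α = 7/8 for both averages as on this slide (Jacobson's original algorithm uses a somewhat different weight for the deviation); the initial RTT guess and the sample values are made up.

#include <stdio.h>

/* Smoothed estimates kept per TCP connection, in milliseconds. */
struct rtt_estimator {
    double rtt;   /* smoothed round-trip time */
    double dev;   /* smoothed deviation       */
};

/* Update the estimates with a new measurement M:
 *   RTT = a*RTT + (1-a)*M
 *   D   = a*D   + (1-a)*|RTT - M|
 *   Timeout = RTT + 4*D
 */
static double update_and_get_timeout(struct rtt_estimator *e, double m)
{
    const double alpha = 7.0 / 8.0;
    double diff = e->rtt - m;
    if (diff < 0) diff = -diff;
    e->rtt = alpha * e->rtt + (1.0 - alpha) * m;
    e->dev = alpha * e->dev + (1.0 - alpha) * diff;
    return e->rtt + 4.0 * e->dev;
}

int main(void)
{
    struct rtt_estimator e = { 100.0, 0.0 };      /* assume an initial RTT guess of 100 ms */
    double samples[] = { 120, 80, 300, 110, 95 }; /* made-up ACK timing measurements       */
    for (int i = 0; i < 5; i++) {
        double timeout = update_and_get_timeout(&e, samples[i]);
        printf("M=%5.0f ms -> RTT=%6.1f  D=%5.1f  Timeout=%6.1f ms\n",
               samples[i], e.rtt, e.dev, timeout);
    }
    return 0;
}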

49 6.6 Performance Issues
Performance Problems in Computer Networks
Network Performance Measurement
System Design for Better Performance
Fast TPDU Processing
Protocols for Gigabit Networks

50 6.6.1 Performance Problems in Computer Networks
Some performance problems:
Congestion
Structural resource imbalance
Synchronous overload
Tuning issues
Utilization of gigabit networks
Jitter

51 Performance Problems in Computer Networks(2)
Figure 6-41. The state of transmitting one megabit from San Diego to Boston. (a) At t = 0. (b) After 500 μsec. (c) After 20 msec. (d) After 40 msec.

52 6.6.2 Network Performance Measurement
The basic loop for improving network performance:
Measure the relevant network parameters and performance.
Try to understand what is going on.
Change one parameter.

53 6.6.3 System Design for Better Performance
Rules:
CPU speed is more important than network speed.
Reduce packet count to reduce software overhead.
Minimize context switches.
Minimize copying.
You can buy more bandwidth but not lower delay.
Avoiding congestion is better than recovering from it.
Avoid timeouts.

54 System Design for Better Performance (2)
Figure: Response as a function of load.

55 System Design for Better Performance (3)
Figure: Four context switches to handle one packet with a user-space network manager.

56 The processing steps on this path are shaded.
6.6.4 Fast TPDU Processing
Figure: The fast path from sender to receiver is shown with a heavy line. The processing steps on this path are shaded.

57 Fast TPDU Processing (2)
Figure: (a) TCP header. (b) IP header. In both cases, the shaded fields are taken from the prototype without change.

58 Fast TPDU Processing (3)
Figure: A timing wheel.

59 6.6.5 Protocols for Gigabit Networks
Figure: Time to transfer and acknowledge a 1-megabit file over a 4000-km line.
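The shape of this curve can be reproduced with a little arithmetic: with roughly 5 μs of propagation delay per kilometer, a 4,000-km line has a round-trip time of about 40 ms, so as the data rate rises the total time flattens out at the round-trip delay instead of continuing to drop. A small sketch using these rule-of-thumb numbers (they are assumptions, not values read off the figure):

#include <stdio.h>

int main(void)
{
    const double file_bits      = 1e6;       /* 1-megabit file                  */
    const double distance_km    = 4000.0;
    const double prop_us_per_km = 5.0;       /* ~5 microseconds per km in fiber */
    const double rtt = 2.0 * distance_km * prop_us_per_km * 1e-6;   /* seconds */

    /* Time to transfer the file and get the acknowledgement back:
     * transmission time at the line rate plus one full round trip. */
    double rates[] = { 56e3, 1e6, 1e9 };     /* 56 kbps, 1 Mbps, 1 Gbps */
    for (int i = 0; i < 3; i++) {
        double total = file_bits / rates[i] + rtt;
        printf("%12.0f bps: %9.1f ms total (%5.1f ms of it is the round trip)\n",
               rates[i], total * 1e3, rtt * 1e3);
    }
    return 0;
}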

