Notes and Handouts The Transport Layer

Presentation on theme: "Notes and Handouts The Transport Layer"— Presentation transcript:

1 Notes and Handouts: The Transport Layer
Computer Networks. Veton Këpuska. September 16, 2018.

2 The Transport Layer
The transport layer is the heart of the whole protocol hierarchy. Its task is to provide efficient, reliable, and cost-effective data transport from the source machine to the destination machine. Details of the transport layer covered here:
- Services
- Design
- Protocols
- Performance

3 The Transport Service
The transport layer provides services to the application layer; the kinds of service provided are presented next.
Services provided to the upper layers: efficient, reliable, and cost-effective service to its users, which are typically processes in the application layer. To achieve this goal, the transport layer makes use of the services provided by the network layer.

4 Services Provided to the Upper Layers
Transport entity: the hardware and/or software within the transport layer that does the work. It can be located in:
- the operating system kernel,
- a separate user process,
- a library package bound into network applications, or
- the network interface card.
The relationship of the network, transport, and application layers is illustrated in the next figure.

5 The Network, Transport, and Application Layers

6 Types of Transport Service
- Connection-oriented service
- Connectionless service
The connection-oriented service has three phases: establishment, data transfer, and release. These services are similar to the corresponding network-layer services, particularly in addressing and flow control.

7 The Transport Service
If the services of the transport layer are so similar to the network-layer services, why are there two distinct layers in the network protocol architecture?
- The transport layer runs entirely on the users' machines; the network layer mostly runs on the routers, which are operated by the carriers.
- What happens if the network layer offers inadequate service? Packets are frequently lost, routers crash, and so on, causing problems in delivery and communication.
- The transport layer has to deal with these problems without having real control over the network layer: it cannot resort to improving the service by installing better routers or adding more robust error handling to the data link layer.

8 The Transport Service
Transport entity in a connection-oriented subnet: if a transmission has been abruptly terminated halfway, and there is no indication of what has happened to the data currently in transit, the transport entity can:
- set up a new network connection to the remote transport entity,
- send a query to its peer asking which data arrived and which did not, and
- pick up from where it left off.

9 The Transport Service
The transport layer makes it possible for the transport service to be more reliable than the underlying network service: lost packets and corrupted data can be detected and compensated for by the transport layer. This is done with transport service primitives, which can be implemented as calls to library procedures that:
- hide the network service calls, and
- provide a uniform interface to the various underlying network services and protocols.
Application programmers can therefore write code according to a standard set of primitives and ignore the details of the wide variety of network interfaces and unreliable transmissions.

10 The Transport Service
If all real networks were flawless, had the same service primitives, and were guaranteed never to change, the transport layer might not be needed. Since this is not the case, the transport layer fulfills the key function of isolating the upper layers from the technology, design, and imperfections of the subnet.
- The bottom four layers are considered the transport service provider.
- The layers above are considered the transport service user.

11 Transport Service Primitives
Access to the transport service is provided to application programs through the transport service interface. Each transport service has its own interface, defined by a set of transport service primitives. Processes (applications) using the transport service primitives expect the connection between them to be perfect.

12 Primitives for a Simple Transport Service
Fig. 6-2. The primitives for a simple transport service.

13 Example of Usage of Transport Service Primitives
Consider an application with a server and a number of remote clients.
- The server initially executes the LISTEN primitive, typically by calling a library procedure that makes a system call to block the server until a client turns up.
- When a client wants to talk to the server, it executes the CONNECT primitive. The transport entity carries out this primitive by blocking the caller and sending a packet to the server.
- Messages sent from one transport entity to another are called TPDUs (Transport Protocol Data Units). TPDUs are contained in network packets, which in turn are contained in data link frames. The nesting of TPDUs, packets, and frames is shown in Fig. 6-3.

14 Transport Service Primitives
- The client's CONNECT call causes a CONNECTION REQUEST TPDU to be sent to the server.
- The transport entity in the server checks whether the server is blocked on a LISTEN (i.e., ready to handle requests); if so, it unblocks the server and sends a CONNECTION ACCEPTED TPDU back to the client.
- When the CONNECTION ACCEPTED TPDU arrives at the client, the client is unblocked and the connection is established.
- Data can now be exchanged using the SEND and RECEIVE primitives. In the simplest form, either party can do a RECEIVE to wait for the other party to do a SEND. When the TPDU arrives, the receiver is unblocked; it can then process the TPDU and send a reply.

15 Transport Service Primitives
At the transport layer, even a simple unidirectional data exchange is more complicated than at the network layer.
- Every data packet sent will also be acknowledged.
- Control TPDUs are also acknowledged, implicitly or explicitly.
- The acknowledgements are managed by the transport entities themselves, using the network layer protocol.
- The transport entities also need to worry about timers and retransmissions.
None of these mechanisms is visible to the transport users.

16 Transport Service Primitives
Termination of a connection: when an existing connection is no longer needed, it must be released to free up table space in the two transport entities. Disconnection can be done in two possible variants:
- Asymmetric: either transport entity can issue a DISCONNECT TPDU and send it to the remote transport entity. Upon its arrival, the connection is released.
- Symmetric: each direction is closed separately, independently of the other. In this variant, the connection is released only when both sides have done a DISCONNECT.

17 Berkeley Sockets
Another set of transport primitives are the socket primitives of Berkeley UNIX for TCP. These primitives are widely used for Internet programming.
Figure 6-5. The socket primitives for TCP.

18 Berkeley Sockets
SERVER: SOCKET, BIND, LISTEN, and ACCEPT are executed, in that order, by a server.
- SOCKET creates a new end point and allocates table space for it within the transport entity. The parameters of the call specify the addressing format to be used, the type of service desired, and the protocol. A successful call returns an ordinary file descriptor to be used in succeeding calls (just as an OPEN call does).
- BIND: newly created sockets do not have network addresses; these are assigned using the BIND primitive. Once a server has bound an address to a socket, remote clients can connect to it.
- LISTEN allocates space to queue incoming calls in case several clients try to connect at the same time. In contrast to the LISTEN primitive of the first example, the socket model's LISTEN is not a blocking primitive.
- ACCEPT is a blocking primitive that the server executes to wait for an incoming connection. When a TPDU asking for a connection arrives, the transport entity creates a new socket with the same properties as the original and returns a file descriptor for it. The server can then fork off a process or thread to handle the connection on the new socket and go back to waiting for the next connection on the original socket. ACCEPT returns a normal file descriptor, which can be used for reading and writing in the standard way, the same as for files.

19 Berkeley Sockets
CLIENT:
- SOCKET is used to create a socket.
- BIND is not required, since the address used does not matter to the server.
- CONNECT blocks the caller and actively starts the connection process. When the process is complete, that is, when the appropriate TPDU is received from the server, the client process is unblocked and the connection is established.
- Both sides can now use SEND and RECV to transmit and receive data over the full-duplex connection. In addition, if none of the special options of SEND and RECV are required, the standard UNIX READ and WRITE system calls can be used.
- Connection release with sockets is symmetric: when both sides have executed the CLOSE primitive, the connection is released.

20 An Example of Socket Programming: An Internet File Server
Compilation:
cc -o client client.c -lsocket -lnsl
cc -o server server.c -lsocket -lnsl
Execution:
server
client flits.cs.vu.nl /usr/tom/filename > f

21 Server (UNIX-Based Program)
- SERVER_PORT: an arbitrarily chosen number (valid range 1024-65535). The chosen port number must not be in use by another process, and the client and server must use the same port number.
- BUF_SIZE: the chunk size used for the file transfer.
- QUEUE_SIZE: the number of pending connections held before additional ones are disregarded.

22 Client (UNIX-Based Program)
- argv[1] contains the server's name (flits.cs.vu.nl), which is converted to an IP address by gethostbyname. This lookup uses DNS (the Domain Name System).
- After the client creates a socket, it attempts to establish a TCP connection to the server using connect. If the server is up and running on the named machine, attached to SERVER_PORT, and either idle or with room in its listen queue, the connection will eventually be established.

23 Elements of Transport Protocols
The transport service is implemented by a transport protocol used between the two transport entities.
- Similarities with data link protocols: both have to deal with error control, sequencing, flow control, etc.
- Dissimilarities between the two are due to major differences between the environments in which they operate (Figure 6-7). A data link protocol applies to the direct communication of two routers over a physical channel; a transport protocol operates over an entire subnet. This distinction has many important implications for transport protocols.

24 Elements of Transport Protocols
(a) Environment of the data link layer. (b) Environment of the transport layer.

25 Elements of Transport Protocols
- Destination address: in the data link layer, a router does not need to specify which router it wants to talk to; each outgoing line uniquely specifies a particular router. In the transport layer, explicit addressing of destinations is required.
- Initial connection establishment: establishing a connection over a wire is simple; the other end is always there, unless it has crashed, and either way there is not much to do. In the transport layer, initial connection establishment is more complicated.
- Storage capacity in the subnet: when a router sends a frame, it either arrives or is lost; it cannot bounce around for a while, go into hiding, and reappear 30 seconds later. If the subnet uses datagrams and adaptive routing, there is a nonnegligible probability that a packet may be stored for a number of seconds and then delivered later. Special protocols are required to deal with this.
- Buffer allocation for flow control: the data link layer commonly allocates a fixed number of buffers to each line, so that when a frame arrives a buffer is always available. In the transport layer, the large number of connections that must be managed makes dedicating many buffers to each one less feasible.

26 Addressing
When an application process sets up a connection to a remote application process, it must specify which process to connect to. The method normally used is to define transport addresses on which processes can listen for connection requests.
- In the Internet, these end points are called ports; in ATM, they are called AAL-SAPs (ATM Adaptation Layer Service Access Points).
- The generic transport-layer term is TSAP (Transport Service Access Point).
- The analogous end points in the network layer (i.e., network layer addresses) are called NSAPs; IP addresses are examples of NSAPs.
Figure 6-8 illustrates the relationship between NSAPs, TSAPs, and transport connections.

27 Addressing
Application processes (both clients and servers) can attach themselves to a TSAP to establish a connection to a remote TSAP. These connections run through NSAPs on each host, as shown in the figure. TSAPs are needed because in some networks each computer has a single NSAP, so a mechanism is required to distinguish multiple transport end points that share that NSAP. A possible scenario for a transport connection is given on the next slide.

28 Addressing: TSAPs, NSAPs, and Transport Connections
- A time-of-day server process on host 2 attaches itself to TSAP 1522 to wait for an incoming call. A call such as LISTEN might be used. (Note that how a process attaches itself to a TSAP is outside the networking model and depends entirely on the local operating system.)
- An application process on host 1 wants to find out the time of day, so it issues a CONNECT request specifying TSAP 1208 as the source and TSAP 1522 as the destination. This action ultimately results in a transport connection being established between the application process on host 1 and the server on host 2.
- The application process then sends over a request for the time.
- The time server process responds with the current time.
- The transport connection is then released.

29 Addressing
Note: there may be other servers on host 2 attached to other TSAPs, waiting for incoming connections that arrive over the same NSAP.
A second issue: how does the user process on host 1 know that the time-of-day server is attached to TSAP 1522?

30 Addressing
Possible solutions:
- Established conventions: the time-of-day server has been attaching itself to TSAP 1522 for years, and gradually all the network users have learned this. In this model, services have stable TSAP addresses that are listed in files in well-known places, such as the /etc/services file on UNIX systems, which lists which servers are permanently attached to which ports.
- Stable TSAP addresses work for a small number of key services that never change. User processes, however, often want to talk to other user processes that exist only for a short time and do not have a TSAP address that is known in advance. Moreover, it is wasteful to have each server process active and listening at a stable TSAP address of its own if there are many such processes, most of which are rarely used.
- A better scheme, known as the initial connection protocol, is shown in Fig. 6-9.

31 Initial Connection Protocol
How a user process on host 1 establishes a connection with a time-of-day server on host 2.

32 Initial Connection Protocol
Instead of every conceivable server listening at a well-known TSAP, each machine that wishes to offer services to remote users has a special process server that acts as a proxy for the less heavily used servers.
- The process server listens on a set of ports at the same time, waiting for CONNECT requests.
- A CONNECT request specifies the TSAP address of the service the user wants. If no server is waiting there, the user gets the process server.
- The process server then spawns the requested server, allowing it to inherit the existing connection with the user.
- The new server does the requested work, while the process server goes back to listening for new requests.
There are many situations in which services need to exist independently of the process server (e.g., a file server cannot be created on the fly when someone wants to talk to it). An alternative scheme uses a special process called a name server (or directory server), the network equivalent of directory assistance.

33 Initial Connection Protocol
To find the TSAP address corresponding to a given service name (e.g., "time of day"):
- The user sets up a connection to the name server, which listens at a well-known TSAP.
- The user sends a message specifying the service name, and the name server sends back the TSAP address.
- The user releases the connection with the name server and establishes a new connection with the desired service.
This method requires that every newly created service register itself with the name server, giving its service name and the corresponding TSAP.

34 Connection Establishment
Why is establishing a connection not easy? Problems occur when the network can lose, store, and duplicate packets; this behavior causes serious complications. Assume a subnet so congested that:
- acknowledgements hardly ever get back in time, and each packet times out and is retransmitted two or three times;
- the subnet uses datagrams, so every packet follows a different route;
- some packets get stuck in a traffic jam inside the subnet and take a long time to arrive; they are stored in the subnet and pop out much later.
Worst possible scenario: a user establishes a connection with a bank, sends messages telling the bank to transfer a large amount of money to the account of a not-entirely-trustworthy person, and then releases the connection. Suppose each packet in this scenario is duplicated and stored in the subnet. After the connection has been released, all the stored packets pop out of the subnet and arrive at the destination in order, asking the bank to establish a new connection, transfer the money (again), and release the connection. The bank has no way of telling that these are duplicates; it must assume this is a second, independent transaction and transfers the money again. The remainder of this section addresses the problem of delayed duplicates, with special emphasis on algorithms for establishing connections reliably, so that scenarios like this one cannot happen.

35 Connection Establishment
The heart of the problem is the existence of delayed duplicates. The solution is based on killing off aged packets that are still hobbling about: by ensuring that no packet lives longer than some known time, the problem becomes more manageable. Packet lifetime can be restricted to a known maximum using one (or more) of the following techniques:
- Restricted subnet design: any method that prevents packets from looping, combined with some way of bounding congestion delay over the longest possible path.
- A hop counter in each packet: the hop count is initialized to some appropriate value and decremented each time the packet is forwarded. The network protocol simply discards any packet whose hop counter reaches zero.
- Time-stamping each packet: this requires the router clocks to be synchronized, which is itself a nontrivial task unless synchronization is achieved externally to the network (e.g., via GPS or a radio station that broadcasts the precise time periodically).

36 Connection Establishment
In addition to guaranteeing the elimination of timed-out packets, the corresponding acknowledgement packets must also be killed. To ensure the elimination of undesirable duplicates, a time T is introduced, defined as a small multiple of a packet's true maximum lifetime. The multiple is protocol dependent and simply has the effect of making T longer. If a transport entity waits a time T after a packet has been sent, it can be sure that all traces of the packet are gone and that neither it nor its acknowledgement will suddenly appear out of the blue.

37 Three Protocol Scenarios for Establishing a Connection Using a Three-Way Handshake
With packet lifetime bounded, it is possible to devise a foolproof way to establish connections safely: the three-way handshake. The three possible scenarios presented in the figure are discussed next.

38 Connection Establishment
Three-way handshake protocol: the normal setup procedure when host 1 initiates a connection is shown in Fig. 6-11(a):
- Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST TPDU containing it to host 2.
- Host 2 replies with an ACK TPDU acknowledging x and announcing its own initial sequence number, y.
- Host 1 acknowledges host 2's choice of initial sequence number in the first data TPDU that it sends.

39 Connection Establishment
An old duplicate appearing out of nowhere is shown in Fig. 6-11(b):
- The first TPDU is a delayed duplicate CONNECTION REQUEST from an old connection; it arrives at host 2 without host 1's knowledge.
- Host 2 reacts to this TPDU by sending host 1 an ACK TPDU, in effect asking for verification that host 1 was indeed trying to set up a new connection.
- When host 1 rejects host 2's attempt to establish a connection, host 2 realizes that it was tricked by a delayed duplicate and abandons the connection.

40 Connection Establishment
The worst case is when both a delayed CONNECTION REQUEST and a delayed ACK are floating around in the subnet, as shown in Fig. 6-11(c):
- The delayed duplicate CONNECTION REQUEST arrives at host 2 without host 1's knowledge, and host 2 reacts by sending host 1 an ACK TPDU, in effect asking for verification that host 1 was indeed trying to set up a new connection.
- In addition, host 2 proposes y as the initial sequence number for host 2 to host 1 traffic, knowing full well that no TPDUs containing sequence number y, or acknowledgements of y, are still in existence.
- When the second delayed TPDU arrives at host 2, the fact that z has been acknowledged rather than y tells host 2 that this, too, is an old duplicate.

41 Connection Release
Asymmetric release (this is how the telephone system works) may cause loss of data: the disconnection is abrupt, and data in transit can be lost.

42 Connection Release
Symmetric release:
- There is no foolproof solution; a three-way handshake is an adequate one.
- One user sends a DR (DISCONNECTION REQUEST) TPDU to initiate the connection release and starts its timer.
- The recipient sends back a DR TPDU of its own and starts its timer.
- When the original sender receives the DR, it sends back an ACK TPDU and releases the connection.
- Finally, when the ACK TPDU arrives at the recipient, the recipient also releases the connection.
This is the normal case of the three-way handshake.

43 Connection Release
Symmetric release: the scenario in which the final ACK TPDU is lost.

44 Connection Release
Symmetric release: the scenario in which the response TPDU from the receiving host is lost.

45 Connection Release
Symmetric release: the scenario in which the response TPDU from the receiving host is lost and the repeated retransmissions of the DR also fail due to lost TPDUs. After N retries, the sender just gives up and releases the connection; the receiver eventually times out and exits as well.

46 Connection Release
Symmetric release failure scenario: if the initial DR and all N retransmission attempts are lost, the protocol fails, because the receiver cannot release a connection for which it has never received a DR request.
- Suppose the sender is not allowed to give up after N retries; it must then go on forever until it gets a response. If the receiving side is allowed to time out, the sender will indeed go on forever, because no response will ever be forthcoming.
- If the receiving side is not allowed to time out, the protocol hangs.
One way to kill off such half-open connections is to have a rule saying that if no TPDUs have arrived for a certain number of seconds, the connection is automatically disconnected. This way, if one side ever disconnects, the other side will detect the lack of activity and also disconnect.

47 Flow Control and Buffering
Flow control in the transport layer is in some ways similar to flow control in the data link layer, and in other ways quite different.
- Main similarity: a sliding window or other scheme is needed on each connection to keep a fast transmitter from overrunning a slow receiver.
- Main difference: a router has relatively few lines, whereas a host may have numerous connections.

48 Flow Control and Buffering
In the data link layer, the sending side must buffer outgoing frames because they might have to be retransmitted. In the transport layer:
- If the subnet provides datagram service, the sending transport entity must also buffer the TPDUs, because they may have to be retransmitted (just as in the data link layer).
- If the receiver knows that the sender buffers all TPDUs until they are acknowledged, it may or may not dedicate specific buffers to specific connections, as it sees fit. When a TPDU comes in, an attempt is made to dynamically acquire a new buffer; if one is available, the TPDU is accepted, otherwise it is discarded.
- Since the sender is prepared to retransmit TPDUs lost by the subnet, no harm is done by having the receiver drop TPDUs, although some resources are wasted. The sender simply keeps retransmitting until it gets an acknowledgement.

49 Flow Control and Buffering
If the network service is unreliable, the sender must buffer all TPDUs sent. With reliable network service, other trade-offs are possible:
- The sender need not retain copies of the TPDUs it sends, provided the receiver always has buffer space for incoming TPDUs.
- If the receiver cannot guarantee that every incoming TPDU will be accepted, the sender has to buffer anyway: the sender cannot trust the network layer's acknowledgement, because that acknowledgement means only that the TPDU arrived, not that it was accepted.

50 Flow Control and Buffering
Buffer size and organization:
- If most TPDUs are nearly the same size, the buffers can be organized as a pool of identically sized buffers, one TPDU per buffer (chained fixed-size buffers, Fig. (a)).
- If TPDUs vary significantly in size, the buffers can be organized as a chain of variable-sized buffers (Fig. (b)): better memory utilization, but more complicated buffer management.
- A single large circular buffer per connection (Fig. (c)): good memory utilization when all connections are heavily loaded, but poor when some connections are lightly loaded.

51 Flow Control and Buffering
The optimum trade-off between source buffering and destination buffering depends on the type of traffic carried by the connection.
- For low-bandwidth, bursty traffic, such as that produced by an interactive terminal, it is better not to dedicate any buffers but rather to acquire them dynamically at both ends: better to buffer at the sender.
- For file transfer and other high-bandwidth traffic, it is better if the receiver dedicates a full window of buffers, to allow the data to flow at maximum speed: better to buffer at the receiver.

52 Flow Control and Buffering
When buffer space no longer limits the maximum flow, another bottleneck appears: the subnet's carrying capacity. If adjacent routers can exchange at most x packets/sec and there are k disjoint paths between a pair of hosts, the maximum rate of data transfer between the two hosts is kx TPDUs/sec, no matter how much buffer space is available at each end. If the sender transmits too fast (i.e., sends more than kx TPDUs/sec), the subnet will become congested, because it is unable to deliver TPDUs as fast as they arrive. What is needed is a mechanism based on the subnet's carrying capacity rather than on the receiver's buffering capacity: clearly, flow control has to be applied to the sender to prevent it from having too many unacknowledged TPDUs outstanding at once. The solution: a sliding window flow control scheme.

53 Flow Control and Buffering
Sliding window flow control scheme: the sender dynamically adjusts the window size to match the network's carrying capacity. If the network can handle c TPDUs/sec and the cycle time is r (including transmission, propagation, queueing, processing at the receiver, and return of the acknowledgement), the sender's window should be c * r. Adjusting the window size:
- The carrying capacity can be determined by simply counting the number of TPDUs acknowledged during some time period and then dividing by that period. During the measurement, the sender should send as fast as it can, to make sure that the network's carrying capacity, and not a low input rate, is the factor limiting the acknowledgement rate.
- The time required for a transmitted TPDU to be acknowledged can be measured exactly, and a running mean can be maintained.

