
Application Protocols


1 Application Protocols
Lecture 12, Part 1: Application Protocols

2 OSI Model and Communication Protocols
We will build application-layer protocols. These protocols use TCP or UDP, which are found at the transport layer. Examples of application protocols: HTTP – the World Wide Web uses the HTTP protocol to fetch websites; DNS – resolving hostnames to IP addresses is fundamental for web surfing.

3 Communication Requirements
Three requirements must be enforced in a communication protocol for successful communication. Syntax: deciding on the message structure; a message is the smallest unit transmitted between two machines. Semantics: the commands, and the responses to each command. HTTP example: command: GET; response: 200 OK. Synchronization: ensuring the order of communication – deciding whose turn it is to speak.

4 TCP 3-Way Handshake – establishing/tearing down TCP socket connections
The handshake lets two computers attempting to communicate negotiate a TCP socket connection over the network; both ends can initiate and negotiate separate TCP socket connections at the same time.

5 TCP 3-Way Handshake (SYN,SYN-ACK,ACK)

6 What happens behind accept()
A sends a SYNchronize packet to B. B receives A's SYN and sends a SYNchronize-ACKnowledgement. A receives B's SYN-ACK and sends an ACKnowledge. B receives the ACK, and the TCP socket connection is ESTABLISHED.
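For illustration, the handshake is carried out by the operating system: it happens when the client calls connect() (implicit in Java's Socket constructor), and accept() simply hands the server an already-established connection. A minimal Java sketch (the port 7777 and the class name are arbitrary choices for the example, not from the slides):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    public static void main(String[] args) throws IOException {
        // Server side: listen on a port. The 3-way handshake is completed by
        // the OS; accept() returns an already-ESTABLISHED connection.
        try (ServerSocket serverSock = new ServerSocket(7777)) {
            // Client side: constructing the Socket sends SYN, receives SYN-ACK
            // and replies with ACK before the constructor returns.
            try (Socket clientSide = new Socket("localhost", 7777);
                 Socket serverSide = serverSock.accept()) {
                System.out.println("connection established: " + serverSide.getRemoteSocketAddress());
            }
        }
    }
}
```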

7 Client-Server Communication Cycle
A client wishes to send requests to a server. The client connects to the server and sends a request; one or more requests may be sent in sequence. The server receives the sequence of requests, handles each request and prepares a response, and sends a response back to the client for each request received, in the same order. The client may then either repeat the cycle or close the connection.

8 Message Format
Protocol syntax: the message is the atomic unit of data exchanged throughout the protocol. Think of a message as a letter; for now we concentrate on the delivery mechanism.

9 Message Framing – for streaming protocols (TCP)
We need to separate different messages: all messages are sent on the same stream, one after the other, so the receiver must be able to distinguish between them. Solution: message framing – taking the content of the message and encapsulating it in a frame (a letter in an envelope).

10 Framing – what is it good for?
The sender and receiver agree on the framing method beforehand; framing is part of the message format/protocol. It enables the receiver to discover, in a stream of bytes, where a message starts and ends.

11 Framing – how?
Simple framing protocols for strings:
Option 1 – a special FRAMING character (e.g., a line break): each message is framed by two FRAMING characters, at its beginning and end, and the message itself must not contain the FRAMING character.
Option 2 – special tags at the start and end, e.g. the <begin> / <end> strings; the message body must avoid containing <begin> / <end>.
Option 3 – a variable-length message format: a special tag marks the start of a frame, and the message carries information about its own length.
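As a concrete illustration of the third option, here is a minimal sketch of length-prefixed framing, a simplified variant that uses only a length field and omits the start tag; the 4-byte big-endian length field and the class name are assumptions for the example, not something specified in the slides:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Length-prefixed framing: each frame is a 4-byte length followed by the payload.
public class LengthPrefixedFraming {

    // Sender side: write the payload length, then the payload itself.
    public static void writeFrame(DataOutputStream out, String message) throws IOException {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length); // 4-byte big-endian length field
        out.write(payload);
        out.flush();
    }

    // Receiver side: read the length, then exactly that many payload bytes.
    public static String readFrame(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload); // blocks until the whole frame has arrived
        return new String(payload, StandardCharsets.UTF_8);
    }
}
```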

12 Textual Data
Many protocols exchange data in textual form: strings of characters in some character encoding (e.g., UTF-8). This is very easy to document and debug – just print the messages. Limitation: it is difficult to send non-textual data. How do we send a picture? A video? An audio file?

13 Binary Data
Non-textual data is called binary data. All data is eventually encoded in a "binary" format, as a sequence of bits, so here "binary data" means data that cannot be encoded as a readable string of characters.

14 Binary Data Transmission: Base64 Encoder
Binary data – any data that cannot be encoded as a readable string – needs to be encoded before it is sent as well. This is required in systems that were designed to send textual data: sending raw binary data may break the message frame, since the frame contains special tags/characters that encapsulate the message itself. Base64 encoder/decoder: converts every three bytes into four ASCII characters. Disadvantage: the data size is increased by about 33%. Advantage: binary data can be sent safely, e.g. as an attachment!
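For illustration, Java's standard java.util.Base64 class provides such an encoder/decoder. A minimal sketch (the sample bytes are arbitrary – here, the start of a PNG file header):

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        // Arbitrary "binary" payload, e.g. the first bytes of an image file.
        byte[] binary = {(byte) 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A};

        // Encode: every 3 bytes become 4 ASCII characters (safe to frame as text).
        String text = Base64.getEncoder().encodeToString(binary);
        System.out.println("encoded: " + text);

        // Decode back to the original bytes on the receiving side.
        byte[] decoded = Base64.getDecoder().decode(text);
        System.out.println("round-trip ok: " + Arrays.equals(binary, decoded));
    }
}
```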

15 Base64 – Example
Text: Man is distinguished, not only by his reason, but by this singular passion from other animals, which is a lust of the mind, that by a perseverance of delight in the continued and indefatigable generation of knowledge, exceeds the short vehemence of any carnal pleasure.
Base64: TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlzIHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2YgdGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGludWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRoZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=
Anything can be converted using Base64, even text!

16 Protocol and Server Separation
Code reuse is one of our design goals! We want a generic implementation of the server, which handles all the communication details, and a generic protocol interface that handles incoming messages, implements the protocol's semantics, and generates the reply messages.

17 Protocol-Server Separation: protocol object
The protocol object is in charge of implementing the expected behavior of our server: what actions should be performed upon the arrival of a request. Requests may be correlated with one another, meaning the protocol should keep an appropriate state per client, e.g. authentication (logins).

18 A software architecture that separates tasks into separate interfaces

19 The actions that need to be performed by the server
Accept new connections. Receive new bytes from the connected client. Parse the bytes into messages (called "de-serialization", "unframing", or "decoding"). Dispatch the message to the right method to execute whatever the request specifies. Send back the answer.

20 Interfaces & classes We define the following interfaces:
ConnectionHandler: handles the incoming messages/session from the client; holds the Socket, the MessageEncoderDecoder and the MessagingProtocol instances. MessageEncoderDecoder: implements the protocol's syntax – encoding and decoding messages to and from bytes. MessagingProtocol: implements the protocol's semantics; it receives its input from the MessageEncoderDecoder (this way we can replace the messaging format but keep the protocol), handles the received messages and generates the appropriate responses.

21 Multi-Client Server Tasks Separation: Modules
MessageEncoderDecoder: handles the conversion from bytes to a message (e.g., a String) and back (parsing the streams). Contains: byte[] bytes – stores the data received from the client so far. MessageEncoderDecoder API: T decodeNextByte(byte nextByte) – converts one byte to its corresponding value and stores it in the bytes array; if the additional data completes a message, the message is removed from the bytes array and returned, otherwise null is returned. byte[] encode(T message) – converts a message to its corresponding byte values, returned as an array of bytes.

22 MessageEncoderDecoder
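A sketch of the interface implied by the API above (package and exact naming in the course code may differ):

```java
// Syntax layer: turns a raw byte stream into messages of type T and back.
public interface MessageEncoderDecoder<T> {

    // Feed the next byte read from the socket.
    // Returns a complete message once enough bytes have arrived, otherwise null.
    T decodeNextByte(byte nextByte);

    // Serialize a message into the bytes that should be written to the socket.
    byte[] encode(T message);
}
```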

23 Multi-Client Server Tasks Separation: Modules
MessagingProtocol: processes a received message and creates the corresponding response; sets the termination flag if a termination message is received. Contains: boolean shouldTerminate – initialized to false. MessagingProtocol API: T process(T message) – receives a message as input and returns a response; if the received message contains termination data, the termination flag is set to true. boolean shouldTerminate() – returns true once a termination message has been received from the client.

24 MessagingProtocol
We allow any type of message (T) to be used.
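A sketch of the interface, following the API described on the previous slide (exact naming may differ from the course code):

```java
// Semantics layer: decides what to do with each decoded message.
public interface MessagingProtocol<T> {

    // Handle a single message and produce the response to send back
    // (may be null if no response is needed).
    T process(T msg);

    // True once a termination message was processed and the connection should close.
    boolean shouldTerminate();
}
```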

25 Multi-Client Server Tasks Separation: Modules
ConnectionHandler (cont. on the next slide): implements the complete communication flow for one client. Each ConnectionHandler is a Runnable object, typically executed in its own thread! Contains: a MessageEncoderDecoder, a MessagingProtocol, and the Socket used to send and receive data from a specific client. ConnectionHandler flow – while not terminated: read() one byte from the Socket; decode the byte using decodeNextByte(); if the additional data does not complete a message, read another byte; once a message is complete, process it to create a response using process(); encode the response to bytes using encode(); write() the response bytes to the Socket.

26 ConnectionHandler Generic – works with any protocol.
Receives the socket (sock) from the server (the output of accept()).

27
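A minimal sketch of such a handler, following the flow above (the class name, buffering and error handling here are simplifications and assumptions, not the course's exact code):

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class BlockingConnectionHandler<T> implements Runnable {

    private final Socket sock;
    private final MessageEncoderDecoder<T> encdec;
    private final MessagingProtocol<T> protocol;

    public BlockingConnectionHandler(Socket sock,
                                     MessageEncoderDecoder<T> encdec,
                                     MessagingProtocol<T> protocol) {
        this.sock = sock;
        this.encdec = encdec;
        this.protocol = protocol;
    }

    @Override
    public void run() {
        try (Socket s = sock; // closes the socket when the handler finishes
             InputStream in = new BufferedInputStream(s.getInputStream());
             OutputStream out = new BufferedOutputStream(s.getOutputStream())) {

            int read;
            // Read one byte at a time until the stream ends or the protocol terminates.
            while (!protocol.shouldTerminate() && (read = in.read()) >= 0) {
                T message = encdec.decodeNextByte((byte) read); // syntax: framing/decoding
                if (message != null) {                          // a complete message arrived
                    T response = protocol.process(message);     // semantics: compute the reply
                    if (response != null) {
                        out.write(encdec.encode(response));     // encode and send it back
                        out.flush();
                    }
                }
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
```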

28 Code Example: “echo” Server
Echo server: the server sends back a copy of the data it received. Message definition [MessageEncoderDecoder]: the server considers a message complete once a line-break character ('\n') is received.

29

30
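A sketch of a line-based encoder/decoder for the echo server (UTF-8 and the initial buffer size are assumptions):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Decodes '\n'-framed UTF-8 strings from a byte stream, and encodes replies the same way.
public class LineMessageEncoderDecoder implements MessageEncoderDecoder<String> {

    private byte[] bytes = new byte[1 << 10]; // accumulated bytes of the current message
    private int len = 0;

    @Override
    public String decodeNextByte(byte nextByte) {
        if (nextByte == '\n') {               // framing character: the message is complete
            String message = new String(bytes, 0, len, StandardCharsets.UTF_8);
            len = 0;                          // reset the buffer for the next message
            return message;
        }
        if (len >= bytes.length) {            // grow the buffer if needed
            bytes = Arrays.copyOf(bytes, len * 2);
        }
        bytes[len++] = nextByte;
        return null;                          // message not complete yet
    }

    @Override
    public byte[] encode(String message) {
        // Append the framing character so the other side can detect the end of the message.
        return (message + "\n").getBytes(StandardCharsets.UTF_8);
    }
}
```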

31 MessagingProtocol implementation for “echo server”
Protocol [MessagingProtocol]: when the server receives a message, it prints it on the screen (on the server side) together with the time it was received, and then returns it to the sender while repeating the last two characters a couple of times. Example: a request containing "hello" yields the "echo" response: "[time] hello .. lo .. lo ..". Termination: once "bye" is received, the communication with the client is terminated.

32

33
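A sketch of the echo MessagingProtocol described above (the exact echo/timestamp formatting is an assumption; only the behavior follows the slide):

```java
import java.time.LocalDateTime;

// Echoes each line back with its last two characters repeated; terminates on "bye".
public class EchoProtocol implements MessagingProtocol<String> {

    private boolean shouldTerminate = false;

    @Override
    public String process(String msg) {
        shouldTerminate = "bye".equals(msg);                        // termination message
        System.out.println("[" + LocalDateTime.now() + "] " + msg); // server-side log with time
        return createEcho(msg);
    }

    @Override
    public boolean shouldTerminate() {
        return shouldTerminate;
    }

    private String createEcho(String message) {
        // Repeat the last two characters of the message a couple of times.
        String echoPart = message.substring(Math.max(message.length() - 2, 0));
        return message + " .. " + echoPart + " .. " + echoPart + " ..";
    }
}
```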

34 Generic Base Server Implementation
The actual server – an object that listens for new connections and assigns them to connection handlers.

35

36 Some Notes
Supplier – an interface in Java with a single non-default method, get(). A factory is a supplier of objects. Our TCP server needs to create a new protocol and encoder/decoder for every connection it receives, but since it is generic it does not know how to construct such objects. The problem is solved with factories: the server receives factories in its constructor and uses them to create those objects for it.
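A sketch of such a generic base server, using the supplier factories just described; the abstract execute() hook is left to the concurrency models discussed next, and the details are simplified relative to the course code:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.function.Supplier;

// Generic server: accepts connections and hands each one to a ConnectionHandler.
// How the handler is executed (same thread, new thread, thread pool) is left to subclasses.
public abstract class BaseServer<T> implements Runnable {

    private final int port;
    private final Supplier<MessagingProtocol<T>> protocolFactory;
    private final Supplier<MessageEncoderDecoder<T>> encdecFactory;

    public BaseServer(int port,
                      Supplier<MessagingProtocol<T>> protocolFactory,
                      Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        this.port = port;
        this.protocolFactory = protocolFactory;
        this.encdecFactory = encdecFactory;
    }

    @Override
    public void run() {
        try (ServerSocket serverSock = new ServerSocket(port)) {
            while (!Thread.currentThread().isInterrupted()) {
                Socket clientSock = serverSock.accept(); // blocks until a client connects
                // A fresh protocol + encoder/decoder per connection, created via the factories.
                BlockingConnectionHandler<T> handler = new BlockingConnectionHandler<>(
                        clientSock, encdecFactory.get(), protocolFactory.get());
                execute(handler); // the concurrency model decides how to run it
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    // Decided by the concrete concurrency model (see the server models below).
    protected abstract void execute(BlockingConnectionHandler<T> handler);
}
```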

37 Concurrency Models of TCP Servers
A TCP server should strive to optimize the following quality criteria. Scalability: the capability to serve a large number of concurrent clients. Low accept latency: a short wait time until a connection is accepted. Low reply latency: a short wait time for a reply after a message is received. High efficiency: using few resources on the server host (RAM, number of threads, CPU usage).

38 Concurrency models To obtain good quality, a TCP server will most often use multiple threads. We will now investigate three simple models of concurrency for servers. Single thread. Thread per client. Constant number of threads.

39 Server Model 1: Single Thread
One thread does everything: accepting a new client and handling its requests, by calling the run() method of the passive ConnectionHandler object directly (no new thread is started).

40 Server Model 1: Single Thread
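A sketch of the single-threaded model on top of the BaseServer sketched above: execute() simply runs the handler on the accepting thread.

```java
import java.util.function.Supplier;

// Single-threaded server: the accepting thread also serves the client,
// so no new connection can be accepted until the current client disconnects.
public class SingleThreadedServer<T> extends BaseServer<T> {

    public SingleThreadedServer(int port,
                                Supplier<MessagingProtocol<T>> protocolFactory,
                                Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        super(port, protocolFactory, encdecFactory);
    }

    @Override
    protected void execute(BlockingConnectionHandler<T> handler) {
        handler.run(); // run directly on the accepting thread
    }
}
```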

41 Single Thread Model: Quality
No scalability: at any given time, it serves only one client. High accept latency: a second client must wait until the first client disconnects. Low reply latency: all resources are concentrated on serving one client. Good efficiency: the server uses exactly the resources needed to serve one client. Suitable only when the processing time is small (e.g., echo/line-print servers).

42 Server Model 2: Thread per Client
Assigns a new thread to each connected client: in execute(), the server allocates a new thread and invokes start() on it, with the Runnable ConnectionHandler object as its task.

43
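A sketch of the thread-per-client model (same BaseServer as sketched above, different execute()):

```java
import java.util.function.Supplier;

// Thread-per-client server: every connection gets its own dedicated thread.
public class ThreadPerClientServer<T> extends BaseServer<T> {

    public ThreadPerClientServer(int port,
                                 Supplier<MessagingProtocol<T>> protocolFactory,
                                 Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        super(port, protocolFactory, encdecFactory);
    }

    @Override
    protected void execute(BlockingConnectionHandler<T> handler) {
        new Thread(handler).start(); // serve this client on a fresh thread
    }
}
```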

44 Model Quality: Scalability
Scalability: the server can serve several concurrent clients, up to the maximum number of threads that can run in the process. RAM: each thread allocates a stack and thus consumes RAM, which limits approximately how many threads can be active in a single process. The process does not defend itself – it keeps creating new threads – which is dangerous for the host.

45 Model Quality: Latency
Low accept latency: the time from one accept() to the next is the time needed to create a new thread, which is short compared to the delay between incoming client connections. Reply latency: the server's resources are spread among the concurrent connections; as long as there is a reasonable number of active connections (~hundreds), the requested load on CPU and RAM remains relatively low.

46 Model Quality: Efficiency
Low efficiency: the server creates a full thread per connection, while a connection may be bound by input/output operations. A ConnectionHandler thread that is blocked waiting for I/O still consumes the thread's resources (RAM and a thread).

47 Server Model 3: Constant Number of Threads
A constant number of threads, e.g. 10 (provided by Java's Executor framework). The Runnable ConnectionHandler object is added to the task queue of a thread-pool executor.

48 Model Quality
Avoids crashing the host when too many clients connect at the same time. Up to N concurrent client connections, the server behaves like the "thread-per-connection" model; above N, the accept latency grows. Scalability is limited to the number of concurrent connections we believe we can support.

49
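A sketch of the fixed-thread-pool model; the pool size of 10 follows the slides, and ExecutorService/Executors are Java's standard thread-pool APIs (pool shutdown is omitted for brevity):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Fixed-thread-pool server: at most numThreads clients are served concurrently;
// additional handlers wait in the executor's task queue.
public class FixedThreadPoolServer<T> extends BaseServer<T> {

    private final ExecutorService pool;

    public FixedThreadPoolServer(int numThreads, int port,
                                 Supplier<MessagingProtocol<T>> protocolFactory,
                                 Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        super(port, protocolFactory, encdecFactory);
        this.pool = Executors.newFixedThreadPool(numThreads); // e.g., 10 threads
    }

    @Override
    protected void execute(BlockingConnectionHandler<T> handler) {
        pool.execute(handler); // queue the handler; a pool thread will run it
    }
}
```

For example, an echo server using this model could be started with new FixedThreadPoolServer<String>(10, 7777, EchoProtocol::new, LineMessageEncoderDecoder::new).run(), where port 7777 is an arbitrary choice.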

