Application Protocols

1 Application Protocols
Lecture 12: Application Protocols

2 OSI Model and Communication Protocols
We will build application-layer protocols.
These protocols use TCP or UDP, which sit at the transport layer.
Examples of application protocols:
- HTTP – the World Wide Web uses HTTP to fetch websites
- DNS – resolving hostnames to IP addresses is fundamental for web surfing

3 Communication Requirements
Three requirements must be met by a communication protocol for communication to succeed.
Syntax: deciding on message structure
- A message is the smallest unit transmitted between two machines
Semantics: the commands, and the responses to each command
- HTTP example: command: GET, response: 200 OK
Synchronization: ensuring the order of communication
- Deciding whose turn it is to speak

4 Client-Server Communication Cycle
A client wishes to send requests to a server:
1. Client connects to the server
2. Client sends a request to the server – one or more requests may be sent in sequence
3. Server receives the sequence of requests
4. Server handles each request and prepares a response
5. Server sends a response back to the client for each request received, again in sequence
6. Client may either repeat the cycle or close the connection

5 Protocol Syntax: Message Framing
Data is sent from the source to the destination machine without any separation.
If a client sends two requests one after another over the same connection, the server receives all the data byte by byte.
The received data must be segmented at the destination into its corresponding requests – we expect the server to handle two requests, not one!
The server-side protocol needs to segment the received data into two message objects.
Sender and receiver must agree on the framing method beforehand.
Each message is then handled separately – a response is created and sent back to the client, one response per request received.

6 Message Framing - Example
String message framing: a protocol may decide that a special character denotes the end of one message and the beginning of the next.
- The special character cannot be part of a message! A message must never contain the special character
- Example special character: a line break – \n
- Benefit: allows variable message sizes!
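As a minimal sketch of this idea (the requests `GET /a` and `GET /b` are made-up examples), splitting a received byte stream on the `\n` delimiter recovers the individual messages:

```java
import java.nio.charset.StandardCharsets;

public class FramingDemo {

    // Split a received stream into messages using '\n' as the frame delimiter.
    public static String[] frame(byte[] received) {
        String stream = new String(received, StandardCharsets.UTF_8);
        return stream.split("\n"); // split() drops the delimiter itself
    }

    public static void main(String[] args) {
        // Two requests sent back-to-back arrive as one undifferentiated stream
        byte[] received = "GET /a\nGET /b\n".getBytes(StandardCharsets.UTF_8);
        String[] messages = frame(received);
        System.out.println(messages.length); // 2 messages, not 1
        System.out.println(messages[0]);     // GET /a
        System.out.println(messages[1]);     // GET /b
    }
}
```

Note that this only works because the delimiter never appears inside a message body, which is exactly the restriction stated above.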

7 Binary Data Transmission: Base64 Encoder
Binary data is any data that cannot be encoded as a readable string.
Binary data needs to be encoded before being sent:
- This is required in systems designed to transmit textual data
- Otherwise, sending raw binary data may break the message framing – the frame uses special tags/characters to encapsulate the message itself
Base64 encoder/decoder:
- Converts every three bytes to four ASCII characters
- Disadvantage: data size increases by roughly 33%
- Advantage: binary data can be sent safely, e.g. as an attachment!

8 Base64 - Example
Text:
Man is distinguished, not only by his reason, but by this singular passion from other animals, which is a lust of the mind, that by a perseverance of delight in the continued and indefatigable generation of knowledge, exceeds the short vehemence of any carnal pleasure.
Base64:
TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBi
eSB0aGlzIHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcy
BhIGx1c3Qgb2YgdGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaW
dodCBpbiB0aGUgY29udGludWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24g
b2Yga25vd2xlZGdlLCBleGNlZWRzIHRoZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNh
cm5hbCBwbGVhc3VyZS4=
Anything can be converted using Base64, even text!
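The encoding can be reproduced with Java's built-in `java.util.Base64` (a minimal sketch; the three-byte binary input is arbitrary):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        // Every 3 input bytes become 4 ASCII characters: "Man" -> "TWFu"
        String encoded = Base64.getEncoder()
                .encodeToString("Man".getBytes(StandardCharsets.US_ASCII));
        System.out.println(encoded); // TWFu

        // Decoding recovers the original bytes exactly
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(new String(decoded, StandardCharsets.US_ASCII)); // Man

        // Arbitrary binary data is also safe: 3 bytes -> 4 chars (~33% larger)
        byte[] binary = {(byte) 0x00, (byte) 0xFF, (byte) 0x7F};
        System.out.println(Base64.getEncoder().encodeToString(binary).length()); // 4
    }
}
```

The 3-to-4 ratio is where the ~33% size overhead mentioned on the previous slide comes from.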

9 Sending and Receiving Data via Sockets
Sending data:
- A socket has an internal buffer used for sending data – the output stream
- Using write(), data is copied from our array to the internal buffer
- The RTE then sends bytes from the buffer to the network
Receiving data:
- A socket has an internal buffer used for receiving data – the input stream
- The RTE retrieves bytes from the network and saves them into the internal buffer
- Using read(), data is copied from the internal buffer to our array
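The copy-to-buffer semantics can be sketched without a network, using Java's piped streams as a stand-in for the socket's internal buffer (this is an analogy, not actual socket code):

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

public class BufferDemo {
    public static void main(String[] args) throws IOException {
        // The pipe plays the role of the socket's internal buffer
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        // write(): our bytes are copied into the internal buffer
        out.write("hi".getBytes(StandardCharsets.UTF_8));

        // read(): bytes are copied from the internal buffer into our array
        byte[] buf = new byte[2];
        int n = in.read(buf);
        System.out.println(n);                                       // 2
        System.out.println(new String(buf, StandardCharsets.UTF_8)); // hi

        in.close();
        out.close();
    }
}
```

The key point carried over to real sockets: write() and read() move bytes between our arrays and a buffer; the actual network transfer is done asynchronously by the runtime.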

10 Java Blocking Sockets
Blocking sockets:
- Control does not return to the calling thread until the operation completes
- This kind of socket is used in thread-per-client solutions
accept()
- The calling thread is blocked until a new connection is established
- If no connection request arrives from a new client – the thread stays blocked!
write(byte[] buffer)
- The calling thread is blocked until the whole buffer is sent to the network
int read()
- The calling thread is blocked until a byte is received
- If the client has not sent data yet – the server thread is blocked!

11 Multi-Client Server: Task Separation
- Accepting new connection requests
- Receiving data from the client
- Decoding the received data from bytes to text
- Segmenting the text into requests: one complete message is one request
- Handling each request and creating one response per request
- Sending responses back to the client:
  - Encoding the response message to bytes
  - Sending the bytes to the client

12 Multi-Client Server Task Separation: Modules
MessageEncoderDecoder: handles conversion from bytes to String, and from String to bytes.
Contains:
- byte[] bytes – stores the data received from the client
MessageEncoderDecoder API:
- T decodeNextByte(byte nextByte);
  Converts one byte to its corresponding text value and stores it in the bytes array.
  If the additional data completes a message, the message is removed from the bytes array and returned; otherwise null is returned.
- byte[] encode(T message);
  Converts a message to its corresponding byte values, returned as an array of bytes.

13 Multi-Client Server Task Separation: Modules
MessagingProtocol: processes a received message and creates the corresponding response; changes the termination flag if a termination message is received.
Contains:
- boolean shouldTerminate – initialized to false
MessagingProtocol API:
- T process(T message);
  Receives a message as input and returns a response.
  If the received message contains termination data, the termination flag is set to true.
- boolean shouldTerminate();
  Returns true if a termination message was received from the client.

14 Multi-Client Server Task Separation: Modules
ConnectionHandler: implements the complete communication flow for one client.
Each ConnectionHandler is a Runnable object executed in its own thread!
Contains:
- MessageEncoderDecoder
- MessagingProtocol
- Socket – to send and receive data from a specific client
ConnectionHandler flow – while not terminated:
1. read() one byte from the Socket
2. Decode the byte using decodeNextByte()
3. If the additional data does not complete a message, read another byte
4. Process the complete message to create a response using process()
5. Encode the response to bytes using encode()
6. write() the response bytes to the Socket

15 Code Example: “echo” Server
Protocol [MessagingProtocol]: we wish to implement a server that “echoes” received messages as follows:
- Example request: “hello”
- The response will contain the “echo” of “hello”: “[time] hello .. lo .. lo ..”
- Termination: once “bye” is received, the communication with the client is terminated
Message definition [MessageEncoderDecoder]:
- The server considers a message complete once a line-break character is received.

16 MessageEncoderDecoder: Code Example
public class LineMessageEncoderDecoder implements MessageEncoderDecoder<String> {

    private byte[] bytes = new byte[1024];
    private int len = 0;

    public String decodeNextByte(byte nextByte) {
        if (nextByte == '\n') {
            return popString();
        }
        pushByte(nextByte);
        return null; // not a complete line yet
    }

    public byte[] encode(String message) {
        return (message + "\n").getBytes(); // uses UTF-8 by default
    }

17 MessageEncoderDecoder: Private Methods
    private void pushByte(byte nextByte) {
        if (len >= bytes.length) {
            bytes = Arrays.copyOf(bytes, len * 2);
        }
        bytes[len++] = nextByte;
    }

    private String popString() {
        String result = new String(bytes, 0, len, StandardCharsets.UTF_8);
        len = 0;
        return result;
    }
}
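Putting the two slides together, here is a self-contained sketch of the decoder in action. Feeding it the stream "ab\ncd\n" byte by byte yields two framed messages (the input string and the `decodeAll` driver are made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LineDecoderDemo {
    private byte[] bytes = new byte[1024];
    private int len = 0;

    // Returns a complete message when '\n' is seen, otherwise null
    public String decodeNextByte(byte nextByte) {
        if (nextByte == '\n') {
            return popString();
        }
        pushByte(nextByte);
        return null; // not a complete line yet
    }

    private void pushByte(byte nextByte) {
        if (len >= bytes.length) {
            bytes = Arrays.copyOf(bytes, len * 2); // grow the buffer
        }
        bytes[len++] = nextByte;
    }

    private String popString() {
        String result = new String(bytes, 0, len, StandardCharsets.UTF_8);
        len = 0; // reset for the next message
        return result;
    }

    // Drive the decoder over a whole stream, collecting framed messages
    public static List<String> decodeAll(byte[] stream) {
        LineDecoderDemo decoder = new LineDecoderDemo();
        List<String> messages = new ArrayList<>();
        for (byte b : stream) {
            String msg = decoder.decodeNextByte(b);
            if (msg != null) {
                messages.add(msg); // one complete, framed message
            }
        }
        return messages;
    }

    public static void main(String[] args) {
        byte[] stream = "ab\ncd\n".getBytes(StandardCharsets.UTF_8);
        System.out.println(decodeAll(stream)); // [ab, cd]
    }
}
```

This is exactly the byte-at-a-time loop the ConnectionHandler performs, only without a socket.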

18 MessagingProtocol: Code Example
public class EchoProtocol implements MessagingProtocol<String> {

    private boolean shouldTerminate = false;

    public String process(String msg) {
        shouldTerminate = "bye".equals(msg);
        System.out.println("[" + LocalDateTime.now() + "]: " + msg);
        return createEcho(msg);
    }

    private String createEcho(String message) {
        String echoPart = message.substring(Math.max(message.length() - 2, 0), message.length());
        return message + " .. " + echoPart + " .. " + echoPart + " ..";
    }

    public boolean shouldTerminate() {
        return shouldTerminate;
    }
}

19 ConnectionHandler: Code Example
public class ConnectionHandler<T> implements Runnable {

    private final MessagingProtocol<T> protocol;
    private final MessageEncoderDecoder<T> encdec;
    private final Socket sock;

    public ConnectionHandler(Socket sock, MessageEncoderDecoder<T> reader, MessagingProtocol<T> protocol) {
        this.sock = sock;
        this.encdec = reader;
        this.protocol = protocol;
    }

20 ConnectionHandler: Code Example
    @Override
    public void run() {
        try (Socket sock = this.sock; // for automatic closing
             BufferedInputStream in = new BufferedInputStream(sock.getInputStream());
             BufferedOutputStream out = new BufferedOutputStream(sock.getOutputStream())) {
            int read;
            while (!protocol.shouldTerminate() && (read = in.read()) >= 0) {
                T nextMessage = encdec.decodeNextByte((byte) read);
                if (nextMessage != null) {
                    T response = protocol.process(nextMessage);
                    if (response != null) {
                        out.write(encdec.encode(response));
                        out.flush();
                    }
                }
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}

21 Multi-Client Server: Base Server Code
public abstract class BaseServer {

    private final int port;
    private final Supplier<MessagingProtocol> protocolFactory;
    private final Supplier<MessageEncoderDecoder> encdecFactory;

    public BaseServer(
            int port,
            Supplier<MessagingProtocol> protocolFactory,
            Supplier<MessageEncoderDecoder> encdecFactory) {
        this.port = port;
        this.protocolFactory = protocolFactory;
        this.encdecFactory = encdecFactory;
    }

22 Multi-Client Server: Base Server Code
    public void serve() {
        try (ServerSocket serverSock = new ServerSocket(port)) {
            while (!Thread.currentThread().isInterrupted()) {
                Socket clientSock = serverSock.accept();
                ConnectionHandler handler = new ConnectionHandler(
                        clientSock,
                        encdecFactory.get(),
                        protocolFactory.get());
                execute(handler);
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
        System.out.println("server closed!!!");
    }

    protected abstract void execute(ConnectionHandler handler);
}

23 How to implement execute method?
Three examples:
- Single thread – handles one connection at any given time
- Thread per connection (per client)
- Constant number of threads – handles a predefined number of clients concurrently
We will discuss their performance in four categories:
- Scalability
- Accept latency
- Reply latency
- Resource efficiency

24 Measuring Server Performance
Scalability: the ability to serve a larger number of concurrent clients without modifying the code – only by increasing hardware power.
- We expect that doubling the hardware power doubles the server's performance
- Once this stops working, we are approaching the upper limit
Accept latency: the time from the moment a connection request arrives until the connection is established.
Reply latency: the time the client must wait until the response is received.
- In other words, the time it takes the server to fetch the request, process it, create a response and send the response back to the client
Resource efficiency: the resources the server needs in order to operate – RAM, CPU power, and disk storage.

25 Server: Single Thread Solution
Implementing execute by calling ConnectionHandler's run() method, handling clients sequentially, one by one.
This solution allows only one client to be connected to the server at any given time!
It can be suitable for cases where clients send a request, expect a fast response, then disconnect. It is not used in practice, however.

26 Server: Single Thread Code
public class SingleThreadedServer extends BaseServer {

    public SingleThreadedServer(
            int port,
            Supplier<MessagingProtocol> protocolFactory,
            Supplier<MessageEncoderDecoder> encoderDecoderFactory) {
        super(port, protocolFactory, encoderDecoderFactory);
    }

    protected void execute(ConnectionHandler handler) {
        handler.run();
    }
}

27 Server: Single Thread Performance
Scalability:
- Zero scalability! It handles one client; adding more hardware will not change this fact
Accept latency:
- For the first client – low latency
- From the second client onwards – high latency; more clients get stuck in the queue, and the more concurrent clients, the worse the latency
Reply latency:
- Since we handle one client at a time, creating a response is fast – all the server's resources are concentrated on serving one client! Low latency
Resource efficiency:
- High – since we only serve one client, it is a non-issue

28 Server: Thread Per Client Solution
Implementing execute by handing the ConnectionHandler object to its own thread for execution.
Handles concurrent connections, since each one is handled in its own thread.
The more threads in the system, the more context switching is required, and the worse the performance.
Handling around 10,000 concurrent connections on such a server is the scale known as the C10K problem.
Propose a simple way to crash this type of server!
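One classic answer to the question above (sketched here as a hypothetical illustration, not something to run against real systems): open many connections and never send any data. On a thread-per-client server, each idle connection pins a server thread blocked in read():

```java
import java.io.IOException;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class IdleConnectionDemo {

    // Open n connections and keep them idle; on a thread-per-client server,
    // each one occupies a server-side thread blocked in read().
    public static List<Socket> openIdleConnections(String host, int port, int n) throws IOException {
        List<Socket> sockets = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            sockets.add(new Socket(host, port)); // connect, then send nothing
        }
        return sockets;
    }
}
```

With n in the tens of thousands, such a server exhausts its thread and stack-memory budget even though no request is ever processed.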

29 Server: Thread Per Client Code
public class ThreadPerClientServer extends BaseServer {

    public ThreadPerClientServer(
            int port,
            Supplier<MessagingProtocol> protocolFactory,
            Supplier<MessageEncoderDecoder> encoderDecoderFactory) {
        super(port, protocolFactory, encoderDecoderFactory);
    }

    protected void execute(ConnectionHandler handler) {
        new Thread(handler).start();
    }
}

30 Server: Thread Per Client Performance
Scalability:
- Up to a limit – around 10K concurrent connections
Accept latency:
- Low – there is a designated thread to accept new connections!
- Still suffers from context switching: the more threads in the system, the worse the latency
Reply latency:
- Low – but also suffers from context switching; increasing the number of threads worsens the latency
Resource efficiency:
- Each connection requires a thread object, along with its stack memory
- Even while blocked, threads hold these resources for as long as the connection is active – low efficiency!

31 Server: Thread-Pool Solution
Since thread-per-client servers in this plain form are prone to attacks, a thread-pool solution is proposed.
This solution uses a limited-size thread-pool object that handles thread management.
It still has the same scalability issues as thread-per-client.
Advantages:
- No need to manage threads – they are managed by the thread pool
- No performance deterioration – the number of concurrently handled connections is limited in advance

32 Server: Thread-Pool Code
public class FixedThreadPoolServer extends BaseServer {

    private final ExecutorService pool;

    public FixedThreadPoolServer(
            int numThreads,
            int port,
            …………
        this.pool = Executors.newFixedThreadPool(numThreads);
    }

    public void serve() {
        super.serve();
        pool.shutdown();
    }

    protected void execute(ConnectionHandler handler) {
        pool.execute(handler);
    }
}

33 Server Performance Chart
                      Single Thread   Thread Per Client   Thread Pool
Scalability           None            Low                 Low
Accept Latency        High            Low                 Low up to the limit, high beyond it
Reply Latency         Low             Low                 Low
Resource Efficiency   High            Low                 Low
The upper limit of thread-per-client is exactly the same as that of the thread-pool solution.


