Reactor Design Pattern

Presentation on theme: "Reactor Design Pattern"— Presentation transcript:

1 Reactor Design Pattern
Lecture 13: Reactor Design Pattern

2 Overview
Blocking sockets and their impact on server scalability.
Non-blocking IO in Java: the java.nio package.
- Involves complications due to its asynchronous nature.
- Impact on message parsing algorithms.
The Reactor design pattern:
- A generic server, more scalable than our earlier solutions.

3 Sockets: internal implementation
To understand how non-blocking operations work, we first need to understand how the RTE manages IO internally.
There is a buffer associated with each socket.
Example: writing to a socket:
- The RTE copies bytes to the internal buffer.
- The RTE then sends bytes from the buffer to the network.
When we write, there is no guarantee that the buffer has room; if it does not, we wait (block).

4 Blocking vs Non-Blocking IO
Blocking read: if a thread invokes read() and the internal buffer is empty, the thread is blocked until the read completes successfully.
Non-blocking read: check if the socket has some data available to read, read the available data without blocking, and return any amount of data.
- The RTE copies the bytes available in the socket's input buffer to a process buffer, and returns the number of copied bytes.
- If zero bytes are read, an empty buffer is returned!

5 Java Non-Blocking Sockets
Blocking write: if the output buffer is full, the process invoking write() is blocked until the output buffer has enough free space for the data.
Non-blocking write: check if the socket can send some data, write without blocking, and return immediately, even if not all of the data was written!
- The RTE copies as many bytes as possible and returns the number of bytes successfully written.
- Writing zero bytes is a successful invocation; it means the buffer is full!

6 Java Non-Blocking Sockets
Blocking accept: the process holds (blocks) until there is a connection.
Non-blocking accept: check if a new connection is requested; if so, accept it, otherwise return immediately.

7 RTE perspective
When do we try? How many times? When? We need someone to notify us!
We partition the solution into two logical parts:
- Readiness notification.
- Non-blocking input/output.
Modern RTEs supply both mechanisms:
- Is data available for read() on the socket?
- Is the socket ready to send some data?
- Is there a new connection pending on the socket?

8 Disadvantages of Thread Per Client
It's wasteful:
- Creating a new thread is relatively expensive.
- Each thread requires a fair amount of memory.
- Threads are blocked most of the time waiting for network IO.
It's not scalable:
- The server can't grow to accommodate hundreds of concurrent requests.
- It's also vulnerable to Denial of Service attacks.
Poor availability:
- It takes a long time to create a new thread for each new client.
- The response time degrades as the number of clients rises.
Solution: the Reactor pattern is another (better) design for handling several concurrent clients.

9 Reactor Pattern – the idea
Based on the observation that if threads do not wait for network IO, a single thread can easily manage thousands of client requests alone.
Uses non-blocking IO, so threads don't waste time waiting.
Have one thread in charge of the communication: accepting new connections and handling network IO.
- Since the network IO is non-blocking, read, write and accept operations "take no time", and a single thread is enough for all the clients.
Have a fixed number of threads in charge of the protocol.
- These threads perform the message framing, decoding and encoding.
- They also perform message processing to create responses.

10 Java Non-blocking IO
In the thread-per-client solution, the server gets stuck on:
- msg = in.readLine()
- clientSocket = serverSocket.accept()
- write()
Java NIO provides an efficient Input/Output package which supports non-blocking IO.
NIO also provides readiness notification.
Fundamental NIO ingredients:
- Channels
- Buffers
- Selectors

11 Channels [the new Sockets]
SocketChannel:
- Same as the regular Socket object, except that read() and write() can be non-blocking.
ServerSocketChannel:
- Same as the regular ServerSocket object, except that accept() can be non-blocking: it checks if a client is trying to connect; if so it returns a new SocketChannel, otherwise it returns null.
By default, new channels are in blocking mode; they must be set manually to non-blocking mode.
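The non-blocking accept behavior can be demonstrated in a few lines (a minimal standalone sketch, not the course's server code; the class and method names are ours):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAcceptDemo {
    // With no client trying to connect, a non-blocking accept() returns null
    // instead of blocking the calling thread.
    public static boolean acceptReturnsNullWhenNoClient() throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));   // ephemeral port
        server.configureBlocking(false);         // channels are blocking by default
        SocketChannel client = server.accept();  // does not block; no pending connection
        server.close();
        return client == null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acceptReturnsNullWhenNoClient());
    }
}
```

Note the configureBlocking(false) call: without it, accept() on the same channel would block until a client actually connects.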

12 Buffers [containers]
Buffers are wrapper classes used by NIO to send and receive data through a Channel; ByteBuffer is used for sending and receiving bytes.
A buffer is in "write mode" during sock.read(buff): the socket reads from the stream and writes into buff.
A buffer is in "read mode" during sock.write(buff): the socket reads from buff and writes to the stream.
In between writing and reading: flip().

13 Buffer IO Operations
Reading from a channel – writing to a buffer:
- numBytesRead = socketChannel.read(buf);
- Contents found in socketChannel are read from its internal container into our buffer.
Writing to a channel – reading from a buffer:
- numBytesWritten = socketChannel.write(buf);
- Contents from our buf object are written to the socketChannel's internal container to be sent.
If read or write returns -1, it means that the channel is closed.
Read and write operations on buffers update the position marker accordingly.

14 Buffer Markers
Buffers can be in "write mode" or in "read mode"; between writing and reading from a buffer we should invoke flip().
Each buffer has capacity, limit, and position markers.
Capacity:
- A buffer has a certain fixed size, also called its "capacity". You can only write capacity bytes into the buffer.
- Once the buffer is full, you need to empty it (read or clear the data) before you can write more data into it.
Position:
- Writing data to the buffer: initially the position is 0; when a byte is written to the buffer, the position is advanced.
- Reading data from the buffer: when you flip a buffer from write mode to read mode, the position is reset to 0; you read starting from position, and position is advanced as you read.

15 Buffer Markers
Limit:
- In write mode: the limit of a buffer is how much data you can write into it; the limit is equal to the capacity of the buffer.
- After flip(), in read mode: the limit is how much data you can read from the buffer. When flipping, the limit is set to the position of the write mode. In other words, you can read as many bytes as were written (the limit is set to the number of bytes written, which was marked by position).

16 Illustration

17 Usage Example

18 read/write operations
A read operation reads a specified number of bytes from the current position, and updates the position marker to point to the yet-unread bytes.
A write operation writes some bytes from the current position, and advances the position according to the number of written bytes.
You can't read or write more than the limit of the buffer, and you can't increase the limit beyond the capacity. This can be described as:
0 ≤ position ≤ limit ≤ capacity

19 Buffer Flipping
The flip() method switches a buffer from write mode to read mode.
Calling flip() sets the position back to 0, and sets the limit to where the position just was.
The position marker now marks the reading position, and the limit marks how many bytes were written into the buffer; that is the limit of how many bytes can be read.
Usage:
- Create a ByteBuffer.
- Write data into the buffer.
- flip()
- Send the buffer to the channel.
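The write / flip() / read cycle can be exercised with a plain ByteBuffer, no sockets needed (a standalone sketch; the names are ours):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BufferDemo {
    // Write a string into a buffer, flip(), then read it back,
    // letting the position/limit markers do the bookkeeping.
    public static String roundTrip(String msg) {
        ByteBuffer buf = ByteBuffer.allocate(64);        // capacity 64, position 0, limit 64
        buf.put(msg.getBytes(StandardCharsets.UTF_8));   // write mode: position advances
        buf.flip();                                      // read mode: limit = old position, position = 0
        byte[] out = new byte[buf.remaining()];          // remaining() = limit - position
        buf.get(out);                                    // read mode: position advances to limit
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello reactor"));
    }
}
```

Forgetting the flip() is a classic NIO bug: without it, get() would try to read from the end of the written data up to the full capacity.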

20 Selectors
Selectors implement readiness notification:
- Channels may be registered to a selector for specific readiness events: read / write / accept.
- A selector can be polled to get the list of ready channels.
Creating a Selector:
Selector selector = Selector.open();

21 Selectors
A channel ready for read guarantees that a read operation will return some bytes.
A channel ready for write guarantees that a write operation will write some bytes.
A channel ready for accept guarantees that accept() will result in a new connection.
Example:
Selector selector = Selector.open();
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);

22 Selection keys
Four options for events:
- SelectionKey.OP_CONNECT
- SelectionKey.OP_ACCEPT
- SelectionKey.OP_READ
- SelectionKey.OP_WRITE
Selection keys hold:
- The interest set: an int representing the events the selector listens to with respect to the channel.
  boolean isInterestedInRead = (interestSet & SelectionKey.OP_READ) != 0;
- The ready set (same as the interest set, only for "ready"):
  selectionKey.isReadable();
- The Channel.
- The Selector.
- An attached object (optional), e.g. an associated buffer.

23 Selectors
The Selector class abstracts a service given by the OS under the system call select.
It holds a set of keys, selectedKeys (those which are ready), and cancelledKeys.
select() blocks until at least one channel is ready for the events it registered for.
After select():
Set<SelectionKey> selectedKeys = selector.selectedKeys();

24 Selector example (after select() returns)
for (SelectionKey key : selector.selectedKeys()) {
    if (key.isAcceptable()) {
        // a connection was accepted by a ServerSocketChannel.
    } else if (key.isConnectable()) {
        // a connection was established with a remote server.
    } else if (key.isReadable()) {
        // a channel is ready for reading
    } else if (key.isWritable()) {
        // a channel is ready for writing
    }
}
selector.selectedKeys().clear();

Alternative code:
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while (keyIterator.hasNext()) {
    SelectionKey key = keyIterator.next();
    if (key.isAcceptable()) {
        // a connection was accepted by a ServerSocketChannel.
    } else if (key.isConnectable()) {
        // a connection was established with a remote server.
    } else if (key.isReadable()) {
        // a channel is ready for reading
    } else if (key.isWritable()) {
        // a channel is ready for writing
    }
    keyIterator.remove();
}

25 Reactor IO
The reactor server accepts new connections.
If bytes are ready to be read from a socket, the reactor reads the bytes and transfers them to the protocol (previous lecture).
If a socket is ready for writing, the reactor checks if there is a write request; if so, the reactor sends the data.

26 Reactor class (a server class)
- Has a port.
- Has an abstract protocol and a message decoder.
- Holds a thread pool and the main thread.
- Holds a selector.
- Holds a task ("Runnable") queue.
It defines a NonBlockingConnectionHandler, which handles each client; the tasks of processing data are performed on a different thread.

27 Main Reactor thread (selectorThread)
The main reactor thread performs the following:
- Creates a new thread pool (executor).
- Creates a new ServerSocketChannel and binds it to the port.
- Creates a new Selector.
- Registers the ServerSocketChannel in the Selector, asking for ACCEPT readiness.
- while (true): wait for selector notifications. For each notification event check:
  - Accept notification: the server socket is ready to accept a new connection; call accept. A new socket is created; register it in the Selector.
  - Write notification: the socket is ready for writing; if the protocol asks to write, write bytes to the socket.
  - Read notification: the socket is ready for reading; read bytes and pass them to the protocol handler.
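One iteration of the accept step above can be sketched as a self-contained loopback demo (our own simplification for illustration, not the course's Reactor class; all names are ours):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ReactorLoopDemo {
    // One pass of the reactor's accept step: register the server channel for
    // ACCEPT, wait for a notification, accept, and register the new socket.
    public static boolean acceptOnce() throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A client connects (normally this is a remote process).
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", server.socket().getLocalPort()));

        boolean accepted = false;
        selector.select(1000);                            // wait for readiness notification
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) {
                SocketChannel sock = server.accept();     // new socket created
                sock.configureBlocking(false);
                sock.register(selector, SelectionKey.OP_READ); // register in the Selector
                accepted = true;
            }
        }
        selector.selectedKeys().clear();                  // handled; don't see them again

        client.close();
        server.close();
        selector.close();
        return accepted;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acceptOnce());
    }
}
```

A real reactor would run this in a while (true) loop and also handle the read and write notifications listed above.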

28 Thread pool
The work defined in the protocol is done by the thread pool; message processing is assigned as a task to the pool.
Event handling is done by two threads:
- The reactor thread pulls the bytes from the socket and places them in a buffer.
- A thread-pool thread processes the bytes using an encoderDecoder and the protocol, and writes the protocol response back to the connection handler's outgoing buffer.

29

30 We clear the selected keys set so we won’t have to handle those events again.

31 Here we put the connectionHandler as an "attachment" to a SelectionKey. The handler is the state of the session.
Here we submit the task to the thread pool. Following a read(), there might be a heavy task.
Here the selectorThread changes the notifications of the listening keys. See updateInterestedOps().

32 There is a ConnectionHandler for each socket channel.
The Selector class allows attaching an arbitrary object to a channel (in the SelectionKey), which can later be retrieved.
We associate the ConnectionHandler with the socket created when accepting a new connection.
Closing function:

33 Here we ensure that all changes to the interestedSet are performed by the selectorThread, to avoid concurrency issues regarding this set. Also, interestOps() invoked by a different thread will block until select() returns.
selectorTasks is protected since it is a concurrent list.
wakeup() wakes the selector from select().

34

35 read() == -1: the connection is closed.
chan.read(): we write into the buffer. On success, we need to flip() in order to read from it.

36 chan.read(): we write into the buffer. On success, we need to flip() in order to read from it.

37

38

39 Direct vs. non-direct buffers
ByteBuffer.allocateDirect()
A byte buffer is either direct or non-direct. Given a direct byte buffer, the Java virtual machine will make a best effort to perform native I/O operations directly upon it. That is, it will attempt to avoid copying the buffer's content to (or from) an intermediate buffer before (or after) each invocation of one of the underlying operating system's native I/O operations.
Direct buffers typically have somewhat higher allocation and deallocation costs than non-direct buffers. It is therefore recommended that direct buffers be allocated primarily for large, long-lived buffers that are subject to the underlying system's native I/O operations.
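The two kinds of buffers can be told apart at runtime (a small illustrative sketch; the names are ours):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    // allocateDirect() returns a buffer backed by native memory;
    // allocate() returns a buffer backed by an on-heap byte[] array.
    public static boolean[] kinds() {
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);
        ByteBuffer heap = ByteBuffer.allocate(1024);
        // { direct is direct, heap is direct, heap has a backing array }
        return new boolean[] { direct.isDirect(), heap.isDirect(), heap.hasArray() };
    }

    public static void main(String[] args) {
        boolean[] k = kinds();
        System.out.println(k[0] + " " + k[1] + " " + k[2]);
    }
}
```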

40 Buffer Pool
The connectionHandler uses a buffer pool, because DirectByteBuffers are expensive to allocate and deallocate.
The BufferPool in the ConnectionHandler caches already-used buffers.
This is an instance of the Flyweight design pattern.
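A minimal pool along these lines might look as follows (a sketch under our own assumptions; the course's actual BufferPool class may differ, and the class name SimpleBufferPool is ours):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SimpleBufferPool {
    private static final int BUFFER_SIZE = 1 << 13;  // 8KB, an arbitrary choice
    private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();

    // Reuse a cached buffer if one exists; otherwise pay the allocation cost once.
    public ByteBuffer lease() {
        ByteBuffer buf = pool.poll();
        return (buf != null) ? buf : ByteBuffer.allocateDirect(BUFFER_SIZE);
    }

    // Reset the markers and return the buffer to the cache for later reuse.
    public void release(ByteBuffer buf) {
        buf.clear();
        pool.add(buf);
    }
}
```

A released buffer is handed back by the next lease() call, so a handler that leases, uses and releases buffers recycles the same native memory instead of allocating fresh DirectByteBuffers.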

41 Suggested Solution: Concurrency Issues
Reading tasks are performed by different threads. What about consecutive reads from the same client?
- Assume a client sends two messages, M1 and M2, to the server.
- The server then creates two tasks, T1 and T2, corresponding to these messages.
- Since two different threads may handle the two tasks concurrently, T2 may complete before T1!
- The response to M2 will be sent before the response to M1: the protocol order may be broken!

42 Current code:

43 Solution: Task Queue for Each Client
- Create a task queue for each client.
- Synchronize over the queue to execute the task at the head of the queue.
- This ensures no other task is taken from the queue until the current task is completed.
Result: protocol order is ensured.
Concerns:
- Performance issues due to synchronization.
- Task-queue management is required.

44 Naïve solution: queue of tasks for each connection handler.

45 Proposed Solution: Issues
Issue 1: if a client sends two requests, the thread pool will have two tasks taking up two slots! The first is executed, while the other is blocked. We may clog the thread pool, when we could have used the other slot to serve other clients in the meanwhile!
Issue 1.5: suppose a message from a client is split into three "reads", and somehow ends up at three threads, one working and the other two waiting. There is no guarantee that synchronized admits the second one before the third.
Issue 2: tasksQueue needs to be managed (initialized and deleted). In the proposed solution above there is no real management:
- What happens when a new client connects?
- What happens when a current client disconnects?
We need a better solution!

46 Actor Thread Pool
Terminology:
- Actor – a ConnectionHandler.
- Action – a task of an Actor.
Design:
- Create a list of pending Actions for each Actor.
- Ensure that only one Action per Actor is submitted to the executor at any given time.

47 Actor Thread Pool
Implementation:
- Upon submission of a new Action, check if another Action of the same Actor is being executed.
  - If there is none, submit the Action to the executor for execution.
  - If there is one, add the new Action to the pending-Actions list of this Actor.
- Once the current Action completes, get the first Action from the pending list and execute it.
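The submission logic above can be sketched like this (our simplified version for illustration; the course's ActorThreadPool differs in details such as the WeakHashMap and read-write lock shown on later slides, and the class name ActorExecutor is ours):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ActorExecutor {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<Object, Queue<Runnable>> pending = new HashMap<>();
    private final Set<Object> active = new HashSet<>();

    // Submit an action: run it now if the actor is idle,
    // otherwise queue it behind the actor's currently running action.
    public synchronized void submit(Object actor, Runnable action) {
        if (active.contains(actor)) {
            pending.computeIfAbsent(actor, a -> new ArrayDeque<>()).add(action);
        } else {
            active.add(actor);
            pool.execute(() -> drain(actor, action));
        }
    }

    // Run the first action, then keep draining this actor's pending actions
    // in FIFO order; at most one pool thread ever runs per actor.
    private void drain(Object actor, Runnable first) {
        Runnable action = first;
        while (action != null) {
            action.run();
            synchronized (this) {
                Queue<Runnable> q = pending.get(actor);
                action = (q == null) ? null : q.poll();
                if (action == null) {        // actor becomes inactive
                    active.remove(actor);
                    pending.remove(actor);
                }
            }
        }
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Actions of different actors still run in parallel across the pool; only actions of the same actor are serialized, which is exactly the per-client ordering guarantee the protocol needs.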

48 Line 3: inactive Actors that have pending Actions to be executed.
Line 4: a read-write lock used to synchronize access.
Line 5: the list of currently active Actors that have an Action currently being executed in the thread pool; in addition, they may have pending Actions waiting to be executed.
Line 19: if no other Actions of this Actor are being executed, execute the Action in the thread pool.
Line 21: if there is another Action currently being executed, add this Action to the pending list of this Actor.

49 This function is used to retrieve the pending Actions of a given Actor.
Line 30: if the list exists, it is fetched and returned.
Line 38: if the list does not exist, an empty queue is created, added to Actors, and returned.

50

51 Some notes: WeakHashMap
The ActorThreadPool uses a WeakHashMap to hold the task queues of the actors.
An entry in a WeakHashMap is automatically removed when its key is no longer in ordinary use: the presence of a mapping for a given key will not prevent the key from being discarded by the garbage collector, and when a key has been discarded, its entry is effectively removed from the map.
This class is not synchronized, and therefore we guard access to it using the read-write lock.
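The self-cleaning behavior of WeakHashMap can be observed directly (a small sketch; since System.gc() is only a hint, the demo retries in a loop, and the names are ours):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakMapDemo {
    // Once the only strong reference to the key is dropped, the garbage
    // collector may discard it, and its entry disappears from the map.
    public static boolean entryVanishesWithKey() throws InterruptedException {
        Map<Object, String> queues = new WeakHashMap<>();
        Object actor = new Object();        // stands in for a ConnectionHandler
        queues.put(actor, "task-queue");
        actor = null;                       // drop the only strong reference
        for (int i = 0; i < 100 && !queues.isEmpty(); i++) {
            System.gc();                    // a hint only, hence the retry loop
            Thread.sleep(10);
        }
        return queues.isEmpty();
    }
}
```

This is why the pool never needs explicit cleanup when a client disconnects: once the reactor drops its reference to the ConnectionHandler, the handler's task queue is eligible to vanish from the map on its own.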

52 Using ActorsPool in the reactor

