Proactor Pattern Venkita Subramonian & Christopher Gill


E81 CSE 532S: Advanced Multi-Paradigm Software Development
Proactor Pattern
Venkita Subramonian & Christopher Gill
Department of Computer Science and Engineering, Washington University, St. Louis
cdgill@cse.wustl.edu

Proactor
- An architectural pattern for asynchronous, decoupled operation initiation and completion
- In contrast to the Reactor architectural pattern, whose initiation and completion are synchronous and coupled
  - I.e., a reactive initiation completes when the handler call returns
  - Except for reactive completion only, e.g., for a connector
- Proactor separates initiation and completion further, without multi-threading overhead/complexity
  - It performs additional bookkeeping to match initiations with completions
  - It dispatches a service handler upon completion
  - The asynch handler does post-operation processing
- Still separates the application from the infrastructure
- A small departure vs. our discussion of other patterns: we'll focus on using rather than implementing the Proactor
  - I.e., much of the implementation is already given by the OS

Context
- Asynchronous operations are used by the application
  - The application thread should not block
  - The application needs to know when an operation completes
- Decoupling the application from the infrastructure is useful
- Reactive performance is insufficient
- Multi-threading incurs excessive overhead or programming-model complexity

Notes:
- An application must not block indefinitely waiting on any single source
- Unnecessary utilization of the CPU(s) should be avoided
- Minimal modification and maintenance effort should be required to integrate new or enhanced services
- The application should be shielded from the complexity of multi-threading and synchronization mechanisms

Design Forces
- Separation of the application from the infrastructure
- Flexibility to add new application components
- Performance benefits of concurrency
  - Reactive has coarse interleaving (handlers)
  - Multi-threaded has fine interleaving (instructions)
- Complexity of multi-threading
  - Concurrency hazards: deadlock, race conditions
  - Coordination of multiple threads
- Performance issues with multi-threading
  - Synchronization re-introduces coarser granularity
  - Overhead of thread context switches
  - Sharing resources across multiple threads

Compare Reactor vs. Proactor Side by Side
[Diagram: In the Reactor, the application registers an Event Handler with the Reactor, which calls handle_events and dispatches handle_event on a ready Handle before the synchronous accept/read/write is performed. In the Proactor, the application initiates an asynchronous accept/read/write on a Handle, and the Proactor dispatches handle_event on a Completion Handler after the operation completes.]

Proactor in a Nutshell
[Diagram: the application (1) creates completion handlers and (2) registers them, initiating asynch_io operations each tagged with an asynchronous completion token (ACT1, ACT2, ...); (3) handles are associated with an I/O completion port; (4) the application asks the Proactor to handle events; (5) the Proactor waits on the I/O completion port; (6) the OS (or AIO emulation) posts a completion event when an operation (7) completes; (8) the Proactor dispatches handle_event on the matching completion handler, passing its ACT.]
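The nutshell sequence above can be sketched as a minimal, single-threaded simulation. All names here (MiniProactor, Act, CompletionEvent) are hypothetical stand-ins, not ACE or OS APIs: the application initiates operations tagged with ACTs, a simulated OS posts completion events, and handle_events matches each completion back to its handler.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <queue>
#include <string>

// An ACT (asynchronous completion token) identifies which initiation a
// completion belongs to, so the proactor can match them up.
using Act = std::uint64_t;
using CompletionHandler = std::function<void(const std::string& result)>;

struct CompletionEvent {          // what the "OS" posts when an op finishes
    Act act;
    std::string result;
};

class MiniProactor {
public:
    // Steps 1-2: the application creates a handler and initiates an
    // asynchronous operation, tagging it with a fresh ACT.
    Act initiate(CompletionHandler handler) {
        Act act = next_act_++;
        handlers_[act] = std::move(handler);
        return act;
    }
    // Step 6: the OS (simulated here) posts a completion event.
    void post_completion(Act act, std::string result) {
        completions_.push({act, std::move(result)});
    }
    // Steps 4-8: the event loop drains completions and dispatches each
    // one to the handler registered under its ACT.
    int handle_events() {
        int dispatched = 0;
        while (!completions_.empty()) {
            CompletionEvent ev = completions_.front();
            completions_.pop();
            auto it = handlers_.find(ev.act);
            if (it != handlers_.end()) {
                it->second(ev.result);   // post-operation processing
                handlers_.erase(it);
                ++dispatched;
            }
        }
        return dispatched;
    }
private:
    Act next_act_ = 1;
    std::map<Act, CompletionHandler> handlers_;
    std::queue<CompletionEvent> completions_;
};
```

Note that the application thread never blocks on the operation itself; it only resumes involvement when the completion is dispatched.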

Motivating Example: A Web Server
- Remote clients use the server to record status information
- Logging records are written to various output devices
- Clients and server use a connection-oriented protocol, such as TCP
- Clients and server are bound to transport endpoints that uniquely identify them
- Multiple clients can access the server simultaneously
  - Each client maintains its own connection with the logging server
- A new client connection request is indicated by a CONNECT event
- A request to process logging records is indicated by a READ event
- The logging records and connection requests that clients issue can arrive concurrently at the logging server
- Concurrent approaches may incur the following liabilities:
  - May be inefficient and non-scalable due to context switching, synchronization, and data movement among CPUs
  - May require the use of complex concurrency control schemes
  - May not be available on all operating systems, or may have non-portable semantics
  - It may be better to align the threading strategy to available resources

(From http://www.cs.wustl.edu/~schmidt/PDF/proactor.pdf)

First Approach: Reactive (1/2)
[Diagram: (1) the web server registers an Acceptor with the Reactor; (2) the Reactor handles events; (3) a web browser connects; (4) the connection request arrives at the Acceptor; (5) the Acceptor creates an HTTP Handler; (6) the HTTP Handler registers for socket read events.]

First Approach: Reactive (2/2)
[Diagram: (1) the browser sends "GET /etc/passwd"; (2) the Reactor signals socket read ready; (3) the HTTP Handler reads the request and (4) parses it; (5) it registers for file read events; (6) the Reactor signals file read ready; (7) the handler reads the file from the file system; (8) it registers for socket write events; (9) the Reactor signals socket write ready; (10) the handler sends the file.]

Analysis of the Reactive Approach
- The application-supplied acceptor creates and registers handlers
  - It acts as a factory
- Single-threaded: one handler runs at a time
- Concurrency is good with small jobs (e.g., TCP/IP stream fragments)
  - With large jobs, a long-running handler delays every other ready handler
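To make the single-threaded, one-handler-at-a-time dispatch concrete, here is a minimal reactor-style sketch (MiniReactor and these simplified types are hypothetical illustrations, not ACE's reactor): every ready handle is dispatched strictly in sequence, so a slow handler holds up the rest.

```cpp
#include <functional>
#include <map>
#include <vector>

using Handle = int;                     // stand-in for an OS handle
using EventHandler = std::function<void()>;

class MiniReactor {
public:
    void register_handler(Handle h, EventHandler handler) {
        handlers_[h] = std::move(handler);
    }
    // Dispatch every "ready" handle in turn -- strictly sequentially.
    // (A real reactor would obtain the ready set from select()/poll().)
    void handle_events(const std::vector<Handle>& ready) {
        for (Handle h : ready) {
            auto it = handlers_.find(h);
            if (it != handlers_.end())
                it->second();   // runs to completion before the next one
        }
    }
private:
    std::map<Handle, EventHandler> handlers_;
};
```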

A Second Approach: Multi-Threaded
- The acceptor spawns, e.g., a thread per connection
  - Instead of registering the handler with a reactor
- Handlers are active
- Multi-threaded, highly concurrent, and may be physically parallel
- Concurrency hazards arise around any resources shared between handlers
- Locking / blocking costs
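The thread-per-connection model can be sketched as follows (connections are simplified to int ids, and SharedLog is a hypothetical stand-in for shared server state). The sketch shows both the per-handler thread and the locking that shared state forces on the handlers.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Any state shared between handler threads must be protected,
// illustrating the synchronization cost of this approach.
struct SharedLog {
    std::mutex m;
    std::vector<int> records;
    void append(int id) {
        std::lock_guard<std::mutex> lock(m);  // locking cost per record
        records.push_back(id);
    }
};

// The "acceptor": spawns one active handler thread per connection.
inline void serve_connections(const std::vector<int>& conns, SharedLog& log) {
    std::vector<std::thread> workers;
    for (int id : conns)
        workers.emplace_back([id, &log] { log.append(id); });  // handler body
    for (auto& t : workers)
        t.join();   // simplified; a real server would not join inline
}
```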

A Third Approach: Proactive
- The acceptor/handler registers itself with the OS, not with a separate dispatcher
  - It acts as a completion dispatcher itself
- The OS performs the work
  - E.g., accepts a connection, reads a file, writes a file
- The OS tells the completion dispatcher when it's done
  - Accepting a connection
  - Performing I/O

Proactor Dynamics
[Sequence diagram: the Application invokes an asynchronous operation through the Asynch Operation Processor; the operation then runs asynchronously while the application continues. When the operation completes, the Asynch Operation Processor notifies the Completion Dispatcher, which dispatches handle_event on the registered Completion Handler; the completion handler then runs.]

Notes (the corresponding Reactor dynamics, for comparison):
1. The application registers a concrete event handler with the reactor
2. The application indicates the event(s) for which the event handler will be notified
3. The reactor instructs each event handler to provide its internal handle; this identifies event sources to the demultiplexer and OS
4. The application starts the reactor's event loop
5. The reactor creates a handle set from the handles of all registered handlers
6. The reactor calls the demultiplexer to wait for events
7. The call to the demultiplexer returns, indicating that some handle is "ready"
8. The reactor uses the ready handles as 'keys' to locate the appropriate handler(s)
9. The reactor iteratively dispatches the handler's hook method(s); hook methods carry out services

(From http://www.cs.wustl.edu/~schmidt/PDF/proactor.pdf)

Asynch I/O Factory Classes
ACE_Asynch_Read_Stream
- open(): initialization prior to initiating reads
- read(): initiate an asynchronous read
- cancel(): (attempt to) halt an outstanding read
ACE_Asynch_Write_Stream
- open(): initialization prior to initiating writes
- write(): initiate an asynchronous write
- cancel(): (attempt to) halt an outstanding write

Asynchronous Event Handler Interface
ACE_Handler: the proactive handler, distinct from the reactive ACE_Event_Handler
- handle(): return the handle for the underlying stream
- handle_read_stream(): read completion hook
- handle_write_stream(): write completion hook
- handle_time_out(): timer expiration hook
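A simplified stand-in for this interface might look like the sketch below. The hook names follow ACE's, but the classes themselves are illustrative, not ACE (real ACE hooks take result objects such as ACE_Asynch_Read_Stream::Result rather than these simplified parameters). A proactive handler overrides completion hooks that the proactor invokes only after the OS has already performed the I/O.

```cpp
#include <cstddef>
#include <string>

class Handler {                      // simplified stand-in for ACE_Handler
public:
    virtual ~Handler() = default;
    virtual void handle_read_stream(const std::string& data) {}  // read done
    virtual void handle_write_stream(std::size_t bytes) {}       // write done
    virtual void handle_time_out() {}                            // timer fired
};

class HttpHandler : public Handler {
public:
    std::string last_request;
    std::size_t bytes_sent = 0;
    // Post-read processing: the data has already been read by the OS.
    void handle_read_stream(const std::string& data) override {
        last_request = data;         // e.g., go on to parse the request
    }
    // Post-write bookkeeping: the bytes have already been written.
    void handle_write_stream(std::size_t bytes) override {
        bytes_sent += bytes;
    }
};
```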

Proactor Interface (C++NPV2 Section 8.5)
Lifecycle management
- ACE_Proactor(), open(): initialize a proactor instance
- ~ACE_Proactor(), close(): shut down the proactor
- instance(): singleton accessor
Event loop management
- handle_events(): one event loop step
- proactor_run_event_loop(): run the event loop
- proactor_end_event_loop(): shut down the event loop
- proactor_event_loop_done(): test for event loop completion
Timer management
- schedule_timer(), cancel_timer(): start/stop timers
I/O operation facilitation
- create_asynch_read_stream(): input
- create_asynch_write_stream(): output
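The event-loop portion of this interface can be sketched with a simplified stand-in (the method names follow ACE's event-loop methods, but LoopProactor and its post() method are hypothetical): handle_events() performs one dispatch step, proactor_run_event_loop() iterates until ended, and proactor_end_event_loop() stops further dispatching.

```cpp
#include <functional>
#include <queue>
#include <utility>

class LoopProactor {
public:
    // A finished asynchronous operation queues its completion work here
    // (in a real proactor the OS would post these).
    void post(std::function<void()> completion) {
        pending_.push(std::move(completion));
    }
    // One event-loop step: dispatch at most one completion.
    bool handle_events() {
        if (pending_.empty()) return false;
        auto job = std::move(pending_.front());
        pending_.pop();
        job();
        return true;
    }
    // Run until told to stop or nothing is left to dispatch.
    void proactor_run_event_loop() {
        while (!done_ && handle_events()) {}
    }
    void proactor_end_event_loop() { done_ = true; }
    bool proactor_event_loop_done() const { return done_ || pending_.empty(); }
private:
    std::queue<std::function<void()>> pending_;
    bool done_ = false;
};
```

A completion handler can call proactor_end_event_loop() from inside its hook, which is how a server typically shuts its event loop down cleanly.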

Proactor Consequences
Benefits
- Separation of application and concurrency concerns
- Potential portability and performance increases
- Encapsulated concurrency mechanisms
  - Separate lanes, with no inherent need for synchronization
- Separation of threading and concurrency policies
Liabilities
- Difficult to debug
- Opaque and non-portable completion dispatching
- Controlling outstanding operations
  - Ordering and correct cancellation are notoriously difficult