User-Level Interprocess Communication for Shared Memory Multiprocessors. Brian N. Bershad, Thomas E. Anderson, Edward D. Lazowska, and Henry M. Levy. Presented by: Tim Fleck.

 Interprocess Communication (IPC)
 User-Level Remote Procedure Call (URPC)
 URPC Design Rationale
  ◦ Processor Reallocation
  ◦ Data Transfer Using Shared Memory
  ◦ Thread Management
 URPC Performance
  ◦ Latency
  ◦ Throughput
 Related Work
 Conclusion

 Interprocess communication (IPC) is central to the design of contemporary operating systems
 Encourages system decomposition across address space boundaries
  ◦ Fault isolation
  ◦ Extensibility
  ◦ Modularity
 Provides for communication between different address spaces on the same machine

 The usability of separate address spaces depends on the performance of the communication primitives
 IPC has traditionally been the responsibility of the kernel, which raises two significant issues
  ◦ Architectural performance barriers
     The performance of kernel-based synchronous communication is limited by the cost of invoking the kernel and reallocating processors between address spaces
     In prior work on LRPC, 70% of the overhead can be attributed to the kernel’s mediation of the cross-address space call
  ◦ Interaction between kernel-based communication and high-performance user-level threads
     For satisfactory performance, medium- and fine-grained parallel applications need user-level thread management
     The performance and system-complexity costs of partitioning strongly interdependent communication and thread management across protection boundaries are high

 Eliminate the kernel from the path of cross-address space communication
 User-level Remote Procedure Call (URPC) improves performance because:
  ◦ Messages are sent between address spaces directly, without invoking the kernel
  ◦ Unnecessary CPU reallocation is eliminated
  ◦ When CPU reallocation is needed, its cost can be amortized over multiple independent calls
  ◦ The inherent parallelism in message sending and receiving can be exploited

 In many contemporary operating systems, applications communicate via narrow channels, or ports
 Only a few operations are available: create, send, receive, destroy
 These permit program-to-program communication across address space boundaries, or even from machine to machine
 Messages are powerful, but they represent a control and data structure alien to traditional Algol-like languages

 Almost every mature OS supports RPC, which lets messages do their work behind a procedure call interface
 RPC provides synchronous, language-level transfer of control between programs in different address spaces
 Communication occurs through a narrow channel whose specific operation is left undefined

 URPC exploits the lack of definition of the RPC channel in two ways
  ◦ Messages are passed between address spaces through logical channels kept in memory and shared between client and server
  ◦ Thread management is implemented at the user level and handles messages without kernel involvement on call or reply
 URPC presents synchronous, typed messages to the programmer, hiding the asynchronous, untyped machinery below the thread management layer

 URPC provides safe and efficient communication between address spaces on the same machine without kernel mediation
 It isolates the three components of interprocess communication: processor reallocation, thread management, and data transfer
 Kernel involvement is limited to CPU reallocation
 Control transfer is handled by thread management and CPU reallocation
 A simple procedure call with URPC has a latency of 93 µsec, compared to 157 µsec for LRPC

 Designed around the observation that there are several independent components to a cross-address space call
 The main components are:
  ◦ Processor reallocation: ensuring that there is a physical processor to handle the client’s call in the server and the server’s reply in the client
  ◦ Data transfer using shared memory: moving arguments between the client and server address spaces
  ◦ Thread management: blocking the caller’s thread, running a thread through the procedure in the server’s address space, and resuming the caller’s thread on return

 The aim is to reduce the frequency of CPU reallocations through an optimistic reallocation policy (sketched below)
 Optimistic assumptions
  ◦ The client has other work to do
  ◦ The server will soon have a processor available to service a message
 Some situations call for pessimism and invoke the kernel for a reallocation
  ◦ Single-threaded applications
  ◦ High-latency I/O
  ◦ Real-time applications
  ◦ Priority invocations
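
A minimal sketch of how a stub might apply this optimistic policy, deciding between continuing with other client work and asking the kernel for a processor reallocation. The names below (decide_reallocation and the decision enum) are hypothetical illustrations, not part of the paper's interface.

    /* Sketch of the optimistic reallocation decision (hypothetical names). */
    typedef enum { DEFER_TO_SERVER, DONATE_PROCESSOR } realloc_decision;

    realloc_decision decide_reallocation(int caller_has_other_work,
                                         int call_is_latency_critical)
    {
        /* Pessimistic cases from the list above: single-threaded clients,
           high-latency I/O, real-time or priority-sensitive calls. */
        if (!caller_has_other_work || call_is_latency_critical)
            return DONATE_PROCESSOR;    /* ask the kernel to reallocate */

        /* Optimistic case: assume the server will soon pick the message up
           with one of its own processors; keep running other client threads
           and check the reply channel later. */
        return DEFER_TO_SERVER;
    }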

 The kernel handles processor reallocation to underpowered address spaces
 It is invoked using Processor.Donate, which identifies the receiving address space to the kernel
 The receiver is given the identity of the caller by the kernel
 The voluntary return of the processor is not guaranteed
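
Processor.Donate is the only kernel interface named here, and its exact signature is not given on the slide; the fragment below is a hedged C sketch, with processor_donate() and address_space_id assumed for illustration, of when a client might donate a processor to an underpowered server.

    /* Hypothetical wrapper around the Processor.Donate kernel call; only the
       fact that the caller names the receiving address space is from the slide. */
    typedef int address_space_id;

    extern void processor_donate(address_space_id receiver);  /* assumed syscall stub */

    void donate_if_server_underpowered(address_space_id server,
                                       int pending_messages,
                                       int server_processors)
    {
        /* Reallocate only when the server has work queued but no processor
           of its own to run it on. */
        if (pending_messages > 0 && server_processors == 0)
            processor_donate(server);
        /* Note: as stated above, the voluntary return of the donated
           processor is not guaranteed. */
    }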

 Example: three applications, each in its own address space
  ◦ Editor as the client
  ◦ WinMgr as a server
  ◦ FCMgr as a server
 Two available processors
 Two threads, T1 and T2, in the client

 In URPC, each client-server combination is bound to a pairwise-mapped logical channel in shared memory
 The mapping occurs once, before the first call
 Applications access URPC through the stubs layer
 The safety of the communication is the responsibility of the stubs
 Unlike traditional RPC, the kernel is NOT invoked to copy data from one address space to another
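
To make the division of labor concrete, here is a hedged sketch of what a client stub over such a channel might look like. Every name in it (urpc_msg, channel_enqueue, channel_dequeue_reply, thread_block_on_reply, the procedure number) is a hypothetical illustration rather than the paper's stub code; the point is that the stub copies arguments into shared memory and defers to user-level thread management instead of trapping into the kernel.

    /* Sketch of a client-side stub over the pairwise shared-memory channel. */
    #include <string.h>

    typedef struct urpc_channel urpc_channel;  /* pairwise client/server channel */
    typedef struct { int proc_id; char args[256]; } urpc_msg;

    extern void channel_enqueue(urpc_channel *ch, const urpc_msg *m);
    extern void channel_dequeue_reply(urpc_channel *ch, urpc_msg *reply);
    extern void thread_block_on_reply(urpc_channel *ch);  /* user level, no kernel trap */

    /* Stub for an example procedure taking one int and returning an int. */
    int example_stub(urpc_channel *ch, int arg)
    {
        urpc_msg call, reply;
        int result;

        call.proc_id = 42;                      /* illustrative procedure number */
        memcpy(call.args, &arg, sizeof arg);    /* the stub, not the kernel, copies
                                                   data; safety is the stub's job */
        channel_enqueue(ch, &call);             /* no kernel copy between spaces */
        thread_block_on_reply(ch);              /* run another client thread */

        channel_dequeue_reply(ch, &reply);
        memcpy(&result, reply.args, sizeof result);
        return result;
    }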

 Data flows over a bidirectional shared-memory queue with non-spinning test-and-set locks on either end
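
A minimal sketch of one direction of such a queue, using C11 atomics for the test-and-set lock; the layout, slot count, and message type are assumptions for illustration. "Non-spinning" here means that a caller who finds the lock held (or the queue full) returns immediately and retries on a later pass rather than busy-waiting.

    /* One direction of the shared-memory queue, guarded by a non-spinning
       test-and-set lock (hypothetical layout). */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define QUEUE_SLOTS 64

    typedef struct { int proc_id; char args[256]; } urpc_msg;   /* illustrative */

    typedef struct {
        atomic_flag lock;      /* test-and-set lock; initialize with ATOMIC_FLAG_INIT */
        unsigned    head, tail;
        urpc_msg    slots[QUEUE_SLOTS];
    } msg_queue;

    /* Returns false instead of spinning when the lock is busy or the queue
       is full; the caller simply retries later. */
    bool queue_try_enqueue(msg_queue *q, const urpc_msg *m)
    {
        if (atomic_flag_test_and_set(&q->lock))    /* held by the other end: do not spin */
            return false;

        bool ok = (q->tail - q->head) < QUEUE_SLOTS;
        if (ok)
            q->slots[q->tail++ % QUEUE_SLOTS] = *m;

        atomic_flag_clear(&q->lock);
        return ok;
    }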

 The calling semantics of a cross-address space procedure call are synchronous with respect to the calling thread
 Each communication function (send, receive) has a corresponding thread management function (start, stop)
 This close interaction between threads and communication can be exploited by a user-level implementation to achieve good performance for both
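
Below is a hedged sketch, with hypothetical user-level threading and channel functions, of one plausible mapping of that correspondence: sending a call message goes hand in hand with stopping the calling thread, and receiving one with starting a thread to run the procedure. None of these names come from the paper.

    /* Sketch of the send/stop and receive/start pairing at the thread
       management layer (all names hypothetical). */
    typedef struct uthread uthread;
    typedef struct urpc_channel urpc_channel;

    extern void     channel_send(urpc_channel *ch, const void *msg, unsigned len);
    extern int      channel_receive(urpc_channel *ch, void *msg, unsigned len);
    extern uthread *sched_pick_runnable(void);       /* user-level scheduler */
    extern void     uthread_switch(uthread *next);   /* no kernel involvement */

    /* Client side: sending the call message also stops the calling thread. */
    void urpc_send_and_stop(urpc_channel *ch, const void *msg, unsigned len)
    {
        channel_send(ch, msg, len);             /* asynchronous, untyped message */
        uthread_switch(sched_pick_runnable());  /* stop caller, run another thread */
    }

    /* Server side: receiving a call message starts a thread through the
       procedure; with nothing pending, some other server thread runs. */
    void urpc_receive_loop(urpc_channel *ch, void (*dispatch)(void *),
                           void *buf, unsigned len)
    {
        for (;;) {
            if (channel_receive(ch, buf, len))
                dispatch(buf);                          /* start: run the procedure */
            else
                uthread_switch(sched_pick_runnable());  /* nothing to receive yet */
        }
    }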

 Thread overhead: points of reference
  ◦ Heavyweight: the kernel makes no distinction between a thread and its address space
  ◦ Middleweight: kernel-managed, but decoupled from the address space to allow multiple threads
  ◦ Lightweight: managed at user level via libraries that execute in the context of weightier threads
 Lightweight thread usage implies two-level scheduling
  ◦ Lightweight threads are scheduled at user level on top of heavier threads
  ◦ The heavier threads are scheduled by the kernel
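
As a rough illustration of the two-level arrangement, the loop below is what each kernel-scheduled thread in an address space might run, multiplexing lightweight user-level threads on top of itself. The names (runq_pop, lwthread_resume, poll_urpc_channels) are assumptions, not the paper's API.

    /* Two-level scheduling sketch: each kernel-visible ("middleweight") thread
       runs this loop, scheduling lightweight threads at user level. */
    #include <stddef.h>

    typedef struct lwthread lwthread;               /* lightweight, user-level thread */

    extern lwthread *runq_pop(void);                /* user-level ready queue */
    extern void      lwthread_resume(lwthread *t);  /* context switch in user space */
    extern void      poll_urpc_channels(void);      /* may make blocked threads runnable */

    void scheduler_loop(void)
    {
        for (;;) {
            lwthread *next = runq_pop();
            if (next != NULL) {
                lwthread_resume(next);   /* first level: user-level scheduling */
                continue;
            }
            /* No runnable lightweight threads: look for URPC messages or
               replies that would wake one up.  The second level, kernel
               scheduling of this heavier thread, happens underneath. */
            poll_urpc_channels();
        }
    }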

Table: cost of thread management actions, URPC versus Topaz threads.
Table: breakdown of the time taken by each component when no processor reallocation is needed.

Latency results: time for T threads to make 100,000 “Null” procedure calls, where C = client processors, S = server processors, and T = runnable client threads. Latency is measured from the call into the Null stub until control returns from the stub.

Throughput results: time for T threads to make 100,000 “Null” procedure calls, where C = client processors, S = server processors, and T = runnable client threads.

 URPC represents an appropriate division of responsibility between the user level and the kernel in shared-memory multiprocessor systems
 Performance improves over kernel-mediated message passing
 URPC demonstrates the advantages of designing system facilities for the capabilities of a multiprocessor machine, and of distinguishing a multiprocessor OS from a uniprocessor OS that merely runs on a multiprocessor