Networking Implementations (part 1) CPS210 Spring 2006

Papers
- The Click Modular Router (Robert Morris)
- Lightweight Remote Procedure Call (Brian Bershad)

Procedure Calls

    main (int, char **) {
      char *p = malloc (64);
      foo (p);
    }

    foo (char *p) {
      p[0] = '\0';
    }

[Diagram: the process address space during the call - Code + Data, Heap, and Stack. The stack holds a frame for main (argc, argv) and a frame for foo, whose argument points at the 64-byte heap buffer.]

RPC basics
- Want network code to look local
- Leverage language support
- 3 components on each side
  - User program (client or server)
  - Stub procedures
  - RPC runtime support

Building an RPC server
- Define interface to server
  - IDL (Interface Definition Language)
- Use stub compiler to create stubs
  - Input: IDL; output: client/server stub code
- Server code linked with server stub
- Client code linked with client stub
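As a concrete sketch of this flow, here is what a one-procedure interface might look like in ONC RPC's IDL; the names ECHOPROG, ECHOVERS, ECHO, and the program number are made up for illustration:

    /* echo.x -- hypothetical interface definition, rpcgen dialect */
    program ECHOPROG {
        version ECHOVERS {
            string ECHO(string) = 1;   /* procedure 1: echo a string */
        } = 1;                         /* interface version 1 */
    } = 0x20000001;                    /* user-chosen program number */

Feeding this to a stub compiler such as rpcgen yields a shared header plus client-stub, server-skeleton, and XDR marshaling source files; the client then invokes the generated stub (echo_1, under rpcgen's naming convention) as if it were a local procedure.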

RPC Binding
- Binding connects clients to servers
- Two phases: server export, client import
- In Java RMI
  - rmic generates ServerObj_{Skel,Stub} from the remote implementation class
  - Export looks like this
    - Naming.bind("Service", new ServerObj());
    - ServerObj_Skel dispatches requests to the given ServerObj
  - Import looks like this
    - Naming.lookup("rmi://host/Service");
    - Returns a ServerObj_Stub (a subtype of ServerObj's remote interface)

Remote Procedure Calls (RPC)

    // client
    main (int, char **) {
      char *p = malloc (64);
      foo (p);
    }

    // client stub
    foo (char *p) {
      // bind to server
      socket s ("remote");
      // invoke remote server
      s.send (FOO);
      s.send (marsh (p));
      // copy reply
      memcpy (p, unmarsh (s.rcv ()));
      // terminate
      s.close ();
    }

    // server
    foo (char *p) {
      p[0] = '\0';
    }

    // server stub
    foo_stub (s) {
      // alloc, unmarshal
      char *p2 = malloc (64);
      s.recv (p2, 64);
      // call server
      foo (p2);
      // return reply
      s.send (p2, 64);
    }

    RPC_dispatch (s) {
      int call;
      s.recv (&call);
      // do dispatch
      switch (call) {
        ...
        case FOO:
          // call stub
          foo_stub (s);
        ...
      }
      s.close ();
    }

[Diagram: client and server address spaces, each with Code + Data, Heap, and Stack; the call proceeds in three steps - 1) Bind, 2) Invoke and reply, 3) Terminate.]
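The slide's pseudocode glosses over the actual socket plumbing. Below is a minimal, self-contained C sketch of the same call/dispatch pattern over a socketpair; FOO and foo are the slide's names, everything else (buffer sizes, the single-shot server) is invented for illustration:

    /* rpc_sketch.c -- one RPC-style call over a socketpair (sketch) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    enum { FOO = 1 };                          /* call identifiers */

    static void foo(char *p) { p[0] = '\0'; }  /* the server procedure */

    /* server side: read a call id, dispatch, send the reply */
    static void rpc_dispatch(int s) {
        int call;
        char buf[64];
        if (read(s, &call, sizeof call) != sizeof call)
            return;
        switch (call) {
        case FOO:
            read(s, buf, sizeof buf);          /* unmarshal the argument */
            foo(buf);                          /* run the real procedure */
            write(s, buf, sizeof buf);         /* marshal the reply */
            break;
        }
    }

    /* client stub: makes the remote call look like a local foo() */
    static void foo_stub(int s, char *p) {
        int call = FOO;
        write(s, &call, sizeof call);          /* invoke */
        write(s, p, 64);                       /* marshal the argument */
        read(s, p, 64);                        /* copy the reply back */
    }

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        if (fork() == 0) {                     /* child plays the server */
            close(sv[0]);
            rpc_dispatch(sv[1]);
            _exit(0);
        }
        close(sv[1]);
        char *p = malloc(64);
        memcpy(p, "hello", 6);
        foo_stub(sv[0], p);                    /* looks like foo(p) */
        printf("p[0] == %d after the call\n", p[0]);   /* prints 0 */
        free(p);
        return 0;
    }

A real stub compiler would generate foo_stub and rpc_dispatch from the IDL, loop on short reads, and marshal typed arguments instead of shipping a raw 64-byte buffer.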

RPC Questions
- Does this abstraction make sense?
  - You always know when a call is remote
- What is the advantage over raw sockets?
- When are sockets more appropriate?
- What about strongly typed languages?
  - Can type info be marshaled efficiently?

LRPC Context
- In 1990, micro-kernels were all the rage
  - Split OS functionality between "servers"
  - Each server runs in a separate addr space
  - Use RPC to communicate
    - Between apps and micro-kernel
    - Between micro-kernel and servers

Micro-kernels argument
- Easy to protect OS from applications
  - Run in separate protection modes
  - Use HW to enforce
- Easy to protect apps from each other
  - Run in separate address spaces
  - Use naming to enforce
- How do we protect OS from itself?
  - Why is this important?

Mach architecture

[Diagram: a small kernel sits beneath user processes and user-level servers - file server, pager, memory server, process scheduler, communication, and network.]

LRPC Motivation
- Overwhelmingly, RPCs are intra-machine
- RPC on a single machine is very expensive
  - Many context switches
  - Much data copying between domains
- Result: monolithic kernels make a comeback
  - Run servers in kernel to minimize overhead
  - Sacrifices safety of isolation
- How can we make intra-machine RPC fast?
  - (without chucking microkernels altogether)

Baseline RPC cost
- Null RPC call
  - void null () { return; }
1. Procedure call
2. Client to server: trap + context switch
3. Server to client: trap + context switch
4. Return to client
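To put rough numbers on steps 2 and 3, here is a measurement sketch (it assumes POSIX clock_gettime; getpid stands in for a minimal kernel crossing, and the loop count is arbitrary):

    /* nullcost.c -- compare a null procedure call to a cheap syscall */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static void null_proc(void) { }               /* step 1 only */
    static void null_syscall(void) { getpid(); }  /* one kernel crossing */

    /* average cost per call, in nanoseconds */
    static double per_call_ns(void (*fn)(void), long iters) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
            fn();                  /* indirect call defeats inlining */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / iters;
    }

    int main(void) {
        long n = 1000000;
        printf("null procedure call: ~%.1f ns\n", per_call_ns(null_proc, n));
        printf("getpid() syscall:    ~%.1f ns\n", per_call_ns(null_syscall, n));
        return 0;
    }

A cross-domain null RPC pays the trap twice plus a context switch each way, which is why it costs far more than the sum of steps 1 and 4.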

Sources of extra overhead
- Stub code
  - Marshaling and unmarshaling arguments
- User1 → Kernel, Kernel → User2, and back again
- Access control (binding validation)
- Message enqueuing and dequeuing
- Thread scheduling
  - Client and server have separate thread pools
- Context switches
  - Change virtual memory mappings
- Server dispatch

LRPC Approach
- Optimize for the common case:
  - Intra-machine communication
- Idea: decouple threads from address spaces
- For an LRPC call, the client provides the server with
  - An argument stack (A-stack)
  - A concrete thread (one of its own)
- Kernel regulates transitions between domains

1) Binding

    // server code
    char name[8];
    set_name (char *newname) {
      int i, valid = 0;
      for (i = 0; i < 8; i++) {
        if (newname[i] == '\0') {
          valid = 1;
          break;
        }
      }
      if (valid)
        return strcpy (name, newname);
      return -EINVAL;
    }

[Diagram: the client imports interface "S". The kernel's clerk consults the server's procedure descriptor list PDL(S), whose entry for set_name records {addr: 0x54320, conc: 1, A_stack_sz: 12}. The LRPC runtime allocates a shared 12-byte A-stack and an empty linkage record (LR{}), and returns a BindObj naming them to the client.]
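The structures the diagram names might look roughly like this in C; the field names follow the slide, not LRPC's actual layouts, and the example address is the slide's made-up one:

    /* binding.c -- hypothetical shapes of the binding-time structures */
    #include <stdio.h>

    struct pd {                    /* one entry in the server's PDL */
        void *addr;                /* procedure entry point */
        int   conc;                /* concurrency: how many A-stacks */
        int   a_stack_sz;          /* A-stack size in bytes */
    };

    struct linkage_record {        /* filled in by the kernel at call time */
        void *csp;                 /* caller's stack pointer */
        void *cra;                 /* caller's return address */
    };

    struct bind_obj {              /* capability the kernel hands the client */
        struct pd             *pdl;      /* procedures the client may call */
        char                  *a_stack;  /* shared argument stack */
        struct linkage_record *lr;       /* linkage record paired with it */
    };

    int main(void) {
        struct pd set_name_pd = { (void *)0x54320UL, 1, 12 };
        printf("set_name: conc=%d, A-stack=%d bytes\n",
               set_name_pd.conc, set_name_pd.a_stack_sz);
        return 0;
    }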

2) Calling

[Diagram: the client calls set_name("foo"). The client stub writes "foo" onto the shared A-stack and traps to the kernel with (&BindObj, A-stack address, set_name). The kernel validates the binding, saves the caller's stack pointer and return address in the linkage record (LR{Csp, Cra}), switches to the server's domain, and the client's own thread runs server_stub and then set_name (the server code from the previous slide) against the A-stack.]
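Below is a single-process C sketch of that call path. Real LRPC performs the domain switch in the kernel; here a direct call stands in for the trap and upcall, and the A-stack is just a shared buffer:

    /* lrpc_sketch.c -- the A-stack call path, flattened into one process */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static char a_stack[12];      /* shared argument stack from binding */

    /* server code (simplified from the slide) */
    static char name[8];
    static int set_name(char *newname) {
        for (int i = 0; i < 8; i++)
            if (newname[i] == '\0') {
                strcpy(name, newname);   /* NUL found, safe to copy */
                return 0;
            }
        return -EINVAL;                  /* no NUL in the first 8 bytes */
    }

    /* "kernel": validate the binding, save the caller's context in the
       linkage record, then upcall the server on the client's own thread.
       A direct call stands in for all of that here. */
    static int lrpc_call(int proc, char *astack) {
        (void)proc;                /* only one procedure in this sketch */
        return set_name(astack);
    }

    /* client stub: arguments go straight onto the shared A-stack */
    static int set_name_stub(const char *newname) {
        strncpy(a_stack, newname, sizeof a_stack);
        return lrpc_call(0, a_stack);
    }

    int main(void) {
        int r = set_name_stub("foo");
        printf("set_name returned %d, name = \"%s\"\n", r, name);
        return 0;
    }

Because the A-stack is mapped into both domains, the only copy is the client stub's write into it; the server reads the arguments in place.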

Data copying
- In message-based RPC, an argument can be copied up to four times: client stub stack → message → kernel buffer → message in the server's domain → server stack
- In LRPC, the client stub writes arguments once, onto the pairwise-shared A-stack, and the server uses them in place

Questions
- Is fast IPC still important?
- Are the ideas here useful for VMs?
- Just how safe are servers from clients?