Remote Procedure Calls


Remote Procedure Calls
- A more object-oriented mechanism
- Communicate by making procedure calls on other processes
- "Remote" here really means "in another process," not necessarily "on another machine"
- The called procedures aren't in your address space and don't even use the same code
- Some differences from a regular procedure call: typically blocking

RPC Characteristics
- Procedure calls are the primary unit of computation in most languages
- The unit of information hiding in most methodologies
- The primary level of interface specification
- A natural boundary between client and server processes
- RPC turns procedure calls into message sends/receives
- Requires both sender and receiver to be playing the same game: typically both use some particular RPC standard

RPC Mechanics
- The process hosting the remote procedure might be on the same computer or a different one
- Under the covers, messages are used in either case
- Resulting limitations:
  - No implicit parameters/returns (e.g., global variables)
  - No call-by-reference parameters
  - Much slower than local procedure calls (TANSTAAFL)
- Most commonly used for client/server computing

RPC As A Layered Abstraction
- Most RPC implementations rely on messages under the covers
- Messages themselves are a communication abstraction
- RPC is a higher-level abstraction, with definitions about synchronization, the kinds of data passed, etc.
- It is layered on top of the low-level message abstraction
- Would it make sense to use one low-level abstraction (messages) for intermachine RPC and a different low-level abstraction for interprocess RPC on the same machine?

Marshalling Arguments
- RPC calls have parameters, but the remote procedure might not use the same data representation
- Especially if it's on a different machine type
- So the sender stores its version of the arguments in a message, using an intermediate representation
- The message is sent to the receiver
- The receiver translates the intermediate representation into its own format
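To make the intermediate-representation idea concrete, here is a minimal sketch of marshalling using Python's `struct` module, with network byte order standing in for the machine-independent format. The call ID and the particular field layout are invented for illustration, not part of any real RPC standard.

```python
import struct

def marshal(call_id, x, name):
    """Pack arguments into a machine-independent byte string.

    Layout (all big-endian, i.e. network order): a 4-byte call ID,
    a 4-byte signed integer argument, a 4-byte string length, then
    the UTF-8 bytes of the string argument.
    """
    data = name.encode("utf-8")
    return struct.pack("!IiI", call_id, x, len(data)) + data

def unmarshal(message):
    """Translate the intermediate representation into native values."""
    call_id, x, length = struct.unpack_from("!IiI", message)
    name = message[12:12 + length].decode("utf-8")
    return call_id, x, name
```

A round trip through `marshal` and `unmarshal` reproduces the original arguments regardless of the native byte order on either side.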

Tools to Support Remote Procedure Calls
[Diagram: an RPC generation tool takes an RPC interface specification and produces client RPC stubs (linked with the client application code), a server RPC skeleton (linked with the server implementation code), and External Data Representation access functions used by both client and server.]

RPC Operations
- The client application links against local procedures
- It calls those local procedures and gets results
- All of the RPC implementation is inside those procedures
- The client application does not know about the details:
  - the formats of messages
  - sends, timeouts, retransmissions
  - the external data representation
- All of this is generated automatically by the RPC tools
- The key to the tools is the interface specification
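A hypothetical sketch of what a generated client stub looks like from the application's point of view: the procedure name `add`, the wire format, and the in-process `server_dispatch` stand-in for the transport are all invented for illustration.

```python
import struct

def server_dispatch(request):
    """Stand-in for the real transport plus server skeleton: unpack
    the arguments, run the actual remote procedure, pack the result."""
    a, b = struct.unpack("!ii", request)
    return struct.pack("!i", a + b)          # the real procedure body

def add(a, b):
    """Generated client stub: to the caller this looks like an
    ordinary local procedure, but inside it marshals the arguments,
    'sends' the request, and unmarshals the reply."""
    request = struct.pack("!ii", a, b)       # marshal arguments
    reply = server_dispatch(request)         # really: send, then wait
    (result,) = struct.unpack("!i", reply)   # unmarshal the result
    return result
```

The calling code just writes `add(2, 3)`; nothing about messages or data representation leaks into the application.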

RPC As an IPC Mechanism
- Inherits many characteristics of local procedure calls
- Inherently non-parallel: make the call and wait for the reply
- The caller is thus limited by the speed of the callee
- Not suitable for creating parallel programs
- Works best for obvious client/server situations

What If You Call and No One Answers?
- Unlike your own procedure calls, remote procedure calls might not be answered
- Especially if the callee is on a different machine
- Possibly because the call failed
- Possibly because the return value delivery failed
- Possibly because the callee just isn't done yet
- What should the caller do?

RPC Caller Options
- Just keep waiting
  - Forever?
- Timeout and automatically retry
  - As part of the underlying RPC mechanism, without necessarily informing the calling process
  - How often do you retry? Forever?
- Timeout and allow the process to decide
  - It could query the callee about what happened
  - But what if there's no answer there, either?
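The middle option, timeout with a bounded number of automatic retries, can be sketched as a wrapper around the call; the retry count and the use of `TimeoutError` as the failure signal are illustrative assumptions, not a standard.

```python
def call_with_retry(rpc_call, max_attempts=3):
    """Retry a timed-out RPC a bounded number of times, then give up
    and surface the failure so the calling process can decide."""
    for attempt in range(max_attempts):
        try:
            return rpc_call()      # rpc_call raises TimeoutError on timeout
        except TimeoutError:
            continue               # mechanism retries without telling caller
    raise TimeoutError("no answer after %d attempts" % max_attempts)
```

Note that this only answers "how often?"; it cannot distinguish a failed call from a slow callee, which is why the final decision is still pushed back to the calling process.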

Another Difference Between Local and Remote Procedure Calls
- In local procedure calls, a fatal bug in the called procedure usually kills the caller, since they're in the same process
- In RPC, a fatal bug in the called procedure need not kill the caller
- The caller won't get an answer, but otherwise its process is still OK
- RPC insulates the caller better than a normal procedure call does

Bounded Buffers
- A higher-level abstraction than shared domains or simple messages, but not quite as high level as RPC
- A buffer that allows writers to put messages in and readers to pull messages out
- FIFO
- Unidirectional: one process sends, one process receives
- With a buffer of limited size

SEND and RECEIVE With Bounded Buffers
- For SEND(): if the buffer is not full, put the message at the end of the buffer and return
  - If full, block waiting for space in the buffer, then add the message and return
- For RECEIVE(): if the buffer has one or more messages, return the first one put in
  - If there are no messages in the buffer, block and wait until one is put in
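A minimal sketch of these SEND/RECEIVE semantics in Python, blocking with a lock and condition variable rather than the busy-waiting examined on the later slides; the class and method names are chosen for this example.

```python
import threading
from collections import deque

class BoundedBuffer:
    """FIFO, unidirectional, fixed-capacity message buffer."""

    def __init__(self, size):
        self.size = size
        self.buf = deque()
        self.cond = threading.Condition()

    def send(self, msg):
        with self.cond:
            while len(self.buf) >= self.size:  # full: block for space
                self.cond.wait()
            self.buf.append(msg)               # add at the end
            self.cond.notify_all()             # wake any waiting receiver

    def receive(self):
        with self.cond:
            while not self.buf:                # empty: block for a message
                self.cond.wait()
            msg = self.buf.popleft()           # first one put in (FIFO)
            self.cond.notify_all()             # wake any waiting sender
            return msg
```

The `while` loops around `wait()` (rather than `if`) are deliberate: a woken process must re-check the condition, since another process may have raced in first.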

Practicalities of Bounded Buffers
- Handles the problem of not having infinite space
- Ensures that a fast sender doesn't overwhelm a slow receiver (a classic problem in queueing theory)
- Provides well-defined, simple behavior for the receiver
- But subject to some synchronization issues: the producer/consumer problem
- A good abstraction for exploring those issues

The Bounded Buffer
[Diagram: a fixed-size buffer between two processes. Process 1 is the writer, Process 2 the reader. Process 1 SENDs a message through the buffer; Process 2 RECEIVEs a message from the buffer; more messages are sent and received.]
- What could possibly go wrong?

One Potential Issue
- What if the buffer is full, but the sender wants to send another message?
- The sender will need to wait for the receiver to catch up
- An issue of sequence coordination

Another Potential Issue
- Process 2 wants to receive a message, but the buffer is empty
- Process 1 hasn't sent any messages
- Another sequence coordination issue

Handling the Sequence Coordination Issues
- One party needs to wait for the other to do something
- If the buffer is full, process 1's SEND must wait for process 2 to do a RECEIVE
- If the buffer is empty, process 2's RECEIVE must wait for process 1 to SEND
- Naively, this is done through busy loops: check the condition, loop back if it's not true
- Also called spin loops

Implementing the Loops
- What exactly are the processes looping on?
- They care about how many messages are in the bounded buffer
- That count is probably kept in a variable: incremented on SEND, decremented on RECEIVE
- It must never go below zero or exceed the buffer size
- The actual system code would test that variable
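Written out naively in code, with one shared counter and spin loops, it looks like the sketch below. This is exactly the flawed version whose race the next slide exposes: the counter updates are ordinary read-modify-write sequences, not atomic operations. Variable names are illustrative.

```python
BUFFER_SIZE = 4
slots = [None] * BUFFER_SIZE
buffer_count = 0    # shared: incremented on SEND, decremented on RECEIVE
in_pos = 0
out_pos = 0

def send(msg):
    global buffer_count, in_pos
    while buffer_count == BUFFER_SIZE:
        pass                             # spin (busy) loop: buffer full
    slots[in_pos % BUFFER_SIZE] = msg
    in_pos += 1
    buffer_count += 1                    # NOT atomic: read, add, write back

def receive():
    global buffer_count, out_pos
    while buffer_count == 0:
        pass                             # spin loop: buffer empty
    msg = slots[out_pos % BUFFER_SIZE]
    out_pos += 1
    buffer_count -= 1                    # races with the sender's increment
    return msg
```

With a single process calling these in turn the code behaves correctly; the trouble only appears once the sender and receiver run concurrently.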

A Potential Danger
[Diagram: BUFFER_COUNT starts at 4. Process 1 wants to SEND and checks BUFFER_COUNT; Process 2 wants to RECEIVE and also checks BUFFER_COUNT, reading the same value 4. Process 1's increment stores 5, but Process 2's decrement, computed from the stale value 4, then stores 3, so the increment is lost.]
- Concurrency's a bitch
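The lost update on the slide can be replayed deterministically by writing out the read-modify-write steps the two processes interleave; the local variables below stand in for each process's registers.

```python
BUFFER_COUNT = 4

# Process 1 (sender) loads the count into a register...
p1_reg = BUFFER_COUNT        # reads 4
# ...and Process 2 (receiver) loads it too, before Process 1 stores back.
p2_reg = BUFFER_COUNT        # also reads 4

BUFFER_COUNT = p1_reg + 1    # sender stores 5
BUFFER_COUNT = p2_reg - 1    # receiver stores 3, clobbering the 5

# One message was sent and one received, so the count should still be 4,
# but the interleaving left it at 3: the sender's increment was lost.
assert BUFFER_COUNT == 3
```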

Why Didn't You Just Say BUFFER_COUNT = BUFFER_COUNT - 1?
- These are system operations, occurring at a low level
- Using variables not necessarily in the processes' own address space, perhaps even raw RAM memory locations
- The question isn't "can we do it right?"
- The question is "what must we do if we are to do it right?"

Another Concurrency Problem
- The two processes may operate on separate cores, meaning true concurrency is possible
- Even if not, scheduling may make it seem to happen
- There is no guarantee of atomicity for operations above the level of hardware instructions
- E.g., when process 1 puts the new message into the buffer, its update of BUFFER_COUNT may not occur until after process 2 does its work
- In which case, BUFFER_COUNT may end up at 5, which isn't right either

One Possible Solution
- Use separate variables to hold the number of messages put into the buffer (IN) and the number of messages taken out (OUT)
- Only the sender updates the IN variable; only the receiver updates the OUT variable
- Calculate buffer fullness by subtracting OUT from IN
- This won't exhibit the previous problem
- When working with concurrent processes, it's safest to allow only one process to write each variable
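A sketch of this single-writer scheme: each counter is written by exactly one process, and fullness is derived by subtraction, so the lost-update race on a shared count disappears. (A real cross-core implementation would also need the counter updates to become visible in order, e.g. via memory barriers; that detail is omitted here, and the names are illustrative.)

```python
class SPSCBoundedBuffer:
    """Single-producer/single-consumer bounded buffer.

    in_count is written only by the sender; out_count only by the
    receiver. Both counters only ever grow."""

    def __init__(self, size):
        self.size = size
        self.slots = [None] * size
        self.in_count = 0     # total messages ever sent
        self.out_count = 0    # total messages ever received

    def occupancy(self):
        return self.in_count - self.out_count   # messages now in buffer

    def send(self, msg):
        while self.occupancy() == self.size:
            pass                                # spin: buffer full
        self.slots[self.in_count % self.size] = msg
        self.in_count += 1                      # single writer: no race

    def receive(self):
        while self.occupancy() == 0:
            pass                                # spin: buffer empty
        msg = self.slots[self.out_count % self.size]
        self.out_count += 1                     # single writer: no race
        return msg
```

Because IN and OUT only increase and each has one writer, a stale read by the other process can only make the buffer look slightly fuller or emptier than it is, causing extra spinning but never a corrupted count.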