Presentation on theme: "4.2 Request/Reply Communication"— Presentation transcript:
1 4.2 Request/Reply Communication – Wooyoung Kim, Fall 2009
2 Part I: Basic Information – Introduction; Request/Reply Communication; Remote Procedure Call (RPC); RPC Operations; Parameter Passing and Data Conversion; Binding; RPC Compilation; RPC Exception and Failure Handling; Secure RPC.
3 Introduction – Request/reply communication is a common technique by which one application requests the services of another.
4 Introduction – Cont’d – Remote Procedure Call (RPC) is the most widely used request/reply communication model: a language-level abstraction of the request/reply communication mechanism. How does RPC work? What are the implementation issues for RPC?
5 RPC Operations – RPC vs. local procedure call. Similar in syntax, since both involve calling a procedure and waiting for it to return; RPC thus provides access transparency to remote operations. Different in semantics, because an RPC can be delayed and can fail.
6 RPC Operations – how it works. Walking through the steps of an RPC exposes several implementation issues.
7 RPC Operations – implementation issues: parameter passing and data conversion; binding (locating the server and registering the service); compilation (generation of stub procedures and linking); exception and failure handling; security.
8 Parameter Passing – Within a single process, data is shared via parameters and/or global variables. Between processes on the same host, it is shared via message passing. For RPC-based clients and servers, however, passing parameters is typically the only way to exchange data. Parameter marshaling: the rules for parameter passing and data/message conversion; it is the primary responsibility of the stub procedures.
9 Parameter Passing – Cont’d – Call-by-value is fairly simple to handle: the client stub copies the values and packages them into a network message. Call-by-name requires dynamic run-time evaluation of symbolic expressions. Call-by-reference is hard to implement in distributed systems without shared memory. Call-by-copy/restore is a combination of call-by-value and call-by-reference: values are copied in at the entry to the call and copied back out at its exit.
10 Parameter Passing – Cont’d – Most RPC implementations assume that parameters are passed by call-by-value or call-by-copy/restore.
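The call-by-value case described above can be sketched in a few lines. This is a minimal illustration, not the format used by any particular RPC system; the function names and message layout are invented for the example.

```python
import struct

def marshal_call(proc_id: int, x: int, y: int) -> bytes:
    # Client stub: copy call-by-value parameters into a flat network message.
    # Layout (illustrative): procedure id and two 32-bit signed integers,
    # big-endian ("network byte order").
    return struct.pack("!iii", proc_id, x, y)

def unmarshal_call(msg: bytes):
    # Server stub: recover the procedure id and parameters from the message.
    return struct.unpack("!iii", msg)

msg = marshal_call(1, 7, 35)
assert unmarshal_call(msg) == (1, 7, 35)
```

Because only copies of the values travel in the message, nothing the server does can affect the caller's variables, which is exactly why call-by-reference is the hard case.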
11 Data Conversion – Three problems arise in converting between data and messages: data typing, data representation, and data transfer syntax.
12 Data Conversion – Cont’d – Type checking across machines is difficult because the data is passed through interprogram messages. Should the data carry its own type information? Each machine has its own internal representation of the data types. Conversion is further complicated by the serial representation of bits and bytes in communication channels: different machines transmit with either the least or the most significant unit first.
13 Data Conversion – Cont’d – Transfer syntax: the rules governing the representation of messages in a network. With n data representations, n(n-1) translators are required to convert directly between every pair. A better solution is to introduce a universal (canonical) representation, which needs only 2n translators; however, this increases the packing/unpacking overhead.
14 Data Conversion – Cont’d – ASN.1 (Abstract Syntax Notation One) is one of the most important developments in data-representation standards. It is used to define data structures and to specify the formats of protocol data units in network communications.
15 Data Conversion – Cont’d – ASN.1 and transfer syntax are the major facilities for building network presentation services. ASN.1 can be used directly for data representation in RPC implementations. Data types are checked during stub generation and compilation, so carrying type information in messages is not necessary.
16 Data Conversion – Cont’d – Examples of canonical data representations for RPC: Sun’s XDR (eXternal Data Representation) and DCE’s IDL (Interface Definition Language).
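To make the canonical-representation idea concrete, here is a sketch of two basic XDR encoding rules (4-byte big-endian integers; strings as a length word followed by the bytes, zero-padded to a 4-byte boundary). The helper names are our own; a real implementation would use a full XDR library.

```python
import struct

def xdr_int(n: int) -> bytes:
    # XDR represents an integer as 4 bytes, most significant byte first.
    return struct.pack("!i", n)

def xdr_string(s: str) -> bytes:
    # XDR represents a string as its length (as an unsigned int),
    # then the bytes, zero-padded to the next 4-byte boundary.
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4
    return struct.pack("!I", len(data)) + data + b"\x00" * pad

assert xdr_int(1) == b"\x00\x00\x00\x01"
assert xdr_string("hi") == b"\x00\x00\x00\x02hi\x00\x00"
```

Every machine converts to and from this one wire format, which is where the 2n translator count comes from.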
17 Binding – Binding is the process of connecting the client to the server. Services are specified by a server interface written in an interface definition language such as XDR.
18 Binding – Cont’d – When the server starts up, it registers its communication endpoint by sending a request (program number, version number, port number) to the port mapper, which manages the mapping. Before issuing an RPC, the client calls the run-time library routine create, which contacts the port mapper to obtain a handle for accessing the server. The create message contains the server name, program number, version number, and transport protocol.
19 Binding – Cont’d – The port mapper verifies the program and version numbers and returns the server’s port number to the client. The client then builds a client handle for subsequent use in RPCs. This establishes the socket connection between client and server.
20 Binding – Cont’d – [Diagram: (1) the server registers its service with the port mapper (and with a directory server, if the server machine is unknown); (2) the client issues create; (3) the port mapper returns the port number; (4) the client obtains the server machine address or a handle to the server.]
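The register/lookup exchange in the diagram can be sketched as a toy port mapper. This is an illustration of the idea only; the class and method names are invented and the real portmap protocol runs over the network with its own message formats.

```python
# Toy port mapper: maps (program, version) to a port number.
class PortMapper:
    def __init__(self):
        self._table = {}

    def register(self, program: int, version: int, port: int):
        # Step 1: the server registers its communication endpoint.
        self._table[(program, version)] = port

    def lookup(self, program: int, version: int):
        # Steps 2-3: the client's create() call asks for the server's port;
        # None models "program/version not registered".
        return self._table.get((program, version))

pm = PortMapper()
pm.register(program=100003, version=2, port=2049)  # an NFS-like service number
assert pm.lookup(100003, 2) == 2049
assert pm.lookup(100003, 3) is None                # unknown version is rejected
```

The verification step on the previous slide corresponds to the lookup returning None when the program or version number does not match.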
21 RPC Compilation – Compiling an RPC application requires the following: an interface specification file; an RPC generator, whose input is the interface specification file and whose output is the client and server stub procedure source code; and a run-time library supporting execution of an RPC, including binding, data conversion, and communication.
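To show what the generator's output buys the programmer, here is a hand-written sketch of a client stub for a hypothetical add procedure. Everything here (the class names, the transport interface) is invented for illustration; a real generator emits equivalent code from the interface specification file.

```python
import struct

class AddClientStub:
    # What a generated client stub looks like: it hides marshaling and the
    # network send behind an ordinary procedure call.
    def __init__(self, transport):
        self.transport = transport           # stands in for the run-time library

    def add(self, x: int, y: int) -> int:
        request = struct.pack("!ii", x, y)               # marshal parameters
        reply = self.transport.call("add", request)      # send and wait
        (result,) = struct.unpack("!i", reply)           # unmarshal the result
        return result

class LoopbackTransport:
    # Stands in for the real network: executes the "server procedure" locally.
    def call(self, proc, request):
        x, y = struct.unpack("!ii", request)
        return struct.pack("!i", x + y)

stub = AddClientStub(LoopbackTransport())
assert stub.add(2, 3) == 5
```

The caller writes `stub.add(2, 3)` exactly as it would call a local procedure, which is the access transparency discussed on slide 5.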
22 RPC Exception and Failure Handling – Exceptions: abnormal conditions raised by the execution of stub and server procedures, e.g., overflow/underflow or a protection violation. Failures: problems caused by crashes of clients, servers, or the communication network.
23 Exception Handling – Exceptions must be reported to clients. Question: how does the server report status information to clients? A client may also need to stop the execution of a server procedure. Question: how does a client send control information to a server?
24 Exception Handling – Cont’d – In a local procedure call, control and status are exchanged through global variables and signals. Across a computer network, the exchange of control and status information must rely on a data channel: either in-band signaling (a flag in the data stream) or out-of-band signaling over a separate channel (socket connection), which is more flexible for RPC. It is implemented as part of the stub library support and should be transparent.
25 Failure Handling – Cannot locate the server: a nonexistent server or an outdated program number; handle it like an exception. Messages can be delayed or lost: this is eventually detected by a time-out or by the absence of a response from the server, and the messages can then be retransmitted.
26 Failure Handling – Cont’d – Problems with retransmission of requests: when a request is merely delayed, the server may receive it multiple times. One remedy is to make operations idempotent (executing them multiple times has the same effect as executing them once). Where idempotence is impossible (e.g., lock servers), each request carries a sequence number so the server can detect duplicates. Typical RPC implementations do not use sequence numbers; duplicate handling is done on a per-request basis.
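The sequence-number remedy for non-idempotent operations can be sketched as follows. The server caches the reply for each sequence number it has seen and replays it on a retransmission instead of re-executing; the class and field names are illustrative only.

```python
# Sketch: a non-idempotent server (think of a counter or a lock server)
# filtering duplicate requests by sequence number.
class DedupServer:
    def __init__(self):
        self.value = 0
        self._seen = {}            # sequence number -> cached reply

    def handle(self, seq: int, amount: int) -> int:
        if seq in self._seen:
            # Retransmitted request: replay the old reply, do not re-execute.
            return self._seen[seq]
        self.value += amount       # the non-idempotent operation, executed once
        self._seen[seq] = self.value
        return self.value

srv = DedupServer()
assert srv.handle(seq=1, amount=5) == 5
assert srv.handle(seq=1, amount=5) == 5   # duplicate: same reply, no new effect
assert srv.value == 5
```

The `_seen` dictionary is exactly the "cache table" that the next slide's server-crash scenario worries about losing.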
27 Failure Handling – Cont’d – Crash of a server: the client attempts to reestablish a connection and retransmits its request. If the server did not fail but only the TCP connection did, the server examines its cache table for duplicate messages. If the server itself failed, the cache table is lost, and an exception is raised.
28 Failure Handling – Cont’d – Three common RPC semantics in the presence of failures. At least once: the client retransmits until it receives a reply (or gives up), so the server may execute the request more than once. At most once: on failure the client gives up immediately and reports an error, so the request is executed either once or not at all. Maybe: no guarantees; the client cannot tell whether the request was executed.
29 Failure Handling – Cont’d – The most desirable RPC semantics is exactly once, but it is hard to implement. To survive loss of the cache table, combine at-least-once delivery with logging the cache table to stable storage and reloading it when the server recovers. This carries significant overhead, since each service must be executed as a transaction at the server.
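The at-least-once client loop described above can be sketched in a few lines. The transport is simulated by a plain function that returns None on a lost message or time-out; the function name and retry count are invented for the example.

```python
# Sketch of at-least-once client behavior: retransmit until a reply arrives
# or the retry budget is exhausted.
def call_at_least_once(send, max_retries=3):
    for _ in range(max_retries):
        reply = send()             # None models a time-out / lost message
        if reply is not None:
            return reply           # may have executed more than once server-side
    raise TimeoutError("no reply after retries")

# A flaky transport that loses the first two messages, then succeeds.
losses = [None, None, "OK"]
assert call_at_least_once(lambda: losses.pop(0)) == "OK"
```

Under at-most-once semantics the loop body would instead raise immediately on the first None, leaving it to the application to decide whether retrying is safe.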
30 Failure Handling – Cont’d – Crash of a client process: the server is left with an orphan computation whose reply is undeliverable. Orphan computations waste server resources and may confuse the client with invalid replies from previous connections. How can orphan computations be eliminated? Client: on reboot, clean up all previous requests. Server: occasionally locate the owners of outstanding requests. Expiration: give each remote operation a maximum lifetime.
31 Secure RPC – Security is important for RPC, which introduces vulnerability by opening doors for attacks. Since RPC has become a cornerstone of client/server computing, all security features should be built on top of a secure RPC. The primary security issues: authentication of processes; confidentiality of messages; access control (authorization) from client to server.
32 Secure RPC – Cont’d – An authentication protocol for RPC should establish mutual authentication as well as message integrity, confidentiality, and originality. Designing a secure authentication protocol means deciding how strong the security goals are, what attacks are possible, and what inherent limitations the system has. A short-term solution is to add security features on top of an existing system.
33 Secure RPC – Cont’d – Sun secure RPC is built into Sun’s basic RPC. It assumes a trusted Network Information Service (NIS), which keeps a database of keys, including users’ encrypted secret keys. The keys are used to generate a true cryptographic session key. When a user logs in, NIS supplies the encrypted secret key; the user’s password is used to decrypt it and is then discarded. Passwords are never transmitted over the network.
34 Secure RPC – Cont’d – Sun secure RPC example: on a client login attempt, the login program deposits the client’s key with the key server. The key server generates a common session key by exponential (Diffie–Hellman) key exchange. The secret keys are erased once the common session key has been generated. Each RPC message is then authenticated by a conversation key; because the conversation key is not derived from the secret key, the server can keep it and use it for the entire session.
35 Secure RPC – Cont’d – A Sun secure RPC message may carry additional fields: a timestamp, to check message expiration; a nonce, to protect against replay of a message; and a message digest, to detect any tampering. Sun secure RPC is simple, relying on the existing NIS.
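The role of the three extra fields can be sketched as follows. This is not Sun secure RPC's actual wire format (which is DES-based); it is a minimal illustration using an HMAC as the message digest, with invented field and function names, keyed by the conversation key.

```python
import hashlib, hmac, os, time

def make_message(conversation_key: bytes, body: bytes) -> dict:
    msg = {
        "timestamp": time.time(),     # lets the receiver reject expired messages
        "nonce": os.urandom(16),      # fresh value: guards against replay
        "body": body,
    }
    mac = hmac.new(conversation_key,
                   repr(msg["timestamp"]).encode() + msg["nonce"] + body,
                   hashlib.sha256)
    msg["digest"] = mac.hexdigest()   # covers timestamp, nonce, and body
    return msg

def verify(conversation_key: bytes, msg: dict, max_age: float = 30.0) -> bool:
    if time.time() - msg["timestamp"] > max_age:
        return False                  # expired: timestamp check failed
    mac = hmac.new(conversation_key,
                   repr(msg["timestamp"]).encode() + msg["nonce"] + msg["body"],
                   hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest(), msg["digest"])

key = os.urandom(32)                  # stands in for the conversation key
m = make_message(key, b"read file A")
assert verify(key, m)
m["body"] = b"read file B"            # tampering changes the digest check
assert not verify(key, m)
```

Because the digest covers the timestamp and nonce as well as the body, an attacker can neither alter the payload nor replay an old message with a fresh timestamp.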
37 Part II: Current Projects – Other RPC industry implementations: ONC RPC/NFS (Sun Microsystems); early 1990s – DCE RPC (OSF, later the basis of Microsoft RPC); late 1990s – ORPC (object-oriented RPC); 1997 – DCOM (Microsoft); .NET Remoting (Microsoft); Doors (Solaris); 2003 – ICE (Internet Communications Engine); DCOP, the Desktop Communication Protocol (KDE).
38 ICE (Internet Communications Engine) [3,4,5] – ICE is object-oriented middleware providing RPC, grid computing, and publish/subscribe functionality. Its design was influenced by CORBA (Common Object Request Broker Architecture), and it is developed by ZeroC, Inc. It supports C++, Java, the .NET languages, Objective-C, Python, PHP, and Ruby on most major operating systems.
40 ICE components – IceStorm: an object-oriented publish/subscribe framework. IceGrid: object-oriented load balancing, failover, object discovery, and registry services. IcePatch: facilitates the deployment of ICE-based software. Glacier: a proxy-based service enabling communication through firewalls. IceBox: an SOA-like container of executable services implemented as libraries. Slice: the specification language in which programmers declare the interfaces and classes through which servers and clients communicate.
41 Current Project with ICE – “Ice middleware in the New Solar Telescope’s Telescope Control System”, 2008. The NST (New Solar Telescope) is an off-axis solar telescope with the world’s largest aperture. The project develops a TCS (telescope control system) to control all aspects of the telescope: telescope pointing, the tracking subsystem, the active optics control subsystem, a handheld controller, and the main GUI.
42 Current Project with ICE – Cont’d – Ice advantages: provides fast and scalable communications; simple to use; Ice Embedded (Ice-E) supports the Microsoft Windows Mobile operating system for handheld devices; the source code of Ice is provided under the GNU General Public License (GPL); continuously updated. Ice problem: frequent package updates force changes to the code.
43 Current Project with ICE – Cont’d – TCS implementation: a star-like structure in which all subsystems communicate through HQ (headquarters). Each subsystem acts as both a server and a client, and all subsystems use the same ICE interface. The interface for every object includes the operations Register, Unregister, SendError, SendCommand, RequestInformation, SendNotification, SendReply, and Error. Subsystems can be started in any order; they only need to register with HQ and the IcePack registry.
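The star topology above can be sketched with plain objects standing in for ICE proxies. This is illustrative only: the class names, the PointingSubsystem example, and the routing logic are invented, not the actual TCS code or Slice definitions.

```python
# Sketch of the star topology: every subsystem registers with HQ, and HQ
# routes commands to subsystems by name.
class HQ:
    def __init__(self):
        self.subsystems = {}

    def Register(self, name, proxy):
        # Mirrors the Register operation from the common interface.
        self.subsystems[name] = proxy

    def SendCommand(self, name, cmd):
        # HQ forwards the command to the named subsystem's own SendCommand.
        return self.subsystems[name].SendCommand(cmd)

class PointingSubsystem:
    # A subsystem implementing (part of) the common interface.
    def SendCommand(self, cmd):
        return f"pointing: executed {cmd!r}"

hq = HQ()
hq.Register("pointing", PointingSubsystem())
assert hq.SendCommand("pointing", "slew") == "pointing: executed 'slew'"
```

Because every subsystem implements the same interface, HQ can route to any of them uniformly, and subsystems can join in any order simply by calling Register.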
44 Part III: Future Work – Updates that remain compatible with old versions (e.g., ICE). The trend toward object-oriented implementation: a general-purpose tool for constructing object-based modular systems that are transparently distributed at run time.
45 References – Randy Chow and Theodore Johnson, “Distributed Operating Systems & Algorithms”, 1997. “Interprocess Communications”. ZeroC, Inc. “ICE”, Wikipedia. Shumko, Sergij, “Ice middleware in the New Solar Telescope’s Telescope Control System”, Astronomical Data Analysis Software and Systems XVII, ASP Conference Series, Vol. XXX, 2008, Canada.