4.2 Request/Reply Communication


4.2 Request/Reply Communication Wooyoung Kim Fall 2009

Part I: Basic Information [1]
Introduction
Request/Reply Communication
Remote Procedure Call (RPC)
RPC Operations
Parameter Passing and Data Conversion
Binding
RPC Compilation
RPC Exception and Failure Handling
Secure RPC

Introduction Request/reply communication is a common technique by which one application requests the services of another.

Introduction – Cont’d Remote Procedure Call (RPC) is the most widely used request/reply communication model: a language-level abstraction of the request/reply communication mechanism. How does RPC work? What are the implementation issues for RPC?

RPC Operations RPC vs. local procedure call: similar in syntax, as both have ‘calling’ and ‘waiting’ procedures – RPC provides access transparency to remote operations. Different in semantics, because RPC possibly involves delays and failures.

RPC Operations – how does it work? The operation exposes several issues that arise during implementation.

RPC Operations – implementation issues Parameter passing and data conversion. Binding – locating the server and registering the service. Compilation – generation of stub procedures and linking. Exception and failure handling. Security.

Parameter Passing In a single process: via parameters and/or global variables. Between multiple processes on the same host: via message passing. For RPC-based clients and servers, however, passing parameters is typically the only way to exchange data. Parameter marshaling: the rules for parameter passing and data/message conversion – the primary responsibility of the stub procedures.

Parameter Passing – Cont’d Call-by-value is fairly simple to handle: the client stub copies the values and packages them into a network message. Call-by-name requires dynamic run-time evaluation of symbolic expressions. Call-by-reference is hard to implement in distributed systems with non-shared memory. Call-by-copy/restore is a combination of call-by-value and call-by-reference: parameters are copied in at the entry of the call and copied back to the caller at the exit of the call.

Parameter Passing – Cont’d Most RPC implementations assume that parameters are passed by call-by-value or call-by-copy/restore.

Data Conversion Three problems arise in the conversion between data and messages: data typing, data representation, and data transfer syntax.

Data Conversion – Cont’d Type checking across machines is difficult because the data is passed through interprogram messages – should the data carry type information? Each machine has its own internal representation of the data types. This is further complicated by the serial representation of bits and bytes in communication channels: different machines follow different conventions for transmitting the least or the most significant digit first.

Data Conversion – Cont’d Transfer syntax: rules for the representation of messages in a network. For n data representations, n(n-1)/2 bidirectional translators are required for direct pairwise conversion. A better solution is to invent a universal (canonical) representation, which needs only 2n translators: one to and one from the canonical form per representation. However, this increases the packing/unpacking overhead.

Data Conversion – Cont’d ASN.1 (Abstract Syntax Notation One) is one of the most important developments in presentation standards. It is used to define data structures and to specify the formats of protocol data units in network communications.

Data Conversion – Cont’d ASN.1 and transfer syntax are the major facilities for building network presentation services. ASN.1 can be used directly for data representation in RPC implementations: data types are checked during stub generation and compilation, so providing type information in messages is not necessary.

Data Conversion – Cont’d Examples of canonical data representations for RPC: Sun’s XDR (eXternal Data Representation) and DCE’s IDL (Interface Definition Language).

Binding Binding is the process of connecting the client to the server. Services are specified by a server interface written in an interface definition language such as XDR.

Binding – Cont’d When the server starts up, it registers its communication endpoint by sending a request (program number, version number, port number) to the port mapper, which manages the mapping. Before making an RPC, the client calls the RPC run-time library routine create, which contacts the port mapper to obtain a handle for accessing the server. The create message contains the server name, program number, version number, and transport protocol.

Binding – Cont’d The port mapper verifies the program and version numbers and returns the port number of the server to the client. The client builds a client handle for subsequent use in RPCs. This establishes a socket connection between the client and the server.

Binding – Cont’d [Figure: the binding sequence. 1. The server registers its service with the port mapper (and with a directory server, if the server machine is unknown to the client). 2. The client sends a create request to the port mapper. 3. The port mapper returns the server’s port number. 4. The client obtains a handle (server machine address) to the server.]

RPC Compilation Compilation of an RPC requires the following: An interface specification file. An RPC generator, whose input is the interface specification file and whose output is the client and server stub procedure source code. A run-time library supporting the execution of an RPC, including support for binding, data conversion, and communication.

RPC Exception and Failure Handling Exceptions: abnormal conditions raised by the execution of stub and server procedures, e.g., overflow/underflow or protection violations. Failures: problems caused by crashes of clients, servers, or the communication network.

Exception Handling Exceptions must be reported to the clients. Question: how does the server report status information to clients? A client may also have to stop the execution of a server procedure. Question: how does a client send control information to a server?

Exception Handling – Cont’d In a local procedure call: global variables and signals. In a computer network, the exchange of control and status information must rely on a data channel: either in-band signaling (a flag within the data stream) or out-of-band signaling over a separate channel (socket connection), which is more flexible for RPC. It is implemented as part of the stub library support and should be transparent.

Failure Handling Cannot locate the server: a nonexistent server or an outdated program handle – handled like an exception. Messages can be delayed or lost: this is eventually detected by a time-out or by no response from the server, and the messages can be retransmitted.

Failure Handling – Cont’d Problems with retransmission of requests: In case of delay, the server gets duplicate requests, so operations should be made idempotent (executable multiple times with the same effect). Where idempotency is impossible (e.g., lock servers), give each request a sequence number. Typical RPC implementations do not use sequence numbers – duplicate detection is done per request.

Failure Handling – Cont’d Crash of a server: the client attempts to reestablish a connection and retransmits its request. If the server did not fail but the TCP connection did, the server examines its cache table for duplicate messages. If the server failed, the cache table is lost, and an exception is raised.

Failure Handling – Cont’d Three assumptions yield three RPC semantics in the presence of failures: The server raises an exception and the client retries: at-least-once. The server raises an exception and the client gives up immediately: at-most-once. The server reports no errors and the client resubmits until it gets a reply or gives up: maybe.

Failure Handling – Cont’d The most desirable RPC semantics is exactly-once, but it is hard to implement. To survive loss of the cache table: use at-least-once semantics and log the cache table to stable storage, reloading it when the server recovers. This adds overhead, since each service must then be executed as a transaction at the server.

Failure Handling – Cont’d Crash of a client process: the server is left with an orphan computation whose reply is undeliverable. Orphan computations waste server resources and may confuse the client with invalid replies from previous connections. How can orphan computations be eliminated? Client: on reboot, clean up all previous requests. Server: occasionally locate the owners of pending requests. Expiration: give each remote operation a maximum lifetime.

Secure RPC Security is important for RPC: RPC introduces vulnerabilities because it opens doors for attacks, and since RPC has become a cornerstone of client/server computing, all security features should be built on top of a secure RPC. Primary security issues: authentication of processes; confidentiality of messages; access control (authorization) from client to server.

Secure RPC – Cont’d An authentication protocol for RPC should establish: mutual authentication; message integrity, confidentiality, and originality. Designing a secure authentication protocol requires deciding how strong the security goals are, what attacks are possible, and what inherent limitations the system has. A short-term solution is to add security features on top of an existing protocol.

Secure RPC – Cont’d Sun secure RPC is built into Sun’s basic RPC. It assumes a trusted Network Information Service (NIS), which keeps a database of keys; the keys are used for generating a true cryptographic session key. When the user logs in, NIS supplies the (encrypted) key; the user’s password is used to decrypt the secret key and is then discarded. Passwords are never transmitted over the network.

Secure RPC – Cont’d Sun secure RPC – example: On a client login attempt, the login program deposits the client’s key with the key server. The key server generates a common session key by exponential (Diffie-Hellman) key exchange. The secret keys are erased after the common session key is generated. Each RPC message is authenticated with a conversation key, which is kept at the server and used for the entire session, since it is not derived from the secret key.

Secure RPC – Cont’d Sun secure RPC – an RPC message may additionally contain: A timestamp, to check message expiration. A nonce, to protect against the replay of a message. A message digest, to detect any tampering. Sun secure RPC is simple, building on the existing NIS.

SUN’S SECURE RPC

Part II: Current Projects Other RPC Industry Implementations [2] 1984 – ONC RPC/NFS (Sun Microsystems Inc.) Early 1990s – DCE RPC (OSF; the basis of Microsoft RPC) Late 1990s – ORPC (object-oriented programming community) 1997 – DCOM (Microsoft) 2002 – .NET Remoting (Microsoft) Doors (Solaris) 2003 – ICE (Internet Communications Engine) DCOP – Desktop Communication Protocol (KDE)

ICE (Internet Communications Engine) [3,4,5] ICE is object-oriented middleware providing RPC, grid computing, and publish/subscribe functionality. It was influenced in its design by CORBA (Common Object Request Broker Architecture) and was developed by ZeroC, Inc. It supports C++, Java, .NET languages, Objective-C, Python, PHP, and Ruby on most major operating systems.

ICE components [Figure from Wikipedia]

ICE components IceStorm: an object-oriented publish/subscribe framework. IceGrid: provides object-oriented load balancing, failover, object discovery, and registry services. IcePatch: facilitates the deployment of ICE-based software. Glacier: a proxy-based service that enables communication through firewalls. IceBox: an SOA-like container of executable services implemented as libraries. Slice: the specification format in which programmers declare interfaces; servers and clients communicate based on the interfaces and classes declared in the Slice definitions.

Current Project with ICE [5] Ice middleware in the New Solar Telescope’s Telescope Control System, 2008 [5]. The NST (New Solar Telescope) is an off-axis solar telescope with the world’s largest aperture. The project develops a TCS (telescope control system) to control all aspects of the telescope: Telescope Pointing/Tracking Subsystem, Active Optics Control Subsystem, Handheld Controller, Main GUI.

Current Project with ICE – Cont’d [5] Ice advantages: Provides fast and scalable communication. Simple to use. Ice Embedded (Ice-E) supports the Microsoft Windows Mobile operating system for handheld devices. The source code of Ice is provided under the GNU GPL (General Public License). Continuously updated. Ice problem: frequent package updates force changes to existing code.

Current Project with ICE – Cont’d [5] TCS implementation: Star-like structure – all subsystems communicate through HQ (headquarters). Each subsystem acts as both a server and a client, and each uses the same ICE interface. The interface for every object includes the operations Register, Unregister, SendError, SendCommand, RequestInformation, SendNotification, SendReply, and Error. Subsystems can be started in any order; they only need to register with HQ and the IcePack registry.

Part III: Future Work Compatible updates with old versions (e.g., ICE). The trend is toward object-oriented implementations: general-purpose tools to construct object-based modular systems that are transparently distributed at run-time.

References
[1] Randy Chow and Theodore Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley, 1997.
[2] Interprocess communication, http://en.wikipedia.org/wiki/Interprocess_communication
[3] ZeroC, Inc., http://zeroc.com/ice.html
[4] Internet Communications Engine, http://en.wikipedia.org/wiki/Internet_Communications_Engine
[5] Shumko, Sergij. “Ice middleware in the New Solar Telescope’s Telescope Control System.” Astronomical Data Analysis Software and Systems XVII, ASP Conference Series, 2008.

Thank You