CMPT 401 Summer 2007 Dr. Alexandra Fedorova Lecture XVIII: Concluding Remarks
CMPT 401 Summer 2007 © A. Fedorova

Outline
Discussion of "A Note on Distributed Computing" by Jim Waldo et al.
Jim Waldo:
–Distinguished Engineer at Sun Microsystems
–Chief architect of Jini
–Adjunct professor at Harvard
A Note on Distributed Computing
Distributed computing is fundamentally different from local computing. The two paradigms are so different that trying to make them look the same is inefficient:
–You end up with distributed applications that are not robust to failures
–Or with local applications that are more complex than they need to be
Most programming environments for distributed systems attempt to mask the difference between local and remote invocation
–But this is not what is hard about distributed computing…
Key Argument
Achieving interface transparency in distributed systems is unreasonable:
–Distributed systems have different failure modes than local systems
–Handling those failures properly requires a certain kind of interface
–Therefore, distributed systems must be accessed via different interfaces
–Those interfaces would be overkill for local systems
Differences Between Local and Distributed Applications
–Latency
–Memory access
–Partial failure and concurrency
Latency
A remote method call takes longer to execute than a local method call. If you build your application without taking this into account, you are doomed to have performance problems.
Suppose you disregard local/remote differences:
–You build and test your application using local objects
–You decide later which objects are local and which are remote
–You then find that if frequently accessed objects are remote, performance suffers
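The gap between the two kinds of call can be made concrete with a small sketch. This is an illustrative example, not from the lecture: the 1 ms `Thread.sleep` stands in for a network round trip, and real RPC latency varies widely, but the orders-of-magnitude difference is the point.

```java
public class LatencyDemo {
    static int localGet(int[] data, int i) { return data[i]; }

    // Simulated remote accessor: the sleep stands in for a LAN round trip.
    static int remoteGet(int[] data, int i) throws InterruptedException {
        Thread.sleep(1);
        return data[i];
    }

    // Times n calls of each flavour and reports whether the simulated
    // remote loop was slower (it always should be, by a large margin).
    static boolean remoteSlower(int n) throws InterruptedException {
        int[] data = new int[64];
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) localGet(data, i % data.length);
        long localNs = System.nanoTime() - t0;
        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) remoteGet(data, i % data.length);
        long remoteNs = System.nanoTime() - t0;
        System.out.println("local: " + localNs + " ns, remote: " + remoteNs + " ns");
        return remoteNs > localNs;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("remote slower? " + remoteSlower(100));
    }
}
```

If a frequently accessed object turns out to be behind `remoteGet` rather than `localGet`, every access pays the round-trip cost, which is exactly the performance trap the slide warns about.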
Latency (cont.)
One way to overcome the latency problem:
–Provide tools that let the developer debug performance
–Understand which components are slowing down the system
–Make recommendations about which components should be local
But can we be sure that such tools will be available? (Do you know of a good one?)
This is an active research area – which means this is hard!
Memory Access
A local pointer does not make sense in a remote address space. What are the solutions?
–Create a language where all memory access is managed by a runtime system (e.g., Java) – everything is a reference
But not everyone uses Java
–Force the programmer to access memory in a way that does not use pointers (in C++ you can do both)
But not all programmers are well behaved
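Even in a reference-managed language like Java, the pointer problem resurfaces at the local/remote boundary. The sketch below is an assumed example, not from the paper: a local call passes a reference, so the callee mutates the caller's object; a remote call must marshal a copy across address spaces (here a plain `ArrayList` copy stands in for serialization), so the mutation is lost.

```java
import java.util.ArrayList;
import java.util.List;

public class ReferenceVsCopy {
    // Local call: the callee receives a reference to the caller's list.
    static void localAppend(List<String> log) {
        log.add("entry");
    }

    // "Remote" call: marshalling copies the argument, so the server
    // mutates its own copy and the caller's list is untouched.
    static void remoteAppend(List<String> log) {
        List<String> marshalled = new ArrayList<>(log); // stand-in for serialization
        marshalled.add("entry");
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        localAppend(log);
        System.out.println("after local call:  " + log.size());  // 1
        remoteAppend(log);
        System.out.println("after remote call: " + log.size());  // still 1
    }
}
```

A programmer who assumed local reference semantics would silently lose the update, which is why masking this difference is dangerous rather than merely inconvenient.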
Memory Access and Latency: The Verdict
Conceptually, it is possible to mask the difference between local and distributed computing with respect to memory access and latency.
Latency:
–Develop your application without regard for object locations
–Decide on object locations later
–Rely on good debugging tools to determine the right locations
Memory access:
–Enforce memory access through the underlying management system
But masking this difference is difficult, so it is not clear whether we can realistically expect it to be masked.
Partial Failure
One component has failed while others keep operating. You don't know how much of the computation has actually completed – this is unique to distributed systems:
–Has the server failed, or is it just slow?
–Did it update my bank account before it failed?
With local computing, a function can also fail, or a system may block or deadlock, but:
–You can always find out what is happening by asking the operating system or the application
–In distributed computing, you cannot always find out what happened, because you may be unable to communicate with the entity in question
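The "failed or just slow?" ambiguity can be sketched with a timeout. This is an assumed example (an `ExecutorService` stands in for an RPC runtime, and the delays are artificial): when the reply does not arrive in time, the caller cannot distinguish a crashed server from a slow one, nor tell whether the work was done.

```java
import java.util.concurrent.*;

public class PartialFailure {
    // Issues a "remote" request that the server takes serverDelayMs to
    // answer, and waits at most timeoutMs for the reply.
    static String callServer(ExecutorService rpc, long serverDelayMs, long timeoutMs) {
        Future<String> reply = rpc.submit(() -> {
            Thread.sleep(serverDelayMs);   // the server "does the work" here
            return "account updated";
        });
        try {
            return reply.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Crashed, or just slow? Did the update happen? We cannot know.
            return "TIMEOUT: outcome unknown";
        } catch (InterruptedException | ExecutionException e) {
            return "ERROR: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        ExecutorService rpc = Executors.newSingleThreadExecutor();
        System.out.println(callServer(rpc, 10, 500));   // fast server: reply arrives
        System.out.println(callServer(rpc, 500, 50));   // slow server: looks failed
        rpc.shutdownNow();
    }
}
```

Note that in the timeout branch the bank-account update may or may not have happened on the server side; no local API call can resolve that uncertainty, which is the slide's point.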
Concurrency
Aren't local multithreaded applications subject to the same issues as distributed applications? Not quite:
–In local programming, a programmer can always force a certain order of operations
–In distributed computing, this cannot be done
–In local programming, the underlying system provides synchronization primitives and mechanisms
–In distributed systems, such support is not easily available, and the system providing the synchronization infrastructure may itself fail
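The first bullet can be made concrete with a small, assumed example: locally, `Thread.join` (a primitive the runtime provides and that cannot partially fail) forces one operation to complete before the next begins. No comparable failure-free ordering guarantee exists between machines.

```java
public class LocalOrdering {
    // Runs two "operations" on separate threads in a forced order,
    // using Thread.join -- a guarantee the local runtime gives for free.
    static String run() throws InterruptedException {
        StringBuilder log = new StringBuilder();
        Thread debit = new Thread(() -> log.append("debit;"));
        debit.start();
        debit.join();   // debit is guaranteed to complete before credit starts
        Thread credit = new Thread(() -> log.append("credit;"));
        credit.start();
        credit.join();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());  // always prints "debit;credit;"
    }
}
```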
So What Do We Do?
Design the right interfaces. Interfaces must allow the programmer to handle errors that are unique to distributed systems.
For example, a read() system call:
–Local interface: int read(int fd, char *buf, int size)
–Remote interface: int read(int fd, char *buf, int size, long timeout)
Error codes are expanded to indicate timeout or network failure.
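The same widening can be sketched in Java terms (all names here are assumptions, not from the slides): the remote variant of the interface takes a timeout and can report distribution-specific outcomes that the local variant never produces.

```java
public class ReadInterfaces {
    // Local flavour: failure modes limited to ordinary I/O errors.
    interface LocalFile {
        int read(byte[] buf);
    }

    // Expanded error codes for distribution-specific failures.
    static final int ERR_TIMEOUT = -2;
    static final int ERR_NETWORK = -3;

    // Remote flavour: the caller must supply a timeout and be prepared
    // for outcomes the local interface never yields.
    interface RemoteFile {
        int read(byte[] buf, long timeoutMs);
    }

    // Toy implementation: "reads" from an in-memory array, reporting a
    // timeout when asked to deliver faster than its simulated latency.
    static RemoteFile inMemory(byte[] data, long latencyMs) {
        return (buf, timeoutMs) -> {
            if (latencyMs > timeoutMs) return ERR_TIMEOUT;
            int n = Math.min(buf.length, data.length);
            System.arraycopy(data, 0, buf, 0, n);
            return n;
        };
    }

    public static void main(String[] args) {
        RemoteFile f = inMemory("hello".getBytes(), 20);
        byte[] buf = new byte[16];
        System.out.println(f.read(buf, 100));  // 5: read succeeded
        System.out.println(f.read(buf, 5));    // -2: timed out
    }
}
```

The extra parameter and extra error codes are precisely what makes the remote interface honest about its failure modes, and overkill for purely local use.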
But Wait… Can't You Unify Interfaces?
Can't you use the beefed-up remote interface even when programming local applications? Then you wouldn't need two different sets of interfaces.
You could, but:
–Local programming would become a nightmare
–This defeats the purpose of unifying the local and distributed paradigms: instead of making distributed programming simpler, you'd be making local programming more complex
So What Does Jim Suggest?
–Design objects with local interfaces
–Add an extension to the interface if the object is to be distributed
–The programmer will be aware of the object's location
How is this actually done? Recall RMI:
–A remote object must implement the Remote interface
–A method invoked on a remote object must catch RemoteException
–But the same object can be used locally, without specifying that it implements Remote
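A minimal sketch of that RMI pattern (the `Account` names are assumptions): the remote interface extends `java.rmi.Remote` and declares `RemoteException` on every method, so the possibility of distribution-specific failure is visible in the type, yet the implementing object can still be called locally as a plain method call, with no registry or network involved.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class RmiSketch {
    // Remote-capable interface: extending Remote and declaring
    // RemoteException marks every call as potentially failing remotely.
    interface Account extends Remote {
        long balance() throws RemoteException;
    }

    static class AccountImpl implements Account {
        private final long cents;
        AccountImpl(long cents) { this.cents = cents; }
        // Locally there is no remote failure, so no exception is thrown.
        @Override public long balance() { return cents; }
    }

    public static void main(String[] args) throws RemoteException {
        // Local use: an ordinary method call, but callers holding the
        // Account type are still forced to handle RemoteException.
        Account acct = new AccountImpl(1500);
        System.out.println(acct.balance());  // 1500
        // For actual remote use one would export the object (e.g. via
        // UnicastRemoteObject.exportObject) and publish it in a registry.
    }
}
```

The design keeps the programmer aware of location: the distributed extension is opt-in at the interface, rather than being smuggled invisibly into local code.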
Summary
–Distributed computing is fundamentally different from local computing because of its different failure modes
–By making distributed interfaces look like local interfaces, we diminish our ability to handle those failures properly – this results in brittle applications
–To handle those failures properly, interfaces must be designed in a certain way
–Therefore, remote interfaces must differ from local interfaces (unless you want to make local interfaces unnecessarily complicated)