
Erlang concurrency

Where were we? Finished talking about sequential Erlang. Left with two questions:  retry – not an issue; I mis-read the statement in the book. Erlang does not have retry.  Which exceptions are caught in try…catch? Only those raised in the FuncOrExpressionSequence, i.e. exactly as described, and non-intuitive.

Concurrency mechanisms – processes Concurrent activities are called processes. Erlang processes are implemented by the language system, not the OS.  Very lightweight, even compared to Java threads. Processes are created with spawn/1: Pid = spawn(Function)  Function must be nullary (take 0 args). Or with spawn/3: Pid = spawn(Module, Function, ArgList)  ArgList is a list; Function must take length(ArgList) args.
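A minimal sketch of both spawn forms (the module name demo and the printed text are illustrative, not from the slides):

```erlang
-module(demo).
-export([start/0, greet/1]).

%% spawn/3 requires greet/1 to be exported from the module.
greet(Name) ->
    io:format("hello, ~s~n", [Name]).

start() ->
    %% spawn/1: the fun takes no arguments.
    Pid1 = spawn(fun() -> io:format("spawned ~p~n", [self()]) end),
    %% spawn/3: module, function name, argument list (length 1 here,
    %% matching greet/1).
    Pid2 = spawn(demo, greet, ["world"]),
    {Pid1, Pid2}.
```

Both calls return immediately with the new process's Pid; the spawned processes run concurrently with the caller.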

Concurrency mechanisms – mailboxes Every process has exactly one mailbox. The mailbox contains messages; messages are simply Erlang values. Messages in the mailbox are kept in arrival order. Sending messages: Pid ! Msg Note: (Pid ! Msg) =:= Msg, i.e. ! returns the message sent. Note also: sending is asynchronous – the sender does not wait for the receiver.

Concurrency mechanisms – selective receive
receive
    Pattern1 [when Guard1] -> Exprs1;
    Pattern2 [when Guard2] -> Exprs2;
    …
[after Time -> ExprsT]
end

Execution of receive 1. For each message m in the mailbox, in arrival order, attempt to match it against each of the patterns/guards of the receive operation. If a match succeeds, the matched message is removed from the mailbox and the result of the receive is the result of evaluating the corresponding expression. 2. Otherwise, wait; as new messages arrive, test them against the patterns/guards until one succeeds (and the result comes from evaluating the corresponding expression) or the timeout occurs (and the result comes from evaluating the timeout expression). (A different, equivalent, explanation appears in the book.)
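The rule above means a receive can take a message out of arrival order. A small sketch (the module name and the atoms high/low are illustrative):

```erlang
-module(selective).
-export([demo/0]).

demo() ->
    %% Two messages arrive in this order:
    self() ! {low, "arrived first"},
    self() ! {high, "arrived second"},
    %% The receive has only a pattern for {high, ...}, so step 1
    %% skips the {low, ...} message and matches the second one.
    %% {low, ...} stays in the mailbox.
    receive
        {high, Msg} -> Msg
    end.
```

Calling selective:demo() returns "arrived second", even though the {low, …} message was delivered first.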

Erlang receive is powerful Many concurrent languages allow only the first message in the mailbox to be received.  The Erlang approach allows one mailbox to emulate many (perhaps at some cost in execution time).  The Erlang approach allows the application to decide on message priority without any special language mechanism. The asynchronous paradigm is a good match for distribution and is relatively easy to implement. Synchronous message passing is more powerful in some ways, but is not a good match for distributed systems.

Lurking under the covers Synchronization  A process's mailbox is a shared data structure, updated by more than one process. Visibility  Are values that are sent to mailboxes fully visible to receiving processes on modern memory architectures? These are Erlang implementation questions, rather than questions for the Erlang programmer.

Client-server Client sends a request to the server; server sends a response to the client. A protocol is a set of rules for communicating. A server usually services many clients.  How does it know where to send the response message?  The protocol includes the client's Pid as part of the request. Obtain it using self(). Oops – and a client may use many servers, or itself be a server.  The protocol therefore also includes the server's Pid in the response.

Client skeleton
Server ! {self(), Request},
receive
    {Server, Response} -> Response
end

Server skeleton
loop() ->
    receive
        {From, Request} -> From ! answer(self(), Request)
    end,
    loop().

answer(Server, Request) -> …

Start the server with Server = spawn(fun loop/0).
Note: a server, inherently, does only one thing at a time; it does synchronization.
Note: stateless server.
Note: tail recursion; note the placement of the recursive call.

Software Engineering We don't want every use of the server to look like our client skeleton – can we write it once? And who starts the server? Answer: include code for these things in the server's module.
start() -> spawn(fun loop/0).   % result is the server's Pid
rpc(Server, Request) ->
    Server ! {self(), Request},
    receive
        {Server, Response} -> Response
    end.
Note: it is maybe still troublesome that the client deals in a server Pid rather than a server object.
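Putting the skeletons together, a complete module in this style might look like the sketch below (the module name echo_server and the {echoed, …} reply shape are assumptions, not from the slides):

```erlang
-module(echo_server).
-export([start/0, rpc/2]).

start() ->
    spawn(fun loop/0).              %% result is the server's Pid

%% The one-and-only client-side calling sequence, written once.
rpc(Server, Request) ->
    Server ! {self(), Request},
    receive
        {Server, Response} -> Response
    end.

loop() ->
    receive
        {From, Request} ->
            %% Reply includes the server's own Pid, per the protocol.
            From ! {self(), {echoed, Request}}
    end,
    loop().
```

In the shell: S = echo_server:start(), then echo_server:rpc(S, hello) returns {echoed, hello}.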

Timeouts Timeout alone (no patterns) -> sleep. Timeout 0:  polling receive (don't do this);  mailbox flush;  priority receive – take a particular kind of message out of order. (Normal receive processing matches each message against every pattern in turn; nested receives with timeout 0 let you try each pattern, in priority order, against all waiting messages.) Timeout infinity:  Equivalent to no after clause, but allows a computed infinite wait instead of a static infinite wait.
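Two of these idioms can be sketched directly (the module name timeouts is illustrative):

```erlang
-module(timeouts).
-export([sleep/1, flush/0]).

%% Timeout alone (no patterns): suspend for T milliseconds.
sleep(T) ->
    receive
    after T -> ok
    end.

%% Timeout 0: drain every message already in the mailbox,
%% returning as soon as it is empty.
flush() ->
    receive
        _Any -> flush()
    after 0 -> ok
    end.
```

sleep/1 can never match a message (there are no patterns), so it always waits out the timeout; flush/0 matches anything, so the after 0 clause fires only once the mailbox is empty.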

Registered Processes A perennial problem in client-server computing is "How do clients find the server?" Two possible answers:  Clients are told about the servers as parameters when the clients are started. (Lots of parameters; startup-order dependency.)  Clients use a name service to locate the servers. The Erlang answer: registered processes.  register(AnAtom, Pid)  You can then send to the atom as if it were a Pid. Note: mutable global state!
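A short sketch of the idiom (the registered name my_server is an assumption; the loop is the server skeleton from earlier):

```erlang
-module(named).
-export([start/0, ping/0]).

start() ->
    %% Register the server under a well-known atom so clients
    %% need no Pid parameter to find it.
    register(my_server, spawn(fun loop/0)).

ping() ->
    my_server ! {self(), ping},    %% send to the atom, not a Pid
    receive
        {_Server, Reply} -> Reply
    end.

loop() ->
    receive
        {From, ping} -> From ! {self(), pong}
    end,
    loop().
```

After named:start(), any process can call named:ping() and get pong back, with no startup-order parameter passing.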

Assignment Problem 2, Section 8.11; due April 3, midnight.  Details about exactly what I want will be posted later this week; in the meantime you can work on how to build a ring so that P0 sends to P1 to … to Pn-1 to P0.  Use at most 2 static code bodies (P0 may be special) for the processes in the ring.  Think first about what each process needs to do – what parameters does it require? Will each process spawn its successor, or will they all be spawned by a main function and then glued together?  If spawning all processes from a single point, construct the ring elegantly – ideas: use map, a list comprehension, or for() from page 141.  You don't have to implement in another language or write a blog.