Course Outline
- Introduction to algorithms and applications
- Parallel machines and architectures
- Programming methods, languages, and environments
  - Message passing (SR, MPI, Java)
  - Higher-level languages (HPF)
- Applications
  - N-body problems, search algorithms
- Many-core programming (CUDA)

Approaches to Parallel Programming
- Sequential language + library: MPI, PVM
- Extend a sequential language: C/Linda, Concurrent C++, HPF
- New languages designed for parallel or distributed programming: SR, occam, Ada, Orca

Paradigms for Parallel Programming
- Processes + shared variables
- Processes + message passing: SR and MPI
- Concurrent object-oriented languages: Java
- Concurrent functional languages
- Concurrent logic languages
- Data parallelism (SPMD model): HPF, CUDA

Interprocess Communication and Synchronization based on Message Passing
Henri Bal

Overview
- Message passing
  - Naming the sender and receiver
  - Explicit or implicit receipt of messages
  - Synchronous versus asynchronous messages
- Nondeterminism
  - Select statement
- Example language: SR (Synchronizing Resources)
- Example library: MPI (Message Passing Interface)

Point-to-point Message Passing
Basic primitives: send & receive
- As library routines:
    send(destination, &MsgBuffer)
    receive(source, &MsgBuffer)
- As language constructs:
    send MsgName(arguments) to destination
    receive MsgName(arguments) from source
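As a concrete illustration of the library-routine form, here is a minimal sketch using MPI, the example library named in the overview; the message value and tag 0 are arbitrary choices.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal point-to-point example: rank 0 sends one integer to rank 1. */
    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            /* send(destination, &MsgBuffer): destination = rank 1, tag = 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive(source, &MsgBuffer): source = rank 0, tag = 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }

Build and run with two processes, e.g. mpicc followed by mpirun -np 2.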

Direct naming
- Sender and receiver directly name each other:
    S: send M to R
    R: receive M from S
- Asymmetric direct naming (more flexible):
    S: send M to R
    R: receive M
- Direct naming is easy to implement:
  - The destination of a message is known in advance
  - The implementation just maps logical names to machine addresses
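In MPI, the asymmetric form corresponds to receiving with MPI_ANY_SOURCE; a short sketch of the receiver side (tag 0 is an arbitrary choice):

    #include <mpi.h>
    #include <stdio.h>

    /* Asymmetric naming: accept a message from any sender, then learn who
       sent it from the status object. */
    void receive_from_anyone(void) {
        int value;
        MPI_Status status;
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
        printf("got %d from rank %d\n", value, status.MPI_SOURCE);
    }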

Indirect naming
- Indirect naming uses an extra indirection level:
    S: send M to P          -- P is a port name
    R: receive M from P
- Sender and receiver need not know each other
- Port names can be moved around (e.g., in a message):
    send ReplyPort(P) to U  -- P is the name of a reply port
- Most languages allow only a single process at a time to receive from any given port
- Some languages allow multiple receivers that service messages on demand -> called a mailbox
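MPI has no port concept, but a message tag can play a loosely similar role. The sketch below passes a "reply port" (the sender's rank plus an assumed tag REPLY_PORT) inside the request, so the server need not know its clients in advance; REQUEST_TAG, REPLY_PORT, and the reply value are made up for the illustration.

    #include <mpi.h>

    enum { REQUEST_TAG = 1, REPLY_PORT = 7 };   /* arbitrary tag values */

    int client(int server_rank, int my_rank) {
        int request[2] = { my_rank, REPLY_PORT };   /* "where to reply" */
        int answer;
        MPI_Send(request, 2, MPI_INT, server_rank, REQUEST_TAG, MPI_COMM_WORLD);
        /* wait for the reply on our "port" (tag) */
        MPI_Recv(&answer, 1, MPI_INT, server_rank, REPLY_PORT,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        return answer;
    }

    void server(void) {
        int request[2], answer = 42;
        MPI_Status status;
        MPI_Recv(request, 2, MPI_INT, MPI_ANY_SOURCE, REQUEST_TAG,
                 MPI_COMM_WORLD, &status);
        /* reply to the rank and "port" named inside the message */
        MPI_Send(&answer, 1, MPI_INT, request[0], request[1], MPI_COMM_WORLD);
    }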

Explicit Message Receipt
- Explicit receive by an existing process
- The receiving process only handles the message when it is willing to do so

    process main() {
        // regular computation here
        receive M(....);      // explicit message receipt
        // code to handle the message
        // more regular computations ....
    }

Implicit message receipt
- Receipt by a new thread of control, created for handling the incoming message

    int X;

    process main() {
        // just regular computations, this code can access X
    }

    message-handler M()       // created whenever a message M arrives
    {
        // code to handle the message, can also access X
    }
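On top of a library like MPI, implicit receipt can be approximated by a dedicated thread that blocks on receive and dispatches each incoming message to a handler. This is only a sketch of the idea, not SR's mechanism: UPDATE_TAG and handle_update are made-up names, and in a real program X would need a lock and the handler thread would be shut down before MPI_Finalize.

    #include <mpi.h>
    #include <pthread.h>

    enum { UPDATE_TAG = 3 };   /* assumed tag for handled messages */
    int X;                     /* shared with the main computation */

    static void handle_update(int m) { X = m; /* code to handle the message */ }

    /* Plays the role of the implicitly created message handler. */
    static void *handler_thread(void *arg) {
        (void)arg;
        for (;;) {
            int m;
            MPI_Recv(&m, 1, MPI_INT, MPI_ANY_SOURCE, UPDATE_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            handle_update(m);
        }
        return NULL;
    }

    int main(int argc, char **argv) {
        int provided;
        pthread_t tid;
        /* two threads call MPI, so ask for full thread support */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        pthread_create(&tid, NULL, handler_thread, NULL);
        /* ... regular computations here, may access X ... */
        MPI_Finalize();
        return 0;
    }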

Threads
- Threads run in (pseudo-)parallel on the same node
- Each thread has its own program counter and local variables
- Threads share global variables
[Figure: the main thread and two message handlers M execute over time, all sharing the global variable X]
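A minimal pthreads sketch of this sharing (X matches the slide; worker is a made-up name): each thread has its own local loop counter, while X is a global shared by both.

    #include <pthread.h>
    #include <stdio.h>

    int X = 0;                                   /* shared global */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        for (int i = 0; i < 1000; i++) {         /* i is local to each thread */
            pthread_mutex_lock(&lock);
            X++;                                 /* X is shared */
            pthread_mutex_unlock(&lock);
        }
        return arg;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("X = %d\n", X);                   /* prints 2000 */
        return 0;
    }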

Differences (1)
Implicit receipt is used if it is unknown when a message will arrive; example: the global bound in TSP.

With implicit receipt:

    int Minimum;

    process main() {
        // regular computations
    }

    message-handler Update(m: int) {
        if (m < Minimum) Minimum = m;
    }

With explicit receipt:

    process main() {
        int Minimum;
        while (true) {
            if (there is a message Update) {
                receive Update(m);
                if (m < Minimum) Minimum = m;
            }
            // regular computations
        }
    }
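With MPI, the explicit (polling) variant could look roughly like the sketch below, probing for a pending Update message between computation steps; UPDATE_TAG is an assumed tag value.

    #include <mpi.h>
    #include <limits.h>

    enum { UPDATE_TAG = 3 };   /* assumed tag for bound updates */

    void tsp_worker(void) {
        int minimum = INT_MAX;
        for (;;) {
            int flag, m;
            MPI_Status status;
            /* is there an Update message waiting? */
            MPI_Iprobe(MPI_ANY_SOURCE, UPDATE_TAG, MPI_COMM_WORLD, &flag, &status);
            if (flag) {
                MPI_Recv(&m, 1, MPI_INT, status.MPI_SOURCE, UPDATE_TAG,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                if (m < minimum) minimum = m;
            }
            /* regular computations: expand the search tree, prune with minimum */
        }
    }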

Differences (2)
- Explicit receive gives more control over when to accept which messages; e.g., SR allows:

    receive ReadFile(file, offset, NrBytes) by NrBytes
    // sorts messages by (increasing) 3rd parameter, i.e. small reads go first

- MPI has explicit receive (+ polling for implicit receive)
- Java has implicit receive: Remote Method Invocation (RMI)
- SR has both

Synchronous vs. Asynchronous Message Passing
- Synchronous message passing:
  - The sender is blocked until the receiver has accepted the message
  - Too restrictive for many parallel applications
- Asynchronous message passing:
  - The sender continues immediately
  - More efficient
  - Ordering problems
  - Buffering problems
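MPI offers both flavors directly; a brief sketch (buffer size and tag are arbitrary):

    #include <mpi.h>

    /* Synchronous send: returns only once the receiver has started to
       receive the message (a matching receive has been posted). */
    void sync_send(int *buf, int n, int dest) {
        MPI_Ssend(buf, n, MPI_INT, dest, 0, MPI_COMM_WORLD);
    }

    /* Asynchronous (non-blocking) send: returns immediately; completion is
       checked later with MPI_Wait or MPI_Test, and buf must not be reused
       until then. */
    void async_send(int *buf, int n, int dest, MPI_Request *req) {
        MPI_Isend(buf, n, MPI_INT, dest, 0, MPI_COMM_WORLD, req);
    }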

Ordering with asynchronous message passing

    SENDER:              RECEIVER:
    send message(1)      receive message(N); print N
    send message(2)      receive message(M); print M

Messages may be received in any order, depending on the protocol.
[Figure: message(1) and message(2) travel to the receiver and may arrive in either order]

Example: AT&T crash
[Figure: processes P1 and P2. P2 asks P1 "Are you still alive?"; P1 crashes, so P2 decides "P1 is dead". P1 comes back ("I'm back") and sends a regular message; P2 concludes "Something's wrong, I'd better crash!", after which "P2 is dead".]

Message buffering
- Keep messages in a buffer until the receive() is done
- What if the buffer overflows?
  - Continue, but delete some messages (e.g., the oldest one), or
  - Use flow control: block the sender temporarily
- Flow control changes the semantics, since it introduces synchronization:

    S: send zillion messages to R; receive messages
    R: send zillion messages to S; receive messages
    -> deadlock!
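The same pitfall exists with MPI's standard-mode send, which may or may not buffer internally. A sketch (the message size N is arbitrary): if both ranks send first and receive afterwards, both sends can block forever once buffering runs out; pairing the operations avoids this.

    #include <mpi.h>

    enum { N = 1 << 20 };   /* arbitrary, "large" message size */

    /* Possible deadlock: both ranks call this at the same time. */
    void exchange_unsafe(int other, int *out, int *in) {
        MPI_Send(out, N, MPI_INT, other, 0, MPI_COMM_WORLD);   /* may block */
        MPI_Recv(in, N, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Safe alternative: let MPI pair the send and the receive. */
    void exchange_safe(int other, int *out, int *in) {
        MPI_Sendrecv(out, N, MPI_INT, other, 0,
                     in,  N, MPI_INT, other, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }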

Nondeterminism
- Interactions may depend on run-time conditions
  - e.g.: wait for a message from either A or B, whichever comes first
- Need to express and control nondeterminism
  - Specify when to accept which message
- Example (bounded buffer), do simultaneously:
  - when buffer not full: accept request to store message
  - when buffer not empty: accept request to fetch message

Select statement
- Several alternatives of the form:

    WHEN condition => RECEIVE message DO statement

- Each alternative may:
  - succeed, if condition = true & a message is available
  - fail, if condition = false
  - suspend, if condition = true & no message is available yet
- The entire select statement may:
  - succeed, if any alternative succeeds -> pick one nondeterministically
  - fail, if all alternatives fail
  - suspend, if some alternatives suspend and none succeeds yet

Example: bounded buffer

    select
        when not FULL(BUFFER) =>
            receive STORE_ITEM(X: INTEGER) do
                'store X in buffer'
            end;
    or
        when not EMPTY(BUFFER) =>
            receive FETCH_ITEM(X: out INTEGER) do
                X := 'first item from buffer'
            end;
    end select;
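MPI itself has no select statement, but a server can get a similar effect by probing only for the message kinds it is currently willing to accept. A rough sketch (STORE_TAG, FETCH_TAG, and the simple LIFO buffer are made-up details):

    #include <mpi.h>

    enum { STORE_TAG = 1, FETCH_TAG = 2, CAPACITY = 16 };
    static int buffer[CAPACITY], count = 0;

    /* One "select" round: accept a store only when not full and a fetch
       only when not empty, whichever request is available. */
    void buffer_server_step(void) {
        int flag;
        MPI_Status status;

        if (count < CAPACITY) {   /* "when not FULL(BUFFER)" */
            MPI_Iprobe(MPI_ANY_SOURCE, STORE_TAG, MPI_COMM_WORLD, &flag, &status);
            if (flag) {
                MPI_Recv(&buffer[count++], 1, MPI_INT, status.MPI_SOURCE,
                         STORE_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                return;
            }
        }
        if (count > 0) {          /* "when not EMPTY(BUFFER)" */
            MPI_Iprobe(MPI_ANY_SOURCE, FETCH_TAG, MPI_COMM_WORLD, &flag, &status);
            if (flag) {
                int dummy;        /* the fetch request carries no useful payload */
                MPI_Recv(&dummy, 1, MPI_INT, status.MPI_SOURCE, FETCH_TAG,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&buffer[--count], 1, MPI_INT, status.MPI_SOURCE,
                         FETCH_TAG, MPI_COMM_WORLD);
            }
        }
    }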

Synchronizing Resources (SR)
- Developed at the University of Arizona
- Goals of SR:
  - Expressiveness
    - Many message passing primitives
  - Ease of use
    - Minimize the number of underlying concepts
    - Clean integration of language constructs
  - Efficiency
    - Each primitive must be efficient

Overview of SR
- Multiple forms of message passing:
  - Asynchronous message passing
  - Rendezvous (synchronous send, explicit receipt)
  - Remote Procedure Call (synchronous send, implicit receipt)
  - Multicast (many receivers)
- Powerful receive statement:
  - Conditional & ordered receive, based on the contents of the message
  - Select statement
- Resource = module run on 1 node (uni/multiprocessor)
  - Contains multiple threads that share variables

Orthogonality in SR
The send and receive primitives can be combined in all 4 possible ways:

    invocation            explicit receipt               implicit receipt
    send (asynchronous)   asynchronous message passing   fork
    call (synchronous)    rendezvous                     RPC

Example

    body S                # sender
        send R.m1         # asynchronous message passing
        send R.m2         # fork
        call R.m1         # rendezvous
        call R.m2         # RPC
    end S

    body R                # receiver
        proc m2()         # implicit receipt
            # code to handle m2
        end

        initial           # main process of R
            do true ->    # infinite loop
                in m1()   # explicit receive
                    # code to handle m1
                ni
            od
        end
    end R