1 Implementing Master/Slave Algorithms
• Many algorithms have one or more master processes that send tasks to, and receive results from, slave processes
• Because there are only one (or a few) controlling processes, the master can become a bottleneck

2 Skeleton Master Process
•  do while (.not. done)
      ! Get results from anyone
      call MPI_Recv( a, …, status, ierr )
      ! If this is the last data item,
      ! set done to .true.
      ! Send more work to them
      call MPI_Send( b, …, status(MPI_SOURCE), &
                     …, ierr )
   enddo
• Not included:
  » Sending initial work to all processes
  » Deciding when to set done
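A minimal filled-in version of this loop, to show where the elided arguments go (not from the slides): it assumes each task and each result is a single integer, tag 1 carries work, tag 0 tells a slave to stop, and one initial task per slave has already been sent; the names master_loop and total_tasks are made up for illustration.

   ! Hypothetical filled-in master loop (single-integer tasks/results,
   ! tag 1 = work, tag 0 = stop; initial tasks 1 .. nprocs-1 already sent).
   subroutine master_loop(comm, total_tasks)
      use mpi
      implicit none
      integer, intent(in) :: comm, total_tasks
      integer :: status(MPI_STATUS_SIZE), ierr, nprocs
      integer :: a, b, next_task, results_got
      logical :: done

      call MPI_Comm_size(comm, nprocs, ierr)
      next_task   = nprocs          ! tasks 1 .. nprocs-1 were handed out up front
      results_got = 0
      done        = .false.
      do while (.not. done)
         ! Get a result from any slave
         call MPI_Recv(a, 1, MPI_INTEGER, MPI_ANY_SOURCE, MPI_ANY_TAG, &
                       comm, status, ierr)
         results_got = results_got + 1
         ! If this is the last result, set done to .true.
         if (results_got == total_tasks) done = .true.
         ! Send more work to whoever just reported in, or tell it to stop
         if (next_task <= total_tasks) then
            b = next_task
            call MPI_Send(b, 1, MPI_INTEGER, status(MPI_SOURCE), 1, comm, ierr)
            next_task = next_task + 1
         else
            b = 0
            call MPI_Send(b, 1, MPI_INTEGER, status(MPI_SOURCE), 0, comm, ierr)
         end if
      end do
   end subroutine master_loop

Termination here is by counting results (assuming total_tasks is at least nprocs-1), which is one of the simpler ways to decide when to set done.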

3 Skeleton Slave Process
•  do while (.not. done)
      ! Receive work from master
      call MPI_Recv( a, …, status, ierr )
      … compute for task
      ! Return result to master
      call MPI_Send( b, …, ierr )
   enddo
• Not included:
  » Detection of termination (probably a message from the master)
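A matching slave loop under the same illustrative conventions as the master sketch above (single-integer tasks and results, tag 0 from the master means stop); the name slave_loop is hypothetical.

   subroutine slave_loop(comm)
      use mpi
      implicit none
      integer, intent(in) :: comm
      integer :: status(MPI_STATUS_SIZE), ierr
      integer :: a, b
      logical :: done

      done = .false.
      do while (.not. done)
         ! Receive work from the master (rank 0)
         call MPI_Recv(a, 1, MPI_INTEGER, 0, MPI_ANY_TAG, comm, status, ierr)
         if (status(MPI_TAG) == 0) then
            done = .true.                      ! termination message from master
         else
            b = a * a                          ! stand-in for the real computation
            ! Return result to master
            call MPI_Send(b, 1, MPI_INTEGER, 0, 1, comm, ierr)
         end if
      end do
   end subroutine slave_loop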

4 Problems With Master/Slave
• Slave processes have nothing to do while waiting for the next task
• Many slaves may try to send data to the master at the same time
  » Could be a problem if the data size is very large (e.g., megabytes)
• The master may fall behind in servicing requests for work if many processes ask within a very short interval
• Presented with many requests, the master may not respond to them evenly

5 Spreading out communication
• Use double buffering to overlap the request for more work with the work itself:

   do while (.not. done)
      ! Receive work from master
      call MPI_Wait( request, status, ierr )
      ! Request MORE work
      call MPI_Send( …, send_work, …, ierr )
      call MPI_Irecv( a2, …, request, ierr )
      … compute for task
      ! Return result to master (could also be nonblocking)
      call MPI_Send( b, …, ierr )
   enddo

• MPI_Cancel
  » The last Irecv may never be matched; remove it with MPI_Cancel
  » MPI_Test_cancelled required on IBM MPI (!), then MPI_Request_free
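A filled-in sketch of the double-buffered loop, including the MPI_Cancel cleanup described above. The conventions are illustrative, not from the slides: single-integer tasks, tag 2 = request for more work, tag 1 = result, a negative task value marks the last item, and the master is assumed to absorb or ignore the one extra work request sent after the last item.

   subroutine slave_loop_db(comm)
      use mpi
      implicit none
      integer, intent(in) :: comm
      integer :: status(MPI_STATUS_SIZE), request, ierr
      integer :: a, work, b, dummy
      logical :: done, flag

      dummy = 0
      done  = .false.
      ! Prime the pipeline: ask for the first task and post its receive
      call MPI_Irecv(a, 1, MPI_INTEGER, 0, MPI_ANY_TAG, comm, request, ierr)
      call MPI_Send(dummy, 1, MPI_INTEGER, 0, 2, comm, ierr)
      do while (.not. done)
         ! Receive the work requested on the previous iteration
         call MPI_Wait(request, status, ierr)
         work = a
         ! Request MORE work and post its receive before computing
         call MPI_Send(dummy, 1, MPI_INTEGER, 0, 2, comm, ierr)
         call MPI_Irecv(a, 1, MPI_INTEGER, 0, MPI_ANY_TAG, comm, request, ierr)
         if (work < 0) then
            done = .true.                      ! that was the last item
         else
            b = work * work                    ! stand-in for the computation
            ! Return result to master (could also be nonblocking)
            call MPI_Send(b, 1, MPI_INTEGER, 0, 1, comm, ierr)
         end if
      end do
      ! The Irecv posted on the final iteration will never be matched:
      ! cancel it and complete it (the slide notes that some older IBM MPIs
      ! additionally wanted MPI_Request_free after MPI_Test_cancelled)
      call MPI_Cancel(request, ierr)
      call MPI_Wait(request, status, ierr)
      call MPI_Test_cancelled(status, flag, ierr)
   end subroutine slave_loop_db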

6 Limiting Memory Demands on Master
• Use MPI_Ssend and MPI_Issend to encourage limits on memory demands
  » MPI_Ssend and MPI_Issend do not require that the data itself stay in place until the matching receive is posted, but that is the easiest way to implement the synchronous send operations
  » Replace MPI_Send in the slave with:
    – MPI_Ssend for blocking
    – MPI_Issend for nonblocking (even less synchronization)
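A sketch of what the replacement might look like in the slave; the routine name and the single-integer payload are illustrative.

   subroutine return_result(b, comm)
      use mpi
      implicit none
      integer, intent(in) :: b, comm
      integer :: request, status(MPI_STATUS_SIZE), ierr

      ! Blocking synchronous send: does not complete until the master has
      ! started the matching receive, so results cannot pile up in the
      ! master's buffers.
      call MPI_Ssend(b, 1, MPI_INTEGER, 0, 1, comm, ierr)

      ! Nonblocking synchronous alternative: start the send, overlap other
      ! work that does not touch b, then complete it:
      !   call MPI_Issend(b, 1, MPI_INTEGER, 0, 1, comm, request, ierr)
      !   ... work that does not use b ...
      !   call MPI_Wait(request, status, ierr)
   end subroutine return_result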

7 Distributing work further
• Use multiple masters; slaves select a master to request work from at random (see the sketch below)
• Keep more work locally
• Use threads to implement work stealing (threads are discussed later)
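For the first point, a hypothetical helper a slave might use to choose which master to ask; the function name and the convention that the masters occupy ranks 0 .. nmasters-1 are assumptions.

   integer function pick_master(nmasters)
      implicit none
      integer, intent(in) :: nmasters
      real :: r
      call random_number(r)
      pick_master = int(r * nmasters)            ! a rank in 0 .. nmasters-1
      if (pick_master >= nmasters) pick_master = nmasters - 1
   end function pick_master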

8 Fairness in Message Passing
• What happens in this code?

   if (rank .eq. 0) then
      do i=1,1000*(size-1)
         call MPI_Recv( a, n, MPI_INTEGER, &
                        MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &
                        status, ierr )
         print *, ' Received from', status(MPI_SOURCE)
      enddo
   else
      do i=1,1000
         call MPI_Send( a, n, MPI_INTEGER, 0, i, &
                        comm, ierr )
      enddo
   endif

• In what order are the messages received?
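A complete, compilable version of this test, with the declarations the slide leaves out (the message length n = 4 is arbitrary):

   program fairness_test
      use mpi
      implicit none
      integer, parameter :: n = 4
      integer :: a(n), rank, size, comm, i, ierr
      integer :: status(MPI_STATUS_SIZE)

      call MPI_Init(ierr)
      comm = MPI_COMM_WORLD
      call MPI_Comm_rank(comm, rank, ierr)
      call MPI_Comm_size(comm, size, ierr)
      a = rank
      if (rank .eq. 0) then
         do i = 1, 1000*(size-1)
            call MPI_Recv( a, n, MPI_INTEGER, &
                           MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &
                           status, ierr )
            print *, ' Received from', status(MPI_SOURCE)
         enddo
      else
         do i = 1, 1000
            call MPI_Send( a, n, MPI_INTEGER, 0, i, &
                           comm, ierr )
         enddo
      endif
      call MPI_Finalize(ierr)
   end program fairness_test

With a typical MPI installation this can be built and run with something like mpif90 fairness.f90 -o fairness and mpiexec -n 8 ./fairness; the pattern of printed source ranks shows how fairly (or unfairly) the implementation matches MPI_ANY_SOURCE.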

9 Fairness
• MPI makes no guarantee, other than that all messages will be received
• The program could:
  » Receive all messages from process 1, then all from process 2, etc.
  » That order would starve processes 2 and higher of work in a master/slave method
• How can we encourage or enforce fairness?

10 MPI Multiple Completion
• Provide one Irecv for each process:

   do i=1,size-1
      call MPI_Irecv( …, req(i), ierr )
   enddo

• Process all completed receives (MPI_Waitsome guarantees at least one):

   call MPI_Waitsome( size-1, req, count, &
        array_of_indices, array_of_statuses, ierr )
   do j=1,count
      ! Source of completed message is
      !    array_of_statuses(MPI_SOURCE,j)
      ! Repost the request
      call MPI_Irecv( …, req(array_of_indices(j)), ierr )
   enddo
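A self-contained sketch of this pattern. The assumptions (not in the slide) are: one integer per message, one receive posted per slave selected by source rank, a caller-supplied total message count for termination, and an upper bound maxprocs on the number of processes.

   subroutine fair_master(comm, total_msgs)
      use mpi
      implicit none
      integer, intent(in) :: comm, total_msgs
      integer, parameter :: maxprocs = 256
      integer :: req(maxprocs), array_of_indices(maxprocs)
      integer :: array_of_statuses(MPI_STATUS_SIZE, maxprocs)
      integer :: a(maxprocs)
      integer :: size, i, j, count, received, ierr

      call MPI_Comm_size(comm, size, ierr)
      ! Provide one Irecv for each slave
      do i = 1, size-1
         call MPI_Irecv(a(i), 1, MPI_INTEGER, i, MPI_ANY_TAG, comm, req(i), ierr)
      enddo
      received = 0
      do while (received < total_msgs)
         ! Waitsome returns at least one completed request
         call MPI_Waitsome( size-1, req, count, &
              array_of_indices, array_of_statuses, ierr )
         do j = 1, count
            received = received + 1
            print *, 'received from', array_of_statuses(MPI_SOURCE, j)
            ! Repost the request; note the indices are 1-based in Fortran
            i = array_of_indices(j)
            call MPI_Irecv(a(i), 1, MPI_INTEGER, i, MPI_ANY_TAG, comm, req(i), ierr)
         enddo
      enddo
      ! The receives reposted after each slave's last message never match:
      ! cancel and complete them before returning
      do i = 1, size-1
         if (req(i) /= MPI_REQUEST_NULL) then
            call MPI_Cancel(req(i), ierr)
            call MPI_Wait(req(i), array_of_statuses(:,1), ierr)
         end if
      enddo
   end subroutine fair_master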

11 Exercise: Providing Fairness
• Write a program with one master; have each slave process receive 10 work requests and send 10 responses. Make the slaves' computation trivial (e.g., just return a counter)
• Use the simple algorithm (MPI_Send/MPI_Recv)
  » Is the MPI implementation fair?
• Write a new version using MPI_Irecv/MPI_Waitall
  » Be careful of uncancelled Irecv requests
  » Be careful of the meaning of array_of_indices (zero versus one origin)