Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.1 Message-Passing Computing Chapter 2

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.2 Message-Passing Programming using User-level Message Passing Libraries Two primary mechanisms needed: 1. A method of creating separate processes for execution on different computers 2. A method of sending and receiving messages

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.3 Multiple program, multiple data (MPMD) model

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.4 Single Program Multiple Data (SPMD) model. Different processes merged into one program. Control statements select different parts for each processor to execute. All executables started together - static process creation

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.5 Multiple Program Multiple Data (MPMD) Model Separate programs for each processor. One processor executes master process. Other processes started from within master process - dynamic process creation.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.6 Basic “point-to-point” Send and Receive Routines Passing a message between processes using send() and recv() library calls:

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.7 Synchronous Message Passing Routines that actually return when message transfer completed. Synchronous send routine Waits until complete message can be accepted by the receiving process before sending the message. Synchronous receive routine Waits until the message it is expecting arrives. Synchronous routines intrinsically perform two actions: They transfer data and they synchronize processes.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.8 Synchronous send() and recv() using 3-way protocol

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved. 2.9 Asynchronous Message Passing Routines that do not wait for actions to complete before returning. Usually require local storage for messages. More than one version depending upon the actual semantics for returning. In general, they do not synchronize processes but allow processes to move forward sooner. Must be used with care.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI Definitions of Blocking and Non-Blocking Blocking - return after their local actions complete, though the message transfer may not have been completed. Non-blocking - return immediately. Assumes that data storage used for transfer not modified by subsequent statements prior to being used for transfer, and it is left to the programmer to ensure this. These terms may have different interpretations in other systems.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved How message-passing routines return before message transfer completed Message buffer needed between source and destination to hold message:

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Asynchronous (blocking) routines changing to synchronous routines Once local actions completed and message is safely on its way, sending process can continue with subsequent work. Buffers are only of finite length, and a point could be reached when the send routine is held up because all available buffer space is exhausted. Then, the send routine will wait until storage becomes available again - i.e., the routine then behaves as a synchronous routine.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Message Tag Used to differentiate between different types of messages being sent. Message tag is carried within message. If special type matching is not required, a wild card message tag is used, so that the recv() will match with any send().

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Message Tag Example To send a message, x, with message tag 5 from a source process, 1, to a destination process, 2, and assign to y:
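The code on this slide is not in the transcript (the slide uses the book's generic send()/recv() notation); a hedged MPI-style equivalent, assuming x, y, and status are already declared, would be:

    /* on source process 1 */
    MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);            /* destination = 2, tag = 5 */

    /* on destination process 2 */
    MPI_Recv(&y, 1, MPI_INT, 1, 5, MPI_COMM_WORLD, &status);   /* source = 1, tag = 5 */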

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved “Group” message passing routines Have routines that send message(s) to a group of processes or receive message(s) from a group of processes Higher efficiency than separate point-to-point routines although not absolutely necessary.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Broadcast Sending same message to all processes concerned with problem. Multicast - sending same message to defined group of processes.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Scatter Sending each element of an array in root process to a separate process. Contents of ith location of array sent to ith process.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Gather Having one process collect individual values from set of processes.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Reduce Gather operation combined with specified arithmetic/logical operation. Example: Values could be gathered and then added together by root:

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved PVM (Parallel Virtual Machine) Perhaps the first widely adopted attempt at using a workstation cluster as a multicomputer platform, developed by Oak Ridge National Laboratory. Available at no charge. Programmer decomposes problem into separate programs (usually master and group of identical slave programs). Programs compiled to execute on specific types of computers. The set of computers used on a problem must first be defined prior to executing the programs (in a hostfile).

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Message routing between computers done by PVM daemon processes installed by PVM on computers that form the virtual machine.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI (Message Passing Interface) Message passing library standard developed by group of academics and industrial partners to foster more widespread use and portability. Defines routines, not implementation. Several free implementations exist.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI Process Creation and Execution Purposely not defined - Will depend upon implementation. Only static process creation supported in MPI version 1. All processes must be defined prior to execution and started together. Originally SPMD model of computation. MPMD also possible with static creation - each program to be started together specified.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Communicators Defines scope of a communication operation. Processes have ranks associated with communicator. Initially, all processes enrolled in a “universe” called MPI_COMM_WORLD, and each process is given a unique rank, a number from 0 to p - 1, with p processes. Other communicators can be established for groups of processes.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Using SPMD Computational Model

    main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        ...
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find process rank */
        if (myrank == 0)
            master();
        else
            slave();
        ...
        MPI_Finalize();
    }

where master() and slave() are to be executed by master process and slave process, respectively.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 1

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);
        printf("Salam! I'm %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Unsafe message passing Example

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI Solution “Communicators” Defines a communication domain - a set of processes that are allowed to communicate between themselves. Communication domains of libraries can be separated from that of a user program. Used in all point-to-point and collective MPI message-passing communications.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Default Communicator MPI_COMM_WORLD Exists as first communicator for all processes existing in the application. A set of MPI routines exists for forming communicators. Processes have a “rank” in a communicator.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI Point-to-Point Communication Uses send and receive routines with message tags (and communicator). Wild card message tags available

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI Blocking Routines Return when “locally complete” - when location used to hold message can be used again or altered without affecting message being sent. Blocking send will send message and return - does not mean that message has been received, just that process free to move on without adversely affecting message.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Parameters of blocking send
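The parameter diagram on this slide is not in the transcript; the blocking send it describes has the standard MPI prototype:

    int MPI_Send(void *buf,             /* address of send buffer      */
                 int count,             /* number of items to send     */
                 MPI_Datatype datatype, /* datatype of each item       */
                 int dest,              /* rank of destination process */
                 int tag,               /* message tag                 */
                 MPI_Comm comm);        /* communicator                */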

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Parameters of blocking receive
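Likewise, the blocking receive diagram is not in the transcript; its standard MPI prototype is:

    int MPI_Recv(void *buf,             /* address of receive buffer     */
                 int count,             /* maximum number of items       */
                 MPI_Datatype datatype, /* datatype of each item         */
                 int source,            /* rank of source process        */
                 int tag,               /* message tag                   */
                 MPI_Comm comm,         /* communicator                  */
                 MPI_Status *status);   /* status after the operation    */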

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI Basic Types

    MPI datatype           C datatype
    MPI_CHAR               signed char
    MPI_SHORT              signed short int
    MPI_INT                signed int
    MPI_LONG               signed long int
    MPI_UNSIGNED_CHAR      unsigned char
    MPI_UNSIGNED_SHORT     unsigned short int
    MPI_UNSIGNED           unsigned int
    MPI_UNSIGNED_LONG      unsigned long int
    MPI_FLOAT              float
    MPI_DOUBLE             double
    MPI_LONG_DOUBLE        long double
    MPI_BYTE               (untyped bytes; no C equivalent)
    MPI_PACKED             (packed data; no C equivalent)

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 2

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int myrank;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        if (myrank == 0) {
            int x = 3, y = 7;
            MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Send(&y, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        } else if (myrank == 1) {
            int x, y;
            MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Received %d from %d.\n", x, 0);
            MPI_Recv(&y, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
            printf("Received %d from %d.\n", y, 0);
        }
        MPI_Finalize();
        return 0;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 3: Communicating with Self

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int x = 3;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Send(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Finalize();
        return 0;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI Nonblocking Routines Nonblocking send - MPI_Isend() - will return “immediately” even before source location is safe to be altered. Nonblocking receive - MPI_Irecv() - will return even if no message to accept.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Nonblocking Routine Formats MPI_Isend(buf,count,datatype,dest,tag,comm,request) MPI_Irecv(buf,count,datatype,source,tag,comm, request) Completion detected by MPI_Wait() and MPI_Test(). MPI_Wait() waits until operation completed and returns then. MPI_Test() returns with flag set indicating whether operation completed at that time. Need to know whether particular operation completed. Determined by accessing request parameter.
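A small usage sketch (added here, not on the slide) of overlapping computation with a nonblocking send, where x, dest, and tag are assumed to be declared; completion is polled with MPI_Test() or forced with MPI_Wait():

    MPI_Request request;
    MPI_Status status;
    int flag = 0;

    MPI_Isend(&x, 1, MPI_INT, dest, tag, MPI_COMM_WORLD, &request);
    do {
        /* ... other useful computation while the message is in transit ... */
        MPI_Test(&request, &flag, &status);   /* flag becomes nonzero once the send has completed */
    } while (!flag);
    /* or simply: MPI_Wait(&request, &status); */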

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 4

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int x = 3, myrank;
        MPI_Status status;
        MPI_Request req1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        MPI_Send(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        printf("Sent %d to %d\n", x, 0);
        MPI_Irecv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req1);
        MPI_Wait(&req1, &status);   /* x is not safe to read until the nonblocking receive completes */
        printf("Received %d on %d\n", x, myrank);
        MPI_Finalize();
        return 0;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Four Send Communication Modes Standard Mode Send - Not assumed that corresponding receive routine has started. Amount of buffering not defined by MPI. If buffering provided, send could complete before receive reached. Buffered Mode - Send may start and return before a matching receive. Necessary to specify buffer space via routine MPI_Buffer_attach(). Synchronous Mode - Send and receive can start before each other but can only complete together. Ready Mode - Send can only start if matching receive already reached, otherwise error. Use with care.
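A minimal sketch (added here, not on the slide) of buffered-mode send for a single int; buffer space, including MPI_BSEND_OVERHEAD bytes per pending message, must be attached before MPI_Bsend() is called (malloc/free assumed available):

    int x = 42;
    int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
    char *buf = (char *)malloc(bufsize);

    MPI_Buffer_attach(buf, bufsize);                   /* supply the buffer MPI will copy into   */
    MPI_Bsend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* returns once x is copied to the buffer */
    MPI_Buffer_detach(&buf, &bufsize);                 /* blocks until buffered sends are delivered */
    free(buf);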

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Each of the four modes can be applied to both blocking and nonblocking send routines. Only the standard mode is available for the blocking and nonblocking receive routines. Any type of send routine can be used with any type of receive routine.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Collective Communication Involves set of processes, defined by an intra-communicator. Message tags not present. Principal collective operations: MPI_Bcast() - Broadcast from root to all other processes MPI_Gather() - Gather values for group of processes MPI_Scatter() - Scatters buffer in parts to group of processes MPI_Alltoall() - Sends data from all processes to all processes MPI_Reduce() - Combine values on all processes to single value MPI_Reduce_scatter() - Combine values and scatter results MPI_Scan() - Compute prefix reductions of data on processes

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 5

    #include <stdio.h>
    #include <math.h>
    #include "mpi.h"

    int main(int argc, char* argv[])
    {
        int myid, numprocs;
        float x, y;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
        if (myid == 0) x = 2;
        MPI_Bcast(&x, 1, MPI_FLOAT, 0, MPI_COMM_WORLD);
        y = pow(x, 2 + myid);                  /* y = x^(2+myid) */
        printf("On process %d y= %.1f\n", myid, y);
        MPI_Finalize();
        return 0;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI_Gather() To gather items from group of processes into process 0, using dynamically allocated memory in root process:

    int data[10];                                   /* data to be gathered from each process */
    int *buf, myrank, grp_size;

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);         /* find rank */
    if (myrank == 0) {
        MPI_Comm_size(MPI_COMM_WORLD, &grp_size);   /* find group size */
        buf = (int *)malloc(grp_size*10*sizeof(int)); /* allocate memory */
    }
    MPI_Gather(data, 10, MPI_INT, buf, 10, MPI_INT, 0, MPI_COMM_WORLD);
    /* note: recvcount (10 here) is the number of items received from each process, not the total */

MPI_Gather() gathers from all processes, including root.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Barrier As in all message-passing systems, MPI provides a means of synchronizing processes by stopping each one until they all have reached a specific “barrier” call.
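A minimal usage sketch (added here, not on the slide): every process in the communicator must make the same call before any process can continue.

    /* work before the synchronization point */
    MPI_Barrier(MPI_COMM_WORLD);   /* no process passes this call until all have reached it */
    /* work after the synchronization point */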

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Sample MPI program

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "mpi.h"
    #define MAXSIZE 1000

    int main(int argc, char *argv[])
    {
        int myid, numprocs;
        int data[MAXSIZE], i, x, low, high, myresult = 0, result;
        char fn[255];
        FILE *fp;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        if (myid == 0) {                             /* Open input file and initialize data */
            strcpy(fn, getenv("HOME"));
            strcat(fn, "/MPI/rand_data.txt");
            if ((fp = fopen(fn, "r")) == NULL) {
                printf("Can't open the input file: %s\n\n", fn);
                exit(1);
            }
            for (i = 0; i < MAXSIZE; i++) fscanf(fp, "%d", &data[i]);
        }
        MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);   /* broadcast data */
        x = MAXSIZE / numprocs;                      /* add my portion of the data */
        low = myid * x;
        high = low + x;
        for (i = low; i < high; i++) myresult += data[i];
        printf("I got %d from %d\n", myresult, myid);
        MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);  /* compute global sum */
        if (myid == 0) printf("The sum is %d.\n", result);
        MPI_Finalize();
        return 0;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Evaluating Parallel Programs Two idealized laws for analyzing parallelism were introduced in Chapter 1. A good performance model is, while abstracting unimportant details, able to –Explain available observations –Predict future circumstances Amdahl’s law, empirical observations, and asymptotic analysis do not satisfy the first of these requirements. On the other hand, conventional modeling is not practical –Typically involves detailed simulations of hardware components –Thus, too detailed for practical parallel programmers Here we introduce a performance modeling technique with an intermediate level of detail

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Sequential execution time, t_s: Estimate by counting computational steps of best sequential algorithm. Parallel execution time, t_p: In addition to number of computational steps, t_comp, need to estimate communication overhead, t_comm: t_p = t_comp + t_comm

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Computational Time Count number of computational steps. When more than one process executed simultaneously, count computational steps of most complex process. Generally, a function of n and p, i.e. t_comp = f(n, p) Often break down computation time into parts. Then t_comp = t_comp1 + t_comp2 + t_comp3 + … Analysis usually done assuming that all processors are the same and operating at the same speed.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Communication Time Many factors, including network structure and network contention. As a first approximation (e.g., ignoring routing), use t_comm = t_startup + n * t_data t_startup is the startup time, essentially the time to send a message with no data. –Assumed to be constant. –Includes message packing and unpacking t_data is the transmission time to send one data word. –Also assumed constant. –There are n data words.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Communication Time (cont’d) Final communication time, t_comm, is the summation of the communication times of all sequential messages from a process, i.e. t_comm = t_comm1 + t_comm2 + t_comm3 + … Typically, communication patterns of all processes are the same and assumed to take place together, so that only one process need be considered. Both startup and data transmission times, t_startup and t_data, are measured in units of one computational step, so that t_comp and t_comm can be added together to obtain the parallel execution time, t_p.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Benchmark Factors With t_s, t_comp, and t_comm, can establish speedup factor and computation/communication ratio for a particular algorithm/implementation: Both are functions of the number of processors, p, and the number of data elements, n. Will give indication of scalability of parallel solution with increasing number of processors and problem size. Computation/communication ratio will highlight effect of communication with increasing problem size and system size.
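The speedup and ratio formulas themselves appear as figures on the slide and are not in this transcript. A hypothetical worked example (added here, with made-up cost constants), using speedup = t_s / t_p and ratio = t_comp / t_comm, for summing n numbers on p processes where the master sends n/p numbers to each of the other p-1 processes and combines the returned partial sums:

    #include <stdio.h>

    int main(void)
    {
        double n = 1000000, p = 8;
        double t_startup = 10000, t_data = 50;                      /* assumed costs, in computational steps */

        double t_s    = n - 1;                                      /* best sequential: n-1 additions */
        double t_comp = (n / p - 1) + (p - 1);                      /* local sums + final combination */
        double t_comm = (p - 1) * (t_startup + (n / p) * t_data)    /* distribute the data */
                      + (p - 1) * (t_startup + t_data);             /* return the partial sums */
        double t_p    = t_comp + t_comm;

        printf("t_p = %.0f steps, speedup = %.2f, comp/comm ratio = %.3f\n",
               t_p, t_s / t_p, t_comp / t_comm);
        return 0;
    }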

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Debugging and Evaluating Parallel Programs Empirically Low-level debugging Getting a parallel program to work properly can be a significant intellectual challenge. Sequential programs are usually debugged by adding instrumentation code. Like sequential code, instrumentation code can slow down a parallel program. More seriously, instrumentation can cause the instructions of a parallel program to be executed in a different interleaved order –Can cause a nonworking program to work! Traditional debugging tools, as used for sequential programs, are of little use –Varying orders of execution possible

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Debugging and Evaluating Parallel Programs Empirically Visualization Tools Programs can be watched as they are executed in a space-time diagram (or process-time diagram):

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Debugging and Evaluating Parallel Programs (cont’d) Implementations of visualization tools are available for MPI. An example is the Upshot program visualization system. Visualization tools imply software probes into the execution –May alter characteristics of the computations Hardware performance monitors (which do not usually affect performance) are also available

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Evaluating Programs Empirically Measuring Execution Time To measure the execution time between point L1 and point L2 in the code, we might have a construction such as:

    .
    L1: time(&t1);                      /* start timer */
    .
    L2: time(&t2);                      /* stop timer */
    .
    elapsed_time = difftime(t2, t1);    /* elapsed_time = t2 - t1 */
    printf("Elapsed time = %5.2f seconds", elapsed_time);

MPI provides the routine MPI_Wtime() for returning time (in seconds).
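A minimal sketch (added here, not on the slide) of the same measurement using MPI_Wtime(), which returns wall-clock time in seconds as a double:

    double start, end;
    start = MPI_Wtime();                 /* point L1 */
    /* ... code being measured ... */
    end = MPI_Wtime();                   /* point L2 */
    printf("Elapsed time = %5.2f seconds\n", end - start);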

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Parallel Programming Home Page Gives step-by-step instructions for compiling and executing programs, and other information.

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Compiling/Executing MPI Programs Preliminaries You need to first run the following script /tools/mpich/mpichinit

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved More MPI Programs Sample program tracing Exemplifying Reduce Exemplifying Scatter Visualization Demo

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 6

    switch (myrank) {
    case 0:
        x = 0; y = 1; z = 2;
        MPI_Bcast(&x, 1, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Send(&y, 1, MPI_INT, 2, 43, MPI_COMM_WORLD);
        MPI_Bcast(&z, 1, MPI_INT, 1, MPI_COMM_WORLD);
        printf("On %d: x=%d,y=%d,z=%d\n", myrank, x, y, z);
        break;
    case 1:
        x = 3; y = 4; z = 5;
        MPI_Bcast(&x, 1, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Bcast(&y, 1, MPI_INT, 1, MPI_COMM_WORLD);
        printf("On %d: x=%d,y=%d,z=%d\n", myrank, x, y, z);
        break;
    case 2:
        x = 6; y = 7; z = 8;
        MPI_Bcast(&z, 1, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Recv(&x, 1, MPI_INT, 0, 43, MPI_COMM_WORLD, &status);
        MPI_Bcast(&y, 1, MPI_INT, 1, MPI_COMM_WORLD);
        printf("On %d: x=%d,y=%d,z=%d\n", myrank, x, y, z);
        break;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI_Reduce() Used in a global reduction operation –All processes in the communicator contribute data that is combined using a binary operator
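For reference (added here; the slide gives only the description), the routine's standard MPI prototype is:

    int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype,
                   MPI_Op op, int root, MPI_Comm comm);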

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI_Reduce() (cont’d) Predefined reduction operators in MPI
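The slide's table is not reproduced in this transcript; the predefined reduction operators in the MPI standard are:

    MPI_MAX     maximum              MPI_LOR     logical OR
    MPI_MIN     minimum              MPI_BOR     bitwise OR
    MPI_SUM     sum                  MPI_LXOR    logical exclusive OR
    MPI_PROD    product              MPI_BXOR    bitwise exclusive OR
    MPI_LAND    logical AND          MPI_MAXLOC  maximum and its location
    MPI_BAND    bitwise AND          MPI_MINLOC  minimum and its location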

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 7

    #include <stdio.h>
    #include <math.h>
    #include "mpi.h"

    int main(int argc, char* argv[])
    {
        int myid;
        float x, y, result;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        if (myid == 0) x = 2;
        MPI_Bcast(&x, 1, MPI_FLOAT, 0, MPI_COMM_WORLD);
        y = pow(x, 2 + myid);
        printf("On process %d y= %.1f\n", myid, y);
        MPI_Reduce(&y, &result, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        if (!myid) printf("Result: %f\n", result);
        return 0;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI_Scatter() You need to first run the following script /tools/mpich/mpichinit

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved MPI_Scatter()
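The slide's diagram is not in the transcript; the routine's standard MPI prototype is shown below. Note that sendcount and recvcount are per-process counts, not totals.

    int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                    void *recvbuf, int recvcount, MPI_Datatype recvtype,
                    int root, MPI_Comm comm);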

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 8

    static double ddot(int n, double x[], double y[])
    {
        double result = 0.0;
        int i;
        for (i = 0; i < n; ++i)
            result += (x[i] * y[i]);
        return result;
    }

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Example 8 (cont’d)

    /* fragment; declarations, MPI setup, and allocation of x, y, xfrag, yfrag are not shown */
    remain = N % nproc;
    nb = N / nproc;
    if (remain) {
        printf("\nVector length must be a multiple of nproc.\n");
        exit(1);
    }
    for (i = 0; i < N; ++i) x[i] = y[i] = 1.;

    /* scatter the two vectors among the processors */
    MPI_Scatter(x, nb, MPI_DOUBLE, xfrag, nb, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Scatter(y, nb, MPI_DOUBLE, yfrag, nb, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    partial = ddot(nb, xfrag, yfrag);   /* each process uses its own scattered fragments */
    MPI_Reduce(&partial, &result, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (!rank) printf("\nInner product is %f.\n", result);

Slides for Parallel Programming Techniques & Applications Using Networked Workstations & Parallel Computers 2nd Edition, by B. Wilkinson & M. Allen, © 2004 Pearson Education Inc. All rights reserved Visualization Demo To generate a trace log: –mpicc -mpilog … –mpicc -mpitrace … Use Jumpshot/logviewer to view .clog or .slog files Tools to be installed