Presentation transcript:

1 July 29, 2005 Distributed Computing 1:00 pm - 2:00 pm Introduction to MPI Barry Wilkinson Department of Computer Science UNC-Charlotte Consortium for Computational Science and High Performance Computing 2005 Summer Workshop, July 29-July 31, 2005

2 Brief History Late 1980's - early 1990's: a network of computers became an attractive alternative to expensive supercomputers and parallel computer systems for high-performance computing. Several projects, notably: –Berkeley NOW (network of workstations) project. –NASA Beowulf project.

3 Software Tools for Clusters Late 1980's - Parallel Virtual Machine (PVM) developed. Became very popular. Mid 1990's - Message-Passing Interface (MPI) standard defined. Both are based upon the message-passing parallel programming model, and both provide a set of user-level libraries for message passing, used with sequential programming languages (C, C++, ...).

4 Key advantages of using networked computers as a high-performance computing platform Very high-performance workstations and PCs are readily available at low cost. The latest processors can easily be incorporated into the system as they become available. Existing software can be used or modified.

5 Beowulf Clusters* A group of interconnected "commodity" computers achieving high performance with low cost. Typically using commodity interconnects - high-speed Ethernet - and the Linux OS. * Beowulf comes from the name given to the NASA Goddard Space Flight Center cluster project.

6 Cluster Interconnects Originally Fast Ethernet on low-cost clusters –Can use multiple Ethernet connections (cards) to improve performance Gigabit Ethernet - easy upgrade path

7 More Specialized/Higher Performance Myrinet - Gbits/sec; disadvantage: single vendor cLan SCI (Scalable Coherent Interface) QNet Infiniband - may be important as Infiniband interfaces may be integrated on next-generation PCs

8 Dedicated cluster with a master node [Diagram: compute nodes of a dedicated cluster connected through a switch; the master node has a second Ethernet interface providing an uplink to the external network, through which users reach the cluster.]

9 Message-Passing Programming using User-level Message-Passing Libraries Two primary mechanisms needed: 1. A method of creating separate processes for execution on different computers 2. A method of sending and receiving messages

10 Methods of creating separate processes for execution on different computers

11 Multiple Program, Multiple Data (MPMD) model [Diagram: a separate source file for each processor is compiled to suit that processor, giving different executables on processor 0 through processor p - 1.]

12 Single Program Multiple Data (SPMD) model [Diagram: one source file is compiled to suit each processor, giving the same program as the executable on processor 0 through processor p - 1.]

13 Single Program Multiple Data (SPMD) model Different processes merged into one program. Control statements select different parts for each processor to execute. Generally, all executables are started together - static process creation. This is the usual MPI way.

14 Dynamic Process Creation One processor executes the master process. Other processes are started from within the master process - uses the MPMD model.

15 Dynamic Process Creation [Diagram: process 1 calls spawn() at some point in its execution, which starts execution of process 2.]

16 Methods of sending and receiving messages

17 Basic "point-to-point" Send and Receive Routines Passing a message between processes using send() and recv() library calls - generic syntax (actual formats later): [Diagram: process 1 executes send(&x, 2) and process 2 executes recv(&y, 1); the data in x moves from process 1 to y in process 2.]

18 Synchronous Message Passing Routines that actually return when the message transfer has been completed. Synchronous send routine Waits until the complete message can be accepted by the receiving process before sending the message. Synchronous receive routine Waits until the message it is expecting arrives. Synchronous routines intrinsically perform two actions: they transfer data and they synchronize processes.
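In MPI the synchronous send is MPI_Ssend(), which takes the same arguments as the standard MPI_Send() introduced later in these slides. A minimal sketch, with illustrative variable names (x, msgtag):

   if (myrank == 0) {
      /* returns only after process 1 has started receiving the message */
      MPI_Ssend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
   } else if (myrank == 1) {
      MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
   }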

19 Synchronous send() and recv() using 3-way protocol [Diagram, two cases: (a) when send() occurs before recv() - process 1 sends a request to send and suspends until process 2 reaches its recv() and returns an acknowledgment; the message is then transferred and both processes continue. (b) when recv() occurs before send() - process 2 suspends at recv() until process 1's request to send arrives and is acknowledged; the message is transferred and both processes continue.]

20 Asynchronous Message Passing Routines that do not wait for actions to complete before returning. Usually require local storage for messages. More than one version depending upon the actual semantics for returning. In general, they do not synchronize processes but allow processes to move forward sooner. Must be used with care.

21 MPI Definitions of Blocking and Non-Blocking Blocking - return after their local actions complete, though the message transfer may not have been completed. Non-blocking - return immediately. Assumes that the data storage used for the transfer is not modified by subsequent statements prior to being used for the transfer, and it is left to the programmer to ensure this. These terms may have different interpretations in other systems.

22 How message-passing routines return before message transfer completed Message buffer needed between source and destination to hold message: [Diagram: process 1's send() deposits the message into the message buffer and the process continues; process 2's recv() later reads the message from the buffer.]

23 Asynchronous (blocking) routines changing to synchronous routines Once local actions are completed and the message is safely on its way, the sending process can continue with subsequent work. Buffers are only of finite length, and a point could be reached when the send routine is held up because all available buffer space is exhausted. Then the send routine will wait until storage becomes available again - i.e., the routine then behaves as a synchronous routine.

24 Message Tag Used to differentiate between different types of messages being sent. Message tag is carried within message. If special type matching is not required, a wild card message tag is used, so that the recv() will match with any send().

25 Message Tag Example To send a message, x, with message tag 5 from source process 1 to destination process 2 and assign it to y: [Diagram: process 1 executes send(&x, 2, 5); process 2 executes recv(&y, 1, 5), which waits for a message from process 1 with a tag of 5; the data moves from x to y.]
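In MPI the tag is an explicit argument of the send and receive routines (full formats appear later), and MPI_ANY_TAG acts as the wild card tag. A rough sketch of the example above, with illustrative variables x and y:

   if (myrank == 1) {
      MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);           /* tag 5, to process 2 */
   } else if (myrank == 2) {
      MPI_Recv(&y, 1, MPI_INT, 1, 5, MPI_COMM_WORLD, &status);  /* matches only tag 5 */
      /* MPI_Recv(&y, 1, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD, &status);  accepts any tag */
   }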

26 “Group” message passing routines Have routines that send message(s) to a group of processes or receive message(s) from a group of processes Higher efficiency than separate point-to-point routines although not absolutely necessary.

27 Broadcast Sending the same message to all processes concerned with the problem. Multicast - sending the same message to a defined group of processes. [Diagram: each process calls a bcast() routine; the data in the root's buffer (process 0) ends up in the buffers of processes 0 to p - 1.]
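A minimal sketch of the MPI form: every process makes the same MPI_Bcast() call, naming the root that supplies the data (buffer name and size here are illustrative):

   int buf[100];
   /* process 0 (the root) fills buf; after the call, every process
      in MPI_COMM_WORLD holds a copy of the 100 integers */
   MPI_Bcast(buf, 100, MPI_INT, 0, MPI_COMM_WORLD);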

28 Scatter Sending each element of an array in the root process to a separate process. The contents of the ith location of the array are sent to the ith process. [Diagram: each process calls a scatter() routine; block i of the root's buffer (process 0) arrives as the data of process i.]
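A minimal sketch of the MPI form, assuming the root's array holds one block of 10 integers per process (buffer names and the bound MAX_P are illustrative):

   int sendbuf[MAX_P * 10];   /* significant only on the root */
   int recvbuf[10];
   /* block i of sendbuf on the root (process 0) is delivered
      into recvbuf on process i */
   MPI_Scatter(sendbuf, 10, MPI_INT, recvbuf, 10, MPI_INT, 0, MPI_COMM_WORLD);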

29 Gather Having one process collect individual values from a set of processes. [Diagram: each process calls a gather() routine; the data of process i arrives in the ith block of the root's buffer (process 0).]
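The MPI form mirrors scatter; a minimal sketch, again with illustrative buffers:

   int sendbuf[10];
   int recvbuf[MAX_P * 10];   /* significant only on the root */
   /* the 10 items from process i land in block i of recvbuf
      on the root (process 0) */
   MPI_Gather(sendbuf, 10, MPI_INT, recvbuf, 10, MPI_INT, 0, MPI_COMM_WORLD);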

30 Reduce A gather operation combined with a specified arithmetic/logical operation. Example: values could be gathered and then added together by the root: [Diagram: each process calls a reduce() routine; the data values of processes 0 to p - 1 are combined (here with +) into the root's buffer (process 0).]
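A minimal sketch of the MPI form for the addition example (variable names are illustrative):

   int myvalue, sum;
   /* every process contributes myvalue; the combined result (here a sum)
      appears only in sum on the root (process 0) */
   MPI_Reduce(&myvalue, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);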

31 Message routing between computers is done by daemon processes installed on the computers that form the "virtual machine". Can have more than one process running on each computer. [Diagram: each computer runs a daemon alongside the application program (executable); messages between computers are sent through the network by the daemons.]

32 MPI (Message Passing Interface) Message passing library standard developed by a group of academics and industrial partners to foster more widespread use and portability. Defines routines, not implementation. Several free implementations exist.

33 MPI Process Creation and Execution Purposely not defined - Will depend upon implementation. Version 1 - Only static process creation supported. All processes must be defined prior to execution and started together. Version 2 - does allow dynamic process creation with MPI_Comm_spawn routine. (Version 2 only recently fully implemented.) Originally SPMD model of computation. MPMD now also possible.

34 Communicators Defines scope of a communication operation. Processes have ranks associated with communicator. Initially, all processes enrolled in a “universe” called MPI_COMM_WORLD, and each process is given a unique rank, a number from 0 to p - 1, with p processes. Other communicators can be established for groups of processes.
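One way such communicators can be formed is MPI_Comm_split(), which partitions an existing communicator using a "color" value chosen by each process; the even/odd split below is only an illustration:

   MPI_Comm subcomm;
   int myrank, subrank;
   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
   /* processes supplying the same color are grouped into the same new
      communicator; the key (here myrank) orders the ranks within it */
   MPI_Comm_split(MPI_COMM_WORLD, myrank % 2, myrank, &subcomm);
   MPI_Comm_rank(subcomm, &subrank);   /* rank within the new communicator */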

35 Using SPMD Computational Model

main (int argc, char *argv[])
{
   MPI_Init(&argc, &argv);
   .
   .
   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find process rank */
   if (myrank == 0)
      master();
   else
      slave();
   .
   .
   MPI_Finalize();
}

where master() and slave() are to be executed by the master process and slave processes, respectively.

36 Unsafe message passing - Example [Diagram: process 0 and process 1 each call a library routine lib() as well as their own send(…,1,…) and recv(…,0,…). (a) Intended behavior - the send() in the user program of process 0 is matched by the recv() in the user program of process 1, and the sends/receives inside the library routines match each other. (b) Possible behavior - without a way to separate the messages, a send() in the user program may instead be matched by a recv() inside the library routine, delivering data to the wrong destination.]

37 MPI Solution “Communicators” Defines a communication domain - a set of processes that are allowed to communicate between themselves. Communication domains of libraries can be separated from that of a user program. Used in all point-to-point and collective MPI message-passing communications.

38 Default Communicator MPI_COMM_WORLD Exists as first communicator for all processes existing in the application. A set of MPI routines exists for forming communicators. Processes have a “rank” in a communicator.

39 MPI Point-to-Point Communication Uses send and receive routines with message tags (and communicator). Wild card message tags are available.

40 MPI Blocking Routines Return when “locally complete” - when location used to hold message can be used again or altered without affecting message being sent. Blocking send will send message and return - does not mean that message has been received, just that process free to move on without adversely affecting message.

41 Parameters of blocking send

MPI_Send(buf, count, datatype, dest, tag, comm)

buf      - address of send buffer
count    - number of items to send
datatype - datatype of each item
dest     - rank of destination process
tag      - message tag
comm     - communicator

42 Parameters of blocking receive

MPI_Recv(buf, count, datatype, src, tag, comm, status)

buf      - address of receive buffer
count    - maximum number of items to receive
datatype - datatype of each item
src      - rank of source process
tag      - message tag
comm     - communicator
status   - status after operation

43 Example To send an integer x from process 0 to process 1:

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
if (myrank == 0) {
   int x;
   MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
} else if (myrank == 1) {
   int x;
   MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}

44 MPI Nonblocking Routines Nonblocking send - MPI_Isend() - will return “immediately” even before source location is safe to be altered. Nonblocking receive - MPI_Irecv() - will return even if no message to accept.

45 Nonblocking Routine Formats

MPI_Isend(buf, count, datatype, dest, tag, comm, request)
MPI_Irecv(buf, count, datatype, source, tag, comm, request)

Completion detected by MPI_Wait() and MPI_Test(). MPI_Wait() waits until the operation has completed and then returns. MPI_Test() returns immediately, with a flag set indicating whether the operation has completed at that time. To find out whether a particular operation has completed, the request parameter is passed to these routines.
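A sketch of using MPI_Test() to overlap computation with a nonblocking send (compute_something() is a placeholder for useful work; x and msgtag are illustrative):

   MPI_Request req;
   MPI_Status status;
   int flag = 0;
   MPI_Isend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD, &req);
   while (!flag) {
      compute_something();             /* useful work while the transfer proceeds */
      MPI_Test(&req, &flag, &status);  /* flag becomes nonzero once the send has completed */
   }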

46 Example To send an integer x from process 0 to process 1 and allow process 0 to continue:

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
if (myrank == 0) {
   int x;
   MPI_Isend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD, &req1);
   compute();
   MPI_Wait(&req1, &status);
} else if (myrank == 1) {
   int x;
   MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}

47 Four Send Communication Modes Standard Mode - Not assumed that corresponding receive routine has started. Amount of buffering not defined by MPI. If buffering provided, send could complete before receive reached. Buffered Mode - Send may start and return before a matching receive. Necessary to specify buffer space via routine MPI_Buffer_attach(). Synchronous Mode - Send and receive can start before each other but can only complete together. Ready Mode - Send can only start if matching receive already reached, otherwise error. Use with care.
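A sketch of the buffered mode; the buffer sizing is only indicative, with the MPI constant MPI_BSEND_OVERHEAD accounting for bookkeeping space per buffered message (malloc requires <stdlib.h>). The synchronous and ready modes use MPI_Ssend() and MPI_Rsend(), which take the same argument list as MPI_Send().

   int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
   char *buffer = (char *) malloc(bufsize);
   MPI_Buffer_attach(buffer, bufsize);       /* supply buffer space for buffered sends */
   MPI_Bsend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);  /* may return before the matching receive */
   MPI_Buffer_detach(&buffer, &bufsize);     /* waits until buffered messages have been transmitted */
   free(buffer);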

48 Each of the four modes can be applied to both blocking and nonblocking send routines. Only the standard mode is available for the blocking and nonblocking receive routines. Any type of send routine can be used with any type of receive routine.

49 Collective Communication Involves set of processes, defined by an intra-communicator. Message tags not present. Principal collective operations: MPI_Bcast() - Broadcast from root to all other processes MPI_Gather() - Gather values for group of processes MPI_Scatter() - Scatters buffer in parts to group of processes MPI_Alltoall() - Sends data from all processes to all processes MPI_Reduce() - Combine values on all processes to single value MPI_Reduce_scatter() - Combine values and scatter results MPI_Scan() - Compute prefix reductions of data on processes

50 Barrier routine A means of synchronizing processes by stopping each one until they all have reached a specific “barrier” call.
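In MPI this is MPI_Barrier(), which takes only the communicator; a minimal sketch:

   /* no process continues past this point until every process
      in MPI_COMM_WORLD has made the call */
   MPI_Barrier(MPI_COMM_WORLD);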

51 Sample MPI program

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define MAXSIZE 1000

int main(int argc, char *argv[])
{
   int myid, numprocs;
   int data[MAXSIZE], i, x, low, high, myresult = 0, result;
   char fn[255];
   FILE *fp;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);
   if (myid == 0) {   /* Open input file and initialize data */
      strcpy(fn, getenv("HOME"));
      strcat(fn, "/MPI/rand_data.txt");
      if ((fp = fopen(fn, "r")) == NULL) {
         printf("Can't open the input file: %s\n\n", fn);
         exit(1);
      }
      for (i = 0; i < MAXSIZE; i++)
         fscanf(fp, "%d", &data[i]);
   }
   MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);  /* broadcast data */
   x = MAXSIZE/numprocs;   /* Add my portion of data */
   low = myid * x;
   high = low + x;
   for (i = low; i < high; i++)
      myresult += data[i];
   printf("I got %d from %d\n", myresult, myid);
   /* Compute global sum */
   MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
   if (myid == 0) printf("The sum is %d.\n", result);
   MPI_Finalize();
   return 0;
}

52 Debugging/Evaluating Parallel Programs Empirically

53 Visualization Tools Programs can be watched as they are executed in a space-time diagram (or process-time diagram): [Diagram: processes 1, 2 and 3 shown along a time axis, with intervals marked as computing, waiting, or executing a message-passing system routine, and arrows showing messages passing between processes.]

54 Implementations of visualization tools are available for MPI. An example is the Jumpshot program visualization system.

55 Evaluating Programs Empirically - Measuring Execution Time To measure the execution time between point L1 and point L2 in the code, we might have a construction such as:

L1: time(&t1);   /* start timer */
    .
    .
L2: time(&t2);   /* stop timer */
    elapsed_time = difftime(t2, t1);   /* elapsed_time = t2 - t1 */
    printf("Elapsed time = %5.2f seconds", elapsed_time);

MPI provides the routine MPI_Wtime() for returning time (in seconds).
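A sketch of the same measurement using MPI_Wtime(); the barrier is just one common way to give the processes a consistent starting point:

   double t1, t2;
   MPI_Barrier(MPI_COMM_WORLD);   /* optional: line the processes up first */
   t1 = MPI_Wtime();              /* wall-clock time in seconds */
   /* ... section of code being timed ... */
   t2 = MPI_Wtime();
   if (myid == 0)
      printf("Elapsed time = %5.2f seconds\n", t2 - t1);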

56 Parallel Programming Home Page Gives step-by-step instructions for compiling and executing programs, and other information.

57 Compiling/Executing MPI Programs Minor differences in the command lines required depending upon the MPI implementation. In the laboratory part of this session, we will use MPICH or MPICH-2. Generally, a file needs to be present that lists all the computers to be used. MPI then uses the computers listed.

58 MPICH and MPICH-2 Both Windows and Linux versions. Very easy to install on a Windows system.

59 Defining the Computers to Use Generally, need to create a file containing the list of machines to be used. Sample machines file (or hostfile):

grid0.cs.appstate.edu
node1.cs.appstate.edu
node2.cs.appstate.edu
node3.cs.appstate.edu
node4.cs.appstate.edu
node5.cs.appstate.edu
node6.cs.appstate.edu
node7.cs.appstate.edu

In MPICH, if just using one computer, do not need this file.

60 Starting MPI In LAM-MPI (6.5.2), commands need to be issued: to start MPI the first time: lamboot -v hostfile; subsequently: lamboot. (Also need to terminate at the end.) In MPICH, no command is necessary.

61 MPICH Commands Two basic commands: mpicc, a script to compile MPI programs mpirun, the command to execute an MPI program.

62 Compiling/executing (SPMD) MPI program

To compile MPI programs:
   mpicc -o file file.c
or
   mpiCC -o file file.cpp

To execute an MPI program:
   mpirun -v -np no_processors file
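For example, to compile a program prog.c and run it on 8 processes with the machines file from the earlier slide (names here are illustrative, and the exact mpirun options differ slightly between MPICH versions):

   mpicc -o prog prog.c
   mpirun -np 8 -machinefile machines prog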

63 Acknowledgements Support on cluster computing has been provided by the National Science Foundation, under grants # , # , and # , PI: A. B. Wilkinson.

64 Barry Wilkinson and Michael Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, 2nd edition, Prentice Hall Inc., 467 pp, 2005, ISBN ; international version ISBN .