2.1 Message-Passing Computing ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, Jan 17, 2012.

2.2 Software Tools for Clusters
Late 1980s: Parallel Virtual Machine (PVM) developed. Became very popular.
Mid 1990s: Message-Passing Interface (MPI) standard defined.
Both are based upon the message-passing parallel programming model and provide a set of user-level libraries for message passing, used with sequential programming languages (C, C++, ...).

2.3 MPI (Message Passing Interface)
Message-passing library standard developed by a group of academics and industrial partners to foster more widespread use and portability. Defines the routines, not the implementation. Several free implementations exist.

2.4 Message passing concept using library routines

2.5 Message routing between computers is typically done by daemon processes installed on the computers that form the “virtual machine”.
[Figure: application programs (executables) running on several workstations, with daemon processes sending messages through the network.]
Can have more than one process running on each computer.

2.6 mpd daemon processes (MPICH implementation of MPI)
For MPI programs to operate between servers, MPI “mpd” daemon processes must be running on each server, forming a ring. Execute the command mpdtrace (or mpdtrace -l), which will list those servers where mpd is running. A sketch of the typical commands follows.
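A minimal sketch of setting up and tearing down an mpd ring with MPICH-2 (the hosts-file name mpd.hosts and the count of 4 are assumptions; check your installation's documentation):

mpdboot -n 4 -f mpd.hosts    (start mpd daemons on the 4 machines listed in mpd.hosts)
mpdtrace -l                  (list the machines where mpd is now running)
mpdallexit                   (shut the ring down when finished)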

2.7 Message-Passing Programming using User-level Message-Passing Libraries
Two primary mechanisms needed:
1. A method of creating processes for execution on different computers
2. A method of sending and receiving messages

2.8 Creating processes on different computers

2.9 Multiple Program, Multiple Data (MPMD) model
[Figure: a different source file for each processor, each compiled to suit its processor, giving a different executable on processor 0 through processor p - 1.]
Different programs executed by each processor.

2.10 Single Program Multiple Data (SPMD) model
[Figure: one source file compiled to suit each processor, giving the same executable on processor 0 through processor p - 1.]
Basic MPI way. Same program executed by each processor. Control statements select different parts for each processor to execute.

2.11 In MPI, processes within a defined communicating group are given a number called a rank, starting from zero onwards. The program uses control constructs, typically IF statements, to direct processes to perform specific actions. Example:
if (rank == 0) ... /* do this */;
if (rank == 1) ... /* do this */;

2.12 Master-Slave approach
Usually the computation is constructed as a master-slave model: one process (the master) performs one set of actions, and all the other processes (the slaves) perform identical actions, although on different data, i.e.
if (rank == 0) ... /* master do this */;
else ... /* all slaves do this */;

2.13 Static process creation
All executables started together, done when one starts the compiled programs. The normal MPI way.

2.14 Multiple Program Multiple Data (MPMD) Model with Dynamic Process Creation
[Figure: process 1 calls spawn() at some point in time, starting execution of process 2.]
One processor executes the master process; other processes are started from within the master process. Available in MPI-2. Might find applicability if it is not known initially how many processes are needed. Does have a process-creation overhead. A sketch using MPI_Comm_spawn() appears below.
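A minimal sketch of dynamic process creation with the MPI-2 routine MPI_Comm_spawn(); the worker executable name "worker" and the count of 4 are assumptions for illustration:

#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Comm children;   /* intercommunicator to the spawned processes */
    MPI_Init(&argc, &argv);
    /* spawn 4 copies of the executable "worker" at run time */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);
    /* ... communicate with the workers via the intercommunicator ... */
    MPI_Finalize();
    return 0;
}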

2.15 Methods of sending and receiving messages

2.16 Basic “point-to-point” Send and Receive Routines
Passing a message between processes using send() and recv() library calls. Generic syntax (actual formats later):
Process 1: send(&x, 2);    Process 2: recv(&y, 1);
[Figure: data moves from x in process 1 to y in process 2.]

2.17 MPI point-to-point message passing using MPI_Send() and MPI_Recv() library calls

2.18 Semantics of MPI_Send() and MPI_Recv()
Called blocking, which in MPI means that the routine waits until all its local actions have taken place before returning. After returning, any local variables used can be altered without affecting the message transfer.
MPI_Send() - the message may not have reached its destination, but the process can continue in the knowledge that the message is safely on its way.
MPI_Recv() - returns when the message has been received and the data collected. Will cause the process to stall until the message is received.
Other versions of MPI_Send() and MPI_Recv() have different semantics, e.g. the nonblocking versions sketched below.
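As a brief illustration of those different semantics, the nonblocking variants MPI_Isend()/MPI_Irecv() return immediately, and the transfer is completed later with MPI_Wait() (a minimal sketch; these routines are not covered further on these slides):

MPI_Request request;
MPI_Status status;
MPI_Isend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD, &request);
/* ... do other work while the message is in transit ... */
MPI_Wait(&request, &status);   /* now safe to reuse x */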

2.19 Message Tag
Used to differentiate between different types of messages being sent. The message tag is carried within the message. If special type matching is not required, a wild-card message tag is used; then recv() will match with any send().
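In MPI the wild cards are MPI_ANY_TAG and, for the source rank, MPI_ANY_SOURCE; a brief sketch, with the actual sender and tag recoverable from the status structure:

MPI_Recv(&y, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
printf("got message from rank %d with tag %d\n", status.MPI_SOURCE, status.MPI_TAG);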

2.20 Message Tag Example
To send a message, x, with message tag 5 from a source process, 1, to a destination process, 2, and assign to y:
Process 1: send(&x, 2, 5);    Process 2: recv(&y, 1, 5);
[Figure: data moves from x in process 1 to y in process 2; recv() waits for a message from process 1 with a tag of 5.]

2.21 Unsafe message passing - Example
[Figure: processes 0 and 1 each call send(…,1,…) and recv(…,0,…) around a library routine lib(). In (a), the intended behavior, each message reaches the matching receive; in (b), a possible behavior, a message is picked up by the wrong receive because the sends and receives match on source and destination alone.]

2.22 MPI Solution: “Communicators”
Defines a communication domain - a set of processes that are allowed to communicate between themselves. The communication domains of libraries can be separated from that of a user program. Used in all point-to-point and collective MPI message-passing communications.
Note: Intracommunicator - for communicating within a single group of processes. Intercommunicator - for communicating between two or more groups of processes.

2.23 Default Communicator MPI_COMM_WORLD
Exists as the first communicator for all processes in the application. A set of MPI routines exists for forming communicators. Processes have a “rank” in a communicator. One such routine is sketched below.
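One example of forming a new communicator (a minimal sketch; MPI_Comm_split() is just one of the routines available): here processes are divided into two groups by even/odd rank in MPI_COMM_WORLD:

int world_rank, new_rank;
MPI_Comm new_comm;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
/* processes with the same "color" (world_rank % 2) end up in the same new communicator */
MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &new_comm);
MPI_Comm_rank(new_comm, &new_rank);   /* rank within the new communicator */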

2.24 Using SPMD Computational Model

int main(int argc, char *argv[])
{
    int myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
    if (myrank == 0)
        master();
    else
        slave();
    MPI_Finalize();
}

where master() and slave() are to be executed by the master process and slave processes, respectively.

2.25 Parameters of blocking send
MPI_Send(buf, count, datatype, dest, tag, comm)
buf - address of send buffer
count - number of items to send
datatype - datatype of each item
dest - rank of destination process
tag - message tag
comm - communicator

2.26 Parameters of blocking receive
MPI_Recv(buf, count, datatype, src, tag, comm, status)
buf - address of receive buffer
count - maximum number of items to receive
datatype - datatype of each item
src - rank of source process
tag - message tag
comm - communicator
status - status after operation

2.27 Example
To send an integer x from process 0 to process 1:

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
if (myrank == 0) {
    int x;
    MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
} else if (myrank == 1) {
    int x;
    MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}
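After a receive, the status structure can also be queried for how many items actually arrived (a brief sketch; useful because count in MPI_Recv() is only a maximum):

int n;
MPI_Get_count(&status, MPI_INT, &n);   /* n = number of MPI_INT items received */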

2.28 Sample MPI Hello World program

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    char message[20];
    int i, rank, size, type = 99;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        strcpy(message, "Hello, world");
        for (i = 1; i < size; i++)
            MPI_Send(message, 13, MPI_CHAR, i, type, MPI_COMM_WORLD);
    } else
        MPI_Recv(message, 20, MPI_CHAR, 0, type, MPI_COMM_WORLD, &status);
    printf("Message from process =%d : %.13s\n", rank, message);
    MPI_Finalize();
}

2.29 The program sends the message “Hello, world” from the master process (rank = 0) to each of the other processes (rank != 0). Then, all processes execute a printf statement. In MPI, standard output is automatically redirected from remote computers to the user's console, so the final result will be

Message from process =1 : Hello, world
Message from process =0 : Hello, world
Message from process =2 : Hello, world
Message from process =3 : Hello, world
...

except that the order of the messages might be different and is unlikely to be in ascending order of process rank; it will depend upon how the processes are scheduled.

2.30 Setting Up the Message Passing Environment
Usually the computers are specified in a file, called a hostfile or machines file. The file contains the names of the computers and possibly the number of processes that should run on each computer. An implementation-specific algorithm selects computers from the list to run the user programs.

2.31 Users may create their own machines file for their program. Example:

coit-grid01.uncc.edu
coit-grid02.uncc.edu
coit-grid03.uncc.edu
coit-grid04.uncc.edu
coit-grid05.uncc.edu

If a machines file is not specified, a default machines file is used, or it may be that the program will only run on a single computer. Note: for our cluster, one uses local names; see the assignment instructions.

2.32 Compiling/Executing MPI Programs
Minor differences in the command lines required, depending upon the MPI implementation. For the assignments, we will use MPICH or MPICH-2. Generally, a machines file needs to be present that lists all the computers to be used; MPI then uses those computers. Otherwise it will simply run on one computer.

2.33 MPICH Commands
Two basic commands:
mpicc - a script to compile MPI programs
mpiexec - the MPI-2 standard command*
*mpiexec replaces the earlier mpirun command, although mpirun still exists.

2.34 Compiling/executing (SPMD) MPI program
For MPICH, at a command line:
To start MPI: nothing special (make sure the mpd daemons are running).
To compile MPI programs:
for C: mpicc -o prog prog.c
for C++: mpiCC -o prog prog.cpp
To execute an MPI program:
mpiexec -n no_procs prog
where no_procs is a positive integer.
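For instance, compiling and running the Hello World program of slide 2.28 with four processes (the file name hello.c is an assumption):

mpicc -o hello hello.c
mpiexec -n 4 ./hello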

2.35 Executing MPICH program on multiple computers
Create a file called, say, “machines” containing the list of machines, e.g.:

coit-grid01.uncc.edu
coit-grid02.uncc.edu
coit-grid03.uncc.edu
coit-grid04.uncc.edu
coit-grid05.uncc.edu

Note: for our cluster, one uses local names; see the assignment instructions.

2.36 mpiexec -machinefile machines -n 4 prog
would run prog with four processes. Each process would execute on one of the machines in the list. MPI would cycle through the list of machines, giving processes to machines. Can also specify the number of processes on a particular machine by adding that number after the machine name.
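A hedged example of the per-machine count (the exact separator varies between MPI implementations; a colon is common in MPICH machine files, so check your implementation's documentation):

coit-grid01.uncc.edu:2
coit-grid02.uncc.edu:4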

2.37 Debugging/Evaluating Parallel Programs Empirically

2.38 Evaluating Programs Empirically - Measuring Execution Time
To measure the execution time between point L1 and point L2 in the code, one might have a construction such as:

time_t t1, t2;
double elapsed_Time;
...
L1: time(&t1);    /* start timer */
...
L2: time(&t2);    /* stop timer */
...
elapsed_Time = difftime(t2, t1);    /* time = t2 - t1 */
printf("Elapsed time = %5.2f secs", elapsed_Time);

2.39 MPI provides the routine MPI_Wtime() for returning the time (in seconds):

double start_time, end_time, exe_time;
start_time = MPI_Wtime();
...
end_time = MPI_Wtime();
exe_time = end_time - start_time;
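In practice one often synchronizes the processes before timing so that all start together; a minimal sketch (the work() call is a hypothetical stand-in for the code section being measured):

MPI_Barrier(MPI_COMM_WORLD);   /* all processes start timing together */
start_time = MPI_Wtime();
work();                        /* hypothetical section being measured */
MPI_Barrier(MPI_COMM_WORLD);   /* wait until every process has finished */
end_time = MPI_Wtime();
if (rank == 0)
    printf("Elapsed time = %f secs\n", end_time - start_time);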

2.40 Visualization Tools
Programs can be watched as they are executed in a space-time diagram (or process-time diagram).
[Figure: processes 1-3 plotted against time, showing periods of computing, waiting, and message-passing system routines, with messages passing between processes.]
Visualization tools are available for MPI, e.g., Upshot.

2.41 Eclipse IDE - PTP (Parallel Tools Platform)
A recent version of the Eclipse IDE that supports development of parallel programs (MPI, OpenMP). We are still evaluating this, but it looks really good.

2.42 Eclipse - PTP

2.43 Next topic Discussion of first assignment – To write and execute some simple MPI programs. – Will include timing execution.