MPI Program Structure Self Test with solution

Self Test
1. How would you modify "Hello World" so that only even-numbered processors print the greeting message?

Answer

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myrank, size;

    MPI_Init(&argc, &argv);                  /* Initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* Get my rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* Get the total number of processors */

    if ((myrank % 2) == 0)
        printf("Processor %d of %d: Hello World!\n", myrank, size);

    MPI_Finalize();                          /* Terminate MPI */
    return 0;
}
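To try the answer out, a minimal usage sketch (assuming an MPI installation that provides the usual mpicc and mpiexec wrappers; the file name hello.c is an arbitrary choice):

mpicc hello.c -o hello
mpiexec -n 4 ./hello

With four processors, only ranks 0 and 2 print the greeting.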

Self Test
Consider the following MPI pseudo-code, which sends a piece of data from processor 1 to processor 2:

MPI_INIT()
MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
if (myrank = 1)
    MPI_SEND (some data to processor 2 in MPI_COMM_WORLD)
else {
    MPI_RECV (data from processor 1 in MPI_COMM_WORLD)
    print "Message received!"
}
MPI_FINALIZE()

Here MPI_SEND and MPI_RECV are blocking send and receive routines. Thus, for example, a process encountering the MPI_RECV statement will block while waiting for a message from processor 1.
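For concreteness, here is one way the pseudo-code could be rendered in C (a sketch only: the message tag 0 and the single-integer payload are assumptions, not part of the original pseudo-code). Note that, exactly like the pseudo-code, every rank other than 1 posts the blocking receive, which is what the next two questions explore.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myrank, data = 42;        /* illustrative payload, not from the original */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    if (myrank == 1) {
        /* Blocking send of one integer to processor 2 */
        MPI_Send(&data, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
    } else {
        /* Blocking receive of one integer from processor 1 */
        MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
        printf("Message received!\n");
    }

    MPI_Finalize();
    return 0;
}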

Self Test
2. If this code is run on a single processor, what do you expect to happen?
a) The code will print "Message received!" and then terminate.
b) The code will terminate normally with no output.
c) The code will hang with no output.
d) An error condition will result.

Answer
a) Incorrect. This cannot happen; there is only one processor!
b) Incorrect. Not quite. Remember that ranks are indexed starting with zero.
c) Incorrect. Good try! The receive on processor 0 is blocking, and so you might expect it to simply wait forever for an incoming message. However, remember that there is no processor 1.
d) Correct! Both 1 and 2 are invalid ranks in this case; thus the code prints an error message and exits. (Note, however, that checking for invalid arguments may be disabled by default on some machines. In such cases the code will appear to hang with no output, since the blocking receive on processor 0 is never satisfied.)
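Whether an invalid rank aborts the run or simply returns an error code depends on the error handler attached to the communicator; the default handler, MPI_ERRORS_ARE_FATAL, normally aborts the job. A hedged sketch of checking the error yourself follows (MPI_Comm_set_errhandler and MPI_ERRORS_RETURN are standard MPI, but exactly how an implementation reports an invalid rank still varies):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myrank, data, rc;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Ask MPI to return error codes instead of aborting. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* On a single-process run, source rank 1 is invalid, so this call
       should fail with an error code rather than block. */
    rc = MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
    if (rc != MPI_SUCCESS)
        printf("Processor %d: MPI_Recv failed with error code %d\n", myrank, rc);

    MPI_Finalize();
    return 0;
}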

Self Test
3. If the code is run on three processors, what do you expect?
a) The code will terminate after printing "Message received!".
b) The code will hang with no output.
c) The code will hang after printing "Message received!".
d) The code will give an error message and exit (possibly leaving a core file).

Answer
a) Incorrect. Not quite. Remember that processor 0 also posts a (blocking) receive.
b) Incorrect. Close. It is true that the blocking receive on processor 0 is never satisfied, but the communication between processor 1 and processor 2 occurs independently of this.
c) Correct! Yes! The receive posted by processor 0 is blocking, so the code hangs while waiting for a message to arrive.
d) Incorrect. No, in this case all ranks referred to in the code are valid, so there is no error.

Self Test
4. Consider an MPI code running on four processors, denoted A, B, C, and D. In the default communicator MPI_COMM_WORLD their ranks are 0-3, respectively. Assume that we have defined another communicator, called USER_COMM, consisting of processors B and D. Which one of the following statements about USER_COMM is always true?
a) Processors B and D have ranks 1 and 3, respectively.
b) Processors B and D have ranks 0 and 1, respectively.
c) Processors B and D have ranks 1 and 3, but which has which is in general undefined.
d) Processors B and D have ranks 0 and 1, but which has which is in general undefined.

Answer
a) Incorrect. Remember that ranks in a given communicator always start from zero. Try again.
b) Incorrect. Close. Ranks are assigned starting with zero, so this is sometimes true. Remember, however, that there is no connection between ranks in different communicators.
c) Incorrect. Remember that ranks in a given communicator always start from zero. Try again.
d) Correct! Yes! This is the only statement which is always true. Ranks are assigned starting from zero, but which processor gets which rank is in general undefined, and ranks in different communicators are unrelated.
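For context, one possible way USER_COMM could have been built is sketched below; the question does not say how it was created, so the variable names and the use of MPI_Group_incl / MPI_Comm_create are assumptions. With this particular construction the new ranks follow the order of the list passed to MPI_Group_incl, but since the construction is left unspecified, the rank assignment is in general undefined, as answer d) states.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Group world_group, user_group;
    MPI_Comm  user_comm;
    int members[2] = {1, 3};      /* world ranks of processors B and D */
    int worldrank, newrank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &worldrank);

    /* Build a group containing only world ranks 1 and 3, then a communicator from it. */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, 2, members, &user_group);
    MPI_Comm_create(MPI_COMM_WORLD, user_group, &user_comm);

    if (user_comm != MPI_COMM_NULL) {    /* only B and D get a valid communicator */
        MPI_Comm_rank(user_comm, &newrank);
        printf("World rank %d has rank %d in USER_COMM\n", worldrank, newrank);
    }

    MPI_Finalize();
    return 0;
}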

Course Problem

Description
The initial problem implements a parallel search of an extremely large (several thousand elements) integer array. The program finds all occurrences of a certain integer, called the target, and writes all the array indices where the target was found to an output file. In addition, the program reads both the target value and all the array elements from an input file.

Exercise
You now have enough knowledge to write pseudo-code for the parallel search algorithm introduced in Chapter 1. In the pseudo-code, you should correctly initialize MPI, have each processor determine and use its rank, and terminate MPI. By tradition, the Master processor has rank 0. Assume in your pseudo-code that the real code will be run on 4 processors.

Solution

#include <mpi.h>

int rank, error;

error = MPI_Init(&argc, &argv);
error = MPI_Comm_rank(MPI_COMM_WORLD, &rank);

if (rank == 0) then
    read in target value from input data file
    send target value to processor 1
    send target value to processor 2
    send target value to processor 3
    read in integer array b from input file
    send first third of array to processor 1
    send second third of array to processor 2
    send last third of array to processor 3
    while (not done)
        receive target indices from any of the slaves
        write target indices to the output file
    end while
else
    receive the target from processor 0
    receive my sub_array from processor 0
    for each element in my subarray
        if ( element value == target ) then
            convert local index into global index     // SEE COMMENT #1
            send global index to processor 0
        end if
    end loop
    send message to processor 0 indicating my search is done     // SEE COMMENT #2
end if

error = MPI_Finalize();

Solution
Comment #1
For example, say that the array b is 300 elements long. Each of the three slaves would then be working with their own array containing 100 elements. Let's say processor 3 found the target at index 5 of its local array. Five would then be the local index, but in the global array b the index of that particular target location would be 200 + 5 = 205. It is this global index that should be sent back to the master. Thus, in the real program (which you will write after the next chapter) you will have to write code to convert local indices to global indices.

Comment #2
There are several methods by which each slave could send a message to the master indicating that it is done with its part of the total search. One way would be to have the message contain a special integer value that could not possibly be an index of the array b. An alternate method would be to send a message with a special tag different from the tag used by the "real" messages containing target indices. NOTE: Message tags will be discussed in the next chapter.
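To make Comment #1 concrete, here is a small hedged sketch of the local-to-global conversion. The function name, the fixed array length of 300, and the assumption that slaves are ranks 1-3 each holding a contiguous third of b are illustrative choices drawn from the comment, not code given by the course.

#include <stdio.h>

#define N 300                  /* total length of array b, as in Comment #1 */

/* Convert a slave's local index into an index of the global array b.
   Assumes slaves are ranks 1..3 and each holds a contiguous N/3 slice. */
int global_index(int rank, int local_index)
{
    int chunk = N / 3;         /* 100 elements per slave */
    return (rank - 1) * chunk + local_index;
}

int main(void)
{
    /* Example from Comment #1: processor 3 finds the target at local index 5. */
    printf("%d\n", global_index(3, 5));   /* prints 205 */
    return 0;
}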