Message-Passing Programming and MPI
CS 524 – High-Performance Computing

Writing Parallel Programs

Directives-based data-parallel languages
- Serial code is made parallel by adding directives (which appear as comments in the serial code) that tell the compiler how to distribute the data and work among the processors
- Normally available for shared-memory architectures, because the global address space greatly simplifies the writing of compilers
- Examples: OpenMP, HPF

Explicit message passing
- The programmer explicitly divides data and code and specifies communication among processors
- Normally suited to distributed-memory architectures
- Examples: MPI, PVM
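
To make the contrast concrete, here is a minimal sketch, not taken from the slides, of summing an array both ways; the function names sum_openmp and sum_mpi and the block decomposition of the data are illustrative assumptions.

#include <mpi.h>

/* Directive-based: a single directive tells the compiler how to parallelize the loop. */
double sum_openmp(const double *a, int n)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Explicit message passing: each process sums only its local block of the data,
   and the partial sums are combined with an explicit communication call. */
double sum_mpi(const double *a_local, int nlocal, MPI_Comm comm)
{
    double local = 0.0, global = 0.0;
    for (int i = 0; i < nlocal; i++)
        local += a_local[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
    return global;
}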

Message Passing Model

- Consists of a number of processes, each working on some local data
- Each process has local variables, and it cannot directly access the variables of another process
- Sharing of data is done by explicitly sending and receiving messages

(Diagram: each process holds its own local variables)
Process 0            Process 1
...                  ...
Send(P1, msg1)       Recv(P0, msg1)
Recv(P1, msg2)       Send(P0, msg2)
...                  ...
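
The following is a minimal MPI rendering of the exchange pictured above, assuming the program is launched with exactly two processes; msg1 and msg2 are illustrative local variables, and the MPI calls themselves are introduced on the slides that follow.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, msg1 = 11, msg2 = 22;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                 /* Process 0: send msg1, then receive msg2 */
        MPI_Send(&msg1, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&msg2, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {          /* Process 1: receive msg1, then send msg2 */
        MPI_Recv(&msg1, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&msg2, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}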

Characteristics of the Message Passing Model

General and flexible
- Essentially any parallel computation can be cast in message-passing form
- Can be implemented on a wide variety of architectures, including shared-memory, distributed-memory, and even single-processor machines
- Generally allows more control over data location and flow in a parallel computation
- Both domain and functional decompositions can be cast in message-passing form

Tedious
- Generally more tedious to program in message-passing form

Message Passing Interface (MPI)

MPI is a message-passing library
- Provides an interface for communication (exchange of data and synchronization of tasks) among the processors in a distributed-memory architecture
- Standard and portable library: both the syntax and the behavior of MPI are standardized
- High performance: higher performing than earlier message-passing libraries
- Functionality: a rich set of functions and behaviors is provided
- Multiplicity of implementations: numerous implementations are available, including open-source implementations such as LAM/MPI and MPICH

MPI Functions

General
- Functions for initiating and terminating communication
- Functions for creating and managing groups of processors for communication

Point-to-point communication
- Functions for sending and receiving messages between two named processors

Collective communication
- Functions for exchanging data among a group of processors
- Functions for synchronization
- Functions for mathematical operations on data held by a group of processors
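
As a hedged illustration of these categories, the sketch below makes one representative call from each; it assumes MPI_Init() has already been called, that at least two processes are running, and that the buffer contents are arbitrary (the function name categories_example is made up).

#include <mpi.h>

void categories_example(int *buf, int n)
{
    int rank, size, total;

    /* General: query the process group used for communication */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: a message from rank 0 to rank 1 */
    if (rank == 0)
        MPI_Send(buf, n, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Collective: data exchange, synchronization, and a mathematical operation */
    MPI_Bcast(buf, n, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Reduce(&buf[0], &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
}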

MPI Function Format

For C:
- Header file: #include <mpi.h>
- rc = MPI_Xxxxx(parameter, ... )
  - rc is a return code of type int
  - Case is important: "MPI" and the first letter after the underscore are capitalized
- Exceptions to this format are the timing functions MPI_Wtime() and MPI_Wtick(), which return floating-point values of type double instead of an integer return code
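
A small sketch of this calling convention follows (the function and variable names are illustrative): most calls return an integer code that can be compared against MPI_SUCCESS, while the timing functions return double.

#include <stdio.h>
#include <mpi.h>

void format_example(void)
{
    int rc, rank;
    double t0, t1;

    rc = MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rc == MPI_SUCCESS on success */
    t0 = MPI_Wtime();                            /* wall-clock time in seconds   */
    /* ... work being timed ... */
    t1 = MPI_Wtime();
    printf("rank %d: rc=%d, elapsed=%f s (resolution %g s)\n",
           rank, rc, t1 - t0, MPI_Wtick());
}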

Initializing and Terminating

Initializing MPI: must be the first MPI function call, before any others
  int MPI_Init(int *argc, char ***argv)

Terminating MPI: must be the last MPI function call
  int MPI_Finalize()

Communicators

- Programmer's view: a group of processes that are allowed to communicate with each other
- All MPI communication calls have a communicator argument
- Most often used: MPI_COMM_WORLD
  - Defined by default by MPI_Init()
  - It includes all processors

Rank and Size

Rank: process ID number within the communicator, starting with zero
- Used to specify the source and destination of messages
  int MPI_Comm_rank(MPI_Comm comm, int *rank)

Size: how many processes are contained within a communicator?
  int MPI_Comm_size(MPI_Comm comm, int *size)

Example

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, p, err;

    err = MPI_Init(&argc, &argv);                /* first MPI call       */
    err = MPI_Comm_rank(MPI_COMM_WORLD, &myid);  /* rank of this process */
    err = MPI_Comm_size(MPI_COMM_WORLD, &p);     /* number of processes  */
    printf("Hello! I am processor %d of %d\n", myid, p);
    err = MPI_Finalize();                        /* last MPI call        */
    return 0;
}

Output (on 4 processes; the ordering of the lines is not deterministic)
Hello! I am processor 0 of 4
Hello! I am processor 3 of 4
Hello! I am processor 1 of 4
Hello! I am processor 2 of 4
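
A typical way to build and run this program, assuming an MPI implementation such as MPICH or LAM/MPI is installed, is mpicc hello.c -o hello followed by mpiexec -n 4 ./hello (or mpirun -np 4 ./hello, depending on the implementation), which launches the four processes whose output is shown above.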

Messages

A message consists of two parts, each made up of 3 parameters:
- Data: the data that is being sent
- Envelope: information that helps to route the data
Message = data (3 parameters) + envelope (3 parameters)

Data
- startbuf: address where the data start
- count: number of elements from startbuf
- datatype: type of data to be transmitted

Envelope
- destination/source: the rank (ID) of the receiving/sending process
- tag: arbitrary number to distinguish among messages
- communicator: communicator in which the communication is to take place
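
As an illustration (the wrapper function and its argument values are assumptions, not part of the slides), the six arguments of MPI_Send map onto the three data parameters and the three envelope parameters as follows:

#include <mpi.h>

void send_message(double *startbuf, int count, int dest, int tag)
{
    MPI_Send(startbuf,        /* data: address where the data start       */
             count,           /* data: number of elements from startbuf   */
             MPI_DOUBLE,      /* data: type of data to be transmitted     */
             dest,            /* envelope: rank of the receiving process  */
             tag,             /* envelope: distinguishes among messages   */
             MPI_COMM_WORLD); /* envelope: communicator for this message  */
}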

Basic Datatypes

MPI datatype          C datatype
MPI_CHAR              signed char
MPI_SHORT             signed short int
MPI_INT               signed int
MPI_LONG              signed long int
MPI_UNSIGNED_CHAR     unsigned char
MPI_UNSIGNED_SHORT    unsigned short int
MPI_UNSIGNED          unsigned int
MPI_UNSIGNED_LONG     unsigned long int
MPI_FLOAT             float
MPI_DOUBLE            double
MPI_LONG_DOUBLE       long double
MPI_BYTE              byte

Derived Datatypes

User-defined datatypes
- Built up from basic datatypes
- Depending on the nature of the data to be communicated, using derived datatypes makes programming easier and can improve performance
- Examples: contiguous, vector, struct

Rules and rationale for MPI datatypes
- The programmer declares variables to have "normal" C types, but uses the matching MPI datatypes as arguments in MPI functions
- Mechanism to handle type conversion in a heterogeneous collection of machines
- General rule: the MPI datatype specified in a receive must match the MPI datatype specified in the send
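
The following is a hedged sketch of a derived datatype in use, assuming a 4 x 4 row-major matrix: MPI_Type_vector describes one column so that it can be sent as a single message (N, the destination rank, and the tag value are illustrative).

#include <mpi.h>

#define N 4

void send_column(double a[N][N], int col, int dest, MPI_Comm comm)
{
    MPI_Datatype column;

    /* N blocks of 1 double each, successive blocks N doubles apart */
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    /* send one element of the derived type, i.e. the whole column */
    MPI_Send(&a[0][col], 1, column, dest, 0, comm);

    MPI_Type_free(&column);
}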

Point-to-Point Communication

- Communication between two processes
- The source process sends a message to the destination process
- Communication takes place within a communicator
- The destination process is identified by its rank in the communicator

(Diagram: source and destination processes inside a communicator)
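
As a small aside (the wrapper function is an assumption), the receiving side may also leave the source and tag open using the standard wildcards MPI_ANY_SOURCE and MPI_ANY_TAG, and then recover the actual sender's rank and tag from the status object:

#include <stdio.h>
#include <mpi.h>

void receive_from_anyone(int *buf, int count, MPI_Comm comm)
{
    MPI_Status status;

    MPI_Recv(buf, count, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &status);
    printf("received a message from rank %d with tag %d\n",
           status.MPI_SOURCE, status.MPI_TAG);
}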

Definitions

"Completion" means that the memory locations used in the message transfer can be safely accessed
- send: the variable sent can be reused after completion
- receive: the variable received can now be used

MPI communication modes differ in what conditions on the receiving end are needed for completion

Communication modes can be blocking or non-blocking
- Blocking: return from the function call implies completion
- Non-blocking: the routine returns immediately; completion must be tested for separately
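
A minimal sketch of the non-blocking case follows, assuming exactly two processes exchanging one integer each; the calls return immediately, and MPI_Waitall is what establishes completion (the function name exchange is illustrative).

#include <mpi.h>

void exchange(int rank, MPI_Comm comm)
{
    int sendval = rank, recvval;
    int partner = (rank == 0) ? 1 : 0;   /* assumes exactly two ranks */
    MPI_Request reqs[2];

    MPI_Irecv(&recvval, 1, MPI_INT, partner, 0, comm, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, partner, 0, comm, &reqs[1]);

    /* Both calls return immediately; neither buffer may be touched yet. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    /* After MPI_Waitall, both buffers are safe to reuse or read. */
}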

Communication Modes

Mode               Completion condition
Synchronous send   Only completes when the receive has completed
Buffered send      Always completes (unless an error occurs), irrespective of the receiver
Standard send      Message sent (receive state unknown)
Ready send         Always completes (unless an error occurs), irrespective of whether the receive has completed
Receive            Completes when a message has arrived
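
The mode-specific send calls share the argument list of MPI_Send. The sketch below only illustrates the call forms, selected here by a character flag that is purely an illustrative design choice; a real program would pick one mode, match every send with a receive, and for a ready send make sure the matching receive is already posted.

#include <stdlib.h>
#include <mpi.h>

void send_with_mode(int *data, int count, int dest, int tag, MPI_Comm comm, char mode)
{
    switch (mode) {
    case 'S':   /* standard send */
        MPI_Send(data, count, MPI_INT, dest, tag, comm);
        break;
    case 's':   /* synchronous send */
        MPI_Ssend(data, count, MPI_INT, dest, tag, comm);
        break;
    case 'r':   /* ready send: matching receive must already be posted */
        MPI_Rsend(data, count, MPI_INT, dest, tag, comm);
        break;
    case 'b': { /* buffered send: needs user-supplied buffer space */
        int size = count * (int)sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(size);
        MPI_Buffer_attach(buf, size);
        MPI_Bsend(data, count, MPI_INT, dest, tag, comm);
        MPI_Buffer_detach(&buf, &size);
        free(buf);
        break;
    }
    }
}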