
1 BİL 542 Parallel Computing

2 Message Passing (Chapter 2)

3 2.3 Message-Passing Computing (Chapter 2)
Programming a message-passing computer:
1. Using a special parallel programming language
2. Extending an existing language with message-passing constructs
3. Using a high-level language and providing a message-passing library
Here, the third option is employed.

4 2.4 Message-Passing Programming Using User-Level Message-Passing Libraries
Two primary mechanisms are needed:
1. A method of creating separate processes for execution on different computers
   - Static process creation: the number of processes is fixed before execution
   - Dynamic process creation: processes can be created at runtime
2. A method of sending and receiving messages

5 2.5 Programming Models: 1. Multiple Program, Multiple Data (MPMD) model
[Figure: separate source files are compiled to suit each processor, producing a different executable for Processor 0 through Processor p-1.]

6 2.6 Programming Models: 2. Single Program, Multiple Data (SPMD) model
[Figure: a single source file is compiled to suit each processor, Processor 0 through Processor p-1, producing the executables.]
The basic MPI way. The different processes are merged into one program; control statements select the parts each processor executes. All executables are started together (static process creation).
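A minimal SPMD sketch in C with MPI (not taken from the slides; the file name and printed text are illustrative): every process runs the same executable, and control statements based on the process rank select which part each process executes.

/* spmd.c - one program, rank-based branching (SPMD).
   Compile: mpicc spmd.c -o spmd    Run: mpirun -np 4 ./spmd */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* how many processes total  */

    if (rank == 0) {
        /* "master" part of the merged program */
        printf("Master: %d processes started together\n", nprocs);
    } else {
        /* "worker" part of the merged program */
        printf("Worker %d of %d\n", rank, nprocs);
    }

    MPI_Finalize();
    return 0;
}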

7 2.7 Multiple Program, Multiple Data (MPMD) Model
[Figure: Process 1 calls spawn() at some point in time, which starts execution of Process 2.]
Separate programs for each processor. One processor executes the master process; the other processes are started from within the master process (dynamic process creation).
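In MPI, dynamic process creation is available through MPI_Comm_spawn (MPI-2). A hedged sketch of a master spawning copies of a separate worker executable; the executable name "./worker" and the count of 4 are assumptions made for illustration.

/* master.c - dynamic process creation with MPI_Comm_spawn (MPI-2).
   The worker executable "./worker" is illustrative. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Comm workers;   /* intercommunicator to the spawned processes */

    MPI_Init(&argc, &argv);

    /* Spawn 4 copies of a separate program from within the master. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);

    printf("Master spawned 4 worker processes\n");

    MPI_Finalize();
    return 0;
}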

8 2.8 Basic "Point-to-Point" Send and Receive Routines
Passing a message between processes using send() and recv() library calls:
[Figure: Process 1 executes send(&x, 2); Process 2 executes recv(&y, 1); the data moves from x to y.]
This is generic syntax; the actual formats will be explained later.
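The generic send(&x, 2) / recv(&y, 1) pair maps onto MPI point-to-point calls roughly as follows. A sketch, assuming a single int is passed with tag 0; run with at least 3 processes so that ranks 1 and 2 exist.

/* sendrecv.c - the generic send/recv pair expressed with MPI_Send/MPI_Recv.
   Run: mpirun -np 3 ./sendrecv */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 42, y = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        /* send(&x, 2): one int from process 1 to process 2, tag 0 */
        MPI_Send(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
    } else if (rank == 2) {
        /* recv(&y, 1): receive one int from process 1 into y */
        MPI_Recv(&y, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 2 received y = %d\n", y);
    }

    MPI_Finalize();
    return 0;
}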

9 2.9 Synchronous Message Passing
Routines that return only when the message transfer has completed.
Synchronous send routine:
- Waits until the complete message can be accepted by the receiving process before sending the message.
Synchronous receive routine:
- Waits until the message it is expecting arrives.
- No need for buffer storage.
Synchronous routines perform two actions: they transfer data and they synchronize processes.
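In MPI the synchronous send is MPI_Ssend, which completes only after the matching receive has started, so it both transfers data and synchronizes the two processes. A minimal sketch under the same single-int assumption:

/* ssend.c - synchronous send with MPI_Ssend.
   Run: mpirun -np 2 ./ssend */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 7, y = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Blocks until rank 1 has posted the matching receive. */
        MPI_Ssend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&y, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", y);
    }

    MPI_Finalize();
    return 0;
}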

10 2.10 Synchronous send() and recv() Using a 3-Way Protocol
[Figure: Process 1 issues send() and suspends; a request-to-send goes to Process 2; when Process 2 issues recv(), it returns an acknowledgment; the message is then transferred and both processes continue.]

11 2.11 Asynchronous Message Passing
- Routines that do not wait for actions to complete before returning.
- Usually require local storage for messages.
- In general, they do not synchronize processes but allow processes to move forward sooner.
- Must be used with care.

12 2.12 Message Passing: Blocking and Non-Blocking
Blocking: A blocking send occurs when a process performs a send operation and does not continue (i.e. does not execute any following instruction) until it is sure that the message buffer can be reclaimed.

13 2.13 Message Passing: Blocking and Non-Blocking
Non-blocking: A non-blocking operation is the opposite of a blocking one. The process performs a send or a receive and immediately continues to the next instruction in the code, without waiting for the message to be delivered or received.
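A hedged sketch of the non-blocking style in MPI: MPI_Isend and MPI_Irecv return immediately, and MPI_Wait is called later, before the buffers are reused or read.

/* nonblocking.c - MPI_Isend/MPI_Irecv return immediately; MPI_Wait ensures
   the transfer has completed before buffers are touched again.
   Run: mpirun -np 2 ./nonblocking */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 99, y = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Isend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... other useful work can be done here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* now safe to modify x */
    } else if (rank == 1) {
        MPI_Irecv(&y, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... other useful work can be done here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* now y holds the message */
        printf("Rank 1 received %d\n", y);
    }

    MPI_Finalize();
    return 0;
}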

14 2.14 How Message-Passing Routines Continue Before Message Transfer Is Completed
A message buffer is needed between source and destination to hold the message:
[Figure: Process 1 issues send() and continues; the message is placed in a message buffer; later, Process 2 issues recv() and reads the message from the buffer.]

15 2.15 Asynchronous (Blocking) Routines Changing to Synchronous Routines
- Once local actions are completed and the message is safely on its way, the sending process can continue with subsequent work.
- Buffers are only of finite length; if all available buffer space is exhausted, the send routine is held up.
- The send routine then waits until storage becomes available again.
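In MPI terms this behaviour is closest to the buffered send: MPI_Bsend copies the message into user-attached buffer space and returns, but it can only do so while that space lasts. A sketch; the buffer size and values are assumptions for illustration.

/* bsend.c - buffered send: the message is copied into attached buffer space
   so the sender can continue. Run: mpirun -np 2 ./bsend */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 5, y = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Attach buffer space for outgoing messages (size is illustrative). */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buffer = malloc(bufsize);
        MPI_Buffer_attach(buffer, bufsize);

        MPI_Bsend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* returns once copied */

        MPI_Buffer_detach(&buffer, &bufsize);  /* waits until messages have left */
        free(buffer);
    } else if (rank == 1) {
        MPI_Recv(&y, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", y);
    }

    MPI_Finalize();
    return 0;
}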

16 2.16 Message Tag
- Used to differentiate between different types of messages being sent.
- The message tag is carried within the message.
- If special type matching is not required, a wild-card message tag is used, so that recv() will match any send().

17 2.17 Message Tag Example
To send a message, x, with message tag 5 from source process 1 to destination process 2, and assign it to y:
[Figure: Process 1 executes send(&x, 2, 5); Process 2 executes recv(&y, 1, 5), which waits for a message from process 1 with a tag of 5; the data moves from x to y.]
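The tagged exchange on this slide might look like this in MPI (a sketch). The wild card mentioned on the previous slide is MPI_ANY_TAG, with MPI_ANY_SOURCE playing the same role for the sender's rank; here the receiver accepts any tag and reads the actual tag from the status.

/* tags.c - sending with message tag 5 from process 1 to process 2.
   Run with at least 3 processes: mpirun -np 3 ./tags */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 123, y = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);   /* tag 5 */
    } else if (rank == 2) {
        MPI_Status status;
        /* MPI_ANY_TAG acts as the wild-card tag; status reports what arrived. */
        MPI_Recv(&y, 1, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("Process 2 received y = %d with tag %d\n", y, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}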

18 2.18 "Group" Message-Passing Routines
- Routines that send message(s) to a group of processes or receive message(s) from a group of processes.
- Higher efficiency than separate point-to-point routines.

19 2.19 Scatter
Sending each element of an array in the root process to a separate process. The contents of the i-th location of the array are sent to the i-th process.
[Figure: the root's data array is distributed with scatter(); each of Process 0 through Process p-1 receives one element in buf. Action, code, and MPI form shown.]
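A hedged MPI_Scatter sketch: the root holds an array with one int per process, and element i ends up in process i's buf. The array contents are illustrative.

/* scatter.c - MPI_Scatter: element i of the root's array goes to process i.
   Run: mpirun -np 4 ./scatter */
#include <stdio.h>
#include <mpi.h>

#define MAXP 64

int main(int argc, char *argv[]) {
    int rank, nprocs, buf;
    int data[MAXP];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0)                        /* root fills the array to be scattered */
        for (int i = 0; i < nprocs; i++)
            data[i] = 10 * i;

    /* Each process (including the root) receives one int into buf. */
    MPI_Scatter(data, 1, MPI_INT, &buf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d got %d\n", rank, buf);

    MPI_Finalize();
    return 0;
}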

20 2.20 Gather
Having one process collect individual values from a set of processes.
[Figure: each of Process 0 through Process p-1 contributes buf via gather(); the root assembles the values into its data array. Action, code, and MPI form shown.]
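The reverse operation with MPI_Gather (a sketch; the contributed values are illustrative): each process supplies one value and the root collects them in rank order.

/* gather.c - MPI_Gather: the root collects one int from every process,
   stored in rank order. Run: mpirun -np 4 ./gather */
#include <stdio.h>
#include <mpi.h>

#define MAXP 64

int main(int argc, char *argv[]) {
    int rank, nprocs;
    int buf, data[MAXP];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    buf = rank * rank;   /* each process's individual value */

    /* The root (process 0) receives nprocs ints, one per process, into data[]. */
    MPI_Gather(&buf, 1, MPI_INT, data, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < nprocs; i++)
            printf("data[%d] = %d\n", i, data[i]);

    MPI_Finalize();
    return 0;
}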

21 2.21 Reduce
A gather operation combined with a specified arithmetic/logical operation. For example, given the list of numbers [1, 2, 3, 4, 5], reducing with the sum operation produces sum([1, 2, 3, 4, 5]) = 15.
[Figure: each of Process 0 through Process p-1 contributes buf via reduce(); the contributions are combined with "+" into the root's data. Action and code shown.]
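A minimal MPI_Reduce sketch of the sum example: when 5 processes contribute the values 1 through 5, the root receives 15.

/* reduce.c - MPI_Reduce with MPI_SUM combines one value from each process.
   Run: mpirun -np 5 ./reduce */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, local, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank + 1;   /* process i contributes i+1, i.e. the list 1,2,3,... */

    /* Sum all contributions; only the root (process 0) receives the result. */
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d\n", sum);   /* 15 when run with 5 processes */

    MPI_Finalize();
    return 0;
}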

22 2.22 AllGather & AllReduce
So far, we have covered MPI routines that perform many-to-one or one-to-many communication patterns, meaning that many processes send to, or receive from, one process. It is often useful to send many elements to many processes (i.e. a many-to-many communication pattern). MPI_Allgather and MPI_Allreduce have this characteristic: the result is delivered to every process, not just to the root.
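A hedged MPI_Allreduce sketch: the same sum as before, but every process receives the result, so no root argument is needed.

/* allreduce.c - MPI_Allreduce: like MPI_Reduce, but the combined result is
   delivered to every process. Run: mpirun -np 5 ./allreduce */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, local, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank + 1;

    /* No root argument: all processes get the sum. */
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("Process %d sees sum = %d\n", rank, sum);

    MPI_Finalize();
    return 0;
}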

23 2.23 Barrier
A barrier is a synchronization point. One thing to remember about collective communication is that it implies a synchronization point among processes: all processes must reach the same point in their code before any of them can continue executing.
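An explicit synchronization point can be placed with MPI_Barrier (a minimal sketch): no process passes the call until every process in the communicator has reached it.

/* barrier.c - MPI_Barrier: no process continues past the barrier until all
   processes in the communicator have reached it. Run: mpirun -np 4 ./barrier */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("Process %d before the barrier\n", rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* synchronization point */

    printf("Process %d after the barrier\n", rank);

    MPI_Finalize();
    return 0;
}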

