1 Hardware Environment
VIA cluster – 8 nodes
  two 1.0 GHz VIA C3 processors per node
  connected with Gigabit Ethernet
  Linux kernel – smp
Blade Server – 5 nodes
  two 3.0 GHz Intel Xeon processors per node
  each Xeon processor supports Hyper-Threading

2 MPI – message passing interface
Basic data types
  MPI_CHAR – char
  MPI_UNSIGNED_CHAR – unsigned char
  MPI_BYTE – like unsigned char
  MPI_SHORT – short
  MPI_LONG – long
  MPI_INT – int
  MPI_FLOAT – float
  MPI_DOUBLE – double
  …

3 MPI – message passing interface
Six basic MPI functions
  MPI_Init – initialize the MPI environment
  MPI_Finalize – shut down the MPI environment
  MPI_Comm_size – determine the number of processes
  MPI_Comm_rank – determine the process rank
  MPI_Send – blocking data send
  MPI_Recv – blocking data receive

4 MPI – message passing interface
Initialize MPI: MPI_Init(&argc, &argv)
  First MPI function called by each process
  Allows the system to do any necessary setup
  Not necessarily the first executable statement in your code

5 MPI – message passing interface
Communicators
  Communicator: an opaque object that provides the message-passing environment for processes
  MPI_COMM_WORLD
    the default communicator
    includes all processes
  Creating new communicators: MPI_Comm_create(), MPI_Group_incl()
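The slide names MPI_Comm_create() and MPI_Group_incl() without showing how they fit together. The following is a minimal sketch, not taken from the slides, that builds a communicator containing only the even-numbered ranks; the choice of even ranks, the helper name make_even_comm, and the use of MPI_Comm_group to obtain the base group are illustrative assumptions.

#include <stdlib.h>
#include <mpi.h>

/* Sketch: build a communicator that contains only the even-numbered ranks.
   Every process in MPI_COMM_WORLD must call this; processes outside the
   new group get MPI_COMM_NULL back. */
MPI_Comm make_even_comm(void)
{
    int size, i, n;
    int *ranks;
    MPI_Group world_group, even_group;
    MPI_Comm even_comm;

    MPI_Comm_size(MPI_COMM_WORLD, &size);

    n = (size + 1) / 2;                     /* number of even ranks: 0, 2, 4, ... */
    ranks = malloc(n * sizeof(int));
    for (i = 0; i < n; i++)
        ranks[i] = 2 * i;

    MPI_Comm_group(MPI_COMM_WORLD, &world_group);        /* group of all processes */
    MPI_Group_incl(world_group, n, ranks, &even_group);  /* keep only the even ranks */
    MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

    free(ranks);
    return even_comm;
}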

6 Communicators
[Diagram: the communicator MPI_COMM_WORLD shown as a box enclosing the processes, each labeled with its rank (0–5)]

7 MPI – message passing interface
Shutting down the MPI environment: MPI_Finalize()
  Call after all other MPI calls
  Allows the system to free any resources

8 MPI – message passing interface
Determine the number of processes: MPI_Comm_size(MPI_COMM_WORLD, &size)
  First argument is the communicator
  The number of processes is returned through the second argument

9 MPI – message passing interface
Determine the process rank: MPI_Comm_rank(MPI_COMM_WORLD, &myid)
  First argument is the communicator
  The process rank (in the range 0, 1, 2, …, P-1) is returned through the second argument

10 Example - hello.c
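The hello.c listing shown on this slide is not reproduced in the transcript. A minimal reconstruction using the six basic calls above might look like the sketch below; the exact variable names and message format are assumptions, chosen to match the "rank = … size = …" output on the later slides.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* set up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* my rank: 0 .. P-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    printf("rank = %d size = %d\n", rank, size);

    MPI_Finalize();                          /* free MPI resources */
    return 0;
}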

11 Example - hello.c (cont'd)
Compile MPI Programs: mpicc -o foo foo.c
  mpicc – a script that compiles and links against the MPI library
  example: mpicc -o hello hello.c

12 Example - hello.c (cont'd)
Execute MPI Programs: mpirun -np <p> <exec> <arg1> …
  -np <p> – number of processes
  <exec> – executable filename
  <arg1> … – arguments passed to <exec>
  example: mpirun -np 4 hello

13 Example – hello.c (cont'd)

14 Example – hello.c (cont'd)

15 Example – hello.c (cont'd)
Output with 4 processes:
  rank = 0
  rank = 1
  rank = 2
  rank = 3

16 Example – hello.c (cont'd)
Output with 4 processes:
  rank = 0 size = 4
  rank = 1 size = 4
  rank = 2 size = 4
  rank = 3 size = 4


18 MPI – message passing interface
Specify Host Processors
  A machine file lists the machines on which to run your program
  What if the # of MPI processes > the # of physical machines?
  Avoid login with password (i.e., set up password-less login to the nodes)
  mpirun -np <p> -machinefile <filename> <exec>
  example machine file, machines.LINUX:
    # machines.LINUX
    # put machine hostnames below
    node01
    node02
    node03

19 MPI – message passing interface
Blocking Send and Receive
  MPI_Send(&buf, count, datatype, dest, tag, MPI_COMM_WORLD)
  MPI_Recv(&buf, count, datatype, src, tag, MPI_COMM_WORLD, &status)
  The datatype argument must be an MPI type: MPI_CHAR, MPI_INT, …
  For each send–receive pair, the tags must match

20 MPI – message passing interface
Other program notes
  Variables and functions other than the MPI_Xxx ones are local to each process
  Messages printed by different processes do not necessarily appear in order
  example: send_recv.c

21 MPI – send_recv.c
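The send_recv.c listing on this slide is likewise not reproduced in the transcript. Below is a minimal sketch of a matched blocking send/receive pair, assuming rank 0 sends one int to rank 1; the payload value and the tag are made up.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value, tag = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 100;                                            /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);   /* send to rank 1 */
        printf("rank 0 sent %d\n", value);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);  /* receive from rank 0 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}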

22 Odd-Even Sort
Operates in two alternating phases, an even phase and an odd phase
  Even phase: even-numbered processes exchange numbers with their right neighbors
  Odd phase: odd-numbered processes exchange numbers with their right neighbors

23

24 How to sort these 8 numbers?
Sequential program – easy
MPI – one number per MPI process:
  start the MPI program
  the master sends one number to each of the other processes
  run odd-even sort
  the master collects the results from the other processes
  end the MPI program
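The slides do not include the program itself. Below is a minimal sketch of the one-number-per-process odd-even sort; for brevity it skips the master's distribute/collect steps listed above and lets each process make up its own starting value, so the initial values, the tag choice, and the use of P phases are assumptions.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, mine, other, phase, partner;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = (rank * 7 + 3) % 20;              /* arbitrary unsorted value per process */

    for (phase = 0; phase < size; phase++) {
        if (phase % 2 == 0)                  /* even phase: pairs (0,1), (2,3), ... */
            partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
        else                                 /* odd phase:  pairs (1,2), (3,4), ... */
            partner = (rank % 2 == 0) ? rank - 1 : rank + 1;

        if (partner < 0 || partner >= size)
            continue;                        /* no partner at the ends in this phase */

        /* exchange values; the lower rank sends first to avoid deadlock, then the
           pair keeps the smaller value on the left and the larger on the right */
        if (rank < partner) {
            MPI_Send(&mine, 1, MPI_INT, partner, phase, MPI_COMM_WORLD);
            MPI_Recv(&other, 1, MPI_INT, partner, phase, MPI_COMM_WORLD, &status);
            mine = (mine < other) ? mine : other;
        } else {
            MPI_Recv(&other, 1, MPI_INT, partner, phase, MPI_COMM_WORLD, &status);
            MPI_Send(&mine, 1, MPI_INT, partner, phase, MPI_COMM_WORLD);
            mine = (mine > other) ? mine : other;
        }
    }

    printf("rank %d holds %d\n", rank, mine);
    MPI_Finalize();
    return 0;
}

Run with, for example, mpirun -np 8 oddeven; after P phases each rank holds one value of the sorted sequence, in rank order.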

25 Other problems?
  What if the # of unsorted numbers is not a power of 2?
  What if the # of unsorted numbers is large?
  What if the # of unsorted numbers cannot be divided evenly by nprocs?

26 MPI – message passing interface
Advanced MPI functions
  MPI_Bcast – broadcast a message from the source to the other processes
  MPI_Scatter – scatter values to a group of processes
  MPI_Gather – gather values from a group of processes
  MPI_Allgather – gather data from all tasks and distribute it to all
  MPI_Barrier – block until all processes reach this routine

27 MPI_Bcast MPI_Bcast (void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
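A minimal usage sketch, not from the slides: rank 0 is taken as the root and broadcasts a single int; both the root rank and the value 42 are illustrative.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 42;                                     /* only the root knows n at first */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* afterwards every process has n == 42 */

    printf("rank %d: n = %d\n", rank, n);
    MPI_Finalize();
    return 0;
}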

28 MPI_Scatter MPI_Scatter (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
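A minimal usage sketch, not from the slides: rank 0 scatters a fixed-size chunk of ints to every process (including itself); the chunk size of 2 and the data values are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define CHUNK 2                              /* elements handed to each process */

int main(int argc, char *argv[])
{
    int rank, size, i;
    int recvbuf[CHUNK], *sendbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                         /* only the root needs the full array */
        sendbuf = malloc(size * CHUNK * sizeof(int));
        for (i = 0; i < size * CHUNK; i++)
            sendbuf[i] = i;
    }

    MPI_Scatter(sendbuf, CHUNK, MPI_INT,     /* root sends CHUNK ints to each rank */
                recvbuf, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d got %d %d\n", rank, recvbuf[0], recvbuf[1]);

    if (rank == 0)
        free(sendbuf);
    MPI_Finalize();
    return 0;
}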

29 MPI_Gather MPI_Gather (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
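A minimal usage sketch, not from the slides: every process contributes one int (its rank squared, an arbitrary stand-in for a per-process result) and rank 0 gathers them.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, mine, i;
    int *all = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = rank * rank;                      /* per-process result to collect */
    if (rank == 0)
        all = malloc(size * sizeof(int));    /* only the root needs the full buffer */

    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (i = 0; i < size; i++)
            printf("value from rank %d: %d\n", i, all[i]);
        free(all);
    }
    MPI_Finalize();
    return 0;
}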

30 MPI_Allgather MPI_Allgather (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, MPI_Comm comm)
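A minimal usage sketch, not from the slides: every process contributes its own rank, and afterwards every process holds the full list (equivalent to a gather followed by a broadcast).

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, i;
    int *all;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    all = malloc(size * sizeof(int));        /* every process needs the full buffer */
    MPI_Allgather(&rank, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d sees:", rank);
    for (i = 0; i < size; i++)
        printf(" %d", all[i]);
    printf("\n");

    free(all);
    MPI_Finalize();
    return 0;
}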

31 MPI_Barrier MPI_Barrier (MPI_Comm comm)
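A minimal usage sketch, not from the slides: all processes synchronize at the barrier before continuing. Note that, as slide 20 points out, the printed lines may still interleave even though no process passes the barrier before all have reached it.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("rank %d: before the barrier\n", rank);
    MPI_Barrier(MPI_COMM_WORLD);             /* no process continues until all arrive */
    printf("rank %d: after the barrier\n", rank);

    MPI_Finalize();
    return 0;
}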

32 Extension of MPI_Recv
MPI_Recv (void *buffer, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
  source is don't-care – MPI_ANY_SOURCE
  tag is don't-care – MPI_ANY_TAG
  To retrieve the sender's information:
    typedef struct {
      int count;
      int MPI_SOURCE;
      int MPI_TAG;
      int MPI_ERROR;
    } MPI_Status;
  use status->MPI_SOURCE to get the sender's rank
  use status->MPI_TAG to get the message tag
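A minimal usage sketch, not from the slides, combining MPI_ANY_SOURCE, MPI_ANY_TAG, and the MPI_Status fields: rank 0 receives one message from each other process in whatever order they arrive; the payload and tag values are made up. (Here status is a plain struct, so its fields are read with '.', matching the '->' form on the slide when a pointer is used.)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, value, i;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        for (i = 1; i < size; i++) {         /* one message per worker, any order */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("got %d from rank %d (tag %d)\n",
                   value, status.MPI_SOURCE, status.MPI_TAG);
        }
    } else {
        value = rank * 10;                   /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);   /* tag = rank */
    }

    MPI_Finalize();
    return 0;
}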

