
1 Computations with MPI technology
Matthew Bickell, Thomas Rananga, Carmen Jacobs, John S. Nkuna, Malebo Tibane
Supervisors: Dr. Alexandr P. Sapozhnikov, Dr. Tatiana Sapozhnikova, Prof. Elena Zemylyanaya
Joint Institute for Nuclear Research

2 Outline
- What is MPI?
- Why MPI?
- Examples
- Results
- Discussions and Conclusions
- Recommendations

3 What is MPI?
- Message Passing Interface (1992)
- A tool for developing programs that use multiple, parallel processes
- A set of communication and auxiliary operations for programming in the Fortran and C languages
- The fundamental structures are processes and messages: processes communicate exclusively through messages

4 Why MPI?
- To compute as fast as possible
- Allows a more flexible division of work among processors
- Affords one the opportunity to develop one's own parallel programming paradigm
- Portable across different platforms and languages
All modern computers have multiple processors, and parallel computing obeys Amdahl's Law:

    A = 1 / (S + (1 - S) / P)

where 0 ≤ S ≤ 1 is the fraction of operations that must be performed sequentially, P is the number of processes, and A is the acceleration (speedup).
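As a quick worked example (our numbers, for illustration only): with a serial fraction S = 0.1 and P = 8 processes, A = 1 / (0.1 + 0.9/8) ≈ 4.7, and even as P → ∞ the acceleration can never exceed 1/S = 10.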

5 Examples: point-to-point message exchange
Two processes (I and II) exchange the integers a and b with MPI_SEND and MPI_RECV. Process II receives before sending, so the two processes cannot both be blocked waiting on a send:

      program main
      include 'mpif.h'
      integer ierr, rank, size, a, b, stat(MPI_STATUS_SIZE)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)

      if (rank .eq. 0) then
c        Process I: send a to process II, then receive b from it.
         a = 2
         call MPI_SEND(a, 1, MPI_INTEGER, 1, 5, MPI_COMM_WORLD, ierr)
         call MPI_RECV(b, 1, MPI_INTEGER, 1, 5, MPI_COMM_WORLD,
     &                 stat, ierr)
      elseif (rank .eq. 1) then
c        Process II: receive first, then send, avoiding the deadlock
c        risk of both processes sending at the same time.
         b = 5
         call MPI_RECV(a, 1, MPI_INTEGER, 0, 5, MPI_COMM_WORLD,
     &                 stat, ierr)
         call MPI_SEND(b, 1, MPI_INTEGER, 0, 5, MPI_COMM_WORLD, ierr)
      endif

      call MPI_FINALIZE(ierr)
      end
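With a typical MPI installation (an assumption on our part; compiler-wrapper and launcher names vary between implementations), a two-process run would look something like:

      mpif77 exchange.f -o exchange
      mpirun -np 2 ./exchange

After the exchange, both processes hold a = 2 and b = 5.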

6 Examples: vector summation
[Figure: vector V = (1, 2, ..., 10) split among processes I, II, III; the partial sums are combined to give Total = 55.]
- Split a vector of length L amongst the N processes
- Each process sums its part of the vector
- Each process sends its partial sum back to the master process (sketched in the code below)
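A minimal sketch of this scheme in the same Fortran style as slide 5 (our illustration, not the original program: the length L = 10 is assumed, L is taken to be divisible by the number of processes, and MPI_REDUCE stands in for the explicit sends described above):

      program vecsum
      include 'mpif.h'
      integer L
      parameter (L = 10)
      integer ierr, rank, size, i, lo, hi
      real v(L), psum, total

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)

c     Every process fills the whole vector with 1..L (cheap for this
c     demo; a real code would distribute the data instead).
      do i = 1, L
         v(i) = i
      end do

c     Each process sums its own contiguous block of the vector.
      lo = rank * (L / size) + 1
      hi = lo + (L / size) - 1
      psum = 0.0
      do i = lo, hi
         psum = psum + v(i)
      end do

c     Combine the partial sums on the master (rank 0); MPI_REDUCE
c     replaces the explicit send/receive pairs on the slide.
      call MPI_REDUCE(psum, total, 1, MPI_REAL, MPI_SUM, 0,
     &                MPI_COMM_WORLD, ierr)

      if (rank .eq. 0) print *, 'Total =', total

      call MPI_FINALIZE(ierr)
      end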

7 Examples: collective communication
[Figure: tree broadcast from process 0 to processes 1-14, using parallel transfers in each round.]
- Each process has its own region of memory, and transferring data from one process to another takes time; we want to minimise these transfers.
- Tree broadcasting: the transfers in each round run in parallel (see the sketch below).
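The tree pattern in the figure is what a good MPI_BCAST implementation does internally; a minimal usage sketch (our illustration, not from the original slides):

      program bcastdemo
      include 'mpif.h'
      integer ierr, rank, n

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

c     Only the root (rank 0) knows the value initially.
      if (rank .eq. 0) then
         n = 42
      else
         n = 0
      endif

c     One collective call distributes n to every process; tree-based
c     implementations need only about log2(P) communication rounds,
c     because the transfers within each round happen in parallel.
      call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

      print *, 'process', rank, 'has n =', n

      call MPI_FINALIZE(ierr)
      end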

8 Examples: matrix multiplication
[Figure: C = A x B, with the columns of C divided among processes I, II, III.]
- Broadcast the matrices to all processes
- Each process calculates a number of columns of the product matrix (sketched in the code below)
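A compact sketch of this column-split scheme, under stated assumptions (our illustration: a small fixed size n = 4, the number of processes dividing n, and MPI_GATHER collecting the column blocks, which are contiguous because Fortran stores matrices column-major):

      program matcols
      include 'mpif.h'
      integer n
      parameter (n = 4)
      integer ierr, rank, size, i, j, k, jlo, nc
      real a(n,n), b(n,n), c(n,n), cpart(n,n)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)

c     The master fills A and B; both are then broadcast to everyone.
      if (rank .eq. 0) then
         do j = 1, n
            do i = 1, n
               a(i,j) = i + j
               b(i,j) = i - j
            end do
         end do
      endif
      call MPI_BCAST(a, n*n, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
      call MPI_BCAST(b, n*n, MPI_REAL, 0, MPI_COMM_WORLD, ierr)

c     Each process computes its nc columns of C = A*B into its own
c     buffer cpart (assumes size divides n).
      nc = n / size
      jlo = rank * nc
      do j = 1, nc
         do i = 1, n
            cpart(i,j) = 0.0
            do k = 1, n
               cpart(i,j) = cpart(i,j) + a(i,k) * b(k, jlo + j)
            end do
         end do
      end do

c     Collect the column blocks on the master; the block from rank r
c     lands in columns r*nc+1 .. (r+1)*nc of C.
      call MPI_GATHER(cpart, n*nc, MPI_REAL,
     &                c,     n*nc, MPI_REAL,
     &                0, MPI_COMM_WORLD, ierr)

      if (rank .eq. 0) print *, 'C =', c

      call MPI_FINALIZE(ierr)
      end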

9 Results
[Figure: computation time versus number of processors, with the number of physical processors marked.]
- For large matrices, the computation time is inversely proportional to the number of processors.
- The time increases again for N > N_phy, since the transfer times become substantial.

10 Discussions and Conclusions
- Learnt the fundamental principles of MPI
- Experienced the power of parallel computing
- Knowledge of MPI (gained at JINR) has improved our potential for professional excellence
- MPI is more effective on distributed-memory systems (DMS) than on shared-memory systems (SMS)
- High-performance computing can be utilised to its full potential
- MPI improves research productivity

11 Recommendations
- Continued correspondence with our supervisors
- Encourage researchers from SA to learn about MPI (CHPC, National Facility)
- Propose the introduction of MPI into undergraduate courses

12 Acknowledgements
- NRF (RSA)
- JINR (Russia)
- Supervisors: Dr. Alexandr P. Sapozhnikov, Prof. Tatiana Sapozhnikova, Prof. Elena Zemylyanaya
- Prof. M. L. Lekala
- Dr. N. M. Jacobs

