
1 Pattern Programming Tools

2 Pattern programming approaches we have developed
High-level abstraction (Seeds framework) – Patterns fully pre-implemented for a distributed or local system. Self-deploys. The programmer simply writes what the master and slaves do, without writing code for the pattern (message passing). Java-based.

Medium-level abstraction using compiler directives (Paraguin) – Programmers are provided with compiler directives that implement patterns and common data-transfer operations. Requires a special compiler that recognizes the directives.

Low-level abstraction (Suzaku) – The programmer is provided with macros and pre-written MPI routines that together can implement patterns and common data-transfer operations. Compiles as a regular MPI program.

No abstraction – The programmer implements everything, but can be given guidance to follow a pattern approach.

3 Suzaku Pattern Programming Framework
Project at UNC-Charlotte
© B. Wilkinson/Clayton Ferner, Suzaku.ppt. Modification date: Sept. 23, 2014

4 Suzaku Framework Version 0
An ongoing project, tested once in the classroom. Enables programmers to implement pattern-based MPI programs using macros and routines, without writing MPI message-passing code.

Provides:
Macros – in-line substitution of short code sequences
Routines – for patterns and common operations needed to implement patterns

5 Suzaku Hello world

   #include "suzaku.h"

   void compute(double a[N][N], double b[N][N], double c[N][N], int index, int blksize) {
      // Slave compute function does nothing here
   }

   int main(int argc, char **argv) {
      int i, p, rank;
      MPI_START(&p, &rank, &argc, &argv);   // Suzaku routine that incorporates several MPI routines
                                            // commonly needed at the beginning of MPI programs
      printf("Hello world from process: %i \n", rank);
      MPI_Finalize();   // currently an MPI routine, which will be changed to a Suzaku routine
                        // in subsequent development of Suzaku
      return 0;
   }

The program outputs "Hello world from process: " and the process number, once from each process.

6 MPI_START – actually a macro

   #define MPI_START(p, rank, argc, argv) \
           MPI_Init(argc, argv); \
           MPI_Comm_size(MPI_COMM_WORLD, p); \
           MPI_Comm_rank(MPI_COMM_WORLD, rank)

Note: no semicolon after the final statement, so the macro can be invoked like a regular function call with its own trailing semicolon.
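For illustration, the invocation used in the Hello world program is expanded in place by the preprocessor, which simply substitutes the arguments:

   // The invocation
   MPI_START(&p, &rank, &argc, &argv);
   // expands to (the caller's semicolon terminates the last call):
   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &p);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);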

7 Suzaku routines for timing execution
void startTimer(int rank);
void stopTimer(int rank);

Implementation:

   void startTimer(int rank) {
      if (rank == 0) {
         gettimeofday(&tv1, NULL);
      }
   }

   void stopTimer(int rank) {
      if (rank == 0) {   // guard restored to mirror startTimer, so only the master prints
         gettimeofday(&tv2, NULL);
         printf("elapsed_time=\t%lf (seconds)\n",
                (tv2.tv_sec - tv1.tv_sec) + ((tv2.tv_usec - tv1.tv_usec) / 1000000.0));
      }
      MPI_Finalize();
   }

Could have used MPI_Wtime(); however, time() or gettimeofday() is useful for comparing with a sequential C version of the program using the same libraries.
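A minimal usage sketch: the pair brackets the timed region, and because stopTimer also calls MPI_Finalize() it must come last (do_work is a hypothetical computation, not from the slides):

   startTimer(rank);   // rank 0 records the start time
   do_work();          // hypothetical computation to be timed
   stopTimer(rank);    // rank 0 prints elapsed time; MPI_Finalize() is called by all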

8 Suzaku routine to read input data
void readInputFile(int argc, char *argv[], int *error, double array1[N][N], double array2[N][N]);

Reads values from a file into two floating-point arrays. The file name is given by the first command-line argument. (Same file format as used in other assignments.)

Implementation:

   void readInputFile(int argc, char *argv[], int *error, double array1[N][N], double array2[N][N]) {
      int i, j;
      FILE *fd;
      char *usage = "Usage: %s file\n";
      if (argc < 2) {
         fprintf(stderr, usage, argv[0]);
         *error = -1;
      } else if ((fd = fopen(argv[1], "r")) == NULL) {
         fprintf(stderr, "%s: Cannot open file %s for reading.\n", argv[0], argv[1]);
         fprintf(stderr, usage, argv[1]);
         *error = -1;   // flag the error (implied by the check below; lost in transcript)
      }
      MPI_Bcast(error, 1, MPI_INT, 0, MPI_COMM_WORLD);
      if (*error != 0) {
         MPI_Finalize();
         exit(*error);   // exit restored so the reads below are not attempted on error
      }
      for (i = 0; i < N; i++) {
         for (j = 0; j < N; j++)
            fscanf(fd, "%lf", &array1[i][j]);
         for (j = 0; j < N; j++)
            fscanf(fd, "%lf", &array2[i][j]);
      }
      fclose(fd);
      MPI_Barrier(MPI_COMM_WORLD);
   }

9 Suzaku routine to broadcast data to all processes
   mpiBroadcastArrayOfDoubles(*b);   // send out the b array to all processes

Implementation:

   void mpiBroadcastArrayOfDoubles(double *array) {
      int n = N * N;
      MPI_Bcast(array, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
   }
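Why the call passes *b: with b declared as double b[N][N], the expression *b (equivalently b[0]) decays to a double * pointing at the first element, and since a 2-D C array is contiguous, one broadcast of N*N doubles covers the whole array:

   double b[N][N];
   mpiBroadcastArrayOfDoubles(*b);   // *b == &b[0][0]; broadcasts all N*N contiguous doubles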

10 Workpool pattern

   void masterProcess(double array1[N][N], double array2[N][N], int p, int rank, int blksize);

Manages the flow of work to the workers. Uses a task queue to issue work; workers come back with completed work.

   void workerProcess(double array1[N][N], double array2[N][N], double array3[N][N], int rank, int blksize);

Workers receive work, call the compute function, and return the results to the master.

The programmer implements the compute routine (an illustrative stub follows; a full matrix-multiplication version appears on a later slide):

   void compute(double array1[N][N], double array2[N][N], double array3[N][N], int index, int blksize);
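As an illustration of the contract the programmer must satisfy (not from the original slides): compute processes the blksize rows of array1 starting at row index and writes its results into the same rows of array3. A trivial version that merely scales each element:

   void compute(double array1[N][N], double array2[N][N], double array3[N][N], int index, int blksize) {
      // process the blksize rows starting at row 'index'
      for (int i = index; i < index + blksize; i++)
         for (int j = 0; j < N; j++)
            array3[i][j] = 2.0 * array1[i][j];   // illustrative operation only
   }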

11 Implementation of Master process
   void masterProcess(double array1[N][N], double array2[N][N], int p, int rank, int blksize) {
      int process, m;
      int n = N;
      int work = 0;
      if (rank == 0) {
         for (process = 1; process < p; process++) {   // give all of the workers work
            MPI_Send(&work, 1, MPI_INT, process, work, MPI_COMM_WORLD);                       // send index
            MPI_Send(&array1[work], n * blksize, MPI_DOUBLE, process, work, MPI_COMM_WORLD);  // send block of elements
            work += blksize;
         }
         while (work < N) {
            // recv index & results
            MPI_Recv(&process, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            MPI_Recv(&array2[process], n * blksize, MPI_DOUBLE, status.MPI_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            if (work < n - p * blksize) {
               MPI_Send(&work, 1, MPI_INT, status.MPI_SOURCE, work, MPI_COMM_WORLD);   // send another block
               MPI_Send(&array1[work], n * blksize, MPI_DOUBLE, status.MPI_SOURCE, work, MPI_COMM_WORLD);
               work += blksize;   // increment not visible in transcript; restored so the loop advances
            } else if (work >= n - p * blksize && status.MPI_SOURCE == 1) {
               MPI_Send(&work, 1, MPI_INT, status.MPI_SOURCE, work, MPI_COMM_WORLD);
               work += blksize;
            }
         }
         if (work == N) {   // all work done, terminate: send final sends to waiting recvs and pick up final sends
            for (m = 1; m < p; m++) {
               MPI_Isend(&n, 1, MPI_INT, m, 0, MPI_COMM_WORLD, &request[0]);
               MPI_Recv(&process, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
               MPI_Recv(&array2[process], n * blksize, MPI_DOUBLE, status.MPI_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            }
         }
      }
   }

Note: status.MPI_SOURCE is used to identify which worker a result came from. (status and request are presumably file-scope variables in the Suzaku source.)

12 Implementation of Worker process
   void workerProcess(double array1[N][N], double array2[N][N], double array3[N][N], int rank, int blksize) {
      int work = 0;
      int n = N;
      if (rank != 0) {
         while (work < n) {
            MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);   // recv index
            if (work < n) {
               MPI_Recv(&array1[work], n * blksize, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
               compute(array1, array2, array3, work, blksize);   // user programmer's routine
               MPI_Send(&work, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
               MPI_Send(&array3[work], N * blksize, MPI_DOUBLE, 0, rank, MPI_COMM_WORLD);
            }
         }
      }
   }

13 Suzaku workpool handshaking

(Diagram on slide: master and slaves exchanging messages.) The handshaking proceeds as follows: the master sends an index to any waiting slave, then sends the corresponding block of data (work) to the same slave. After all slaves have received an index/data pair (p blocks), the master continues sending index/data pairs for the remaining blocks as slaves return results; each receive returns status carrying the source rank, identifying the slave to reply to. Finally, the master sends the termination sends.

14 Other Suzaku routines

   void workerGetRowOfDoubles(double array[N][N], int *index, int rank, int blksize);

Puts a worker into waiting status to receive the next piece of work.

Implementation:

   void workerGetRowOfDoubles(double array[N][N], int *index, int rank, int blksize) {
      int n = N;
      MPI_Recv(index, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
      if (*index < N) {
         MPI_Recv(&array[*index], n * blksize, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
      }
   }
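A hedged sketch (not from the slides) of how a worker loop could be built from this routine, assuming the arrays and rank from the earlier main program; an index of N or more acts as the termination signal, mirroring workerProcess above:

   int index = 0;
   while (index < N) {
      workerGetRowOfDoubles(a, &index, rank, BLKSIZE);   // wait for next index + block
      if (index < N)
         compute(a, b, c, index, BLKSIZE);               // process the received block
      // returning results to the master would use a matching send routine, not shown here
   }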

15 Workpool Matrix Multiplication
   int main(int argc, char *argv[]) {
      int i, j, k, error = 0;
      int p, rank = 0;
      double a[N][N], b[N][N], c[N][N];
      double sum;

      MPI_START(&p, &rank, &argc, &argv);
      readInputFile(argc, argv, &error, a, b);   // read input data file
      startTimer(rank);
      if (p == 1) {   // workpool fails with 1 process (no slaves), so do it in the master
         for (k = 0; k < N; k++) {
            for (i = 0; i < N; i++) {
               sum = 0;
               for (j = 0; j < N; j++)
                  sum += a[k][j] * b[j][i];
               c[k][i] = sum;
            }
         }
      } else {
         mpiBroadcastArrayOfDoubles(*b);          // send out the b array to the workers
         masterProcess(a, c, p, rank, BLKSIZE);   // task queue issues rows of a; workers return results
         workerProcess(a, b, c, rank, BLKSIZE);   // fetches work, returns results from compute function
      }
      printResults("C =", c, rank);
      stopTimer(rank);
      return 0;
   }

16 Compute function

   void compute(double a[N][N], double b[N][N], double c[N][N], int index, int blksize) {
      int i, j;
      double sum;
      for (int indexx = index; indexx < index + blksize; indexx++) {   // each row in the block
         for (i = 0; i < N; i++) {
            sum = 0;
            for (j = 0; j < N; j++) {
               sum += a[indexx][j] * b[j][i];
            }
            c[indexx][i] = sum;
         }
      }
   }

17 Suzaku Software

Has two files:

suzaku.h – header file containing the macro definitions and routine signatures
suzaku.o – compiled object file of the suzaku.c source file that contains the Suzaku routines

In class, students are given suzaku.o rather than suzaku.c, because they subsequently write the routines themselves.

18 Compilation/execution
As a regular MPI program, from the command line:

Compile:

   mpicc -o hello hello.c suzaku.o

mpicc uses gcc to link the libraries and create the executable hello; all the usual features of gcc can be used.

Execute:

   mpiexec -n # ./hello

where "#" is the number of copies of the process to start.

Using Eclipse: same approach as for a regular MPI program. See "Using Suzaku" at

19 Suzaku Re-design

Project just started to re-design Suzaku:

Avoid MPI-related parameters as much as possible.

Provide routines that specifically implement generic (type-less) low-level patterns:

   Suzaku_Scatter(…);
   Suzaku_Gather(…);
   Suzaku_Compute(…);

The implementation needs to get around MPI's message-type constraints using more advanced MPI features.
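One plausible way to sidestep MPI's typed-message constraint (an illustrative sketch under that assumption, not the actual Suzaku re-design; the name and signature are hypothetical) is to treat the user's data as raw bytes with MPI_BYTE:

   #include <mpi.h>

   /* Hypothetical type-less scatter: the root distributes equal-sized raw-byte
      chunks of 'send' (which must hold p * chunk_bytes at the root) to all
      processes; each process receives its chunk in 'recv'. */
   void typeless_scatter(void *send, void *recv, size_t chunk_bytes) {
      MPI_Scatter(send, (int)chunk_bytes, MPI_BYTE,
                  recv, (int)chunk_bytes, MPI_BYTE,
                  0, MPI_COMM_WORLD);
   }

MPI derived datatypes or MPI_Pack/MPI_Unpack are possibly the "more advanced MPI features" the slide alludes to for non-contiguous or heterogeneous data.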

20 Simple master-slave pattern
(scatter-compute-gather)

   int main(int argc, char **argv) {
      Suzaku_Scatter(…);
      Suzaku_Compute(…);
      Suzaku_Gather(…);
      return 0;
   }

21 Questions

