Using Compiler Directives Paraguin Compiler 1 © 2013 B. Wilkinson/Clayton Ferner SIGCSE 2013 Workshop 310 session2a.ppt Modification date: Jan 9, 2013.
OpenMP Programming environment for shared-memory parallel systems (such as multi-core). The programmer "directs" the compiler through pragma statements. The advantage of pragma statements is that they are ignored by compilers that do not recognize them.

Example OpenMP

#pragma omp parallel private (i, j, k, sum, tid) shared (A, B, C)
{
  #pragma omp for
  for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
      sum = 0;
      for (k = 0; k < N; k++)
        sum += A[i][k] * B[k][j];
      C[i][j] = sum;
    }
  }
}

OpenMP vs. MPI OpenMP is a higher level of abstraction than MPI. The goal is to create an OpenMP-style method of creating MPI code: the Paraguin compiler will generate MPI code from pragmas.

Paraguin Compiler Paraguin is a compiler we are building at UNCW, based on the SUIF compiler infrastructure.

Example Paraguin

#pragma paraguin begin_parallel
#pragma paraguin bcast A B
#pragma paraguin forall
for (i = 0; i < N; i++) {
  for (j = 0; j < N; j++) {
    sum = 0;
    for (k = 0; k < N; k++)
      sum += A[i][k] * B[k][j];
    C[i][j] = sum;
  }
}
#pragma paraguin gather C
#pragma paraguin end_parallel

Compare to this (MPI)

MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &NP);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
…
MPI_Bcast(A, N*N, MPI_FLOAT, 0, MPI_COMM_WORLD);
MPI_Bcast(B, N*N, MPI_FLOAT, 0, MPI_COMM_WORLD);
blksz = (int) ceil(((float) N) / NP);
for (i = rank * blksz; i < min(N, blksz * (rank + 1)); i++) {
  for (j = 0; j < N; j++) {
    sum = 0;
    for (k = 0; k < N; k++)
      sum += A[i][k] * B[k][j];
    C[i][j] = sum;
  }
}
MPI_Gather(C[rank * blksz], N * blksz, MPI_FLOAT,
           C[rank * blksz], N * blksz, MPI_FLOAT,
           0, MPI_COMM_WORLD);

Hybrid Since pragmas are ignored by compilers that don't understand them, we can combine pragmas to create a hybrid program. Paraguin is a source-to-source compiler (it produces C code); mpicc is a script that uses gcc, and gcc implements OpenMP.

Example Hybrid

#pragma paraguin begin_parallel
#pragma paraguin bcast A B
#pragma paraguin forall
#pragma omp parallel for private (i, j, k, sum, tid) shared (A, B, C)
for (i = 0; i < N; i++) {
  for (j = 0; j < N; j++) {
    sum = 0;
    for (k = 0; k < N; k++)
      sum += A[i][k] * B[k][j];
    C[i][j] = sum;
  }
}
#pragma paraguin gather C
#pragma paraguin end_parallel

Hybrid

Source code w/ pragmas → Paraguin → mpicc (gcc w/ OpenMP) → Hybrid executable

Future Work – Patterns Planned for Summer.

Workpool Pattern (Future Work)

#pragma paraguin pattern(workpool)
#pragma paraguin begin_master
…
#pragma paraguin end_master
#pragma paraguin begin_worker
…
#pragma paraguin end_worker

Pipeline Pattern (Future Work)

#pragma paraguin pattern(pipeline)
#pragma paraguin begin_master
…
#pragma paraguin end_master
#pragma paraguin begin_stage
…
#pragma paraguin end_stage
#pragma paraguin begin_stage
…
#pragma paraguin end_stage
#pragma paraguin begin_stage
…
#pragma paraguin end_stage

Stencil Pattern (Future Work)

#pragma paraguin pattern(stencil)
#pragma paraguin begin_master
…
#pragma paraguin end_master
#pragma paraguin begin_worker
…
#pragma paraguin end_worker

Other Patterns Other patterns include:
– Divide and conquer
– All-to-all
These could be implemented through templates.

Questions