Parallel Programming with OpenMP
CS240A, T. Yang, 2013. Modified from Demmel/Yelick's and Mary Hall's slides.

Introduction to OpenMP
What is OpenMP?
- Open specification for Multi-Processing
- "Standard" API for defining multi-threaded, shared-memory programs
- openmp.org – talks, examples, forums, etc.
High-level API:
- Preprocessor (compiler) directives (~80%)
- Library calls (~19%)
- Environment variables (~1%)

A Programmer's View of OpenMP
- OpenMP is a portable, threaded, shared-memory programming specification with "light" syntax
  - Exact behavior depends on the OpenMP implementation!
  - Requires compiler support (C or Fortran)
- OpenMP will:
  - Allow a programmer to separate a program into serial regions and parallel regions, rather than T concurrently-executing threads
  - Hide stack management
  - Provide synchronization constructs
- OpenMP will not:
  - Parallelize automatically
  - Guarantee speedup
  - Provide freedom from data races

Motivation – OpenMP

  #include <stdio.h>

  int main() {
    // Do this part in parallel
    printf( "Hello, World!\n" );
    return 0;
  }

Motivation – OpenMP

  #include <stdio.h>
  #include <omp.h>

  int main() {
    omp_set_num_threads(4);

    // Do this part in parallel
    #pragma omp parallel
    {
      printf( "Hello, World!\n" );
    }

    return 0;
  }

Each of the four threads executes the parallel region, so the printf runs once per thread.
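
As a usage note not on the original slides: with GCC, an OpenMP program is compiled with the -fopenmp flag (the file name hello_omp.c is just an assumed example):

  gcc -fopenmp hello_omp.c -o hello_omp
  OMP_NUM_THREADS=4 ./hello_omp

In the example above, the call to omp_set_num_threads(4) overrides the OMP_NUM_THREADS environment variable.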

OpenMP parallel region construct
- Block of code to be executed by multiple threads in parallel
- Each thread executes the same code redundantly (SPMD)
  - Work within work-sharing constructs is distributed among the threads in a team
- Example with C/C++ syntax:

  #pragma omp parallel [ clause [ clause ] ... ] new-line
    structured-block

- clause can include the following:
  private (list)
  shared (list)
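
A minimal sketch of a parallel region using these clauses (not from the original slides; the variable names are illustrative). omp_get_thread_num() and omp_get_num_threads() are standard OpenMP library calls:

  #include <stdio.h>
  #include <omp.h>

  int main() {
      int nthreads = 0;                         // shared across the team
      #pragma omp parallel shared(nthreads)
      {
          int tid = omp_get_thread_num();       // private: declared inside the region
          if (tid == 0)
              nthreads = omp_get_num_threads();
          printf("Hello from thread %d\n", tid);
      }   // implicit barrier: nthreads is visible to the master afterwards
      printf("Team had %d threads\n", nthreads);
      return 0;
  }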

OpenMP Data Parallel Construct: Parallel Loop
- All pragmas begin with #pragma
- Compiler calculates loop bounds for each thread directly from the serial source (computation decomposition)
- Compiler also manages data partitioning
- Synchronization is also automatic (barrier)

Programming Model – Parallel Loops
- Requirement for parallel loops: no data dependences (read/write or write/write pairs) between iterations!
- Preprocessor calculates loop bounds and divides iterations among parallel threads

  #pragma omp parallel for
  for( i=0; i < 25; i++ ) {
    printf( "Foo" );
  }
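
As a worked example of that division (not on the original slide): with 4 threads and the default static chunk of ceil(25/4) = 7 described on the scheduling slide below, threads 0, 1, and 2 would each receive 7 iterations and thread 3 the remaining 4.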

OpenMP: Parallel Loops with Reductions
- OpenMP supports the reduce operation:

  sum = 0;
  #pragma omp parallel for reduction(+:sum)
  for (i=0; i < 100; i++) {
    sum += array[i];
  }

- Reduction operators and their initialization values (C and C++):
  +            0
  *            1
  bitwise &    ~0
  bitwise |    0
  bitwise ^    0
  logical &&   1
  logical ||   0

Example: Trapezoid Rule for Integration
- Straight-line approximation
[Figure: f(x) and its straight-line approximation L(x) on the interval from x0 to x1.]

Composite Trapezoid Rule
[Figure: f(x) approximated by straight-line segments on the subintervals [x0,x1], [x1,x2], [x2,x3], [x3,x4], each of width h.]
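
The rule the figure illustrates can be written out as follows (added for reference; not transcribed from a slide). With n subintervals of width h = (b - a)/n and grid points x_i = a + i*h:

  \int_a^b f(x)\,dx \approx h \left[ \frac{f(x_0)}{2} + f(x_1) + f(x_2) + \cdots + f(x_{n-1}) + \frac{f(x_n)}{2} \right]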

Serial algorithm for composite trapezoid rule
[Figure: same composite trapezoid picture as above.]
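
A sketch of the serial algorithm (the slide's code was not preserved in the transcript; this is a standard formulation, and f() is assumed to be defined elsewhere):

  /* Approximate the integral of f over [a,b] with n trapezoids. */
  double trap(double a, double b, int n) {
      double h = (b - a) / n;
      double approx = (f(a) + f(b)) / 2.0;
      for (int i = 1; i <= n - 1; i++) {
          double x_i = a + i * h;
          approx += f(x_i);
      }
      return h * approx;
  }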

From Serial Code to Parallel Code
[Figure: same composite trapezoid picture as above.]
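
A corresponding OpenMP version (again a reconstruction, not the slide's exact code), using a parallel for with a reduction on the accumulator:

  double trap_omp(double a, double b, int n) {
      double h = (b - a) / n;
      double approx = (f(a) + f(b)) / 2.0;
      #pragma omp parallel for reduction(+:approx)
      for (int i = 1; i <= n - 1; i++) {
          double x_i = a + i * h;
          approx += f(x_i);
      }
      return h * approx;
  }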

Programming Model – Loop Scheduling
- The schedule clause determines how loop iterations are divided among the thread team
- static([chunk]) divides iterations statically between threads
  - Each thread receives [chunk] iterations, rounding as necessary to account for all iterations
  - Default [chunk] is ceil( # iterations / # threads )
- dynamic([chunk]) allocates [chunk] iterations per thread, allocating an additional [chunk] iterations when a thread finishes
  - Forms a logical work queue, consisting of all loop iterations
  - Default [chunk] is 1
- guided([chunk]) allocates dynamically, but [chunk] is exponentially reduced with each allocation
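
A small sketch of how the clause is written (the chunk size and the process() function are illustrative, not from the slide):

  /* Hand out iterations in chunks of 4 from a shared logical work queue;
     useful when iteration costs vary widely. */
  #pragma omp parallel for schedule(dynamic, 4)
  for (int i = 0; i < n; i++) {
      process(i);
  }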

Loop scheduling options (2)
[Figure: illustration of the scheduling options.]

Impact of Scheduling Decision
- Load balance
  - Same work in each iteration?
  - Processors working at same speed?
- Scheduling overhead
  - Static decisions are cheap because they require no run-time coordination
  - Dynamic decisions have overhead that is impacted by complexity and frequency of decisions
- Data locality
  - Particularly within cache lines for small chunk sizes
  - Also impacts data reuse on the same processor

More loop scheduling attributes
- RUNTIME: the scheduling decision is deferred until runtime by the environment variable OMP_SCHEDULE. It is illegal to specify a chunk size for this clause.
- AUTO: the scheduling decision is delegated to the compiler and/or runtime system.
- NOWAIT (nowait): if specified, threads do not synchronize at the end of the parallel loop.
- ORDERED: specifies that the iterations of the loop must be executed as they would be in a serial program.
- COLLAPSE: specifies how many loops in a nested loop should be collapsed into one large iteration space and divided according to the schedule clause (collapsed order corresponds to the original sequential order).
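
A sketch combining two of these clauses (illustrative; a, b, n, and m are assumed arrays and sizes, not from the slide). collapse(2) merges the i and j loops into one iteration space, and nowait drops the barrier at the end of the first loop inside the enclosing parallel region:

  #pragma omp parallel
  {
      /* The two nested loops form one n*m iteration space. */
      #pragma omp for collapse(2) nowait
      for (int i = 0; i < n; i++)
          for (int j = 0; j < m; j++)
              a[i][j] = 0.0;

      /* Threads may start this loop without waiting for the one above;
         safe here only because b does not depend on a. */
      #pragma omp for
      for (int k = 0; k < n; k++)
          b[k] = k * k;
  }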

OpenMP environment variables
- OMP_NUM_THREADS
  - Sets the number of threads to use during execution
  - When dynamic adjustment of the number of threads is enabled, the value of this environment variable is the maximum number of threads to use
  - For example:
    setenv OMP_NUM_THREADS 16        [csh, tcsh]
    export OMP_NUM_THREADS=16        [sh, ksh, bash]
- OMP_SCHEDULE
  - Applies only to do/for and parallel do/for directives that have the schedule type RUNTIME
  - Sets schedule type and chunk size for all such loops
  - For example:
    setenv OMP_SCHEDULE "GUIDED,4"   [csh, tcsh]
    export OMP_SCHEDULE="GUIDED,4"   [sh, ksh, bash]
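
A sketch of how a loop picks up the OMP_SCHEDULE setting at run time (illustrative; work() is a hypothetical per-iteration function):

  /* The schedule kind and chunk are taken from OMP_SCHEDULE when the
     program runs, e.g. after: export OMP_SCHEDULE="GUIDED,4"       */
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < n; i++) {
      work(i);
  }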

Programming Model – Data Sharing
- Parallel programs often employ two types of data
  - Shared data, visible to all threads, similarly named
  - Private data, visible to a single thread (often stack-allocated)
- OpenMP:
  - shared variables are shared
  - private variables are private
- PThreads:
  - Global-scoped variables are shared
  - Stack-allocated variables are private

PThreads style:

  // shared, globals
  int bigdata[1024];

  void* foo(void* bar) {
    // private, stack
    int tid;

    /* Calculation goes here */
  }

OpenMP style:

  int bigdata[1024];

  void* foo(void* bar) {
    int tid;

    #pragma omp parallel \
        shared ( bigdata ) \
        private ( tid )
    {
      /* Calc. here */
    }
  }

Programming Model – Synchronization
OpenMP synchronization:
- Critical sections
  - Named or unnamed
  - No explicit locks / mutexes

    #pragma omp critical
    {
      /* Critical code here */
    }

- Barrier directives

    #pragma omp barrier

- Explicit lock functions
  - When all else fails – may require the flush directive

    omp_set_lock( &lock );
    /* Code goes here */
    omp_unset_lock( &lock );

- Single-thread regions within parallel regions
  - master, single directives

    #pragma omp single
    {
      /* Only executed once */
    }
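
A minimal sketch putting the critical directive to use (illustrative; not from the slides). Each thread adds a per-thread value into a shared total inside a critical section:

  #include <stdio.h>
  #include <omp.h>

  int main() {
      int total = 0;                            // shared accumulator

      #pragma omp parallel
      {
          int local = omp_get_thread_num() + 1; // per-thread work (illustrative)

          #pragma omp critical
          {
              total += local;                   // only one thread updates at a time
          }
      }                                         // implicit barrier at region end

      printf("total = %d\n", total);
      return 0;
  }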

Microbenchmark: Grid Relaxation (Stencil)

  for( t=0; t < t_steps; t++) {

    #pragma omp parallel for \
        shared(grid,x_dim,y_dim) private(x,y)
    for( x=0; x < x_dim; x++) {
      for( y=0; y < y_dim; y++) {
        grid[x][y] = /* avg of neighbors */;
      }
    }
    // Implicit barrier synchronization at the end of the parallel for

    temp_grid = grid;
    grid = other_grid;
    other_grid = temp_grid;
  }

Microbenchmark: Ocean
[Figure]

Microbenchmark: Ocean
[Figure]

SpecOMP (2001)
- Parallel form of SPEC FP 2000 using OpenMP, larger working sets
  - Aslot et al., Workshop on OpenMP Apps. and Tools (2001)
- Many of the CFP2000 codes were "straightforward" to parallelize:
  - ammp (computational chemistry): 16 calls to the OpenMP API, 13 #pragmas, converted linked lists to vector lists
  - applu (parabolic/elliptic PDE solver): 50 directives, mostly parallel or do
  - fma3d (finite element car crash simulation): 127 lines of OpenMP directives (60K lines total)
  - mgrid (3D multigrid): automatic translation to OpenMP
  - swim (shallow water modeling): 8 loops parallelized

OpenMP Summary
- OpenMP is a compiler-based technique to create concurrent code from (mostly) serial code
- OpenMP can enable (easy) parallelization of loop-based code
  - Lightweight syntactic language extensions
- OpenMP performs comparably to manually-coded threading
  - Scalable
  - Portable
- Not a silver bullet for all applications

More Information
- openmp.org – OpenMP official site
- A handy OpenMP tutorial
- Another OpenMP tutorial and reference