Parallel Programming Lecture Set 4 POSIX Threads Overview & OpenMP Johnnie Baker February 2, 2011 1

Topics Data and Task Parallelism Brief Overview of POSIX Threads Data Parallelism in OpenMP – Expressing Parallel Loops – Parallel Regions (SPMD) – Scheduling Loops – Synchronization 2

Sources for Material Primary Source: Mary Hall, CS4961, Lectures 4 and 5, University of Utah Larry Snyder, Univ. of Washington, Ch. 4 slides, Textbook, Chapters 4 & 6 Jim Demmel and Kathy Yelick, UCB Allan Snavely, SDSC Michael Quinn, Parallel Programming in C with MPI and OpenMP, Chapter 17 3

Definitions of Data and Task Parallelism Data parallel computation: – Perform the same operation on different items of data at the same time; the parallelism grows with the size of the data. Task parallel computation: – Perform distinct computations -- or tasks -- at the same time. If the number of tasks is fixed, the parallelism is not scalable. Summary – Mostly we will study data parallelism in this class – Data parallelism facilitates very high speedups - and scaling to supercomputers. – Hybrid (mixing of the two) is increasingly common 4

Parallel Formulation vs. Parallel Algorithm Parallel Formulation – Refers to a parallelization of a serial algorithm. Parallel Algorithm – May represent an entirely different algorithm than the one used serially. In this course, we primarily focus on “Parallel Formulations”. 5

Steps to Parallel Formulation Computation Decomposition/Partitioning: – Identify pieces of work that can be done concurrently – Assign tasks to multiple processors (processes used equivalently) Data Decomposition/Partitioning: – Decompose input, output and intermediate data across different processors Manage Access to shared data and synchronization: – coherent view, safe access for input or intermediate data UNDERLYING PRINCIPLES: Maximize concurrency and reduce overheads due to parallelization! Maximize potential speedup! 6

Concept of Threads Text introduces Peril-L as a neutral language for describing parallel programming constructs – Abstracts away details of existing languages – Architecture independent – Data parallel – Based on C, for universality We will instead learn OpenMP – Similar to Peril-L – However, OpenMP is an important programming language. 7

8 Common Notions of Task-Parallel Thread Creation (not in Peril-L)

Review: Predominant Parallel Control Mechanisms 9

Programming with Threads Several Thread Libraries PTHREADS is the POSIX Standard – Solaris threads are very similar – Relatively low level – Portable but possibly slow OpenMP is a newer standard – Support for scientific programming on shared memory architectures P4 (Parmacs) is another portable package – Higher level than Pthreads

Overview of POSIX Threads POSIX: Portable Operating System Interface for UNIX – Interface to Operating System utilities PThreads: The POSIX threading interface – System calls to create and synchronize threads – Should be relatively uniform across UNIX-like OS platforms PThreads contain support for – Creating parallelism – Synchronizing – No explicit support for communication, because shared memory is implicit; a pointer to shared data is passed to a thread – See Chapter 6 of textbook for more details on Pthreads 11

Forking POSIX Threads
Signature:
  int pthread_create(pthread_t *, const pthread_attr_t *, void * (*)(void *), void *);
Example call:
  errcode = pthread_create(&thread_id, &thread_attribute, &thread_fun, &fun_arg);
thread_id is the thread id or handle (used to halt, etc.)
thread_attribute various attributes – standard default values obtained by passing a NULL pointer
thread_fun the function to be run (takes and returns void*)
fun_arg an argument can be passed to thread_fun when it starts
errorcode will be set nonzero if the create operation fails 12

Simple Threading Example
  #include <pthread.h>
  #include <stdio.h>

  void* SayHello(void *foo) {
      printf("Hello, world!\n");
      return NULL;
  }

  int main() {
      pthread_t threads[16];
      int tn;
      for (tn = 0; tn < 16; tn++) {
          pthread_create(&threads[tn], NULL, SayHello, NULL);
      }
      for (tn = 0; tn < 16; tn++) {
          pthread_join(threads[tn], NULL);
      }
      return 0;
  }
Compile using gcc -lpthread. But the overhead of thread creation is nontrivial, so SayHello should have a significant amount of work 13

Shared Data and Threads Variables declared outside of main are shared. Objects allocated on the heap may be shared (if a pointer is passed). Variables on the stack are private: passing pointers to these around to other threads can cause problems. Sharing is often done by creating a large "thread data" struct – Passed into all threads as argument (a fuller sketch follows below) – Simple example:
  char *message = "Hello World!\n";
  pthread_create(&thread1, NULL, print_fun, (void *) message); 14
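The "thread data" struct pattern mentioned above, as a minimal sketch (struct and field names are illustrative, not from the original slides):

  #include <pthread.h>
  #include <stdio.h>

  /* hypothetical per-thread argument struct */
  typedef struct {
      int  tid;         /* this thread's id                       */
      int *shared_buf;  /* pointer to data shared by all threads  */
  } thread_data_t;

  void *worker(void *arg) {
      thread_data_t *td = (thread_data_t *) arg;   /* each thread receives its own struct */
      printf("thread %d sees buf[0] = %d\n", td->tid, td->shared_buf[0]);
      return NULL;
  }

  int main() {
      pthread_t threads[4];
      thread_data_t td[4];
      int buf[1] = { 42 };             /* shared because every thread receives a pointer to it */
      for (int t = 0; t < 4; t++) {
          td[t].tid = t;
          td[t].shared_buf = buf;
          pthread_create(&threads[t], NULL, worker, &td[t]);
      }
      for (int t = 0; t < 4; t++)
          pthread_join(threads[t], NULL);   /* main outlives the threads, so passing stack pointers is safe here */
      return 0;
  }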

Posix Thread Example
  #include <pthread.h>
  #include <stdio.h>

  void *print_fun(void *message) {
      printf("%s \n", (char *) message);
      return NULL;
  }

  int main() {
      pthread_t thread1, thread2;
      char *message1 = "Hello";
      char *message2 = "World";
      pthread_create(&thread1, NULL, print_fun, (void *) message1);
      pthread_create(&thread2, NULL, print_fun, (void *) message2);
      pthread_join(thread1, NULL);
      pthread_join(thread2, NULL);
      return 0;
  }
Compile using gcc -lpthread
Note: There is a race condition in the print statements 15

Explicit Synchronization: Creating and Initializing a Barrier
To (dynamically) initialize a barrier, use code similar to this (which sets the number of threads to 3):
  pthread_barrier_t b;
  pthread_barrier_init(&b, NULL, 3);
The second argument specifies an object attribute; using NULL yields the default attributes.
To wait at a barrier, a thread executes:
  pthread_barrier_wait(&b);
This barrier could have been statically initialized by assigning an initial value created using the macro PTHREAD_BARRIER_INITIALIZER(3). 16

Mutexes (aka Locks) in POSIX Threads
To create a mutex:
  #include <pthread.h>
  pthread_mutex_t amutex = PTHREAD_MUTEX_INITIALIZER;
  /* or, dynamically: */ pthread_mutex_init(&amutex, NULL);
To use it:
  pthread_mutex_lock(&amutex);
  pthread_mutex_unlock(&amutex);
To deallocate a mutex:
  int pthread_mutex_destroy(pthread_mutex_t *mutex);
Multiple mutexes may be held, but can lead to deadlock:
  thread1      thread2
  lock(a)      lock(b)
  lock(b)      lock(a) 17
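A minimal sketch (not from the original slides) of guarding a shared counter with the calls above; without the lock, the two threads would race on counter:

  #include <pthread.h>
  #include <stdio.h>

  pthread_mutex_t amutex = PTHREAD_MUTEX_INITIALIZER;
  long counter = 0;                      /* shared data protected by amutex */

  void *increment(void *arg) {
      for (int i = 0; i < 100000; i++) {
          pthread_mutex_lock(&amutex);   /* only one thread at a time past this point */
          counter++;
          pthread_mutex_unlock(&amutex);
      }
      return NULL;
  }

  int main() {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, increment, NULL);
      pthread_create(&t2, NULL, increment, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter = %ld\n", counter);   /* always 200000 with the lock in place */
      return 0;
  }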

Summary of Programming with Threads POSIX Threads are based on OS features – Can be used from multiple languages (need appropriate header) – Familiar language for most programmers – Ability to share data is convenient Pitfalls – Data race bugs are very nasty to find because they can be intermittent – Deadlocks are usually easier to find, but can also be intermittent OpenMP is commonly used today as a simpler alternative, but it is more restrictive 18

OpenMP Motivation Thread libraries are hard to use – P-Threads/Solaris threads have many library calls for initialization, synchronization, thread creation, condition variables, etc. – Programmer must code with multiple threads in mind Synchronization between threads introduces a new dimension of program correctness Wouldn’t it be nice to write serial programs and somehow parallelize them “automatically”? – OpenMP can parallelize many serial programs with relatively few annotations that specify parallelism and independence – It is not automatic: you can still make errors in your annotations 19

OpenMP: Prevailing Shared Memory Programming Approach Model for parallel programming Shared-memory parallelism Portable across shared-memory architectures Scalable Incremental parallelization Compiler based Extensions to existing programming languages (Fortran, C and C++) – mainly by directives – a few library routines See www.openmp.org 20

A Programmer’s View of OpenMP OpenMP is a portable, threaded, shared-memory programming specification with “light” syntax – Exact behavior depends on OpenMP implementation! – Requires compiler support (C/C++ or Fortran) OpenMP will: – Allow a programmer to separate a program into serial regions and parallel regions, rather than concurrently-executing threads. – Hide stack management – Provide synchronization constructs OpenMP will not: – Parallelize automatically – Guarantee speedup – Provide freedom from data races 21

OpenMP Data Parallel Construct: Parallel Loop All pragmas begin: #pragma Compiler calculates loop bounds for each thread directly from serial source (computation decomposition) Compiler also manages data partitioning of Res Synchronization also automatic (barrier) 22
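The code for this slide is a figure in the original deck; a minimal sketch of the kind of loop it describes (Res, MAX and big_comp are illustrative names, not the slide's actual code):

  #define MAX 1000
  double Res[MAX];
  double big_comp(int i) { return i * 0.5; }   /* stand-in for the real per-element computation */

  void compute_res(void) {
      #pragma omp parallel for
      for (int i = 0; i < MAX; i++)
          Res[i] = big_comp(i);   /* compiler divides the iteration space among the threads; implicit barrier at the end */
  }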

OpenMP Execution Model Fork-join model of parallel execution Begin execution as a single process (master thread) Start of a parallel construct: – Master thread creates team of threads Completion of a parallel construct: – Threads in the team synchronize -- implicit barrier Only master thread continues execution Implementation optimization: – Worker threads spin waiting on next fork 23

OpenMP Execution Model 24

Count 3s Example? (see textbook) What do we need to worry about? 25
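What to worry about: a shared count updated by every thread is a data race. A hedged sketch of the Count 3s kernel (this particular code is an assumption, not taken from the textbook or slides), using a reduction to avoid the race:

  int count3s(int *array, int length) {
      int count = 0;
      /* without reduction(+:count), concurrent count++ would be a race */
      #pragma omp parallel for reduction(+:count)
      for (int i = 0; i < length; i++)
          if (array[i] == 3)
              count++;
      return count;
  }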

OpenMP directive format C (also Fortran and C++ bindings)
Pragmas, format:
  #pragma omp directive_name [ clause [ clause ]... ] new-line
Conditional compilation:
  #ifdef _OPENMP
  block, e.g., printf("%d avail. processors\n", omp_get_num_procs());
  #endif
Case sensitive
Include file for library routines:
  #ifdef _OPENMP
  #include <omp.h>
  #endif 26

Limitations and Semantics Not all “element-wise” loops can be ||ized
  #pragma omp parallel for
  for (i=0; i < numPixels; i++) {}
– Loop index: signed integer
– Termination test: <, <=, >, or >= with loop-invariant int
– Incr/Decr by loop-invariant int; change each iteration
– Count up for <, <=; count down for >, >=
– Basic block body: no control in/out except at top
Threads are created and iterations divvied up; requirements ensure iteration count is predictable 27

OpenMP Synchronization Implicit barrier – At beginning and end of parallel constructs – At end of all other control constructs – Implicit synchronization can be removed with nowait clause Explicit synchronization –critical –atomic (single statement) –barrier 28
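A short sketch (assumed, not from the slides) showing the constructs named above in one parallel region:

  #include <omp.h>
  #include <stdio.h>

  void sync_demo(int n, double *a) {
      double total = 0.0;                  /* shared accumulator */
      int done = 0;                        /* shared counter */
      #pragma omp parallel
      {
          #pragma omp for nowait           /* nowait removes the implicit barrier at the end of this loop */
          for (int i = 0; i < n; i++)
              a[i] *= 2.0;

          #pragma omp critical             /* one thread at a time in this block */
          total += omp_get_thread_num();

          #pragma omp atomic               /* atomic: single-statement update */
          done += 1;

          #pragma omp barrier              /* explicit barrier: wait for the whole team */
      }                                    /* implicit barrier at the end of the parallel region */
      printf("total = %g, threads seen = %d\n", total, done);
  }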

OpenMP Reductions OpenMP has reduce:
  sum = 0;
  #pragma omp parallel for reduction(+:sum)
  for (i=0; i < 100; i++) {
    sum += array[i];
  }
Reduce ops and init() values:
  +  0      bitwise &  ~0      logical &&  1
  -  0      bitwise |   0      logical ||  0
  *  1      bitwise ^   0 29

OpenMP parallel region construct Block of code to be executed by multiple threads in parallel Each thread executes the same code redundantly (SPMD) – Work within work-sharing constructs is distributed among the threads in a team Example with C/C++ syntax #pragma omp parallel [ clause [ clause ]... ] new-line structured-block clause can include the following: private (list) shared (list) 30
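A minimal SPMD sketch (illustrative, not from the slides): every thread executes the same block and uses its id to pick its own share of the work:

  #include <omp.h>

  void spmd_scale(int n, double *a) {
      #pragma omp parallel shared(a, n)
      {
          int tid      = omp_get_thread_num();    /* private: declared inside the region */
          int nthreads = omp_get_num_threads();
          int lo = (n * tid) / nthreads;          /* this thread's contiguous block of iterations */
          int hi = (n * (tid + 1)) / nthreads;
          for (int i = lo; i < hi; i++)
              a[i] = 2.0 * a[i];
      }   /* implicit barrier */
  }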

Programming Model – Loop Scheduling schedule clause determines how loop iterations are divided among the thread team – static([chunk]) divides iterations statically between threads Each thread receives [chunk] iterations, rounding as necessary to account for all iterations Default [chunk] is ceil( # iterations / # threads ) – dynamic([chunk]) allocates [chunk] iterations per thread, allocating an additional [chunk] iterations when a thread finishes Forms a logical work queue, consisting of all loop iterations Default [chunk] is 1 – guided([chunk]) allocates dynamically, but [chunk] is exponentially reduced with each allocation 31
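A hedged sketch of the schedule clause (function and variable names are illustrative only):

  double work(int i)        { return i * 1.0; }   /* stand-ins for real, possibly uneven, work */
  double uneven_work(int i) { return i * 2.0; }

  void schedule_demo(int n, double *a) {
      /* static: iterations pre-assigned in chunks of 8 before the loop starts */
      #pragma omp parallel for schedule(static, 8)
      for (int i = 0; i < n; i++)
          a[i] = work(i);

      /* dynamic: threads take 4 iterations at a time from a logical work queue */
      #pragma omp parallel for schedule(dynamic, 4)
      for (int i = 0; i < n; i++)
          a[i] = uneven_work(i);
  }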

Loop scheduling (2) 32

OpenMP critical directive Enclosed code – executed by all threads, but – restricted to only one thread at a time #pragma omp critical [ ( name ) ] new-line structured-block A thread waits at the beginning of a critical region until no other thread in the team is executing a critical region with the same name. All unnamed critical directives map to the same unspecified name. 33
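An illustrative use of named critical regions (not from the slides): updates to independent shared variables can use different names so they do not exclude each other:

  double sum_a = 0.0, sum_b = 0.0;      /* two independent shared accumulators */

  void accumulate(double x, double y) { /* assumed to be called from inside a parallel region */
      #pragma omp critical (update_a)   /* threads inside update_a do not block threads inside update_b */
      sum_a += x;

      #pragma omp critical (update_b)
      sum_b += y;
  }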

Variation: OpenMP parallel and for directives
Syntax:
  #pragma omp for [ clause [ clause ]... ] new-line
  for-loop
clause can be one of the following: shared (list) private( list) reduction( operator: list) schedule( type [, chunk ] ) nowait (C/C++: on #pragma omp for)
  #pragma omp parallel private(f)
  {
    f = 7;
    #pragma omp for
    for (i=0; i<20; i++)
      a[i] = b[i] + f * (i+1);
  } /* omp end parallel */ 34

Programming Model – Data Sharing Parallel programs often employ two types of data – Shared data, visible to all threads, similarly named – Private data, visible to a single thread (often stack-allocated)
OpenMP: shared variables are shared, private variables are private; default is shared; loop index is private
PThreads: global-scoped variables are shared, stack-allocated variables are private
PThreads version:
  // shared, globals
  int bigdata[1024];
  void* foo(void* bar) {
    // private, stack
    int tid;
    /* Calculation goes here */
  }
OpenMP version:
  int bigdata[1024];
  void* foo(void* bar) {
    int tid;
    #pragma omp parallel \
        shared ( bigdata ) \
        private ( tid )
    {
      /* Calc. here */
    }
  }

OpenMP environment variables
OMP_NUM_THREADS
  – sets the number of threads to use during execution
  – when dynamic adjustment of the number of threads is enabled, the value of this environment variable is the maximum number of threads to use
  – For example,
    setenv OMP_NUM_THREADS 16        [csh, tcsh]
    export OMP_NUM_THREADS=16        [sh, ksh, bash]
OMP_SCHEDULE
  – applies only to do/for and parallel do/for directives that have the schedule type RUNTIME
  – sets schedule type and chunk size for all such loops
  – For example,
    setenv OMP_SCHEDULE "GUIDED,4"   [csh, tcsh]
    export OMP_SCHEDULE="GUIDED,4"   [sh, ksh, bash]
36

More loop scheduling attributes RUNTIME The scheduling decision is deferred until runtime by the environment variable OMP_SCHEDULE. It is illegal to specify a chunk size for this clause. AUTO The scheduling decision is delegated to the compiler and/or runtime system. NO WAIT / nowait: If specified, then threads do not synchronize at the end of the parallel loop. ORDERED: Specifies that the iterations of the loop must be executed as they would be in a serial program. COLLAPSE: Specifies how many loops in a nested loop should be collapsed into one large iteration space and divided according to the schedule clause. The sequential execution of the iterations in all associated loops determines the order of the iterations in the collapsed iteration space.
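A brief sketch of the COLLAPSE clause described above (illustrative):

  void collapse_demo(int n, int m, double **a) {
      /* collapse(2): the n*m iterations of the two perfectly nested loops form one
         iteration space, which is then divided according to the schedule clause */
      #pragma omp parallel for collapse(2) schedule(static)
      for (int i = 0; i < n; i++)
          for (int j = 0; j < m; j++)
              a[i][j] = i + j;
  }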

Impact of Scheduling Decision Load balance – Same work in each iteration? – Processors working at same speed? Scheduling overhead – Static decisions are cheap because they require no run-time coordination – Dynamic decisions have overhead that is impacted by complexity and frequency of decisions Data locality – Particularly within cache lines for small chunk sizes – Also impacts data reuse on same processor

A Few Words About Data Distribution (Ch. 5) Data distribution describes how global data is partitioned across processors. – Recall the CTA model and the notion that a portion of the global address space is physically co-located with each processor This data partitioning is implicit in OpenMP and may not match loop iteration scheduling Compiler will try to do the right thing with static scheduling specifications

Common Data Distributions Consider a 1-dimensional array to solve the Count 3s problem, 16 elements, 4 threads
CYCLIC (chunk = 1):
  for (i = 0; i<blocksize; i++)
    … in[i*blocksize + tid];
BLOCK (chunk = 4):
  for (i=tid*blocksize; i<(tid+1)*blocksize; i++)
    … in[i];
BLOCK-CYCLIC (chunk = 2):
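The slide leaves BLOCK-CYCLIC without code; a hedged sketch in the same fragment style (assuming chunk = 2 and nthreads = 4, names not from the original slide):
  for (start = tid*chunk; start < n; start += chunk*nthreads)
    for (i = start; i < start+chunk && i < n; i++)
      … in[i];
Each thread takes a chunk of 2 consecutive elements, then skips ahead by chunk*nthreads to its next chunk.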

OpenMP runtime library, Query Functions omp_get_num_threads: Returns the number of threads currently in the team executing the parallel region from which it is called int omp_get_num_threads(void); omp_get_thread_num: Returns the thread number, within the team, that lies between 0 and omp_get_num_threads()-1, inclusive. The master thread of the team is thread 0 int omp_get_thread_num(void); 41
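A small sketch using both query functions (illustrative):

  #include <omp.h>
  #include <stdio.h>

  int main() {
      #pragma omp parallel
      {
          int tid      = omp_get_thread_num();    /* 0 .. team size - 1 */
          int nthreads = omp_get_num_threads();   /* size of the current team */
          printf("Hello from thread %d of %d\n", tid, nthreads);
      }
      return 0;
  }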

Local OpenMP Details Virtually all of our dept servers (Loki, Neptune, etc.) support OpenMP. Mike Yuan uses the following command to compile an OpenMP program: – g++ hello.cpp -fopenmp Alternately, as indicated on slide 8 of the OpenMP module from SC10, you can make the following slight modification: – In your Unix terminal window, go to your home directory by typing cd ~ – Then edit your .bashrc file using your preferred Unix-compatible editor such as vim: vim .bashrc – Add the following line, which is an alias to compile OpenMP code: alias ompcc='gcc -fopenmp' 42

Summary of Preceding Lecture OpenMP, data-parallel constructs only – Task-parallel constructs later What’s good? – Small changes are required to produce a parallel program from sequential (parallel formulation) – Avoid having to express low-level mapping details – Portable and scalable, correct on 1 processor What is missing? – Not completely natural if you want to write a parallel code from scratch – Not always possible to express certain common parallel constructs – Locality management – Control of performance 43