Budapest, 25-29 November 2002
1st ALADIN maintenance and phasing workshop
Short introduction to OpenMP
Jure Jerman, Environmental Agency of Slovenia



Outline
- Motivation and introduction
- Basic OpenMP structures
- Description of the exercise
Warning: far from a comprehensive introduction to OpenMP

Motivation
Two kinds of parallelization in ALADIN: MPI and OpenMP.
- MPI: explicit parallelization
- OpenMP: a set of compiler directives in the code
It was believed for quite a long time that OpenMP could not compete with MPI, where the programmer holds everything in hand, but:
- with OpenMP the amount of computational overhead related to the halo is reduced
- the amount of communication is reduced as well => better scalability for larger numbers of processors
- the new computer architectures are SMP machines, or clusters of SMP machines

Example of OpenMP efficiency for IFS code

Introduction to OpenMP
- OpenMP: a higher-level abstraction model for parallelization, set up by a consortium of HPC vendors
- It consists of compiler directives, library routines and environment variables
- Suitable for SMP (symmetric multiprocessing) computers
- For a cluster of SMPs, the communication between the computers has to be done via MPI

Our first program
OpenMP compiler directives start with the sentinel !$OMP:
!$OMP PARALLEL DEFAULT(NONE) &
!$OMP SHARED(A,B)
Conditional compilation uses the sentinel !$:
!$ I = OMP_get_thread_num()
Parallel region construct: !$OMP PARALLEL / !$OMP END PARALLEL
The constructs are ignored by a compiler without OpenMP support.

Hello world in OpenMP (1)
Program Hello
!$OMP PARALLEL
  Write(*,*) 'Hello'
!$OMP END PARALLEL
End Program Hello

Hello world (2)

Hello world (3)
Now we want just one thread to print to the screen, so we use the !$OMP SINGLE directive:
!$OMP SINGLE
  Write(*,*) 'Hello'
!$OMP END SINGLE

Hello world (4)

Nesting of parallel regions

Variables in parallel regions
Program Hello
  I = 5
!$OMP PARALLEL
  I = I*5
!$OMP END PARALLEL
  Write(*,*) I
End Program Hello
With 2 threads the output is 125: every thread has access to the variable through shared memory (note that the unsynchronized update is in fact a race).
Variables can be declared as shared or private to the threads, e.g.:
!$OMP PARALLEL SHARED(A) PRIVATE(I)

Worksharing constructs
Purpose: to do some real parallelism.
The worksharing directives have to be placed inside !$OMP PARALLEL / !$OMP END PARALLEL.
Best example: !$OMP DO / !$OMP END DO
!$OMP DO
do i = 1, n
  ...
enddo
!$OMP END DO

!$OMP DO / !$OMP END DO

!$OMP DO clause1, clause2, ...
A clause can be:
- PRIVATE()
- ORDERED
- SCHEDULE
- ...
SCHEDULE(type, chunk):
- controls how the iterations are distributed among the threads (type can be STATIC or DYNAMIC); chunk is the portion of the loop handed to each thread

Parallelizing some loops might be tricky...
Not all loops can be made parallel, because the chunks of iterations are distributed in an unpredictable order:
REAL :: A(1000)
DO I = 1, 999
  A(I) = A(I+1)
ENDDO

!$OMP SECTIONS (1)
It is possible to create MPMD (Multiple Program, Multiple Data) style programs with !$OMP SECTIONS:
!$OMP SECTIONS
!$OMP SECTION
  Write(*,*) 'Hello'
!$OMP SECTION
  Write(*,*) 'Hi'
!$OMP SECTION
  Write(*,*) 'Bye'
!$OMP END SECTIONS

!$OMP SECTIONS (2)

OpenMP run-time library
The OpenMP Fortran API run-time library is a control and query tool for the parallel execution environment: a set of external procedures with clearly defined interfaces, delivered through the omp_lib Fortran module.
Main categories:
- Execution environment routines
- Lock routines
- Timing routines

Some OMP function examples
- OMP_get_num_threads: number of currently used threads
- OMP_get_thread_num: the identification number of the current thread
- Locking routines: a synchronization mechanism different from the OpenMP directives

The environment variables
Provide control of OpenMP behaviour at runtime:
- OMP_NUM_THREADS: number of threads to be used during execution of a parallel region
- OMP_SCHEDULE: controls the schedule of !$OMP DO loops declared with SCHEDULE(RUNTIME)
- OMP_DYNAMIC (boolean): dynamic adjustment of the number of threads by the operating system
- OMP_NESTED (boolean): what to do with nested parallelism

OpenMP constructs in IFS/Arpege/ALADIN code
!$OMP PARALLEL / !$OMP END PARALLEL
!$OMP DO / !$OMP END DO
!$OMP DO PRIVATE() SHARED()
!$OMP DO SCHEDULE(DYNAMIC,1)

Description of the exercise
Parallelize the serial Fortran 95 code (400 lines):
- a shallow water model with periodic LBC
- one main program, no subroutines
References:
- OpenMP Fortran interface specification 2.0
- Parallel Programming in Fortran 95 using OpenMP