Steps in Creating a Parallel Program


Steps in Creating a Parallel Program (from last lecture)
Four steps: 1. Decomposition, 2. Assignment, 3. Orchestration, 4. Mapping.
- Done by the programmer or by system software (compiler, runtime, ...), vs. implicitly by a parallelizing compiler. The issues are the same, so assume the programmer does it all explicitly.
- Overall flow, at or above the communication abstraction:
  Computational Problem -> Parallel Algorithm -> Fine-grain Parallel Computations -> Tasks -> Processes -> Processors (+ execution order / scheduling)
- Decomposition (1): fine-grain parallel computations -> tasks. Find the maximum degree of parallelism (DOP) or concurrency via dependency analysis (dependency graph); decide the maximum number of tasks, how many tasks to use, and the task (grain) size.
- Assignment (2): tasks -> processes.
- Orchestration (3).
- Mapping (4) + scheduling: processes -> processors.
(PCA Chapter 2.3)

Example Motivating Problem: Simulating Ocean Currents / Heat Transfer (from last lecture)
- Model as two-dimensional n x n grids. (Figure: (a) cross sections; (b) spatial discretization of a cross section into an n x n grid; n grids in total.)
- Discretize in space and time; finer spatial and temporal resolution => greater accuracy.
- Many different computations per time step, O(n^2) per grid: set up and solve linear equations iteratively (Gauss-Seidel), i.e., synchronous iteration.
- Expression for updating each interior point:
  A[i,j] = 0.2 * (A[i,j] + A[i,j-1] + A[i-1,j] + A[i,j+1] + A[i+1,j])
- Maximum degree of parallelism (DOP) or concurrency: O(n^2) data-parallel computations per grid per iteration, when one task updates/computes one grid element; concurrency exists both across and within grid computations per iteration (n^2 parallel computations per grid x number of grids).
- Total O(n^3) computations per iteration: n^2 points to update per 2D grid x n grids.
(PCA Chapter 2.3; more reading: PP Chapter 11.3, pages 352-364)

Solution of a Linear System of Equations by Synchronous Iteration
- Iterations are sequential; the parallelism is within an iteration: O(n^2).
- For one 2D n x n grid:
  1. Setup: initialize all points.
  2. One iteration (sweep): iterate over (update) all n^2 interior points, O(n^2), using the expression for updating each interior point:
     A[i,j] = 0.2 * (A[i,j] + A[i,j-1] + A[i-1,j] + A[i,j+1] + A[i+1,j])
  3. Find the error (global difference).
  4. Converged? If the error is below the tolerance limit (threshold), or the maximum number of iterations has been reached: done. Otherwise, iterate again.

Parallelization of an Example Program
- Examine a simplified version of a piece of the Ocean simulation: an iterative (Gauss-Seidel) linear equation solver using synchronous iteration on one 2D grid of n x n = n^2 points (instead of 3D with n grids).
- Illustrate the parallel program in a low-level parallel language: C-like pseudo-code with simple extensions for parallelism.
- Expose the basic communication and synchronization primitives that must be supported by the parallel programming model.
- Three parallel programming models targeted for orchestration:
  1. Data Parallel
  2. Shared Address Space (SAS)
  3. Message Passing
(PCA Chapter 2.3)

2D Grid Solver Example
- Simplified version of the solver in the Ocean simulation: a 2D (n+2) x (n+2) grid (one grid, not 3D) with fixed boundary points and n^2 = n x n interior grid points; computation is O(n^2) per sweep (iteration).
- Gauss-Seidel (near-neighbor) sweeps (iterations) to convergence:
  1. The interior n-by-n points of the (n+2)-by-(n+2) grid are updated in each sweep; updates are done in place in the grid, and the difference from the previous value is computed.
  2. Partial differences are accumulated into a global difference at the end of every sweep (iteration).
  3. Check whether the error (global difference) has converged to within a tolerance parameter.
  4. If so, exit the solver; if not, do another sweep (iteration), or iterate for a set maximum number of iterations.

Pseudocode, Sequential Equation Solver
- Setup: initialize the grid points, then call the equation solver.
- Iterate until convergence; each pass is one iteration (sweep), O(n^2) computations:
  - Update each interior point from its old value and its neighbors, and accumulate the change into the global difference.
  - Done? Test whether the global difference is below TOL, the tolerance or threshold.
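A minimal C sketch of the sequential solver just described (the array layout, TOL value, and convergence test by average change are illustrative assumptions, not taken from the original pseudocode):

```c
#include <math.h>

#define TOL 1e-3f  /* illustrative tolerance (threshold) */

/* Gauss-Seidel solve over an (n+2) x (n+2) grid A: update the n x n
   interior points in place until the accumulated difference per sweep
   falls below TOL. */
void solve(float **A, int n)
{
    int done = 0;
    while (!done) {
        float diff = 0.0f;                      /* global difference for this sweep */
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                float temp = A[i][j];           /* old value */
                A[i][j] = 0.2f * (A[i][j] + A[i][j-1] + A[i-1][j] +
                                  A[i][j+1] + A[i+1][j]);
                diff += fabsf(A[i][j] - temp);  /* accumulate the change */
            }
        }
        if (diff / (n * n) < TOL)               /* converged? */
            done = 1;
    }
}
```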

Decomposition
- A simple way to identify concurrency is to look at loop iterations: dependency analysis; if not enough concurrency is found there, look further into the application.
- There is not much concurrency here at this level (all loops are sequential), so examine the fundamental dependencies, ignoring the loop structure.
- From the expression for updating each interior point,
  A[i,j] = 0.2 * (A[i,j] + A[i,j-1] + A[i-1,j] + A[i,j+1] + A[i+1,j]),
  each new (updated) value depends on the new values above and to the left of it, and on the old (not yet updated) values below and to the right of it.
- Result: concurrency O(n) along anti-diagonals, serialization O(n) along the diagonal.
- Two options:
  1. Retain the loop structure and use point-to-point synchronization. Problem: too many synchronization operations.
  2. Restructure the loops and use global synchronization (i.e., barriers along anti-diagonals). Problem: load imbalance and still too much synchronization.

Exploit Application Knowledge
- Reorder the grid traversal: red-black ordering. Decomposition: two parallel sweeps per iteration, each updating n^2/2 points in parallel.
  - The red sweep and the black sweep are each fully parallel, with a global synchronization between them (conservative but convenient).
  - A different ordering of updates may converge more quickly or more slowly.
- Ocean uses red-black; here we use a simpler, asynchronous ordering to illustrate: no red-black sweeps, simply ignore the dependencies within a single sweep (iteration), so all points can be updated in parallel. The sequential order is the same as the original; iterations may converge more slowly than with red-black ordering.
- Maximum (software) degree of parallelism: DOP = n^2 = O(n^2). Type of parallelism: data parallelism.
- One point update per task (n^2 parallel tasks): computation = 1, communication = 4, communication-to-computation ratio = 4.
- For a PRAM with O(n^2) processors: sweep = O(1), global difference = O(log2 n^2), thus T = O(log2 n^2).
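A minimal sketch of the red-black ordering in C with OpenMP (the pragmas stand in for the two data-parallel sweeps; the usual convention of coloring points by the parity of i+j is an assumption, since the slide does not spell it out):

```c
#include <math.h>
#include <omp.h>

/* One iteration as two fully parallel sweeps: first all "red" points
   ((i+j) even), then all "black" points ((i+j) odd). The implicit join
   at the end of each parallel for acts as the global synchronization
   between the two sweeps. */
float red_black_iteration(float **A, int n)
{
    float diff = 0.0f;
    for (int color = 0; color < 2; color++) {
        #pragma omp parallel for reduction(+:diff)
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++) {
                if ((i + j) % 2 != color) continue;  /* not this sweep's color */
                float temp = A[i][j];
                A[i][j] = 0.2f * (A[i][j] + A[i][j-1] + A[i-1][j] +
                                  A[i][j+1] + A[i+1][j]);
                diff += fabsf(A[i][j] - temp);
            }
    }
    return diff;  /* caller tests convergence */
}
```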

Decomposition Only
- The for_all loop construct implies parallel loop computations (for_all = in parallel). for_all leaves assignment to the system, but there is an implicit global synchronization at the end of each for_all loop.
- Fine grain: decomposition into elements; n^2 parallel tasks (computations), each updating one grid point. Degree of parallelism (DOP) = degree of concurrency = n^2 = O(n^2). On a PRAM: point updates O(1), global difference O(log2 n^2).
- Coarser grain: decomposition into rows; n parallel tasks, each updating one grid row (to decompose into rows, make the inner loop of the pseudocode, line 18, sequential). DOP = n. Per task: computation = O(n), communication = O(n) ~ 2n, communication-to-computation ratio = O(1) ~ 2.
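A sketch of the row-wise (coarser-grain) decomposition, using an OpenMP parallel for as a stand-in for the for_all construct (the pragma and the reduction clause are OpenMP's, not the deck's pseudocode):

```c
#include <math.h>
#include <omp.h>

/* Coarser-grain decomposition: one task per grid row. The outer loop
   plays the role of the pseudocode's for_all (parallel, with an implicit
   global synchronization at its end); the inner loop over a row stays
   sequential. For the fine-grain version (one task per point), both
   loops would be made parallel. */
float sweep_rows(float **A, int n)
{
    float diff = 0.0f;                    /* accumulated global difference */
    #pragma omp parallel for reduction(+:diff)
    for (int i = 1; i <= n; i++) {        /* task = update one grid row */
        for (int j = 1; j <= n; j++) {
            float temp = A[i][j];
            A[i][j] = 0.2f * (A[i][j] + A[i][j-1] + A[i-1][j] +
                              A[i][j+1] + A[i+1][j]);
            diff += fabsf(A[i][j] - temp);
        }
    }
    return diff;                          /* caller tests convergence */
}
```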

Assignment (task assignment): update n/p rows per task
- p = number of processes or processors (tasks or processes); each task updates n/p rows = n^2/p points.
- Static assignments (given the decomposition into rows):
  1. Block (strip) assignment of rows: row i is assigned to process floor(i / (n/p)), i.e., n/p contiguous rows per task.
  2. Cyclic assignment of rows: process i is assigned rows i, i+p, i+2p, and so on.
- Dynamic assignment (at runtime): get a row index, work on the row, get a new row, and so on.
- Static assignment into rows reduces concurrency from n^2 to p tasks; with one row per task, concurrency (DOP) = n and C-to-C = O(1).
- With block assignment, a task updates n/p rows = n^2/p elements: computation = O(n^2/p), communication = O(n) (~2n, the two border rows), communication-to-computation ratio = O(n / (n^2/p)) = O(p/n). A lower C-to-C ratio is better; block assignment reduces communication by keeping adjacent rows together (which is why block assignment is used).
- Next, examine orchestration under three programming models: 1. Data Parallel, 2. Shared Address Space (SAS), 3. Message Passing.
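A small C sketch of how the two static assignments map rows to processes (pid and nprocs follow the slides; the helper names and the assumption that p divides n evenly are illustrative):

```c
/* Block (strip) assignment: process pid owns n/p contiguous rows,
   which keeps adjacent rows together and reduces communication. */
void block_rows(int pid, int nprocs, int n, int *first_row, int *last_row)
{
    int rows_per_proc = n / nprocs;          /* assume nprocs divides n */
    *first_row = pid * rows_per_proc + 1;    /* interior rows are 1..n  */
    *last_row  = *first_row + rows_per_proc - 1;
}

/* Cyclic assignment: process pid owns rows pid+1, pid+1+p, pid+1+2p, ...
   (equivalently, row i belongs to process (i-1) mod p). */
int cyclic_owner(int row, int nprocs)
{
    return (row - 1) % nprocs;
}
```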

Data Parallel Solver
- nprocs = number of processes = p. Setup/initialize the points; block decomposition by row, n/p rows per processor.
- Sweep (iteration) in parallel: T = O(n^2/p).
- Add all local differences (REDUCE). The cost depends on the architecture and the implementation of REDUCE: best O(log2 p) using a binary-tree reduction, worst O(p) done sequentially.
- Thus O(n^2/p + log2 p) <= T(iteration) <= O(n^2/p + p).
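A plain-C sketch of the binary-tree REDUCE idea, operating on an array of per-process partial differences (in a real data-parallel or message-passing implementation, the pairwise adds at each level would run in parallel, giving the O(log2 p) cost):

```c
/* Combine p partial differences in ceil(log2 p) levels of pairwise adds,
   as a tree reduction would; partial[0] ends up holding the total. */
float tree_reduce(float *partial, int p)
{
    for (int stride = 1; stride < p; stride *= 2) {      /* log2(p) levels */
        for (int i = 0; i + stride < p; i += 2 * stride)
            partial[i] += partial[i + stride];            /* pairs combine */
    }
    return partial[0];
}
```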

Shared Address Space (SAS) Solver
- Single Program Multiple Data (SPMD), still MIMD. The array of grid points "A" is in shared memory; p tasks, each with n/p rows (n^2/p points) per process or task.
- Assignment is controlled by the values of the variables used as loop bounds and by the individual process ID (PID), i.e., which n/p rows to update for a task or process with a given process ID (as shown on the next slide).
- Structure of each iteration: setup, Barrier 1, sweep, Barrier 2, all processes test for convergence, Barrier 3. Not done? Sweep again (iterate). Done? Exit.

Pseudo-code, Parallel Equation Solver for Shared Address Space (SAS)
- Number of processors = p = nprocs; pid = process ID, 0 ... p-1. The main process or thread creates p-1 processes; the array "A" (all grid points) is shared.
- Setup of loop bounds (which rows?): private variables mymin = low row, mymax = high row for each process.
- Barrier 1 (start sweep). Sweep over the assigned rows: T = O(n^2/p).
- Mutual exclusion (lock) for the global difference: critical section, serialized update of the global difference, T = O(p).
- Barrier 2 (sweep done). Check/test convergence: all processes do it. Barrier 3. Done?
- Total per iteration: T(p) = O(n^2/p + p).
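A condensed C/pthreads sketch of this SAS orchestration: shared array A, private loop bounds derived from pid, a lock around the global-difference update, and barriers separating the sweep from the convergence test. Grid size, thread creation, and having only pid 0 set the done flag (the slides have every process test redundantly) are simplifying assumptions:

```c
#include <math.h>
#include <pthread.h>

#define N   1024                 /* illustrative grid size */
#define P   4                    /* nprocs */
#define TOL 1e-3f

float **A;                                     /* shared grid (allocated elsewhere) */
float   gdiff;                                 /* shared global difference */
int     done = 0;                              /* shared convergence flag */
pthread_mutex_t   diff_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_barrier_t bar;                         /* initialized to P in main() */

void *solve(void *arg)
{
    int pid   = *(int *)arg;
    int mymin = pid * (N / P) + 1;             /* which n/p rows? */
    int mymax = mymin + (N / P) - 1;

    while (!done) {
        float mydiff = 0.0f;                   /* private partial difference */
        if (pid == 0) gdiff = 0.0f;
        pthread_barrier_wait(&bar);            /* Barrier 1: start sweep */

        for (int i = mymin; i <= mymax; i++)
            for (int j = 1; j <= N; j++) {
                float temp = A[i][j];
                A[i][j] = 0.2f * (A[i][j] + A[i][j-1] + A[i-1][j] +
                                  A[i][j+1] + A[i+1][j]);
                mydiff += fabsf(A[i][j] - temp);
            }

        pthread_mutex_lock(&diff_lock);        /* critical section: global difference */
        gdiff += mydiff;
        pthread_mutex_unlock(&diff_lock);

        pthread_barrier_wait(&bar);            /* Barrier 2: sweep done */
        if (pid == 0 && gdiff / ((float)N * N) < TOL)
            done = 1;                          /* all threads read it after Barrier 3 */
        pthread_barrier_wait(&bar);            /* Barrier 3: convergence decision known */
    }
    return NULL;
}
```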

Notes on SAS Program
- SPMD = Single Program Multiple Data: not lockstep (i.e., still MIMD, not SIMD), and not even necessarily the same instructions.
- Assignment is controlled by the values of the variables used as loop bounds and by the process ID (pid), i.e., mymin and mymax: which n/p rows? A unique pid per process is used to control the assignment of blocks of rows to processes.
- The done condition (convergence test) is evaluated redundantly by all processes.
- The code that does the update is identical to the sequential program, but each process has a private mydiff variable. Why? Otherwise each process would have to enter the shared global-difference critical section once per point, n^2/p times per process (n^2 times in total), instead of just p times per iteration for all processes.
- The most interesting special operations needed are for synchronization:
  - Accumulations of the local differences (mydiff) into the shared global difference have to be mutually exclusive, using LOCK() ... UNLOCK().
  - Why the need for all the barriers?

Need for Mutual Exclusion
- diff = global difference (in shared memory); r2 = mydiff = local difference.
- Code each process executes:
    load the value of diff into register r1
    add register r2 to register r1
    store the value of register r1 into diff
- A possible interleaving (relative ordering of the operations in time):
    P1                                        P2
    r1 <- diff    {P1 gets 0 in its r1}
                                              r1 <- diff    {P2 also gets 0}
    r1 <- r1+r2   {P1 sets its r1 to 1}
                                              r1 <- r1+r2   {P2 sets its r1 to 1}
    diff <- r1    {P1 sets diff to 1}
                                              diff <- r1    {P2 also sets diff to 1}
- One of the two updates is lost: we need each set of operations to be atomic (mutually exclusive). Fix?

Mutual Exclusion
- Provided by LOCK-UNLOCK around the critical section: the set of operations we want to execute atomically (enter/exit, one task at a time in the critical section). The implementation of LOCK/UNLOCK must guarantee mutual exclusion; however, no ordering guarantee is provided.
- Can lead to significant serialization if contended (many tasks want to enter the critical section at the same time). This is especially costly since many accesses in the critical section are non-local.
- Another reason to use a private mydiff for partial accumulation: it reduces the number of times each process must enter the critical section to update the global difference: once per iteration (O(p) total accesses to the critical section) vs. n^2/p times per process without mydiff (O(n^2) total accesses by all processes).

Global (or Group) Event Synchronization
- BARRIER(nprocs): wait here until nprocs processes get here. Built using lower-level primitives (i.e., locks, semaphores).
- Global sum example: wait for all processes to accumulate before using the sum (e.g., the convergence test done by all processes).
- Often used to separate phases of computation:
    Process P_1            Process P_2            Process P_nprocs
    set up eqn system      set up eqn system      set up eqn system
    Barrier(name, nprocs)  Barrier(name, nprocs)  Barrier(name, nprocs)
    solve eqn system       solve eqn system       solve eqn system
    Barrier(name, nprocs)  Barrier(name, nprocs)  Barrier(name, nprocs)
    apply results          apply results          apply results
    Barrier(name, nprocs)  Barrier(name, nprocs)  Barrier(name, nprocs)
- A conservative form of preserving dependencies, but easy to use.
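A minimal C/pthreads sketch of using a barrier to separate the phases shown above (the phase functions are placeholders, not taken from the slides):

```c
#include <pthread.h>

#define NPROCS 4
pthread_barrier_t phase_bar;              /* initialized with NPROCS in main() */

/* Placeholder phase bodies; the real work would go here. */
static void set_up_eqn_system(int pid) { (void)pid; }
static void solve_eqn_system(int pid)  { (void)pid; }
static void apply_results(int pid)     { (void)pid; }

void *worker(void *arg)
{
    int pid = *(int *)arg;

    set_up_eqn_system(pid);
    pthread_barrier_wait(&phase_bar);     /* all set up before anyone solves  */

    solve_eqn_system(pid);
    pthread_barrier_wait(&phase_bar);     /* all solved before anyone applies */

    apply_results(pid);
    pthread_barrier_wait(&phase_bar);     /* all results applied              */
    return NULL;
}
```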

Point-to-Point (Ordering) Event Synchronization in SAS (not used or needed here)
- One process notifies another of an event so that it can proceed: needed for task ordering according to the data dependences between tasks. Common example: producer-consumer (bounded buffer).
- In concurrent programming on a uniprocessor: semaphores.
- In shared address space parallel programs: semaphores, or ordinary variables used as flags in the shared address space. With the flag initially 0, the producer sets the flag once it has computed A; the consumer busy-waits (spins) on the flag before computing with "A" as an operand. The consumer can either busy-wait (spin on the flag) or block the process (better for uniprocessors?).
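A minimal sketch of flag-based point-to-point synchronization in a shared address space. C11 atomics are used here to make the spin-wait well defined on modern hardware; the slides treat the flag as an ordinary shared variable:

```c
#include <stdatomic.h>

float A_shared;                       /* data produced by one process */
atomic_int flag = 0;                  /* event flag, initially 0      */

/* Producer: compute A, then signal the event. */
void producer(void)
{
    A_shared = 3.14f;                 /* ... compute A ... */
    atomic_store_explicit(&flag, 1, memory_order_release);
}

/* Consumer: busy-wait (spin) on the flag, then use A as an operand. */
float consumer(void)
{
    while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
        ;                             /* spin */
    return A_shared * 2.0f;           /* compute using A */
}
```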

Message Passing Grid Solver
- No shared address space: A can no longer be declared as a shared array; it must be composed logically from per-process private arrays (myA), usually allocated in accordance with the assignment of work. A process assigned a set of rows allocates them locally (n/p rows in local memory in this case).
- Explicit transfers (communication) of entire border or "ghost" rows between tasks are needed at the start of each iteration (as shown on the next slide).
- Structurally similar to SAS (e.g., SPMD), but the orchestration is different:
  - Data structures and data access/naming: local myA arrays vs. a shared array.
  - Communication: explicit, via send/receive pairs.
  - Synchronization: implicit, via the send/receive pairs.

Message Passing Grid Solver
- nprocs = number of processes = number of processors = p; same block assignment as before. Each process (pid = 0 ... nprocs-1) holds n/p rows (n^2/p points) of the grid in its private array myA and computes n^2/p elements per task.
- Each task exchanges ghost (border) rows with its neighbors: send a row / receive a row at each boundary (as shown on the next slide).
- Per iteration: parallel computation = O(n^2/p); communication of rows = O(n); communication of the local DIFF = O(p). Thus computation = O(n^2/p), communication = O(n + p), and communication-to-computation ratio = O((n+p)/(n^2/p)) = O((np + p^2)/n^2).
- Time per iteration: T = T(computation) + T(communication) = O(n^2/p + n + p).

Pseudo-code, Parallel Equation Solver for Message Passing
- Number of processors = p = nprocs; create p-1 processes; each process initializes its local rows myA.
- Before the start of each iteration: exchange ghost rows (send/receive); each process sends one or two ghost rows and receives one or two ghost rows. Communication O(n).
- Sweep over the n/p local rows = n^2/p points per task, accumulating the local difference mydiff: T = O(n^2/p).
- Each process sends mydiff to pid 0 and receives the test result from pid 0. Pid 0 calculates the global difference, tests for convergence, and sends the test result to all processes (only pid 0 tests for convergence). Communication O(p).
- Done? Total per iteration: T = O(n^2/p + n + p).
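A condensed MPI sketch of one iteration of this message-passing solver. The slides' pseudocode uses plain sends/receives for the ghost rows; MPI_Sendrecv is used here to sidestep the deadlock issue discussed two slides below. Grid size, tags, and the flat myA layout are illustrative assumptions:

```c
#include <math.h>
#include <mpi.h>

#define N   1024                       /* illustrative global grid size */
#define TOL 1e-3f

/* One iteration for one process: exchange ghost rows, sweep the local
   n/p rows, then let pid 0 combine the differences and tell everyone
   whether to stop. myA has nrows+2 rows of N+2 floats; row 0 and row
   nrows+1 are the ghost (border) rows. */
int solver_iteration(float myA[][N + 2], int nrows, int pid, int nprocs)
{
    int up = pid - 1, down = pid + 1;
    MPI_Status st;

    /* Exchange ghost rows with neighbors; MPI_Sendrecv pairs each send
       with its matching receive so neighboring processes cannot deadlock. */
    if (up >= 0)
        MPI_Sendrecv(myA[1],         N + 2, MPI_FLOAT, up,   0,
                     myA[0],         N + 2, MPI_FLOAT, up,   0,
                     MPI_COMM_WORLD, &st);
    if (down < nprocs)
        MPI_Sendrecv(myA[nrows],     N + 2, MPI_FLOAT, down, 0,
                     myA[nrows + 1], N + 2, MPI_FLOAT, down, 0,
                     MPI_COMM_WORLD, &st);

    /* Sweep over the local n/p rows, accumulating the local difference. */
    float mydiff = 0.0f;
    for (int i = 1; i <= nrows; i++)
        for (int j = 1; j <= N; j++) {
            float temp = myA[i][j];
            myA[i][j] = 0.2f * (myA[i][j] + myA[i][j-1] + myA[i-1][j] +
                                myA[i][j+1] + myA[i+1][j]);
            mydiff += fabsf(myA[i][j] - temp);
        }

    /* Convergence: everyone sends mydiff to pid 0; only pid 0 tests and
       sends the result back to all processes. */
    int done = 0;
    if (pid == 0) {
        float gdiff = mydiff, partial;
        for (int src = 1; src < nprocs; src++) {
            MPI_Recv(&partial, 1, MPI_FLOAT, src, 1, MPI_COMM_WORLD, &st);
            gdiff += partial;
        }
        done = (gdiff / ((float)N * N) < TOL);
        for (int dst = 1; dst < nprocs; dst++)
            MPI_Send(&done, 1, MPI_INT, dst, 2, MPI_COMM_WORLD);
    } else {
        MPI_Send(&mydiff, 1, MPI_FLOAT, 0, 1, MPI_COMM_WORLD);
        MPI_Recv(&done, 1, MPI_INT, 0, 2, MPI_COMM_WORLD, &st);
    }
    return done;                       /* nonzero when converged */
}
```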

Notes on Message Passing Program
- Use of ghost (border) rows. Receive does not transfer data, send does: sender-initiated (i.e., two-sided communication), unlike SAS, which is usually receiver-initiated, with a load fetching the data (i.e., one-sided communication).
- Communication is done at the beginning of each iteration (the exchange of ghost rows), explicitly and in whole rows, not one element at a time.
- The core is similar to the SAS version, but indices/bounds are in local space rather than global space.
- Synchronization is through sends and blocking receives (implicit): update of the global difference and event synchronization for the done condition. Locks and barriers could be implemented with messages.
- Only one process (pid = 0) checks convergence (the done condition).
- REDUCE and BROADCAST library calls can be used to simplify the code: compute the global difference with REDUCE, then broadcast the convergence test result to all processes (tell all tasks if done).
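The REDUCE/BROADCAST simplification mentioned above, sketched with MPI (here a single MPI_Allreduce both sums the local differences and delivers the result to every process, so no separate broadcast is needed; the function name is illustrative):

```c
#include <mpi.h>

/* Replace the explicit send/receive convergence exchange with one
   collective call: every process obtains the summed global difference. */
float global_difference(float mydiff)
{
    float gdiff = 0.0f;
    MPI_Allreduce(&mydiff, &gdiff, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    return gdiff;                 /* all processes can now test convergence */
}
```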

Message-Passing Modes: Send and Receive Alternatives
- Point-to-point communication; the functionality can be extended: stride, scatter-gather, groups. All can be implemented using send/receive primitives.
- Semantic flavors, based on when control is returned to the caller. They affect when data structures or buffers can be reused at either end, affect event synchronization (mutual exclusion is implied: only one process touches the data), and affect ease of programming and performance:
  - Synchronous: send waits until the message is actually received. Easy to create deadlock.
  - Asynchronous, blocking: send waits until the message has been sent; receive waits until the message has been received.
  - Asynchronous, non-blocking (immediate): both return immediately.
- Synchronous messages provide built-in synchronization through the send/receive match; separate event synchronization is needed with asynchronous messages.
- With synchronous messages, our code is deadlocked. Fix? Use asynchronous blocking sends/receives.

Message-Passing Modes: Send and Receive Alternatives
- Synchronous message passing: a process X executing a synchronous send to process Y has to wait until process Y has executed the matching receive from X. In MPI: MPI_Ssend() (the receiving side uses the ordinary MPI_Recv()).
- Asynchronous message passing:
  - Blocking send/receive (the most common type): a blocking send is executed when a process reaches it, without waiting for a corresponding receive, and returns once the message has been sent; a blocking receive is executed when a process reaches it and only returns after the message has been received. In MPI: MPI_Send() / MPI_Recv().
  - Non-blocking send/receive: a non-blocking send is executed when reached by the process, without waiting for a corresponding receive; a non-blocking receive is executed when a process reaches it, without waiting for a corresponding send. Both return immediately. In MPI: MPI_Isend() / MPI_Irecv().
- MPI = Message Passing Interface.
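A small MPI sketch related to the deadlock fix discussed above. The slides suggest asynchronous blocking sends/receives; another common fix, shown here, posts a non-blocking receive before the send so that two neighboring processes can exchange ghost rows in either order (buffer names and row length are illustrative):

```c
#include <mpi.h>

#define N 1024                      /* illustrative row length (N+2 with boundaries) */

/* Exchange one ghost row with the neighbor above: post the receive first
   (non-blocking), then send, then wait for the receive to complete. If
   both neighbors instead used MPI_Ssend before any receive was posted,
   the exchange would deadlock. */
void exchange_with_up(float *top_row, float *ghost_row, int up_rank)
{
    MPI_Request req;
    MPI_Irecv(ghost_row, N + 2, MPI_FLOAT, up_rank, 0, MPI_COMM_WORLD, &req);
    MPI_Send(top_row,    N + 2, MPI_FLOAT, up_rank, 0, MPI_COMM_WORLD);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
```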

Orchestration: Summary
- Shared address space:
  - Shared and private data are explicitly separate.
  - Communication is implicit in access patterns.
  - There is no correctness need for data distribution.
  - Synchronization is via atomic operations on shared data; it is explicit and distinct from data communication.
- Message passing:
  - Data distribution among local address spaces is needed (the myA arrays; no shared address space).
  - No explicit shared structures (they are implicit in the communication patterns).
  - Communication is explicit.
  - Synchronization is implicit in communication (at least in the synchronous case); mutual exclusion is implied.

Correctness in Grid Solver Program
- Decomposition and assignment are similar in SAS and message passing; orchestration is different:
  - Data structures and data access/naming: a shared array vs. private per-process arrays with ghost rows.
  - Communication: implicit via shared accesses vs. explicit via send/receive pairs (i.e., of ghost rows).
  - Synchronization: explicit lock/unlock and barriers vs. implicit in the send/receive pairs.
- Requirements for performance are another story ...