
1 Programming with Shared Memory Specifying parallelism
Performance issues. ITCS4145/5145, Parallel Programming, B. Wilkinson. Oct 10, 2013.

2 We have seen OpenMP for specifying parallelism
The programmer decides which parts of the code should be parallelized and inserts compiler directives (pragmas). Whatever programming environment we use, if the programmer explicitly says what should be done in parallel, the issue for the programmer is deciding what can be done in parallel. Let us use generic language constructs for parallelism.
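As a quick reminder of that style, a minimal sketch (not from the slides; the array name and size are placeholders) in which the programmer asserts that the loop iterations are independent and a pragma hands the work to the OpenMP runtime:

    #include <stdio.h>
    #define N 1000

    int main(void) {
        static double a[N];
        int i;
        /* The pragma asserts the iterations are independent; the compiler
           and runtime create the threads and divide up the iterations. */
        #pragma omp parallel for
        for (i = 0; i < N; i++)
            a[i] = 2.0 * i;
        printf("a[N-1] = %f\n", a[N-1]);
        return 0;
    }

(Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp.)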

3 par Construct
For specifying concurrent statements:

    par {
        S1;
        S2;
        ...
        Sn;
    }

Says one can execute statements S1 to Sn simultaneously if resources are available, or execute them in any order, and still get the correct result.
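For comparison (and as a hint for the question on the next slide), one way the same idea can be expressed in OpenMP is with sections. This is only a sketch; S1(), S2() and S3() are placeholder functions standing in for the statements:

    void S1(void); void S2(void); void S3(void);   /* placeholders for S1..Sn */

    void run_in_parallel(void) {
        #pragma omp parallel sections
        {
            #pragma omp section
            S1();
            #pragma omp section
            S2();
            #pragma omp section
            S3();
        }
        /* All sections have completed before execution continues here. */
    }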

4 Question
How is this specified in OpenMP?

5 forall Construct
To start multiple similar processes together:

    forall (i = 0; i < n; i++) {
        S1;
        S2;
        ...
        Sm;
    }

Says each iteration of the body can be executed simultaneously if resources are available, or in any order, and still get the correct result. The statements of each instance of the body are executed in the order given. Each instance of the body uses a different value of i.

6 Example

    forall (i = 0; i < 5; i++)
        a[i] = 0;

clears a[0], a[1], a[2], a[3], and a[4] to zero concurrently.
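A minimal OpenMP sketch of the same array-clearing example (again previewing the question on the next slide):

    #include <stdio.h>

    int main(void) {
        int a[5];
        int i;
        /* Each iteration touches a different element, so the iterations
           may be executed concurrently, in any order. */
        #pragma omp parallel for
        for (i = 0; i < 5; i++)
            a[i] = 0;
        for (i = 0; i < 5; i++)
            printf("a[%d] = %d\n", i, a[i]);
        return 0;
    }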

7 Question
How is this specified in OpenMP?

8 Dependency Analysis
To identify which processes could be executed together. Example: we can see immediately in the code

    forall (i = 0; i < 5; i++)
        a[i] = 0;

that every instance of the body is independent of the other instances, so all instances can be executed simultaneously. However, it may not be that obvious. A parallelizing compiler needs an algorithmic way of recognizing dependencies.


11 Can use Bernstein's conditions at:
The machine instruction level inside the processor – there is logic to detect whether the conditions are satisfied (see a computer architecture course).
The process level, to detect whether two processes can be executed simultaneously (using the inputs and outputs of the processes).
Can be extended to more than two processes, but the number of conditions rises – every input/output combination must be checked. For three statements, how many conditions need to be checked?
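As an illustration of the pairwise test, a small sketch in C that represents each statement's input and output sets as bitmasks over the program's variables (the encoding and names are made up for the example, not part of the slides):

    #include <stdio.h>

    /* Bernstein's conditions for statements a and b: the intersections
       Ia&Ob, Ib&Oa and Oa&Ob must all be empty.
       Each set is a bitmask: bit k set means variable k is in the set. */
    typedef struct { unsigned in, out; } Stmt;

    int can_run_concurrently(Stmt a, Stmt b) {
        return (a.out & b.in)  == 0 &&   /* b does not read what a writes  */
               (b.out & a.in)  == 0 &&   /* a does not read what b writes  */
               (a.out & b.out) == 0;     /* a and b do not write same data */
    }

    int main(void) {
        /* Bit 0 stands for a[0], bit 1 for a[1] (illustrative encoding). */
        Stmt s1 = { 0, 1u << 0 };        /* a[0] = 0;  reads nothing */
        Stmt s2 = { 0, 1u << 1 };        /* a[1] = 0;  reads nothing */
        printf("%s\n", can_run_concurrently(s1, s2) ? "independent" : "dependent");
        return 0;
    }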

12 Shared Memory Programming
Performance Issues

13 Performance issues with Threads
A program might actually go slower when parallelized! Too many threads can significantly reduce program performance.

14 Reasons:
Work split among too many threads gives each thread too little work, so the overhead of starting and terminating threads swamps the useful work.
Too many concurrent threads incur overhead from having to share fixed hardware resources. The OS typically schedules threads round-robin with a time slice. Time slicing incurs overhead: registers must be saved, and there are effects on cache memory, virtual memory management, … .
Waiting to acquire a lock. When a thread is suspended while holding a lock, all threads waiting for the lock have to wait for that thread to restart.
Source: Multi-core Programming by S. Akhter and J. Roberts, Intel Press.

15 Some Strategies
Limit the number of runnable threads to the number of hardware threads. (As we will see later, we do not do this with GPUs.) For an n-core machine (not hyper-threaded), have n runnable threads; if hyper-threaded (with 2 virtual threads per core), double this. Can have more threads in total, but the others may be blocked.
Separate I/O threads from compute threads. I/O threads wait for external events.
Never hard-code the number of threads – leave it as a tuning parameter.

16 Let OpenMP optimize the number of threads
Implement a thread pool.
Implement a work-stealing approach in which each thread has a work queue; threads with no work take work from other threads.
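A minimal sketch of leaving the thread count as a tuning parameter rather than hard-coding it; the NTHREADS environment-variable name is just an example, and by default the OpenMP runtime is left to choose:

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Tuning parameter: override only if NTHREADS is set; otherwise
           let OpenMP pick (typically one thread per hardware thread). */
        const char *s = getenv("NTHREADS");
        if (s != NULL)
            omp_set_num_threads(atoi(s));

        #pragma omp parallel
        {
            #pragma omp single
            printf("running with %d threads\n", omp_get_num_threads());
        }
        return 0;
    }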

17 Critical Sections Serializing Code
High-performance programs should have as few critical sections as possible, because their use can serialize the code. Suppose all processes happen to reach their critical sections together. They will execute their critical sections one after the other. In that situation, the execution time becomes almost that of a single processor.
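A minimal sketch of the effect in OpenMP: here the entire loop body sits inside a critical section, so only one thread at a time makes progress and the parallel loop is effectively serialized (sum is just an illustrative shared variable; a reduction clause would avoid the problem):

    #include <stdio.h>

    int main(void) {
        long sum = 0;
        long i;
        #pragma omp parallel for
        for (i = 0; i < 1000000; i++) {
            /* The whole body is the critical section: threads execute
               it one after the other, serializing the loop. */
            #pragma omp critical
            sum += i;
        }
        printf("sum = %ld\n", sum);
        return 0;
    }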

18 Illustration

19 Shared Data in Systems with Caches
All modern computer systems have cache memory: high-speed memory closely attached to each processor for holding recently referenced data and code.
(Figure: processors, each with its own cache memory, connected to main memory.)

20 Cache coherence protocols
Update policy – copies of data in all caches are updated at the time one copy is altered, or
Invalidate policy – when one copy of data is altered, the same data in any other cache is invalidated (by resetting a valid bit in the cache). These copies are only updated when the associated processor next references the data.
A protocol is needed even on a single-processor system (Why?).
More details in a computer architecture class.

21 False Sharing
Different parts of a cache block are required by different processors, but not the same bytes. If one processor writes to one part of the block, copies of the complete block in other caches must be updated or invalidated, although the actual data is not shared.

22 Solution for False Sharing
Have the compiler alter the layout of the data stored in main memory, separating data that is altered by only one processor into different blocks.
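When done by hand, the usual fix looks like the following sketch: pad each thread's data out to a full cache block so that no two threads' counters share a block. The 64-byte block size and the MAX_THREADS value are assumptions for the example.

    #include <omp.h>
    #include <stdio.h>

    #define MAX_THREADS 8
    #define CACHE_BLOCK 64                    /* assumed cache block size */

    /* Pad each counter to a whole block so a write by one thread never
       invalidates the block holding another thread's counter. */
    struct padded_counter {
        long value;
        char pad[CACHE_BLOCK - sizeof(long)];
    };

    static struct padded_counter count[MAX_THREADS];

    int main(void) {
        #pragma omp parallel num_threads(MAX_THREADS)
        {
            int id = omp_get_thread_num();
            long i;
            for (i = 0; i < 10000000; i++)
                count[id].value++;            /* no false sharing */
        }
        long total = 0;
        int t;
        for (t = 0; t < MAX_THREADS; t++)
            total += count[t].value;
        printf("total = %ld\n", total);
        return 0;
    }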

23 Sequential Consistency
Formally defined by Lamport (1979): A multiprocessor is sequentially consistent if the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor occur in this sequence in the order specified by its program.
i.e., the overall effect of a parallel program is not changed by any arbitrary interleaving of instruction execution in time.

24 Writing a parallel program for a system that is known to be sequentially consistent – that is, each program is executed in program order and any interleaving of instructions gives the correct answer – enables us to reason about the result of the program.

25 Example

    Process P1:
        data = new;
        flag = TRUE;

    Process P2:
        while (flag != TRUE) { };
        data_copy = data;

We expect data_copy to be set to new because we expect data = new to be executed before flag = TRUE, and while (flag != TRUE) { } to be executed before data_copy = data. This ensures that process 2 reads the new data from process 1. Process 2 will simply wait for the new data to be produced.

26 Program Order
Sequential consistency refers to the "operations of each individual processor … occur in the order specified in its program", or program order. In the previous example, this order is that of the stored machine instructions to be executed.

27 Compiler Optimizations
The order of execution is not necessarily the same as the order of the corresponding high-level statements in the source program, as a compiler may reorder statements for improved performance.

28 High Performance Processors
Modern processors also usually reorder machine instructions internally during execution for increased performance.
This does not prevent a multiprocessor from being sequentially consistent, provided each processor only produces final results in program order – that is, retires values to registers in program order.
All multiprocessors will have the option of operating under the sequential consistency model, i.e. retiring values to registers in program order. However, this can severely limit compiler optimizations and processor performance.

29 Example of Processor Re-ordering

    Process P1:
        new = a * b;
        data = new;
        flag = TRUE;

    Process P2:
        while (flag != TRUE) { };
        data_copy = data;

The multiply machine instruction corresponding to new = a * b is issued for execution. The next instruction, corresponding to data = new, cannot be issued until the multiply has produced its result. However, the following statement, flag = TRUE, is completely independent, and a clever processor could start this operation before the multiply has completed, leading to the sequence:

30
    Process P1:
        new = a * b;
        flag = TRUE;
        data = new;

    Process P2:
        while (flag != TRUE) { };
        data_copy = data;

Now the while statement might complete before new is assigned to data, and the code would fail. To achieve the desired result, operate under the sequential consistency model, i.e. do not reorder instructions, forcing the multiply instruction above to complete before starting subsequent instructions that depend upon its result.

31 Relaxing Read/Write Orders
Processors may be able to relax the consistency in terms of the order of reads and writes of one processor with respect to those of another processor to obtain higher performance, with machine instructions provided to enforce consistency when needed. Examples of such machine instructions:
Memory barrier (MB) instruction – waits for all previously issued memory access instructions to complete before issuing any new memory operations.
Write memory barrier (WMB) instruction – as MB, but only for memory write operations, i.e. waits for all previously issued memory write instructions to complete before issuing any new memory write operations – which means memory reads could be issued after a memory write operation, overtake it, and complete before the write operation.
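In C these ordering constraints are normally expressed with the C11 atomics library rather than raw MB/WMB instructions. A sketch of the earlier flag/data handshake, where the release store and acquire load play the role of write and read barriers (the function names are just for illustration):

    #include <stdatomic.h>

    int data;                        /* ordinary shared data */
    atomic_int flag = 0;             /* synchronization flag */

    void producer(int new_value) {   /* plays the role of process P1 */
        data = new_value;
        /* Release store: earlier writes (data = new_value) become visible
           before the flag is seen as set - like a write memory barrier. */
        atomic_store_explicit(&flag, 1, memory_order_release);
    }

    int consumer(void) {             /* plays the role of process P2 */
        /* Acquire load: later reads cannot be moved before this load -
           like a read memory barrier. */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                        /* spin until flag is set */
        return data;                 /* guaranteed to see new_value */
    }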

32 Questions

