
1 Creating Coarse-grained Parallelism for Loop Nests Chapter 6, Sections 6.3 through 6.9 Yaniv Carmeli

2 Last time … Single-loop methods: Privatization, Loop Distribution, Alignment, Loop Fusion

3 This time … Perfect Loop Nests: Loop Interchange, Loop Selection, Loop Reversal, Loop Skewing, Profitability-Based Methods

4 Imperfectly Nested Loops: Multilevel Loop Fusion, Parallel Code Generation. Packaging Parallelism: Strip Mining, Pipeline Parallelism, Guided Self-Scheduling

5 Loop Interchange: Reminder
Theorem 5.2: A permutation of the loops in a perfect nest is legal if and only if the direction matrix, after the same permutation is applied to its columns, has no ">" direction as the leftmost non-"=" direction in any row.
(The slide illustrates this with a direction matrix shown under the column orders I J K, J I K, and J K I; in the last ordering one row has ">" as its leftmost non-"=" entry, so that permutation is illegal.)
© H. Kermany & M. Shalem
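To make the legality test concrete, here is a small Python sketch (not from the chapter; the list-of-rows representation and the function name are my own):

    # Sketch: Theorem 5.2 as a check on a direction matrix.
    # Each row lists the direction ('<', '=', '>') for every loop, outermost first.
    def permutation_is_legal(direction_matrix, perm):
        """True if applying perm to the columns leaves no row whose
        leftmost non-'=' direction is '>'."""
        for row in direction_matrix:
            permuted = [row[p] for p in perm]
            for d in permuted:
                if d == '=':
                    continue
                if d == '>':
                    return False   # the dependence would be reversed
                break              # leftmost non-'=' is '<': this row is fine
        return True

    # Example: one dependence with directions (<, =) in loop order (I, J);
    # interchanging to (J, I) gives (=, <), which is still legal.
    print(permutation_is_legal([['<', '=']], [1, 0]))   # True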

6 Loop Interchange
DO I = 1, N
  DO J = 1, M
    A(I+1, J) = A(I, J) + B(I, J)
  ENDDO
ENDDO
D = ( <, = )   Vectorization: OK   Parallelization: Problematic (the outer I loop carries the dependence)

After interchanging the loops:
PARALLEL DO J = 1, M
  DO I = 1, N
    A(I+1, J) = A(I, J) + B(I, J)
  ENDDO
END PARALLEL DO
Vectorization: Bad   Parallelization: Good

7 Loop Interchange (Cont.)
DO I = 1, N
  DO J = 1, M
    A(I+1, J+1) = A(I, J) + B(I, J)
  ENDDO
ENDDO
D = ( <, < )
Loop interchange doesn't work here, as both loops carry the dependence! The best we can do:
DO I = 1, N
  PARALLEL DO J = 1, M
    A(I+1, J+1) = A(I, J) + B(I, J)
  END PARALLEL DO
ENDDO
When can a loop be moved to the outermost position in the nest and be guaranteed to be parallel?

8 Loop Interchange (Cont.)
Theorem: In a perfect nest of loops, a particular loop can be parallelized at the outermost level if and only if the column of the direction matrix for that loop contains only "=" entries.
Proof.
If: a column with only "=" entries represents a loop that can be interchanged to the outermost position and carries no dependence.
Only if: suppose there is a non-"=" entry in that column.
  If it is ">", the loop cannot be interchanged to the outermost position (the dependence would be reversed).
  If it is "<", the loop can be interchanged, but it still carries the dependence, so it cannot be parallelized anyway.

9 Loop Interchange (Cont.)
Working with the direction matrix:
1. Move a loop whose column contains only "=" entries to the outermost position and parallelize it; remove its column from the matrix.
2. Otherwise, move the loop with the most "<" entries to the next outermost position and sequentialize it; eliminate its column and any rows representing dependences it now carries.
3. Repeat from step 1.
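The three steps can be phrased as a small procedure over the direction matrix. The Python sketch below is only an illustration of that selection loop; the data representation and the tie-breaking are my own assumptions, not the chapter's.

    # Sketch of the slide's heuristic: parallelize loops whose column is all '=',
    # otherwise sequentialize the loop with the most '<' entries.
    def select_loops(direction_matrix, loop_names):
        rows = [list(r) for r in direction_matrix]
        cols = list(range(len(loop_names)))
        order = []                                    # (loop, 'parallel' | 'sequential')
        while cols:
            # Step 1: any loop whose column is all '=' can be parallelized.
            par = [c for c in cols if all(r[c] == '=' for r in rows)]
            if par:
                c = par[0]
                order.append((loop_names[c], 'parallel'))
            else:
                # Step 2: sequentialize the loop with the most '<' entries,
                # and drop the rows (dependences) it now carries.
                c = max(cols, key=lambda col: sum(r[col] == '<' for r in rows))
                order.append((loop_names[c], 'sequential'))
                rows = [r for r in rows if r[c] != '<']
            cols.remove(c)
        return order

    # Example: the three-dependence nest from the next slide.
    D = [['<', '=', '='], ['=', '=', '<'], ['<', '<', '<']]
    print(select_loops(D, ['I', 'J', 'K']))
    # -> I sequential (it carries rows 1 and 3), then J parallel, then K sequential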

10 Loop Interchange (Cont.) Example:
DO I = 1, N
  DO J = 1, M
    DO K = 1, L
      A(I+1, J, K) = A(I, J, K) + X1
      B(I, J, K+1) = B(I, J, K) + X2
      C(I+1, J+1, K+1) = C(I, J, K) + X3
    ENDDO
  ENDDO
ENDDO
Direction matrix (rows for the A, B, and C statements):
< = =
= = <
< < <
After selection (I sequential outermost, J parallel, K sequential):
DO I = 1, N
  PARALLEL DO J = 1, M
    DO K = 1, L
      A(I+1, J, K) = A(I, J, K) + X1
      B(I, J, K+1) = B(I, J, K) + X2
      C(I+1, J+1, K+1) = C(I, J, K) + X3
    ENDDO
  END PARALLEL DO
ENDDO

11 Loop Selection – Optimal?
Is the approach of selecting the loop with the most "<" directions optimal?
For the direction matrix shown on the slide, that rule results in NO parallelization, while other selections would allow parallelization.
Is it possible to derive a selection heuristic that provides optimal code?

12 Loop Selection
The problem of loop selection is NP-complete, so loop selection is best done by a heuristic:
favor the selection of loops that must be sequentialized before parallelism can be uncovered.

13 Loop Selection
Goal: generate the most parallelism with adequate granularity. The key is to select the proper loops to run in parallel.
Informal parallel code generation strategy:
1. While there are loops that can be run in parallel, move them to the outermost position and parallelize them.
2. Select a sequential loop, run it sequentially, and find what new parallelism may have been revealed.

14 Heuristic Loop Selection (Cont.)
Example of the principles involved in heuristic loop selection:
DO I = 2, N
  DO J = 2, M
    DO K = 2, L
      A(I, J, K) = A(I, J-1, K) + A(I-1, J, K-1) + A(I, J+1, K+1) + A(I-1, J, K+1)
    ENDDO
  ENDDO
ENDDO
Direction matrix (one row per right-hand-side reference, in order):
= < =
< = <
= < <
< = >
The I-loop must be sequentialized because of the fourth dependence; the J-loop must be sequentialized because of the first dependence.
DO J = 2, M
  DO I = 2, N
    PARALLEL DO K = 2, L
      A(I, J, K) = A(I, J-1, K) + A(I-1, J, K-1) + A(I, J+1, K+1) + A(I-1, J, K+1)
    END PARALLEL DO
  ENDDO
ENDDO

15 Loop Reversal
Using loop reversal to create coarse-grained parallelism. Consider:
DO I = 2, N+1
  DO J = 2, M+1
    DO K = 1, L
      A(I, J, K) = A(I, J-1, K+1) + A(I-1, J, K+1)
    ENDDO
  ENDDO
ENDDO
Direction matrix:
= < >
< = >
Reversing the K loop turns the ">" entries into "<":
DO I = 2, N+1
  DO J = 2, M+1
    DO K = L, 1, -1
      A(I, J, K) = A(I, J-1, K+1) + A(I-1, J, K+1)
    ENDDO
  ENDDO
ENDDO
Now K can be interchanged to the outermost position, where it carries both dependences:
DO K = L, 1, -1
  DO I = 2, N+1
    DO J = 2, M+1
      A(I, J, K) = A(I, J-1, K+1) + A(I-1, J, K+1)
    ENDDO
  ENDDO
ENDDO
With K carrying everything, the I and J loops can be run in parallel:
DO K = L, 1, -1
  PARALLEL DO I = 2, N+1
    PARALLEL DO J = 2, M+1
      A(I, J, K) = A(I, J-1, K+1) + A(I-1, J, K+1)
    END PARALLEL DO
  END PARALLEL DO
ENDDO

16 Loop Skewing: Reminder
Direction matrix:
< =
= <
(The slide shows the iteration space S(I, J) for I, J = 1..4; note that there are diagonal lines of parallelism: all iterations with the same I + J are independent.)
© H. Kermany & M. Shalem

17 Loop Skewing
DO I = 2, N+1
  DO J = 2, M+1
    DO K = 1, L
      A(I, J, K) = A(I, J-1, K) + A(I-1, J, K)
      B(I, J, K+1) = B(I, J, K) + A(I, J, K)
    ENDDO
  ENDDO
ENDDO
Direction matrix:        Distance matrix:
= < =                    0 1 0
< = =                    1 0 0
= = <                    0 0 1
= = =                    0 0 0
Skewing with k = K + I + J yields:
DO I = 2, N+1
  DO J = 2, M+1
    DO k = I+J+1, I+J+L
      A(I, J, k-I-J) = A(I, J-1, k-I-J) + A(I-1, J, k-I-J)
      B(I, J, k-I-J+1) = B(I, J, k-I-J) + A(I, J, k-I-J)
    ENDDO
  ENDDO
ENDDO
Direction matrix after skewing:
= < <
< = <
= = <
= = =
After interchanging k to the outermost position, the I and J loops can run in parallel:
DO k = 5, N+M+1
  PARALLEL DO I = MAX(2, k-M-L-1), MIN(N+1, k-L-2)
    PARALLEL DO J = MAX(2, k-I-L), MIN(M+1, k-I-1)
      A(I, J, k-I-J) = A(I, J-1, k-I-J) + A(I-1, J, k-I-J)
      B(I, J, k-I-J+1) = B(I, J, k-I-J) + A(I, J, k-I-J)
    END PARALLEL DO
  END PARALLEL DO
ENDDO
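One way to see why the skew k = K + I + J helps is to apply it to the distance vectors directly: the skewed component becomes the sum of the original components, so any dependence carried by I or J is also carried (with positive distance) by the new k loop. The helper below is only a sketch of that arithmetic; the names are mine.

    # Sketch: effect of the skew k = K + I + J on distance vectors (d_I, d_J, d_K).
    def skew_distance(d):
        d_i, d_j, d_k = d
        return (d_i, d_j, d_i + d_j + d_k)   # the new k-distance is the sum

    # The four dependences of the skewing example:
    for dist in [(0, 1, 0), (1, 0, 0), (0, 0, 1), (0, 0, 0)]:
        print(dist, '->', skew_distance(dist))
    # (0,1,0)->(0,1,1)  (1,0,0)->(1,0,1)  (0,0,1)->(0,0,1)  (0,0,0)->(0,0,0)
    # After interchanging k outward it carries every loop-carried dependence,
    # so the I and J loops can run in parallel for each fixed k.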

18 Loop Skewing – Main Benefits
Eliminates ">" entries in the matrix.
Transforms the skewed loop in such a way that, after it is interchanged outward, it carries all the dependences formerly carried by the loop with respect to which it was skewed.

19 Loop Skewing – Drawback
The resulting parallelism is usually unbalanced (the parallel loop executes a variable number of iterations on each pass).
As we shall see, this is not really a problem for asynchronous parallelism (unlike vectorization).

20 Loop Skewing (Cont.)
Updated strategy:
1. Parallelize the outermost loop if possible.
2. Sequentialize at most one outer loop to find parallelism in the next loop.
3. If 1 and 2 fail, try skewing.
4. If 3 fails, sequentialize the loop that can be moved to the outermost position and covers the most other loops.

21 In practice, these heuristics sometimes produce much worse execution times than we would have gotten by parallelizing fewer or different loops.

22 Profitability-Based Methods Static performance estimation function  No need to be accurate, just good at selecting the better of two alternatives Key considerations  Cost of memory references  Sufficiency of granularity

23 Profitability-Based Methods (Cont.)
It is impractical to choose from all possible arrangements, so consider only a subset of the possible code arrangements, based on properties of the cost function.
In our case: consider only which loop is placed innermost.

24 Profitability-Based Methods (Cont.)
A possible cost evaluation heuristic:
1. Subdivide all the references in the loop body into reference groups. Two references are in the same group if:
 - there is a loop-independent dependence between them, or
 - there is a constant-distance loop-carried dependence between them.

25 Profitability-Based Methods (Cont.)
A possible cost evaluation heuristic (cont.):
2. Determine whether subsequent accesses to the same reference group are
 - loop invariant: cost = 1
 - unit stride: cost = number of iterations / cache line size
 - non-unit stride: cost = number of iterations

26 Profitability-Based Methods (Cont.)
A possible cost evaluation heuristic (cont.):
3. Compute the loop cost for each candidate innermost loop (a reconstruction of the formula follows below).
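The formula itself is missing here; a standard way to write the loop cost (a hedged reconstruction, in the spirit of the chapter rather than a verbatim quote) charges each reference group its per-traversal cost for the candidate innermost loop, multiplied by the trip counts of all the other loops:

    \mathrm{LoopCost}(\ell) \;=\; \sum_{R \,\in\, \mathrm{RefGroups}} \mathrm{Cost}(R, \ell) \;\times\; \prod_{\ell' \neq \ell} N_{\ell'}

where Cost(R, ℓ) is the per-group cost from step 2 (1, trip count / cache line size, or trip count) and N_{ℓ'} is the trip count of loop ℓ'. The table two slides below is exactly this sum for the matrix-multiply example.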

27 Profitability-Based Methods: Example
DO I = 1, N
  DO J = 1, N
    DO K = 1, N
      C(I, J) = C(I, J) + A(I, K) * B(K, J)
    ENDDO
  ENDDO
ENDDO

28 Profitability-Based Methods: Example
DO I = 1, N
  DO J = 1, N
    DO K = 1, N
      C(I, J) = C(I, J) + A(I, K) * B(K, J)
    ENDDO
  ENDDO
ENDDO
Cost of each candidate innermost loop (L = cache line size):
Innermost loop | C(I,J) | A(I,K) | B(K,J) | Cost
I              | N/L    | N/L    | 1      | 2N^3/L + N^2      (best)
J              | N      | 1      | N      | 2N^3 + N^2        (worst)
K              | 1      | N      | N/L    | N^3(1 + 1/L) + N^2

29 Profitability-Based Methods: Example
Order the loops from innermost to outermost by increasing loop cost: I, K, J.
We can't always achieve the desired loop order (some permutations are illegal); try to find the legal permutation closest to the desired one.
DO J = 1, N
  DO K = 1, N
    DO I = 1, N
      C(I, J) = C(I, J) + A(I, K) * B(K, J)
    ENDDO
  ENDDO
ENDDO

30 Profitability-Based Methods (Cont.)
Goal: given a desired loop order and a direction matrix for a loop nest, find the legal permutation closest to the desired one.
Method: until there are no more loops, choose, from all the loops that can legally be interchanged to the outermost remaining position, the one that is outermost in the desired permutation; then drop that loop from consideration (a sketch follows below).
It can be shown that if a legal permutation with the desired innermost loop in the innermost position exists, this algorithm will find such a permutation.
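A sketch of this greedy ordering as code; the direction-matrix representation and helper names are my own, and the per-step legality test is the one implied by Theorem 5.2 (no ">" in the chosen column among dependences not yet carried):

    # Sketch: build the legal loop order closest to a desired order.
    def closest_legal_order(direction_matrix, desired):
        """desired: loop indices from outermost to innermost in the preferred
        (cost-based) order.  Returns a legal order as close to it as possible."""
        rows = [list(r) for r in direction_matrix]    # dependences not yet carried
        remaining = list(desired)
        order = []
        while remaining:
            # loops that may legally move to the outermost remaining position:
            # no still-uncarried dependence has '>' in their column
            legal = [c for c in remaining if all(r[c] != '>' for r in rows)]
            if not legal:
                raise ValueError('no legal permutation from this point')
            c = legal[0]                              # outermost in the desired order
            order.append(c)
            rows = [r for r in rows if r[c] != '<']   # now carried by loop c
            remaining.remove(c)
        return order

    # Matrix-multiply example (loops I=0, J=1, K=2): the only dependence is the
    # K-carried reduction on C(I,J), so every order is legal and the desired
    # order J, K, I (outermost to innermost) is kept.
    print(closest_legal_order([['=', '=', '<']], [1, 2, 0]))   # -> [1, 2, 0]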

31 Profitability-Based Methods (Cont.)
DO J = 1, N
  DO K = 1, N
    DO I = 1, N
      C(I, J) = C(I, J) + A(I, K) * B(K, J)
    ENDDO
  ENDDO
ENDDO
For performance reasons, the compiler may mark the inner loop as "not meant for parallelization": running it sequentially exploits locality in the memory accesses.

32 Multilevel Loop Fusion Commonly used for imperfect loop nests Used after maximal loop distribution

33 Multilevel Loop Fusion
DO I = 1, N
  DO J = 1, M
    A(I, J+1) = A(I, J) + C
    B(I+1, J) = B(I, J) + D
  ENDDO
ENDDO
After maximal distribution:
DO I = 1, N
  DO J = 1, M
    A(I, J+1) = A(I, J) + C
  ENDDO
ENDDO
DO I = 1, N
  DO J = 1, M
    B(I+1, J) = B(I, J) + D
  ENDDO
ENDDO
After parallelization, each nest is better with a different outer loop – we can't fuse!
PARALLEL DO I = 1, N
  DO J = 1, M
    A(I, J+1) = A(I, J) + C
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    B(I+1, J) = B(I, J) + D
  ENDDO
END PARALLEL DO

34 Multilevel Loop Fusion (Cont.)
DO I = 1, N
  DO J = 1, M
    A(I, J) = A(I, J) + X
    B(I+1, J) = A(I, J) + B(I, J)
    C(I, J+1) = A(I, J) + C(I, J)
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
ENDDO
After maximal distribution:
DO I = 1, N
  DO J = 1, M
    A(I, J) = A(I, J) + X
  ENDDO
ENDDO
DO I = 1, N
  DO J = 1, M
    B(I+1, J) = A(I, J) + B(I, J)
  ENDDO
ENDDO
DO I = 1, N
  DO J = 1, M
    C(I, J+1) = A(I, J) + C(I, J)
  ENDDO
ENDDO
DO I = 1, N
  DO J = 1, M
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
ENDDO
Each nest is labeled with the loop it prefers to run in parallel: A (i or j), B (j), C (i), D (j); the dependences give the edges A→B, A→C, B→D, C→D.
Which loop should be fused into the A loop?

35 Multilevel Loop Fusion (Cont.)
Fusing the A loop with the B loop (both can run parallel in J) – 2 barriers:
PARALLEL DO J = 1, M
  DO I = 1, N
    A(I, J) = A(I, J) + X
    B(I+1, J) = A(I, J) + B(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO I = 1, N
  DO J = 1, M
    C(I, J+1) = A(I, J) + C(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
END PARALLEL DO
Remaining graph: AB (j), C (i), D (j) – two changes of parallel loop, hence 2 barriers.

36 Multilevel Loop Fusion (Cont.)
Fusing the A loop with the C loop (both can run parallel in I) – 1 barrier:
PARALLEL DO I = 1, N
  DO J = 1, M
    A(I, J) = A(I, J) + X
    C(I, J+1) = A(I, J) + C(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    B(I+1, J) = A(I, J) + B(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
END PARALLEL DO
Graph: AC (i), B (j), D (j). Now we can also fuse B and D:
PARALLEL DO I = 1, N
  DO J = 1, M
    A(I, J) = A(I, J) + X
    C(I, J+1) = A(I, J) + C(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    B(I+1, J) = A(I, J) + B(I, J)
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
END PARALLEL DO
Graph: AC (i), BD (j) – only 1 barrier.

37 Multilevel Loop Fusion (Cont.)
Decision making needs look-ahead.
Strategy: fuse with the successor loop that cannot be fused with any of its own successors.
Rationale: if that loop can't be fused with its successors, a barrier after it will be formed anyway. In the example, C cannot be fused with D, so a barrier there is inevitable – fuse A with C (see the sketch below).
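A sketch of that look-ahead rule over the loop-nest graph from the example. The encoding (each nest's acceptable parallel outer loops as a set, plus successor edges) is my own simplification for illustration, not the book's algorithm.

    # Sketch: choose which successor to fuse into the current nest.
    # Prefer a compatible successor that cannot be fused with any of ITS
    # successors -- a barrier after it is inevitable anyway.
    def pick_fusion_partner(node, types, succ):
        """types[n]: set of acceptable parallel outer loops of nest n
           succ[n] : successors of n in the fusion dependence graph"""
        compatible = [s for s in succ[node] if types[s] & types[node]]
        for s in compatible:
            # if s cannot fuse with any of its own successors,
            # a barrier after s is inevitable: fuse it into `node` now
            if all(not (types[s] & types[t]) for t in succ[s]):
                return s
        return compatible[0] if compatible else None

    types = {'A': {'i', 'j'}, 'B': {'j'}, 'C': {'i'}, 'D': {'j'}}
    succ  = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
    print(pick_fusion_partner('A', types, succ))   # -> 'C' (one barrier, not two)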

38 Parallel Code Generation
Code generation scheme Parallelize(l, D):
1. Try the methods for perfect nests (loop interchange, loop skewing, loop reversal), and stop if parallelism is found.
2. If the nest can be distributed: distribute, run recursively on the distributed nests, and merge.
3. Else sequentialize the outer loop, eliminate the dependences it carries, and try recursively on each of the loops nested in it.

39 Parallel Code Generation
procedure Parallelize(l, D_l);
  ParallelizeNest(l, success);   // try the methods for perfect nests
  if ¬success then begin
    if l can be distributed then begin
      distribute l into loop nests l1, l2, …, ln;
      for i := 1 to n do
        Parallelize(li, Di);
      Merge({l1, l2, …, ln});
    end

40 Parallel Code Generation (Cont.)
    else begin   // l cannot be distributed
      for each outer loop l0 nested in l do begin
        let D0 be the set of dependences between statements in l0, less the dependences carried by l;
        Parallelize(l0, D0);
      end
      let S be the set of outer loops and statements left in l;
      if |S| > 1 then Merge(S);
    end
  end
end Parallelize

41 Parallel Code Generation (Cont.)
DO J = 1, M
  DO I = 1, N
    A(I+1, J+1) = A(I+1, J) + C
    X(I, J) = A(I, J) + C
  ENDDO
ENDDO
Both loops carry a dependence – loop interchange will not find sufficient parallelism. Try distribution:
DO J = 1, M
  DO I = 1, N
    A(I+1, J+1) = A(I+1, J) + C
  ENDDO
ENDDO
DO J = 1, M
  DO I = 1, N
    X(I, J) = A(I, J) + C
  ENDDO
ENDDO
In the first nest the I loop can be parallelized:
PARALLEL DO I = 1, N
  DO J = 1, M
    A(I+1, J+1) = A(I+1, J) + C
  ENDDO
END PARALLEL DO
In the second nest both loops can be parallelized:
PARALLEL DO J = 1, M
  DO I = 1, N   ! Left sequential for the memory hierarchy
    X(I, J) = A(I, J) + C
  ENDDO
END PARALLEL DO
Now fusing… Type: (I-loop, parallel) vs. Type: (J-loop, parallel) – different types, can't fuse.

42 Parallel Code Generation (Cont.)
DO I = 1, N
  DO J = 1, M
    A(I, J) = A(I, J) + X
    B(I+1, J) = A(I, J) + B(I, J)
    C(I, J+1) = A(I, J) + C(I, J)
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
ENDDO
After distribution and parallelization of each nest:
PARALLEL DO J = 1, M
  DO I = 1, N   ! Sequentialized for the memory hierarchy
    A(I, J) = A(I, J) + X
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    B(I+1, J) = A(I, J) + B(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO I = 1, N
  DO J = 1, M
    C(I, J+1) = A(I, J) + C(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
END PARALLEL DO
Fusing the A loop with the B loop (both are of type "J-loop, parallel") and merging their bodies:
PARALLEL DO J = 1, M
  DO I = 1, N   ! Sequentialized for the memory hierarchy
    A(I, J) = A(I, J) + X
    B(I+1, J) = A(I, J) + B(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO I = 1, N
  DO J = 1, M
    C(I, J+1) = A(I, J) + C(I, J)
  ENDDO
END PARALLEL DO
PARALLEL DO J = 1, M
  DO I = 1, N
    D(I+1, J) = B(I+1, J) + C(I, J) + D(I, J)
  ENDDO
END PARALLEL DO
Remaining nests: AB (J-loop, parallel), C (I-loop, parallel), D (J-loop, parallel).

43 Erlebacher
DO J = 1, JMAXD
  DO I = 1, IMAXD
    F(I, J, 1) = F(I, J, 1) * B(1)
DO K = 2, N-1
  DO J = 1, JMAXD
    DO I = 1, IMAXD
      F(I, J, K) = (F(I, J, K) - A(K) * F(I, J, K-1)) * B(K)
DO J = 1, JMAXD
  DO I = 1, IMAXD
    TOT(I, J) = 0.0
DO J = 1, JMAXD
  DO I = 1, IMAXD
    TOT(I, J) = TOT(I, J) + D(1) * F(I, J, 1)
DO K = 2, N-1
  DO J = 1, JMAXD
    DO I = 1, IMAXD
      TOT(I, J) = TOT(I, J) + D(K) * F(I, J, K)

After parallelizing each nest (the J loops are parallel):
PARALLEL DO J = 1, JMAXD
  DO I = 1, IMAXD
    F(I, J, 1) = F(I, J, 1) * B(1)
DO K = 2, N-1
  PARALLEL DO J = 1, JMAXD
    DO I = 1, IMAXD
      F(I, J, K) = (F(I, J, K) - A(K) * F(I, J, K-1)) * B(K)
PARALLEL DO J = 1, JMAXD
  DO I = 1, IMAXD
    TOT(I, J) = 0.0
PARALLEL DO J = 1, JMAXD
  DO I = 1, IMAXD
    TOT(I, J) = TOT(I, J) + D(1) * F(I, J, 1)
DO K = 2, N-1
  PARALLEL DO J = 1, JMAXD
    DO I = 1, IMAXD
      TOT(I, J) = TOT(I, J) + D(K) * F(I, J, K)

44 Erlebacher
After fusing all the parallel J loops into one:
PARALLEL DO J = 1, JMAXD
  L1: DO I = 1, IMAXD
        F(I, J, 1) = F(I, J, 1) * B(1)
  L2: DO K = 2, N-1
        DO I = 1, IMAXD
          F(I, J, K) = (F(I, J, K) - A(K) * F(I, J, K-1)) * B(K)
  L3: DO I = 1, IMAXD
        TOT(I, J) = 0.0
  L4: DO I = 1, IMAXD
        TOT(I, J) = TOT(I, J) + D(1) * F(I, J, 1)
  L5: DO K = 2, N-1
        DO I = 1, IMAXD
          TOT(I, J) = TOT(I, J) + D(K) * F(I, J, K)
END PARALLEL DO
(The slide shows the fusion dependence graph over L1–L5.)

45 Erlebacher
After fusing compatible inner loops, the final version:
PARALLEL DO J = 1, JMAXD
  DO I = 1, IMAXD
    F(I, J, 1) = F(I, J, 1) * B(1)
    TOT(I, J) = 0.0
    TOT(I, J) = TOT(I, J) + D(1) * F(I, J, 1)
  ENDDO
  DO K = 2, N-1
    DO I = 1, IMAXD
      F(I, J, K) = (F(I, J, K) - A(K) * F(I, J, K-1)) * B(K)
      TOT(I, J) = TOT(I, J) + D(K) * F(I, J, K)
    ENDDO
  ENDDO
END PARALLEL DO

46 Packaging of Parallelism
There is a trade-off between parallelism and the granularity of synchronization: larger work-units mean synchronization is needed less frequently, but at the cost of less parallelism and poorer load balance.

47 Strip Mining
Converts available parallelism into a form more suitable for the hardware.
DO I = 1, N
  A(I) = A(I) + B(I)
ENDDO

Strip-mined for P processors:
k = CEIL(N / P)
PARALLEL DO I = 1, N, k
  DO i = I, MIN(I + k - 1, N)
    A(i) = A(i) + B(i)
  ENDDO
END PARALLEL DO

Interruptions may be disastrous. The value of P is unknown until runtime, so strip mining is often handled by special hardware (Convex C2 and C3).

48 Strip Mining (Cont.)
What if the execution time varies among iterations?
PARALLEL DO I = 1, N
  DO J = 2, I
    A(J, I) = A(J-1, I) * 2.0
  ENDDO
END PARALLEL DO
Solution: a smaller unit size allows a more balanced distribution, as the sketch below illustrates.
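As a rough illustration of why smaller units help on such triangular loops, here is a Python toy model (my own simplification: iteration I costs about I units of work, and chunks are handed out round-robin rather than dynamically):

    # Sketch: load balance of strip mining on a triangular loop,
    # where iteration I does roughly I units of work (the J loop runs to I).
    def work_per_processor(n, p, chunk):
        """Assign chunks of `chunk` consecutive iterations round-robin to p
        processors and return the total work each one receives."""
        work = [0] * p
        for start in range(1, n + 1, chunk):
            proc = (start // chunk) % p
            work[proc] += sum(range(start, min(start + chunk, n + 1)))
        return work

    n, p = 64, 4
    print(work_per_processor(n, p, chunk=n // p))   # one big block each: very skewed
    print(work_per_processor(n, p, chunk=2))        # small chunks: nearly balanced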

49 Pipeline Parallelism
The DOACROSS construct (a Fortran extension) pipelines parallel loop iterations with cross-iteration synchronization.
Useful where full parallelization is not available; synchronization costs are high.
DOACROSS I = 2, N
  S1: A(I) = B(I) + C(I)
      POST(EV(I))
      IF (I > 2) WAIT(EV(I-1))
  S2: C(I) = A(I-1) + A(I)
ENDDO
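To make the POST/WAIT pattern concrete, here is a rough Python rendering of the same pipeline with one thread per iteration and one event per iteration. This only illustrates the synchronization pattern; it is not how a Fortran compiler would actually emit it, and the array sizes and initial values are my own.

    # Sketch of DOACROSS-style pipelining: S1 of iteration i may run in
    # parallel, but S2 of iteration i must wait for S1 of iteration i-1.
    import threading

    N = 8
    A = [0.0] * (N + 1)
    B = [float(i) for i in range(N + 1)]
    C = [1.0] * (N + 1)
    done = [threading.Event() for _ in range(N + 1)]   # EV(i): S1 of iteration i finished

    def iteration(i):
        A[i] = B[i] + C[i]            # S1
        done[i].set()                 # POST(EV(I))
        if i > 2:
            done[i - 1].wait()        # WAIT(EV(I-1))
        C[i] = A[i - 1] + A[i]        # S2

    threads = [threading.Thread(target=iteration, args=(i,)) for i in range(2, N + 1)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(C[2:])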

50 Scheduling Parallel Work
The goals: load balance and little synchronization.

51 Scheduling Parallel Work
Parallel execution can be slower than serial execution when the per-iteration work is small relative to the scheduling overhead.
Bakery-counter scheduling: moderate synchronization overhead.
Notation: N – number of iterations; B – time of one iteration; p – number of processors; σ0 – constant overhead per processor.
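The inequality itself is missing above; under the simple model where each processor pays the fixed overhead σ0 once and the N iterations divide evenly (my assumption, not necessarily the slide's exact model), parallel execution loses whenever

    \frac{N B}{p} + \sigma_0 \;\ge\; N B \quad\Longleftrightarrow\quad \sigma_0 \;\ge\; N B \,\frac{p-1}{p},

so loops with little total work (small N·B) are not worth scheduling in parallel.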

52 Guided Self-Scheduling
Incorporates some level of static scheduling to guide dynamic self-scheduling:
 - schedules groups of iterations,
 - going from large to small chunks of work.
The number of iterations dispensed at each scheduling step follows the rule sketched below.
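The dispatch rule is the standard guided self-scheduling rule: a processor asking for work receives ⌈R/p⌉ iterations, where R is the number of iterations still unassigned. A small Python sketch (the function name and the k parameter for GSS(k) are my own):

    # Sketch: chunk sizes produced by guided self-scheduling.
    import math

    def gss_chunks(n_iters, p, k=1):
        """k = minimum chunk size; k = 1 is plain GSS, k > 1 is GSS(k)."""
        chunks, remaining = [], n_iters
        while remaining > 0:
            size = max(k, math.ceil(remaining / p))
            size = min(size, remaining)
            chunks.append(size)
            remaining -= size
        return chunks

    print(gss_chunks(20, 4))      # [5, 4, 3, 2, 2, 1, 1, 1, 1] -> 9 dispatches
    print(gss_chunks(20, 4, 2))   # GSS(2): no chunk smaller than 2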

53 Guided Self-Scheduling (Cont.)
GSS example (20 iterations, 4 processors): not completely balanced.
Required synchronizations: 9; with bakery-counter scheduling: 20.

54 Guided Self-Scheduling (Cont.)
In the example, the last 4 allocations are of a single iteration each. Coincidence? The last p-1 allocations will always be single iterations.
GSS(2): no block of iterations is smaller than 2.
GSS(k): no block is smaller than k.

55 Yaniv Carmeli, B.A. in CS. Thanks for your attention!

