
CR18: Advanced Compilers, L04: Scheduling (Tomofumi Yuki)


1 CR18: Advanced Compilers L04: Scheduling Tomofumi Yuki

2 Today's Agenda
- Revisiting legality with schedules
- How to find schedules

3 Schedules
Recall that we had many notions of "schedule"; here we use the one related to time.
In general, a schedule is a function whose input is a statement instance and whose output is a timestamp; instances mapped to the same timestamp "may happen in parallel". (For example, θ(i,j)=i assigns all iterations with the same i to the same time, so the j loop may run in parallel.)
We talk about static schedules in this class.

4 Legality with Schedule: Causality Condition
Given a PRDG with nodes N and edges E:
- src(e) = producer statement
- dst(e) = consumer statement
- D_S = domain of statement node S
- D_e = domain of dependence e
Check: for every edge e in E and every pair (x,y) in D_e, θ_dst(e)(y) > θ_src(e)(x).

5 Example (uniform case)
Back to the legality check with vectors:

for (i=1; i<N; i++)
  for (j=1; j<M; j++)
S:    A[i][j] = A[i-1][j+1] + B[i][j];

Dependence vector: [1,-1], i.e., e: (i,j -> i+1,j-1)
Candidate schedule: θ_S(i,j) = i
Check: θ_S(i+1,j-1) > θ_S(i,j), i.e., i+1 > i, which always holds, so this schedule is legal.

6 Example (uniform case)
Same program, same dependence:

for (i=1; i<N; i++)
  for (j=1; j<M; j++)
S:    A[i][j] = A[i-1][j+1] + B[i][j];

Candidate schedule: θ_S(i,j) = j
Check for e: (i,j -> i+1,j-1): θ_S(i+1,j-1) > θ_S(i,j), i.e., j-1 > j, which never holds, so this schedule is illegal.

7 Example (uniform case)
Same program once more:

for (i=1; i<N; i++)
  for (j=1; j<M; j++)
S:    A[i][j] = A[i-1][j+1] + B[i][j];

Candidate schedule: θ_S(i,j) = i-j
Check for e: (i,j -> i+1,j-1): θ_S(i+1,j-1) > θ_S(i,j), i.e., i-j+2 > i-j, which always holds, so this schedule is legal.
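These three checks mechanize directly. Below is a minimal C sketch (added here for illustration; `schedule_is_legal` is a name chosen for this sketch, not from the lecture): for a uniform dependence with distance vector d, the check θ(x+d) > θ(x) reduces to θ(d) > 0, independent of the iteration point, so legality is a dot-product test per distance vector.

#include <stdio.h>

/* For uniform dependences, a linear schedule theta(i,j) = a*i + b*j is
 * legal iff theta(d) > 0 for every distance vector d; the iteration
 * point cancels out. (Illustrative sketch, not from the slides.) */
int schedule_is_legal(int a, int b, int dist[][2], int ndeps) {
    for (int k = 0; k < ndeps; k++)
        if (a * dist[k][0] + b * dist[k][1] <= 0)
            return 0;  /* dependence k violated (or only weakly satisfied) */
    return 1;
}

int main(void) {
    int dist[][2] = {{1, -1}};  /* the dependence of slides 5-7 */
    printf("theta=i  : %d\n", schedule_is_legal(1,  0, dist, 1)); /* 1: legal */
    printf("theta=j  : %d\n", schedule_is_legal(0,  1, dist, 1)); /* 0: illegal */
    printf("theta=i-j: %d\n", schedule_is_legal(1, -1, dist, 1)); /* 1: legal */
    return 0;
}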

8 Example (affine case)
Back to the legality check with vectors:

for (i=1; i<N; i++)
  for (j=1; j<M; j++)
S:    A[i][j] = A[i][j-1] + A[i-1][M-j];

Dependence vectors: [0,1] (from A[i][j-1]) and [1,*] (from A[i-1][M-j]; the j-distance depends on j).
Candidate schedule: θ_S(i,j) = i+j
Check for e: (i,j -> i+1,M-j): θ_S(i+1,M-j) > θ_S(i,j), i.e., M+i-j+1 > i+j, which reduces to j < (M+1)/2. This does not hold for all j in the domain, so the schedule is illegal.
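In the affine case the distance depends on the iteration point, so the point no longer cancels. A small sketch (N and M are fixed to small constants chosen here; enumeration illustrates the violation, it does not prove legality in general) that finds the violations of θ = i+j:

#include <stdio.h>

int main(void) {
    /* Dependence e: producer (i,j) -> consumer (i+1, M-j), from slide 8.
     * Check theta(i+1, M-j) > theta(i,j) for theta(i,j) = i + j. */
    const int N = 6, M = 6;
    for (int i = 1; i <= N - 2; i++)     /* consumer i+1 must stay in domain */
        for (int j = 1; j < M; j++) {
            int src = i + j;             /* timestamp of the producer */
            int dst = (i + 1) + (M - j); /* timestamp of the consumer */
            if (dst <= src)
                printf("violated at (i,j)=(%d,%d): %d <= %d\n", i, j, dst, src);
        }
    return 0;  /* violations appear exactly where j >= (M+1)/2 */
}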

9 The Scheduling Problem
Find θs that satisfy the causality conditions, i.e., such that no dependence is violated.
Connection to loops: you can complete the schedule to obtain the corresponding loop transformation.
Sometimes the problem is formulated in terms of the transformation instead of the schedule.

10 Parallel Execution of DO Loops [Lamport 74]
One of the first papers on automatic parallelization: the hyper-plane method.
Loops of the form:

for I1 = l1 .. u1
  ...
  for In = ln .. un
    body

are transformed into:

for J1 = λ1 .. μ1
  ...
  for Jk = λk .. μk
    forall Jk+1 = λk+1 .. μk+1
      ...
      forall Jn = λn .. μn
        body

Scope of dependences: uniform, plus some extensions.

11 The Hyper-Plane Method
The main theorem (simplified): we are looking for a schedule θ such that the inner n-1 loops are parallel.
θ is restricted to linear functions: θ = a1·I1 + ... + an·In.
The key idea: for each distance vector c we want θ(c) > 0; a proof that such θ exists for lexicographically positive c is in the paper.

12 The Hyper-Plane Method
Optimizing the schedule: what should the objective function be?
In this paper it is the extent of the single remaining sequential loop, min(μ1 - λ1), which amounts to minimizing θ'(u - l), where θ'(x) = |a1|x1 + ... + |an|xn and l, u are the original loop bounds. (The transformed nest has one sequential loop J1 = λ1 .. μ1 and parallel forall loops J2 .. Jn.)

13 Example 1
Iteration space with distance vectors [1,0] and [0,1]; candidate schedule θ(i,j) = ai + bj.
Constraints:
θ([1,0]) > 0 : a > 0
θ([0,1]) > 0 : b > 0
Objective: minimize θ'(N,M) = |a|N + |b|M (the domain is 0≤i<N, 0≤j<M); the minimal integer solution is a = b = 1, i.e., θ(i,j) = i + j.

14 Example 2
Iteration space with distance vectors [1,-1] and [0,1]; candidate schedule θ(i,j) = ai + bj.
Constraints:
θ([1,-1]) > 0 : a > b
θ([0,1]) > 0 : b > 0
Objective: minimize θ'(N,M) = |a|N + |b|M (the domain is 0≤i<N, 0≤j<M); the minimal integer solution is b = 1, a = 2, i.e., the wavefront schedule θ(i,j) = 2i + j.
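For fixed sizes the whole optimization is small enough to brute-force. A sketch (N, M and the coefficient range are choices made here, and exhaustive search stands in for Lamport's analytical construction):

#include <limits.h>
#include <stdio.h>

/* Enumerate small integer coefficients (a, b), keep the legal schedule
 * theta = a*i + b*j minimizing |a|*N + |b|*M, the extent of the
 * sequential loop. Uses the distance vectors of Example 2. */
int main(void) {
    const int N = 100, M = 100;
    int dist[][2] = {{1, -1}, {0, 1}};
    int ndeps = 2, best_a = 0, best_b = 0, best_cost = INT_MAX;

    for (int a = -3; a <= 3; a++)
        for (int b = -3; b <= 3; b++) {
            int legal = 1;
            for (int k = 0; k < ndeps; k++)
                if (a * dist[k][0] + b * dist[k][1] <= 0) legal = 0;
            int cost = (a < 0 ? -a : a) * N + (b < 0 ? -b : b) * M;
            if (legal && cost < best_cost) {
                best_cost = cost; best_a = a; best_b = b;
            }
        }
    printf("theta(i,j) = %d*i + %d*j  (cost %d)\n", best_a, best_b, best_cost);
    return 0;  /* prints theta(i,j) = 2*i + 1*j, the wavefront schedule */
}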

15 The General Plane Method
Generalizes the Hyper-Plane method to the case where the dependences are no longer uniform.
Given the iteration vector x, the Hyper-Plane method handles array accesses of the form VAR[p(x)+c], where p is a permutation common to the entire loop body.
The General Plane method extends this to VAR[d(p(x)+c)], where d "drops" some number of dimensions.

16 Final Words on this Paper
A very early paper, but it already does:
- dependence analysis
- scheduling
- loop transformation / code generation
A similar technique, for direction vectors, was later given by Wolf & Lam (1991).

17 Farkas Scheduling [Feautrier 92]
Given a PRDG, find a schedule θ_S for each statement S; θ is restricted to affine functions.
Affine form of Farkas' Lemma: given a non-empty domain D = {x : Ax + b ≥ 0}, an affine form ψ(x) is non-negative everywhere in D iff it can be written as a non-negative combination of the constraints of D, ψ(x) ≡ λ0 + λ^T(Ax + b) with λ0 ≥ 0 and λ ≥ 0; λ0 and the entries of λ are the Farkas multipliers.
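A tiny concrete instance (made up here to show the shape of the identity): on D = {x : x ≥ 0, N - x ≥ 0}, the form ψ(x) = x + 1 is non-negative, and indeed

\[
x + 1 \;\equiv\; \underbrace{1}_{\lambda_0} \;+\; \underbrace{1}_{\lambda_1}\cdot(x) \;+\; \underbrace{0}_{\lambda_2}\cdot(N - x).
\]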

18 Problem Formulation
Given a PRDG with nodes N and edges E:
Positivity: all schedules start at 0, i.e., θ_S(x) ≥ 0 for all x in D_S.
Causality: for source/destination instances x, y when the dependence is active, θ_dst(e)(y) ≥ θ_src(e)(x) + 1 for all (x,y) in D_e.
Note: here an edge goes from producer to consumer.

19 Using Farkas Lemma
Given statements S1 and S2 with schedules θ_S1, θ_S2 and a dependence e from S1 to S2, we want θ_S2(y) > θ_S1(x) for all (x,y) in D_e, which is θ_S2(y) - θ_S1(x) - 1 ≥ 0 in D_e; make it a single function to get ψ_e(x,y) ≥ 0 in D_e.
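Spelling out the mechanical step that follows (a restatement added here; D_e is written with its constraint matrix A_e and vector b_e):

\[
\psi_e(x,y) \;=\; \theta_{S2}(y) - \theta_{S1}(x) - 1
\;\equiv\; \lambda_0 + \lambda^{T}\!\left(A_e \begin{pmatrix} x \\ y \end{pmatrix} + b_e\right),
\qquad \lambda_0 \ge 0,\ \lambda \ge 0.
\]

Equating the coefficients of every iterator and parameter, and the constant terms, on both sides yields linear equalities linking the unknown schedule coefficients to the non-negative Farkas multipliers; collecting these over all edges gives the linear program to solve.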

20 The Farkas Method
Build constraints on the schedule:
- build ψ_e(x,y) for each edge e
- each ψ_e constrains the schedule coefficients and the Farkas multipliers
- solve!

21 Example 1
Consider the following (forward substitution):

for (i=0.. N) {
  for (j=0.. i-1)
S0:   x[i] = x[i] - L[i,j]*x[j];
S1: x[i] = x[i] / L[i,i];
}

D_S0: {[i,j] : 0≤i≤N and 0≤j<i}
D_S1: {[i] : 0≤i≤N}
e1: S0[i,j] -> S0[i,j-1]
e2: S1[i] -> S0[i,i-1]
(here the direction is consumer to producer)
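For concreteness, the causality conditions for the two listed edges, plus one feasible schedule (worked out here for illustration; it is one solution among many):

\[
e_1:\ \theta_{S0}(i,j) - \theta_{S0}(i,j-1) - 1 \ge 0
\qquad
e_2:\ \theta_{S1}(i) - \theta_{S0}(i,i-1) - 1 \ge 0
\]

Taking θ_S0(i,j) = i + j and θ_S1(i) = 2i, both left-hand sides evaluate to exactly 0, so both dependences are satisfied.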

22 Example 2
Consider the following (matrix-vector product):

for (i=0.. N) {
S0: x[i] = 0;
  for (j=0.. N)
S1:   x[i] = x[i] + L[i,j]*b[j];
}

D_S0: {[i] : 0≤i≤N}
D_S1: {[i,j] : 0≤i,j≤N}
e1: S1[i,j] -> S0[i] : j=0
e2: S1[i,j] -> S1[i,j-1] : j>0
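Similarly here (worked out for illustration):

\[
e_1:\ \theta_{S1}(i,0) - \theta_{S0}(i) - 1 \ge 0
\qquad
e_2:\ \theta_{S1}(i,j) - \theta_{S1}(i,j-1) - 1 \ge 0 \ \ (j > 0)
\]

One feasible solution is θ_S0(i) = 0 and θ_S1(i,j) = j + 1: all instances with the same j (across all i) share a timestamp, so the i loop runs in parallel.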

23 Example 3
Back to this example:

for (i=1; i<N; i++)
  for (j=1; j<M; j++)
S:    A[i][j] = A[i][j-1] + A[i-1][M-j];

θ_S = a1·i + a2·j + a0
e1: (i,j -> i,j-1)
e2: (i,j -> i-1,M-j)
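Working out the two causality conditions (a derivation added here; it explains why the next slides are needed):

\[
e_1:\ \theta_S(i,j) - \theta_S(i,j-1) \ge 1 \;\Longrightarrow\; a_2 \ge 1
\]
\[
e_2:\ \theta_S(i,j) - \theta_S(i-1,M-j) \ge 1 \;\Longrightarrow\; a_1 + a_2(2j - M) \ge 1 \quad \text{for all } 1 \le j \le M-1
\]

At j = 1 the second condition requires a_1 ≥ 1 + a_2(M - 2); since M is an unbounded parameter, no fixed coefficients can satisfy it (a term linear in M in θ_S cancels in the difference), so no one-dimensional affine schedule exists for this program.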

24 Multi-Dimensional Scheduling
One-dimensional affine schedules are not sufficient: the linearization of the lexicographic order is polynomial if you have parameters (e.g., the example above needs something like M·i + j, which is not affine).
So we want to find a set of θs, one per time dimension, for each statement.

25 Multi-Dimensional Farkas
Formulate the problem just like the 1D case: each dependence adds constraints.
But now we allow some dependences to be left unsatisfied at a given dimension. Recall the causality condition, with δ = θ(consumer) - θ(producer):
δ < 0 : dependence violation
δ = 0 : weakly satisfied
δ > 0 : strongly satisfied

26 Greedy Algorithm
Given a PRDG with edges E:
1. formulate the problem for all edges in E
2. weakly satisfy all of them
3. strongly satisfy as many as possible
4. add the obtained θ to the list
5. remove the strongly satisfied edges from E
6. repeat until E is empty
The obtained list of θs is your schedule.

27 Back to the Example
Back to this example:

for (i=1; i<N; i++)
  for (j=1; j<M; j++)
S:    A[i][j] = A[i][j-1] + A[i-1][M-j];

θ_S = a1·i + a2·j + a0
e1: (i,j -> i,j-1)
e2: (i,j -> i-1,M-j)

Running the greedy algorithm: θ1 = i weakly satisfies e1 (δ = 0) and strongly satisfies e2 (δ = 1), so e2 is removed; then θ2 = j strongly satisfies e1, giving the two-dimensional schedule (i, j). A brute-force illustration follows.
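A toy version of the greedy algorithm on this example (a sketch only: M is fixed to a small constant and coefficients are enumerated over a small range, whereas the real algorithm solves parametric linear programs):

#include <stdio.h>
#include <stdlib.h>

#define N 6
#define M 6

/* Edges, direction consumer -> producer, as on the slide:
 *   e1: (i,j) -> (i,j-1)      e2: (i,j) -> (i-1, M-j)
 * min_delta returns the minimum of delta_e = theta(consumer) -
 * theta(producer) over the (enumerated) dependence domain. */
static int min_delta(int edge, int a1, int a2) {
    int best = 1 << 30;
    for (int i = 1; i < N; i++)
        for (int j = 1; j < M; j++) {
            int d;
            if (edge == 1) {             /* producer (i, j-1) needs j >= 2 */
                if (j < 2) continue;
                d = (a1*i + a2*j) - (a1*i + a2*(j-1));
            } else {                     /* producer (i-1, M-j) needs i >= 2 */
                if (i < 2) continue;
                d = (a1*i + a2*j) - (a1*(i-1) + a2*(M-j));
            }
            if (d < best) best = d;
        }
    return best;
}

int main(void) {
    int alive[3] = {0, 1, 1};            /* alive[e]: edge e still in E */
    int remaining = 2, level = 1;

    while (remaining > 0) {
        int ba1 = 0, ba2 = 0, bstrong = -1, bcost = 1 << 30;
        for (int a1 = -2; a1 <= 2; a1++)
            for (int a2 = -2; a2 <= 2; a2++) {
                int ok = 1, strong = 0;
                for (int e = 1; e <= 2; e++) {
                    if (!alive[e]) continue;
                    int d = min_delta(e, a1, a2);
                    if (d < 0) ok = 0;   /* must at least weakly satisfy */
                    else if (d >= 1) strong++;
                }
                int cost = abs(a1) + abs(a2);
                if (ok && (strong > bstrong ||
                          (strong == bstrong && cost < bcost))) {
                    bstrong = strong; bcost = cost; ba1 = a1; ba2 = a2;
                }
            }
        printf("level %d: theta%d(i,j) = %d*i + %d*j\n", level, level, ba1, ba2);
        for (int e = 1; e <= 2; e++)     /* drop strongly satisfied edges */
            if (alive[e] && min_delta(e, ba1, ba2) >= 1) {
                alive[e] = 0; remaining--;
            }
        level++;
    }
    return 0;  /* prints theta1 = i, then theta2 = j */
}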

28 The Vertex Method
Another method for scheduling, which uses the generator representation of polyhedra:
- constraint representation: intersection of half-spaces
- generator representation: convex hull of vertices, rays, and lines
(The Mapping of Linear Recurrence Equations on Regular Arrays, Patrice Quinton and Vincent Van Dongen, 1989)

29 The Main Theorem
A schedule that is legal for the vertices, rays, and lines is also legal for the entire polyhedron they generate.
Hence you can compute constraints on schedules without reasoning about a potentially infinite set of iterations.
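A sketch of the conic part of that check (simplified here, and `nonneg_on_cone` is a name chosen for this sketch): a linear form is non-negative on a cone iff it is non-negative on each generating ray; in the full method the vertices contribute the affine part as well.

#include <stdio.h>

/* A linear form c.x is nonnegative on cone(r1..rk) iff c.ri >= 0 for
 * every generator ray ri -- finitely many checks for an infinite set. */
static int nonneg_on_cone(const int c[2], int rays[][2], int nrays) {
    for (int k = 0; k < nrays; k++)
        if (c[0]*rays[k][0] + c[1]*rays[k][1] < 0)
            return 0;
    return 1;
}

int main(void) {
    int rays[][2] = {{1, 0}, {1, 1}};  /* generators of {(i,j): 0<=j<=i} */
    int theta[2] = {1, 0};             /* candidate linear part of a schedule */
    printf("nonnegative on cone: %d\n", nonneg_on_cone(theta, rays, 2));
    return 0;
}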

30 On the Optimality of Scheduling
Paper by Alain Darte and Frédéric Vivien: a survey of various methods for scheduling.
- what dependence abstraction is used?
- what can you say about optimality?
Optimality: does the method find all parallelism? And how do we define "all" parallelism?

31 Scheduling Algorithms
- Allen and Kennedy [1987]: targeting vector machines; dependence levels
- Wolf and Lam [1991]: Lamport-like; dependence vectors
- Darte and Vivien [1996]: Farkas-like; dependence polyhedra
- Feautrier [1992]: Farkas algorithm; affine dependences
- Lim and Lam [1997]

32 Allen and Kennedy (in short)
You have dependence levels only, i.e., you know the loop depth at which each dependence is carried (this paper introduced dependence levels).
Parallelizes the inner loops that carry no dependence.
Also deals with loop fusion: if a dependence is carried by some common outer loop, the loops can safely be fused.

33 Optimality of Allen and Kennedy
The dependence information is very limited: dependence levels only.
Given that, the parallelism found is actually optimal, as later proved by Darte and Vivien.

34 Wolf and Lam (in short)
Input: direction vectors. Output: fully permutable loops (what does this mean?).
Context: unimodular transformations.
Optimal parallelism extraction if you only know direction vectors and the loops are perfectly nested.

35 Optimality of the Farkas Algorithm
The original paper made no optimality claims; Darte and Vivien later proved that the greedy algorithm is actually optimal, with a few caveats:
- affine schedules
- one schedule per statement

36 Index-Set Splitting
Use piece-wise affine schedules: split a statement into multiple statements, or split an equation into ...
Main idea: using one schedule for the entire statement is (sometimes) not optimal.

37 Example: Smashing
Periodic boundaries: can you tile?
(figure: an iteration space over i and j)

38 Example: Smashing
Periodic boundaries.
(figures: the iteration space over i and j, before and after the transformation)

39 How Good is Optimal?
What does Farkas scheduling bring?

