
1 Instruction Level Parallelism
Vincent H. Berk
October 15, 2008
Reading for today: A.7 – A.8
Reading for Friday: 2.1 – 2.5
Project Proposals Due Right NOW!

2 Instruction Level Parallelism
Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
Reducing stalls reduces CPI; reducing CPI increases IPC
Instruction-level parallelism (ILP) seeks to reduce stalls
Loop-level parallelism is easiest to see:
    for (i=1; i<100; i=i+1) {
      A[i] = B[i] + C[i];
      D[i] = E[i] + F[i];
    }
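
As a worked example with purely hypothetical numbers: an ideal pipeline CPI of 1.0, plus 0.2 cycles of data hazard stalls and 0.1 cycles of control stalls per instruction, gives a pipeline CPI of 1.3, i.e. an IPC of about 0.77; eliminating the stalls recovers the ideal IPC of 1.0.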

3 Instruction Level Parallelism
ILP can be exploited in SW (static) or HW (dynamic)
HW-intensive ILP dominates the desktop and server markets
SW (compiler-intensive) approaches are more likely seen in embedded systems

4 Dependences
Two instructions are parallel if they can execute simultaneously in a pipeline without causing any stalls (assuming no structural hazards) and can be reordered
Two instructions that are dependent are not parallel and cannot be reordered
Types of dependences:
– Data dependences
– Name dependences
– Control dependences

5 Dependences
Dependences are properties of programs
Hazards are properties of the pipeline organization
Dependence indicates the potential for a hazard
The compiler is concerned with dependences in the program; whether or not a HW hazard occurs depends on the given pipeline

6 Review of Hazards
Consider instructions i and j, where i occurs before j.
RAW (read after write): j tries to read a source before i writes it, so j gets the old value
WAW (write after write): j tries to write an operand before it is written by i (only possible in pipelines that write in more than one pipe stage or allow an instruction to proceed even when a previous instruction is stalled)
WAR (write after read): j tries to write a destination before it is read by i, so i incorrectly gets the new value (only possible when some instructions can write results early in the pipeline and other instructions can read sources late in the pipeline)
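
A minimal C sketch of the three cases, reading each assignment as one instruction (all variable names are hypothetical):

    #include <stdio.h>

    int main(void) {
        int x = 1, y = 2, z = 3, w = 4;
        int a = x + y;   /* instruction i: writes a                             */
        int b = a + 1;   /* j reads a after i writes it: RAW (true dependence)  */
        a = z * 2;       /* j writes a after b's read of it: WAR (antidependence) */
        a = w - 1;       /* j writes a again after the line above: WAW (output)   */
        printf("a=%d b=%d\n", a, b);
        return 0;
    }

Reordering any of these pairs changes the values observed, which is exactly why the corresponding hazards force ordering in the pipeline.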

7 Data Dependences
(True) data dependences (RAW if a hazard for HW):
– Instruction i produces a result used by instruction j, or
– Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i
Easy to determine for registers (fixed names)
Hard for memory:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
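
A brief C sketch, with hypothetical names, of why the memory question is hard: the two stores below conflict exactly when p + 100 and q + 20 alias, which the compiler generally cannot prove at compile time.

    /* Analogous to asking whether 100(R4) = 20(R6). */
    void update(double *p, double *q) {
        p[100] = 1.0;   /* store through base pointer p (like R4) */
        q[20]  = 2.0;   /* independent only if &p[100] != &q[20]  */
    }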

8 Name Dependences
Another kind of dependence, called name dependence: two instructions use the same name but don't exchange data
Antidependence (WAR if a hazard for HW):
– Instruction j writes a register or memory location that instruction i reads from, and instruction i is executed first
Output dependence (WAW if a hazard for HW):
– Instruction i and instruction j write the same register or memory location; ordering between instructions must be preserved
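
A minimal C sketch (hypothetical variables) showing that renaming removes a name dependence without changing the data flow:

    int renamed(int c, int d) {
        int a = 5;
        int r = a + 1;   /* i reads a                                        */
        a = c + d;       /* j writes a: antidependence (WAR) on the name a   */
        /* After renaming j's destination to a fresh name a2, i and j no
           longer share a name, so they can be reordered or overlapped: */
        int a2 = c + d;  /* a2 stands in for a fresh register */
        return r + a + a2;
    }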

9 Name Dependences
Hard for memory accesses:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
Example of renaming (F6 renamed to S, F8 renamed to T):

    Before:                     After:
    DIV.D F0, F2, F4            DIV.D F0, F2, F4
    ADD.D F6, F0, F8            ADD.D S, F0, F8
    S.D   F6, 0(R1)             S.D   S, 0(R1)
    SUB.D F8, F10, F14          SUB.D T, F10, F14
    MUL.D F6, F10, F8           MUL.D F6, F10, T

10 Control Dependence
The final kind of dependence is the control dependence
Example:
    if (p1) { S1; }
    if (p2) { S2; }
S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1
Note that S2 could be data dependent on S1

11 Control Dependences
Two (obvious) constraints on control dependences:
– An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch (see the sketch below)
– An instruction that is not control dependent on a branch cannot be moved to after the branch, so that its execution is controlled by the branch
Control dependences are often relaxed to get parallelism; we get the same effect if we preserve the order of exceptions and the data flow
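
A minimal C sketch of the first constraint (guarded_load and p are hypothetical names): the load is control dependent on the test, and hoisting it above the branch could fault when p is NULL.

    #include <stddef.h>

    int guarded_load(const int *p) {
        int x = 0;
        if (p != NULL) {
            x = *p;   /* control dependent on the test: moving this load
                         before the branch changes exception behavior */
        }
        return x;
    }

Speculative execution relaxes exactly this constraint, which is why the slide requires preserving the order of exceptions and the data flow.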

12 Basic Loop Unrolling
    for (i=1000; i>0; i=i-1)
      x[i] = x[i] + s;

    Loop: LD    F0, 0(R1)   ; F0 = array element
          ADDD  F4, F0, F2  ; add scalar in F2
          SD    0(R1), F4   ; store result
          SUBI  R1, R1, #8  ; decrement pointer 8 bytes (DW)
          BNEZ  R1, Loop    ; branch if R1 != zero
          NOP               ; delayed branch slot

13 FP Loop Hazards
Where are the stalls? (Note: latencies are due to the pipeline organization)
    Loop: LD    F0, 0(R1)   ; F0 = vector element
          ADDD  F4, F0, F2  ; add scalar in F2
          SD    0(R1), F4   ; store result
          SUBI  R1, R1, #8  ; decrement pointer 8 bytes (DW)
          BNEZ  R1, Loop    ; branch if R1 != zero
          NOP               ; delayed branch slot

14 FP Loop Showing Stalls
Rewrite code to minimize stalls?

15 Revised FP Loop Minimizing Stalls
Can we unroll the loop to make it faster?

16 Loop Unrolling
A short loop body limits parallelism and induces significant overhead: the fraction of branches per instruction is high
Replicate the loop body several times and adjust the loop termination code:
    for (i = 0; i < 100; i = i + 4) {
      x[i]     = x[i]     + y[i];
      x[i + 1] = x[i + 1] + y[i + 1];
      x[i + 2] = x[i + 2] + y[i + 2];
      x[i + 3] = x[i + 3] + y[i + 3];
    }
Improves scheduling, since instructions from different iterations can be scheduled together
This is done very early in the compilation process
All dependences have to be found beforehand
Need to use different registers for each iteration
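
The slide's trip count (100) happens to be a multiple of the unroll factor; when it is not, adjusting the termination code means adding a cleanup loop. A minimal sketch, with hypothetical names (add_arrays, x, y, n):

    void add_arrays(float *x, const float *y, int n) {
        int i;
        for (i = 0; i + 3 < n; i = i + 4) {  /* body unrolled by 4 */
            x[i]     = x[i]     + y[i];
            x[i + 1] = x[i + 1] + y[i + 1];
            x[i + 2] = x[i + 2] + y[i + 2];
            x[i + 3] = x[i + 3] + y[i + 3];
        }
        for (; i < n; i = i + 1)             /* cleanup loop for leftovers */
            x[i] = x[i] + y[i];
    }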

17 Where are the control dependences?
    1  Loop: LD    F0, 0(R1)
    2        ADDD  F4, F0, F2
    3        SD    0(R1), F4
    4        SUBI  R1, R1, #8
    5        BEQZ  R1, exit
    6        LD    F0, 0(R1)
    7        ADDD  F4, F0, F2
    8        SD    0(R1), F4
    9        SUBI  R1, R1, #8
    10       BEQZ  R1, exit
    11       LD    F0, 0(R1)
    12       ADDD  F4, F0, F2
    13       SD    0(R1), F4
    14       SUBI  R1, R1, #8
    15       BEQZ  R1, exit
    ....

18 Name Dependences
    1  Loop: LD    F0, 0(R1)
    2        ADDD  F4, F0, F2
    3        SD    0(R1), F4    ; drop SUBI & BNEZ
    4        LD    F0, –8(R1)
    5        ADDD  F4, F0, F2
    6        SD    –8(R1), F4   ; drop SUBI & BNEZ
    7        LD    F0, –16(R1)
    8        ADDD  F4, F0, F2
    9        SD    –16(R1), F4  ; drop SUBI & BNEZ
    10       LD    F0, –24(R1)
    11       ADDD  F4, F0, F2
    12       SD    –24(R1), F4
    13       SUBI  R1, R1, #32  ; alter to 4*8
    14       BNEZ  R1, LOOP
    15       NOP

19 Name Dependences
    1  Loop: LD    F0, 0(R1)
    2        ADDD  F4, F0, F2
    3        SD    0(R1), F4     ; drop SUBI & BNEZ
    4        LD    F6, –8(R1)    ; F0 becomes F6
    5        ADDD  F8, F6, F2    ; F4 becomes F8
    6        SD    –8(R1), F8    ; drop SUBI & BNEZ
    7        LD    F10, –16(R1)  ; F0 becomes F10
    8        ADDD  F12, F10, F2  ; F4 becomes F12
    9        SD    –16(R1), F12  ; drop SUBI & BNEZ
    10       LD    F14, –24(R1)  ; F0 becomes F14
    11       ADDD  F16, F14, F2  ; F4 becomes F16
    12       SD    –24(R1), F16
    13       SUBI  R1, R1, #32   ; alter to 4*8
    14       BNEZ  R1, LOOP
    15       NOP
Register renaming

20 Reschedule Code to Minimize Stalls
Rewrite loop to minimize stalls?
15 + 4 × (1 + 2) + 1 = 28 clock cycles to initiate, or 7 per iteration
Assumes R1 is a multiple of 4

21 Unrolled Loop That Minimizes Stalls
What assumptions were made when we moved code?
– OK to move store past SUBI even though SUBI changes the register
– OK to move loads before stores: get right data?
– When is it safe for the compiler to do such changes?
14 + 1 = 15 clock cycles, or 3.75 per iteration
Can we eliminate the remaining stall?

22 Compiler Loop Unrolling
Most important: code correctness
Unrolling produces larger code that might interfere with the cache:
– Code sequence no longer fits in the L1 cache
– Cache-to-memory bandwidth might not be wide enough
Compiler must understand the hardware:
– Enough registers must be available, OR
– Compiler must rely on hardware register renaming
Compiler must understand the code:
– Determine that loop iterations are independent
– Eliminate branch instructions while preserving correctness
– Determine that the LD and SD are independent across the loop
– Reschedule instructions and adjust the offsets

23 Multiple Issue Machines
Superscalar: multiple parallel dedicated pipelines:
– Varying number of instructions per cycle, scheduled by the compiler and/or by hardware (Tomasulo)
– IBM PowerPC, Sun UltraSPARC, DEC Alpha, IA-32 Pentium
VLIW (Very Long Instruction Word): multiple operations encoded in one instruction:
– Instructions have a wide template (4–16 operations)
– IA-64 Itanium

24 Getting CPI < 1: Issuing Multiple Instructions per Cycle
Superscalar DLX: 2 instructions, 1 FP & 1 anything else
– Fetch 64 bits per clock cycle; integer on left, FP on right
– Can only issue the 2nd instruction if the 1st instruction issues
– More ports for the FP registers, to do an FP load & an FP op in a pair
The 1-cycle load delay expands to 3 instructions in superscalar DLX:
– the instruction in the right half can't use the result, nor can the instructions in the next slot

25 Superscalar Example
Superscalar:
– Our system can issue one floating-point and one other (non-floating-point) instruction per cycle
– Instructions are dynamically scheduled from the window
– Unroll the loop 5 times and reschedule to minimize cycles per iteration (WHY?)
While the integer/FP split is simple for the HW, we get a CPI of 0.5 only for programs with:
– Exactly 50% FP operations
– No hazards
If more instructions issue at the same time, decode and issue become harder:
– Even a 2-way superscalar must examine 2 opcodes and 6 register specifiers, and decide if 1 or 2 instructions can issue

26 Loop Unrolling in Superscalar
          Integer instruction     FP instruction        Clock cycle
    Loop: LD   F0, 0(R1)                                1
          LD   F6, –8(R1)                               2
          LD   F10, –16(R1)       ADDD F4, F0, F2       3
          LD   F14, –24(R1)       ADDD F8, F6, F2       4
          LD   F18, –32(R1)       ADDD F12, F10, F2     5
          SD   0(R1), F4          ADDD F16, F14, F2     6
          SD   –8(R1), F8         ADDD F20, F18, F2     7
          SD   –16(R1), F12                             8
          SUBI R1, R1, #40                              9
          SD   16(R1), F16                              10
          BNEZ R1, Loop                                 11
          SD   8(R1), F20                               12
Unrolled 5 times to avoid delays (+1 due to SS)
12 clocks to initiate, or 2.4 clocks per iteration

27 VLIW Example
VLIW:
– 5 operations in one very long instruction word: 2 FP, 2 memory, 1 branch/integer
– Compiler avoids hazards
– Not all slots are always full
VLIW: trade off instruction space for simple decoding:
– The long instruction word has room for many operations
– By definition, all the operations the compiler puts in the long instruction word are independent, so they execute in parallel
– E.g., 2 integer operations, 2 FP ops, 2 memory refs, and 1 branch, at 16 to 24 bits per field, gives 7×16 = 112 bits to 7×24 = 168 bits wide
– Need a compiling technique that schedules across several branches

28 Loop Unrolling in VLIW
    Memory ref 1       Memory ref 2       FP op 1            FP op 2            Int. op/branch    Clock
    LD F0, 0(R1)       LD F6, –8(R1)                                                              1
    LD F10, –16(R1)    LD F14, –24(R1)                                                            2
    LD F18, –32(R1)    LD F22, –40(R1)    ADDD F4, F0, F2    ADDD F8, F6, F2                      3
    LD F26, –48(R1)                       ADDD F12, F10, F2  ADDD F16, F14, F2                    4
                                          ADDD F20, F18, F2  ADDD F24, F22, F2                    5
    SD 0(R1), F4       SD –8(R1), F8      ADDD F28, F26, F2                                       6
    SD –16(R1), F12    SD –24(R1), F16                                                            7
    SD –32(R1), F20    SD –40(R1), F24                                          SUBI R1, R1, #48  8
    SD 0(R1), F28                                                               BNEZ R1, LOOP     9
Unrolled 7 times to avoid delays
9 clocks to initiate, or 1.3 clocks per iteration
Average: 2.5 ops per clock, 50% efficiency
Note: need more registers in VLIW (15 vs. 6 in SS)

29 Limits to Multi-Issue Machines
Inherent limitations of instruction-level parallelism:
– 1 branch in 5 instructions: how to keep a 5-way VLIW busy?
– Latencies of units: many operations must be scheduled
– Easy: more instruction bandwidth
– Easy: duplicate functional units to get parallel execution
– Hard: increase ports to the register file (bandwidth); the VLIW example needs 7 reads and 3 writes for integer registers & 5 reads and 3 writes for FP registers
– Harder: increase ports to memory (bandwidth)
– Pipelines in lockstep: one pipeline stall stalls all others, to avoid hazards

30 Limits to Multi-Issue Machines
Limitations specific to either the superscalar or VLIW implementation:
– Decode/issue in superscalar: how wide is practical?
– VLIW code size: unrolled loops plus wasted fields in the VLIW; IA-64 compresses dependent instructions, but is still larger
– VLIW lockstep: 1 hazard and all instructions stall; IA-64 not lockstep? Dynamic pipeline?
– VLIW & binary compatibility: IA-64 promises binary compatibility

31 Dependences
Two instructions are parallel if they can execute simultaneously in a pipeline without causing any stalls (assuming no structural hazards) and can be reordered (depending on code semantics)
Two instructions that are dependent are not parallel and cannot be reordered
Types of dependences:
– Data dependences
– Name dependences
– Control dependences
Dependences are properties of programs
Hazards are properties of the pipeline organization
Dependence indicates the potential for a hazard

32 Compiler Perspectives on Code Movement
Hard for memory accesses:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
Our example required the compiler to know that if R1 doesn't change, then:
    0(R1) ≠ –8(R1) ≠ –16(R1) ≠ –24(R1)
There were no dependences between some loads and stores, so they could be moved past each other

33 Detecting Loop-Level Dependences
    for (i=1; i<=100; i=i+1) {
      A[i] = A[i] + B[i];      /* S1 */
      B[i+1] = C[i] + D[i];    /* S2 */
    }
Loop-carried dependence: S1 relies on the B[i+1] computed by S2 in the previous iteration
There is no dependence from S1 to S2 within an iteration, so the loop can be restructured; consider:
    A[1] = A[1] + B[1];
    for (i=1; i<=99; i=i+1) {
      B[i+1] = C[i] + D[i];
      A[i+1] = A[i+1] + B[i+1];
    }
    B[101] = C[100] + D[100];

34 Dependence Distance
    for (i=6; i<=100; i=i+1)
      Y[i] = Y[i-5] + Y[i];
Loop-carried dependence in the form of a recurrence on Y
Dependence distance of 5
A higher dependence distance allows for more ILP
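
A sketch of why the distance matters: with distance 5, any five consecutive iterations are mutually independent, so the slide's recurrence can be unrolled by 5 and the five statements scheduled in parallel. A minimal C version (recurrence_unrolled is a hypothetical name; Y must hold at least 101 elements; the trip count 95 is a multiple of 5, so no cleanup loop is needed):

    void recurrence_unrolled(double *Y) {
        int i;
        for (i = 6; i <= 96; i = i + 5) {
            Y[i]     = Y[i-5] + Y[i];       /* each statement reads only  */
            Y[i + 1] = Y[i-4] + Y[i + 1];   /* values written five or     */
            Y[i + 2] = Y[i-3] + Y[i + 2];   /* more iterations earlier,   */
            Y[i + 3] = Y[i-2] + Y[i + 3];   /* so the five statements are */
            Y[i + 4] = Y[i-1] + Y[i + 4];   /* independent of each other  */
        }
    }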

35 Greatest Common Divisor Test
Affine array indices:
– All array indices depend DIRECTLY on the loop variable i
Assume the code has these properties:
– the for loop runs from n to m with index i
– the loop has an access pattern: X[a*i + b] = X[c*i + d] ...
– two values of i, j and k, both between n and m
– a store indexed by j and a load later on indexed by k, with: a*j + b = c*k + d
If a loop-carried dependence exists, then GCD(c, a) must divide (d – b)
Example: a=2, b=3, c=2, d=0
GCD(a, c) = 2 and d – b = –3
There is no loop-carried dependence, since 2 does not divide –3
    for (i=1; i<=100; i=i+1)
      X[2*i+3] = X[2*i] * 5.0;
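
A small C implementation of the test (gcd and dependence_possible are hypothetical helper names). Note the test is conservative: if GCD(a, c) divides (d – b), a dependence is only possible, not certain.

    #include <stdio.h>
    #include <stdlib.h>

    /* Euclid's algorithm. */
    static int gcd(int a, int b) {
        while (b != 0) { int t = a % b; a = b; b = t; }
        return abs(a);
    }

    /* For accesses X[a*i + b] = ... X[c*i + d] ...: a loop-carried
       dependence requires that gcd(a, c) divide (d - b). */
    static int dependence_possible(int a, int b, int c, int d) {
        return (d - b) % gcd(a, c) == 0;
    }

    int main(void) {
        /* Slide example: X[2*i + 3] = X[2*i] * 5.0 -> a=2, b=3, c=2, d=0 */
        printf("%s\n", dependence_possible(2, 3, 2, 0)
                           ? "dependence possible"
                           : "no loop-carried dependence");
        return 0;
    }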

36 Problem Cases
References through pointers instead of array indices:
– partly eliminated by strict type checking
Sparse arrays indexed through other arrays (similar to pointers)
A dependence exists for some values of the indices, but those values are never reached
The loop-carried dependence has a distance far greater than what loop unrolling would cover

