1 Instruction Level Parallelism Vincent H. Berk October 15, 2008 Reading for today: A.7 – A.8 Reading for Friday: 2.1 – 2.5 Project Proposals Due Right NOW!

2 Instruction Level Parallelism
Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
Reduce stalls, reduce CPI; reduce CPI, increase IPC
Instruction-level parallelism (ILP) seeks to reduce stalls
Loop-level parallelism is easiest to see:
for (i = 1; i < 100; i = i + 1) {
    A[i] = B[i] + C[i];
    D[i] = E[i] + F[i];
}

3 Instruction Level Parallelism
ILP can be exploited in SW (static, by the compiler) or in HW (dynamic, at run time)
HW-intensive ILP dominates the desktop and server markets
SW/compiler-intensive approaches are more likely seen in embedded systems

4 Dependences
Two instructions are parallel if they can execute simultaneously in a pipeline without causing any stalls (assuming no structural hazards) and can be reordered
Two instructions that are dependent are not parallel and cannot be reordered
Types of dependences:
– Data dependences
– Name dependences
– Control dependences

5 Dependences
Dependences are properties of programs
Hazards are properties of the pipeline organization
A dependence indicates the potential for a hazard
The compiler is concerned with dependences in the program; whether or not a HW hazard actually occurs depends on the given pipeline

6 Review of Hazards
Consider instructions i and j, where i occurs before j.
RAW (read after write) — j tries to read a source before i writes it, so j gets the old value
WAW (write after write) — j tries to write an operand before it is written by i (only possible in pipelines that write in more than one pipe stage or allow an instruction to proceed even when a previous instruction is stalled)
WAR (write after read) — j tries to write a destination before it is read by i, so i incorrectly gets the new value (only possible when some instructions can write results early in the pipeline and other instructions can read sources late in the pipeline)
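The three orderings can also be seen at the source level; the sketch below is not from the slides, and the function and variable names are made up for illustration (the hardware analog is register reads and writes).

/* Hypothetical straight-line code illustrating the three hazard orderings. */
void hazards(double *x, double a, double b) {
    double t;
    t    = a * b;     /* i: writes t                                         */
    x[0] = t + 1.0;   /* j: reads t  -> RAW: j must see the value i produced */
    t    = a - b;     /* k: writes t -> WAR with j (k must not overtake j's
                         read) and WAW with i (the final value of t must be
                         the one k wrote)                                     */
    x[1] = t;
}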

7 Data Dependences
(True) Data dependences (RAW if a hazard for HW):
– Instruction i produces a result used by instruction j, or
– Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i
Easy to determine for registers (fixed names)
Hard for memory:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
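The register/memory contrast is easy to see in C. This sketch is not from the slides; the function name and pointers are hypothetical:

/* The dependence through 't' uses a fixed name, so the compiler sees it
 * immediately.  Whether the load *q depends on the store *p is the memory
 * question above: it does only if p == q, which is generally unknown at
 * compile time. */
double maybe_dependent(double *p, double *q, double s) {
    double t = s * 2.0;   /* definitely feeds the statements below */
    *p = t;               /* store */
    return *q + t;        /* load: dependent on the store only if p == q */
}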

8 Name Dependences
Another kind of dependence, called name dependence: two instructions use the same name but don't exchange data
Antidependence (WAR if a hazard for HW):
– Instruction j writes a register or memory location that instruction i reads, and instruction i is executed first
Output dependence (WAW if a hazard for HW):
– Instruction i and instruction j write the same register or memory location; ordering between the instructions must be preserved
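A C-level picture of the same idea, not from the slides (names are made up): reusing the name t forces an ordering even though no data flows between the two uses, and renaming removes it.

/* Name dependences through the scalar 't'. */
void name_dep(double *x, double a, double b) {
    double t = a + b;
    x[0] = t;        /* reads t */
    t = a - b;       /* antidependence (WAR) with the read above,
                        output dependence (WAW) with the first write */
    x[1] = t;
}

/* After renaming the second value to 'u', the two halves are independent
 * and can be reordered or overlapped freely. */
void name_dep_renamed(double *x, double a, double b) {
    double t = a + b;
    double u = a - b;
    x[0] = t;
    x[1] = u;
}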

9 Name Dependences
Hard for memory accesses:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
Example of renaming (S and T are new names):

Original:                    Renamed:
DIV.D  F0, F2, F4            DIV.D  F0, F2, F4
ADD.D  F6, F0, F8            ADD.D  S, F0, F8
S.D    F6, 0(R1)             S.D    S, 0(R1)
SUB.D  F8, F10, F14          SUB.D  T, F10, F14
MUL.D  F6, F10, F8           MUL.D  F6, F10, T

10 Control Dependence
Final kind of dependence: control dependence
Example:
if (p1) { S1; }
if (p2) { S2; }
S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1.
Note that S2 could be data dependent on S1.

11 Control Dependences
Two (obvious) constraints on control dependences:
– An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch
– An instruction that is not control dependent on a branch cannot be moved after the branch, so that its execution is controlled by the branch
Control dependences are often relaxed to get parallelism; we get the same effect if we preserve the order of exceptions and the data flow
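The first constraint is easiest to see with a guarded memory access. This C sketch is illustrative only (the function is hypothetical) and shows why hoisting a load above the branch that guards it could change exception behavior.

/* The load of *p is control dependent on the null test.  Moving it above the
 * branch could fault when p == NULL, so a compiler may hoist it only
 * speculatively, and only if exception behavior and data flow are preserved. */
double guarded_load(const double *p) {
    if (p != NULL)
        return *p * 2.0;   /* control dependent on (p != NULL) */
    return 0.0;
}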

12 Basic Loop Unrolling
for (i = 1000; i > 0; i = i - 1)
    x[i] = x[i] + s;

Loop: LD    F0, 0(R1)    ; F0 = array element
      ADDD  F4, F0, F2   ; add scalar in F2
      SD    0(R1), F4    ; store result
      SUBI  R1, R1, #8   ; decrement pointer 8 bytes (DW)
      BNEZ  R1, Loop     ; branch if R1 != zero
      NOP                ; delayed branch slot

13 FP Loop Hazards
Where are the stalls? (Note: latencies are due to the pipeline organization)
Loop: LD    F0, 0(R1)    ; F0 = vector element
      ADDD  F4, F0, F2   ; add scalar in F2
      SD    0(R1), F4    ; store result
      SUBI  R1, R1, #8   ; decrement pointer 8 bytes (DW)
      BNEZ  R1, Loop     ; branch if R1 != zero
      NOP                ; delayed branch slot

14 FP Loop Showing Stalls Rewrite code to minimize stalls?

15 Revised FP Loop Minimizing Stalls Can we unroll the loop to make it faster?

16 Loop Unrolling
A short loop body limits parallelism and induces significant overhead; the fraction of branch instructions is high
Replicate the loop body several times and adjust the loop termination code:
for (i = 0; i < 100; i = i + 4) {
    x[i] = x[i] + y[i];
    x[i + 1] = x[i + 1] + y[i + 1];
    x[i + 2] = x[i + 2] + y[i + 2];
    x[i + 3] = x[i + 3] + y[i + 3];
}
Improves scheduling, since instructions from different iterations can be scheduled together
This is done very early in the compilation process
All dependences have to be found beforehand
Need to use different registers for each iteration

17 Where are the control dependences?
1  Loop: LD   F0, 0(R1)
2        ADDD F4, F0, F2
3        SD   0(R1), F4
4        SUBI R1, R1, #8
5        BEQZ R1, exit
6        LD   F0, 0(R1)
7        ADDD F4, F0, F2
8        SD   0(R1), F4
9        SUBI R1, R1, #8
10       BEQZ R1, exit
11       LD   F0, 0(R1)
12       ADDD F4, F0, F2
13       SD   0(R1), F4
14       SUBI R1, R1, #8
15       BEQZ R1, exit
         ....

18 Name Dependences
1  Loop: LD   F0, 0(R1)
2        ADDD F4, F0, F2
3        SD   0(R1), F4     ; drop SUBI & BNEZ
4        LD   F0, –8(R1)
5        ADDD F4, F0, F2
6        SD   –8(R1), F4    ; drop SUBI & BNEZ
7        LD   F0, –16(R1)
8        ADDD F4, F0, F2
9        SD   –16(R1), F4   ; drop SUBI & BNEZ
10       LD   F0, –24(R1)
11       ADDD F4, F0, F2
12       SD   –24(R1), F4
13       SUBI R1, R1, #32   ; alter to 4*8
14       BNEZ R1, LOOP
15       NOP

19 Name Dependences
1  Loop: LD   F0, 0(R1)
2        ADDD F4, F0, F2
3        SD   0(R1), F4     ; drop SUBI & BNEZ
4        LD   F6, –8(R1)    ; F0 becomes F6
5        ADDD F8, F6, F2    ; F4 becomes F8
6        SD   –8(R1), F8    ; drop SUBI & BNEZ
7        LD   F10, –16(R1)  ; F0 becomes F10
8        ADDD F12, F10, F2  ; F4 becomes F12
9        SD   –16(R1), F12  ; drop SUBI & BNEZ
10       LD   F14, –24(R1)  ; F0 becomes F14
11       ADDD F16, F14, F2  ; F4 becomes F16
12       SD   –24(R1), F16
13       SUBI R1, R1, #32   ; alter to 4*8
14       BNEZ R1, LOOP
15       NOP
Register renaming

20 Reschedule code to minimize stalls
Rewrite the loop to minimize stalls?
14 instructions + 4 × (1 + 2) load/ADDD stall cycles + 1 SUBI stall + 1 branch delay = 28 clock cycles to initiate, or 7 per iteration
Assumes the number of loop iterations is a multiple of 4
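The "multiple of 4" assumption is something a real compiler must also discharge. Below is a hedged C sketch of the usual fix, an unrolled main loop plus a scalar cleanup (epilogue) loop; the function name and the general trip count n are hypothetical, since the slides' loop runs over a fixed 1000-element array.

/* Unroll by 4 and handle any leftover iterations in a scalar epilogue, so the
 * transformation is valid for any n, not just multiples of 4. */
void add_scalar(double *x, double s, int n) {
    int i;
    for (i = 0; i + 3 < n; i += 4) {   /* main unrolled body */
        x[i]     += s;
        x[i + 1] += s;
        x[i + 2] += s;
        x[i + 3] += s;
    }
    for (; i < n; i++)                 /* epilogue: at most 3 leftover elements */
        x[i] += s;
}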

21 Unrolled Loop That Minimizes Stalls
What assumptions were made when we moved code?
– OK to move the store past SUBI even though SUBI changes the register
– OK to move loads before stores: do we get the right data?
– When is it safe for the compiler to make such changes?
14 + 1 = 15 clock cycles, or 3.75 per iteration
Can we eliminate the remaining stall?

22 Compiler Loop Unrolling
Most important: code correctness
Unrolling produces larger code that might interfere with the cache:
– Code sequence no longer fits in the L1 cache
– Cache-to-memory bandwidth might not be wide enough
Compiler must understand the hardware:
– Enough registers must be available, OR
– Compiler must rely on hardware register renaming
Compiler must understand the code:
– Determine that loop iterations are independent
– Eliminate branch instructions while preserving correctness
– Determine that the LD and SD are independent over the loop
– Reschedule instructions and adjust the offsets

23 Multiple Issue Machines
Superscalar: multiple parallel dedicated pipelines:
– Varying number of instructions per cycle, scheduled by the compiler and/or by hardware (Tomasulo)
– IBM PowerPC, Sun UltraSparc, DEC Alpha, IA32 Pentium
VLIW (Very Long Instruction Word): multiple operations encoded in one instruction:
– Instructions have a wide template (4–16 operations)
– IA-64 Itanium

24 Getting CPI < 1: Issuing Multiple Instructions/Cycle
Superscalar DLX: 2 instructions, 1 FP & 1 anything else
– Fetch 64 bits/clock cycle; integer on the left, FP on the right
– Can only issue the 2nd instruction if the 1st instruction issues
– More ports for FP registers to do an FP load & FP op as a pair
The 1-cycle load delay expands to 3 instructions in superscalar DLX
– The instruction in the right half can't use it, nor can the instructions in the next slot

25 Superscalar Example
Superscalar:
– Our system can issue one floating-point and one other (non-floating-point) instruction per cycle
– Instructions are dynamically scheduled from the window
– Unroll the loop 5 times and reschedule to minimize cycles per iteration (WHY?)
While the integer/FP split is simple for the HW, we get a CPI of 0.5 only for programs with:
– Exactly 50% FP operations
– No hazards
If more instructions issue at the same time, there is greater difficulty in decode and issue
– Even a 2-way superscalar must examine 2 opcodes and 6 register specifiers, and decide whether 1 or 2 instructions can issue

26 Loop Unrolling in Superscalar
       Integer instruction      FP instruction         Clock cycle
Loop:  LD   F0, 0(R1)                                  1
       LD   F6, –8(R1)                                 2
       LD   F10, –16(R1)        ADDD F4, F0, F2        3
       LD   F14, –24(R1)        ADDD F8, F6, F2        4
       LD   F18, –32(R1)        ADDD F12, F10, F2      5
       SD   0(R1), F4           ADDD F16, F14, F2      6
       SD   –8(R1), F8          ADDD F20, F18, F2      7
       SD   –16(R1), F12                               8
       SUBI R1, R1, #40                                9
       SD   16(R1), F16                                10
       BNEZ R1, Loop                                   11
       SD   8(R1), F20                                 12
Unrolled 5 times to avoid delays (+1 due to SS)
12 clocks to initiate, or 2.4 clocks per iteration

27 VLIW Example
VLIW:
– 5 instructions in one very long instruction word: 2 FP, 2 memory, 1 branch/integer
– Compiler avoids hazards
– Not all slots are always full
VLIW: trade off instruction space for simple decoding
– The long instruction word has room for many operations
– By definition, all the operations the compiler puts in the long instruction word are independent ⇒ execute in parallel
– E.g., 2 integer operations, 2 FP ops, 2 memory refs, 1 branch ⇒ 16 to 24 bits per field ⇒ 7 × 16 = 112 bits to 7 × 24 = 168 bits wide
– Need a compiling technique that schedules across several branches

28 Loop Unrolling in VLIW
Memory reference 1   Memory reference 2   FP operation 1      FP operation 2      Int. op/branch     Clock
LD F0, 0(R1)         LD F6, –8(R1)                                                                   1
LD F10, –16(R1)      LD F14, –24(R1)                                                                 2
LD F18, –32(R1)      LD F22, –40(R1)      ADDD F4, F0, F2     ADDD F8, F6, F2                        3
LD F26, –48(R1)                           ADDD F12, F10, F2   ADDD F16, F14, F2                      4
                                          ADDD F20, F18, F2   ADDD F24, F22, F2                      5
SD 0(R1), F4         SD –8(R1), F8        ADDD F28, F26, F2                                          6
SD –16(R1), F12      SD –24(R1), F16                                                                 7
SD –32(R1), F20      SD –40(R1), F24                                           SUBI R1, R1, #48      8
SD 0(R1), F28                                                                  BNEZ R1, LOOP         9
Unrolled 7 times to avoid delays
9 clocks to initiate, or 1.3 clocks per iteration
Average: 2.5 ops per clock, 50% efficiency
Note: Need more registers in VLIW (15 vs. 6 in SS)

29 Limits to Multi-Issue Machines
Inherent limitations of instruction-level parallelism:
– 1 branch in 5 instructions: how to keep a 5-way VLIW busy?
– Latencies of units: many operations must be scheduled
– Easy: more instruction bandwidth
– Easy: duplicate functional units to get parallel execution
– Hard: increase ports to the register file (bandwidth); the VLIW example needs 7 reads and 3 writes for integer registers & 5 reads and 3 writes for FP registers
– Harder: increase ports to memory (bandwidth)
– Pipelines in lockstep: one pipeline stall stalls all others to avoid hazards

30 Limits to Multi-Issue Machines
Limitations specific to either superscalar or VLIW implementation:
– Decode/issue complexity in superscalar: how wide is practical?
– VLIW code size: unrolled loops + wasted fields in the VLIW; IA-64 compresses dependent instructions, but code is still larger
– VLIW lockstep ⇒ 1 hazard & all instructions stall; IA-64 not lockstep? Dynamic pipeline?
– VLIW & binary compatibility: IA-64 promises binary compatibility

31 Dependences
Two instructions are parallel if they can execute simultaneously in a pipeline without causing any stalls (assuming no structural hazards) and can be reordered (depending on code semantics)
Two instructions that are dependent are not parallel and cannot be reordered
Types of dependences:
– Data dependences
– Name dependences
– Control dependences
Dependences are properties of programs
Hazards are properties of the pipeline organization
A dependence indicates the potential for a hazard

32 Compiler Perspectives on Code Movement
Hard for memory accesses:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
Our example required the compiler to know that if R1 doesn't change, then: 0(R1) ≠ –8(R1) ≠ –16(R1) ≠ –24(R1)
There were no dependences between some loads and stores, so they could be moved past each other

33 Detecting Loop Level Dependences
for (i = 1; i <= 100; i = i + 1) {
    A[i] = A[i] + B[i];      /* S1 */
    B[i+1] = C[i] + D[i];    /* S2 */
}
Loop-carried dependence: S1 uses the B[i+1] computed by S2 in the previous iteration
Within a single iteration S2 does not depend on S1, so the dependence is not circular and the loop can be transformed:
A[1] = A[1] + B[1];
for (i = 1; i <= 99; i = i + 1) {
    B[i+1] = C[i] + D[i];
    A[i+1] = A[i+1] + B[i+1];
}
B[101] = C[100] + D[100];

34 Dependence Distance
for (i = 6; i <= 100; i = i + 1)
    Y[i] = Y[i-5] + Y[i];
Loop-carried dependence in the form of a recurrence on Y
Dependence distance of 5
A higher dependence distance allows for more ILP
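Because the distance is 5, any 5 consecutive iterations read only values produced before the group started, so they are mutually independent. A hedged C sketch of exploiting this by unrolling the recurrence by 5 (for this particular trip count of 95 iterations no cleanup loop is needed; the function name is made up):

/* The five statements in the body have no dependences among themselves and
 * can be reordered or overlapped by the scheduler. */
void recurrence(double *Y) {
    for (int i = 6; i + 4 <= 100; i += 5) {
        Y[i]     += Y[i - 5];
        Y[i + 1] += Y[i - 4];
        Y[i + 2] += Y[i - 3];
        Y[i + 3] += Y[i - 2];
        Y[i + 4] += Y[i - 1];
    }
}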

35 Greatest Common Divisor test
Affine array indices:
– every array index is a linear (affine) function of the loop variable i, of the form a*i + b
Assume the code properties:
– the for loop runs from n to m with index i
– the loop has an access pattern: X[a*i + b] = X[c*i + d] …
– two values of i, j and k, both between n and m
– a store indexed by j and a later load indexed by k, with: a*j + b = c*k + d
A loop-carried dependence can exist only if GCD(c, a) divides (d − b)
Example:
for (i = 1; i <= 100; i = i + 1)
    X[2*i+3] = X[2*i] * 5.0;
a = 2, b = 3, c = 2, d = 0; GCD(a, c) = 2 and d − b = −3
There is no loop-carried dependence, since 2 does not divide −3
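A small, self-contained C sketch of the test itself (the helper names are made up; the check rules out a dependence when GCD(a, c) does not divide d − b, and assumes the coefficients are not both zero):

#include <stdio.h>

/* Euclid's algorithm. */
static int gcd(int a, int c) {
    while (c != 0) { int t = a % c; a = c; c = t; }
    return a < 0 ? -a : a;
}

/* For X[a*i + b] = ... X[c*i + d]: a loop-carried dependence can exist only
 * if gcd(a, c) divides (d - b).  Returns 0 when independence is proven; a
 * return of 1 means a dependence MAY exist (the test is conservative). */
static int gcd_test_may_depend(int a, int b, int c, int d) {
    return (d - b) % gcd(a, c) == 0;
}

int main(void) {
    /* Slide's example: X[2*i+3] = X[2*i] * 5.0  ->  a=2, b=3, c=2, d=0 */
    printf("may depend: %d\n", gcd_test_may_depend(2, 3, 2, 0));  /* prints 0 */
    return 0;
}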

36 Problem Cases
References through pointers instead of array indices
– partly eliminated by strict type checking
Sparse arrays indexed through other arrays (similar to pointers)
A dependence exists for some values of the indices, but those values are never actually reached
The loop-carried dependence has a distance far greater than what loop unrolling would cover
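The pointer case is the most common in practice. A hedged C illustration, not from the slides: with plain pointers the compiler must assume the destination and source may overlap; the C99 restrict qualifier is the programmer's assertion that they do not, which re-enables unrolling and reordering.

/* Possible cross-iteration aliasing: dst could overlap src, e.g. dst == src + 1. */
void scale_ambiguous(double *dst, const double *src, double s, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * s;
}

/* 'restrict' promises no aliasing, so the iterations are provably independent. */
void scale_independent(double *restrict dst, const double *restrict src,
                       double s, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * s;
}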