COMP25212 Lecture 5: Pipelining - Reducing Instruction Execution Time


1 COMP25212 Lecture 5: Pipelining - Reducing Instruction Execution Time

2 COMP25212 Lecture 5: The Fetch-Execute Cycle. Instruction execution is a simple repetitive cycle: fetch instruction, execute instruction. [Diagram: the CPU fetching instructions from memory and executing them.]

3 COMP25212 Lecture 5: Cycles of Operation. Most logic circuits are driven by a clock. In its simplest form, one instruction would take one clock cycle. This assumes that getting an instruction from memory and accessing data memory can each be done in 1/5th of a cycle (i.e. a cache hit).

4 COMP25212 Lecture 5: Fetch-Execute Detail. The two parts of the cycle can be subdivided. Fetch: get the instruction from memory; decode the instruction & select registers. Execute: perform the operation or calculate an address; access an operand in data memory; write the result to a register.
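
To make the five sub-steps concrete, here is a purely illustrative software model (not anything from the lecture): a toy machine whose only instruction is a three-register ADD, with the encoding and all names invented for this example.

    #include <stdint.h>

    enum { NREGS = 16, NWORDS = 256 };
    static uint32_t regs[NREGS], data_mem[NWORDS], inst_mem[NWORDS], pc;

    /* One call = one complete fetch-execute cycle, sub-step by sub-step. */
    static void step(void) {
        uint32_t inst = inst_mem[pc];             /* IF : get instruction from memory  */
        uint32_t rd   = (inst >> 8) & 0xF;        /* ID : decode instruction and       */
        uint32_t rs1  = (inst >> 4) & 0xF;        /*      select registers             */
        uint32_t rs2  =  inst       & 0xF;
        uint32_t result = regs[rs1] + regs[rs2];  /* EX : perform the operation        */
        (void)data_mem;                           /* MEM: a toy ADD touches no data    */
        regs[rd] = result;                        /* WB : write result to a register   */
        pc = pc + 1;                              /* move on to the next instruction   */
    }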

5 COMP25212 Processor Detail. [Datapath diagram: PC, Instruction Cache, Register Bank, MUX, ALU and Data Cache, arranged as the five stages IF (Instruction Fetch), ID (Instruction Decode), EX (Execute), MEM (Memory Access) and WB (Write Back).]

6 COMP25212 Lecture 5: Logic to do this. Each stage does its work and passes it on to the next. Each block is only doing work for 1/5th of each cycle. [Diagram: Fetch Logic, Decode Logic, Exec Logic, Mem Logic and Write Logic connected in series, with the Instruction Cache feeding the fetch logic and the Data Cache attached to the memory logic.]

7 COMP25212 Lecture 5: Can We Overlap Operations? E.g. while decoding one instruction we could be fetching the next.

8 COMP25212 Lecture 5: Insert Buffers Between Stages. Instead of a direct connection between stages, use extra buffers to hold state, and clock the buffers once per cycle. [Diagram: the same five logic blocks, now separated by clocked buffers (e.g. the Instruction Register between fetch and decode), with the Instruction Cache and Data Cache attached as before.]
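
As a software sketch of what these buffers mean (again not the lecture's design; every type and function name here is hypothetical), each inter-stage buffer is a latch that is updated exactly once per clock. Every stage computes from its own input latch, so all five can work at once, and the results are committed together at the clock edge.

    #include <stdint.h>

    /* Hypothetical inter-stage buffers ("pipeline registers"). */
    typedef struct { uint32_t inst; }                     IF_ID;
    typedef struct { uint32_t op, rd, src_a, src_b; }     ID_EX;
    typedef struct { uint32_t rd, alu_out; int is_load; } EX_MEM;
    typedef struct { uint32_t rd, value; }                MEM_WB;

    static IF_ID if_id;  static ID_EX id_ex;  static EX_MEM ex_mem;  static MEM_WB mem_wb;

    /* Hypothetical stage functions, declared but not defined here. */
    IF_ID  do_fetch(void);
    ID_EX  do_decode(IF_ID in);
    EX_MEM do_execute(ID_EX in);
    MEM_WB do_memory(EX_MEM in);
    void   do_writeback(MEM_WB in);

    static void one_clock_cycle(void) {
        MEM_WB m = do_memory(ex_mem);    /* each stage reads only its own input buffer */
        EX_MEM x = do_execute(id_ex);
        ID_EX  d = do_decode(if_id);
        IF_ID  f = do_fetch();
        do_writeback(mem_wb);
        mem_wb = m;  ex_mem = x;  id_ex = d;  if_id = f;   /* clock edge: latch all */
    }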

9 COMP25212 Lecture 5: This is a Pipeline. Just like a car production line: one stage puts the engine in, the next puts the wheels on, etc. We still complete one instruction every cycle, but we can now increase the clock speed by 5x, so it is 5x faster! But it isn't quite that easy!

10 COMP25212 Lecture 5: Why 5 Stages? Simply because early pipelined processors found that dividing execution into these 5 stages of roughly equal complexity was appropriate. Some recent processors have used more than 30 pipeline stages. We will consider 5 for simplicity for the moment.

11 Control Hazards

12 COMP25212 Lecture 5: The Control Transfer Problem. The obvious way to fetch instructions is in serial program order (i.e. just incrementing the PC). What if we fetch a branch? We only know it's a branch when we decode it in the second stage of the pipeline. By that time we are already fetching the next instruction in serial order.

13 COMP25212 Lecture 5: A Pipeline 'Bubble'. [Pipeline timing diagram: instructions 1-3 are followed by a branch to instruction n; the branch is only recognised in the decode stage, by which time Inst 5 has already been fetched.] We must mark Inst 5 as unwanted and ignore it as it goes down the pipeline, but we have wasted a cycle.
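
In terms of the software sketch above (all names are still hypothetical, including is_branch(), branch_target() and the NOP encoding), the bubble is simply the wrongly fetched instruction being replaced with a NOP when decode discovers the branch.

    #include <stdint.h>
    #include <stdbool.h>

    bool     is_branch(uint32_t inst);       /* hypothetical decode helpers   */
    uint32_t branch_target(uint32_t inst);
    #define  NOP 0u                          /* an encoding that does nothing */

    /* 'decoding' is the instruction in the decode stage; 'just_fetched' was
       fetched behind it this cycle.  Returns the instruction to latch next. */
    static uint32_t squash_if_branch(uint32_t decoding, uint32_t just_fetched,
                                     uint32_t *pc) {
        if (is_branch(decoding)) {
            *pc = branch_target(decoding);   /* fetch from the target next cycle  */
            return NOP;                      /* the fetched instruction becomes a */
        }                                    /* bubble: one wasted cycle          */
        return just_fetched;
    }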

14 COMP25212 Lecture 5: Conditional Branches. It gets worse! Suppose we have a conditional branch. It is possible that we might not be able to determine the branch outcome until the execute (3rd) stage. We would then have 2 'bubbles'. We can often avoid this by reading registers during the decode stage.

15 COMP25212 Lecture 5: Longer Pipelines. 'Bubbles' due to branches are usually called Control Hazards. They occur because it takes one or more pipeline stages to detect the branch. The more stages there are, the less work each does, so branch detection is more likely to take multiple stages. Longer pipelines therefore usually suffer more degradation from control hazards.

16 COMP25212 Lecture 5: Branch Prediction. In most programs a branch instruction is executed many times, and each time it will be at the same (virtual) address in memory. What if, when a branch was executed, we 'remembered' its address and we 'remembered' the address that was fetched next?

17 COMP25212 Lecture 5: Branch Target Buffer. We could do this with some sort of cache. As we fetch the branch we check the buffer; if there is a valid entry we use it to fetch the next instruction. [Diagram: a small cache mapping a branch address to its target address.]

18 COMP25212 Lecture 5: Branch Target Buffer. For an unconditional branch we would always get it right. For a conditional branch it depends on the probability that the branch behaves the same way as it did last time. E.g. for a 'for' loop which jumps back many times, we will get it right most of the time. But it is only a prediction; if we get it wrong we correct it on the next cycle (and suffer a 'bubble').

19 COMP25212 Lecture 5: Outline Implementation. [Diagram of the fetch stage: the PC addresses both the Instruction Cache and the Branch Target Buffer; if the buffer entry is valid its target becomes the next PC, otherwise the PC is simply incremented.]
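
As an illustration only (not the lecture's implementation), a small direct-mapped branch target buffer can be sketched like this; the table size and all names are arbitrary.

    #include <stdint.h>
    #include <stdbool.h>

    #define BTB_ENTRIES 256

    typedef struct {
        bool     valid;
        uint32_t branch_pc;   /* address of the branch itself ('remembered')  */
        uint32_t target_pc;   /* address that was fetched next ('remembered') */
    } BTBEntry;

    static BTBEntry btb[BTB_ENTRIES];

    /* Fetch stage: choose the next PC. */
    static uint32_t predict_next_pc(uint32_t pc) {
        BTBEntry *e = &btb[(pc / 4) % BTB_ENTRIES];
        if (e->valid && e->branch_pc == pc)
            return e->target_pc;   /* valid entry: fetch from the remembered target */
        return pc + 4;             /* otherwise just increment the PC               */
    }

    /* When a branch actually executes: remember where it went. */
    static void btb_update(uint32_t branch_pc, uint32_t actual_target) {
        BTBEntry *e = &btb[(branch_pc / 4) % BTB_ENTRIES];
        e->valid     = true;
        e->branch_pc = branch_pc;
        e->target_pc = actual_target;
    }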

20 COMP25212 Lecture 5: Other Branch Prediction. The BTB is simple to understand but expensive to implement. Also, as described, it just uses the last execution of the branch to predict. In practice branch prediction depends on more history (several previous branches) and on context (how did we get to this branch?). Real branch predictors are more complex, and they are vital to performance in long pipelines.
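
To illustrate 'more history' (this is one common textbook scheme, sketched here with arbitrary sizes, not anything specified in the lecture): a table of 2-bit saturating counters indexed by the branch address XORed with a register of recent branch outcomes, so the prediction depends on how we reached this branch as well as on the branch itself.

    #include <stdint.h>
    #include <stdbool.h>

    #define PHT_ENTRIES 1024             /* pattern history table of 2-bit counters */
    static uint8_t  pht[PHT_ENTRIES];    /* 0,1 = predict not taken; 2,3 = taken    */
    static uint32_t history;             /* recent branch outcomes, one bit each    */

    static bool predict_taken(uint32_t pc) {
        uint32_t idx = ((pc >> 2) ^ history) % PHT_ENTRIES;
        return pht[idx] >= 2;
    }

    static void train(uint32_t pc, bool taken) {
        uint32_t idx = ((pc >> 2) ^ history) % PHT_ENTRIES;
        if (taken  && pht[idx] < 3) pht[idx]++;       /* saturate upwards   */
        if (!taken && pht[idx] > 0) pht[idx]--;       /* saturate downwards */
        history = (history << 1) | (taken ? 1u : 0u); /* record the outcome */
    }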

21 Data Hazards

22 COMP25212 Data Hazards. The pipeline can cause other problems. Consider: ADD R1,R2,R3 followed by MUL R0,R1,R1. The ADD instruction is producing a value in R1; the following MUL instruction uses R1 as an input.

23 COMP25212 Instructions in the Pipeline. [Datapath diagram as before, with ADD R1,R2,R3 in the EX stage while MUL R0,R1,R1 is in the ID stage behind it.]

24 COMP25212 The Data isn't Ready. At the end of its ID cycle, the MUL instruction should have selected the value in R1 to put into the buffer at the input to the EX stage. But the correct value for R1 from the ADD instruction is only now being put into the buffer at the output of the EX stage. It won't get to the input of the Register Bank until one cycle later, and then probably another cycle to be written into R1.

25 COMP25212 Insert Delays? One solution is to detect such data dependencies in hardware and hold the instruction in the decode stage until the data is ready: 'bubbles' and wasted cycles again. Another is to use the compiler to try to reorder instructions; this only works if we can find something useful to do, otherwise the compiler must insert NOPs, which is also waste.
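
A minimal sketch of the hardware-interlock check (field names are hypothetical, and exactly which stages must be checked depends on when the register file is written): compare the source registers of the instruction in decode against the destinations of instructions still in flight, and hold it back if any match.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { bool writes_reg; uint32_t rd; } Dest;
    static Dest ex_dest, mem_dest, wb_dest;   /* destinations still in the pipeline */

    /* Must the instruction now in decode (reading src1 and src2) wait? */
    static bool must_stall(uint32_t src1, uint32_t src2) {
        Dest inflight[3] = { ex_dest, mem_dest, wb_dest };
        for (int i = 0; i < 3; i++)
            if (inflight[i].writes_reg &&
                (inflight[i].rd == src1 || inflight[i].rd == src2))
                return true;    /* data not ready yet: hold it, insert a bubble */
        return false;
    }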

26 COMP25212 Forwarding. [Datapath diagram: an extra path feeds the ALU result of ADD R1,R2,R3 straight back to the ALU input for MUL R0,R1,R1.] We can add extra paths for specific cases, but control becomes more complex.
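
The forwarding choice itself can be sketched as a priority selection: prefer the newest value still in flight over the (not yet updated) register file. All names here are hypothetical.

    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t regfile[16];

    /* Results produced but not yet written back, with their destination registers. */
    static struct { bool valid; uint32_t rd, value; } ex_mem_fwd, mem_wb_fwd;

    /* Pick the value of source register 'rs' for the ALU input. */
    static uint32_t forward_operand(uint32_t rs) {
        if (ex_mem_fwd.valid && ex_mem_fwd.rd == rs)
            return ex_mem_fwd.value;   /* newest: result one stage ahead (e.g. the ADD) */
        if (mem_wb_fwd.valid && mem_wb_fwd.rd == rs)
            return mem_wb_fwd.value;   /* result two stages ahead, not yet written back */
        return regfile[rs];            /* no hazard: the register file value is current */
    }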

27 COMP25212 Why did it Occur? It is due to the design of our pipeline. In this case the result we want is ready one stage ahead of where it is needed, so why pass it down the pipeline? But what if we have the sequence LDR R1,[R2,R3] followed by MUL R0,R1,R1? The LDR instruction means load R1 from the memory address R2+R3.

28 COMP25212 Pipeline Sequence for LDR. Fetch; decode and read registers (R2 & R3); execute: add R2+R3 to form the address; memory access: read from that address; only now can we write the value into register R1. We have designed the 'worst case' pipeline to work for all instructions.

29 Forwarding. [Datapath diagram: LDR R1,[R2,R3] is in the MEM stage, a NOP bubble has been inserted behind it, and MUL R0,R1,R1 waits so that the loaded value can be forwarded to it.] We can add extra paths for specific cases, but control becomes more complex.
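
The load is the case extra paths cannot fully solve: the value only exists after the MEM stage, so an instruction that uses a just-loaded register still needs the one-cycle bubble shown above (with forwarding after that). A minimal sketch of the check, with hypothetical names:

    #include <stdint.h>
    #include <stdbool.h>

    /* True if the instruction in EX is a load whose destination is a source of
       the instruction now being decoded: one bubble is then unavoidable.      */
    static bool load_use_stall(bool ex_is_load, uint32_t ex_rd,
                               uint32_t decode_rs1, uint32_t decode_rs2) {
        return ex_is_load && (ex_rd == decode_rs1 || ex_rd == decode_rs2);
    }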

30 COMP25212 Longer Pipelines. As mentioned previously, we can go to longer pipelines: do less per pipeline stage, so each stage takes less time and the clock frequency can be increased; but there is a greater penalty for hazards and more complex control. Negative returns?

31 COMP25212 Where Next? Despite these difficulties it is possible to build processors which approach 1 cycle per instruction (CPI). Given that the computational model is one of serial instruction execution, can we do any better than this?

32 Instruction Level Parallelism

33 Instruction Level Parallelism (ILP). Suppose we have an expression of the form x = (a+b) * (c-d). Assuming a, b, c & d are in registers, this might turn into: ADD R0, R2, R3; SUB R1, R4, R5; MUL R0, R0, R1; STR R0, x.

34 ILP (cont). The MUL has a dependence on the ADD and the SUB, and the STR has a dependence on the MUL. However, the ADD and SUB are independent: in theory we could execute them in parallel, or even out of order. (ADD R0, R2, R3; SUB R1, R4, R5; MUL R0, R0, R1; STR R0, x.)

35 The Data Flow Graph. We can see this more clearly if we draw the data flow graph. [Diagram: R2 and R3 feed the ADD, R4 and R5 feed the SUB, both results feed the MUL, whose result is stored to x.] As long as R2, R3, R4 & R5 are available, we can execute the ADD & SUB in parallel.

36 Amount of ILP? This is obviously a very simple example. However, real programs often have quite a few independent instructions which could be executed in parallel. The exact number is clearly program dependent, but analysis has shown that an ILP of maybe 4 is not uncommon (in parts of the program, anyway).

37 How to Exploit? We need to fetch multiple instructions per cycle (wider instruction fetch) and to decode multiple instructions per cycle. But they must use common registers, since they are logically the same registers. We need multiple ALUs for execution, but these also access a common data cache.

38 Dual Issue Pipeline Structure. Two instructions can now execute in parallel, (potentially) doubling the execution rate. This is called a 'Superscalar' architecture. [Diagram: the instruction cache feeds two instructions, I1 and I2, into parallel pipelines which share the register bank and data cache but each have their own MUX and ALU.]

39 Register & Cache Access. Note that the access rate to both the registers and the cache will be doubled. To cope with this we may need a dual-ported register bank and a dual-ported cache. This can be done either by duplicating the access circuitry or even by duplicating the whole register and cache structure.

40 Selecting Instructions. To get the doubled performance out of this structure, we need independent instructions. We can have a 'dispatch unit' in the fetch stage which uses hardware to examine instruction dependencies and only issues two in parallel if they are independent.
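
A sketch of the dispatch unit's check for a two-way design (field names are hypothetical, and structural limits such as both instructions needing the data cache are ignored): the younger instruction may only issue alongside the older one if it does not depend on it.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     writes_reg;
        uint32_t rd;          /* destination register */
        uint32_t rs1, rs2;    /* source registers     */
    } Decoded;

    /* Can i2 issue in the same cycle as i1 (i1 being the older instruction)? */
    static bool can_dual_issue(Decoded i1, Decoded i2) {
        if (!i1.writes_reg)
            return true;
        if (i2.rs1 == i1.rd || i2.rs2 == i1.rd)   /* i2 reads i1's result         */
            return false;
        if (i2.writes_reg && i2.rd == i1.rd)      /* both write the same register */
            return false;
        return true;
    }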

41 Instruction Order. If we had ADD R1,R1,R0; MUL R0,R1,R1; ADD R3,R4,R5; MUL R4,R3,R3, issued in pairs as written, we wouldn't be able to issue any in parallel because of the dependencies.

42 Compiler Optimisation. But if the compiler had examined the dependencies and produced ADD R1,R1,R0; ADD R3,R4,R5; MUL R0,R1,R1; MUL R4,R3,R3, we could now execute the pairs in parallel (assuming appropriate forwarding logic).

43 Relying on the Compiler. If the compiler can't manage to reorder the instructions, we still need hardware to avoid issuing conflicting instructions together. But if we could rely on the compiler, we could get rid of the expensive checking logic. This is the principle of VLIW (Very Long Instruction Word); the compiler must add NOPs if necessary.

44 Out of Order Execution. There are arguments against relying on the compiler: legacy binaries (optimum code is tied to a particular hardware configuration) and 'code bloat' in VLIW (useless NOPs). Instead, rely on hardware to re-order instructions if necessary. Complex, but effective.

45 Out of Order Execution Processor. An instruction buffer needs to be added to store all issued instructions. A scheduler is in charge of sending non-conflicting instructions for execution. Memory and register updates need to be delayed until all older instructions have finished, to comply with the application's semantics.
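
A heavily simplified sketch of the scheduler idea (register renaming, the memory queue and in-order completion are all omitted, and every name is invented): instructions sit in the buffer until their source registers are ready, and the scheduler may pick a younger ready instruction ahead of an older waiting one.

    #include <stdint.h>
    #include <stdbool.h>

    #define BUF_SIZE 16
    #define NREGS    32

    typedef struct { bool busy; uint32_t rs1, rs2; } BufEntry;

    static BufEntry buffer[BUF_SIZE];   /* issued but not yet executed instructions    */
    static bool     reg_ready[NREGS];   /* false while an older write is still pending */

    /* Return the index of an instruction that can execute this cycle, or -1. */
    static int pick_ready(void) {
        for (int i = 0; i < BUF_SIZE; i++)
            if (buffer[i].busy &&
                reg_ready[buffer[i].rs1] && reg_ready[buffer[i].rs2])
                return i;               /* may be out of program order */
        return -1;
    }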

46 Out of Order Execution. [Diagram: instructions are dispatched from the instruction cache into an instruction buffer, a scheduler picks instructions for the ALU, and memory and register updates are deferred through a memory queue and a register queue before reaching the data cache and register bank.]

47 Programmer Assisted ILP / Vector Instructions. Linear algebra operations such as vector product and matrix multiplication have LOTS of parallelism. This can be hard to detect in languages like C, and the independent instructions can be too far apart for hardware detection. The programmer can instead use types such as float4.
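
As an illustration of the float4 idea, here is one way it can look in C using the GCC/Clang vector-extension syntax (the extension and all names are an assumption of this sketch, not something defined in the lecture): four multiplications are written as a single operation that the compiler can map onto a vector instruction.

    /* A 16-byte vector of four floats (GCC/Clang vector extension). */
    typedef float float4 __attribute__((vector_size(16)));

    /* Element-wise product of two arrays, four floats at a time.
       n4 is the number of float4 groups; any tail is ignored to keep the sketch short. */
    void vec_mul(const float4 *a, const float4 *b, float4 *out, int n4) {
        for (int i = 0; i < n4; i++)
            out[i] = a[i] * b[i];    /* one vector multiply per iteration */
    }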

48 Limits of ILP. Modern processors are up to 4-way superscalar (but rarely achieve 4x speed), and not much beyond this, due to hardware complexity and the limited amount of ILP in real programs. Limited ILP is not surprising: conventional programs are written assuming a serial execution model. What next?

