1 Chapter 6 Pipelined CPU Design

2 Pipelined Operation – Laundry Analogy (Text Fig. 6.1)
Spring 2005, ELEC 5200/6200. From Patterson/Hennessy slides.

3 Pipelined Operation of the MIPS CPU (Text Fig. 6.3)
- Pipelining improves performance by increasing instruction throughput.
- Ideal speedup equals the number of stages in the pipeline. Do we achieve this?
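The ideal-speedup claim above can be sketched numerically. This is an illustrative model, not from the slides: it assumes a 5-stage pipeline with one-cycle stages, no hazards, and my own function names.

```python
# Sketch of ideal pipeline speedup; stage count and function names are
# illustrative assumptions, not from the slides.
def pipelined_cycles(n_instructions, n_stages):
    # The first instruction needs n_stages cycles to fill the pipeline;
    # after that, one instruction completes per cycle.
    return n_stages + (n_instructions - 1)

def unpipelined_cycles(n_instructions, n_stages):
    # Without pipelining, each instruction occupies all stages serially.
    return n_instructions * n_stages

n, stages = 1_000_000, 5
speedup = unpipelined_cycles(n, stages) / pipelined_cycles(n, stages)
print(f"speedup = {speedup:.4f}")  # approaches 5 (the stage count) as n grows
```

Even in this idealized model the fill time keeps the speedup slightly below the stage count; hazards and stalls push real pipelines further below it.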

4 Pipelining
What makes it easy?
- All instructions are the same length.
- There are just a few instruction formats.
- Memory operands appear only in loads and stores.
What makes it hard?
- Structural hazards: suppose we had only one memory.
- Control hazards: we need to worry about branch instructions.
- Data hazards: an instruction depends on the result of a previous instruction.
We'll build a simple pipeline and look at these issues, then talk about modern processors and what really makes pipelining hard:
- Exception handling
- Trying to improve performance with out-of-order execution, etc.

5 "Stages" in the Single-Cycle CPU
What do we need to add to actually split the datapath into stages?

6 Pipelined Datapath (Text Fig. 6.11)

7 Corrected Datapath (Text Fig. 6.17)
Note: the destination register number is forwarded along the pipeline to the register file.

8 Graphically Representing Pipelines
Pipeline diagrams can help answer questions like:
- How many cycles does it take to execute this code?
- What is the ALU doing during cycle 4?
Use this representation to help understand datapaths.

9 Pipeline Control – Identify the Control Signals

10 Pipeline Control
We have five stages. What needs to be controlled in each stage?
- Instruction fetch and PC increment
- Instruction decode / register fetch
- Execution
- Memory stage
- Write back
How would control be handled in an automobile plant?
- A fancy control center telling everyone what to do?
- Should we use a finite state machine?

11 Pipeline Control
Pass the control signals along the pipeline, just like the data.
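As a sketch of this idea (the control values and signal names below are illustrative assumptions, not the textbook's exact tables): the signals decoded in ID are grouped by the stage that consumes them, and each pipeline register carries the remaining groups forward, dropping the group its stage has used.

```python
# Toy main-control output for a load word; names and values are assumptions.
lw_controls = {
    "ex":  {"alu_src": 1, "reg_dst": 0},
    "mem": {"mem_read": 1, "mem_write": 0, "branch": 0},
    "wb":  {"reg_write": 1, "mem_to_reg": 1},
}

# ID/EX carries all three groups; each later register drops the group
# consumed in its stage, exactly as the data fields are passed along.
id_ex  = dict(lw_controls)
ex_mem = {"mem": id_ex["mem"], "wb": id_ex["wb"]}   # EX signals used up
mem_wb = {"wb": ex_mem["wb"]}                       # MEM signals used up
print(mem_wb)  # only the write-back controls survive to the last stage
```

The design point this illustrates: no central controller tracks in-flight instructions; each instruction carries its own control bits with it.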

12 Datapath with Control

13 Dependencies
- Problem: we start the next instruction before the first has finished.
- Dependencies that "go backward in time" are data hazards.

14 Software Solution
Have the compiler guarantee no hazards by inserting "nops". Where do we insert them?
  sub $2, $1, $3
  and $12, $2, $5
  or  $13, $6, $2
  add $14, $2, $2
  sw  $15, 100($2)
Problem: this really slows us down!
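A minimal sketch of such a compiler pass, under stated assumptions: the tuple layout (op, dest, src1, src2) is mine, and without forwarding a result is only safe to read starting three instructions after it is produced (the register file writes in the first half of a cycle and reads in the second half).

```python
# Hedged sketch of nop insertion for the slide's sequence; the instruction
# encoding and spacing rule are assumptions for illustration.
NOP = ("nop", None, None, None)

def insert_nops(program):
    out = []
    for op, dest, s1, s2 in program:
        def conflicts():
            # A source must not have been written by either of the two
            # most recently emitted instructions.
            return any(p[1] is not None and p[1] in (s1, s2) for p in out[-2:])
        while conflicts():
            out.append(NOP)
        out.append((op, dest, s1, s2))
    return out

prog = [("sub", "$2",  "$1",  "$3"),
        ("and", "$12", "$2",  "$5"),
        ("or",  "$13", "$6",  "$2"),
        ("add", "$14", "$2",  "$2"),
        ("sw",  None,  "$15", "$2")]   # sw reads $15 and $2, writes nothing
print([op for op, *_ in insert_nops(prog)])  # two nops land after sub
```

This makes the cost concrete: the five-instruction sequence grows to seven, which is why "this really slows us down."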

15 Forwarding
- Use temporary results; don't wait for them to be written back.
- Register file forwarding handles a read and write to the same register in one cycle.
- ALU forwarding passes results directly between instructions.
- (From the figure) What if this $2 were $13?

16 Forwarding
The main idea (some details not shown).
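The EX-hazard half of the idea can be sketched as a forwarding-unit condition. Field names below are modeled on the pipeline-register convention; the mux select values are my own labels rather than hardware encodings.

```python
# Hedged sketch of the forwarding-unit test for the ALU's first operand.
# Returns which mux input to select: "EX/MEM", "MEM/WB", or "REGFILE".
def forward_a(ex_mem, mem_wb, id_ex_rs):
    # The most recent producer wins, and register $0 is never forwarded.
    if ex_mem["reg_write"] and ex_mem["rd"] != 0 and ex_mem["rd"] == id_ex_rs:
        return "EX/MEM"
    if mem_wb["reg_write"] and mem_wb["rd"] != 0 and mem_wb["rd"] == id_ex_rs:
        return "MEM/WB"
    return "REGFILE"

# sub $2,$1,$3 just left EX; and $12,$2,$5 now needs $2 as its first operand.
print(forward_a({"reg_write": True,  "rd": 2},
                {"reg_write": False, "rd": 0}, 2))  # EX/MEM
```

Checking EX/MEM before MEM/WB is what resolves the "what if this $2 was $13?" question on the previous slide: the newest value is the one forwarded.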

17 Can't Always Forward
Load word can still cause a hazard: an instruction tries to read a register in the cycle after a load instruction writes to the same register. Thus, we need a hazard detection unit to "stall" the instruction that follows the load.
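The hazard-detection test can be sketched as follows (field names are assumptions modeled on the pipeline registers): stall when the instruction in EX is a load whose destination matches a source register of the instruction in ID.

```python
# Hedged sketch of the load-use hazard condition.
def load_use_hazard(id_ex, if_id):
    # Stall if EX holds a load (mem_read asserted) whose target register (rt)
    # is a source of the instruction currently being decoded.
    return id_ex["mem_read"] and id_ex["rt"] in (if_id["rs"], if_id["rt"])

# lw $2, 20($1) in EX; and $4, $2, $5 in ID -> one-cycle stall needed
print(load_use_hazard({"mem_read": True, "rt": 2}, {"rs": 2, "rt": 5}))  # True
```

Forwarding cannot help here because the loaded value does not exist until the end of MEM, one cycle too late for the dependent instruction's EX stage.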

18 Stalling
We can stall the pipeline by keeping an instruction in the same stage, inserting a "bubble":
  lw  $2, 20($1)
  and $4, $2, $5   (and becomes a nop for one cycle)
  or  $8, $2, $6
  add $9, $4, $2
[Figure: multi-cycle pipeline diagram, clock cycles CC 1 through CC 10, showing the stall bubble]

19 Hazard Detection Unit
Stall by letting an instruction that won't write anything go forward.

20 Improving Performance
Try to avoid stalls! E.g., reorder these instructions:
  lw $t0, 0($t1)
  lw $t2, 4($t1)
  sw $t2, 0($t1)
  sw $t0, 4($t1)
Dynamic pipeline scheduling:
- Hardware chooses which instructions to execute next.
- Instructions execute out of order (the processor doesn't wait for a dependency to be resolved, but rather keeps going!).
- Speculates on branches and keeps the pipeline full (may need to roll back if a prediction is incorrect).
All of this tries to exploit instruction-level parallelism.

21 Cypress CY7C601 SPARC CPU

22 Fujitsu MB86900 SPARC CPU

23 Branch Hazards
- When we decide to branch, other instructions are already in the pipeline!
- We are predicting "branch not taken"; we need to add hardware for flushing instructions if we are wrong.

24 Flushing Instructions
Note: we've also moved the branch decision to the ID stage.

25 Minimizing Branch Penalties
Make the branch decision earlier in the pipeline:
- Branch penalty = number of instructions fetched from the wrong path.
- An earlier branch decision reduces the number of "in progress" instructions to be flushed.
Delayed branch:
- Delay the branch by one or more instructions.
- These instructions do not get flushed from the pipeline.
Branch prediction:
- Early in the pipeline, predict the branch outcome (taken vs. not taken).
- Fetch the next instructions from the predicted path.
- No time penalty unless the prediction is incorrect.

26 Delayed Branching
Allow the instruction immediately following the branch to complete, effectively delaying the branch. Example: move add into the "delay slot" of beq.
  Original:              With delay slot:
  add $3, $2, $4         beq $1, $2, label
  beq $1, $2, label      add $3, $2, $4
  sub $5, $1, $2         sub $5, $1, $2
  xor $4, $6, $7         xor $4, $6, $7
- Original order: flush the sub and xor instructions if the branch is taken.
- With the delay slot: the add instruction executes whether or not the branch is taken, and only the sub instruction is flushed.
- The compiler must move into the delay slot an instruction that will not affect the branch decision.

27 Branch Prediction
- If the branch is taken, we have a penalty of one cycle.
- For our simple design, this is reasonable.
- With deeper pipelines, the penalty increases, and static branch prediction drastically hurts performance.
- Solution: dynamic branch prediction, e.g., a 2-bit prediction scheme.

28 Branch Prediction
Sophisticated techniques:
- A "branch target buffer" to help us look up the destination.
- Correlating predictors that base the prediction on global behavior and recently executed branches (e.g., the prediction for a specific branch instruction depends on what happened in previous branches).
- Tournament predictors that use different prediction strategies and keep track of which one is performing best.
- A "branch delay slot" that the compiler tries to fill with a useful instruction (making the one-cycle delay part of the ISA).
Branch prediction is especially important because it enables other, more advanced pipelining techniques to be effective. Modern processors predict correctly about 95% of the time!

29 Branch Target Buffer (Prediction Table)
  Instruction Fetch Address | Branch Target Address | Recent Branch History
  400                       | 420                   | Taken
  520                       | 434                   | Taken
  560                       | 500                   | Not taken
  600                       | 610                   | Taken
- Enter the branch information when the branch is first executed (computed target address, taken/not taken).
- The next time the instruction is fetched, use the previously computed target address if the branch was previously taken, or PC + 4 otherwise (flush the pipeline if the branch is later found to go the other way).
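The table above can be sketched as a small lookup structure. This is a dictionary-based illustration with my own function names; the addresses come from the slide's example and are treated as plain integers.

```python
# Hedged sketch of a branch target buffer keyed by instruction fetch address.
btb = {}

def record_branch(pc, target, taken):
    # Fill or update the entry when the branch executes.
    btb[pc] = {"target": target, "taken": taken}

def predicted_next_pc(pc):
    # Use the stored target if this address hits the BTB and the branch was
    # last taken; otherwise fall through to PC + 4.
    entry = btb.get(pc)
    return entry["target"] if entry and entry["taken"] else pc + 4

record_branch(400, 420, True)
record_branch(560, 500, False)
print(predicted_next_pc(400))  # 420: predicted taken, use stored target
print(predicted_next_pc(560))  # 564: last not taken, fall through
print(predicted_next_pc(700))  # 704: not in the table, ordinary PC + 4
```

A real BTB is a small tagged memory indexed by low PC bits rather than an unbounded table, but the lookup logic is the same.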

30 Handling Exceptions in a Pipeline
Example:
  40 sub $11, $2, $4
  44 and $12, $2, $5
  48 or  $13, $2, $6
  4C add $1, $2, $1    <- produces an overflow
  50 slt $15, $6, $7
  54 lw  $16, 50($7)
Actions in response to the exception (overflow):
1. Finish the instructions in the MEM and WB stages (and, or).
2. Flush the instructions in the IF and ID stages (slt, lw).
3. Flush the offending instruction (add) from the EX stage, saving the PC value (extra hardware).
4. Fetch the next instruction from the exception handler routine.

31 Advanced Pipelining
- Increase the depth of the pipeline (in the ideal case, speedup is proportional to the number of pipeline stages).
- Start more than one instruction each cycle (multiple issue).
- Loop unrolling to expose more ILP (better scheduling).
- "Superscalar" processors, e.g., the DEC Alpha 21264: 9-stage pipeline, 6-instruction issue.
- All modern processors are superscalar and issue multiple instructions, usually with some limitations (e.g., different "pipes").
- VLIW: very long instruction word, static multiple issue (relies more on compiler technology).

32 Pentium 4 Microarchitecture

33 Pentium 4 Pipeline
- IA-32 instructions are decoded into micro-operations.
- Multiple functional units (7) and multiple issue to achieve high performance.
- Up to 126 "outstanding instructions" at a time.
- Trace cache: saves decoded instructions to eliminate subsequent decoding.
- Branch prediction: tracks 4K branch instructions.
- Deep pipeline: 20 or more stages per instruction.

34 Chapter 6 Summary
Pipelining does not improve latency, but it does improve throughput.

