Computer Architecture 2014– Pipeline 1 Computer Architecture Pipeline By Yoav Etsion & Dan Tsafrir Presentation based on slides by David Patterson, Avi Mendelson, Randy Katz, and Lihu Rappoport

Computer Architecture 2014– Pipeline 2 Pipeline idea: keep everyone busy

Computer Architecture 2014– Pipeline 3 Pipeline: more accurately…
[Figure: sandwich assembly line, one worker per step: expert in cutting bread; expert in placing roast beef; expert in placing tomato and closing the sandwich]
Pipelining elsewhere:
- Unix shell: grep string file | wc -l
- Assembling cars
- Whenever we want to keep functional units busy

Computer Architecture 2014– Pipeline 4 Pipeline: microarchitecture
[Figure: serial ("before") execution of lw R1, 100(R0); lw R2, 200(R0); lw R3, 300(R0); each instruction passes through Inst Fetch, Reg, ALU, Data Access, Reg, taking 8 ns before the next one begins]

Computer Architecture 2014– Pipeline 5 Pipeline: microarchitecture
[Same serial-execution figure, annotated with the stages of lw R1, 100(R0) // R1 = mem[0+100]:]
- fetch
- decode & bring regs to ALU
- 100+R0
- access mem
- write back result to R1

Computer Architecture 2014– Pipeline 6 Pipeline: microarchitecture
[Figure: "before" (serial, 8 ns per instruction) vs. "after" (pipelined, a new instruction starts every 2 ns) for lw R1, 100(R0); lw R2, 200(R0); lw R3, 300(R0)]
- Speed set by slowest component (an individual instruction takes longer in the pipeline)
- First commercial use in 1985
- In Intel chips since the 486 (until then, serial execution)

Computer Architecture 2014– Pipeline 7 Pipeline: microarchitecture
[Same "before"/"after" figure, annotated with the stages of lw R1, 100(R0) // R1 = mem[0+100]: fetch; decode & bring regs to ALU; 100+R0; access mem; write back result to R1]
- Speed set by slowest component (an individual instruction takes longer in the pipeline)
- First commercial use in 1985
- In Intel chips since the 486 (until then, serial execution)

Computer Architecture 2014– Pipeline 8 MIPS  Introduced in 1981 by Hennessy (of “Patterson & Hennessy”)  Idea suggested earlier, e.g., by John Cocke and friends at IBM in the 1970s, but not developed in full  MIPS = Microprocessor without Interlocked Pipeline Stages  RISC  Often used in computer architecture courses  Was very successful (e.g., inspired the Alpha ISA)  Interlocks (“without interlocks”)  Mechanisms to allow stages to indicate they are busy  E.g., ‘divide’ & ‘multiply’ required interlocks => paused other stages upstream  With MIPS, every sub-phase of all instructions fits into 1 cycle  No die area wasted on pausing mechanisms => faster cycle

Computer Architecture 2014– Pipeline 9 Pipeline: principles  Ideal speedup = num of pipeline stages  An instruction finishes every clock cycle  Namely, IPC of an ideal pipelined machine is 1  Increase throughput rather than reduce latency  One instruction still takes the same (or longer)  Since max speedup = num of stages & Latency determined by slowest stage, should:  Partition pipe to many stages  Balance work across stages  Shorten longest stage as much as possible
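The throughput-versus-latency point above can be made concrete with a minimal sketch; all stage times here are hypothetical, chosen to show why unbalanced stages keep the speedup below the number of stages:

```python
def serial_time(n_instr, stage_times):
    # Serial execution: each instruction runs all stages before the next starts
    return n_instr * sum(stage_times)

def pipelined_time(n_instr, stage_times):
    # Pipelined execution: the clock period is set by the slowest stage;
    # after the pipe fills, one instruction completes every cycle
    cycle = max(stage_times)
    return (len(stage_times) + n_instr - 1) * cycle

stages = [2, 1, 2, 2, 1]   # hypothetical ns per stage: IF, ID, EX, MEM, WB
n = 1000
speedup = serial_time(n, stages) / pipelined_time(n, stages)
print(round(speedup, 2))   # ~3.98: stage imbalance keeps us below the ideal 5x
```

Balancing the stages (making every entry equal to max/ideal) is exactly what pushes the ratio toward the stage count.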

Computer Architecture 2014– Pipeline 10 Pipeline: overheads & limitations
- Can increase per-instruction latency, due to stage imbalance
- Requires more logic than serial execution
- Time to “fill” and to “drain” the pipe reduces speedup (e.g., upon interrupt or context switch)
- Stalls when there are dependencies

Computer Architecture 2014– Pipeline 11 Pipelined CPU

Computer Architecture 2014– Pipeline 12 Pipeline: fetch
- Bring the next instruction from memory; 4 bytes (32 bit) per instruction
- Instruction saved in a register, in preparation for the next pipe stage
- When not branching, the next instruction is in the next word

Computer Architecture 2014– Pipeline 13 Pipeline: decode + regs fetch
- Decode source reg numbers; read their values from the reg file
- Reg IDs are 5 bits (2^5 = 32)

Computer Architecture 2014– Pipeline 14 Pipeline: decode + regs fetch decode & sign-extend immediate (from 16 bit to 32)

Computer Architecture 2014– Pipeline 15 Pipeline: decode + regs fetch decode destination reg (can be one of two, depending on op) & save in register for next stage…

Computer Architecture 2014– Pipeline 16 Pipeline: decode + regs fetch decode destination reg (can be one of two, depending on op) & save in latch for next stage… …based on the op type, the next phase will determine which of the two regs is the destination

Computer Architecture 2014– Pipeline 17 Pipeline: execute
ALU computes – “R” operation (the “shift” field is missing from this illustration)
[Figure: reg1 and reg2 feed the ALU; func (6 bit) selects the operation; the result goes to reg3]

Computer Architecture 2014– Pipeline 18 Pipeline: execute ALU computes – “I” operation (not branch & not load/store) reg1 imm opcode to reg2

Computer Architecture 2014– Pipeline 19 Pipeline: execute ALU computes – “I” operation conditional branch BEQ or BNE [ if (reg1==reg2) pc = pc+4 + (imm<<2) ] reg1 imm opcode reg2 Branch?

Computer Architecture 2014– Pipeline 20 Pipeline: execute ALU computes – “I” operation load (store is similar) ( reg2 = mem[reg1+imm] ) reg1 imm to reg2

Computer Architecture 2014– Pipeline 21 Pipeline: updating PC no branch: just add 4 to PC unconditional branch: add immediate to PC+4 (type J operation) conditional branch: depends on result of ALU
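The three next-PC cases above can be written down as a small sketch. The decoded-instruction fields ('type', 'imm', 'target') are illustrative, and the J-type case follows the slide's simplification of adding the immediate to PC+4 (real MIPS J splices the target into the PC's upper bits):

```python
def next_pc(pc, instr, alu_zero):
    """Next-PC selection for the three cases on the slide.
    instr is a simplified decoded form: {'type': 'other' | 'j' | 'beq', ...}."""
    if instr['type'] == 'j':
        # Unconditional branch: add immediate to PC+4 (type J, per the slide)
        return (pc + 4) + (instr['target'] << 2)
    if instr['type'] == 'beq' and alu_zero:
        # Conditional branch: taken only if the ALU result (reg1-reg2) was zero
        return (pc + 4) + (instr['imm'] << 2)
    # No branch (or branch not taken): just add 4 to the PC
    return pc + 4

print(next_pc(4, {'type': 'other'}, False))           # 8
print(next_pc(4, {'type': 'beq', 'imm': 27}, True))   # 8 + 27*4 = 116
print(next_pc(4, {'type': 'beq', 'imm': 27}, False))  # 8
```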

Computer Architecture 2014– Pipeline 22 Pipelined CPU with Control

Computer Architecture 2014– Pipeline 23 Pipeline Example: cycle 1 0 lw R10,9(R1) 4 sub R11,R2,R3 8 and R12,R4,R5 12 or R13,R6,R7

Computer Architecture 2014– Pipeline 24 Pipeline Example: cycle 2 0 lw R10,9(R1) 4 sub R11,R2,R3 8 and R12,R4,R5 12 or R13,R6,R7

Computer Architecture 2014– Pipeline 25 Pipeline Example: cycle 3 0 lw R10,9(R1) 4 sub R11,R2,R3 8 and R12,R4,R5 12 or R13,R6,R7

Computer Architecture 2014– Pipeline 26 Pipeline Example: cycle 4
[Figure: full pipelined datapath (IF/ID, ID/EX, EX/MEM, MEM/WB latches, register file, ALU, data memory, control signals) at cycle 4: or in IF; and in ID, reading [R4] and [R5]; sub in EX, computing [R2]-[R3]; lw in MEM, reading data from memory address [R1]+9]
0 lw R10,9(R1) 4 sub R11,R2,R3 8 and R12,R4,R5 12 or R13,R6,R7

Computer Architecture 2014– Pipeline 27 Pipeline Hazards: 1. Structural Hazards

Computer Architecture 2014– Pipeline 28 Structural Hazard  Two instructions attempt to use same resource simultaneously  Problem: register-file accessed in 2 stages  Write during stage 5 (WB)  Read during stage 2 (ID) => Resource (RF) conflict  Solution  Split stage into two sub-stages  Do write in first half  Do reads in second half  2 read ports, 1 write port (separate)

Computer Architecture 2014– Pipeline 29 Structural Hazard
- Problem: memory accessed in 2 stages
- Fetch (stage 1), when reading instructions from memory
- Memory (stage 4), when data is read/written from/to memory
- A single unified memory is the Princeton architecture
- Solution: split data/instruction memories (Harvard architecture)
- Today, separate instruction cache and data cache

Computer Architecture 2014– Pipeline 30 Pipeline Hazards: 2. Data Hazards

Computer Architecture 2014– Pipeline 31 Data Dependencies
- When two instructions access the same register:
- RAW: Read-After-Write (true dependency)
- WAR: Write-After-Read (anti-dependency)
- WAW: Write-After-Write (false dependency)
- The key problem with regular in-order pipelines is RAW
- We will also learn about out-of-order pipelines
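The three dependency types can be checked mechanically; this is a small illustrative sketch (the register sets in the example are hypothetical):

```python
def classify(first, second):
    """Classify register dependencies between two instructions (first
    precedes second). Each instruction: {'reads': set, 'writes': set}."""
    deps = []
    if first['writes'] & second['reads']:
        deps.append('RAW')   # true dependency: second reads what first wrote
    if first['reads'] & second['writes']:
        deps.append('WAR')   # anti-dependency: second overwrites a source of first
    if first['writes'] & second['writes']:
        deps.append('WAW')   # false dependency: both write the same register
    return deps

# sub R2,R1,R3 followed by and R12,R2,R5: 'and' reads R2, which 'sub' writes
print(classify({'reads': {1, 3}, 'writes': {2}},
               {'reads': {2, 5}, 'writes': {12}}))  # ['RAW']
```

Only RAW forces the in-order pipeline to wait for a value; WAR and WAW matter once instructions may execute out of order.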

Computer Architecture 2014– Pipeline 32 Data Dependencies
- Problem with starting the next instruction before the first is finished
- Dependencies that “go backward in time” are data hazards
[Figure: multi-cycle diagram (CC 1 to CC 9) of sub R2,R1,R3 followed by and R12,R2,R5; or R13,R6,R2; add R14,R2,R2; sw R15,100(R2); the value of R2 (100 before, –20 after) is written only in CC 5, yet the following instructions need it earlier]

Computer Architecture 2014– Pipeline 33 RAW Hazard: HW Solution 1 - Add Stalls
- Let the hardware detect the hazard and add stalls if needed
- Problem: slow!
- Solution: forwarding whenever possible
[Figure: same diagram (CC 1 to CC 9) with bubbles inserted: sub R2,R1,R3; stall; and R12,R2,R5; or R13,R6,R2; add R14,R2,R2; sw R15,100(R2); the dependent instructions are delayed until R2 (100 before, –20 after) is written]

Computer Architecture 2014– Pipeline 34 RAW Hazard: HW Solution 2 - Forwarding
- Use temporary results; don’t wait for them to be written to the register file
- Register file forwarding to handle read/write to the same register
- ALU forwarding
[Figure: same code sequence with forwarding arrows: the EX/MEM and MEM/WB latch values (–20) are fed directly to the ALU inputs of the dependent instructions, so no stalls are needed]

Computer Architecture 2014– Pipeline 35 Forwarding Hardware

Computer Architecture 2014– Pipeline 36 Forwarding Hardware
Added 2 mux units before the ALU. Each mux gets 3 inputs, from:
1. Prev stage (ID/EX)
2. Next stage (EX/MEM)
3. The one after (MEM/WB)
The forwarding unit tells the 2 mux units which input to use

Computer Architecture 2014– Pipeline 37 Forwarding Control
EX Hazard:
A. if (EX/MEM.RegWrite and (EX/MEM.WriteReg = ID/EX.ReadReg1)) then ALUSelA = 1
B. if (EX/MEM.RegWrite and (EX/MEM.WriteReg = ID/EX.ReadReg2)) then ALUSelB = 1
MEM Hazard:
- if (not A and MEM/WB.RegWrite and (MEM/WB.WriteReg = ID/EX.ReadReg1)) then ALUSelA = 2
- if (not B and MEM/WB.RegWrite and (MEM/WB.WriteReg = ID/EX.ReadReg2)) then ALUSelB = 2

Computer Architecture 2014– Pipeline 38 Forwarding Control
EX Hazard:
A. if (EX/MEM.RegWrite and (EX/MEM.WriteReg = ID/EX.ReadReg1)) then ALUSelA = 1
B. if (EX/MEM.RegWrite and (EX/MEM.WriteReg = ID/EX.ReadReg2)) then ALUSelB = 1
MEM Hazard:
- if (not A and MEM/WB.RegWrite and (MEM/WB.WriteReg = ID/EX.ReadReg1)) then ALUSelA = 2
- if (not B and MEM/WB.RegWrite and (MEM/WB.WriteReg = ID/EX.ReadReg2)) then ALUSelB = 2
If, in the memory stage, we’re writing the output to a register, and the reg we’re writing to also happens to be inp_reg1 for the execute stage, then mux_A should select input 1; namely, the ALU should feed itself
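The forwarding priority above (an EX hazard wins over a MEM hazard, because the EX/MEM latch holds the more recent value) can be sketched in executable form; the names mirror the slide's latch fields but are otherwise illustrative:

```python
def forward_select(ex_mem, mem_wb, id_ex_read1, id_ex_read2):
    """Return (selA, selB): 0 = register file, 1 = EX/MEM bypass, 2 = MEM/WB bypass.
    ex_mem / mem_wb are (reg_write, write_reg) tuples from the pipeline latches."""
    selA = selB = 0
    # EX hazard: the most recent producer (still in EX/MEM) has priority
    if ex_mem[0] and ex_mem[1] == id_ex_read1:
        selA = 1
    if ex_mem[0] and ex_mem[1] == id_ex_read2:
        selB = 1
    # MEM hazard: forward from MEM/WB only if EX/MEM didn't already match
    if selA == 0 and mem_wb[0] and mem_wb[1] == id_ex_read1:
        selA = 2
    if selB == 0 and mem_wb[0] and mem_wb[1] == id_ex_read2:
        selB = 2
    return selA, selB

# sub R2,R1,R3 ; and R12,R2,R5 : EX/MEM writes R2, 'and' reads R2 as src1
print(forward_select((True, 2), (False, 0), 2, 5))  # (1, 0)
```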

Computer Architecture 2014– Pipeline 39 Forwarding Hardware Example: Bypassing From EX to Src1 and From WB to Src2 lw R11,9(R1) sub R10,R2, R3 and R12,R10,R11 load op => read from “1”

Computer Architecture 2014– Pipeline 40 Forwarding Hardware Example #2: Bypassing From WB to Src2 sub R10,R2, R3 xxx and R12,R10,R11 not load op => read from “0”

Computer Architecture 2014– Pipeline 41 RF Split => fewer forwarding paths
- Register file is written during the first half of the cycle and read during the second half of the cycle, so a read in the same cycle returns the updated value
- Compiler must cleverly order instructions
- Ineffective if pipeline stages require more than 1 cycle
[Figure: sub R2,R1,R3; xxx; and R12,R2,R11; sub’s write-back and and’s register read share the same cycle]

Computer Architecture 2014– Pipeline 42 Can't always forward (stall inevitable)
- A “load” op can cause “un-forwardable” hazards: load a value to R, then in the next instruction use R as input
- A bigger problem in longer pipelines
[Figure: diagram (CC 1 to CC 9) of lw R2, 30(R1) followed by and R12,R2,R5; or R13,R6,R2; add R14,R2,R2; sw R15,100(R2); the loaded value is available only after MEM, too late for the ALU of the very next instruction]

Computer Architecture 2014– Pipeline 43 Stalling
- De-assert the enable to the ID/EXE latch
- The dependent instruction (and) stays another cycle in the IF/ID latch
- De-assert the enable to the IF/ID latch and to the PC
- Freeze pipeline stages preceding the stalled instruction
- Issue a NOP into the EXE/MEM latch (instead of the stalled inst.)
- Allow the stalling instruction (lw) to move on

Computer Architecture 2014– Pipeline 44 Hazard Detection (Stall) Logic
if ( (ID/EX.RegWrite) and (ID/EX.opcode == lw) and
     ( (ID/EX.WriteReg == IF/ID.ReadReg1) or
       (ID/EX.WriteReg == IF/ID.ReadReg2) ) )
then stall IF/ID
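The load-use check above translates almost line for line into code; the dict fields below stand in for the latch signals and are named only for illustration:

```python
def must_stall(id_ex, if_id):
    """Load-use hazard: the instruction in EX is a load whose destination
    register is a source of the instruction currently in ID.
    id_ex: {'opcode', 'reg_write', 'write_reg'};  if_id: {'read_reg1', 'read_reg2'}"""
    return (id_ex['reg_write']
            and id_ex['opcode'] == 'lw'
            and id_ex['write_reg'] in (if_id['read_reg1'], if_id['read_reg2']))

# lw R2,30(R1) in EX and and R12,R2,R5 in ID: stall one cycle
print(must_stall({'opcode': 'lw', 'reg_write': True, 'write_reg': 2},
                 {'read_reg1': 2, 'read_reg2': 5}))  # True
```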

Computer Architecture 2014– Pipeline 45 Forwarding + Hazard Detection Unit

Computer Architecture 2014– Pipeline 46 Example: code for (assume all variables are in memory): a = b + c; d = e – f;
Slow code:
LW Rb,b
LW Rc,c
(stall)
ADD Ra,Rb,Rc
SW a,Ra
LW Re,e
LW Rf,f
(stall)
SUB Rd,Re,Rf
SW d,Rd
Fast code:
LW Rb,b
LW Rc,c
LW Re,e
ADD Ra,Rb,Rc
LW Rf,f
SW a,Ra
SUB Rd,Re,Rf
SW d,Rd
Instruction order can be changed as long as correctness is kept (no dependencies violated). Compiler scheduling helps avoid load hazards (when possible).

Computer Architecture 2014– Pipeline 47 Pipeline Hazards: 3. Control Hazards

Computer Architecture 2014– Pipeline 48 Branch, but where?  The decision to branch happens deep within the pipeline  Likewise, the target of the branch becomes known deep within the pipeline  How does this affect the pipeline logic?  For example…

Computer Architecture 2014– Pipeline 49 Executing a BEQ Instruction (i) BEQ R4, R5, 27 → if (R4-R5=0) then PC  PC+4+SignExt(27)*4 ; else PC  PC+4 0 or 4 beq R4, R5, 27 8 and 12 sw 16 sub Assume this program state

Computer Architecture 2014– Pipeline 50 Executing a BEQ Instruction (i) BEQ R4, R5, 27 → if (R4-R5=0) then PC  PC+4+SignExt(27)*4 ; else PC  PC+4 0 or 4 beq R4, R5, 27 8 and 12 sw 16 sub We know: Values of registers We don’t know: If branch will be taken What is its target

Computer Architecture 2014– Pipeline 51 Executing a BEQ Instruction (ii) BEQ R4, R5, 27 → if (R4-R5=0) then PC  PC+4+SignExt(27)*4 ; else PC  PC+4 0 or 4 beq R4, R5, 27 8 and 12 sw 16 sub …Now we know, but only in the next cycle will this affect the PC Calculate branch condition = compute R4-R5 & compare to 0 Calculate branch target

Computer Architecture 2014– Pipeline 52 Executing a BEQ Instruction (iii) BEQ R4, R5, 27 → if (R4-R5=0) then PC  PC+4+SignExt(27)*4 ; else PC  PC+4 0 or 4 beq R4, R5, 27 8 and 12 sw 16 sub Finally, if taken, branch sets the PC

Computer Architecture 2014– Pipeline 53 Control Hazard on Branches
[Figure: beq followed by and, sub, sw, and the inst from the target, all overlapping in the pipeline; the branch outcome updates the PC only after the branch’s EX stage]
Outcome: The 3 instructions following the branch are in the pipeline even if the branch is taken!

Computer Architecture 2014– Pipeline 54 Traps, Exceptions and Interrupts
- Indication of events that require a higher authority (i.e., the operating system) to intervene
- Atomically changes the protection mode and branches to the OS
- Protection mode determines what the running program is allowed to do (access devices, memory, etc.)
- Traps: synchronous; the program asks for OS services (e.g. access a device)
- Exceptions: synchronous; the program did something bad (divide-by-zero; prot. violation)
- Interrupts: asynchronous; an external device needs OS attention (finished an operation)
- Can these be handled like regular branches?

Computer Architecture 2014– Pipeline 55 Stall
- Easiest solution: stall the pipe when a branch is encountered, until it is resolved
- But there’s a price. Assume:
- CPI = 1
- 20% of instructions are branches (realistic)
- Stall 3 cycles on every branch (extra 3 cycles for each branch)
- Then the price is:
- CPI_new = 1 + 0.2 × 3 = 1.6 // 1 = all instr., including branches
- [ CPI_new = CPI_ideal + avg. stall cycles / instr. ]
- Namely: we lose 60% of the performance!

Computer Architecture 2014– Pipeline 56 Branch Prediction and Speculative Execution

Computer Architecture 2014– Pipeline 57 Static prediction: branch not taken  Execute instructions from the fall-through (not-taken) path  As if there is no branch  If the branch is not-taken (~50%), no penalty is paid  If branch actually taken  Flush the fall-through path instructions before they change the machine state (memory / registers)  Fetch the instructions from the correct (taken) path  Assuming ~50% branches not taken on average  CPI new = 1 + (0.2 × 0.5) × 3 = 1.3  30% slowdown instead of 60%  What happens in longer pipelines?

Computer Architecture 2014– Pipeline 58 Dynamic branch prediction
- Branches are a key impediment to performance, so modern processors employ complex branch predictors
- Often achieve < 3% misprediction rate
- Given an instruction, we need to predict: Is it a branch? Is the branch taken? What is the target address?
- To avoid stalling, the prediction is needed at the end of ‘fetch’, before we even know what the instruction is…
- A simple mechanism: Branch Target Buffer (BTB)

Computer Architecture 2014– Pipeline 59 BTB – the idea
[Figure: a fast lookup table indexed by the PC of the fetched instruction; each entry holds the Branch PC, the Predicted Target PC, and History (taken or not taken the last few times)]
- No match => we don’t know, so we don’t predict
- Match => the instruction is a branch, so let’s predict it
(Works in a straightforward manner only for direct branches; otherwise the target PC changes)

Computer Architecture 2014– Pipeline 60 How it works in a nutshell  Until proven otherwise, assume branches are not taken  Fall through instructions (assume branch has no effect)  Upon the first time a branch is taken  Pay the price (in terms of stalls), but  Save the details of the branch in the BTB (= PC, target PC, and whether or not branch was taken)  While fetching, HW checks in parallel  Whether PC is in BTB  If found, make a prediction  Taken? Address?  Upon misprediction  Flush (throw out) pipeline content & start over from right PC

Computer Architecture 2014– Pipeline 61 Prediction steps 1. Allocate  Insert instruction to BTB once identified as taken branch  Do not insert not-taken branches  Implicitly predict they’d continue not to be taken  Insert both conditional & unconditional  To identify, and to save arithmetic 2. Predict  BTB lookup done in parallel to PC-lookup, providing:  Indication whether PC is a branch (=> BTB “hit”)  Branch target  Branch direction (forward or backward in program)  Branch type (conditional or not) 3. Update (when branch taken & its outcome becomes known)  Branch target, history (taken or not)
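The allocate/predict/update steps above can be sketched as a toy BTB. This keeps only a single taken/not-taken history bit per entry and uses a plain dict for the table; a real BTB is a set-associative hardware structure with richer history:

```python
class BTB:
    """Toy branch target buffer: allocate on the first taken branch,
    predict taken + target on a hit, update history when the branch resolves."""
    def __init__(self):
        self.table = {}  # branch PC -> {'target': int, 'taken': bool}

    def predict(self, pc):
        # Miss => as far as we know this is not a taken branch: fall through
        entry = self.table.get(pc)
        if entry and entry['taken']:
            return entry['target']
        return pc + 4

    def update(self, pc, taken, target):
        # Allocate only branches that have actually been taken at least once;
        # not-taken branches are implicitly predicted to stay not-taken
        if taken or pc in self.table:
            self.table[pc] = {'target': target, 'taken': taken}

btb = BTB()
print(btb.predict(8))      # 12: miss, predict fall-through
btb.update(8, True, 100)   # branch at PC 8 resolved taken, target 100
print(btb.predict(8))      # 100: hit, predict taken
```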

Computer Architecture 2014– Pipeline 62 Misprediction
- Occurs when: predict = not taken, reality = taken; predict = taken, reality = not taken; or the branch is taken as predicted, but to the wrong target (indirect branches, as in a jump through a register)
- Must flush the pipeline: reset the pipeline registers (similar to turning all in-flight instructions into NOPs); commonly, other flush methods are easier to implement
- Set the PC to the correct path and start fetching instructions from the correct path

Computer Architecture 2014– Pipeline 63 CPI
- Assuming a fraction p of correct predictions: CPI_new = 1 + (0.2 × (1-p)) × 3
- Example, p = 0.7: CPI_new = 1 + (0.2 × 0.3) × 3 = 1.18
- Example, p = 0.98: CPI_new = 1 + (0.2 × 0.02) × 3 = 1.012
- (But this is a simplistic model; in reality the price can sometimes be much higher.)
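The simplistic CPI model used over the last few slides can be written down once and evaluated for every case: always stalling, static not-taken prediction, and a dynamic predictor with accuracy p:

```python
def cpi(branch_frac=0.2, mispredict_frac=1.0, penalty=3, cpi_ideal=1.0):
    # CPI_new = CPI_ideal + avg. stall cycles per instruction
    return cpi_ideal + branch_frac * mispredict_frac * penalty

print(round(cpi(), 3))                       # 1.6   : always stall
print(round(cpi(mispredict_frac=0.5), 3))    # 1.3   : static not-taken, ~50% taken
print(round(cpi(mispredict_frac=0.3), 3))    # 1.18  : predictor with p = 0.7
print(round(cpi(mispredict_frac=0.02), 3))   # 1.012 : predictor with p = 0.98
```

As the slide warns, this assumes a fixed 3-cycle penalty and CPI_ideal = 1; deeper pipelines pay more per misprediction.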

Computer Architecture 2014– Pipeline 64 History & prediction algorithm  “Always backward” prediction  Works for long loops  Some branches exhibit “locality”  Typically behave as the last time they were invoked  Typically depend on their previous outcome (& it alone)  Can save a history window  What happened last time, and before that, and before…  The bigger the window, the greater the complexity  Some branches regularly alternate between taken & untaken  Taken, then untaken, then taken, …  Need only one history bit to identify this  Some branches are correlated with previous branches  Those that lead to them
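The “behave as the last time” idea above can be captured in a few lines; the two synthetic outcome streams show why a single last-outcome bit works well for long loops but fails completely on a regularly alternating branch (which is exactly why the slide notes that one extra history bit can identify alternation):

```python
def last_time_predictor(outcomes):
    """Predict each branch outcome to equal the previous one;
    the first prediction defaults to not-taken. Returns accuracy."""
    correct, last = 0, False
    for actual in outcomes:
        if last == actual:
            correct += 1
        last = actual          # remember the outcome for the next prediction
    return correct / len(outcomes)

loop      = [True] * 9 + [False]   # long loop: taken 9 times, then the exit
alternate = [True, False] * 5      # regularly alternating branch
print(last_time_predictor(loop))       # 0.8: wrong only at entry and exit
print(last_time_predictor(alternate))  # 0.0: always one step behind
```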

Computer Architecture 2014– Pipeline 65 Adding a BTB to the Pipeline

Computer Architecture 2014– Pipeline 66 Using The BTB
[Flowchart, IF stage: the PC moves to the next instruction; the Inst Mem fetches the new inst while the BTB looks up the PC in parallel. On a BTB hit with the branch predicted taken: PC <- predicted addr, and the IF/ID latch is loaded with the predicted-path inst. Otherwise: PC <- PC + 4, and the IF/ID latch is loaded with the sequential inst. The instruction then proceeds to ID/EXE, where it is checked: is it actually a branch?]

Computer Architecture 2014– Pipeline 67 Using The BTB (cont.)
[Flowchart, ID/EXE/MEM/WB: if the instruction is a branch, calculate the branch condition & target, then update the BTB. Correct prediction => continue. Misprediction => flush the pipe & update the PC, and load the IF/ID latch with the correct inst]

Computer Architecture 2014– Pipeline 68 Prediction algorithm  Can do an entire course on this issue  Still actively researched  As noted, modern predictors can often achieve misprediction < 2%  Still, it has been shown that these 2% can sometimes significantly worsen performance  A real problem in out-of-order pipelines  We did not talk about the issue of indirect branches  As in virtual function calls (object oriented)  Where the branch target is written in memory, elsewhere