Pipeline Architecture since 1985

• Last time, we completed the 5-stage pipelined MIPS.
—Processors like this were first shipped around 1985.
—It is still a fundamentally solid design.
• Nevertheless, there have been advances in the past 25 years:
—Deeper pipelines
—Dynamic branch prediction
—Branch target buffers (removing the taken-branch penalty)
—Multiple issue / superscalar
—Out-of-order scheduling
• I will briefly overview these ideas.
—For more complete coverage, see CS433.

Informal Early Feedback, Part I

• Top likes (in order of frequency):
—Presentation style
—Handouts & in-class questions
—Piazza
—Everything
—Discussion sections
—Slides
—SPIMbot
—Appropriate MPs

Informal Early Feedback, Part II

• Top wants (in order of frequency):
—Video-record lectures
—Upload annotated lecture slides / MP solutions faster
—Release slides before lecture
—More (online) practice problems
—More office hours
—More SPIMbot testcases
—Motivate the hardware material
—List corresponding readings

Informal Early Feedback, Part III

• Top gripes (in order of frequency):
—Xspim is janky (use QtSpim)
—EWS machines suck, let us run on our own computers
—Drop handin, use SVN
—Errors in MPs

Recall our equation for execution time

    CPU time_(X,P) = Instructions executed_(P) × CPI_(X,P) × Clock cycle time_(X)

(Hardware design affects the CPI and clock-cycle-time terms.)

• Make things faster by making any component smaller!
• We proposed pipelining to reduce the clock cycle time.
—If some is good, more is better, right?
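As a quick sanity check on the units, here is a tiny worked example in C with made-up numbers (none of these values come from the lecture):

    #include <stdio.h>

    /* Hypothetical workload: 1e9 instructions, average CPI 1.2, 500 MHz clock. */
    int main(void) {
        double instructions = 1e9;
        double cpi          = 1.2;     /* average clock cycles per instruction */
        double cycle_time   = 2e-9;    /* seconds per cycle (500 MHz) */
        printf("CPU time = %.2f s\n", instructions * cpi * cycle_time);  /* 2.40 s */
        return 0;
    }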

“Superpipelining”

[Figure: the MIPS R4000 pipeline]

More Superpipelining

Historical data from Intel’s processors

Pipeline depths and frequency at introduction:

Microprocessor     Year  Clock rate  Pipeline stages
i486               1989     25 MHz    5
Pentium            1993     66 MHz    5
Pentium Pro        1995    200 MHz   10
P4 Willamette      2001   2000 MHz   22
P4 Prescott        2004   3600 MHz   31
Core 2 Conroe      2006   2930 MHz   14
Core 2 Yorkfield   2008   3000 MHz   16
Core i7 Gulftown   2010   3330 MHz   16

What happened?

There is a cost to deep pipelines

Microprocessor     Year  Clock rate  Pipeline stages  Power
i486               1989     25 MHz    5                 5 W
Pentium            1993     66 MHz    5                10 W
Pentium Pro        1995    200 MHz   10                29 W
P4 Willamette      2001   2000 MHz   22                75 W
P4 Prescott        2004   3600 MHz   31               103 W
Core 2 Conroe      2006   2930 MHz   14                75 W
Core 2 Yorkfield   2008   3000 MHz   16                95 W
Core i7 Gulftown   2010   3330 MHz   16               130 W

• Two effects:
—Diminishing returns: pipeline register latency becomes significant
—Negatively impacts CPI (longer stalls, more instructions flushed)

Mitigating CPI loss 1: Dynamic Branch Prediction

• “Predict not-taken” is cheap, but…
—Some branches are almost always taken, like loop back edges.
• It turns out that instructions tend to do the same things over and over again.
—Idea: use past history to predict future behavior.
—First attempt: keep 1 bit per branch that remembers the last outcome.
• What fraction of the time will the highlighted branch mispredict? (See the simulation sketch after this slide.)

    for (int i = 0; i < 1000; i++) {
        for (int j = 0; j < 10; j++) {
            // do something
        }
    }
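To answer that question, here is a minimal C simulation of a 1-bit predictor, assuming the highlighted branch is the inner loop’s back edge (the initial predictor state is my choice):

    #include <stdio.h>

    int main(void) {
        int last = 1;                    /* 1-bit history: last outcome (init: taken) */
        int mispredicts = 0, total = 0;
        for (int i = 0; i < 1000; i++) {
            for (int j = 0; j < 10; j++) {
                int taken = (j < 9);     /* back edge taken except on the final pass */
                if (taken != last) mispredicts++;
                last = taken;            /* remember only the last outcome */
                total++;
            }
        }
        /* Prints 1999/10000: the loop exit is mispredicted, and so is the first
         * iteration after each exit -- about 20% for a 90%-taken branch. */
        printf("%d/%d mispredicted\n", mispredicts, total);
        return 0;
    }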

Two-bit branch prediction

• Solution: add longer-term memory (hysteresis).
• Use a saturating 2-bit counter:
—Increment when the branch is taken
—Decrement when the branch is not taken
—Use the top bit as the prediction
• How often will the branch mispredict on this outcome stream?

    T T T T T T T T T NT T T T T T …
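A minimal C sketch of the 2-bit scheme (the 0–3 encoding and names are mine):

    /* Saturating 2-bit counter: 0,1 predict not-taken; 2,3 predict taken. */
    int predict(int c)           { return c >= 2; }            /* top bit */
    int update(int c, int taken) {
        if (taken) return c < 3 ? c + 1 : 3;   /* saturate at strongly taken */
        else       return c > 0 ? c - 1 : 0;   /* saturate at strongly not-taken */
    }

On the stream above, a counter saturated at 3 mispredicts only the single NT (dropping to 2, which still predicts taken), so the inner-loop branch costs one misprediction per 10 executions instead of the two a 1-bit scheme pays.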

Branch prediction tables

• Too expensive to keep 2 bits per branch in the program.
• Instead, keep a fixed-size table of 2-bit counters in the processor.
• “Hash” the program counter (PC) to construct an index:
—Index = (PC >> 2) ^ (PC >> 12)
• Multiple branches will map to the same entry (interference).
—But generally not at the same time: programs tend to have working sets.
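A C sketch of such a table using the slide’s index formula (the entry count is an assumption; the original number was lost in transcription):

    #define PRED_ENTRIES 4096                 /* assumed; must be a power of two */
    static unsigned char pred[PRED_ENTRIES];  /* one 2-bit counter per entry */

    static unsigned pred_index(unsigned pc) {
        return ((pc >> 2) ^ (pc >> 12)) & (PRED_ENTRIES - 1);
    }

    /* Predict with the top bit; train the entry with update() from the
     * previous sketch after the branch resolves. */
    int predict_taken(unsigned pc) { return pred[pred_index(pc)] >= 2; }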

When to predict branches?

• Need:
—The PC (to access the predictor)
—To know it is a branch (must have decoded the instruction)
—The branch target (computed from the instruction bits)
• How many flushes on a not-taken prediction?
• How many flushes on a taken prediction?
• Is this the best we can do?

Mitigating CPI loss 1: Branch Target Buffers

• Need:
—The PC (we already have it at fetch)
—To know it is a branch (can be remembered and made available at fetch)
—The branch target (likewise remembered and made available at fetch)
• Create a table: the Branch Target Buffer (BTB).
—Allocate an entry whenever a branch is taken (and not already present)
—Each entry holds: PC | 2-bit counter | target
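A hypothetical C sketch of a BTB lookup at fetch (the entry count and direct-mapped indexing are my assumptions):

    #define BTB_ENTRIES 1024
    struct btb_entry {
        unsigned pc;        /* address of the branch itself */
        unsigned counter;   /* 2-bit prediction counter (0..3) */
        unsigned target;    /* branch target address */
    };
    static struct btb_entry btb[BTB_ENTRIES];

    /* At fetch: on a tag match with a taken prediction, steer fetch to the
     * stored target instead of PC + 4 -- no decode needed. */
    unsigned next_fetch_pc(unsigned pc) {
        struct btb_entry *e = &btb[(pc >> 2) & (BTB_ENTRIES - 1)];
        return (e->pc == pc && e->counter >= 2) ? e->target : pc + 4;
    }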

BTB accessed in parallel with reading the instruction

• If a matching entry is found, and…
• …the 2-bit counter predicts taken:
—Redirect fetch to the branch target
—Instead of PC+4
• What is the taken-branch penalty?
—(i.e., how many flushes on a predicted-taken branch?)

[Figure: fetch-stage datapath. The BTB is read with the PC in parallel with instruction memory; on “Match & Taken,” a mux selects the target PC instead of PC+4, with misprediction repair driven from the EX stage.]

Back to our equation for execution time

    CPU time_(X,P) = Instructions executed_(P) × CPI_(X,P) × Clock cycle time_(X)

• Removing stalls & flushes can bring CPI down to 1.
—Can we bring it lower?

Multiple Issue

Issue width over time

Microprocessor     Year  Clock rate  Pipeline stages  Issue width
i486               1989     25 MHz    5               1
Pentium            1993     66 MHz    5               2
Pentium Pro        1995    200 MHz   10               3
P4 Willamette      2001   2000 MHz   22               3
P4 Prescott        2004   3600 MHz   31               3
Core 2 Conroe      2006   2930 MHz   14               4
Core 2 Yorkfield   2008   3000 MHz   16               4
Core i7 Gulftown   2010   3330 MHz   16               4

Static Multiple Issue

• The compiler groups instructions into issue packets.
—A group of instructions that can be issued on a single cycle
—Determined by the pipeline resources required
• Think of an issue packet as a very long instruction.
—Specifies multiple concurrent operations
• The compiler must remove some/all hazards:
—Reorder instructions into issue packets
—No dependencies within a packet
—Pad with nop if necessary

Example: MIPS with Static Dual Issue

• Dual-issue packets:
—One ALU/branch instruction
—One load/store instruction
—64-bit aligned: ALU/branch first, then load/store
—Pad an unused slot with nop

Address   Instruction type   Pipeline stages
n         ALU/branch         IF  ID  EX  MEM WB
n + 4     Load/store         IF  ID  EX  MEM WB
n + 8     ALU/branch             IF  ID  EX  MEM WB
n + 12    Load/store             IF  ID  EX  MEM WB
n + 16    ALU/branch                 IF  ID  EX  MEM WB
n + 20    Load/store                 IF  ID  EX  MEM WB

Hazards in the Dual-Issue MIPS

• More instructions executing in parallel.
• EX data hazard:
—Forwarding avoided stalls with single-issue.
—Now we can’t use an ALU result in a load/store in the same packet:

    add $t0, $s0, $s1
    lw  $s2, 0($t0)

—Split into two packets, effectively a stall.
• Load-use hazard:
—Still one cycle of use latency, but it now delays two instructions (a whole packet).
• More aggressive scheduling is required.

Scheduling Example

• Schedule this for dual-issue MIPS:

    Loop: lw   $t0, 0($s1)       # $t0 = array element
          addu $t0, $t0, $s2     # add scalar in $s2
          sw   $t0, 0($s1)       # store result
          addi $s1, $s1, -4      # decrement pointer
          bne  $s1, $zero, Loop  # branch if $s1 != 0

          ALU/branch               Load/store         Cycle
    Loop: nop                      lw  $t0, 0($s1)      1
          addi $s1, $s1, -4        nop                  2
          addu $t0, $t0, $s2       nop                  3
          bne  $s1, $zero, Loop    sw  $t0, 4($s1)      4

• IPC = 5/4 = 1.25 (cf. peak IPC = 2)

Loop Unrolling

• Replicate the loop body to expose more parallelism. (See the C sketch after this slide.)
—Reduces loop-control overhead
• Use different registers per replication.
—Called register renaming
—Avoids loop-carried anti-dependencies: a store followed by a load of the same register
—Aka “name dependence”: reuse of a register name
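A C-level rendering of what unrolling does to the loop from the scheduling example (function and variable names are mine; the slides show only the assembly):

    /* Original: add a scalar s to each word, walking the pointer down one
     * word (addi $s1, $s1, -4) per iteration. */
    void add_scalar(int *p, int *end, int s) {
        while (p != end) {
            *p += s;          /* lw / addu / sw */
            p -= 1;
        }
    }

    /* Unrolled 4x: four independent load/add/store groups per iteration and
     * one loop-control check (assumes the element count is a multiple of 4). */
    void add_scalar_x4(int *p, int *end, int s) {
        while (p != end) {
            p[0]  += s;
            p[-1] += s;
            p[-2] += s;
            p[-3] += s;
            p -= 4;           /* addi $s1, $s1, -16 */
        }
    }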

Loop Unrolling Example

• IPC = 14/8 = 1.75
—Closer to 2, but at the cost of registers and code size

          ALU/branch               Load/store         Cycle
    Loop: addi $s1, $s1, -16       lw  $t0, 0($s1)      1
          nop                      lw  $t1, 12($s1)     2
          addu $t0, $t0, $s2       lw  $t2, 8($s1)      3
          addu $t1, $t1, $s2       lw  $t3, 4($s1)      4
          addu $t2, $t2, $s2       sw  $t0, 16($s1)     5
          addu $t3, $t3, $s2       sw  $t1, 12($s1)     6
          nop                      sw  $t2, 8($s1)      7
          bne  $s1, $zero, Loop    sw  $t3, 4($s1)      8

Dynamic Multiple Issue = Superscalar

• The CPU decides whether to issue 0, 1, 2, … instructions each cycle.
—While avoiding structural and data hazards
• Avoids the need for compiler scheduling.
—Though it may still help
—Code semantics are ensured by the CPU, by stalling appropriately
• Limited benefit without compiler support:
—Adjacent instructions are often dependent

Out-of-order Execution (Dynamic Scheduling)

• Allow the CPU to execute instructions out of order to avoid stalls.
—But commit results to registers in order.
• Example:

    lw   $t0, 20($s2)
    add  $t1, $t0, $t2
    sub  $s4, $s4, $t3
    slti $t5, $s4, 20

—Can start sub while add is waiting for lw.
• Why not just let the compiler schedule code?

Out-of-order Scheduling

• Allow the CPU to execute instructions out of order to avoid stalls.
—But commit results to registers in order.
• Example:

    lw   $t0, 20($s2)
    add  $t1, $t0, $t2
    sub  $s4, $s4, $t3
    slti $t5, $s4, 20

—Can start sub while add is waiting for lw.
• Why not just let the compiler schedule code?
—Not all stalls are predictable (e.g., we’ll see shortly that memory has variable latency).
—Can’t always schedule around branches: the branch outcome is determined dynamically (i.e., predicted).
—Different implementations have different latencies and hazards.

Implementing Out-of-Order Execution

Basically, unroll loops in hardware:
1. Fetch instructions in program order (≤ 4/clock)
2. Predict branches as taken/not-taken
3. To avoid hazards on registers, rename registers using a set of internal registers (~80 registers); see the sketch below
4. Collect renamed instructions in an execution window (~60 instructions)
5. Execute instructions with ready operands in one of multiple functional units (ALUs, FPUs, Ld/St)
6. Buffer results of executed instructions in a reorder buffer until predicted branches are resolved
7. If a branch was predicted correctly, commit the results in program order
8. If a branch was predicted incorrectly, discard all dependent results and restart fetch at the correct PC
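To make step 3 concrete, here is a toy register-rename table in C (entirely my sketch; free-list recycling and misprediction recovery are omitted):

    #define ARCH_REGS 32
    #define PHYS_REGS 80                 /* "~80 registers", per step 3 */

    static int rename_map[ARCH_REGS];    /* arch reg -> newest phys reg
                                          * (assume reset sets rename_map[i] = i) */
    static int next_free = ARCH_REGS;    /* naive sequential allocator */

    int rename_src(int arch) { return rename_map[arch]; }
    int rename_dst(int arch) { return rename_map[arch] = next_free++; }

Because every write gets a fresh physical register, WAR and WAW hazards disappear; only true (RAW) dependencies constrain the schedule.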

Dynamically Scheduled CPU

[Figure: block diagram of a dynamically scheduled CPU. Callouts: reservation stations hold pending operands; results are also sent to any waiting reservation stations; the reorder buffer for register writes can supply operands for issued instructions; in-order commit preserves dependencies.]

Takeaway points

• The 5-stage pipeline is not a bad mental model for software developers:
—Integer arithmetic is cheap.
—Loads can be relatively expensive, especially if there is no other work to be done (e.g., linked-list traversals). We’ll explain why, starting on Friday.
—Branches can be relatively expensive, but primarily if they are not predictable.
• In addition, try to avoid long serial dependences. Given double D[10],

    ((D[0] + D[1]) + (D[2] + D[3])) + ((D[4] + D[5]) + (D[6] + D[7]))

is faster than:

    (((((((D[0] + D[1]) + D[2]) + D[3]) + D[4]) + D[5]) + D[6]) + D[7])

• There is phenomenal engineering in modern processors.
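As compilable C, the two orders look like this (function names and signatures are mine; only the expression shapes come from the slide):

    /* Tree-shaped: the inner adds are independent and can overlap in a
     * pipelined floating-point unit. */
    double sum_tree(const double *D) {
        return ((D[0] + D[1]) + (D[2] + D[3])) + ((D[4] + D[5]) + (D[6] + D[7]));
    }

    /* Serial: every add waits on the previous one -- a 7-add dependence chain. */
    double sum_serial(const double *D) {
        return (((((((D[0] + D[1]) + D[2]) + D[3]) + D[4]) + D[5]) + D[6]) + D[7]);
    }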