
EECS 470 Control Hazards and ILP, Lecture 3 – Winter 2015. Slides developed in part by Profs. Austin, Brehob, Falsafi, Hill, Hoe, Lipasti, Martin, Roth, Shen, Smith, Sohi, Tyson, Vijaykumar, and Wenisch of Carnegie Mellon University, Purdue University, University of Michigan, University of Pennsylvania, and University of Wisconsin.

Announcements
HW1 due today.
Project 1 due.
–Note: you can submit multiple times; only the last submission counts.
HW2 posted later today.

Teaching Staff
Instructor: Dr. Mark Brehob, brehob
–Office hours (4632 Beyster unless noted):
 Tuesday 2:30-4:30
 Wednesday 9-11am
 Friday 1:00-3:00 in 2431 EECS (Engin 100/EECS 270 lab)
–I'll also be around after class for a few minutes most days.
GSIs/IA: Akshay Moorthy, moorthya; Steve Zekany, szekany
Office hours:
–Monday: 3 - 4:30 (Steve) in 1695 BBB
–Tuesday: 10:30am - 12 (Steve) in 1695 BBB
–Wednesday: 3 - 4:30 (Steve) in 1695 BBB
–Thursday: 6 - 7 (Steve) and 6 - 7:30 (Akshay) in 1620 BBB
–Friday: 12:30 - 1:30 (Steve) and 3: (Akshay), both in 1620 BBB
–Saturday: 4 - 5:30 (Akshay) in 1620 BBB
–Sunday: 4 - 5:30 (Akshay) in 1620 BBB
These times and locations may change slightly during the first week of class, especially if the rooms are busy or reserved for other classes.
Reminder! Don't forget Piazza. If you don't have access, let us know ASAP.

Today
–Review and finish up other options
–Costs and Power
–Instruction Level Parallelism (ILP) and Dynamic Execution

Three approaches to handling data hazards
Avoidance
–Make sure there are no hazards in the code.
Detect and Stall
–If hazards exist, stall the processor until they go away (see the sketch below).
Detect and Forward
–If hazards exist, fix up the pipeline to get the correct value (if possible).
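To make "detect and stall" concrete, here is a minimal Python sketch (mine, not the lecture's hardware) of the comparison a hazard-detection unit performs: the sources of the instruction in decode are checked against the destinations of instructions still in flight. The function and register names are hypothetical.

    def must_stall(decode_srcs, inflight_dests):
        """decode_srcs: source registers of the instruction in decode;
        inflight_dests: destination registers of instructions in EX/MEM/WB
        whose results are not yet available (or not forwardable)."""
        return any(src in inflight_dests for src in decode_srcs)

    # add r1<-r2,r3 is still in EX when sub r4<-r1,r5 reaches decode: RAW, so stall.
    print(must_stall({"r1", "r5"}, {"r1"}))   # -> True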

Handling Control Hazards
Avoidance (static)
–No branches?
–Convert branches to predication (see the sketch below): the control dependence becomes a data dependence.
Detect and Stall (dynamic)
–Stop fetch until the branch resolves.
Speculate and Squash (dynamic)
–Keep going past the branch; throw away the instructions if wrong.
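As a rough Python illustration (not from the slides) of the predication bullet: in the branchy form, whether r3 is written depends on control flow; in the predicated form, r3 is always written and the condition is just another data input. All values are made up.

    a, b, r3, r4, r5 = 1, 1, 0, 2, 3   # made-up values

    # Branchy form: the write to r3 is control-dependent on the comparison.
    if a == b:
        r3 = r4 + r5

    # Predicated form: r3 is always written; the predicate selects the value,
    # so the control dependence has become a data dependence on p.
    p = (a == b)
    r3 = (r4 + r5) if p else r3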

Branch Delay Slot (MIPS, SPARC)

  i: beq 1, 2, tgt
  j: add 3, 4, 5    ← the delay slot: what can we put here?

Squash: the instruction in the delay slot executes even on a taken branch.
[Cycle-by-cycle F/D/E/M/W diagrams for the branch, delay-slot, next, and target instructions not reproduced.]

Improving pipeline performance
–Add more stages
–Widen the pipeline

Adding pipeline stages
–Pipeline front end: Fetch, Decode
–Pipeline middle: Execute
–Pipeline back end: Memory, Writeback

Adding stages to fetch and decode
–Delays hazard detection
–No change in forwarding paths
–No performance penalty with respect to data hazards

Adding stages to execute
Check for structural hazards:
–ALU not pipelined
–Multiple ALU ops completing at the same time
Data hazards may cause delays:
–If a multicycle op hasn't computed its result before the dependent instruction is ready to execute
Performance penalty for each stall.

Adding stages to memory and writeback
–Instructions ready to execute may need to wait longer for the multi-cycle memory stage
–Adds more pipeline registers, and thus more source registers to forward from
–More complex hazard detection: wider muxes and more control bits to manage them

Wider pipelines

  fetch | decode | execute | mem | WB
  fetch | decode | execute | mem | WB

More complex hazard detection:
–2X pipeline registers to forward from
–2X more instructions to check
–2X more destinations (muxes)
–Need to worry about dependent instructions in the same stage (see the sketch below)
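A minimal Python sketch (mine, not from the slides) of the extra same-cycle check a 2-wide pipeline needs: if the second instruction reads the first's destination, the pair cannot issue together. The representation of instructions as (dest, sources) tuples is an assumption for illustration.

    def can_dual_issue(insn0, insn1):
        """Each insn is (dest, [sources]); insn0 is the older instruction."""
        return insn0[0] not in insn1[1]   # RAW within the pair forces a split

    print(can_dual_issue(("r1", ["r2", "r3"]), ("r4", ["r5", "r6"])))  # True
    print(can_dual_issue(("r1", ["r2", "r3"]), ("r4", ["r1", "r5"])))  # False: RAW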

Making forwarding explicit

  add r1 ← r2, EX/Mem ALU result

–Include direct mux controls in the ISA
–Hazard detection is now a compiler task
–A new microarchitecture leads to a new ISA; is this why this approach always seems to fail? (e.g., simple VLIW, Motorola 88k)
–Can reduce some resources: eliminates complex conflict checkers

Today
–Review and finish up other options
–Costs and Power
–Instruction Level Parallelism (ILP) and Dynamic Execution

Digital System Cost
Cost is also a key design constraint:
–Architecture is about trade-offs, and cost plays a major role
There is a huge difference between cost and price, e.g.:
–Higher price → lower volume → higher cost → higher price
–Direct cost
–List vs. selling price
Price also depends on the customer:
–College student vs. US Government
Cost spectrum: supercomputers ($$$$$) > servers > desktops > portables > embedded ($)

Direct Cost
Cost distribution for a personal computer:
–Processor board: 37% (CPU, memory, …)
–I/O devices: 37% (hard disk, DVD, monitor, …)
–Software: 20%
–Tower/cabinet: 6%
Integrated systems account for a substantial fraction of cost.

IC Cost Equation

  IC cost   = (Die cost + Test cost + Packaging cost) / Final test yield
  Die cost  = Wafer cost / (Dies per wafer × Die yield)
  Die yield = f(defect density, die area)
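For a feel of the numbers, here is a small Python sketch plugging one common choice for f(defect density, die area) — the negative-binomial yield model — into the equations above. The wafer cost, defect density, and alpha below are made-up illustration values, not figures from the lecture.

    import math

    def dies_per_wafer(wafer_diam_cm, die_area_cm2):
        # Usable dies: wafer area over die area, minus loss around the edge.
        return (math.pi * (wafer_diam_cm / 2) ** 2) / die_area_cm2 \
               - (math.pi * wafer_diam_cm) / math.sqrt(2 * die_area_cm2)

    def die_yield(defects_per_cm2, die_area_cm2, alpha=4.0):
        # One common form of f(defect density, die area).
        return (1 + defects_per_cm2 * die_area_cm2 / alpha) ** (-alpha)

    wafer_cost = 5000.0                   # assumed
    area = 1.0                            # die area in cm^2, assumed
    n = dies_per_wafer(30, area)          # 300 mm wafer
    y = die_yield(0.4, area)              # 0.4 defects/cm^2, assumed
    print(f"die cost ~ ${wafer_cost / (n * y):.2f}")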

Why is power a problem in a μP?
Power used by the μP vs. system power.
Dissipating heat:
–Melting (very bad)
–Packaging (to cool → $)
–Heat leads to poorer performance
Providing power:
–Battery
–Cost of electricity

Where does the juice go in laptops?
Others have measured ~55% processor increase under max load in laptops.

Why worry about power dissipation?
–Environment
–Thermal issues: affect cooling, packaging, reliability, timing
–Battery life

Power-Aware Computing Applications
–Energy-constrained computing
–Temperature/current-constrained computing

Today
–Review and finish up other options
–Costs and Power
–Instruction Level Parallelism (ILP) and Dynamic Execution

More Performance Optimization: Exploiting Instruction-Level Parallelism
–Exploiting ILP: basics
–Measuring ILP
–Dynamic execution

Limitations of Scalar Pipelines
Upper bound on scalar pipeline throughput:
–Limited by IPC = 1, the "Flynn bottleneck"
Performance lost due to the rigid in-order pipeline:
–Unnecessary stalls

Terms
Instruction parallelism
–The number of instructions being worked on.
Operation latency
–The time (in cycles) until the result of an instruction is available for use as an operand in a subsequent instruction. For example, if the result of an add can be used as an operand of an instruction issued in the cycle after the add issues, we say the add has an operation latency of one.
Peak IPC
–The maximum sustainable number of instructions that can be executed per clock.
(From Performance Modeling for Computer Architects, C. M. Krishna.)

Architectures for Exploiting Instruction-Level Parallelism

Superscalar Machine

What is the real problem?
The CPI of in-order pipelines degrades very sharply if the machine parallelism is increased beyond a certain point, i.e., when N×M approaches the average distance between dependent instructions:
–Forwarding is no longer effective
–The pipeline may never be full due to frequent dependency stalls!

Missed Speedup in In-Order Pipelines

  addf f0,f1,f2   F D E+ … W
  mulf f2,f3,f2   F D d* E* … W
  subf f0,f1,f4   F p* D E+ … W

(d* = stall in decode, p* = stall in fetch; the exact cycle alignment of the original diagram is not reproduced.)
What's happening in cycle 4?
–mulf stalls due to a RAW hazard: OK, this is a fundamental problem
–subf stalls due to a pipeline hazard: why? subf can't proceed into D because mulf is there
That is the only reason, and it isn't a fundamental one. Why can't subf go into D in cycle 4 and E+ in cycle 5?

The Problem With In-Order Pipelines
In-order pipeline:
–Structural hazard: one instruction register (latch) per stage
–One instruction per stage per cycle (unless the pipeline is replicated)
–A younger instruction can't "pass" an older instruction without "clobbering" it
Out-of-order pipeline:
–Implements the "passing" functionality by removing the structural hazard

New Pipeline Terminology
In-order pipeline:
–Often written as F, D, X, W (multi-cycle X includes M)
–Variable latency: 1-cycle integer (including mem), 3-cycle pipelined FP

ILP: Instruction-Level Parallelism
ILP is a measure of the amount of inter-dependencies between instructions.

  Average ILP = (number of instructions) / (number of cycles required)

  code1:  r1 ← r2 + 1        code2:  r1 ← r2 + 1
          r3 ← r1 / 17               r3 ← r9 / 17
          r4 ← r0 - r3               r4 ← r0 - r10

code1: ILP = 1, i.e., must execute serially.
code2: ILP = 3, i.e., all three can execute at the same time.
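The two fragments can be checked mechanically. Below is a minimal Python sketch (mine, not the lecture's) that computes average ILP as instructions/cycles, assuming every instruction has a one-cycle operation latency, resources are unlimited, and false (WAR/WAW) dependencies are ignored, i.e., perfect renaming.

    def average_ilp(insns):
        """insns: ordered list of (dest, [source regs]) register transfers."""
        ready = {}                 # register -> earliest cycle its value is ready
        cycles = 0
        for dest, srcs in insns:
            issue = max((ready.get(r, 0) for r in srcs), default=0)  # RAW deps only
            ready[dest] = issue + 1          # result ready one cycle after issue
            cycles = max(cycles, issue + 1)
        return len(insns) / cycles

    code1 = [("r1", ["r2"]), ("r3", ["r1"]), ("r4", ["r0", "r3"])]
    code2 = [("r1", ["r2"]), ("r3", ["r9"]), ("r4", ["r0", "r10"])]
    print(average_ilp(code1), average_ilp(code2))   # -> 1.0 3.0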

Purported Limits on ILP

  Weiss and Smith [1984]       1.58
  Sohi and Vajapeyam [1987]    1.81
  Tjaden and Flynn [1970]      1.86
  Tjaden and Flynn [1973]      1.96
  Uht [1986]                   2.00
  Smith et al. [1989]          2.00
  Jouppi and Wall [1988]       2.40
  Johnson [1991]               2.50
  Acosta et al. [1986]         2.79
  Wedig [1982]                 3.00
  Butler et al. [1991]         5.8
  Melvin and Patt [1991]       6
  Wall [1991]                  7
  Kuck et al. [1972]           8
  Riseman and Foster [1972]    51
  Nicolau and Fisher [1984]    90

Scope of ILP Analysis

  r1 ← r2 + 1       ILP = 1 within this block
  r3 ← r1 / 17
  r4 ← r0 - r3

  r11 ← r…          ILP = 1 within this block
  r13 ← r11 / 17
  r14 ← r13 - r20

Considered together, ILP = 2: out-of-order execution exposes more ILP.

How Large Must the "Window" Be?

Dynamic Scheduling – OoO Execution
Dynamic scheduling:
–Done totally in hardware
–Also called "out-of-order execution" (OoO)
Fetch many instructions into an instruction window:
–Use branch prediction to speculate past (multiple) branches
–Flush the pipeline on a branch misprediction
Rename registers to avoid false dependencies (WAW and WAR).
Execute instructions as soon as possible:
–Register dependencies are known
–Handling memory dependencies is trickier (much more later)
Commit instructions in order:
–If anything strange happens before commit, just flush the pipeline
Current machines: 100+ instruction scheduling windows.

Motivation for Dynamic Scheduling
Dynamic scheduling (out-of-order execution):
–Execute instructions in non-sequential order…
 +Reduces RAW stalls
 +Increases pipeline and functional unit (FU) utilization (the original motivation was to increase FP unit utilization)
 +Exposes more opportunities for parallel issue (ILP): not in-order → can be in parallel
–…but makes it appear like sequential execution
Important, but difficult: the next few lectures.

Dynamic Scheduling: The Big Picture

  add p2,p3,p4
  sub p2,p4,p5
  mul p2,p5,p6
  div p4,4,p7

[Figure: fetch/branch-prediction front end feeding an instruction buffer, with a ready table (P2-P7) tracking which physical registers have values.]
Instructions are fetched/decoded/renamed into the instruction buffer:
–Also called the "instruction window" or "instruction scheduler"
Instructions (conceptually) check their ready bits every cycle:
–Execute when ready (a simulation sketch follows)
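The ready-bit idea can be simulated in a few lines. This Python sketch (an idealization of the picture above, not the lecture's hardware) issues any buffered instruction whose source physical registers are ready; note that div issues before mul, i.e., out of order.

    insns = [("add", ["p2", "p3"], "p4"),
             ("sub", ["p2", "p4"], "p5"),
             ("mul", ["p2", "p5"], "p6"),
             ("div", ["p4"], "p7")]       # div p4,4,p7: the literal 4 is always ready
    ready = {"p2": True, "p3": True}      # values already in the register file
    window = list(insns)
    cycle = 0
    while window:
        cycle += 1
        # Select every instruction whose sources are all ready this cycle.
        issued = [i for i in window if all(ready.get(s, False) for s in i[1])]
        for insn in issued:
            print(f"cycle {cycle}: issue {insn[0]} -> {insn[2]}")
            window.remove(insn)
        for _, _, dest in issued:
            ready[dest] = True            # results visible starting next cycle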

Going Forward: What's Next
We'll build this up in steps over the next few weeks:
–Register renaming to eliminate "false" dependencies
–"Tomasulo's algorithm" to implement OoO execution
–Handling precise state and speculation
–Handling memory dependencies
Let's get started!

Dependency vs. Hazard
A dependency exists independent of the hardware:
–So if Inst #1's result is needed by Inst #1000, there is a dependency
–It is only a hazard if the hardware has to deal with it
So in our pipelined machine, we only worried if there wasn't a "buffer" of two instructions between the dependent instructions.
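A minimal Python sketch (mine, not from the slides) of that last point: a dependency only becomes a hazard when the dependent instructions are close enough that the hardware must act, here within two instructions of each other. The buffer size of two matches our pipeline from earlier lectures; the function name is hypothetical.

    def is_hazard(writer_idx, reader_idx, buffer_needed=2):
        """True when the reader is too close behind the writer for the
        pipeline to supply the value without special handling."""
        return (reader_idx - writer_idx) <= buffer_needed

    print(is_hazard(0, 1))     # True: back-to-back dependent instructions
    print(is_hazard(0, 999))   # False: a dependency, but never a hazard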

True Data Dependencies
True data dependency: RAW (read after write)

  R1=R2+R3
  R4=R1+R5

True dependencies prevent reordering:
–(Mostly) unavoidable

False Data Dependencies
False, or name, dependencies:
–WAW (write after write):

  R1=R2+R3
  R1=R4+R5

–WAR (write after read):

  R2=R1+R3
  R1=R4+R5

False dependencies prevent reordering:
–Can they be eliminated? (Yes, with renaming!)

Data Dependency Graph: Simple Example

  R1=MEM[R2+0]   // A
  R2=R2+4        // B
  R3=R1+R4       // C
  MEM[R2+0]=R3   // D

[Dependence graph with RAW/WAW/WAR edges not reproduced; see the sketch below.]
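The edges can be enumerated mechanically. Here is a minimal Python sketch (mine, not the slide's figure) that classifies every register dependence in the four-instruction example; memory dependences through MEM[R2+0] are deliberately ignored.

    insns = {"A": ({"R2"}, "R1"),         # R1=MEM[R2+0]   (reads, writes)
             "B": ({"R2"}, "R2"),         # R2=R2+4
             "C": ({"R1", "R4"}, "R3"),   # R3=R1+R4
             "D": ({"R2", "R3"}, None)}   # MEM[R2+0]=R3 writes memory, no register
    order = "ABCD"
    for i, x in enumerate(order):
        for y in order[i + 1:]:
            (rx, wx), (ry, wy) = insns[x], insns[y]
            if wx and wx in ry: print(f"{x}->{y}: RAW on {wx}")
            if wx and wx == wy: print(f"{x}->{y}: WAW on {wx}")
            if wy and wy in rx: print(f"{x}->{y}: WAR on {wy}")
    # -> A->B WAR on R2, A->C RAW on R1, B->D RAW on R2, C->D RAW on R3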

Data Dependency Graph: More Complex Example

  R1=MEM[R3+4]    // A
  R2=MEM[R3+8]    // B
  R1=R1*R2        // C
  MEM[R3+4]=R1    // D
  MEM[R3+8]=R1    // E
  R1=MEM[R3+12]   // F
  R2=MEM[R3+16]   // G
  R1=R1*R2        // H
  MEM[R3+12]=R1   // I
  MEM[R3+16]=R1   // J

[Dependence graph with RAW/WAW/WAR edges not reproduced.]

Eliminating False Dependencies
Looking at the same code (A-J above): logically, there is no reason for F-J to be dependent on A-E, so this schedule should be possible:

  A B F G
  C H
  D E I J

But that would cause either C or H to have the wrong register inputs. How do we fix this?
–Remember, the dependency is really on the name of the register
–So… change the register names!

Register Renaming
Concept:
–The register names are arbitrary
–A register name only needs to be consistent between writes

  R1 = …
  …  = R1
  …  = … R1
  R1 = …

The value in R1 is "alive" from when the value is written until the last read of that value.

So after renaming, what happens to the dependencies?

  P1=MEM[R3+4]    // A
  P2=MEM[R3+8]    // B
  P3=P1*P2        // C
  MEM[R3+4]=P3    // D
  MEM[R3+8]=P3    // E
  P4=MEM[R3+12]   // F
  P5=MEM[R3+16]   // G
  P6=P4*P5        // H
  MEM[R3+12]=P6   // I
  MEM[R3+16]=P6   // J

[Dependence graph not reproduced: only the RAW edges remain; the WAW and WAR edges are gone.]

Register Renaming Approach
Every time an architected register is written, we assign it to a physical register:
–Until the architected register is written again, we continue to translate it to that physical register number
–Leaves RAW dependencies intact
It is really simple; let's look at an example:
–Names: r1, r2, r3
–Locations: p1, p2, p3, p4, p5, p6, p7
–Original mapping: r1→p1, r2→p2, r3→p3; p4-p7 are "free"

  Map table       Free list     Orig. insns     Renamed insns
  r1  r2  r3
  p1  p2  p3      p4,p5,p6,p7   add r2,r3,r1    add p2,p3,p4
  p4  p2  p3      p5,p6,p7      sub r2,r1,r3    sub p2,p4,p5
  p4  p2  p5      p6,p7         mul r2,r3,r3    mul p2,p5,p6
  p4  p2  p6      p7            div r1,4,r1     div p4,4,p7
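The table above is exactly what a small rename loop produces. Here is a minimal Python sketch (mine, not the lecture's hardware) of the map-table/free-list algorithm; instructions are written "op src1,src2,dest" as on the slide, and literals (the 4 in div) pass through unrenamed.

    map_table = {"r1": "p1", "r2": "p2", "r3": "p3"}
    free_list = ["p4", "p5", "p6", "p7"]

    def rename(op, src1, src2, dest):
        s1 = map_table.get(src1, src1)      # translate sources with the OLD mapping
        s2 = map_table.get(src2, src2)      # (this is what keeps RAW deps intact)
        map_table[dest] = free_list.pop(0)  # then allocate a fresh physical register
        return f"{op} {s1},{s2},{map_table[dest]}"

    for insn in ["add r2,r3,r1", "sub r2,r1,r3", "mul r2,r3,r3", "div r1,4,r1"]:
        op, regs = insn.split(" ")
        print(rename(op, *regs.split(",")))
    # -> add p2,p3,p4 / sub p2,p4,p5 / mul p2,p5,p6 / div p4,4,p7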

Renaming the earlier example, architected → physical:

  R1=MEM[R3+4]    // A      P1=MEM[R3+4]
  R2=MEM[R3+8]    // B      P2=MEM[R3+8]
  R1=R1*R2        // C      P3=P1*P2
  MEM[R3+4]=R1    // D      MEM[R3+4]=P3
  MEM[R3+8]=R1    // E      MEM[R3+8]=P3
  R1=MEM[R3+12]   // F      P4=MEM[R3+12]
  R2=MEM[R3+16]   // G      P5=MEM[R3+16]
  R1=R1*R2        // H      P6=P4*P5
  MEM[R3+12]=R1   // I      MEM[R3+12]=P6
  MEM[R3+16]=R1   // J      MEM[R3+16]=P6

Terminology
There are a lot of terms and ideas in out-of-order processors, and because a lot of the work was done in parallel, there isn't a standard set of names for things.
Here we've called the table that maps an architected register to a physical register the "map table". That is probably the most common term.
–I generally use Intel's term, "Register Alias Table" (RAT).
–"Rename table" isn't an uncommon term for it either.
I try to use a mix of terminology in this class so that you can understand others when they are describing something. It's not as bad as it sounds, but it is annoying at first.

Register Renaming Hardware
A really simple table (the rename table):
–Every time an instruction that writes a register is encountered, assign it a new physical register number
But there is some complexity:
–When do you free physical registers?