CS61C: Machine Structures, Lecture 5.2.2: Pipelining I (2004-07-22). Kurt Meinz, inst.eecs.berkeley.edu/~cs61c


CS 61C L5.2.2 Pipelining I (1) K. Meinz, Summer 2004 © UCB CS61C : Machine Structures Lecture 5.2.2: Pipelining I Kurt Meinz inst.eecs.berkeley.edu/~cs61c

CS 61C L5.2.2 Pipelining I (2) K. Meinz, Summer 2004 © UCB An Abstract View of the Critical Path. Critical Path (Load Operation) = clock-to-Q delay of the PC (FFs) + Instruction Memory access time + Register File access time + ALU time to perform a 32-bit add + Data Memory access time + stable (setup) time for the Register File write. [Single-cycle datapath figure: PC, Ideal Instruction Memory, Register File (Rs, Rt, Rd, Imm16), ALU, Ideal Data Memory, Next Address logic.] This affects how much you can overclock your PC!
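To make the overclocking remark concrete, here is a minimal Python sketch of the same arithmetic: the clock period of a single-cycle machine is the sum of the delays along the load path, and the maximum clock frequency is its reciprocal. The delay numbers below are illustrative assumptions, not figures from the lecture.

```python
# Single-cycle clock period = sum of delays along the critical (load) path.
# NOTE: these component delays are illustrative assumptions, not lecture figures.
delays_ns = {
    "PC clock-to-Q":       0.5,
    "instruction memory":  2.0,
    "register file read":  1.0,
    "ALU 32-bit add":      2.0,
    "data memory":         2.0,
    "register file setup": 1.0,
}

clock_period_ns = sum(delays_ns.values())   # the longest path sets the period
max_freq_ghz = 1.0 / clock_period_ns        # 1 cycle per period; ns -> GHz

print(f"critical path = {clock_period_ns:.1f} ns, "
      f"max clock roughly {max_freq_ghz:.2f} GHz")
```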

CS 61C L5.2.2 Pipelining I (3) K. Meinz, Summer 2004 © UCB Improve Critical Path => Improve Clock. The "critical path" (longest path through the logic) determines the length of the clock period. To reduce the clock period, shorten the path through the combinational logic (CL) by inserting state elements (registers).

CS 61C L5.2.2 Pipelining I (4) K. Meinz, Summer 2004 © UCB Review: Single-cycle datapath. Five steps to design a processor: 1. Analyze the instruction set => datapath requirements. 2. Select the set of datapath components & establish a clock methodology. 3. Assemble a datapath meeting the requirements. 4. Analyze the implementation of each instruction to determine the setting of control points that effects the register transfer. 5. Assemble the control logic. Control is the hard part; MIPS makes it easier: instructions are the same size, source registers are always in the same place, immediates have the same size and location, and operations are always on registers/immediates. [Figure: the five classic components (Processor = Control + Datapath, Memory, Input, Output).]

CS 61C L5.2.2 Pipelining I (5) K. Meinz, Summer 2004 © UCB Review Datapath (1/3). The datapath is the hardware that performs the operations necessary to execute programs; control instructs the datapath on what to do next. The datapath needs: access to storage (general-purpose registers and memory), computational ability (ALU), and helper hardware (local registers and the PC).

CS 61C L5.2.2 Pipelining I (6) K. Meinz, Summer 2004 © UCB Review Datapath (2/3) Five stages of datapath (executing an instruction): 1. Instruction Fetch (Increment PC) 2. Instruction Decode (Read Registers) 3. ALU (Computation) 4. Memory Access 5. Write to Registers ALL instructions must go through ALL five stages.

CS 61C L5.2.2 Pipelining I (7) K. Meinz, Summer 2004 © UCB Review Datapath (3/3). [Datapath figure: PC and +4 adder, instruction memory, register file (rs, rt, rd), ALU, data memory, immediate field; labeled 1. Instruction Fetch, 2. Decode/Register Read, 3. Execute, 4. Memory, 5. Write Back.]

CS 61C L5.2.2 Pipelining I (8) K. Meinz, Summer 2004 © UCB Gotta Do Laundry. Ann, Brian, Cathy, and Dave each have one load of clothes to wash, dry, fold, and put away. The washer takes 30 minutes, the dryer takes 30 minutes, the "folder" takes 30 minutes, and the "stasher" takes 30 minutes to put clothes into drawers.

CS 61C L5.2.2 Pipelining I (9) K. Meinz, Summer 2004 © UCB Sequential Laundry. Sequential laundry takes 8 hours for 4 loads. [Timing diagram: loads A, B, C, D processed one after another in 30-minute steps, 6 PM to 2 AM.]

CS 61C L5.2.2 Pipelining I (10) K. Meinz, Summer 2004 © UCB Pipelined Laundry. Pipelined laundry takes 3.5 hours for 4 loads! [Timing diagram: loads A, B, C, D overlapped in 30-minute steps, 6 PM to 9:30 PM.]
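The 8-hour and 3.5-hour figures (and the 2.3x speedup quoted a few slides later) follow from treating each load as four 30-minute stages; a quick Python check of that arithmetic:

```python
# Laundry analogy: 4 loads, each needing wash -> dry -> fold -> stash, 30 min per stage.
loads, stages, stage_min = 4, 4, 30

sequential_h = loads * stages * stage_min / 60        # finish one load before starting the next
pipelined_h = (stages + loads - 1) * stage_min / 60   # start a new load every 30 minutes

print(f"sequential: {sequential_h} h")                # 8.0 h
print(f"pipelined : {pipelined_h} h")                 # 3.5 h
print(f"speedup   : {sequential_h / pipelined_h:.1f}x vs. ideal {stages}x")
```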

CS 61C L5.2.2 Pipelining I (11) K. Meinz, Summer 2004 © UCB General Definitions. Latency: time to completely execute a certain task; for example, the time to read a sector from disk is the disk access time, or disk latency. Instruction latency is the time from when an instruction starts to when it finishes. Throughput: the amount of work that can be done over a period of time.

CS 61C L5.2.2 Pipelining I (12) K. Meinz, Summer 2004 © UCB Pipelining Lessons (0/2). Terminology: Issue: when an instruction goes into the first stage of the pipe. Commit: when an instruction finishes the last stage. [Laundry timing diagram repeated.]

CS 61C L5.2.2 Pipelining I (13) K. Meinz, Summer 2004 © UCB Pipelining Lessons (1/2). Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload. Multiple tasks operate simultaneously using different resources. Potential speedup = number of pipe stages. Time to "fill" the pipeline and time to "drain" it reduce the speedup: 2.3x vs. 4x in this example. [Laundry timing diagram repeated.]

CS 61C L5.2.2 Pipelining I (14) K. Meinz, Summer 2004 © UCB Pipelining Lessons (2/2). Suppose the new washer takes 20 minutes and the new stasher takes 20 minutes. How much faster is the pipeline? The pipeline rate is limited by the slowest pipeline stage, and unbalanced lengths of pipe stages also reduce speedup. [Laundry timing diagram repeated.]
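Answering the question the slide poses: under the usual simplification that every pipeline step is allotted the time of the slowest stage, the faster washer and stasher buy nothing, because the 30-minute dryer and folder still set the rate. A small sketch of that reasoning (the uniform-step model is an assumption of the sketch, not something stated on the slide):

```python
# Compare pipelined laundry time with balanced vs. unbalanced stage lengths,
# assuming every pipeline step lasts as long as the slowest stage.
def pipelined_minutes(stage_times, loads):
    step = max(stage_times)                  # slowest stage gates the whole pipeline
    return (len(stage_times) + loads - 1) * step

loads = 4
balanced   = [30, 30, 30, 30]                # wash, dry, fold, stash (minutes)
unbalanced = [20, 30, 30, 20]                # faster washer and stasher

print("balanced  :", pipelined_minutes(balanced, loads), "min")    # 210 min
print("unbalanced:", pipelined_minutes(unbalanced, loads), "min")  # still 210 min
```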

CS 61C L5.2.2 Pipelining I (15) K. Meinz, Summer 2004 © UCB Steps in Executing MIPS. 1) IFetch: fetch instruction, increment PC. 2) Decode: decode instruction, read registers. 3) Execute: mem-ref: calculate address; arith-log: perform operation. 4) Memory: load: read data from memory; store: write data to memory. 5) Write Back: write data to register.

CS 61C L5.2.2 Pipelining I (16) K. Meinz, Summer 2004 © UCB Pipelined Execution Representation. Every instruction must take the same number of steps, also called pipeline "stages", so some will go idle sometimes. [Pipeline diagram: several instructions each passing through IFtch, Dcd, Exec, Mem, WB, staggered by one cycle along the time axis.]

CS 61C L5.2.2 Pipelining I (17) K. Meinz, Summer 2004 © UCB Review: Datapath for MIPS. Use the datapath figure to represent the pipeline: the stages IFtch, Dcd, Exec, Mem, WB correspond to the datapath elements I$ (instruction memory), Reg (register read), ALU, D$ (data memory), and Reg (register write). [Datapath figure repeated with the five stage boundaries marked.]

CS 61C L5.2.2 Pipelining I (18) K. Meinz, Summer 2004 © UCB Graphical Pipeline Representation. [Pipeline diagram: Load, Add, Store, Sub, Or in instruction order flowing through I$, Reg, ALU, D$, Reg across the clock cycles. In the Reg boxes, the right half is highlighted for reads and the left half for writes.]

CS 61C L5.2.2 Pipelining I (19) K. Meinz, Summer 2004 © UCB Example. Suppose 2 ns for a memory access, 2 ns for an ALU operation, and 1 ns for a register file read or write; compute the instruction throughput. Nonpipelined execution: lw: IF + Read Reg + ALU + Memory + Write Reg = 2 + 1 + 2 + 2 + 1 = 8 ns; add: IF + Read Reg + ALU + Write Reg = 2 + 1 + 2 + 1 = 6 ns. Pipelined execution: Max(IF, Read Reg, ALU, Memory, Write Reg) = 2 ns per instruction.
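A minimal sketch of the throughput calculation above, using the stage times given on the slide (2 ns memory, 2 ns ALU, 1 ns register read or write):

```python
# Stage times from the slide, in ns.
lw_stages  = {"IF": 2, "Read Reg": 1, "ALU": 2, "Memory": 2, "Write Reg": 1}
add_stages = {"IF": 2, "Read Reg": 1, "ALU": 2, "Write Reg": 1}

# Non-pipelined: the datapath is tied up for the sum of the stage times.
print("non-pipelined lw :", sum(lw_stages.values()), "ns per instruction")   # 8 ns
print("non-pipelined add:", sum(add_stages.values()), "ns per instruction")  # 6 ns

# Pipelined: one instruction completes every clock, and the clock is the slowest stage.
print("pipelined        :", max(lw_stages.values()), "ns per instruction")   # 2 ns
```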

CS 61C L5.2.2 Pipelining I (20) K. Meinz, Summer 2004 © UCB Example. Suppose 2 ns for a memory access, 2 ns for an ALU operation, and 1 ns for a register file read or write; compute the instruction latency. Nonpipelined execution: lw: IF + Read Reg + ALU + Memory + Write Reg = 2 + 1 + 2 + 2 + 1 = 8 ns; add: IF + Read Reg + ALU + Write Reg = 2 + 1 + 2 + 1 = 6 ns. Pipelined execution: Sum(IF, Read Reg, ALU, Memory, Write Reg) = 10 ns, since every stage is stretched to the 2 ns clock period (5 stages x 2 ns).
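The latency result looks surprising at first: pipelining makes a single lw slower (10 ns instead of 8 ns), because every stage is padded out to the 2 ns clock period. A sketch of just that calculation:

```python
# Same stage times as before; now ask how long ONE lw takes end to end.
stage_times = {"IF": 2, "Read Reg": 1, "ALU": 2, "Memory": 2, "Write Reg": 1}
clock = max(stage_times.values())                        # every stage stretched to 2 ns

print("non-pipelined lw latency:", sum(stage_times.values()), "ns")  # 8 ns
print("pipelined lw latency    :", len(stage_times) * clock, "ns")   # 10 ns = 5 cycles x 2 ns
```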

CS 61C L5.2.2 Pipelining I (21) K. Meinz, Summer 2004 © UCB Things to Remember. Optimal pipeline: each stage executes part of an instruction each clock cycle, and one instruction finishes during each clock cycle, so on average the processor executes far more quickly. What makes this work? Similarities between instructions allow us to use the same stages for all instructions (generally), and each stage takes about the same amount of time as all the others, so there is little wasted time.

CS 61C L5.2.2 Pipelining I (22) K. Meinz, Summer 2004 © UCB Pipeline Summary. Pipelining is a BIG IDEA: a widely used concept. What makes it less than perfect? …

CS 61C L5.2.2 Pipelining I (23) K. Meinz, Summer 2004 © UCB Pipeline Hazard: Matching socks in a later load. Load A depends on load D; stall, since the folder is tied up. [Timing diagram: loads A through F overlapped; a bubble appears while the dependent load waits.]

CS 61C L5.2.2 Pipelining I (24) K. Meinz, Summer 2004 © UCB Problems for Computers. Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle. Structural hazards: the hardware cannot support this combination of instructions (a single person to fold and put clothes away). Control hazards: branches and other instructions that change the PC stall the pipeline until the hazard is resolved, creating "bubbles" in the pipeline. Data hazards: an instruction depends on the result of a prior instruction still in the pipeline (the missing sock).

CS 61C L5.2.2 Pipelining I (25) K. Meinz, Summer 2004 © UCB Structural Hazard #1: Single Memory (1/2). The same memory would have to be read twice in the same clock cycle: one instruction fetches from it while another loads data from it. [Pipeline diagram: Load followed by four instructions; Load's D$ access and a later instruction's I$ fetch land in the same cycle.]

CS 61C L5.2.2 Pipelining I (26) K. Meinz, Summer 2004 © UCB Structural Hazard #1: Single Memory (2/2). Solution: it is infeasible and inefficient to create a true second memory, so simulate one by having two Level 1 caches (a cache is a small temporary copy of, usually, the most recently used parts of memory): an L1 Instruction Cache and an L1 Data Cache. This requires complex hardware to handle the cases when both caches miss! (We'll learn more about caches next week.)

CS 61C L5.2.2 Pipelining I (27) K. Meinz, Summer 2004 © UCB Structural Hazard #2: Registers (1/2). We can't read and write the register file simultaneously. [Pipeline diagram: sw followed by four instructions; one instruction's register write-back coincides with a later instruction's register read.]

CS 61C L5.2.2 Pipelining I (28) K. Meinz, Summer 2004 © UCB Structural Hazard #2: Registers (2/2). Fact: register access is VERY fast, taking less than half the time of the ALU stage. Solution: introduce a convention: always write to registers during the first half of each clock cycle, and always read from registers during the second half of each clock cycle (easy when the register file is asynchronous). Result: a read and a write can be performed in the same clock cycle.

CS 61C L5.2.2 Pipelining I (29) K. Meinz, Summer 2004 © UCB Control Hazard: Branching (1/7). Where do we do the compare for the branch? [Pipeline diagram: beq followed by four instructions that are fetched before the branch outcome is known.]

CS 61C L5.2.2 Pipelining I (30) K. Meinz, Summer 2004 © UCB Control Hazard: Branching (2/7). We put the branch decision-making hardware in the ALU stage; therefore two more instructions after the branch will always be fetched, whether or not the branch is taken. Desired functionality of a branch: if we do not take the branch, don't waste any time and continue executing normally; if we take the branch, don't execute any instructions after the branch, just go to the desired label.

CS 61C L5.2.2 Pipelining I (31) K. Meinz, Summer 2004 © UCB Control Hazard: Branching (3/7). Initial solution: stall until the decision is made; insert "no-op" instructions, which accomplish nothing and just take time. Drawback: branches take 3 clock cycles each (assuming the comparator is put in the ALU stage). Drawback: we will still fetch the instruction at branch+4, so we must either decode the branch in IF or squash the fetched branch+4.

CS 61C L5.2.2 Pipelining I (32) K. Meinz, Summer 2004 © UCB Control Hazard: Branching (4/7). Optimization #1: move an asynchronous comparator up to Stage 2; as soon as the instruction is decoded (the opcode identifies it as a branch), immediately make a decision and set the value of the PC (if necessary). Benefit: since the branch completes in Stage 2, only one unnecessary instruction is fetched, so only one no-op is needed. Side note: this means that branches are idle in Stages 3, 4 and 5.

CS 61C L5.2.2 Pipelining I (33) K. Meinz, Summer 2004 © UCB Control Hazard: Branching (5/7). Insert a single no-op (bubble). [Pipeline diagram: add, beq, then lw; the lw fetch is delayed one cycle by the bubble following beq.] Impact: 2 clock cycles per branch instruction => slow.
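To see why the slide calls 2 cycles per branch "slow", one can estimate the effect on CPI; the 20% branch frequency below is an illustrative assumption, not a number from the lecture.

```python
# Rough CPI estimate with one bubble per branch
# (comparator moved to the decode stage, as in Optimization #1).
branch_fraction = 0.20        # ASSUMPTION: roughly 1 in 5 instructions is a branch
bubbles_per_branch = 1        # one fetched-then-squashed instruction per branch

cpi = 1.0 + branch_fraction * bubbles_per_branch
print(f"CPI = {cpi:.2f} -> about {100 * (cpi - 1):.0f}% slower than the ideal CPI of 1")
```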

CS 61C L5.2.2 Pipelining I (34) K. Meinz, Summer 2004 © UCB Control Hazard: Branching (6/7). Optimization #2: redefine branches. Old definition: if we take the branch, none of the instructions after the branch get executed by accident. New definition: whether or not we take the branch, the single instruction immediately following the branch gets executed (it is called the branch-delay slot).

CS 61C L5.2.2 Pipelining I (35) K. Meinz, Summer 2004 © UCB Control Hazard: Branching (7/7). Notes on the branch-delay slot: Worst case: you can always put a no-op in the branch-delay slot. Better case: you can find an instruction preceding the branch that can be placed in the branch-delay slot without affecting the flow of the program. Re-ordering instructions is a common method of speeding up programs; the compiler must be very smart to find instructions to do this, and usually it can find such an instruction at least 50% of the time. Jumps also have a delay slot.

CS 61C L5.2.2 Pipelining I (36) K. Meinz, Summer 2004 © UCB Example: Nondelayed vs. Delayed Branch.
Nondelayed Branch:
    or   $8, $9, $10
    add  $1, $2, $3
    sub  $4, $5, $6
    beq  $1, $4, Exit
    xor  $10, $1, $11
Exit:
Delayed Branch:
    add  $1, $2, $3
    sub  $4, $5, $6
    beq  $1, $4, Exit
    or   $8, $9, $10
    xor  $10, $1, $11
Exit:
(The or instruction is independent of the branch, so it can be moved into the branch-delay slot.)

CS 61C L5.2.2 Pipelining I (37) K. Meinz, Summer 2004 © UCB Data Hazards (1/2). Consider the following sequence of instructions:
    add $t0, $t1, $t2
    sub $t4, $t0, $t3
    and $t5, $t0, $t6
    or  $t7, $t0, $t8
    xor $t9, $t0, $t10

CS 61C L5.2.2 Pipelining I (38) K. Meinz, Summer 2004 © UCB Data Hazards (2/2). $t0 is not written back in time! [Pipeline diagram: add $t0,$t1,$t2 followed by sub, and, or, xor; the sub and and read $t0 in their Reg stage before the add reaches its WB stage.]

CS 61C L5.2.2 Pipelining I (39) K. Meinz, Summer 2004 © UCB Data Hazard Solution: Forwarding. Fix by forwarding the result as soon as we have it to where we need it. [Pipeline diagram: the add's ALU output is forwarded directly to the ALU inputs of sub and and.] The or hazard is solved by the register-file hardware (write in the first half of the cycle, read in the second half).
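One way to make the forwarding picture concrete is a tiny dependence check over the slide-37 sequence: anything that reads a register written one or two instructions earlier needs its value forwarded, while the third reader is covered by the write-first-half/read-second-half register file. This is a sketch of hazard detection only, not of the forwarding datapath itself.

```python
# Each instruction: (opcode, destination register, list of source registers).
program = [
    ("add", "$t0", ["$t1", "$t2"]),
    ("sub", "$t4", ["$t0", "$t3"]),
    ("and", "$t5", ["$t0", "$t6"]),
    ("or",  "$t7", ["$t0", "$t8"]),
    ("xor", "$t9", ["$t0", "$t10"]),
]

# In a 5-stage pipeline with split-cycle register writes/reads, only readers
# 1 or 2 instructions after the producer still need forwarding.
for i, (op, _dst, srcs) in enumerate(program):
    for dist in (1, 2):
        if i - dist >= 0:
            prev_op, prev_dst, _ = program[i - dist]
            if prev_dst in srcs:
                print(f"{op} needs {prev_dst} forwarded from {prev_op} ({dist} back)")
```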

CS 61C L5.2.2 Pipelining I (40) K. Meinz, Summer 2004 © UCB Data Hazard: Loads (1/4). Forwarding works if the value is available (even if not yet written back) before it is needed. But consider a load: [Pipeline diagram: lw $t0, 0($t1) followed immediately by sub $t3, $t0, $t2.] We need the result before it is calculated! We must stall the use (the sub) one cycle and then forward.

CS 61C L5.2.2 Pipelining I (41) K. Meinz, Summer 2004 © UCB Data Hazard: Loads (2/4). The hardware must stall the pipeline; this is called an "interlock". [Pipeline diagram: lw $t0, 0($t1) followed by sub, and, or; a bubble is inserted so the dependent instructions wait one cycle.]

CS 61C L5.2.2 Pipelining I (42) K. Meinz, Summer 2004 © UCB Data Hazard: Loads (3/4). The instruction slot after a load is called the "load delay slot". If that instruction uses the result of the load, then the hardware interlock will stall it for one cycle. If the compiler puts an unrelated instruction in that slot, then there is no stall. Letting the hardware stall the instruction in the delay slot is equivalent to putting a nop in the slot (except the latter uses more code space).
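The interlock rule on this slide is easy to mechanize: stall exactly when the instruction in the load delay slot reads the register being loaded. A small sketch, reusing the toy instruction format from the forwarding example above, that also shows how compiler scheduling removes the stall:

```python
# Count the one-cycle interlock stalls caused by load-use hazards
# (the instruction in the load delay slot reads the loaded register).
def load_use_stalls(program):
    stalls = 0
    for (op, dst, _), (_, _, next_srcs) in zip(program, program[1:]):
        if op == "lw" and dst in next_srcs:
            stalls += 1                      # hardware inserts one bubble
    return stalls

back_to_back = [("lw", "$t0", ["$t1"]),
                ("sub", "$t3", ["$t0", "$t2"])]
scheduled    = [("lw", "$t0", ["$t1"]),
                ("add", "$t6", ["$t4", "$t5"]),   # unrelated add fills the delay slot
                ("sub", "$t3", ["$t0", "$t2"])]

print("back-to-back lw/sub:", load_use_stalls(back_to_back), "stall")   # 1
print("compiler-scheduled :", load_use_stalls(scheduled), "stalls")     # 0
```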

CS 61C L5.2.2 Pipelining I (43) K. Meinz, Summer 2004 © UCB Data Hazard: Loads (4/4). The stall is equivalent to a nop. [Pipeline diagram: lw $t0, 0($t1) followed by a nop bubble, then sub, and, or proceeding normally.]

CS 61C L5.2.2 Pipelining I (44) K. Meinz, Summer 2004 © UCB C.f. Branch Delay vs. Load Delay Load Delay occurs only if necessary (dependent instructions). Branch Delay always happens (part of the ISA). Why not have Branch Delay interlocked? Answer: Interlocks only work if you can detect hazard ahead of time. By the time we detect a branch, we already need its value … hence no interlock is possible!

CS 61C L5.2.2 Pipelining I (45) K. Meinz, Summer 2004 © UCB Historical Trivia. The first MIPS design did not interlock and stall on the load-use data hazard: using a loaded value in the load delay slot simply gave the wrong answer! That is the real reason behind the name MIPS: Microprocessor without Interlocked Pipeline Stages, a word play on the acronym for Millions of Instructions Per Second, also called MIPS.

CS 61C L5.2.2 Pipelining I (46) K. Meinz, Summer 2004 © UCB “And in Conclusion..” Pipeline challenge is hazards Forwarding helps w/many data hazards Delayed branch helps with control hazard in 5 stage pipeline Monday: Pipelined Datapath and Control in Detail! Please finish phase 1 of proj 3 by Monday.