CS 430 – Computer Architecture: Pipelined Execution - Review. William J. Taffe, using slides of David Patterson.


CS 430 – Computer Architecture 1
CS 430 – Computer Architecture
Pipelined Execution - Review
William J. Taffe, using slides of David Patterson

CS 430 – Computer Architecture 2 – Steps in Executing MIPS
1) IFetch: Fetch Instruction, Increment PC
2) Decode: Decode Instruction, Read Registers
3) Execute: Mem-ref: Calculate Address; Arith-log: Perform Operation
4) Memory: Load: Read Data from Memory; Store: Write Data to Memory
5) Write Back: Write Data to Register
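To make the five steps concrete, here is a minimal Python sketch (my own illustration, not part of the slides; the instruction classes and the True/False table are assumptions) of which stages do useful work for each kind of MIPS instruction. As the next slide stresses, every instruction still occupies all five stages; the stages marked False simply go idle.

# A minimal sketch of which pipeline stages do useful work for each MIPS
# instruction class. Every instruction still passes through all five stages;
# stages marked False are simply idle for that class.
STAGES = ["IFetch", "Decode", "Execute", "Memory", "WriteBack"]

USEFUL_WORK = {
    "load":      {"IFetch": True, "Decode": True, "Execute": True, "Memory": True,  "WriteBack": True},
    "store":     {"IFetch": True, "Decode": True, "Execute": True, "Memory": True,  "WriteBack": False},
    "arith-log": {"IFetch": True, "Decode": True, "Execute": True, "Memory": False, "WriteBack": True},
    "branch":    {"IFetch": True, "Decode": True, "Execute": True, "Memory": False, "WriteBack": False},
}

for kind, stages in USEFUL_WORK.items():
    active = [s for s in STAGES if stages[s]]
    print(f"{kind:10s} does useful work in: {', '.join(active)}")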

CS 430 – Computer Architecture 3 – Pipelined Execution Representation
° Every instruction must take the same number of steps, also called pipeline "stages", so some will go idle sometimes

Time →
IFtch Dcd   Exec  Mem   WB
      IFtch Dcd   Exec  Mem   WB
            IFtch Dcd   Exec  Mem   WB
                  IFtch Dcd   Exec  Mem   WB
                        IFtch Dcd   Exec  Mem   WB
                              IFtch Dcd   Exec  Mem   WB
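The staircase picture can be generated mechanically. Below is a small Python sketch (the function pipeline_diagram and the stage labels are mine, not from the slides) that prints which stage each instruction occupies in each cycle; once the pipeline fills, one instruction finishes per cycle.

# Prints the staircase pipeline diagram: instruction i enters IFetch in cycle
# i+1, and after the pipeline fills, one instruction completes per cycle.
STAGES = ["IFtch", "Dcd", "Exec", "Mem", "WB"]

def pipeline_diagram(n_instructions: int) -> None:
    total_cycles = n_instructions + len(STAGES) - 1
    header = " ".join(f"c{c+1:>5}" for c in range(total_cycles))
    print("         " + header)
    for i in range(n_instructions):
        row = []
        for c in range(total_cycles):
            stage = c - i  # which stage instruction i is in during cycle c
            row.append(f"{STAGES[stage]:>6}" if 0 <= stage < len(STAGES) else " " * 6)
        print(f"instr {i+1}: " + " ".join(row))

pipeline_diagram(6)   # six instructions, as in the slide's picture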

CS 430 – Computer Architecture 4 – Review: Datapath for MIPS
° Use the datapath figure to represent the pipeline
[Figure: the MIPS datapath (PC, instruction memory, register file, ALU, data memory) divided into five pipeline stages: 1. Instruction Fetch (I$), 2. Decode/Register Read (Reg), 3. Execute (ALU), 4. Memory (D$), 5. Write Back (Reg)]

CS 430 – Computer Architecture 5 – Problems for Computers
° Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle
  - Structural hazards: HW cannot support this combination of instructions (e.g., reading an instruction and data from the same memory)
  - Control hazards: branches and other instructions that change the PC stall the pipeline, creating "bubbles", until the outcome is known
  - Data hazards: an instruction depends on the result of a prior instruction still in the pipeline (write and then read of the same data)

CS 430 – Computer Architecture 6 – Structural Hazard #1: Single Memory (1/2)
° Read the same memory twice in the same clock cycle?
[Figure: pipeline diagram for a Load followed by Instr 1–4; in one clock cycle the Load's Memory stage (D$) and Instr 3's Instruction Fetch stage (I$) both need the single memory]

CS 430 – Computer Architecture 7 – Structural Hazard #1: Single Memory (2/2)
° Solution: it is infeasible and inefficient to create a second main memory, so simulate it by having two Level 1 caches
  - have both an L1 Instruction Cache and an L1 Data Cache
  - need more complex hardware to control the case when both caches miss
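A rough way to see the clash, sketched below in Python (the helper and its names are my own, not from the slides; contention is modeled purely by stage timing): with the staircase schedule, a load's Memory-stage access in cycle i+4 lands in the same cycle as the IFetch of the instruction fetched three slots later, and splitting the L1 cache into I$ and D$ removes the conflict.

# Single-memory structural hazard: instruction i does IFetch in cycle i+1 and
# its Memory stage in cycle i+4, so a load's D$ access always coincides with
# the I$ access of the instruction three slots behind it in program order.
def memory_conflicts(kinds, split_caches=False):
    """kinds: list like ["load", "add", ...], one entry per instruction, in order."""
    conflicts = []
    for i, kind in enumerate(kinds):
        if kind in ("load", "store") and not split_caches:
            mem_cycle = i + 4            # cycle of this instruction's Memory stage
            fetcher = mem_cycle - 1      # index of the instruction doing IFetch that cycle
            if fetcher < len(kinds):
                conflicts.append((mem_cycle, i, fetcher))
    return conflicts

prog = ["load", "add", "add", "add", "add"]        # the slide's Load + Instr 1..4
print(memory_conflicts(prog))                       # [(4, 0, 3)]: Load's D$ vs Instr 3's I$
print(memory_conflicts(prog, split_caches=True))    # []: separate I$ and D$ remove the clash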

CS 430 – Computer Architecture 8 – Structural Hazard #2: Registers (1/2)
° Can we read and write the registers simultaneously?
[Figure: pipeline diagram for a Load followed by Instr 1–4; in one clock cycle the Load's Write Back stage and Instr 3's Decode/Register Read stage both need the register file]

CS 430 – Computer Architecture 9 – Structural Hazard #2: Registers (2/2)
° Solution: build the register file with multiple ports, so it can be read and written at the same time
° What if we read and write the same register in the same cycle?
  - Design it so the write happens in the first half of the clock cycle and the read in the second half
  - Thus the read returns what was just written: the new contents
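A minimal sketch of that half-cycle trick, assuming a simple Python model of the register file (the class RegFile and its cycle method are invented for illustration): the write is applied before the reads within one call, so a same-cycle read of the same register returns the newly written value.

# Register file that writes in the first half of the cycle and reads in the
# second half, so a same-cycle read of the written register sees the new value.
class RegFile:
    def __init__(self):
        self.regs = [0] * 32

    def cycle(self, write=None, reads=()):
        """write: optional (reg, value) applied first; reads: registers read afterwards."""
        if write is not None:                 # first half of the clock cycle
            reg, value = write
            if reg != 0:                      # $zero is hard-wired to 0 in MIPS
                self.regs[reg] = value
        return [self.regs[r] for r in reads]  # second half: reads see the new contents

rf = RegFile()
print(rf.cycle(write=(8, 42), reads=(8,)))    # [42] -> same-cycle read gets the written data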

CS 430 – Computer Architecture 10 – Data Hazards (1/2)
° Consider the following sequence of instructions:
add $t0, $t1, $t2
sub $t4, $t0, $t3
and $t5, $t0, $t6
or  $t7, $t0, $t8
xor $t9, $t0, $t10

CS 430 – Computer Architecture 11 – Data Hazards (2/2)
° Dependencies backwards in time are hazards
[Figure: pipeline diagram of the five instructions above; add does not write $t0 until its Write Back stage, yet sub, and, and or try to read $t0 in earlier cycles, so their dependence arrows point backwards in time]
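One way to see why these dependences point backwards in time: the result of the add is not in the register file until Write Back. The short Python sketch below (the program list and the 3-slot window are my own modelling) flags every reader that needs $t0 at or before the cycle it is written back; the next two slides show how forwarding handles the first two and how the register hardware handles the or, while the xor is far enough behind to be safe.

# Flags the "backwards in time" dependences: an ALU result reaches the register
# file only in Write Back, so any instruction within three slots of the producer
# tries to read $t0 at or before the cycle in which it is written.
program = [
    ("add", "$t0", ["$t1", "$t2"]),
    ("sub", "$t4", ["$t0", "$t3"]),
    ("and", "$t5", ["$t0", "$t6"]),
    ("or",  "$t7", ["$t0", "$t8"]),
    ("xor", "$t9", ["$t0", "$t10"]),
]

for i, (_, dest, _) in enumerate(program):
    for j in range(i + 1, min(i + 4, len(program))):
        op, _, sources = program[j]
        if dest in sources:
            print(f"backwards-in-time dependence: {op} (instr {j}) needs {dest} "
                  f"from instr {i} before it reaches Write Back")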

CS 430 – Computer Architecture 12 – Data Hazard Solution: Forwarding
° Forward the result from one stage to another
[Figure: same pipeline diagram as before, with forwarding arrows from add's Execute and Memory stages directly into the Execute stages of sub and and]
° The "or" hazard is solved by the register hardware (write in the first half of the cycle, read in the second half)
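The rule of thumb behind the figure can be written down directly. The sketch below (the function resolve_raw is mine, not from the slides) classifies each dependence on an ALU result, like the add above, by how many instruction slots separate consumer and producer.

# How each RAW dependence distance is handled for ALU results in this pipeline:
#   distance 1 or 2: forward from the Execute or Memory stage outputs
#   distance 3:      register file writes in the first half-cycle, reads in the second
#   distance 4+:     value is already in the register file, no hazard at all
def resolve_raw(distance: int) -> str:
    if distance in (1, 2):
        return "forwarding"
    if distance == 3:
        return "register file write-before-read"
    return "no hazard"

for d, consumer in [(1, "sub"), (2, "and"), (3, "or"), (4, "xor")]:
    print(f"{consumer}: {d} slot(s) after add -> {resolve_raw(d)}")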

CS 430 – Computer Architecture 13 – Data Hazard: Loads (1/2)
° Dependencies backwards in time are hazards
[Figure: lw $t0, 0($t1) followed by sub $t3, $t0, $t2; the loaded value is not available until the Memory stage, after sub's Execute stage already needs it]
° Can't solve this one with forwarding alone
° Must stall the instruction dependent on the load, then forward (more hardware)
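A sketch of the interlock described here (the tuple encoding and the helper name are mine, not from the slides): when an instruction needs a register that the lw immediately in front of it is still loading, the hardware inserts one bubble; after that single stall, forwarding from the Memory stage covers the value.

# Inserts the bubble the hardware would generate for a load-use hazard.
def insert_load_use_bubbles(program):
    """program: list of (opcode, dest, sources); returns the schedule with bubbles."""
    scheduled, prev = [], None
    for instr in program:
        opcode, dest, sources = instr
        if prev and prev[0] == "lw" and prev[1] in sources:
            scheduled.append(("bubble", None, []))   # stall one cycle for the load-use hazard
        scheduled.append(instr)
        prev = instr
    return scheduled

prog = [("lw", "$t0", ["$t1"]), ("sub", "$t3", ["$t0", "$t2"])]
for op, dest, srcs in insert_load_use_bubbles(prog):
    print(op, dest, srcs)
# lw ... / bubble ... / sub ...  -> the sub is delayed one cycle, as on the next slide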

CS 430 – Computer Architecture 14 – Data Hazard: Loads (2/2)
° Hardware must insert a no-op (bubble) into the pipeline
[Figure: lw $t0, 0($t1); sub $t3, $t0, $t2; and $t5, $t0, $t4; or $t7, $t0, $t6 — a bubble is inserted after the lw, delaying sub, and, and or by one cycle so the loaded value can be forwarded]

CS 430 – Computer Architecture 15 – Control Hazard: Branching (1/6)
° Suppose we put the branch decision-making hardware in the ALU stage: then two more instructions after the branch will always be fetched, whether or not the branch is taken
° Desired functionality of a branch:
  - if we do not take the branch, don't waste any time and continue executing normally
  - if we take the branch, don't execute any instructions after the branch, just go to the desired label

CS 430 – Computer Architecture 16 – Control Hazard: Branching (2/6)
° Initial Solution: stall until the decision is made
  - insert "no-op" instructions: instructions that accomplish nothing, just take time
  - Drawback: branches take 3 clock cycles each (assuming the comparator is put in the ALU stage)

CS 430 – Computer Architecture 17 – Control Hazard: Branching (3/6)
° Optimization #1: move the comparator up to Stage 2
  - as soon as the instruction is decoded (the opcode identifies it as a branch), immediately make the decision and set the value of the PC (if necessary)
  - Benefit: since the branch completes in Stage 2, only one unnecessary instruction is fetched, so only one no-op is needed
  - Side note: this means that branches are idle in Stages 3, 4, and 5

CS 430 – Computer Architecture 18 – Control Hazard: Branching (4/6)
° Insert a single no-op (bubble)
[Figure: pipeline diagram of Add, Beq, Load with one bubble between the Beq and the Load]
° Impact: 2 clock cycles per branch instruction → slow
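To see why even one bubble per branch matters, here is a back-of-the-envelope sketch (the 20% branch frequency is an assumed example figure, not from the slides): in a full pipeline the base cost is one cycle per instruction, and every branch adds its bubbles on top.

# Effective cycles per instruction once branch bubbles are charged.
def effective_cpi(branch_fraction: float, bubbles_per_branch: int) -> float:
    return 1.0 + branch_fraction * bubbles_per_branch

print(effective_cpi(0.20, 2))  # comparator in the ALU stage: 3 cycles/branch -> CPI 1.40
print(effective_cpi(0.20, 1))  # comparator moved to Decode:  2 cycles/branch -> CPI 1.20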

CS 430 – Computer Architecture 19 – Forwarding and Moving the Branch Decision
° Forwarding/bypassing currently affects the Execute stage: instead of using the value read from the register file in the Decode stage, use the value from the ALU output or the Memory output
° Moving the branch decision from the Execute stage to the Decode stage means forwarding/bypassing must be replicated in the Decode stage for branches, i.e., code like the following must still work:
addiu $s1, $s1, -4
beq   $s1, $s2, Exit

CS 430 – Computer Architecture 20 – Control Hazard: Branching (5/6)
° Optimization #2: redefine branches
  - Old definition: if we take the branch, none of the instructions after the branch get executed by accident
  - New definition: whether or not we take the branch, the single instruction immediately following the branch gets executed (it sits in the branch-delay slot)

CS 430 – Computer Architecture 21 – Control Hazard: Branching (6/6)
° Notes on the branch-delay slot
  - Worst case: we can always put a no-op in the branch-delay slot
  - Better case: we can find an instruction preceding the branch which can be placed in the branch-delay slot without affecting the flow of the program
    - re-ordering instructions is a common method of speeding up programs
    - the compiler must be very smart in order to find instructions to do this
    - usually such an instruction can be found at least 50% of the time
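The compiler's job here can be caricatured in a few lines. The sketch below (the fill_delay_slot helper and the tuple encoding are mine, and a real compiler must check far more) scans backwards for an instruction the branch does not depend on and that no in-between instruction conflicts with, and moves it into the delay slot; the fallback is the worst-case no-op. The input mirrors the example on the next slide, where the independent or can be moved.

# Toy delay-slot filler: move a safe preceding instruction into the slot, else nop.
def fill_delay_slot(block, branch_sources):
    """block: instructions before the branch, as (opcode, dest, sources) tuples."""
    for i in range(len(block) - 1, -1, -1):
        opcode, dest, sources = block[i]
        conflict = any(dest in srcs or d in sources or d == dest
                       for _, d, srcs in block[i + 1:])     # later reads/writes block the move
        if dest not in branch_sources and not conflict:
            return block[:i] + block[i + 1:], block[i]       # candidate moves into the slot
    return block, ("nop", None, [])                          # worst case: insert a no-op

# beq reads $1 and $4, so add and sub must stay; the independent or can move.
block = [("or",  "$8", ["$9", "$10"]),
         ("add", "$1", ["$2", "$3"]),
         ("sub", "$4", ["$5", "$6"])]
print(fill_delay_slot(block, branch_sources=["$1", "$4"]))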

CS 430 – Computer Architecture 22 – Example: Nondelayed vs. Delayed Branch

Nondelayed Branch           Delayed Branch
   or  $8, $9, $10             add $1, $2, $3
   add $1, $2, $3              sub $4, $5, $6
   sub $4, $5, $6              beq $1, $4, Exit
   beq $1, $4, Exit            or  $8, $9, $10
   xor $10, $1, $11            xor $10, $1, $11
Exit:                       Exit:

CS 430 – Computer Architecture 23 – Try "Peer-to-Peer" Instruction
° Given a question, everyone has one minute to pick an answer
° First raise hands to pick
° Then break into groups of 5 and talk about the solution for a few minutes
° Then vote again (each group votes together for the group's choice)
° Discussion should lead to convergence
° Give the answer, and see if there are questions
° Will try this twice today

CS 430 – Computer Architecture 24 – How long to execute?
° Assume delayed branch, 5-stage pipeline, forwarding/bypassing, interlock on unresolved load hazards

Loop: lw    $t0, 0($s1)
      addu  $t0, $t0, $s2
      sw    $t0, 0($s1)
      addiu $s1, $s1, -4
      bne   $s1, $zero, Loop
      nop

° How many clock cycles on average to execute this code per loop iteration? (after 1000 iterations, so the pipeline is full)

CS 430 – Computer Architecture 25 – How long to execute?
° Assume delayed branch, 5-stage pipeline, forwarding/bypassing, interlock on unresolved hazards
° Look at this code:

Loop: lw    $t0, 0($s1)        2 cycles (data hazard on $t0, so stall)
      addu  $t0, $t0, $s2      1
      sw    $t0, 0($s1)        1
      addiu $s1, $s1, -4       1
      bne   $s1, $zero, Loop   1
      nop                      1 (delayed branch, so the nop executes)

° How many clock cycles to execute this code per loop iteration? 2 + 1 + 1 + 1 + 1 + 1 = 7

CS 430 – Computer Architecture 26 – Rewrite the loop to improve performance
° Rewrite this code to reduce the clock cycles per loop iteration to as few as possible:

Loop: lw    $t0, 0($s1)
      addu  $t0, $t0, $s2
      sw    $t0, 0($s1)
      addiu $s1, $s1, -4
      bne   $s1, $zero, Loop
      nop

° How many clock cycles to execute your revised code per loop iteration?  a) 4   b) 5   c) 6   d) 7

CS 430 – Computer Architecture 27 – Rewrite the loop to improve performance
° Rewrite this code to reduce the clock cycles per loop iteration to as few as possible:

Loop: lw    $t0, 0($s1)
      addiu $s1, $s1, -4       (no load-use hazard: this independent instruction fills the extra cycle)
      addu  $t0, $t0, $s2
      bne   $s1, $zero, Loop
      sw    $t0, +4($s1)       (sw moved past the addiu into the delay slot, so its offset becomes +4)

° How many clock cycles to execute your revised code per loop iteration?  a) 4   b) 5   c) 6   d) 7
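The two counts can be checked with a tiny cycle-count model. The sketch below (the function cycles_per_iteration and the tuple encoding are mine; it bakes in the slides' assumptions of a full pipeline, forwarding, a one-cycle load-use interlock, and a delayed branch whose slot always executes) reproduces 7 cycles per iteration for the original loop and 5 for the reordered one, so the answer above is b) 5.

# Counts cycles per loop iteration: one per instruction, plus one stall whenever
# an instruction uses the register loaded by the lw immediately before it.
def cycles_per_iteration(program):
    """program: (opcode, dest, sources) tuples for one loop body, delay slot included."""
    cycles, prev = 0, None
    for opcode, dest, sources in program:
        if prev and prev[0] == "lw" and prev[1] in sources:
            cycles += 1                     # one stall for the load-use hazard
        cycles += 1                         # every instruction then takes one cycle
        prev = (opcode, dest, sources)
    return cycles

original = [("lw", "$t0", ["$s1"]), ("addu", "$t0", ["$t0", "$s2"]),
            ("sw", None, ["$t0", "$s1"]), ("addiu", "$s1", ["$s1"]),
            ("bne", None, ["$s1", "$zero"]), ("nop", None, [])]
reordered = [("lw", "$t0", ["$s1"]), ("addiu", "$s1", ["$s1"]),
             ("addu", "$t0", ["$t0", "$s2"]), ("bne", None, ["$s1", "$zero"]),
             ("sw", None, ["$t0", "$s1"])]
print(cycles_per_iteration(original), cycles_per_iteration(reordered))   # 7 5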

CS 430 – Computer Architecture 28 – State of the Art: Pentium 4
° 8 KB Instruction cache, 8 KB Data cache, 256 KB L2 cache on chip
° Clock cycle = 0.67 nanoseconds, i.e., a 1500 MHz clock rate (667 picoseconds, 1.5 GHz)
° HW translates from 80x86 to MIPS-like micro-ops
° 20-stage pipeline
° Superscalar: fetches and retires up to 3 instructions per clock cycle; out-of-order execution
° Faster memory bus: 400 MHz

CS 430 – Computer Architecture 29 – Things to Remember (1/2)
° Optimal pipeline
  - Each stage is executing part of a different instruction each clock cycle
  - One instruction finishes during each clock cycle
  - On average, instructions execute far more quickly
° What makes this work?
  - Similarities between instructions allow us to use the same stages for all instructions (generally)
  - Each stage takes about the same amount of time as all the others: little wasted time

CS 430 – Computer Architecture 30 – Things to Remember (2/2)
° Pipelining is a Big Idea: a widely used concept
° What makes it less than perfect?
  - Structural hazards: suppose we had only one cache? → need more HW resources
  - Control hazards: need to worry about branch instructions? → delayed branch or branch prediction
  - Data hazards: an instruction depends on a previous instruction? → forwarding, plus stalls when forwarding is not enough