CS61C L21 Pipeline © UC Regents 1 CS61C - Machine Structures Lecture 21 - Introduction to Pipelined Execution November 15, 2000 David Patterson

CS61C L21 Pipeline © UC Regents 2 Review (1/3) °Datapath is the hardware that performs operations necessary to execute programs. °Control instructs datapath on what to do next. °Datapath needs: access to storage (general purpose registers and memory) computational ability (ALU) helper hardware (local registers and PC)

CS61C L21 Pipeline © UC Regents 3 Review (2/3) °Five stages of datapath (executing an instruction): 1. Instruction Fetch (Increment PC) 2. Instruction Decode (Read Registers) 3. ALU (Computation) 4. Memory Access 5. Write to Registers °ALL instructions must go through ALL five stages. °Datapath designed in hardware.

CS61C L21 Pipeline © UC Regents 4 Review Datapath [Figure: the datapath (PC, +4 adder, instruction memory, registers rs/rt/rd, ALU, data memory, imm) labeled with the five stages: 1. Instruction Fetch, 2. Decode/Register Read, 3. Execute, 4. Memory, 5. Write Back]

CS61C L21 Pipeline © UC Regents 5 Outline °Pipelining Analogy °Pipelining Instruction Execution °Hazards °Advanced Pipelining Concepts by Analogy

CS61C L21 Pipeline © UC Regents 6 Gotta Do Laundry °Ann, Brian, Cathy, Dave each have one load of clothes to wash, dry, fold, and put away °Washer takes 30 minutes °Dryer takes 30 minutes °“Folder” takes 30 minutes °“Stasher” takes 30 minutes to put clothes into drawers

CS61C L21 Pipeline © UC Regents 7 Sequential Laundry °Sequential laundry takes 8 hours for 4 loads [Figure: timeline from 6 PM onward in 30-minute slots; loads A–D run back-to-back, each occupying its four stages in turn]

CS61C L21 Pipeline © UC Regents 8 Pipelined Laundry °Pipelined laundry takes 3.5 hours for 4 loads! [Figure: same timeline from 6 PM; loads A–D overlap, each starting 30 minutes after the previous one]

CS61C L21 Pipeline © UC Regents 9 General Definitions °Latency: time to completely execute a certain task; for example, the time to read a sector from disk is the disk access time or disk latency °Throughput: amount of work that can be done over a period of time
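To make the 8-hour vs. 3.5-hour numbers and the latency/throughput distinction concrete, here is a minimal C sketch (my own illustration, not part of the lecture) that works them out for the 4-load, 4-stage, 30-minutes-per-stage laundry example.

```c
/* Minimal sketch (illustration only, not from the lecture): sequential vs.
 * pipelined times, per-load latency, and throughput for the laundry example. */
#include <stdio.h>

int main(void) {
    int loads = 4, stages = 4, stage_min = 30;

    int sequential = loads * stages * stage_min;        /* 480 min = 8 hours   */
    int pipelined  = (stages + loads - 1) * stage_min;  /* 210 min = 3.5 hours */

    printf("sequential: %d min, pipelined: %d min\n", sequential, pipelined);
    printf("latency per load: %d min either way\n", stages * stage_min);
    printf("throughput: %.2f vs %.2f loads/hour\n",
           60.0 * loads / sequential, 60.0 * loads / pipelined);
    return 0;
}
```

Pipelining leaves each load's latency at 120 minutes but raises throughput from 0.5 to about 1.14 loads per hour.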

CS61C L21 Pipeline © UC Regents 10 Pipelining Lessons (1/2) °Pipelining doesn’t help latency of single task, it helps throughput of entire workload °Multiple tasks operating simultaneously using different resources °Potential speedup = Number of pipe stages °Time to “fill” pipeline and time to “drain” it reduce speedup: 2.3X v. 4X in this example [Figure: pipelined laundry timeline repeated]
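To see where 2.3X vs. 4X comes from: with n = 4 loads and k = 4 equal stages, sequential time is n × k stage-times while pipelined time is (k + n − 1) stage-times (k to fill the pipeline, n − 1 more to finish the remaining loads), so speedup = n × k / (k + n − 1) = 16/7 ≈ 2.3; only as n grows large does the speedup approach the stage count of 4.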

CS61C L21 Pipeline © UC Regents 11 Pipelining Lessons (2/2) °Suppose a new Washer takes 20 minutes and a new Stasher takes 20 minutes. How much faster is the pipeline? °Pipeline rate limited by slowest pipeline stage °Unbalanced lengths of pipe stages also reduce speedup [Figure: pipelined laundry timeline repeated]
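One way to answer the slide's question (an illustrative calculation, not from the slide): if the pipeline still advances at the pace of its slowest stage, the 30-minute dryer and folder set the step, so the pipelined time for 4 loads stays at about (4 + 3) × 30 = 210 minutes; meanwhile the sequential time drops to 4 × (20 + 30 + 30 + 20) = 400 minutes, so the speedup falls from roughly 2.3X to about 1.9X. The faster washer and stasher buy nothing in the pipelined case because the bottleneck stages set the rate.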

CS61C L21 Pipeline © UC Regents 12 Steps in Executing MIPS 1) IFetch: Fetch Instruction, Increment PC 2) Decode Instruction, Read Registers 3) Execute: Mem-ref: Calculate Address; Arith-log: Perform Operation 4) Memory: Load: Read Data from Memory; Store: Write Data to Memory 5) Write Back: Write Data to Register
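As a software analogy for these five steps, here is a small, non-pipelined C sketch (my own toy encoding, not real MIPS instruction formats or the lecture's datapath) that runs an add and a lw through fetch, decode, execute, memory, and write-back functions.

```c
/* Toy, non-pipelined sketch of the five steps for two MIPS-style
 * instructions (add, lw). The struct encoding and word-addressed memory
 * are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

enum op { OP_ADD, OP_LW };
struct instr { enum op op; int rd, rs, rt, imm; };

static int32_t reg[32];
static int32_t data_mem[256];          /* word-addressed toy data memory */
static struct instr instr_mem[16];
static int pc;

static struct instr ifetch(void) { return instr_mem[pc++]; }         /* 1. Fetch, increment PC      */
static void decode(struct instr i, int32_t *a, int32_t *b) {          /* 2. Decode, read registers   */
    *a = reg[i.rs];
    *b = reg[i.rt];
}
static int32_t execute(struct instr i, int32_t a, int32_t b) {        /* 3. ALU: op or address calc  */
    return i.op == OP_ADD ? a + b : a + i.imm;
}
static int32_t memory(struct instr i, int32_t alu) {                  /* 4. Memory: only lw reads    */
    return i.op == OP_LW ? data_mem[alu] : alu;
}
static void writeback(struct instr i, int32_t val) {                  /* 5. Write result register    */
    if (i.rd != 0) reg[i.rd] = val;
}

int main(void) {
    instr_mem[0] = (struct instr){OP_ADD, 3, 1, 2, 0};   /* add $3, $1, $2 */
    instr_mem[1] = (struct instr){OP_LW,  4, 3, 0, 8};   /* lw  $4, 8($3)  */
    reg[1] = 5; reg[2] = 7; data_mem[20] = 99;

    for (int n = 0; n < 2; n++) {        /* every instruction walks all five steps */
        int32_t a, b;
        struct instr i = ifetch();
        decode(i, &a, &b);
        int32_t alu = execute(i, a, b);
        int32_t val = memory(i, alu);
        writeback(i, val);
    }
    printf("$3 = %d, $4 = %d\n", reg[3], reg[4]);   /* expect $3 = 12, $4 = 99 */
    return 0;
}
```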

CS61C L21 Pipeline © UC Regents 13 Pipelined Execution Representation °Every instruction must take same number of steps, also called pipeline “stages”, so some will go idle sometimes [Figure: six instructions, each drawn as IFtch Dcd Exec Mem WB, staggered one stage apart over time]

CS61C L21 Pipeline © UC Regents 14 Review: Datapath for MIPS °Use datapath figure to represent pipeline [Figure: the datapath (PC, +4, instruction memory, registers rs/rt/rd, ALU, data memory, imm) divided into Stage 1–Stage 5: 1. Instruction Fetch, 2. Decode/Register Read, 3. Execute, 4. Memory, 5. Write Back; shorthand per stage: I$, Reg, ALU, D$, Reg, matching IFtch Dcd Exec Mem WB]

CS61C L21 Pipeline © UC Regents 15 Graphical Pipeline Representation [Figure: instructions Load, Add, Store, Sub, Or listed top to bottom in program order; time in clock cycles runs left to right; each instruction passes through I$, Reg, ALU, D$, Reg in successive cycles, shifted one cycle later per instruction. In the Reg box, the right half is highlighted for a read and the left half for a write.]
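The same picture can be generated mechanically. Here is a minimal C sketch (an illustration of the idea, not course code) that prints which stage each of five instructions occupies in every clock cycle of an ideal, hazard-free five-stage pipeline.

```c
/* Minimal sketch (illustration only): per-cycle stage occupancy of an
 * ideal five-stage pipeline. Stage names are the usual abbreviations of
 * the slide's IFtch/Dcd/Exec/Mem/WB. */
#include <stdio.h>

int main(void) {
    const char *stage[] = {"IF ", "ID ", "EX ", "MEM", "WB "};
    const char *prog[]  = {"lw ", "add", "sw ", "sub", "or "};
    int n_instr = 5, n_stages = 5;
    int n_cycles = n_instr + n_stages - 1;       /* time to fill + drain */

    printf("            ");                      /* header row of cycle numbers */
    for (int c = 1; c <= n_cycles; c++) printf("%4d", c);
    printf("\n");

    for (int i = 0; i < n_instr; i++) {          /* one row per instruction */
        printf("instr %d %s:", i + 1, prog[i]);
        for (int c = 1; c <= n_cycles; c++) {
            int s = c - 1 - i;                   /* stage index this cycle */
            if (s >= 0 && s < n_stages) printf(" %s", stage[s]);
            else                        printf("    ");
        }
        printf("\n");
    }
    return 0;
}
```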

CS61C L21 Pipeline © UC Regents 16 Example °Suppose 2 ns for memory access, 2 ns for ALU operation, and 1 ns for register file read or write °Nonpipelined Execution: lw: IF + Read Reg + ALU + Memory + Write Reg = 2 + 1 + 2 + 2 + 1 = 8 ns; add: IF + Read Reg + ALU + Write Reg = 2 + 1 + 2 + 1 = 6 ns °Pipelined Execution: Max(IF, Read Reg, ALU, Memory, Write Reg) = 2 ns
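Worth spelling out what this buys (a standard observation, not on the slide): with a 2 ns pipeline clock, a single lw now takes 5 × 2 = 10 ns to get through all five stages, slightly worse than the 8 ns nonpipelined latency, but in steady state one instruction completes every 2 ns instead of every 6–8 ns, so throughput improves by roughly 3–4X on long instruction sequences.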

CS61C L21 Pipeline © UC Regents 17 Pipeline Hazard: Matching socks in later load °A depends on D; stall since folder tied up [Figure: laundry timeline with six loads A–F; a bubble appears where the folder must wait]

CS61C L21 Pipeline © UC Regents 18 Administrivia: Rest of 61C °Rest of 61C at a slower pace: 1 project, 1 lab, no more homeworks
F 11/17: Performance; Cache Sim Project
W 11/24: X86, PC buzzwords and 61C
W 11/29: Review: Pipelines; RAID Lab
F 12/1: Review: Caches/TLB/VM; Section 7.5
M 12/4: Deadline to correct your grade record
W 12/6: Review: Interrupts (A.7); Feedback lab
F 12/8: 61C Summary / Your Cal heritage / HKN Course Evaluation
Sun 12/10: Final Review, 2 PM (155 Dwinelle)
Tues 12/12: Final (5 PM, 1 Pimentel)

CS61C L21 Pipeline © UC Regents 19 Problems for Computers °Limits to pipelining: Hazards prevent the next instruction from executing during its designated clock cycle Structural hazards: HW cannot support this combination of instructions (single person to fold and put clothes away) Control hazards: branches and other instructions that change the PC stall the pipeline until the hazard is resolved, leaving “bubbles” in the pipeline Data hazards: Instruction depends on result of prior instruction still in the pipeline (missing sock)

CS61C L21 Pipeline © UC Regents 20 Structural Hazard #1: Single Memory (1/2) °Read same memory twice in same clock cycle [Figure: Load and Instrs 1–4 in the pipeline; in one clock cycle the Load's D$ access and Instr 3's I$ fetch both need the single memory]

CS61C L21 Pipeline © UC Regents 21 Structural Hazard #1: Single Memory (2/2) °Solution: it is infeasible and inefficient to create a second main memory, so simulate one by having two Level 1 caches: have both an L1 Instruction Cache and an L1 Data Cache; need more complex hardware to control when both caches miss

CS61C L21 Pipeline © UC Regents 22 Structural Hazard #2: Registers (1/2) °Can’t read and write to registers simultaneously [Figure: Load and Instrs 1–4 in the pipeline; in one clock cycle the Load's register write-back and Instr 3's register read both need the register file]

CS61C L21 Pipeline © UC Regents 23 Structural Hazard #2: Registers (2/2) °Fact: Register access is VERY fast: takes less than half the time of the ALU stage °Solution: introduce a convention: always Write to Registers during the first half of each clock cycle; always Read from Registers during the second half of each clock cycle Result: can perform Read and Write during the same clock cycle
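A minimal C sketch of this convention (my own illustration, not course-provided code): within one simulated clock cycle the write happens "first", so a read of the same register later in that cycle returns the freshly written value.

```c
/* Minimal sketch (illustration only) of the half-cycle register-file
 * convention: writes from the WB stage happen in the first half of the
 * cycle, reads from the Decode stage in the second half, so a same-cycle
 * read sees the freshly written value. */
#include <stdio.h>

static int regfile[32];

static void write_first_half(int rd, int value) {   /* WB stage, first half */
    if (rd != 0) regfile[rd] = value;                /* $0 stays zero */
}

static int read_second_half(int rs) {                /* Decode stage, second half */
    return regfile[rs];
}

int main(void) {
    /* One simulated cycle: an older instruction writes $8 while a
     * younger instruction in Decode reads $8 in the same cycle. */
    write_first_half(8, 42);
    printf("Decode reads $8 = %d in the same clock cycle\n", read_second_half(8));
    return 0;
}
```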

CS61C L21 Pipeline © UC Regents 24 Control Hazard: Branching (1/6) °Suppose we put branch decision-making hardware in the ALU stage: then two more instructions after the branch will always be fetched, whether or not the branch is taken °Desired functionality of a branch: if we do not take the branch, don’t waste any time and continue executing normally; if we take the branch, don’t execute any instructions after the branch, just go to the desired label

CS61C L21 Pipeline © UC Regents 25 Control Hazard: Branching (2/6) °Initial Solution: Stall until decision is made: insert “no-op” instructions (those that accomplish nothing, just take time) Drawback: branches take 3 clock cycles each (assuming comparator is put in ALU stage)

CS61C L21 Pipeline © UC Regents 26 Control Hazard: Branching (3/6) °Optimization #1: move comparator up to Stage 2 as soon as instruction is decoded (Opcode identifies it as a branch), immediately make a decision and set the value of the PC (if necessary) Benefit: since branch is complete in Stage 2, only one unnecessary instruction is fetched, so only one no-op is needed Side Note: This means that branches are idle in Stages 3, 4 and 5.

CS61C L21 Pipeline © UC Regents 27 Control Hazard: Branching (4/6) °Insert a single no-op (bubble) [Figure: Add, Beq, Load in the pipeline; a bubble follows the Beq before the Load is fetched] °Impact: 2 clock cycles per branch instruction → slow
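To get a feel for the cost (illustrative numbers, not from the slide): if roughly 1 in 6 instructions is a branch and each branch now takes 2 cycles instead of 1, the average cycles per instruction rises from 1 to about 1.17, a 17% slowdown; this is what motivates redefining branches on the next slides.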

CS61C L21 Pipeline © UC Regents 28 Control Hazard: Branching (5/6) °Optimization #2: Redefine branches Old definition: if we take the branch, none of the instructions after the branch get executed by accident New definition: whether or not we take the branch, the single instruction immediately following the branch gets executed (called the branch-delay slot)

CS61C L21 Pipeline © UC Regents 29 Control Hazard: Branching (6/6) °Notes on Branch-Delay Slot Worst-Case Scenario: can always put a no-op in the branch-delay slot Better Case: can find an instruction preceding the branch which can be placed in the branch-delay slot without affecting flow of the program -re-ordering instructions is a common method of speeding up programs -compiler must be very smart in order to find instructions to do this -usually can find such an instruction at least 50% of the time

CS61C L21 Pipeline © UC Regents 30 Example: Nondelayed vs. Delayed Branch
Nondelayed Branch:
    or  $8, $9, $10
    add $1, $2, $3
    sub $4, $5, $6
    beq $1, $4, Exit
    xor $10, $1, $11
Exit:
Delayed Branch:
    add $1, $2, $3
    sub $4, $5, $6
    beq $1, $4, Exit
    or  $8, $9, $10    (branch-delay slot)
    xor $10, $1, $11
Exit:
(The or does not affect the branch condition, so the compiler can move it into the delay slot; it executes whether or not the branch is taken.)

CS61C L21 Pipeline © UC Regents 31 Things to Remember (1/2) °Optimal Pipeline Each stage is executing part of an instruction each clock cycle. One instruction finishes during each clock cycle. On average, execute far more quickly. °What makes this work? Similarities between instructions allow us to use same stages for all instructions (generally). Each stage takes about the same amount of time as all others: little wasted time.

CS61C L21 Pipeline © UC Regents 32 Advanced Pipelining Concepts (if time) °“Out-of-order” Execution °“Superscalar” execution °State-of-the-Art Microprocessor

CS61C L21 Pipeline © UC Regents 33 Review Pipeline Hazard: Stall if dependency °A depends on D; stall since folder tied up [Figure: same laundry timeline as before, six loads A–F with a bubble where the folder is busy]

CS61C L21 Pipeline © UC Regents 34 Out-of-Order Laundry: Don’t Wait °A depends on D; rest continue; need more resources to allow out-of-order [Figure: laundry timeline where loads B, C, E, F proceed past the bubble while A waits for D]

CS61C L21 Pipeline © UC Regents 35 Superscalar Laundry: Parallel per stage °More resources, HW to match mix of parallel tasks? [Figure: six loads A–F, labeled light, dark, and very dirty clothing, move through the stages two at a time on duplicated hardware]

CS61C L21 Pipeline © UC Regents 36 Superscalar Laundry: Mismatch Mix °Task mix underutilizes extra resources [Figure: only four loads A–D (light and dark clothing), so part of the duplicated hardware sits idle]

CS61C L21 Pipeline © UC Regents 37 State of the Art: Compaq Alpha °Very similar instruction set to MIPS °One 64 KB Instruction cache and one 64 KB Data cache on chip; 16 MB L2 cache off chip °Clock cycle = 1.5 nanoseconds, or 667 MHz clock rate °Superscalar: fetches up to 6 instructions/clock cycle, retires up to 4 instructions/clock cycle °Execution out-of-order °15 million transistors, 90 watts!

CS61C L21 Pipeline © UC Regents 38 Things to Remember (1/2) °Optimal Pipeline Each stage is executing part of an instruction each clock cycle. One instruction finishes during each clock cycle. On average, execute far more quickly. °What makes this work? Similarities between instructions allow us to use same stages for all instructions (generally). Each stage takes about the same amount of time as all others: little wasted time.

CS61C L21 Pipeline © UC Regents 39 Things to Remember (2/2) °Pipelining a Big Idea: widely used concept °What makes it less than perfect? Structural hazards: suppose we had only one cache? → Need more HW resources Control hazards: need to worry about branch instructions? → Delayed branch Data hazards: an instruction depends on a previous instruction?