CS61C L21 Pipelining I (1) Chae, Summer 2008 © UCB
CS61C : Machine Structures – Lecture #21: Pipelining I
Albert Chae, Instructor
inst.eecs.berkeley.edu/~cs61c
(Slide sidebar: Minesweeper circuits)

CS61C L21 Pipelining I (2) Chae, Summer 2008 © UCB
Review: Single-Cycle Datapath

° 5 steps to design a processor:
  1. Analyze instruction set ⇒ datapath requirements
  2. Select set of datapath components & establish clock methodology
  3. Assemble datapath meeting the requirements
  4. Analyze implementation of each instruction to determine setting of control points that effects the register transfer
  5. Assemble the control logic
° Control is the hard part
° MIPS makes that easier:
  - Instructions same size
  - Source registers always in same place
  - Immediates same size, location
  - Operations always on registers/immediates
[Figure: processor block diagram with Control, Datapath, Memory, Input, Output]

CS61C L21 Pipelining I (3) Chae, Summer 2008 © UCB
How We Build The Controller

° Control signal equations (the "OR" logic):
  RegDst    = add + sub
  ALUSrc    = ori + lw + sw
  MemtoReg  = lw
  RegWrite  = add + sub + ori + lw
  MemWrite  = sw
  nPCsel    = beq
  Jump      = jump
  ExtOp     = lw + sw
  ALUctr[0] = sub + beq    (assume ALUctr is 00: ADD, 01: SUB, 10: OR)
  ALUctr[1] = ori

° where the per-instruction signals come from decoding the opcode/funct fields (the "AND" logic):
  rtype = ~op5 · ~op4 · ~op3 · ~op2 · ~op1 · ~op0
  ori   = ~op5 · ~op4 ·  op3 ·  op2 · ~op1 ·  op0
  lw    =  op5 · ~op4 · ~op3 · ~op2 ·  op1 ·  op0
  sw    =  op5 · ~op4 ·  op3 · ~op2 ·  op1 ·  op0
  beq   = ~op5 · ~op4 · ~op3 ·  op2 · ~op1 · ~op0
  jump  = ~op5 · ~op4 · ~op3 · ~op2 ·  op1 · ~op0
  add   = rtype ·  func5 · ~func4 · ~func3 · ~func2 · ~func1 · ~func0
  sub   = rtype ·  func5 · ~func4 · ~func3 · ~func2 ·  func1 · ~func0

° With control breaking apart code to run on the datapath, what does this mean?
[Figure: controller built as "AND" logic (opcode/func decode for add, sub, ori, lw, sw, beq, jump) feeding "OR" logic that drives RegDst, ALUSrc, MemtoReg, RegWrite, MemWrite, nPCsel, Jump, ExtOp, ALUctr]
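A minimal sketch of the same two-level ("AND" then "OR") control logic, not from the slides; the opcode/funct encodings are the standard MIPS ones assumed above, and the function names (`decode`, `control`) are illustrative only.

```python
# Illustrative sketch: single-cycle control as "AND" logic (decode
# opcode/funct into instruction signals) followed by "OR" logic (control points).

def decode(op, funct):
    """Decode 6-bit opcode/funct fields into one-hot instruction signals."""
    rtype = (op == 0b000000)
    return {
        "add":  rtype and funct == 0b100000,
        "sub":  rtype and funct == 0b100010,
        "ori":  op == 0b001101,
        "lw":   op == 0b100011,
        "sw":   op == 0b101011,
        "beq":  op == 0b000100,
        "jump": op == 0b000010,
    }

def control(op, funct):
    """'OR' logic: combine instruction signals into the control signals."""
    i = decode(op, funct)
    return {
        "RegDst":   i["add"] or i["sub"],
        "ALUSrc":   i["ori"] or i["lw"] or i["sw"],
        "MemtoReg": i["lw"],
        "RegWrite": i["add"] or i["sub"] or i["ori"] or i["lw"],
        "MemWrite": i["sw"],
        "nPCsel":   i["beq"],
        "Jump":     i["jump"],
        "ExtOp":    i["lw"] or i["sw"],
        "ALUctr0":  i["sub"] or i["beq"],   # 00: ADD, 01: SUB, 10: OR
        "ALUctr1":  i["ori"],
    }

# Example: lw (opcode 0x23) should assert ALUSrc, MemtoReg, RegWrite, ExtOp.
print(control(0b100011, 0))
```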

CS61C L21 Pipelining I (4) Chae, Summer 2008 © UCB
Call home, we've made HW/SW contact!

High Level Language Program (e.g., C):
    temp = v[k];
    v[k] = v[k+1];
    v[k+1] = temp;
      | Compiler
Assembly Language Program (e.g., MIPS):
    lw  $t0, 0($2)
    lw  $t1, 4($2)
    sw  $t1, 0($2)
    sw  $t0, 4($2)
      | Assembler
Machine Language Program (MIPS)
      | Machine Interpretation
Hardware Architecture Description (e.g., block diagrams)
      | Architecture Implementation
Logic Circuit Description (Circuit Schematic Diagrams)

CS61C L21 Pipelining I (5) Chae, Summer 2008 © UCB
An Abstract View of the Critical Path

Critical Path (Load Instruction) =
    Delay of clock through PC (FFs)
  + Instruction Memory's Access Time
  + Register File's Access Time
  + ALU to Perform a 32-bit Add
  + Data Memory Access Time
  + Stable Time for Register File Write
(Assumes a fast controller)
[Figure: abstract datapath showing PC, Ideal Instruction Memory, Register File (Rs, Rt, Rw/Ra/Rb), ALU, Ideal Data Memory, Next Address logic, clk]

CS61C L21 Pipelining I (6) Chae, Summer 2008 © UCB
Processor Performance

° Can we estimate the clock rate (frequency) of our single-cycle processor? We know:
  - 1 cycle per instruction
  - lw is the most demanding instruction
  - Assume approximate delays for major pieces of the datapath:
      Instr. Mem, ALU, Data Mem: 2 ns each; regfile: 1 ns
      Instruction execution requires: 2 + 1 + 2 + 2 + 1 = 8 ns ⇒ 125 MHz
° What can we do to improve clock rate?
° Will this improve performance as well? We want increases in clock rate to result in programs executing quicker.
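A quick back-of-the-envelope check of the 125 MHz figure (not from the slides); the stage delays are the approximate values assumed above.

```python
# Single-cycle clock period = delay of the slowest instruction's path (lw).
delays_ns = {
    "instr_mem": 2, "reg_read": 1, "alu": 2, "data_mem": 2, "reg_write": 1,
}
cycle_ns = sum(delays_ns.values())      # 8 ns for lw
clock_mhz = 1000 / cycle_ns             # ns period -> MHz
print(f"cycle = {cycle_ns} ns, clock = {clock_mhz:.0f} MHz")  # 8 ns, 125 MHz
```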

CS61C L21 Pipelining I (7) Chae, Summer 2008 © UCB
Ways to Improve Clock Frequency

° Smaller Process Size
  - Smallest feature possible in silicon fabrication
  - A smaller process is faster for EE reasons, and being smaller means things are closer together
° Optimize Logic
  - Re-arrange CL to be faster
  - Sometimes more logic (more hardware) can be used to reduce delay
° Parallelism
  - Do more at once (later…)
° Cut Down Length of Critical Path
  - Insert registers (pipelining) to break up CL

CS61C L21 Pipelining I (8) Chae, Summer 2008 © UCB
Gotta Do Laundry

° Ann, Brian, Cathy, Dave (A, B, C, D) each have one load of clothes to wash, dry, fold, and put away
° Washer takes 30 minutes
° Dryer takes 30 minutes
° "Folder" takes 30 minutes
° "Stasher" takes 30 minutes to put clothes into drawers

CS61C L21 Pipelining I (9) Chae, Summer 2008 © UCB
Sequential Laundry

° Sequential laundry takes 8 hours for 4 loads
[Figure: loads A, B, C, D run back-to-back on a timeline starting at 6 PM, each 30-minute stage finishing before the next load starts]

CS61C L21 Pipelining I (10) Chae, Summer 2008 © UCB
Pipelined Laundry

° Pipelined laundry takes 3.5 hours for 4 loads!
[Figure: loads A, B, C, D overlapped on a timeline from 6 PM on; each 30-minute stage starts as soon as the previous load frees it]

CS61C L21 Pipelining I (11) Chae, Summer 2008 © UCB
General Definitions

° Latency: time to completely execute a certain task (a delay)
  - for example, the time to read a sector from disk is the disk access time or disk latency
° Throughput: amount of work that can be done over a period of time (a rate)

CS61C L21 Pipelining I (12) Chae, Summer 2008 © UCB
Pipelining Lessons (1/2)

° Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload
° Multiple tasks operate simultaneously using different resources
° Potential speedup = number of pipe stages
° Time to "fill" the pipeline and time to "drain" it reduce speedup: 2.3x vs. 4x in this example
[Figure: pipelined laundry timeline from 6 PM, loads A, B, C, D overlapped in 30-minute stages]
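A small calculation (not on the slide) reproducing the 2.3x-vs-4x figure for the laundry example: 4 loads, 4 balanced 30-minute stages.

```python
# Laundry example: 4 loads, 4 stages of 30 minutes each.
loads, stages, stage_min = 4, 4, 30

sequential = loads * stages * stage_min              # 480 min = 8 hours
pipelined  = (stages + (loads - 1)) * stage_min      # 210 min = 3.5 hours

print(sequential / pipelined)  # ~2.29x actual speedup (fill/drain overhead)
print(stages)                  # 4x ideal speedup with many loads in flight
```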

CS61C L21 Pipelining I (13) Chae, Summer 2008 © UCB
Pipelining Lessons (2/2)

° Suppose a new Washer takes 20 minutes and a new Stasher takes 20 minutes. How much faster is the pipeline?
° Pipeline rate is limited by the slowest pipeline stage
° Unbalanced lengths of pipe stages reduce speedup
[Figure: same pipelined laundry timeline from 6 PM, 30-minute slots]
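To make the point concrete, a sketch of my own (not from the slide): even with a 20-minute washer and stasher, each load still advances only once per 30 minutes, since the dryer and folder set the pace.

```python
# Unbalanced stages: the pipeline advances at the pace of the slowest stage.
stage_min = {"wash": 20, "dry": 30, "fold": 30, "stash": 20}
loads = 4

step = max(stage_min.values())                     # 30 minutes per step
pipelined = (len(stage_min) + (loads - 1)) * step  # still 210 min = 3.5 hours
print(pipelined)
```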

CS61C L21 Pipelining I (14) Chae, Summer 2008 © UCB
Steps in Executing MIPS

1) IFtch: Instruction Fetch, Increment PC
2) Dcd:   Instruction Decode, Read Registers
3) Exec:  Mem-ref: Calculate Address
          Arith-log: Perform Operation
4) Mem:   Load: Read Data from Memory
          Store: Write Data to Memory
5) WB:    Write Data Back to Register

CS61C L21 Pipelining I (15) Chae, Summer 2008 © UCB
Pipelined Execution Representation

° Every instruction must take the same number of steps, also called pipeline "stages", so some will go idle sometimes

  Time ------------------------------------------------------>
  Instr 1: IFtch Dcd   Exec  Mem   WB
  Instr 2:       IFtch Dcd   Exec  Mem   WB
  Instr 3:             IFtch Dcd   Exec  Mem   WB
  Instr 4:                   IFtch Dcd   Exec  Mem   WB
  Instr 5:                         IFtch Dcd   Exec  Mem   WB
  Instr 6:                               IFtch Dcd   Exec  Mem   WB
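As a small aside (not part of the lecture), the staggered representation above is easy to generate: instruction i enters stage s in cycle i + s. A tiny illustrative script, with names of my own choosing:

```python
# Print a staggered pipeline diagram: instruction i enters stage s in cycle i + s.
STAGES = ["IFtch", "Dcd", "Exec", "Mem", "WB"]

def pipeline_diagram(n_instructions):
    width = max(len(s) for s in STAGES) + 1
    for i in range(n_instructions):
        row = " " * (width * i) + "".join(s.ljust(width) for s in STAGES)
        print(f"Instr {i + 1}: {row}")

pipeline_diagram(6)
```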

CS61C L21 Pipelining I (16) Chae, Summer 2008 © UCB
Review: Datapath for MIPS

° Use the datapath figure to represent the pipeline:
  1. Instruction Fetch (I$)
  2. Decode / Register Read (Reg)
  3. Execute (ALU)
  4. Memory (D$)
  5. Write Back (Reg)
[Figure: MIPS datapath (PC and +4, instruction memory, registers rs/rt/rd, ALU with imm input, data memory) aligned under the five stages IFtch, Dcd, Exec, Mem, WB]

CS61C L21 Pipelining I (17) Chae, Summer 2008 © UCB
Graphical Pipeline Representation

  Time (clock cycles) ----------------------------->
  Load:   I$  Reg ALU D$  Reg
  Add:        I$  Reg ALU D$  Reg
  Store:          I$  Reg ALU D$  Reg
  Sub:                I$  Reg ALU D$  Reg
  Or:                     I$  Reg ALU D$  Reg

(In Reg, the right half highlights a read, the left half a write)

CS61C L21 Pipelining I (18) Chae, Summer 2008 © UCB
Example

° Suppose 2 ns for a memory access, 2 ns for an ALU operation, and 1 ns for a register file read or write; compute the instruction rate
° Nonpipelined Execution:
  - lw:  IF + Read Reg + ALU + Memory + Write Reg = 2 + 1 + 2 + 2 + 1 = 8 ns
  - add: IF + Read Reg + ALU + Write Reg = 2 + 1 + 2 + 1 = 6 ns
  - (recall 8 ns for the single-cycle processor)
° Pipelined Execution:
  - Max(IF, Read Reg, ALU, Memory, Write Reg) = 2 ns
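A short check of the example's numbers (not from the slides): per-instruction time without pipelining versus the pipelined cycle time set by the slowest stage.

```python
# Stage delays from the example (ns).
delay = {"IF": 2, "Read Reg": 1, "ALU": 2, "Memory": 2, "Write Reg": 1}

lw_path  = ["IF", "Read Reg", "ALU", "Memory", "Write Reg"]
add_path = ["IF", "Read Reg", "ALU", "Write Reg"]

print(sum(delay[s] for s in lw_path))   # 8 ns nonpipelined lw
print(sum(delay[s] for s in add_path))  # 6 ns nonpipelined add
print(max(delay.values()))              # 2 ns per pipelined cycle
```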

CS61C L21 Pipelining I (19) Chae, Summer 2008 © UCB
Administrivia

° HW5 due Tuesday 7/29
° Quiz 9 due Wednesday 7/30
° HW6 due Friday 8/1
° Proj3 out soon, due next Tuesday 8/5
  - Will be hand graded in person; signups will be posted soon
° Midterm regrades due 7/29
° Proj1 grades out, Proj2 hopefully soon
  - appeals due 7/31

CS61C L21 Pipelining I (20) Chae, Summer 2008 © UCB
Administrivia

° Lab on polling/interrupts is cancelled
  - We will give everyone 4 pts on that lab
° Drop or grading option deadline is August 1
  - See summer.berkeley.edu for more details

CS61C L21 Pipelining I (21) Chae, Summer 2008 © UCB
Pipeline Hazard: Matching socks in later load

° A depends on D; stall since the folder is tied up
[Figure: laundry timeline from 6 PM with loads A through F and a bubble (stall) inserted in the pipeline]

CS61C L21 Pipelining I (22) Chae, Summer 2008 © UCB
Problems for Pipelining CPUs

° Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle
  - Structural hazards: HW cannot support some combination of instructions (a single person to fold and put clothes away)
  - Control hazards: pipelining of branches causes later instruction fetches to wait for the result of the branch
  - Data hazards: an instruction depends on the result of a prior instruction still in the pipeline (missing sock)
° These might result in pipeline stalls or "bubbles" in the pipeline

CS61C L21 Pipelining I (23) Chae, Summer 2008 © UCB
Structural Hazard #1: Single Memory (1/2)

° Problem: we would read the same memory twice in the same clock cycle
[Figure: pipeline diagram with Load followed by Instr 1 through Instr 4; the Load's D$ access and a later instruction's I$ fetch fall in the same cycle]

CS61C L21 Pipelining I (24) Chae, Summer 2008 © UCB
Structural Hazard #1: Single Memory (2/2)

° Solution: it is infeasible and inefficient to create a second memory
  - (We'll see more later this week)
  - So simulate a second memory by having two Level 1 caches (a temporary, smaller copy of memory, usually holding the most recently used data)
  - Have both an L1 Instruction Cache and an L1 Data Cache
  - Need more complex hardware to handle the case when both caches miss

CS61C L21 Pipelining I (25) Chae, Summer 2008 © UCB
Structural Hazard #2: Registers (1/2)

° Can we read and write the register file simultaneously?
[Figure: pipeline diagram with sw followed by Instr 1 through Instr 4; one instruction's Reg write and another's Reg read fall in the same cycle]

CS61C L21 Pipelining I (26) Chae, Summer 2008 © UCB
Structural Hazard #2: Registers (2/2)

° Two different solutions have been used:
  1) RegFile access is VERY fast: it takes less than half the time of the ALU stage
     - Write to registers during the first half of each clock cycle
     - Read from registers during the second half of each clock cycle
  2) Build the RegFile with independent read and write ports
° Result: we can perform a read and a write during the same clock cycle
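A toy model of solution 1 (mine, not from the slides): within a cycle the write happens in the first half, so a read in the second half of the same cycle sees the new value. The class and method names are illustrative.

```python
# Toy register file: writes land in the first half of a cycle,
# so reads in the second half of the same cycle return the new value.
class RegFile:
    def __init__(self):
        self.regs = [0] * 32

    def tick(self, write_reg=None, write_data=None, read_regs=()):
        if write_reg:                   # first half: write ($0 is never written)
            self.regs[write_reg] = write_data
        return [self.regs[r] for r in read_regs]   # second half: read

rf = RegFile()
# Same cycle: an earlier instruction's WB writes $8 while a later
# instruction's decode stage reads $8; the read sees 42.
print(rf.tick(write_reg=8, write_data=42, read_regs=[8, 9]))  # [42, 0]
```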

CS61C L21 Pipelining I (27) Chae, Summer 2008 © UCB
Data Hazards (1/2)

° Consider the following sequence of instructions:
    add $t0, $t1, $t2
    sub $t4, $t0, $t3
    and $t5, $t0, $t6
    or  $t7, $t0, $t8
    xor $t9, $t0, $t10

CS61C L21 Pipelining I (28) Chae, Summer 2008 © UCB
Data Hazards (2/2)

° Data flow that goes backward in time is a hazard
[Figure: pipeline diagram (IF, ID/RF, EX, MEM, WB) for add $t0,$t1,$t2 followed by sub $t4,$t0,$t3; and $t5,$t0,$t6; or $t7,$t0,$t8; xor $t9,$t0,$t10; arrows from add's write of $t0 back in time to the earlier instructions' register reads show the hazards]

CS61C L21 Pipelining I (29) Chae, Summer 2008 © UCB
Data Hazard Solution: Forwarding

° Forward the result from one stage to another
[Figure: same pipeline diagram, now with forwarding arrows carrying add's $t0 result directly to the later instructions' EX stages]
° The "or" instruction's hazard is solved by the register file hardware (write in the first half of the cycle, read in the second half)
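A simplified sketch (not from the slides) of the forwarding decision for the EX stage, in the spirit of the standard MIPS forwarding unit: compare the source registers of the instruction in EX against the destinations of the instructions in MEM and WB. Register numbers and parameter names are illustrative.

```python
# Simplified forwarding check for the instruction currently in EX.
# ex_rs / ex_rt: source registers in EX; mem_rd / wb_rd: destinations of the
# instructions in MEM and WB (None if they don't write a register).
def forward_sources(ex_rs, ex_rt, mem_rd, wb_rd):
    def select(src):
        if mem_rd is not None and mem_rd != 0 and mem_rd == src:
            return "EX/MEM"       # newest value wins
        if wb_rd is not None and wb_rd != 0 and wb_rd == src:
            return "MEM/WB"
        return "regfile"
    return select(ex_rs), select(ex_rt)

# sub $t4,$t0,$t3 right after add $t0,...: $t0 ($8) comes from the EX/MEM latch.
print(forward_sources(ex_rs=8, ex_rt=11, mem_rd=8, wb_rd=None))  # ('EX/MEM', 'regfile')
```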

CS61C L21 Pipelining I (30) Chae, Summer 2008 © UCB
Data Hazard: Loads (1/4)

° Data flow that goes backward in time is a hazard
° We can't solve all cases with forwarding
° We must stall the instruction dependent on the load, then forward (more hardware)
[Figure: lw $t0,0($t1) followed immediately by sub $t3,$t0,$t2; the loaded data is not available from MEM until after sub's EX stage needs it]

CS61C L21 Pipelining I (31) Chae, Summer 2008 © UCB
Data Hazard: Loads (2/4)

° The hardware stalls the pipeline; this is called an "interlock"
[Figure: lw $t0,0($t1) followed by sub $t3,$t0,$t2; and $t5,$t0,$t4; or $t7,$t0,$t6; a one-cycle bubble is inserted so sub's EX lines up after lw's MEM]

CS61C L21 Pipelining I (32) Chae, Summer 2008 © UCB
Data Hazard: Loads (3/4)

° The instruction slot after a load is called the "load delay slot"
° If that instruction uses the result of the load, then the hardware interlock will stall it for one cycle
° If the compiler puts an unrelated instruction in that slot, there is no stall
° Letting the hardware stall the instruction in the delay slot is equivalent to putting a nop in the slot (except the latter uses more code space)
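A minimal sketch (mine, not the lecture's) of the load-use interlock check: stall when the instruction in EX is a load whose destination is a source of the instruction in ID. The function and parameter names are made up for illustration.

```python
# Load-use interlock: stall if the instruction in EX is a load and the
# instruction in ID reads the register that the load will write.
def needs_stall(ex_is_load, ex_dest, id_rs, id_rt):
    return ex_is_load and ex_dest != 0 and ex_dest in (id_rs, id_rt)

# lw $t0,0($t1) in EX while sub $t3,$t0,$t2 is in ID -> stall one cycle.
print(needs_stall(ex_is_load=True, ex_dest=8, id_rs=8, id_rt=10))  # True
# An unrelated instruction in the delay slot -> no stall.
print(needs_stall(ex_is_load=True, ex_dest=8, id_rs=9, id_rt=10))  # False
```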

CS61C L21 Pipelining I (33) Chae, Summer 2008 © UCB
Data Hazard: Loads (4/4)

° A stall is equivalent to a nop
[Figure: lw $t0,0($t1); nop; sub $t3,$t0,$t2; and $t5,$t0,$t4; or $t7,$t0,$t6; the explicit nop occupies the slot where the bubble was]

CS61C L21 Pipelining I (34) Chae, Summer 2008 © UCB
Historical Trivia

° The first MIPS design did not interlock and stall on the load-use data hazard
° Real reason for the name MIPS: Microprocessor without Interlocked Pipeline Stages
  - A word play on the acronym for Millions of Instructions Per Second, also called MIPS

CS61C L21 Pipelining I (35) Chae, Summer 2008 © UCB
Peer Instruction

A. Thanks to pipelining, I have reduced the time it took me to wash my shirt.
B. Longer pipelines are always a win (since less work per stage & a faster clock).
C. We can rely on compilers to help us avoid data hazards by reordering instrs.

  ABC
  0: FFF   1: FFT   2: FTF   3: FTT
  4: TFF   5: TFT   6: TTF   7: TTT

CS61C L21 Pipelining I (36) Chae, Summer 2008 © UCB
Peer Instruction Answer

A. Thanks to pipelining, I have reduced the time it took me to wash my shirt.
   FALSE: throughput is better, not the execution time of a single task.
B. Longer pipelines are always a win (since less work per stage & a faster clock).
   FALSE: longer pipelines do usually mean a faster clock, but branches cause problems!
C. We can rely on compilers to help us avoid data hazards by reordering instrs.
   FALSE: they happen too often & delay too long; use forwarding! (e.g., Mem → ALU)

CS61C L21 Pipelining I (37) Chae, Summer 2008 © UCB
Things to Remember

° Optimal pipeline:
  - Each stage is executing part of an instruction each clock cycle
  - One instruction finishes during each clock cycle
  - On average, instructions execute far more quickly
° What makes this work?
  - Similarities between instructions allow us to use the same stages for all instructions (generally)
  - Each stage takes about the same amount of time as all the others: little wasted time