Slide 2: CS61C Machine Structures, Lecture #21: Pipelining I (2008-7-28)
Albert Chae, Instructor
inst.eecs.berkeley.edu/~cs61c
Minesweeper circuits:
http://www.claymath.org/Popular_Lectures/Minesweeper/
http://for.mat.bham.ac.uk/R.W.Kaye/minesw/ASE2003.pdf

Slide 3: Review: Single Cycle Datapath
Five steps to design a processor:
1. Analyze the instruction set → datapath requirements
2. Select the set of datapath components and establish the clock methodology
3. Assemble the datapath to meet the requirements
4. Analyze the implementation of each instruction to determine the setting of control points that effects the register transfer
5. Assemble the control logic
Control is the hard part. MIPS makes it easier:
- Instructions are the same size
- Source registers are always in the same place
- Immediates are the same size and location
- Operations are always on registers/immediates
[Figure: processor organization: Processor (Control + Datapath), Memory, Input, Output]

Slide 4: How We Build The Controller
Each control signal is an OR of the instructions that need it:
RegDst    = add + sub
ALUSrc    = ori + lw + sw
MemtoReg  = lw
RegWrite  = add + sub + ori + lw
MemWrite  = sw
nPCsel    = beq
Jump      = jump
ExtOp     = lw + sw
ALUctr[0] = sub + beq
ALUctr[1] = ori
(assume the ALUctr encoding is 00: ADD, 01: SUB, 10: OR)
where each per-instruction signal is an AND of decoded opcode/funct bits:
rtype = ~op5 · ~op4 · ~op3 · ~op2 · ~op1 · ~op0
ori   = ~op5 · ~op4 ·  op3 ·  op2 · ~op1 ·  op0
lw    =  op5 · ~op4 · ~op3 · ~op2 ·  op1 ·  op0
sw    =  op5 · ~op4 ·  op3 · ~op2 ·  op1 ·  op0
beq   = ~op5 · ~op4 · ~op3 ·  op2 · ~op1 · ~op0
jump  = ~op5 · ~op4 · ~op3 · ~op2 ·  op1 · ~op0
add   = rtype · func5 · ~func4 · ~func3 · ~func2 · ~func1 · ~func0
sub   = rtype · func5 · ~func4 · ~func3 · ~func2 ·  func1 · ~func0
With control breaking code apart to run on the datapath, the controller is "AND" logic (decode the opcode/funct fields into one signal per instruction: add, sub, ori, lw, sw, beq, jump) feeding "OR" logic (combine those signals into RegDst, ALUSrc, MemtoReg, RegWrite, MemWrite, nPCsel, Jump, ExtOp, ALUctr[0], ALUctr[1]).
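The same AND/OR structure can be sketched in software as a sanity check. The following is a rough Python model of the decode logic (not the actual controller hardware and not any course-provided code); the opcode/funct encodings are the standard MIPS values.

    # Sketch of the single-cycle controller: "AND" logic decodes opcode/funct
    # into one boolean per instruction, "OR" logic combines them into signals.
    def control_signals(opcode, funct=0):
        rtype = opcode == 0b000000
        ori   = opcode == 0b001101
        lw    = opcode == 0b100011
        sw    = opcode == 0b101011
        beq   = opcode == 0b000100
        jump  = opcode == 0b000010
        add   = rtype and funct == 0b100000
        sub   = rtype and funct == 0b100010
        return {
            "RegDst":   add or sub,
            "ALUSrc":   ori or lw or sw,
            "MemtoReg": lw,
            "RegWrite": add or sub or ori or lw,
            "MemWrite": sw,
            "nPCsel":   beq,
            "Jump":     jump,
            "ExtOp":    lw or sw,
            # ALUctr: 00 = ADD, 01 = SUB, 10 = OR
            "ALUctr":   (0b01 if (sub or beq) else 0) | (0b10 if ori else 0),
        }

    print(control_signals(0b100011))            # lw
    print(control_signals(0b000000, 0b100010))  # sub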

Slide 5: Call home, we've made HW/SW contact!
High Level Language Program (e.g., C):
    temp = v[k];
    v[k] = v[k+1];
    v[k+1] = temp;
↓ Compiler
Assembly Language Program (e.g., MIPS):
    lw $t0, 0($2)
    lw $t1, 4($2)
    sw $t1, 0($2)
    sw $t0, 4($2)
↓ Assembler
Machine Language Program (MIPS):
    0000 1001 1100 0110 1010 1111 0101 1000
    1010 1111 0101 1000 0000 1001 1100 0110
    1100 0110 1010 1111 0101 1000 0000 1001
    0101 1000 0000 1001 1100 0110 1010 1111
↓ Machine Interpretation
Hardware Architecture Description (e.g., block diagrams)
↓ Architecture Implementation
Logic Circuit Description (circuit schematic diagrams)

Slide 6: An Abstract View of the Critical Path
Critical path (load instruction) =
  delay of the clock through the PC (flip-flops)
+ instruction memory access time
+ register file access time
+ ALU delay to perform a 32-bit add
+ data memory access time
+ stable (setup) time for the register file write
(Assumes a fast controller.)
[Figure: abstract datapath: PC, ideal instruction memory, register file (Ra, Rb, Rw), ALU with inputs A and B, ideal data memory, and next-address logic, all clocked]

Slide 7: Processor Performance
Can we estimate the clock rate (frequency) of our single-cycle processor? We know:
- 1 cycle per instruction
- lw is the most demanding instruction
- Assume approximate delays for the major pieces of the datapath: instruction memory, ALU, and data memory 2 ns each; register file 1 ns
- Instruction execution requires 2 + 1 + 2 + 2 + 1 = 8 ns → 125 MHz
What can we do to improve the clock rate? Will this improve performance as well? We want increases in clock rate to result in programs executing more quickly.
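A quick back-of-the-envelope check of that 125 MHz figure, using the slide's rough stage delays (a sketch only, not measured numbers):

    # Approximate single-cycle clock period, set by lw, the longest instruction.
    delays_ns = {"instr_mem": 2, "reg_read": 1, "alu": 2, "data_mem": 2, "reg_write": 1}
    period_ns = sum(delays_ns.values())   # 2 + 1 + 2 + 2 + 1 = 8 ns
    clock_rate_mhz = 1000 / period_ns     # 1000 MHz*ns / 8 ns
    print(period_ns, "ns period ->", clock_rate_mhz, "MHz")   # 8 ns -> 125.0 MHz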

Slide 8: Ways to Improve Clock Frequency
- Smaller process size: the smallest feature possible in silicon fabrication. A smaller process is faster for electrical reasons, and everything is physically closer together.
- Optimize logic: rearrange combinational logic to be faster; sometimes more logic (more hardware) can be used to reduce delay.
- Parallelism: do more at once (more on this later).
- Cut down the length of the critical path: insert registers (pipelining) to break up the combinational logic.

Slide 9: Gotta Do Laundry
Ann, Brian, Cathy, and Dave (A, B, C, D) each have one load of clothes to wash, dry, fold, and put away.
- Washer takes 30 minutes
- Dryer takes 30 minutes
- "Folder" takes 30 minutes
- "Stasher" takes 30 minutes to put clothes into drawers

Slide 10: Sequential Laundry
Sequential laundry takes 8 hours for 4 loads.
[Figure: timeline from 6 PM to 2 AM in 30-minute increments; loads A, B, C, D run one after another, each finishing all four stages before the next begins]

Slide 11: Pipelined Laundry
Pipelined laundry takes 3.5 hours for 4 loads!
[Figure: timeline from 6 PM onward in 30-minute increments; loads A through D start 30 minutes apart and overlap in the washer, dryer, folder, and stasher]
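The sequential and pipelined totals can be checked with a few lines of arithmetic (assuming four 30-minute stages and four loads, as on the slides):

    # Sequential: every load finishes all four stages before the next starts.
    # Pipelined: one load enters the washer every 30 minutes.
    stage_min, n_stages, n_loads = 30, 4, 4
    sequential_min = n_loads * n_stages * stage_min           # 480 min = 8 hours
    pipelined_min = (n_stages + n_loads - 1) * stage_min      # 210 min = 3.5 hours
    print(sequential_min / 60, "hours sequential,", pipelined_min / 60, "hours pipelined")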

Slide 12: General Definitions
- Latency: time to completely execute a certain task (a delay). For example, the time to read a sector from disk is the disk access time, or disk latency.
- Throughput: the amount of work that can be done over a period of time (a rate).

Slide 13: Pipelining Lessons (1/2)
- Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload.
- Multiple tasks operate simultaneously using different resources.
- Potential speedup = number of pipe stages.
- Time to "fill" the pipeline and time to "drain" it reduce the speedup: 2.3x vs. 4x in this example.
[Figure: pipelined laundry timeline, 6 PM to about 9:30 PM]
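Where the "2.3x vs. 4x" comes from, reusing the laundry numbers above:

    stage_min, n_stages, n_loads = 30, 4, 4
    sequential = n_loads * n_stages * stage_min        # 480 min
    pipelined = (n_stages + n_loads - 1) * stage_min   # 210 min, including fill and drain
    print(round(sequential / pipelined, 1), "x actual vs.", n_stages, "x ideal")  # 2.3x vs. 4x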

Slide 14: Pipelining Lessons (2/2)
- Suppose a new washer takes 20 minutes and a new stasher takes 20 minutes. How much faster is the pipeline?
- The pipeline rate is limited by the slowest pipeline stage.
- Unbalanced lengths of pipe stages reduce the speedup.
[Figure: pipelined laundry timeline with unequal stage lengths]
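A short calculation shows the answer the slide is driving at: the faster washer and stasher do not speed up the pipeline at all, because every load still advances once per 30-minute step set by the slowest stages (the dryer and folder, still assumed to take 30 minutes each):

    stages_min = {"wash": 20, "dry": 30, "fold": 30, "stash": 20}
    n_loads = 4
    cycle = max(stages_min.values())                      # 30 min: slowest stage sets the rate
    pipelined = (len(stages_min) + n_loads - 1) * cycle   # still 210 min for 4 loads
    sequential = n_loads * sum(stages_min.values())       # 400 min with the faster machines
    print(pipelined, "min pipelined vs.", sequential, "min sequential")
    print(round(sequential / pipelined, 2), "x speedup")  # about 1.9x, well under the ideal 4x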

Slide 15: Steps in Executing MIPS
1) IFtch: Instruction Fetch, increment PC
2) Dcd: Instruction Decode, read registers
3) Exec: memory-reference: calculate address; arithmetic-logical: perform operation
4) Mem: load: read data from memory; store: write data to memory
5) WB: write data back to register

Slide 16: Pipelined Execution Representation
Every instruction must take the same number of steps, also called pipeline "stages," so some stages will sit idle for some instructions.
[Figure: six instructions, each passing through IFtch, Dcd, Exec, Mem, WB, with each instruction offset one stage behind the previous one over time]

Slide 17: Review: Datapath for MIPS
Use the datapath figure to represent the pipeline:
1. Instruction Fetch (PC, +4, instruction memory / I$)
2. Decode / Register Read (rs, rt, rd, registers, imm)
3. Execute (ALU)
4. Memory (data memory / D$)
5. Write Back (register file)
[Figure: MIPS datapath divided into the five pipeline stages IFtch, Dcd, Exec, Mem, WB]

Slide 18: Graphical Pipeline Representation
[Figure: Load, Add, Store, Sub, and Or instructions flowing through I$, Reg, ALU, D$, Reg across clock cycles, each instruction one cycle behind the previous one; in the Reg stage, the right half is highlighted for reads and the left half for writes]

Slide 19: Example
Suppose 2 ns for a memory access, 2 ns for an ALU operation, and 1 ns for a register file read or write; compute the instruction rate.
Non-pipelined execution:
- lw: IF + Read Reg + ALU + Memory + Write Reg = 2 + 1 + 2 + 2 + 1 = 8 ns
- add: IF + Read Reg + ALU + Write Reg = 2 + 1 + 2 + 1 = 6 ns
(Recall that the single-cycle processor takes 8 ns for every instruction.)
Pipelined execution:
- max(IF, Read Reg, ALU, Memory, Write Reg) = 2 ns per instruction
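The resulting instruction rates, with pipeline fill and drain ignored for a long run of instructions:

    stage_ns = {"IF": 2, "RegRead": 1, "ALU": 2, "Mem": 2, "RegWrite": 1}
    single_cycle_ns = sum(stage_ns.values())   # 8 ns per instruction (clock set by lw)
    pipelined_ns = max(stage_ns.values())      # 2 ns per instruction once the pipe is full
    print(1000 / single_cycle_ns, "vs.", 1000 / pipelined_ns, "million instructions per second")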

Slide 20: Administrivia
- HW5 due Tuesday 7/29
- Quiz 9 due Wednesday 7/30
- HW6 due Friday 8/1
- Proj3 out soon, due next Tuesday 8/5; it will be hand-graded in person, and signups will be posted soon
- Midterm regrades due 7/29
- Proj1 grades are out; Proj2 grades hopefully soon; appeals due 7/31

Slide 21: Administrivia
- The lab on polling/interrupts is cancelled; we will give everyone 4 points on that lab
- The drop or grading-option deadline is August 1; see summer.berkeley.edu for more details

Slide 22: Pipeline Hazard: Matching Socks in a Later Load
Load A depends on load D; it must stall, since the folder is tied up.
[Figure: laundry timeline from 6 PM to 2 AM with loads A through F and a bubble where the dependent load waits for the folder]

Slide 23: Problems for Pipelining CPUs
Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle.
- Structural hazards: the hardware cannot support some combination of instructions (a single person to fold and put clothes away).
- Control hazards: pipelining of branches causes later instruction fetches to wait for the result of the branch.
- Data hazards: an instruction depends on the result of a prior instruction still in the pipeline (the missing sock).
These might result in pipeline stalls, or "bubbles," in the pipeline.

Slide 24: Structural Hazard #1: Single Memory (1/2)
The same memory is read twice in the same clock cycle.
[Figure: a Load followed by four instructions; in one cycle the load's D$ access and a later instruction's I$ fetch both need the single memory]

Slide 25: Structural Hazard #1: Single Memory (2/2)
Solution: it is infeasible and inefficient to create a true second memory (we'll see more later this week), so we simulate one with two Level 1 caches (a cache is a temporary, smaller copy of memory, usually of the most recently used data):
- have both an L1 Instruction Cache and an L1 Data Cache
- this needs more complex hardware to control for when both caches miss

Slide 26: Structural Hazard #2: Registers (1/2)
Can we read and write the register file simultaneously?
[Figure: a sw followed by four instructions; in one cycle one instruction reads the registers while an earlier instruction writes them]

Slide 27: Structural Hazard #2: Registers (2/2)
Two different solutions have been used:
1) RegFile access is VERY fast: it takes less than half the time of the ALU stage
   - write to the registers during the first half of each clock cycle
   - read from the registers during the second half of each clock cycle
2) Build the RegFile with independent read and write ports
Result: a read and a write can be performed during the same clock cycle.

Slide 28: Data Hazards (1/2)
Consider the following sequence of instructions (each later instruction reads $t0, which is written by the add):
    add $t0, $t1, $t2
    sub $t4, $t0, $t3
    and $t5, $t0, $t6
    or  $t7, $t0, $t8
    xor $t9, $t0, $t10

Slide 29: Data Hazards (2/2)
Data flowing backward in time indicates a hazard.
[Figure: pipeline diagram (IF, ID/RF, EX, MEM, WB) of add $t0,$t1,$t2 followed by sub, and, or, and xor, all reading $t0; the add does not write $t0 until WB, after the earlier dependent instructions have already read the register file]

Slide 30: Data Hazard Solution: Forwarding
Forward the result from one stage to another as soon as it is available.
[Figure: the same pipeline diagram with the add's ALU result forwarded to the sub and and instructions]
The "or" hazard is solved by the register file hardware (write in the first half of the cycle, read in the second half).
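A minimal sketch of the forwarding decision (not the real forwarding-unit equations; the instruction tuples are made up for illustration): a source register can be forwarded if it matches the destination of one of the two preceding instructions still in the pipeline.

    instrs = [
        ("add", "$t0", ["$t1", "$t2"]),   # (op, dest, sources)
        ("sub", "$t4", ["$t0", "$t3"]),
        ("and", "$t5", ["$t0", "$t6"]),
        ("or",  "$t7", ["$t0", "$t8"]),
        ("xor", "$t9", ["$t0", "$t10"]),
    ]
    for i, (op, dest, srcs) in enumerate(instrs):
        for src in srcs:
            for dist in (1, 2):           # instructions now in the EX/MEM and MEM/WB stages
                if i - dist >= 0 and instrs[i - dist][1] == src:
                    print(op, "forwards", src, "from", dist, "instruction(s) back")
                    break

Note that the or and xor need no forwarding at all: by the time they read the registers, the add has already written $t0 (or writes it in the same cycle, which the write-then-read register file handles).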

Slide 31: Data Hazard: Loads (1/4)
Data flowing backward in time is again a hazard, and we can't solve all cases with forwarding: the loaded value is not available until after the MEM stage.
We must stall the instruction that depends on the load, then forward (more hardware).
[Figure: lw $t0, 0($t1) followed immediately by sub $t3, $t0, $t2; the load's data arrives after the sub's EX stage needs it]

Slide 32: Data Hazard: Loads (2/4)
The hardware stalls the pipeline; this is called an "interlock."
[Figure: lw $t0, 0($t1) followed by sub, and, and or; a bubble is inserted so the dependent instructions start one cycle later]

Slide 33: Data Hazard: Loads (3/4)
- The instruction slot after a load is called the "load delay slot."
- If that instruction uses the result of the load, then the hardware interlock will stall it for one cycle.
- If the compiler puts an unrelated instruction in that slot, there is no stall.
- Letting the hardware stall the instruction in the delay slot is equivalent to putting a nop in the slot (except that the latter uses more code space).
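A tiny sketch of the interlock rule described above (the instruction tuples are hypothetical, just enough to show the check): one stall cycle whenever the instruction in the load delay slot reads the register the load writes.

    def count_load_use_stalls(instrs):
        stalls = 0
        for prev, cur in zip(instrs, instrs[1:]):
            op, dest, _srcs = prev
            if op == "lw" and dest in cur[2]:   # dependent instruction in the delay slot
                stalls += 1
        return stalls

    hazard = [("lw", "$t0", ["$t1"]), ("sub", "$t3", ["$t0", "$t2"])]
    scheduled = [("lw", "$t0", ["$t1"]),
                 ("add", "$t5", ["$t6", "$t7"]),   # unrelated instruction fills the delay slot
                 ("sub", "$t3", ["$t0", "$t2"])]
    print(count_load_use_stalls(hazard))      # 1 stall
    print(count_load_use_stalls(scheduled))   # 0 stalls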

Slide 34: Data Hazard: Loads (4/4)
A stall is equivalent to a nop.
[Figure: lw $t0, 0($t1) with a nop (bubble) in the delay slot, followed by sub, and, and or; the dependent instructions now read $t0 after the load has produced it]

Slide 35: Historical Trivia
- The first MIPS design did not interlock and stall on the load-use data hazard.
- That is the real reason for the name MIPS: Microprocessor without Interlocked Pipeline Stages.
- It is also a word play on the acronym for Millions of Instructions Per Second, also called MIPS.

Slide 36: Peer Instruction
A. Thanks to pipelining, I have reduced the time it took me to wash my shirt.
B. Longer pipelines are always a win (since there is less work per stage and a faster clock).
C. We can rely on compilers to help us avoid data hazards by reordering instructions.
Answer choices (A B C): 0: FFF, 1: FFT, 2: FTF, 3: FTT, 4: TFF, 5: TFT, 6: TTF, 7: TTT

Slide 37: Peer Instruction Answer
A. FALSE: throughput is better, not the execution time of a single task.
B. FALSE: longer pipelines do usually mean a faster clock, but branches cause problems!
C. FALSE: data hazards happen too often and delay too long for the compiler alone; forwarding (e.g., Mem → ALU) is needed.

Slide 38: Things to Remember
Optimal pipeline:
- Each stage is executing part of an instruction each clock cycle.
- One instruction finishes during each clock cycle.
- On average, instructions execute far more quickly.
What makes this work?
- Similarities between instructions allow us to use the same stages for all instructions (generally).
- Each stage takes about the same amount of time as all the others: little wasted time.

