
1 Review of CS 203A Laxmi Narayan Bhuyan http://www.cs.ucr.edu/~bhuyan Lecture 2

2 Review CS 203A - Pipelining
[Pipeline diagram: a Load followed by Instr 1-4, instruction order vs. time (clock cycles); each instruction passes through fetch (Mem), Reg, ALU, Mem, and Reg stages.]
Structural Hazard: can't read the same memory twice in the same clock cycle.
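
As an aside, here is a minimal Python sketch (my own illustration, not from the lecture) of how such a structural hazard arises. It assumes a single memory port shared by instruction fetch and load/store data access, a 5-stage pipeline, and a hypothetical instruction stream issuing one instruction per cycle.

```python
# Illustrative sketch (assumption, not from the slides): a single memory port
# serves both instruction fetch (IF) and a load/store's data access (MEM).
# With one instruction entering a 5-stage pipeline per cycle, the load's MEM
# stage collides with a later instruction's IF stage.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]
instrs = ["LW", "ADD", "SUB", "OR", "AND"]   # hypothetical instruction stream

def stage_of(i, cycle):
    """Stage occupied by instruction i at the given cycle, or None."""
    off = cycle - i                           # one issue per cycle, no stalls
    return STAGES[off] if 0 <= off < len(STAGES) else None

def needs_memory(i, cycle):
    s = stage_of(i, cycle)
    return s == "IF" or (s == "MEM" and instrs[i] in ("LW", "SW"))

for cycle in range(len(instrs) + len(STAGES) - 1):
    users = [i for i in range(len(instrs)) if needs_memory(i, cycle)]
    if len(users) > 1:
        print(f"cycle {cycle}: structural hazard between instructions {users}")
```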

3 Other Hazards
Data Hazards – due to data dependencies between instructions
Control Hazards – due to branches
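
A small illustration may help: the Python sketch below (my own, not from the slides) scans a hypothetical three-instruction sequence for the read-after-write (RAW) dependencies that cause data hazards. The instruction and register names are assumptions.

```python
# Illustrative sketch (assumption, not from the slides): find read-after-write
# (RAW) dependencies in a short instruction list, where each instruction is
# (name, destination register, source registers).

program = [
    ("ADD", "r1", ("r2", "r3")),   # writes r1
    ("SUB", "r4", ("r1", "r5")),   # reads r1 -> RAW dependence on ADD
    ("OR",  "r6", ("r7", "r8")),   # independent
]

for i, (name_i, dest, _) in enumerate(program):
    for j in range(i + 1, len(program)):
        name_j, _, srcs_j = program[j]
        if dest in srcs_j:
            print(f"RAW hazard: instruction {j} ({name_j}) reads {dest} "
                  f"written by instruction {i} ({name_i})")
```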

4 Getting CPI < 1: Issuing Multiple Instructions/Cycle
Superscalar MIPS: 2 instructions, 1 FP & 1 anything
– Fetch 64 bits/clock cycle; Int on left, FP on right
– Can only issue 2nd instruction if 1st instruction issues
– More ports for FP registers to do FP load & FP op in a pair

Type               1    2    3    4    5    6    7
Int. instruction   IF   ID   EX   MEM  WB
FP instruction     IF   ID   EX   MEM  WB
Int. instruction        IF   ID   EX   MEM  WB
FP instruction          IF   ID   EX   MEM  WB
Int. instruction             IF   ID   EX   MEM  WB
FP instruction               IF   ID   EX   MEM  WB
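
To make the dual-issue rule concrete, here is a rough Python sketch (an illustration under my own assumptions, not the lecture's code): per cycle, slot 1 takes any instruction and slot 2 is filled only when slot 1 issued and the next instruction is an FP operation. The instruction kinds and names are hypothetical.

```python
# Illustrative sketch (assumption, not from the slides): the 2-issue rule of
# the superscalar MIPS above -- slot 1 takes any instruction each cycle, and
# slot 2 is used only if slot 1 issued and the next instruction is an FP op.

def dual_issue(instr_stream):
    """Group instructions into issue packets of 1 or 2 per cycle."""
    packets, i = [], 0
    while i < len(instr_stream):
        packet = [instr_stream[i]]                  # slot 1: int or anything
        if i + 1 < len(instr_stream) and instr_stream[i + 1][0] == "FP":
            packet.append(instr_stream[i + 1])      # slot 2: FP only
            i += 2
        else:
            i += 1
        packets.append(packet)
    return packets

# (kind, name) pairs; both are hypothetical
stream = [("INT", "LW"), ("FP", "ADD.D"), ("INT", "ADDI"),
          ("INT", "SW"), ("FP", "MUL.D")]
for cycle, packet in enumerate(dual_issue(stream), start=1):
    print(f"cycle {cycle}: issue {[name for _, name in packet]}")
```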

5 MIPS R4000 Pipeline

6 Comparison of Issue Capabilities Courtesy of Susan Eggers; Used with Permission

7 VLIW and Superscalar
– Sequential stream of long instruction words
– Instructions scheduled statically by the compiler
– Number of simultaneously issued instructions is fixed at compile time
– Instruction issue is less complicated than in a superscalar processor
– Disadvantage: VLIW processors cannot react to dynamic events, e.g. cache misses, with the same flexibility as superscalars.
– The number of instructions in a VLIW instruction word is usually fixed. Padding VLIW instructions with no-ops is needed when the full issue bandwidth cannot be met, which increases code size. More recent VLIW architectures use a denser code format that allows the no-ops to be removed.
– VLIW is an architectural technique, whereas superscalar is a microarchitecture technique.
– VLIW processors take advantage of spatial parallelism.
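
One way to picture the no-op padding cost is the Python sketch below (illustrative only; the 4-slot issue width and the operation names are assumptions, not from the lecture). It packs compiler-formed groups of independent operations into fixed-width instruction words, padding with NOPs when a group is short.

```python
# Illustrative sketch (assumption, not from the slides): pack operations into
# fixed-width VLIW instruction words, padding with no-ops when fewer
# independent operations are available than issue slots -- the code-size cost
# noted above.

ISSUE_WIDTH = 4   # hypothetical number of slots per long instruction word

def pack_vliw(groups):
    """Each group is a list of operations the compiler found independent."""
    words = []
    for ops in groups:
        word = list(ops) + ["NOP"] * (ISSUE_WIDTH - len(ops))
        words.append(word[:ISSUE_WIDTH])
    return words

groups = [["ADD", "MUL", "LW"],          # 3 independent ops -> 1 NOP
          ["SUB"],                       # 1 independent op  -> 3 NOPs
          ["ADD", "ADD", "SW", "MUL"]]   # full word, no padding
for word in pack_vliw(groups):
    print(word)
```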

8 Multithreading
How can we guarantee no dependencies between instructions in a pipeline?
– One way is to interleave execution of instructions from different program threads on the same pipeline – micro context switching
Interleave 4 threads, T1-T4, on a non-bypassed 5-stage pipe:
T1: LW r1, 0(r2)
T2: ADD r7, r1, r4
T3: XORI r5, r4, #12
T4: SW 0(r7), r5
T1: LW r5, 12(r1)
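
The interleaving can be sketched as a simple round-robin fetch schedule. The Python below is my own illustration, not the lecture's harness; it reuses the T1-T4 instructions above and assumes each round takes one instruction per thread.

```python
# Illustrative sketch (assumption, not from the slides): round-robin
# fine-grained multithreading. With 4 threads interleaved on a 5-stage
# pipeline, consecutive instructions from the same thread are 4 cycles apart,
# so a non-bypassed pipeline sees no back-to-back dependences from one thread.

from itertools import zip_longest

threads = {
    "T1": ["LW r1, 0(r2)", "LW r5, 12(r1)"],
    "T2": ["ADD r7, r1, r4"],
    "T3": ["XORI r5, r4, #12"],
    "T4": ["SW 0(r7), r5"],
}

# Take one instruction from each thread per round; a thread with nothing
# ready would become a pipeline bubble (skipped here for brevity).
schedule = []
for round_instrs in zip_longest(*threads.values()):
    for tname, instr in zip(threads, round_instrs):
        if instr is not None:
            schedule.append((tname, instr))

for cycle_no, (t, instr) in enumerate(schedule, start=1):
    print(f"cycle {cycle_no}: fetch {t}: {instr}")
```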

9 HW Schemes: Instruction Parallelism
Out-of-order execution divides the ID stage:
1. Issue — decode instructions and check for structural hazards; issue in order if the functional unit is free and there is no WAW hazard.
2. Read operands (RO) — wait until there are no data hazards, then read operands. ADDD would stall at RO, while SUBD could proceed with no stalls.
Scoreboards allow an instruction to execute whenever conditions 1 & 2 hold, without waiting for prior instructions. (What about WAR hazards?)
[Diagram: after IF and in-order ISSUE, each instruction proceeds through RO and a variable number of EX stages (EX1..EXm, EX1..EXn, EX1..EXp) before WB.]
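
The two checks can be sketched as follows. This is a Python illustration under my own assumptions: the register names, functional units, and the in-flight DIVD writing F0 are hypothetical, chosen so that ADDD stalls at read operands while SUBD proceeds, as stated above.

```python
# Illustrative sketch (assumption, not from the slides): the two scoreboard
# checks -- (1) issue in order only if the functional unit is free and there
# is no WAW hazard, (2) read operands only when no earlier, still-executing
# instruction will write a source register (RAW).

pending_writes = set()      # destination regs of instructions in flight
busy_units = set()          # functional units currently occupied

def can_issue(instr):
    """Check 1: structural hazard (unit busy) or WAW hazard (same dest)."""
    return instr["unit"] not in busy_units and instr["dest"] not in pending_writes

def can_read_operands(instr):
    """Check 2: RAW hazard -- all sources must be free of pending writes."""
    return not (set(instr["srcs"]) & pending_writes)

# Hypothetical in-flight state: a DIVD writing F0 is still executing.
pending_writes.add("F0")
busy_units.add("DIV")

addd = {"unit": "ADD", "dest": "F10", "srcs": ["F0", "F8"]}   # reads F0 -> RAW
subd = {"unit": "ADD", "dest": "F8",  "srcs": ["F6", "F2"]}

print("ADDD issue:", can_issue(addd), "| read operands:", can_read_operands(addd))
print("SUBD issue:", can_issue(subd), "| read operands:", can_read_operands(subd))
```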

10 FP unit and load-store unit using Tomasulo’s alg.

11 Four Steps of Speculative Tomasulo Algorithm
1. Issue — get the instruction from the FP Op Queue. If a reservation station and a reorder buffer slot are free, issue the instruction and send the operands and the reorder buffer number for the destination (this stage is sometimes called "dispatch").
2. Execution — operate on operands (EX). When both operands are ready, execute; if not ready, watch the CDB for the result; when both are in the reservation station, execute. This checks RAW hazards (sometimes called "issue").
3. Write result — finish execution (WB). Write the result on the Common Data Bus to all awaiting FUs and the reorder buffer; mark the reservation station available.
4. Commit — update the register with the reorder buffer result. When the instruction is at the head of the reorder buffer and its result is present, update the register with the result (or store to memory) and remove the instruction from the reorder buffer. A mispredicted branch flushes the reorder buffer (this stage is sometimes called "graduation").
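
Step 4 (commit) is what preserves precise state, and a small sketch may help. The Python below is an illustration under assumed data, not the lecture's algorithm code: reorder-buffer entries commit strictly from the head, and a mispredicted branch at the head flushes all younger entries.

```python
# Illustrative sketch (assumption, not from the slides): in-order commit from
# a reorder buffer (step 4 above). Entries may complete out of order, but
# registers are updated only when an entry reaches the head; a mispredicted
# branch at the head flushes everything behind it.

from collections import deque

regs = {}
rob = deque([   # hypothetical entries, oldest first
    {"dest": "F0", "value": 3.5, "ready": True,  "mispredicted_branch": False},
    {"dest": "F2", "value": 7.0, "ready": True,  "mispredicted_branch": False},
    {"dest": None, "value": None, "ready": True, "mispredicted_branch": True},
    {"dest": "F4", "value": 9.9, "ready": True,  "mispredicted_branch": False},
])

while rob:
    head = rob[0]
    if not head["ready"]:
        break                       # head not finished yet: commit stalls
    rob.popleft()
    if head["mispredicted_branch"]:
        print("mispredicted branch at head: flushing", len(rob), "younger entries")
        rob.clear()                 # discard speculative work behind the branch
    elif head["dest"] is not None:
        regs[head["dest"]] = head["value"]
        print("commit:", head["dest"], "=", head["value"])

print("architectural registers:", regs)
```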


