Lecture 7: Dynamic Branch Prediction, Superscalar, VLIW, and Software Pipelining Professor Alvin R. Lebeck Computer Science 220 Fall 2001.


2 © Alvin R. Lebeck 2001
Admin
No office hours today; Thursday office hours moved to Wednesday 10am
Homework #2 due Tuesday
Bob Wagner's project
Please e-mail me your projects; they need my approval…
Project Proposal (October 2)
–Short document
–Short presentation
Papers to read on web page (2 classes from today)

3 CPS 220
Scoreboard Summary
1. Issue: decode instructions & check for structural hazards and WAW hazards (stall all following instructions)
2. Read operands: wait until no data hazards (RAW), then read operands
3. Execution
4. Write Result: check for WAR hazards
Limitations of the 6600 scoreboard
–No forwarding
–Limited to instructions in a basic block (small window)
–Number of functional units (structural hazards)
–Waits for WAR hazards
–Prevents WAW hazards

4 Tomasulo Summary
Prevents registers from being a bottleneck
Avoids the WAR and WAW hazards of the scoreboard
Allows loop unrolling in HW
Not limited to basic blocks (given branch prediction)
Lasting contributions
–Dynamic scheduling
–Register renaming
–Load/store disambiguation
You should know how the scoreboard and Tomasulo's algorithm would execute a piece of code…

5 Preview: CPI < 1
Issue more than 1 instruction per cycle
First, branches (why?)

6 Dynamic Branch Prediction
With CPI < 1, the frequency of branches increases
–Remember Amdahl's Law…
Performance = ƒ(accuracy, cost of misprediction)
Branch History Table (BHT) is simplest
–Lower bits of the PC address index a table of 1-bit values
–Says whether or not the branch was taken last time
Question: how many mispredictions in a loop?
Answer: 2
–End-of-loop case, when it exits instead of looping as before
–First time through the loop on the next pass through the code, when it predicts exit instead of looping
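The 1-bit scheme above can be sketched in a few lines of Python (a toy model with hypothetical names, not the slide's hardware). After a warm-up pass, a loop branch that is taken nine times and then exits mispredicts exactly twice per execution, as the slide's answer says:

```python
class OneBitBHT:
    """1-bit branch history table, indexed by low-order PC bits."""
    def __init__(self, entries=4096):
        self.entries = entries
        self.table = [False] * entries  # False = predict not taken

    def predict(self, pc):
        return self.table[pc % self.entries]

    def update(self, pc, taken):
        self.table[pc % self.entries] = taken

bht = OneBitBHT()
pc = 0x40
pattern = [True] * 9 + [False]   # loop branch: taken 9 times, then exit
for taken in pattern:            # warm-up execution of the loop
    bht.update(pc, taken)

mispredicts = 0
for taken in pattern:            # second execution: count mispredictions
    if bht.predict(pc) != taken:
        mispredicts += 1
    bht.update(pc, taken)
print(mispredicts)               # prints 2: loop exit, plus first re-entry
```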

7 Dynamic Branch Prediction
Solution: a 2-bit saturating counter, where the prediction changes only after mispredicting twice
–Increment for taken, decrement for not taken (states 00, 01, 10, 11)
Helps when the target is known before the condition
[State diagram: two "predict taken" states and two "predict not taken" states, with taken (T) and not-taken (NT) transitions between them]
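A minimal Python sketch of the 2-bit counter (the class and its starting state are illustrative assumptions, not from the slides). After a warm-up pass over the same taken-9-times-then-exit loop, only the exit mispredicts, instead of the two mispredictions of the 1-bit scheme:

```python
class TwoBitPredictor:
    """2-bit saturating counter: 0..3, values >= 2 predict taken."""
    def __init__(self):
        self.counter = 1  # start weakly not-taken (arbitrary choice)

    def predict(self):
        return self.counter >= 2

    def update(self, taken):
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
pattern = [True] * 9 + [False]
for t in pattern:                # warm-up pass
    p.update(t)

misses = 0
for t in pattern:                # steady state: only the loop exit misses
    if p.predict() != t:
        misses += 1
    p.update(t)
print(misses)                    # prints 1
```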

8 BHT Accuracy
Mispredict because either:
–Wrong guess for that branch
–Got the history of the wrong branch when indexing the table
4096-entry table: programs vary from 1% misprediction (nasa7, tomcatv) to 18% (eqntott), with spice at 9% and gcc at 12%
4096 entries is about as good as an infinite table, but 4096 is a lot of HW

9 Correlating Branches
Idea: the taken/not-taken behavior of recently executed branches is related to the behavior of the next branch (as well as that branch's own history)
–The behavior of recent branches then selects between, say, four predictions of the next branch, updating just that prediction
[Figure: the branch address and a 2-bit global branch history together index a table of 2-bit per-branch predictors to produce the prediction]
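A (2,2) correlating predictor can be sketched as follows (names and table sizes are illustrative): 2 bits of global history select among four 2-bit counters per branch-table entry. An alternating taken/not-taken branch, which defeats a single 2-bit counter, becomes perfectly predictable once warm:

```python
class CorrelatingPredictor:
    """(2,2) predictor sketch: per-branch entry holds four 2-bit counters,
    one per value of the 2-bit global history."""
    def __init__(self, entries=1024):
        self.entries = entries
        self.table = [[1] * 4 for _ in range(entries)]  # counters 0..3
        self.history = 0  # last two outcomes as a 2-bit number

    def predict(self, pc):
        return self.table[pc % self.entries][self.history] >= 2

    def update(self, pc, taken):
        ctrs = self.table[pc % self.entries]
        h = self.history
        ctrs[h] = min(3, ctrs[h] + 1) if taken else max(0, ctrs[h] - 1)
        self.history = ((h << 1) | int(taken)) & 0b11  # shift in outcome

p = CorrelatingPredictor()
pattern = [True, False] * 4      # strictly alternating branch
for t in pattern:                # warm-up: learn the pattern
    p.update(0x80, t)

misses = 0
for t in pattern:                # once warm, history makes it predictable
    if p.predict(0x80) != t:
        misses += 1
    p.update(0x80, t)
print(misses)                    # prints 0
```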

10 Accuracy of Different Schemes (Figure 4.21, p. 272)
[Chart: frequency of mispredictions, 0% to 18%, comparing a 4096-entry 2-bit BHT, an unlimited-entry 2-bit BHT, and a 1024-entry (2,2) BHT]

11 Need the Address at the Same Time as the Prediction
Branch Target Buffer (BTB): the address of the branch indexes the table to get the prediction AND the branch target address (if taken)
–Note: must check for a branch match now, since we can't use the wrong branch's address (Figure 4.22, p. 273)
Procedure return addresses predicted with a stack
[Figure: the PC of the instruction to fetch is compared against the n BTB entries; on a match, use the predicted PC and the taken/not-taken prediction; on a miss, the instruction is not a known branch]
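A BTB lookup can be modeled as a tagged map from fetch PC to predicted target (a toy sketch with assumed names; a real BTB is a set-indexed table with partial tags). A hit redirects the fetch, a miss falls through to PC + 4:

```python
class BTB:
    """Toy branch target buffer: fetch PC -> predicted target address."""
    def __init__(self):
        self.entries = {}

    def lookup(self, pc):
        return self.entries.get(pc)  # None means "no known branch here"

    def update(self, branch_pc, target):
        self.entries[branch_pc] = target

btb = BTB()
btb.update(0x100, 0x40)          # a taken branch at 0x100 targeting 0x40

target = btb.lookup(0x100)       # BTB hit: fetch from predicted target
next_pc = target if target is not None else 0x100 + 4

target2 = btb.lookup(0x200)      # BTB miss: fall through
next_pc2 = target2 if target2 is not None else 0x200 + 4
```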

12 Getting CPI < 1: Issuing Multiple Instructions/Cycle
Two variations
Superscalar: varying no. of instructions/cycle (1 to 8), scheduled by the compiler (statically scheduled) or by HW (Tomasulo; dynamically scheduled)
–Pentium 4, IBM PowerPC, Sun SuperSPARC, DEC Alpha, HP PA-RISC
Very Long Instruction Word (VLIW): fixed number of instructions (e.g., 16) scheduled by the compiler

13 Getting CPI < 1: Issuing Multiple Instructions/Cycle
Superscalar DLX: 2 instructions, 1 FP & 1 anything else
–Fetch 64 bits/clock cycle; Int on left, FP on right
–Can only issue the 2nd instruction if the 1st instruction issues
–More ports for FP registers to do an FP load & FP op as a pair

Type              Pipe stages
Int. instruction  IF ID EX MEM WB
FP instruction    IF ID EX MEM WB
Int. instruction     IF ID EX MEM WB
FP instruction       IF ID EX MEM WB
Int. instruction        IF ID EX MEM WB
FP instruction          IF ID EX MEM WB

The 1-cycle load delay expands to 3 instructions in SS
–the instruction in the right half can't use it, nor can the instructions in the next slot

14 Unrolled Loop that Minimizes Stalls for Scalar

 1 Loop: LD   F0,0(R1)
 2       LD   F6,-8(R1)
 3       LD   F10,-16(R1)
 4       LD   F14,-24(R1)
 5       ADDD F4,F0,F2
 6       ADDD F8,F6,F2
 7       ADDD F12,F10,F2
 8       ADDD F16,F14,F2
 9       SD   0(R1),F4
10       SD   -8(R1),F8
11       SD   -16(R1),F12
12       SUBI R1,R1,#32
13       BNEZ R1,LOOP
14       SD   8(R1),F16   ; 8-32 = -24

14 clock cycles, or 3.5 per iteration
Latencies: LD to ADDD is 1 cycle; ADDD to SD is 2 cycles

15 Loop Unrolling in Superscalar

      Integer instruction   FP instruction     Clock cycle
Loop: LD F0,0(R1)                              1
      LD F6,-8(R1)                             2
      LD F10,-16(R1)        ADDD F4,F0,F2      3
      LD F14,-24(R1)        ADDD F8,F6,F2      4
      LD F18,-32(R1)        ADDD F12,F10,F2    5
      SD 0(R1),F4           ADDD F16,F14,F2    6
      SD -8(R1),F8          ADDD F20,F18,F2    7
      SD -16(R1),F12                           8
      SD -24(R1),F16                           9
      SUBI R1,R1,#40                           10
      BNEZ R1,LOOP                             11
      SD -32(R1),F20                           12

Unrolled 5 times to avoid delays (+1 due to SS)
12 clocks, or 2.4 clocks per iteration

16 Dynamic Scheduling in Superscalar
Dependences stop instruction issue
Code compiled for the scalar version will run poorly on SS
–May want code to vary depending on how superscalar the machine is
Simple approach: separate Tomasulo control, with separate reservation stations for the Integer FU/registers and for the FP FU/registers

17 Dynamic Scheduling in Superscalar
How to issue two instructions and keep in-order instruction issue for Tomasulo?
–Issue at 2X the clock rate, so that issue remains in order
–Only FP loads might cause a dependency between integer and FP issue:
»Replace the load reservation station with a load queue; operands must be read in the order they are fetched
»Loads check addresses in the Store Queue to avoid RAW violations
»Stores check addresses in the Load Queue to avoid WAR and WAW violations

18 Performance of Dynamic SS

Iter.  Instruction     Issues  Executes  Writes result
                          (clock-cycle number)
1      LD F0,0(R1)     1       2         4
1      ADDD F4,F0,F2   1       5         8
1      SD 0(R1),F4     2       9
1      SUBI R1,R1,#8   3       4         5
1      BNEZ R1,LOOP    4       5
2      LD F0,0(R1)     5       6         8
2      ADDD F4,F0,F2   5       9         12
2      SD 0(R1),F4     6       13
2      SUBI R1,R1,#8   7       8         9
2      BNEZ R1,LOOP    8       9

≈ 4 clocks per iteration
Branches, decrements still take 1 clock cycle

19 Limits of Superscalar
While the Integer/FP split is simple for the HW, we get a CPI of 0.5 only for programs with:
–Exactly 50% FP operations
–No hazards
If more instructions issue at the same time, greater difficulty of decode and issue
–Even 2-scalar => examine 2 opcodes, 6 register specifiers, & decide if 1 or 2 instructions can issue
VLIW: trade instruction space for simple decoding
–The long instruction word has room for many operations
–By definition, all the operations the compiler puts in the long instruction word can execute in parallel
–E.g., 2 integer operations, 2 FP ops, 2 memory refs, 1 branch
»16 to 24 bits per field => 7×16 = 112 bits to 7×24 = 168 bits wide
–Need a compiling technique that schedules across several branches

20 Loop Unrolling in VLIW

Memory ref 1     Memory ref 2     FP op 1           FP op 2           Int. op/branch   Clock
LD F0,0(R1)      LD F6,-8(R1)                                                          1
LD F10,-16(R1)   LD F14,-24(R1)                                                        2
LD F18,-32(R1)   LD F22,-40(R1)   ADDD F4,F0,F2     ADDD F8,F6,F2                      3
LD F26,-48(R1)                    ADDD F12,F10,F2   ADDD F16,F14,F2                    4
                                  ADDD F20,F18,F2   ADDD F24,F22,F2                    5
SD 0(R1),F4      SD -8(R1),F8     ADDD F28,F26,F2                                      6
SD -16(R1),F12   SD -24(R1),F16                                                        7
SD -32(R1),F20   SD -40(R1),F24                                       SUBI R1,R1,#48   8
SD -0(R1),F28                                                         BNEZ R1,LOOP     9

Unrolled 7 times to avoid delays
7 results in 9 clocks, or 1.3 clocks per iteration
Need more registers in VLIW

21 Limits to Multi-Issue Machines
Inherent limitations of ILP
–1 branch in 5 instructions => how to keep a 5-way VLIW busy?
–Latencies of units => many operations must be scheduled
–Need about (pipeline depth × no. of functional units) independent instructions
Difficulties in building HW
–Duplicate FUs to get parallel execution
–Increase ports to the register file
»The VLIW example needs 7 read and 3 write ports for the Int. registers & 5 read and 3 write for the FP registers
–Increase ports to memory
–Decoding in SS and its impact on clock rate and pipeline depth

22 Limits to Multi-Issue Machines
Limitations specific to either the SS or the VLIW implementation
–Decode/issue in SS
–VLIW code size: unrolled loops + wasted fields in VLIW
–VLIW lock step => 1 hazard & all instructions stall
–VLIW & binary compatibility is a practical weakness

23 Software Pipelining
Observation: if iterations of a loop are independent, then we can get ILP by taking instructions from different iterations
Software pipelining: reorganizes loops so that each iteration is made from instructions chosen from different iterations of the original loop (≈ Tomasulo in SW)
[Figure: a software-pipelined iteration draws one instruction from each of several original iterations i0…i4]

24 SW Pipelining Example

Before: unrolled 3 times
 1      LD   F0,0(R1)
 2      ADDD F4,F0,F2
 3      SD   0(R1),F4
 4      LD   F6,-8(R1)
 5      ADDD F8,F6,F2
 6      SD   -8(R1),F8
 7      LD   F10,-16(R1)
 8      ADDD F12,F10,F2
 9      SD   -16(R1),F12
10      SUBI R1,R1,#24
11      BNEZ R1,LOOP

After: software pipelined
        LD   F0,0(R1)
        ADDD F4,F0,F2
        LD   F0,-8(R1)
 1      SD   0(R1),F4    ; stores M[i]
 2      ADDD F4,F0,F2    ; adds to M[i-1]
 3      LD   F0,-16(R1)  ; loads M[i-2]
 4      SUBI R1,R1,#8
 5      BNEZ R1,LOOP
        SD   0(R1),F4
        ADDD F4,F0,F2
        SD   -8(R1),F4

[Pipeline diagram: within one iteration the SD reads F4 before the ADDD writes it, and the ADDD reads F0 before the LD writes it, so reusing F0 and F4 is safe]
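To see why the steady-state schedule is correct, here is a small Python model (my own sketch, not the slides' DLX code) of a loop that adds a scalar to every array element; each steady-state pass stores element i-2, adds element i-1, and loads element i, mirroring the SD/ADDD/LD ordering above (assumes at least 2 elements):

```python
def plain_loop(mem, f2):
    """The original loop: M[i] = M[i] + F2 for every element."""
    return [x + f2 for x in mem]

def software_pipelined(mem, f2):
    """Software-pipelined version with prologue and epilogue."""
    mem = list(mem)
    n = len(mem)
    # prologue: fill the pipeline (like the three instructions before the loop)
    f0 = mem[0]
    f4 = f0 + f2
    f0 = mem[1]
    # steady state: store element i-2, add element i-1, load element i
    for i in range(2, n):
        mem[i - 2] = f4          # SD:   store result for element i-2
        f4 = f0 + f2             # ADDD: add for element i-1
        f0 = mem[i]              # LD:   load element i
    # epilogue: drain the pipeline (like the three instructions after the loop)
    mem[n - 2] = f4
    mem[n - 1] = f0 + f2
    return mem

print(software_pipelined([1, 2, 3, 4, 5], 10))  # prints [11, 12, 13, 14, 15]
```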

25 SW Pipelining Example
Symbolic loop unrolling
–Less code space
–Overhead paid only once vs. on each iteration in loop unrolling
[Chart: number of overlapped operations over time. Software pipelining reaches full overlap; loop unrolling gets overlap proportional to the number of unrolls, with overlap only between unrolled iterations. E.g., 100 iterations = 25 loops with 4 unrolled iterations each]

26 Summary
Branch prediction
–Branch History Table: 2 bits for loop accuracy
–Correlation: recently executed branches are correlated with the next branch
–Branch Target Buffer: include branch address & prediction
Superscalar and VLIW
–CPI < 1
–Dynamic issue vs. static issue
–The more instructions that issue at the same time, the larger the penalty of hazards
SW pipelining
–Symbolic loop unrolling to get the most from the pipeline with little code expansion, little overhead
What about non-loop code?
–How do you get > 1 instruction per cycle?

27 Trace Scheduling
Parallelism across IF branches vs. LOOP branches
Two steps:
–Trace selection
»Find a likely sequence of basic blocks (a trace) forming a (statically predicted) long sequence of straight-line code
–Trace compaction
»Squeeze the trace into few VLIW instructions
»Need bookkeeping code in case the prediction is wrong

28 Trace Scheduling
[Figure: reorder the instructions along the trace to improve ILP, with fix-up instructions in case we were wrong]

29 HW Support for More ILP
Avoid branch prediction by turning branches into conditionally executed instructions: if (x) then A = B op C else NOP
–If false, then neither store the result nor cause an exception
–Expanded ISAs of Alpha, MIPS, PowerPC, SPARC have conditional move; PA-RISC can annul any following instruction; IA-64 has predicated execution
Drawbacks to conditional instructions
–Still takes a clock even if "annulled"
–Stall if the condition is evaluated late
–Complex conditions reduce effectiveness; the condition becomes known late in the pipeline
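The if-conversion idea can be illustrated at the source level (a Python sketch of the transformation, not of any real ISA; hardware does this with conditional-move or predicated instructions). Both versions compute the same value, but the converted one has no branch in the datapath:

```python
def with_branch(x, a, b, c):
    """Original: the assignment is control-dependent on x."""
    if x:
        a = b + c
    return a

def if_converted(x, a, b, c):
    """If-converted: the op always executes (costing a slot even when
    unused), then a conditional select replaces the branch."""
    t = b + c
    return t if x else a

print(with_branch(True, 1, 2, 3), if_converted(True, 1, 2, 3))    # prints 5 5
print(with_branch(False, 1, 2, 3), if_converted(False, 1, 2, 3))  # prints 1 1
```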

30 HW Support for More ILP
Speculation: allow an instruction to issue that is dependent on a branch predicted to be taken, without any consequences (including exceptions) if the branch is not actually taken ("HW undo": squash)
Often combined with dynamic scheduling (Tomasulo): separate speculative bypassing of results from real bypassing of results
–When an instruction is no longer speculative, write its results (instruction commit)
–Execute out of order but commit in order

31 Speculation
[Figure: a 4-way-issue example with basic blocks B1, B2, B3. B1 ends in a BEQ; the predicted path issues all of B2 (ADD, LD, SUB, ST) speculatively, but the correct path is B3, so all of B2 must be squashed. B1 could also have executed in speculative mode]

32 HW Support for More ILP
Need a HW buffer for the results of uncommitted instructions: the reorder buffer
–The reorder buffer can be an operand source
–Once an operand commits, the result is found in the register
–3 fields: instruction type, destination, value
–Use the reorder-buffer number instead of the reservation station
–Instructions commit in order
–As a result, it's easy to undo speculated instructions on mispredicted branches or on exceptions
[Figure 4.34, page 311: FP Op Queue, reservation stations, FP adder, FP registers, and the reorder buffer]

33 Four Steps of the Speculative Tomasulo Algorithm
1. Issue — get an instruction from the FP Op Queue
   If a reservation station and a reorder-buffer slot are free, issue the instruction & send the operands & the reorder-buffer no. for the destination.
2. Execution — operate on operands (EX)
   When both operands are ready, execute; if not ready, watch the CDB for the result; when both are in the reservation station, execute.
3. Write result — finish execution (WB)
   Write on the Common Data Bus to all awaiting FUs & the reorder buffer; mark the reservation station available.
4. Commit — update the register with the reorder result
   When the instruction is at the head of the reorder buffer & its result is present, update the register with the result (or store to memory) and remove the instruction from the reorder buffer.
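The commit step can be sketched with a toy reorder buffer in Python (field and method names are my own, not from the text). The key property is that only completed entries drain from the head, which is what keeps commit in order even when results arrive out of order:

```python
from collections import deque

class ReorderBuffer:
    """Toy ROB: issue appends in program order; commit drains the head."""
    def __init__(self):
        self.buf = deque()
        self.regs = {}           # architectural register file

    def issue(self, dest):
        entry = {"dest": dest, "value": None}   # step 1: allocate a slot
        self.buf.append(entry)
        return entry

    def write_result(self, entry, value):
        entry["value"] = value   # step 3: result arrives on the CDB

    def commit(self):
        # step 4: only the head may commit, and only once its result is present
        while self.buf and self.buf[0]["value"] is not None:
            e = self.buf.popleft()
            self.regs[e["dest"]] = e["value"]

    def squash(self):
        self.buf.clear()         # misprediction: discard all uncommitted work

rob = ReorderBuffer()
e1 = rob.issue("F0")
e2 = rob.issue("F4")
rob.write_result(e2, 42)   # younger instruction completes first
rob.commit()               # head (e1) incomplete: nothing commits yet
rob.write_result(e1, 7)
rob.commit()               # now both commit, in program order
print(rob.regs)            # prints {'F0': 7, 'F4': 42}
```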

34 Limits to ILP
Conflicting studies of the amount of parallelism available in the late 1980s and early 1990s. Different assumptions about:
–Benchmarks (vectorized Fortran FP vs. integer C programs)
–Hardware sophistication
–Compiler sophistication

35 Limits to ILP
Initial HW model here; MIPS compilers
1. Register renaming – infinite virtual registers, so all WAW & WAR hazards are avoided
2. Branch prediction – perfect; no mispredictions
3. Jump prediction – all jumps perfectly predicted
=> machine with perfect speculation & an unbounded buffer of instructions available
4. Memory-address disambiguation – addresses are known & a store can be moved before a load provided the addresses are not equal
1-cycle latency for all instructions

36 Upper Limit to ILP (Figure 4.38, page 319)

37 More Realistic HW: Branch Impact (Figure 4.40, page 323)
Change from an infinite window to a 2000-instruction window and a maximum issue of 64 instructions per clock cycle
[Chart legend: Profile, BHT (512), Pick Correlator or BHT, Perfect]

38 Selective Branch Predictor
[Figure: an 8K x 2-bit selector, indexed by the branch address, chooses between a non-correlating predictor (8096 x 2 bits, indexed by the branch address) and a correlating predictor (2048 x 4 x 2 bits, indexed by the global taken/not-taken history)]
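The selector in this tournament scheme can be modeled as a 2-bit counter per entry that is trained toward whichever component predictor was right (a single-entry sketch with assumed names):

```python
class Selector:
    """2-bit chooser: counter < 2 follows predictor A, >= 2 follows B."""
    def __init__(self):
        self.counter = 1  # start weakly preferring A (arbitrary choice)

    def choose(self, pred_a, pred_b):
        return pred_b if self.counter >= 2 else pred_a

    def update(self, pred_a, pred_b, taken):
        a_ok, b_ok = pred_a == taken, pred_b == taken
        if b_ok and not a_ok:
            self.counter = min(3, self.counter + 1)  # B was right, A wrong
        elif a_ok and not b_ok:
            self.counter = max(0, self.counter - 1)  # A was right, B wrong
        # if both right or both wrong, the chooser is left alone

sel = Selector()
# Predictor B is right twice in a row while A is wrong:
for _ in range(2):
    sel.update(pred_a=False, pred_b=True, taken=True)
chosen = sel.choose(pred_a=False, pred_b=True)
print(chosen)  # prints True: the selector now follows predictor B
```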

39 More Realistic HW: Register Impact (Figure 4.44, page 328)
Change: 2000-instruction window, 64-instruction issue, 8K 2-level prediction
[Chart legend: renaming registers: None, 32, 64, 128, 256, Infinite]

40 More Realistic HW: Alias Impact (Figure 4.46, page 330)
Change: 2000-instruction window, 64-instruction issue, 8K 2-level prediction, 256 renaming registers
[Chart legend: alias analysis: None; Global/stack perfect, heap conflicts; Inspection of assembly; Perfect]

41 Realistic HW for '9X: Window Impact (Figure 4.48, page 332)
Perfect disambiguation (HW), 1K selective prediction, 16-entry return stack, 64 registers, issue as many as the window allows
[Chart: window sizes up to Infinite]

42 Braniac vs. Speed Demon (SPECratio)
8-scalar IBM Power-2 @ 71.5 MHz (5-stage pipe) vs. 2-scalar Alpha @ 200 MHz (7 stages)

43 Next Time
Read the papers and be ready to discuss them
–Open discussion format
–"The Microarchitecture of Superscalar Processors"
–"Complexity-Effective Superscalar Processors"
–IPC vs. clock rate
HW #2 due