Lecture 20: OOO, Memory Hierarchy


1 Lecture 20: OOO, Memory Hierarchy
Today’s topics: out-of-order execution; cache basics

2 Slowdowns from Stalls
Perfect pipelining with no hazards → an instruction completes every cycle (total cycles ≈ number of instructions) → speedup = increase in clock speed = number of pipeline stages. With hazards and stalls, some cycles (= stall time) go by during which no instruction completes, and then the stalled instruction completes. Total cycles = number of instructions + stall cycles.
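A small back-of-the-envelope version of this relationship; the pipeline depth, instruction count, and stall count below are made-up numbers for illustration, not figures from the lecture:

    # Illustrative numbers only: a 5-stage pipeline, 1000 instructions, 200 stall cycles.
    num_stages = 5
    num_instructions = 1000
    stall_cycles = 200

    ideal_cycles = num_instructions                  # one instruction completes per cycle
    total_cycles = num_instructions + stall_cycles   # stalls add cycles with no completion

    ideal_speedup = num_stages                       # perfect pipelining
    actual_speedup = num_stages * ideal_cycles / total_cycles

    print(total_cycles)     # 1200
    print(actual_speedup)   # ~4.17 instead of 5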

3 Multicycle Instructions
Multiple parallel pipelines – each pipeline can have a different number of stages. Instructions can now complete out of order – we must make sure that writes to a register happen in the correct order.

4 An Out-of-Order Processor Implementation
[Block diagram: Instr Fetch Queue → branch prediction and instr fetch → Decode & Rename → Issue Queue (IQ) → ALUs; results are written to the Reorder Buffer (ROB) and tags are broadcast to the IQ; the Register File holds R1–R32; the ROB holds Instr 1–6 with tags T1–T6. Original code: R1 ← R1+R2, R2 ← R1+R3, BEQZ R2, R3 ← R1+R2, R1 ← R3+R2. After renaming: T1 ← R1+R2, T2 ← T1+R3, BEQZ T2, T4 ← T1+T2, T5 ← T4+T2.]
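A minimal sketch of the renaming step shown in the figure (illustrative, not the hardware implementation): every instruction gets a fresh tag for its ROB entry, source registers are replaced by the latest tag mapped to them, and the destination register is remapped to the new tag.

    # Minimal register-renaming sketch. Each instruction is (destination, [sources]);
    # None marks a branch, which gets a ROB tag but produces no register result.
    def rename(instructions):
        rename_map = {}                      # architectural register -> current tag
        renamed = []
        for n, (dest, srcs) in enumerate(instructions, start=1):
            tag = f"T{n}"                    # every instruction gets a ROB entry/tag
            srcs = [rename_map.get(s, s) for s in srcs]   # read sources first
            if dest is not None:
                rename_map[dest] = tag
                dest = tag
            renamed.append((dest, srcs))
        return renamed

    # The slide's code sequence.
    code = [("R1", ["R1", "R2"]),
            ("R2", ["R1", "R3"]),
            (None, ["R2"]),                  # BEQZ R2
            ("R3", ["R1", "R2"]),
            ("R1", ["R3", "R2"])]

    for dest, srcs in rename(code):
        print(dest, srcs)                    # T1, T2, branch, T4, T5 as on the slide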

5 Example Code
Completion times: with in-order, with OOO
ADD R1, R2, R
ADD R4, R1, R
LW R5, 8(R4)
ADD R7, R6, R
ADD R8, R7, R
LW R9, 16(R4)
ADD R10, R6, R
ADD R11, R10, R

6 Cache Hierarchies
Data and instructions are stored on DRAM chips – DRAM is a technology that has high bit density, but relatively poor latency – an access to data in memory can take as many as 300 cycles today! Hence, some data is stored on the processor in a structure called the cache – caches employ SRAM technology, which is faster, but has lower bit density. Internet browsers also cache web pages – same concept.

7 Memory Hierarchy As you go further, capacity and latency increase
Registers: 1 KB, 1 cycle
L1 data or instruction cache: 32 KB, 2 cycles
L2 cache: 2 MB, 15 cycles
Memory: 1 GB, 300 cycles
Disk: 80 GB, 10M cycles

8 Locality Why do caches work?
Temporal locality: if you used some data recently, you will likely use it again. Spatial locality: if you used some data recently, you will likely access its neighbors. No hierarchy: average access time for data = 300 cycles. With a 32KB 1-cycle L1 cache that has a hit rate of 95%: average access time = 0.95 × 1 + 0.05 × 301 = 16 cycles.
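The same arithmetic written out in code; the latencies and 95% hit rate are the slide's numbers:

    # Average memory access time with a single-level cache in front of memory.
    l1_latency = 1          # cycles
    memory_latency = 300    # cycles
    hit_rate = 0.95

    miss_penalty = l1_latency + memory_latency   # a miss still pays the L1 lookup
    amat = hit_rate * l1_latency + (1 - hit_rate) * miss_penalty
    print(amat)                                  # 16.0 cycles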

9 Accessing the Cache
[Figure: a byte address such as 101000 is split into index and offset fields; with 8-byte words, the low 3 bits are the offset, and with 8 words (sets) in the data array, the next 3 bits are the index that selects a set.] Direct-mapped cache: each address maps to a unique location in the cache.
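A small sketch of that bit split for this 8-set, 8-bytes-per-block direct-mapped cache; the helper itself is illustrative:

    # Split a byte address into tag / index / offset for a direct-mapped cache
    # with 8-byte blocks (3 offset bits) and 8 sets (3 index bits).
    OFFSET_BITS = 3
    INDEX_BITS = 3

    def split_address(addr):
        offset = addr & ((1 << OFFSET_BITS) - 1)
        index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        tag = addr >> (OFFSET_BITS + INDEX_BITS)
        return tag, index, offset

    print(split_address(0b101000))   # (0, 5, 0): tag 0, set 5, offset 0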

10 The Tag Array
[Figure: the byte address 101000 is also split into a tag field; the index selects one entry in the tag array and one line in the data array, and the stored tag is compared against the address tag to decide hit or miss.] Direct-mapped cache: each address maps to a unique location in the cache.

11 Example Access Pattern
Assume that addresses are 8 bits long. How many of the following address requests are hits/misses? 4, 7, 10, 13, 16, 68, 73, 78, 83, 88, 4, 7, 10, …
[Figure: the same direct-mapped cache as before – 8-byte words, tag array and data array, with the stored tag compared against the address tag.]
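One way to check the answer is to simulate the trace against the cache from the previous slides (8-byte blocks, 8 sets, direct-mapped); the simulator below is only an illustration:

    # Direct-mapped cache simulator for the slide's trace:
    # 8-byte blocks (3 offset bits) and 8 sets (3 index bits), i.e. a 64-byte cache.
    NUM_SETS = 8
    BLOCK_BYTES = 8

    tags = [None] * NUM_SETS                  # stored tag per set (None = invalid)
    trace = [4, 7, 10, 13, 16, 68, 73, 78, 83, 88, 4, 7, 10]

    for addr in trace:
        block = addr // BLOCK_BYTES
        index = block % NUM_SETS
        tag = block // NUM_SETS
        hit = tags[index] == tag
        tags[index] = tag                     # on a miss, the new block is brought in
        print(f"addr {addr:3d} -> set {index}: {'hit' if hit else 'miss'}")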

12 Increasing Line Size
A large cache line size → smaller tag array, fewer misses because of spatial locality.
[Figure: with a 32-byte cache line size or block size, the byte address is split into a tag, an index into the tag and data arrays, and an offset that selects a byte within the line.]
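To see why the tag array shrinks, here is a small comparison of tag-array entry counts for a fixed-capacity cache as the line size grows; the 32 KB capacity and 32-bit addresses are assumptions for illustration:

    import math

    # Tag-array entries for a fixed-capacity direct-mapped cache at various line sizes.
    CACHE_BYTES = 32 * 1024      # assumed capacity
    ADDRESS_BITS = 32            # assumed address width

    for line_bytes in (8, 32, 64):
        num_lines = CACHE_BYTES // line_bytes              # one tag per line
        offset_bits = int(math.log2(line_bytes))
        index_bits = int(math.log2(num_lines))
        tag_bits = ADDRESS_BITS - index_bits - offset_bits
        print(line_bytes, num_lines, tag_bits)             # 4096, 1024, 512 entries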

13 Associativity
Set associativity → fewer conflicts; wasted power because multiple data and tags are read.
[Figure: a set-associative cache with Way-1 and Way-2 – the index selects one set in each way, every way's tag is read and compared against the address tag, and the matching way supplies the data.]
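A minimal sketch of a 2-way set-associative lookup (the 8-set, 8-byte-block geometry and the LRU replacement choice are illustrative assumptions): both ways of the selected set are checked, so two addresses that collided in the direct-mapped cache can now coexist.

    # Sketch of a 2-way set-associative cache with LRU replacement.
    NUM_SETS = 8
    OFFSET_BITS = 3
    INDEX_BITS = 3

    sets = [[] for _ in range(NUM_SETS)]      # each set holds up to 2 tags, MRU last

    def access(addr):
        index = (addr >> OFFSET_BITS) % NUM_SETS
        tag = addr >> (OFFSET_BITS + INDEX_BITS)
        ways = sets[index]
        hit = tag in ways                     # every way's tag is compared
        if hit:
            ways.remove(tag)                  # refresh LRU order
        elif len(ways) == 2:
            ways.pop(0)                       # evict the least recently used way
        ways.append(tag)
        return "hit" if hit else "miss"

    for addr in (4, 68, 4, 68, 132):          # 4 and 68 map to the same set but coexist
        print(addr, access(addr))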

14 Associativity
How many offset/index/tag bits if the cache has 64 sets, each set has 64 bytes, 4 ways?
[Figure: the same set-associative tag array / data array organization as the previous slide.]
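A helper for working this out; it assumes 32-bit addresses and reads "each set has 64 bytes" as 64 bytes spread across the 4 ways (i.e. 16-byte blocks) – both are assumptions, not statements from the slide:

    import math

    # Offset/index/tag widths from the cache geometry; address width is assumed.
    def cache_bits(num_sets, block_bytes, address_bits=32):
        offset_bits = int(math.log2(block_bytes))
        index_bits = int(math.log2(num_sets))
        tag_bits = address_bits - index_bits - offset_bits
        return offset_bits, index_bits, tag_bits

    # One reading of the question: 64 sets, 4 ways, 64 bytes per set -> 16-byte blocks.
    print(cache_bits(num_sets=64, block_bytes=64 // 4))   # (4, 6, 22)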

15 Example
32 KB 4-way set-associative data cache array with 32-byte line size. How many sets? How many index bits, offset bits, tag bits? How large is the tag array?
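A worked version of this example, again assuming 32-bit addresses (the slide leaves the address width implicit) and ignoring valid/dirty bits in the tag array size:

    import math

    # 32 KB, 4-way set-associative cache with 32-byte lines; 32-bit addresses assumed.
    CACHE_BYTES = 32 * 1024
    WAYS = 4
    LINE_BYTES = 32
    ADDRESS_BITS = 32

    num_sets = CACHE_BYTES // (WAYS * LINE_BYTES)          # 256 sets
    offset_bits = int(math.log2(LINE_BYTES))               # 5
    index_bits = int(math.log2(num_sets))                  # 8
    tag_bits = ADDRESS_BITS - index_bits - offset_bits     # 19

    tag_array_bits = num_sets * WAYS * tag_bits            # 19,456 bits, about 2.4 KB
    print(num_sets, offset_bits, index_bits, tag_bits, tag_array_bits)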
