Chapter 5 Large and Fast: Exploiting Memory Hierarchy.


1 Chapter 5 Large and Fast: Exploiting Memory Hierarchy

2 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 2 Memory Technology. Static RAM (SRAM): 0.5ns – 2.5ns, $2000 – $5000 per GB. Dynamic RAM (DRAM): 50ns – 70ns, $20 – $75 per GB. Magnetic disk: 5ms – 20ms, $0.20 – $2 per GB. Ideal memory: access time of SRAM with the capacity and cost/GB of disk. §5.1 Introduction

3 The Principle of Locality Observation: A program accesses a relatively small portion of its address space during any short period of its execution time. Two Different Types of Locality: Temporal Locality (Locality in Time): If an item is referenced, it tends to be referenced again soon (e.g., instructions of a loop, scalar data items reused). Spatial Locality (Locality in Space): If an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., instructions of straight-line code, array data items). Locality is a property of programs that is exploited in computer design.
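
To make the two kinds of locality concrete, here is a small illustrative Python loop (a sketch, not code from the slides): the scalar total is reused on every iteration (temporal locality), and the elements of data sit at consecutive addresses (spatial locality).

```python
# Illustrative only: summing an array exhibits both kinds of locality.
data = list(range(1024))

total = 0                  # 'total' is re-referenced on every iteration -> temporal locality
for i in range(len(data)):
    total += data[i]       # data[0], data[1], ... are adjacent in memory -> spatial locality

# The loop's instructions are also fetched repeatedly (temporal locality in the instruction
# stream) and sequentially (spatial locality in straight-line code).
```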

4 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 4 Taking Advantage of Locality The memory of a computer can be organized as a hierarchy

5 CPE 432 Levels of the Memory Hierarchy Registers – Cache – Main Memory – Disk – Tape, from the upper (faster) level to the lower (larger) level. The unit transferred between adjacent levels grows going down the hierarchy: instructions and operands (registers), blocks or lines (cache), pages (main memory), files (disk/tape).

6 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 6 Taking Advantage of Locality Organize the computer memory system in a hierarchy Store everything on disk Copy recently accessed (and nearby) items from disk to smaller DRAM memory Main memory Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory Cache memory attached to CPU

7 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 7 Memory Hierarchy Levels Block (aka line): unit of copying May be multiple words If accessed data is present in upper level Hit: access satisfied by upper level Hit ratio: hits/accesses If accessed data is absent Miss: block copied from lower level Time taken: miss penalty Miss ratio: misses/accesses = 1 – hit ratio Then accessed data supplied from upper level

8 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 8 Cache Memory Cache memory The level of the memory hierarchy closest to the CPU Given accesses X1, …, Xn−1, Xn §5.2 The Basics of Caches Fig. 5.4 The cache just before and just after a reference to a word Xn that is not initially in the cache How do we know if the data is present? Where do we look?

10 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 10 5 memory address bits; Block = 1 word; cache size = 8 blocks Block location in cache determined by its address Only one choice for mapping a memory block to a cache block: cache block# = (Memory block address) modulo (#Blocks in cache) Memory block 00001 mapped to (00001)mod 8 = cache block 001 Memory block 01001 mapped to (01001)mod 8 = cache block 001 …….. Memory block 10101 mapped to (10101)mod 8 = cache block 101 Direct Mapped Cache Example

11 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 11 Direct Mapped Cache Example (cont.) # of cache blocks is a power of 2 (i.e. 8) Use low-order address bits in mapping memory blocks to cache blocks 00001 mapped TO 001 00101 mapped TO 101 01001 mapped TO 001 01101 mapped TO 101 10001 mapped TO 001 10101 mapped TO 101 11001 mapped TO 001 11101 mapped TO 101
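
As a sketch of the arithmetic in this example (5-bit memory block addresses, 8 cache blocks): because the number of cache blocks is a power of two, the modulo operation reduces to keeping the low-order 3 address bits, and the remaining high-order 2 bits become the tag introduced on the next slide. The function name map_block is purely illustrative, not code from the course.

```python
NUM_BLOCKS = 8                              # cache blocks (a power of 2)
INDEX_BITS = NUM_BLOCKS.bit_length() - 1    # 3 bits

def map_block(mem_block_addr):
    """Return (cache index, tag) for a 5-bit memory block address."""
    index = mem_block_addr % NUM_BLOCKS     # same as keeping the low-order 3 bits
    tag = mem_block_addr >> INDEX_BITS      # remaining high-order 2 bits
    return index, tag

for addr in [0b00001, 0b00101, 0b01001, 0b01101, 0b10001, 0b10101, 0b11001, 0b11101]:
    index, tag = map_block(addr)
    print(f"memory block {addr:05b} -> cache block {index:03b} (tag {tag:02b})")
```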

12 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 12 Tags and Valid Bits How do we know which particular memory block is stored in a cache block? Store in the cache block the block memory address as well as its data Actually, only need to store the high-order bits of the address (in the example 2 bits) Called the tag What if there is no data in a cache block? Valid bit: 1 = data present, 0 = data not present Initially 0

13 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 13 Cache Example 8 blocks, 1 word/block, direct mapped. Initial state:

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

14 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 14 Cache Example

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

15 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 15 Cache Example

Word addr  Binary addr  Hit/miss  Cache block
26         11 010       Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

16 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 16 Cache Example

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Hit       110
26         11 010       Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

17 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 17 Cache Example

Word addr  Binary addr  Hit/miss  Cache block
16         10 000       Miss      000
3          00 011       Miss      011
16         10 000       Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

18 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 18 Cache Example

Word addr  Binary addr  Hit/miss  Cache block
18         10 010       Miss      010

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]   (Mem[11010] replaced)
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N
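
The whole example can be replayed with a small direct-mapped cache model; this is a sketch (not code from the slides) in which each of the 8 entries holds a valid bit, a tag, and a label for the word it caches. Running it on the access sequence 22, 26, 22, 26, 16, 3, 16, 18 reproduces the hit/miss outcomes and the final contents shown above.

```python
NUM_BLOCKS = 8          # 8 one-word blocks, direct mapped
INDEX_BITS = 3

# Each entry: [valid, tag, data]; initially all invalid.
cache = [[False, 0, None] for _ in range(NUM_BLOCKS)]

def access(word_addr):
    index = word_addr % NUM_BLOCKS
    tag = word_addr >> INDEX_BITS
    entry = cache[index]
    hit = entry[0] and entry[1] == tag
    if not hit:                                   # miss: fetch the block, overwrite the entry
        cache[index] = [True, tag, f"Mem[{word_addr:05b}]"]
    print(f"addr {word_addr:2d} ({word_addr:05b}) -> block {index:03b}: "
          f"{'hit' if hit else 'miss'}")

for addr in [22, 26, 22, 26, 16, 3, 16, 18]:
    access(addr)

for i, (valid, tag, data) in enumerate(cache):    # final contents, as on the last slide
    print(f"{i:03b}  V={'Y' if valid else 'N'}", f"tag={tag:02b}  {data}" if valid else "")
```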

19 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 19 Address Subdivision

20 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 20 Example: Larger Block Size 64 blocks, 16 bytes/block To what cache block number does the byte memory address 1200 map? Memory block address = ⌊1200/16⌋ = 75 Cache block number = 75 modulo 64 = 11 Address fields: Tag = bits 31–10 (22 bits), Index = bits 9–4 (6 bits), Offset = bits 3–0 (4 bits)
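
A sketch of the same arithmetic in code (constants and names are for illustration only): with 16-byte blocks and 64 blocks, a byte address splits into a 4-bit offset, a 6-bit index, and a 22-bit tag, and byte address 1200 lands in memory block 75, which maps to cache block 11.

```python
BLOCK_SIZE = 16            # bytes per block
NUM_BLOCKS = 64            # blocks in the cache
OFFSET_BITS = 4            # log2(16)
INDEX_BITS = 6             # log2(64)

def split(byte_addr):
    block_addr = byte_addr // BLOCK_SIZE            # floor(1200 / 16) = 75
    index = block_addr % NUM_BLOCKS                 # 75 mod 64 = 11
    tag = byte_addr >> (OFFSET_BITS + INDEX_BITS)   # remaining high-order 22 bits
    offset = byte_addr % BLOCK_SIZE
    return tag, index, offset

print(split(1200))   # (1, 11, 0): tag 1, cache block (index) 11, byte offset 0
```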

21 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 21 Block Size Considerations Larger blocks should reduce miss rate Due to spatial locality But in a fixed-sized cache Larger blocks  fewer of them More competition  increased miss rate Larger miss penalty Can override benefit of reduced miss rate Early restart and critical-word-first can help

22 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 22 Cache Misses On cache hit, CPU proceeds normally On cache miss Stall the CPU pipeline Fetch block from next level of hierarchy Instruction cache miss Restart instruction fetch Data cache miss Complete data access

23 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 23 Write-Through Policy On data-write hit, could just update the block in cache But then cache and memory would be inconsistent Write through policy: also update memory But “write through” makes writes take longer e.g., if base CPI = 1, 10% of instructions are stores and write to memory takes 100 cycles Effective CPI = 1 + 0.1×100 = 11 Solution: write buffer Holds data waiting to be written to memory CPU continues immediately CPU only stalls on write if write buffer is already full

24 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 24 Write-Back Policy Alternative: On data-write hit, just update the block in cache Keep track of whether each block is dirty When a dirty block is replaced Write it back to memory Can use a write buffer to allow replacing block to be read first

25 Write Allocation What should happen on a write miss? For write-back caches: allocate the cache block; fetch the block from memory. For write-through caches there are 2 alternatives: 1. Allocate the cache block; fetch the block from memory. 2. Write around: don't fetch the block from memory; allocate a block in the cache and write to the cache and the write buffer. This often works well since programs often write a whole block before reading it (e.g., initialization).

26 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 26 Example: Intrinsity FastMATH Embedded MIPS processor 12-stage pipeline Instruction and data access on each cycle Split cache: separate I-cache and D-cache Each 16KB: 256 blocks × 16 words/block D-cache: write-through or write-back SPEC2000 miss rates I-cache: 0.4% D-cache: 11.4% Weighted average: 3.2%

27 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 27 Example: Intrinsity FastMATH

28 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 28 Main Memory Supporting Caches Use DRAMs for main memory Fixed width main memory (e.g., 1 word) Connected by fixed-width clocked bus Bus clock is typically slower than CPU clock Example cache block read 1 bus cycle for address transfer 15 bus cycles per DRAM word access 1 bus cycle per DRAM word transfer For 4-word block, 1-word-wide DRAM Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles Bandwidth = 16 bytes / 65 bus cycles = 0.25 B/bus cycle

29 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 29 Increasing Memory Bandwidth 4-word wide memory Miss penalty = 1 + 15 + 1 = 17 bus cycles Bandwidth = 16 bytes / 17 cycles = 0.94 B/cycle 4-bank interleaved memory Miss penalty = 1 + 15 + 4×1 = 20 bus cycles Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle
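
A small sketch of these miss-penalty calculations, assuming the timing model from the previous slide (1 bus cycle for the address, 15 bus cycles per DRAM access, 1 bus cycle per word transferred) and a 4-word block; the names are illustrative.

```python
WORDS_PER_BLOCK = 4
ADDR_CYCLES = 1          # one bus cycle to send the address
ACCESS_CYCLES = 15       # bus cycles per DRAM access
TRANSFER_CYCLES = 1      # one bus cycle per word moved over the bus

def miss_penalty(dram_accesses, bus_transfers):
    return ADDR_CYCLES + dram_accesses * ACCESS_CYCLES + bus_transfers * TRANSFER_CYCLES

configs = {
    "1-word-wide DRAM":    miss_penalty(WORDS_PER_BLOCK, WORDS_PER_BLOCK),  # 1 + 4*15 + 4*1 = 65
    "4-word-wide memory":  miss_penalty(1, 1),                              # 1 + 15 + 1     = 17
    "4-bank interleaved":  miss_penalty(1, WORDS_PER_BLOCK),                # 1 + 15 + 4*1   = 20
}
for name, cycles in configs.items():
    bandwidth = WORDS_PER_BLOCK * 4 / cycles      # 16 bytes per block
    print(f"{name}: {cycles} bus cycles, {bandwidth:.2f} bytes/bus cycle")
```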

30 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 30 Advanced DRAM Organization Bits in a DRAM are organized as a rectangular array DRAM accesses an entire row of bits Burst mode: supply successive words from a row with reduced latency Double data rate (DDR) DRAM Transfer on rising and falling clock edges Quad data rate (QDR) DRAM Separate DDR inputs and outputs

31 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 31 DRAM Generations

Year  Capacity  $/GB
1980  64Kbit    $1,500,000
1983  256Kbit   $500,000
1985  1Mbit     $200,000
1989  4Mbit     $50,000
1992  16Mbit    $15,000
1996  64Mbit    $10,000
1998  128Mbit   $4000
2000  256Mbit   $1000
2004  512Mbit   $250
2007  1Gbit     $50

32 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 32 Measuring Cache Performance Components of CPU time Program execution cycles Includes cache hit time Memory stall cycles Mainly because of cache misses With simplifying assumptions: Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty = (Instructions / Program) × (Misses / Instruction) × Miss penalty §5.3 Measuring and Improving Cache Performance

33 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 33 Cache Performance Example Given I-cache miss rate = 2% D-cache miss rate = 4% Miss penalty = 100 cycles Base CPI (ideal cache) = 2 Load & stores are 36% of instructions Miss cycles per instruction I-cache: 0.02 × 100 = 2 D-cache: 0.36 × 0.04 × 100 = 1.44 Actual CPI = 2 + 2 + 1.44 = 5.44 Ideal CPU is 5.44/2 =2.72 times faster
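
A quick sketch of this calculation using the stall-cycle formula from the previous slide (variable names are illustrative):

```python
base_cpi = 2.0
miss_penalty = 100           # cycles
i_miss_rate = 0.02           # one instruction fetch per instruction
d_miss_rate = 0.04
mem_ops_per_instr = 0.36     # fraction of instructions that are loads or stores

i_stall = 1.0 * i_miss_rate * miss_penalty                  # 2.00 stall cycles/instruction
d_stall = mem_ops_per_instr * d_miss_rate * miss_penalty    # 1.44 stall cycles/instruction

actual_cpi = base_cpi + i_stall + d_stall                   # 5.44
print(actual_cpi, actual_cpi / base_cpi)                    # 5.44, 2.72x slower than ideal
```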

34 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 34 Average Access Time Hit time is also important for performance Average memory access time (AMAT) AMAT = Hit time + Miss rate × Miss penalty Example CPU with 1ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5% AMAT = 1 + 0.05 × 20 = 2ns 2 cycles per instruction
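
The AMAT example works the same way; a minimal sketch:

```python
hit_time = 1          # cycles (the clock is 1 ns, so 1 cycle = 1 ns)
miss_penalty = 20     # cycles
miss_rate = 0.05      # I-cache miss rate

amat = hit_time + miss_rate * miss_penalty
print(amat)           # 2.0 cycles per access, i.e. 2 ns with the 1 ns clock
```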

35 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 35 Performance Summary When CPU performance increased Miss penalty becomes more significant Decreasing base CPI Greater proportion of time spent on memory stalls Increasing clock rate Memory stalls account for more CPU cycles Can’t neglect cache behavior when evaluating system performance

36 Fully Associative Caches Allow a memory block to go in any cache block Requires all cache blocks to be searched simultaneously Comparator per cache block is needed (expensive) 36

37 n-way Set Associative Caches Organize the cache into sets (groups) of blocks Each set contains n cache blocks Main memory block number is used to determine which set in the cache will store the memory block (Memory block number) modulo (#Sets in cache) Search all cache blocks in a given set simultaneously n comparators are needed (less expensive than a fully associative cache organization) 37

38 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 38 Associative Cache Example

39 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 39 Spectrum of Associativity For a cache with 8 blocks

40 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 40 Associativity Example Compare 4-block caches: direct mapped, 2-way set associative, fully associative. Memory block access sequence: 0, 8, 0, 6, 8. Direct mapped (block addresses 000 00, 010 00, 000 00, 001 10, 010 00):

Block addr  Cache index  Hit/miss  Cache content after access
0           0            miss      block 0: Mem[0]
8           0            miss      block 0: Mem[8]
0           0            miss      block 0: Mem[0]
6           2            miss      block 0: Mem[0], block 2: Mem[6]
8           0            miss      block 0: Mem[8], block 2: Mem[6]

Number of misses = 5

41 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 41 Associativity Example (cont. I) 2-way set associative, 4-block cache. Memory block access sequence: 0, 8, 0, 6, 8. Mapping function: (Memory block number) modulo (#Sets in cache); 0 modulo 2 = 0, 8 modulo 2 = 0, 6 modulo 2 = 0, hence the cache sets accessed are 0, 0, 0, 0, 0.

Block addr  Cache set  Hit/miss  Cache content after access
0           0          miss      set 0: Mem[0]
8           0          miss      set 0: Mem[0], Mem[8]
0           0          hit       set 0: Mem[0], Mem[8]
6           0          miss      set 0: Mem[0], Mem[6]
8           0          miss      set 0: Mem[8], Mem[6]

Number of misses = 4

42 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 42 Associativity Example (cont. II) Fully associative, 4-block cache. Memory block access sequence: 0, 8, 0, 6, 8. Mapping function: a memory block can go in any cache block.

Block addr  Hit/miss  Cache content after access
0           miss      Mem[0]
8           miss      Mem[0], Mem[8]
0           hit       Mem[0], Mem[8]
6           miss      Mem[0], Mem[8], Mem[6]
8           hit       Mem[0], Mem[8], Mem[6]

Number of misses = 3
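
The three organizations can be compared with a small simulator sketch (LRU replacement within a set, which matches the eviction choices in the tables above; a direct-mapped cache is the 4-set × 1-way case and a fully associative cache is the 1-set × 4-way case). This is illustrative code, not from the course.

```python
def simulate(num_sets, ways, trace):
    """Count misses; each set holds up to `ways` block numbers, kept in LRU order."""
    sets = [[] for _ in range(num_sets)]
    misses = 0
    for block in trace:
        s = sets[block % num_sets]
        if block in s:
            s.remove(block)               # hit: move to the most-recently-used position
        else:
            misses += 1
            if len(s) == ways:
                s.pop(0)                  # evict the least recently used block in the set
        s.append(block)
    return misses

trace = [0, 8, 0, 6, 8]
print(simulate(4, 1, trace))   # direct mapped (4 one-block sets)    -> 5 misses
print(simulate(2, 2, trace))   # 2-way set associative (2 sets)      -> 4 misses
print(simulate(1, 4, trace))   # fully associative (one 4-block set) -> 3 misses
```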

43 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 43 How Much Associativity Increased associativity decreases miss rate But with diminishing returns Simulation of a system with 64KB D-cache, 16-word blocks, SPEC2000 miss rates: 1-way: 10.3% 2-way: 8.6% 4-way: 8.3% 8-way: 8.1%

44 Set Associative Cache Organization Example 4-way set associative cache, cache size = 4KB, block size = 1 word. # address bits = 32 Byte offset = 2 bits Memory block number is the most significant 30 bits # of cache sets = (4 × 1024 bytes) / (4 blocks/set × 4 bytes/block) = 256 Mapping function: (Memory block number) modulo (#Sets in cache) = (Memory block number) modulo 256 # of cache sets is a power of 2 (i.e. 256), so the low-order bits of the memory block number select the set
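
A sketch of the geometry arithmetic for this configuration; the set count, and therefore the index width, follow directly from the sizes (names are illustrative).

```python
CACHE_BYTES = 4 * 1024
BLOCK_BYTES = 4             # one word per block
WAYS = 4
ADDR_BITS = 32

num_blocks = CACHE_BYTES // BLOCK_BYTES           # 1024 blocks
num_sets = num_blocks // WAYS                     # 256 sets
offset_bits = 2                                   # byte offset within a word
index_bits = num_sets.bit_length() - 1            # 8 bits, since 2**8 = 256
tag_bits = ADDR_BITS - index_bits - offset_bits   # 22 bits
print(num_sets, index_bits, tag_bits)             # 256 8 22

def set_index(memory_block_number):
    return memory_block_number % num_sets         # the low-order 8 bits of the block number
```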

45 45 Set Associative Cache Organization

46 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 46 Block Replacement Policy Direct mapped: no choice Set associative Prefer non-valid block, if there is one Otherwise, choose among blocks in the set Least-recently used (LRU) Choose the block unused for the longest time Simple for 2-way, manageable for 4-way, too hard beyond that Random Gives approximately the same performance as LRU for high associativity

47 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 47 Multilevel Caches Primary cache attached to CPU Small, but fast Level-2 cache services misses from primary cache Larger than primary cache Slower than primary cache Faster than main memory Main memory services L-2 cache misses Some high-end systems include L-3 cache

48 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 48 Multilevel Cache Example Given CPU base CPI = 1, clock rate = 4GHz Miss rate/instruction = 2% Main memory access time = 100ns With just primary cache Miss penalty = 100ns/0.25ns = 400 cycles Effective CPI = 1 + 0.02 × 400 = 9

49 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 49 Example (cont.) Now add L-2 cache Access time = 5ns Global miss rate to main memory = 0.5% Primary miss with L-2 hit Penalty = 5ns/0.25ns = 20 cycles Primary miss with L-2 miss Extra penalty = 100ns/0.25ns = 400 cycles CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4 Performance ratio = 9/3.4 = 2.6
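
The same numbers fall out of the stall-cycle arithmetic; a sketch (0.25 ns clock at 4 GHz, names illustrative):

```python
clock_ns = 0.25                      # 4 GHz clock
base_cpi = 1.0
l1_miss_rate = 0.02                  # misses per instruction
main_mem_penalty = 100 / clock_ns    # 400 cycles

cpi_l1_only = base_cpi + l1_miss_rate * main_mem_penalty          # 1 + 0.02*400 = 9.0

l2_hit_penalty = 5 / clock_ns        # 20 cycles for an L1 miss that hits in L2
global_miss_rate = 0.005             # L1 misses that also miss in L2, per instruction

cpi_with_l2 = (base_cpi
               + l1_miss_rate * l2_hit_penalty         # every L1 miss pays the L2 access
               + global_miss_rate * main_mem_penalty)  # L2 misses also pay main memory
print(cpi_l1_only, cpi_with_l2, cpi_l1_only / cpi_with_l2)   # 9.0  3.4  ~2.6
```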

50 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 50 Multilevel Cache Considerations Primary cache Focus on minimal hit time L-2 cache Focus on low miss rate to avoid main memory access Hit time has less overall impact Results L-1 cache usually smaller than a single cache L-1 block size smaller than L-2 block size

51 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 51 Interactions with Software Misses depend on memory access patterns Algorithm behavior Compiler optimization for memory access [Figure: algorithms compared by instructions/item, clock cycles/item, and cache misses/item]

52 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 52 Virtual Memory Use main memory as a “cache” for secondary (disk) storage Managed jointly by CPU hardware and the operating system (OS) Programs share main memory Each gets a private virtual address space holding its frequently used code and data Protected from other programs CPU and OS translate virtual addresses to physical addresses VM “block” is called a page VM “miss” is called a page fault §5.4 Virtual Memory

53 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 53 Address Translation Fixed-size pages (e.g., 4K)

54 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 54 Page Fault Penalty On page fault, the page must be fetched from disk Takes millions of clock cycles Handled by OS code Try to minimize page fault rate Fully associative placement Smart replacement algorithms

55 Virtual to Physical Address Translation How to translate a virtual address into a physical address? Use a page table stored in memory and a base register pointing to the address of the first entry of the page table Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 55

56 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 56 Page Tables Stores virtual address translation information Array of page table entries, indexed by virtual page number Page table register in CPU points to page table in physical memory If page is present in memory Page Table Entry (PTE) stores the physical page number Plus other status bits (referenced, dirty, …) If page is not present PTE can refer to location in swap space on disk

57 Example Virtual and Physical Addresses Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 57 Virtual address (32 bits): bits 31–12 = virtual page number (20 bits), bits 11–0 = page offset (12 bits). Virtual memory size 4GB, page size 4KB, so there are 2^20 virtual pages. Physical address (30 bits): bits 29–12 = physical page number (18 bits), bits 11–0 = page offset (12 bits). Physical memory size 1GB, so there are 2^18 physical pages.
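
A sketch of the field arithmetic and the translation step for these sizes (4 KB pages, 32-bit virtual and 30-bit physical addresses). The page_table dict and the sample mapping are hypothetical stand-ins for the real page table described on the following slides.

```python
PAGE_BITS = 12                       # 4 KB pages -> 12-bit page offset
VA_BITS, PA_BITS = 32, 30

vpn_bits = VA_BITS - PAGE_BITS       # 20 -> 2**20 virtual pages
ppn_bits = PA_BITS - PAGE_BITS       # 18 -> 2**18 physical pages

def translate(virtual_addr, page_table):
    vpn = virtual_addr >> PAGE_BITS                  # virtual page number (high 20 bits)
    offset = virtual_addr & ((1 << PAGE_BITS) - 1)   # page offset (low 12 bits), unchanged
    ppn = page_table[vpn]                            # a KeyError here would be a page fault
    return (ppn << PAGE_BITS) | offset

page_table = {0x12345: 0x00077}                      # hypothetical single mapping
print(hex(translate(0x12345ABC, page_table)))        # 0x77abc
```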

58 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 58 Translation Using a Page Table

59 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 59 Mapping Pages to Storage

60 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 60 Replacement and Writes To reduce page fault rate, prefer least-recently used (LRU) replacement Reference bit (aka use bit) in PTE set to 1 on access to page Periodically cleared to 0 by OS A page with reference bit = 0 has not been used recently Disk writes take millions of cycles Write a page to disk, not individual words Write through is impractical Use write-back Dirty bit in PTE set when page is written

61 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 61 Fast Translation Using a TLB Address translation would appear to require extra memory references One to access the Page Table Entry, PTE Then the actual memory access But access to page tables has good locality So use a fast cache of PTEs within the CPU Called a Translation Look-aside Buffer (TLB) Typical: 16–512 PTEs, 0.5–1 cycle for hit, 10–100 cycles for miss, 0.01%–1% miss rate Misses could be handled by hardware or software
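
A minimal sketch of the TLB idea: a small, fast structure that caches recent virtual-to-physical translations in front of the full page table. Both dictionaries and all sizes here are hypothetical stand-ins, not the actual hardware organization.

```python
PAGE_BITS = 12                                   # 4 KB pages
TLB_ENTRIES = 16                                 # small; modelled here as a dict

page_table = {vpn: vpn + 0x1000 for vpn in range(256)}   # hypothetical page table in memory
tlb = {}                                                  # cache of recently used PTEs

def translate(virtual_addr):
    vpn = virtual_addr >> PAGE_BITS
    if vpn in tlb:                     # TLB hit: no extra memory reference for the PTE
        ppn = tlb[vpn]
    else:                              # TLB miss: read the PTE from the page table in memory
        ppn = page_table[vpn]          # a missing entry here would mean a page fault
        if len(tlb) >= TLB_ENTRIES:
            tlb.pop(next(iter(tlb)))   # crude replacement; real TLBs use LRU or random
        tlb[vpn] = ppn
    return (ppn << PAGE_BITS) | (virtual_addr & ((1 << PAGE_BITS) - 1))
```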

62 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 62 Fast Translation Using a TLB

63 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 63 TLB Misses If page is in memory Load the PTE from memory and retry Could be handled in hardware Can get complex for more complicated page table structures Or in software Raise a special exception, with optimized handler If page is not in memory (page fault) OS handles fetching the page and updating the page table Then restart the faulting instruction

64 TLB Miss Handler TLB miss indicates either Page is present in main memory, but the associated PTE is not in the TLB Page not present (the TLB is a cache for PTEs of pages residing in memory) Must recognize TLB miss before destination register is overwritten Raise exception Handler copies PTE from memory to TLB Then restarts instruction If page not present, page fault will occur

65 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 65 Page Fault Handler Use faulting virtual address to find PTE Locate page on disk Choose physical page to replace If dirty, write to disk first Read page from disk into memory and update page table Make the process that caused the page fault runnable again Restart from faulting instruction

66 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 66 TLB and Cache Interaction If cache tag uses physical address Need to translate before cache lookup Alternative: use virtual address tag

67 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 67 The Memory Hierarchy Common principles apply at all levels of the memory hierarchy Based on notions of caching At each level in the hierarchy Block placement Finding a block Replacement on a miss Write policy §5.5 A Common Framework for Memory Hierarchies The BIG Picture

68 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 68 Block Placement Determined by associativity Direct mapped (1-way associative) One choice for placement n-way set associative n choices within a set Fully associative Any location Higher associativity reduces miss rate Increases complexity, cost, and access time

69 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 69 Finding a Block Hardware caches Reduce comparisons to reduce cost Virtual memory Full table lookup makes full associativity feasible Benefit in reduced miss rate

Associativity            Location method                                 Tag comparisons
Direct mapped            Index                                           1
n-way set associative    Set index, then search entries within the set  n
Fully associative        Search all entries                              #entries
Fully associative        Full lookup table (as in virtual memory)       0

70 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 70 Replacement Choice of entry to replace on a miss Least recently used (LRU) Complex and costly hardware for high associativity Random Close to LRU, easier to implement Virtual memory LRU approximation with hardware support

71 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 71 Write Policy Write-through Update both upper and lower levels Simplifies replacement, but may require write buffer Write-back Update upper level only Update lower level when block is replaced Need to keep more state Virtual memory Only write-back is feasible, given disk write latency

72 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 72 Sources of Misses Compulsory misses (aka cold start misses) First access to a block Capacity misses Due to finite cache size A replaced block is later accessed again Conflict misses (aka collision misses) In a non-fully associative cache Due to competition for entries in a set Would not occur in a fully associative cache of the same total size

73 Cache Design Trade-offs

Design change            Effect on miss rate           Negative performance effect
Increase cache size      Decrease capacity misses      May increase access time
Increase associativity   Decrease conflict misses      May increase access time
Increase block size      Decrease compulsory misses    Increases miss penalty. For very large block size, may increase miss rate due to pollution.

Pollution: an executing program loads data into the cache unnecessarily, causing other needed data to be evicted from the cache and degrading performance. Remember the types of misses: {Compulsory, Capacity, Conflict}

74 Skip section 5.6 “Virtual Machines” Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 74

75 Next: Section 5.7 Using a Finite State Machine to Control A Simple Cache Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 75

76 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 76 Cache Control (Design of Controller) Example cache characteristics Direct-mapped, write-back, write allocate (i.e. policy on a write miss: allocate block followed by the write-hit action) 32-bit byte addresses Block size: 4 words (16 bytes) Cache size: 16 KB (1024 blocks) Valid bit and dirty bit per block Blocking cache CPU waits until access is complete §5.7 Using a Finite State Machine to Control A Simple Cache Address fields: Tag = bits 31–14 (18 bits), Index = bits 13–4 (10 bits), Offset = bits 3–0 (4 bits)
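
The tag/index/offset widths on this slide follow from the stated geometry; a quick sketch of the arithmetic:

```python
import math

ADDR_BITS = 32
BLOCK_BYTES = 16                                    # 4 words per block
NUM_BLOCKS = 1024                                   # 16 KB / 16 bytes, direct mapped

offset_bits = int(math.log2(BLOCK_BYTES))           # 4  (address bits 3-0)
index_bits = int(math.log2(NUM_BLOCKS))             # 10 (address bits 13-4)
tag_bits = ADDR_BITS - index_bits - offset_bits     # 18 (address bits 31-14)
print(offset_bits, index_bits, tag_bits)            # 4 10 18
```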

77 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 77 Interface Signals CPU ↔ Cache: Read/Write, Valid (when set to 1, indicates that there is a cache operation), 32-bit Address, 32-bit Write Data, 32-bit Read Data, Ready (when set to 1, indicates that the cache operation is complete). Cache ↔ Memory: Read/Write, Valid (when set to 1, indicates that there is a memory operation), 32-bit Address, 128-bit Write Data, 128-bit Read Data, Ready (when set to 1, indicates that the memory operation is complete).

78 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 78 Review: Finite State Machines Use an FSM to sequence control steps (design the controller) Set of states, transition on each clock edge State values are binary encoded Current state stored in a register Next state = fn(current state, current inputs) Control output signals = fo(current state)

79 CSE431 Chapter 5A.79 Irwin, PSU, 2008 Four State Cache Controller States and transitions: Idle – wait for a valid CPU request, then go to Compare Tag. Compare Tag – if valid && hit: set valid, set tag, if write set dirty, mark cache ready, and return to Idle; on a cache miss with a clean old block, go to Allocate; on a cache miss with a dirty old block, go to Write Back. Write Back – write the old block to memory; stay here while memory is not ready, then go to Allocate. Allocate – read the new block from memory; stay here while memory is not ready, then go back to Compare Tag.
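
As a behavioral sketch of these transitions (Python used only to describe the next-state function; the state names and boolean inputs are assumptions for illustration, not the actual control signals):

```python
def next_state(state, cpu_valid, hit, old_block_dirty, memory_ready):
    """Next controller state, following the four-state diagram above."""
    if state == "IDLE":
        return "COMPARE_TAG" if cpu_valid else "IDLE"
    if state == "COMPARE_TAG":
        if hit:                        # set valid & tag; on a write, set dirty; cache ready
            return "IDLE"
        return "WRITE_BACK" if old_block_dirty else "ALLOCATE"
    if state == "WRITE_BACK":          # write the old (dirty) block to memory
        return "ALLOCATE" if memory_ready else "WRITE_BACK"
    if state == "ALLOCATE":            # read the new block from memory
        return "COMPARE_TAG" if memory_ready else "ALLOCATE"
    raise ValueError(f"unknown state {state}")
```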

80 §5.8 Parallelism and Memory Hierarchies: Cache Coherence Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 80 80

81 CSE431 Chapter 5A.81 Irwin, PSU, 2008 Cache Coherence in Multicores In future multicore processors it is likely that the cores will share a common physical address space, causing a cache coherence problem. [Figure: two cores, each with a private L1 I$ and L1 D$, sharing a unified L2]

82 CSE431 Chapter 5A.82 Irwin, PSU, 2008 Cache Coherence in Multicores In future multicore processors it is likely that the cores will share a common physical address space, causing a cache coherence problem. [Figure: X = 0 is read into a core's cache and 1 is then written to X, so the two cores' caches can hold different values for X]

83 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 83 Coherence Defined Informally: If caches are coherent, then reads return the most recently written value Formally: If core 1 writes X and then core 1 reads X (with no intervening writes), the read operation returns the written value Core 1 writes X then core 2 reads X (sufficiently later), the read operation returns the written value Core 1 writes X then core 2 writes X, cores 1 & 2 see the writes in the same order End up with the same final value for X

84 CSE431 Chapter 5A.84 Irwin, PSU, 2008 A Coherent Memory System Any read of a data item should return the most recently written value of the data item Coherence – defines what values can be returned by a read Writes to the same location are serialized (two writes to the same location must be seen in the same order by all cores) Consistency – determines when a written value will be returned by a read

85 CSE431 Chapter 5A.85 Irwin, PSU, 2008 A Coherent Memory System (cont.) To enforce coherence, caches must provide Replication of shared data items in multiple cores' caches (e.g. without accessing the L2 cache in our example) Replication reduces both latency and contention for a shared data item that is being read Migration of shared data items to a core's local cache Migration reduces the latency of access to the data and the bandwidth demand on the shared memory (L2 in our example)

86 CSE431 Chapter 5A.86 Irwin, PSU, 2008 Cache Coherence Protocols Need a hardware protocol to ensure cache coherence; the most popular of the cache coherence protocols is snooping The cache controllers monitor (snoop) the broadcast medium (e.g., bus) to determine if their cache has a copy of a block that is requested Write invalidate protocol – writes require exclusive access and invalidate all other copies Exclusive access ensures that no other readable or writable copies of an item exist

87 CSE431 Chapter 5A.87Irwin, PSU, 2008 Cache Coherence Protocols (cont.)  If two cores attempt to write the same data at the same time, one of them wins the race causing the other core’s copy to be invalidated.  For the other core to complete, it must obtain a new copy of the data which must now contain the updated value – thus enforcing write serialization 87

88 CSE431 Chapter 5A.88 Irwin, PSU, 2008 Handling Writes Ensuring that all other processors sharing data are informed of writes can be handled in two ways: 1. Write-update (write-broadcast) – the writing processor broadcasts the new data over the bus and all copies are updated All writes go to the bus, so bus traffic is higher Since new values appear in caches sooner, this can reduce latency

89 CSE431 Chapter 5A.89 Irwin, PSU, 2008 Handling Writes (cont.) 2. Write-invalidate – the writing processor issues an invalidation signal on the bus; cache snoops check to see if they have a copy of the data and, if so, invalidate their cache block containing the word (this allows multiple readers but only one writer) Uses the bus only on the first write, so bus traffic is lower and bus bandwidth is used better

90 CSE431 Chapter 5A.90 Irwin, PSU, 2008 Example of Snooping Invalidation [Figure: two cores, each with a private L1 I$ and L1 D$, sharing a unified L2]

91 CSE431 Chapter 5A.91 Irwin, PSU, 2008 Example of Snooping Invalidation When the second miss (second read) by Core 2 occurs, Core 1 responds with the value, canceling the response from the L2 cache (and also updating the L2 copy). [Figure: Core 1 reads X (initially 0), Core 2 reads X, Core 1 writes 1 to X invalidating Core 2's copy, and Core 2's next read of X misses and receives X = 1]

92 Invalidating Snooping Protocols (cont.)

Core activity          Bus activity       Core 1's L1 cache  Core 2's L1 cache  Contents of L2 cache
(initially)                                                                     X = 0
Core 1 reads X         Cache miss for X   0                                     0
Core 2 reads X         Cache miss for X   0                  0                  0
Core 1 writes 1 to X   Invalidate for X   1                  Invalid            0
Core 2 reads X         Cache miss for X   1                  1                  1

When the second miss (second read) by Core 2 occurs, Core 1 responds with the value, canceling the response from the L2 cache (and also updating the L2 copy).

93 Invalidating Snooping Protocols A processor's cache gets exclusive access to a block when that processor writes to the block The exclusive access is achieved by broadcasting an "invalidate" message on the bus, which makes copies of the block in other processors' caches invalid A subsequent read by another processor produces a cache miss The owning cache supplies the updated value

94 Skip sections 5.9, 5.10, 5.11 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 94

95 §5.12 Concluding Remarks Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 95

96 Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 96 Concluding Remarks Fast memories are small, large memories are slow We really want fast, large memories; caching gives this illusion Principle of locality Programs use a small part of their memory space frequently Memory hierarchy L1 cache → L2 cache → … → DRAM memory → disk Memory system design is critical for multiprocessors (e.g. multicore microprocessors)

