
1 Memory & Cache
Lect-12: Memory & Cache
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy

2 Memory Technology
§5.1 Introduction
Static RAM (SRAM): 0.5 ns – 2.5 ns, $2000 – $5000 per GB
Dynamic RAM (DRAM): 50 ns – 70 ns, $20 – $75 per GB
Magnetic disk: 5 ms – 20 ms, $0.20 – $2 per GB
Ideal memory: the access time of SRAM with the capacity and cost/GB of disk

3 Principle of Locality
Programs access a small proportion of their address space at any time.
Temporal locality: items accessed recently are likely to be accessed again soon (e.g., instructions in a loop, induction variables).
Spatial locality: items near those accessed recently are likely to be accessed soon (e.g., sequential instruction access, array data).
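A short C sketch (illustrative, not from the original slides) showing both kinds of locality in one loop:

```c
#include <stddef.h>

/* sum exhibits temporal locality: it is touched on every iteration.
   a[i] exhibits spatial locality: consecutive elements share cache blocks. */
long sum_array(const int *a, size_t n) {
    long sum = 0;                 /* reused every iteration (temporal) */
    for (size_t i = 0; i < n; i++)
        sum += a[i];              /* sequential accesses (spatial) */
    return sum;
}
```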

4 Taking Advantage of Locality
Memory hierarchy:
Store everything on disk.
Copy recently accessed (and nearby) items from disk to a smaller DRAM memory (main memory).
Copy more recently accessed (and nearby) items from DRAM to a smaller SRAM memory (cache memory attached to the CPU).

5 Memory Hierarchy Levels
Block (aka line): unit of copying; may be multiple words.
If accessed data is present in the upper level:
Hit: access satisfied by the upper level.
Hit ratio = hits / accesses.
If accessed data is absent:
Miss: block copied from the lower level; the time taken is the miss penalty.
Miss ratio = misses / accesses = 1 − hit ratio.
The accessed data is then supplied from the upper level.

6 Cache Memory
§5.2 The Basics of Caches
Cache memory: the level of the memory hierarchy closest to the CPU.
Given accesses X1, …, Xn−1, Xn:
How do we know if the data is present?
Where do we look?

7 Direct Mapped Cache
Location determined by address.
Direct mapped: only one choice:
(Block address) modulo (#Blocks in cache)
If #Blocks is a power of 2, use the low-order address bits.
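A minimal C sketch of the mapping (names are illustrative): when the block count is a power of two, the modulo reduces to a mask of the low-order address bits.

```c
#include <stdint.h>

#define NUM_BLOCKS 8u   /* a power of 2 */

/* (Block address) modulo (#Blocks): with a power-of-2 block count,
   the modulo is just the low-order bits of the block address. */
uint32_t cache_index(uint32_t block_addr) {
    return block_addr & (NUM_BLOCKS - 1);   /* same as block_addr % NUM_BLOCKS */
}
```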

8 Tags and Valid Bits
How do we know which particular block is stored in a cache location?
Store the block address as well as the data.
Actually, only the high-order bits are needed; these are called the tag.
What if there is no data in a location?
Valid bit: 1 = present, 0 = not present; initially 0.
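Putting the index, tag, and valid bit together, a minimal direct-mapped lookup might look like this sketch (field widths match the 8-block, 1-word/block example that follows; all names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 8u            /* 3 index bits */

struct cache_line {
    bool     valid;              /* initially 0: nothing stored yet */
    uint32_t tag;                /* high-order bits of the block address */
    uint32_t data;               /* one word per block */
};

static struct cache_line cache[NUM_BLOCKS];

/* Hit iff the line is valid and its tag matches the address's tag bits. */
bool lookup(uint32_t block_addr, uint32_t *word) {
    uint32_t index = block_addr & (NUM_BLOCKS - 1);
    uint32_t tag   = block_addr >> 3;            /* bits above the index */
    if (cache[index].valid && cache[index].tag == tag) {
        *word = cache[index].data;
        return true;                             /* hit */
    }
    return false;                                /* miss */
}
```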

9 Cache Example
8 blocks, 1 word/block, direct mapped. Initial state:

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

10 Cache Example
Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

11 Cache Example
Word addr  Binary addr  Hit/miss  Cache block
26         11 010       Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

12 Cache Example
Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Hit       110
26         11 010       Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

13 Cache Example
Word addr  Binary addr  Hit/miss  Cache block
16         10 000       Miss      000
3          00 011       Miss      011
16         10 000       Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

14 Cache Example
Word addr  Binary addr  Hit/miss  Cache block
18         10 010       Miss      010

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

15 Cache Example-2
A processor has a 32-byte memory and an 8-byte direct-mapped cache. Table 0 shows the current state of the cache. Write hit or miss under each address in the memory reference sequence below. Show the new state of the cache for each miss in a new table, label the table with the address, and circle the change.

16 Example-2

17 Address Subdivision

18 Tag & Cache Size
The total number of bits needed for a cache is a function of the cache size, the address size, and the tag size. For the following situation:
32-bit addresses
A direct-mapped cache
The cache size is 2^n blocks, so n bits are used for the index.
The block size is 2^m words (2^(m+2) bytes), so m bits are used for the word within the block and 2 bits for the byte within the word.
The size of the tag field is 32 − (n + m + 2).
The total number of bits in a direct-mapped cache is:
2^n × (block size + tag size + valid field size)

19 Example
How many total bits are required for a direct-mapped cache with 16 KiB of data and 4-word blocks, assuming a 32-bit address?
Solution: 16 KiB is 4096 (2^12) words. With a block size of 4 words (2^2), there are 1024 (2^10) blocks. Each block has 4 × 32 = 128 bits of data, plus a tag of 32 − 10 − 2 − 2 = 18 bits, plus a valid bit. Thus, the total cache size is
2^10 × (4 × 32 + 18 + 1) = 2^10 × 147 = 147 Kibibits, or about 18.4 KiB, for a 16 KiB cache.
For this cache, the total number of bits is about 1.15 times as many as needed just for the storage of the data.
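A small C sketch of the formula from the previous slide (function and variable names are my own); running it with n = 10 and m = 2 reproduces the 147 Kibit result:

```c
#include <stdio.h>

/* Total bits for a direct-mapped cache with 2^n blocks of 2^m words,
   32-bit addresses, and one valid bit per block. */
unsigned long cache_total_bits(unsigned n, unsigned m) {
    unsigned long blocks    = 1ul << n;
    unsigned long data_bits = (1ul << m) * 32;       /* 32 bits per word */
    unsigned long tag_bits  = 32 - (n + m + 2);      /* byte offset = 2 bits */
    return blocks * (data_bits + tag_bits + 1);      /* +1 for the valid bit */
}

int main(void) {
    /* 16 KiB of data, 4-word blocks: n = 10, m = 2 */
    printf("%lu bits\n", cache_total_bits(10, 2));   /* 150528 = 147 Kibit */
    return 0;
}
```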

20 Example: Larger Block Size
Consider a cache with 64 blocks and a block size of 16 bytes. To what block number does byte address 1200 map?
Block address = 1200 / 16 = 75
Block number = 75 modulo 64 = 11
Address fields (32-bit address):
Tag:    bits 31–10  (22 bits)
Index:  bits 9–4    (6 bits)
Offset: bits 3–0    (4 bits)
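A quick C check of this decomposition (a sketch; the shifts and masks follow the field widths above):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t addr       = 1200;
    uint32_t block_addr = addr >> 4;          /* divide by 16-byte blocks -> 75 */
    uint32_t index      = block_addr & 0x3f;  /* modulo 64 blocks         -> 11 */
    uint32_t tag        = addr >> 10;         /* bits 31-10               ->  1 */
    printf("block address %u, index %u, tag %u\n", block_addr, index, tag);
    return 0;
}
```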

21 Block Size Considerations
Larger blocks should reduce the miss rate, due to spatial locality.
But in a fixed-sized cache:
Larger blocks → fewer of them → more competition → increased miss rate.
Larger blocks → pollution.
A larger miss penalty can override the benefit of the reduced miss rate.
Early restart and critical-word-first can help.

22 Cache Misses
On a cache hit, the CPU proceeds normally.
On a cache miss:
Stall the CPU pipeline.
Fetch the block from the next level of the hierarchy.
Instruction cache miss: restart instruction fetch.
Data cache miss: complete the data access.

23 Write-Through
On a data-write hit, we could just update the block in the cache, but then cache and memory would be inconsistent.
Write through: also update memory.
But this makes writes take longer. E.g., if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles:
Effective CPI = 1 + 0.1 × 100 = 11
Solution: a write buffer.
Holds data waiting to be written to memory.
The CPU continues immediately; it only stalls on a write if the write buffer is already full.
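A schematic C sketch of the write-buffer idea (buffer size and all names are illustrative assumptions, not from the slides): stores go into a small FIFO that drains to memory in the background, and the CPU only stalls when the FIFO is full.

```c
#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4u       /* small FIFO between cache and memory */

struct write_buffer {
    uint32_t addr[WB_ENTRIES];
    uint32_t data[WB_ENTRIES];
    unsigned head, count;   /* entries are drained toward memory elsewhere */
};

/* Returns true if the store was absorbed; false means the buffer is
   full and the CPU must stall until an entry drains to memory. */
bool wb_push(struct write_buffer *wb, uint32_t addr, uint32_t data) {
    if (wb->count == WB_ENTRIES)
        return false;                       /* stall case */
    unsigned tail = (wb->head + wb->count) % WB_ENTRIES;
    wb->addr[tail] = addr;
    wb->data[tail] = data;
    wb->count++;
    return true;                            /* CPU continues immediately */
}
```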

24 Write-Back
Alternative: on a data-write hit, just update the block in the cache.
Keep track of whether each block is dirty.
When a dirty block is replaced:
Write it back to memory.
A write buffer can be used to allow the replacing block to be read first.
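A minimal sketch of the dirty-bit bookkeeping on a write hit and on eviction (my own illustrative code; memory_write is a stub standing in for the memory interface):

```c
#include <stdbool.h>
#include <stdint.h>

struct wb_line {
    bool     valid;
    bool     dirty;    /* set on write hit; block differs from memory */
    uint32_t tag;
    uint32_t data;
};

/* Stub for the path to main memory. */
static void memory_write(uint32_t block_addr, uint32_t data) {
    (void)block_addr; (void)data;
}

/* On a write hit: update only the cache and mark the line dirty. */
void write_hit(struct wb_line *line, uint32_t data) {
    line->data  = data;
    line->dirty = true;
}

/* On replacement: write the old block back only if it is dirty. */
void evict(struct wb_line *line, uint32_t old_block_addr) {
    if (line->valid && line->dirty)
        memory_write(old_block_addr, line->data);
    line->valid = false;
    line->dirty = false;
}
```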

25 Write Allocation
What should happen on a write miss?
Alternatives for write-through:
Allocate on miss: fetch the block.
Write around: don't fetch the block (since programs often write a whole block before reading it, e.g., during initialization).
For write-back: usually fetch the block.

26 Example: Intrinsity FastMATH
Embedded MIPS processor:
12-stage pipeline
Instruction and data access on each cycle
Split cache: separate I-cache and D-cache
Each 16 KB: 256 blocks × 16 words/block
D-cache: write-through or write-back
SPEC2000 miss rates:
I-cache: 0.4%
D-cache: 11.4%
Weighted average: 3.2%

27 Example: Intrinsity FastMATH

28 Main Memory Supporting Caches
Use DRAMs for main memory:
Fixed width (e.g., 1 word)
Connected by a fixed-width clocked bus; the bus clock is typically slower than the CPU clock.
Example cache block read:
1 bus cycle for address transfer
15 bus cycles per DRAM access
1 bus cycle per data transfer
For a 4-word block and 1-word-wide DRAM:
Miss penalty = 1 + 4 × 15 + 4 × 1 = 65 bus cycles
Bandwidth = 16 bytes / 65 cycles ≈ 0.25 B/cycle

29 Increasing Memory Bandwidth
4-word-wide memory:
Miss penalty = 1 + 15 + 1 = 17 bus cycles
Bandwidth = 16 bytes / 17 cycles ≈ 0.94 B/cycle
4-bank interleaved memory:
Miss penalty = 1 + 15 + 4 × 1 = 20 bus cycles
Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle
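The three organizations from this and the previous slide can be compared with a few lines of C (a sketch using the slides' timing parameters: 1 cycle for the address, 15 cycles per DRAM access, 1 cycle per word transferred):

```c
#include <stdio.h>

int main(void) {
    const int addr = 1, dram = 15, xfer = 1, words = 4;

    int narrow      = addr + words * dram + words * xfer;  /* 1-word-wide: 65 */
    int wide        = addr + dram + xfer;                  /* 4-word-wide: 17 */
    int interleaved = addr + dram + words * xfer;          /* 4 banks:     20 */

    printf("narrow      %2d cycles, %.2f B/cycle\n", narrow,      16.0 / narrow);
    printf("wide        %2d cycles, %.2f B/cycle\n", wide,        16.0 / wide);
    printf("interleaved %2d cycles, %.2f B/cycle\n", interleaved, 16.0 / interleaved);
    return 0;
}
```

Interleaving gets most of the wide organization's bandwidth while keeping a 1-word-wide bus, since the four banks overlap their access times and serialize only the transfers.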

30 Advanced DRAM Organization
Bits in a DRAM are organized as a rectangular array:
DRAM accesses an entire row.
Burst mode: supply successive words from a row with reduced latency.
Double data rate (DDR) DRAM: transfer on rising and falling clock edges.
Quad data rate (QDR) DRAM: separate DDR inputs and outputs.

