
1 The Memory Hierarchy Cache, Main Memory, and Virtual Memory (Part 2) Lecture for CPSC 5155 Edward Bosworth, Ph.D. Computer Science Department Columbus State University

2 Cache Line Replacement The cache memory is always smaller than the main memory (else why have a cache?). For this reason, it is often the case that a memory block being placed into the cache must replace a memory block already there. This process is called cache replacement, and the method used to choose which block to replace is the cache replacement policy.

3 Example of a Cache Line Consider a cache memory with 256 (2^8) cache lines, each holding 16 (2^4) bytes. A 24-bit address would then be divided into two fields: a 4-bit offset into the cache line and a 20-bit memory block tag. The memory block tag is required because a number of different memory blocks can be mapped into the same cache line.

4 Direct-Mapped and Set Associative In each of these organizations, the count of cache lines is always a power of 2; in our example 256 = 2^8. For 2^L cache lines, the lower L bits of the memory block tag specify the cache line. Suppose an N-bit address with a K-bit offset. For our example, N = 24, K = 4, and L = 8; the memory block tag is N – K = 20 bits and the cache tag is N – L – K = 12 bits.
Address bits:    (N – L – K) bits | L bits     | K bits
Cache address:   Cache Tag        | Cache Line | Offset
Memory address:  Memory Block Tag ((N – K) bits)          | Offset

5 Memory Tag 0xAB712 In our example, which assumes a 24-bit memory address and a 16-byte cache line, the 20-bit memory tag 0xAB712 references a block containing addresses 0xAB7120 – 0xAB712F. For 256 (2^8) cache lines and a direct-mapped cache, this block would go to line 0x12, and the cache tag would be 0xAB7. We can reconstruct the memory tag from the cache tag and the cache line number.
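To make the arithmetic concrete, here is a small C sketch (not part of the original slides) that extracts the offset, cache line, and cache tag from the example address 0xAB7129, and then rebuilds the 20-bit memory tag from the cache tag and line number:

    #include <stdio.h>

    /* Field widths for the running example: 16-byte lines, 256 lines, 24-bit address. */
    enum { K = 4, L = 8 };   /* K = offset bits, L = cache-line (index) bits */

    int main(void) {
        unsigned addr      = 0xAB7129;                       /* byte address      */
        unsigned offset    = addr & ((1u << K) - 1);         /* 0x9               */
        unsigned line      = (addr >> K) & ((1u << L) - 1);  /* 0x12              */
        unsigned cache_tag = addr >> (K + L);                /* 0xAB7             */
        unsigned mem_tag   = (cache_tag << L) | line;        /* rebuilt: 0xAB712  */
        printf("offset=0x%X line=0x%02X cache_tag=0x%03X memory_tag=0x%05X\n",
               offset, line, cache_tag, mem_tag);
        return 0;
    }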

6 Replacement Policy
Direct mapped: no choice.
Set associative: prefer a non-valid entry, if there is one; otherwise, choose among the entries in the set.
Least-recently used (LRU): choose the entry unused for the longest time. Simple for 2-way, manageable for 4-way, too hard beyond that.
Random: gives approximately the same performance as LRU for high associativity.
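As an illustration of how cheap LRU is for a 2-way set associative cache, here is a C sketch (my own illustration, not code from the slides): a single LRU bit per set is updated on every access, and the victim is whichever way that bit points at.

    #include <stdio.h>

    /* Minimal 2-way LRU bookkeeping: a single bit per set.
       lru_bit[set] holds the index of the least recently used way. */
    #define NUM_SETS 256
    static unsigned char lru_bit[NUM_SETS];

    /* Call on every hit or fill: the other way becomes least recently used. */
    static void touch(unsigned set, unsigned way) {
        lru_bit[set] = (way == 0) ? 1 : 0;
    }

    /* On a miss, evict the least recently used way. */
    static unsigned victim(unsigned set) {
        return lru_bit[set];
    }

    int main(void) {
        touch(0x12, 0);                                          /* way 0 just accessed */
        printf("victim in set 0x12: way %u\n", victim(0x12));    /* prints way 1        */
        return 0;
    }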

7 The Dirty Bit and Replacement Consider a cache line. If the valid bit V = 0, no data has ever been placed in that line, so it is an ideal place to put a new block. (This choice does not arise in a direct-mapped cache, where each block has only one possible line.) In some cache organizations, the dirty bit can be used to select the cache line to replace when all candidate lines have V = 1. If a cache line has D = 0 (is not dirty), it is not necessary to write its contents back to main memory in order to avoid data loss.

8 Replacement for Cache Line 0x12 Suppose memory block 0xAB712 is located in cache line 0x12 of a cache with 256 cache lines. Now the CPU references memory block 0xCD112. This must also go into line 0x12. A direct-mapped cache contains only one block per cache line, so block 0xAB712 is replaced. If the block is dirty, it must be written back to main memory before being overwritten.

9 Replacement for Set Associativity In an N-way set associative cache, each cache line (set) can hold N memory blocks, each with its own tag. Suppose memory block 0xCD112 must be read into cache line 0x12 and is not already there. First check whether any of the N ways (four, in a 4-way cache) has V = 0; if so, that way holds no data, so read the block into it. The next choice is a way with D = 0: a valid entry that is not dirty can simply be overwritten. Otherwise, use some other policy (such as LRU or random) to pick the victim.
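A rough C sketch of that priority order (invalid first, then clean, then fall back to another policy); the structure and the random fallback are illustrative assumptions, not the slides' definition:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define WAYS 4

    struct way_state {
        bool valid;    /* V bit */
        bool dirty;    /* D bit */
    };

    /* Pick a victim within one set: prefer an invalid way, then a clean way,
       and otherwise fall back to some other policy (random, for brevity). */
    static unsigned pick_victim(const struct way_state set[WAYS]) {
        for (unsigned w = 0; w < WAYS; w++)
            if (!set[w].valid)
                return w;                  /* empty slot: nothing to evict      */
        for (unsigned w = 0; w < WAYS; w++)
            if (!set[w].dirty)
                return w;                  /* clean block: no write-back needed */
        return (unsigned)(rand() % WAYS);  /* all ways valid and dirty          */
    }

    int main(void) {
        struct way_state set[WAYS] = { {true, true}, {true, false}, {true, true}, {true, true} };
        printf("victim way: %u\n", pick_victim(set));   /* way 1: valid but clean */
        return 0;
    }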

10 Writing to a Cache Suppose that the CPU writes to memory. The data written will be sent to the cache. What happens next depends on whether or not the target memory block is present in the cache. If the block is present, there is a hit. On a hit, the dirty bit for the block is set: D = 1. If the block is not present, a block is chosen for replacement, and the target block is read into the cache. The write proceeds and D = 1.
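To tie the pieces together, here is a self-contained toy model in C of a direct-mapped, write-back, write-allocate cache (an illustrative sketch with made-up sizes, not the lecture's cache): on a write miss it writes back a dirty victim, reads in the target block, performs the write, and sets D = 1.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* A toy direct-mapped, write-back, write-allocate cache over a small
       array standing in for main memory. Sizes are illustrative only. */
    #define LINE_BYTES 16
    #define NUM_LINES  256
    #define MEM_BYTES  (NUM_LINES * LINE_BYTES * 4)   /* 4 blocks map to each line */

    static unsigned char main_memory[MEM_BYTES];

    struct cache_line {
        bool          valid, dirty;
        unsigned      tag;                 /* memory block tag */
        unsigned char data[LINE_BYTES];
    };
    static struct cache_line cache[NUM_LINES];

    static void cpu_write_byte(unsigned addr, unsigned char value) {
        unsigned offset = addr % LINE_BYTES;
        unsigned block  = addr / LINE_BYTES;          /* memory block tag        */
        unsigned line   = block % NUM_LINES;          /* direct-mapped placement */
        struct cache_line *c = &cache[line];

        if (!c->valid || c->tag != block) {           /* write miss               */
            if (c->valid && c->dirty)                 /* save the old dirty block */
                memcpy(&main_memory[c->tag * LINE_BYTES], c->data, LINE_BYTES);
            memcpy(c->data, &main_memory[block * LINE_BYTES], LINE_BYTES);  /* allocate */
            c->valid = true;
            c->tag   = block;
        }
        c->data[offset] = value;                      /* perform the write        */
        c->dirty = true;                              /* block now newer than memory */
    }

    int main(void) {
        cpu_write_byte(0x129, 0x42);                  /* byte 9 of block 0x12     */
        printf("line 0x12: valid=%d dirty=%d tag=0x%X\n",
               cache[0x12].valid, cache[0x12].dirty, cache[0x12].tag);
        return 0;
    }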

11 Write Policy
Write-through: update both the upper and lower levels. Simplifies replacement, but may require a write buffer.
Write-back: update the upper level only; update the lower level when the block is replaced. Need to keep more state.
Virtual memory: only write-back is feasible, given disk write latency.

12 Cache Misses
On a cache hit, the CPU proceeds normally.
On a cache miss: stall the CPU pipeline and fetch the block from the next level of the hierarchy.
Instruction cache miss: restart the instruction fetch.
Data cache miss: complete the data access.

13 Sources of Misses
Compulsory misses (aka cold start misses): the first access to a block.
Capacity misses: due to finite cache size; a replaced block is later accessed again.
Conflict misses (aka collision misses): occur in a non-fully-associative cache, due to competition for entries in a set; they would not occur in a fully associative cache of the same total size.

14 Write-Through
On a data-write hit, we could just update the block in the cache, but then cache and memory would be inconsistent.
Write-through: also update memory. But this makes writes take longer. For example, if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles, then effective CPI = 1 + 0.1 × 100 = 11.
Solution: a write buffer. It holds data waiting to be written to memory; the CPU continues immediately and only stalls on a write if the write buffer is already full.

15 Write-Back
Alternative: on a data-write hit, just update the block in the cache, and keep track of whether each block is dirty.
When a dirty block is replaced, write it back to memory. A write buffer can be used to allow the replacing block to be read first.

16 The Write Buffer This buffer exists between the L2 cache and main memory. It is used for memory writes.

17 What is Written to the Buffer? Consider our previous example, with the L2 cache organized into 16-byte cache lines. Suppose the byte at address 0xAB7129 is written. The value is put into the write buffer. If only the byte itself is put into the buffer, then its address 0xAB7129 is also put there, in order to identify where to put the byte. If the entire cache line is written to the buffer, then only the memory tag 0xAB712 is needed.
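One way to picture the two options is with the following C declarations (an illustrative sketch; the slides do not specify a concrete layout): a per-byte entry must carry the full address, while a per-line entry needs only the 20-bit memory tag.

    #include <stdio.h>

    #define LINE_BYTES 16

    /* Option 1: buffer a single byte; the full address must travel with it. */
    struct byte_entry {
        unsigned      address;           /* e.g. 0xAB7129                     */
        unsigned char value;             /* the byte being written            */
    };

    /* Option 2: buffer a whole cache line; the 20-bit memory tag is enough
       to identify the 16-byte block (0xAB712 covers 0xAB7120 - 0xAB712F).   */
    struct line_entry {
        unsigned      memory_tag;        /* byte address >> 4                 */
        unsigned char data[LINE_BYTES];  /* the entire line                   */
    };

    int main(void) {
        struct byte_entry b = { 0xAB7129, 0x42 };
        struct line_entry l = { 0xAB712, { 0 } };
        printf("byte entry: addr 0x%X; line entry: tag 0x%X\n", b.address, l.memory_tag);
        return 0;
    }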

18 Write Allocation
What should happen on a write miss?
Alternatives for write-through:
Allocate on miss: fetch the block.
Write around: don't fetch the block (since programs often write a whole block before reading it, e.g., on initialization).
For write-back: usually fetch the block.

19 Cache Design Trade-offs
Design change          | Effect on miss rate          | Negative performance effect
Increase cache size    | Decreases capacity misses    | May increase access time
Increase associativity | Decreases conflict misses    | May increase access time
Increase block size    | Decreases compulsory misses  | Increases miss penalty; for very large block size, may increase miss rate due to pollution

20 Block Size Considerations
Larger blocks should reduce the miss rate, due to spatial locality.
But in a fixed-sized cache, larger blocks mean fewer of them, so more competition and an increased miss rate; larger blocks also mean more pollution.
Larger blocks have a larger miss penalty, which can override the benefit of a reduced miss rate. Early restart and critical-word-first can help.

21 Multilevel Caches
Primary cache attached to the CPU: small, but fast.
Level-2 cache services misses from the primary cache: larger, slower, but still faster than main memory.
Main memory services L-2 cache misses.
Some high-end systems include an L-3 cache.

22 Multilevel Cache Example
Given: CPU base CPI = 1, clock rate = 4 GHz, miss rate per instruction = 2%, main memory access time = 100 ns.
With just the primary cache: miss penalty = 100 ns / 0.25 ns = 400 cycles, so effective CPI = 1 + 0.02 × 400 = 9.

23 Example (cont.)
Now add an L-2 cache with access time = 5 ns and a global miss rate to main memory of 0.5%.
Primary miss with L-2 hit: penalty = 5 ns / 0.25 ns = 20 cycles.
Primary miss with L-2 miss: extra penalty = 400 cycles.
CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4.
Performance ratio = 9 / 3.4 = 2.6.
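The same arithmetic, written out as a small C program (a sketch that simply reproduces the slide's numbers):

    #include <stdio.h>

    int main(void) {
        double base_cpi     = 1.0;
        double cycle_ns     = 0.25;    /* 4 GHz clock                      */
        double l1_miss_rate = 0.02;    /* misses per instruction           */
        double l2_miss_rate = 0.005;   /* global miss rate to main memory  */
        double mem_ns       = 100.0, l2_ns = 5.0;

        double mem_penalty = mem_ns / cycle_ns;   /* 400 cycles            */
        double l2_penalty  = l2_ns  / cycle_ns;   /*  20 cycles            */

        double cpi_l1_only = base_cpi + l1_miss_rate * mem_penalty;          /* 9.0 */
        double cpi_l1_l2   = base_cpi + l1_miss_rate * l2_penalty
                                      + l2_miss_rate * mem_penalty;          /* 3.4 */

        printf("L1 only: CPI = %.1f\n", cpi_l1_only);
        printf("L1 + L2: CPI = %.1f (speedup %.1fx)\n",
               cpi_l1_l2, cpi_l1_only / cpi_l1_l2);
        return 0;
    }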

24 Multilevel Cache Considerations
Primary cache: focus on minimal hit time.
L-2 cache: focus on a low miss rate, to avoid main memory access; hit time has less overall impact.
Results: the L-1 cache is usually smaller than a single cache would be, and the L-1 block size is smaller than the L-2 block size.

25 Interactions with Advanced CPUs
Out-of-order CPUs can execute instructions during a cache miss: a pending store stays in the load/store unit, dependent instructions wait in reservation stations, and independent instructions continue.
The effect of a miss depends on program data flow, which is much harder to analyse; use system simulation.

26 Main Memory Supporting Caches
Use DRAMs for main memory: fixed width (e.g., 1 word), connected by a fixed-width clocked bus whose clock is typically slower than the CPU clock.
Example cache block read: 1 bus cycle for the address transfer, 15 bus cycles per DRAM access, 1 bus cycle per data transfer.
For a 4-word block and a 1-word-wide DRAM: miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles; bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle.

27 Increasing Memory Bandwidth
4-word-wide memory: miss penalty = 1 + 15 + 1 = 17 bus cycles; bandwidth = 16 bytes / 17 cycles = 0.94 B/cycle.
4-bank interleaved memory: miss penalty = 1 + 15 + 4×1 = 20 bus cycles; bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle.
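A small C sketch of the bus-cycle accounting for the three organizations (parameters taken from the example above; the helper function is my own):

    #include <stdio.h>

    /* Bus cycles to fetch a block: one address cycle, some number of DRAM
       accesses, and some number of data-transfer cycles. */
    static int miss_penalty(int dram_accesses, int data_transfers) {
        const int addr_cycle = 1, dram_cycles = 15, xfer_cycle = 1;
        return addr_cycle + dram_accesses * dram_cycles + data_transfers * xfer_cycle;
    }

    int main(void) {
        const double block_bytes = 16.0;        /* 4-word block                  */
        int narrow      = miss_penalty(4, 4);   /* 1-word-wide DRAM:   65 cycles */
        int wide        = miss_penalty(1, 1);   /* 4-word-wide memory: 17 cycles */
        int interleaved = miss_penalty(1, 4);   /* 4-bank interleaved: 20 cycles */

        printf("1-word-wide: %d cycles, %.2f B/cycle\n", narrow,      block_bytes / narrow);
        printf("4-word-wide: %d cycles, %.2f B/cycle\n", wide,        block_bytes / wide);
        printf("interleaved: %d cycles, %.2f B/cycle\n", interleaved, block_bytes / interleaved);
        return 0;
    }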

28 Advanced DRAM Organization
Bits in a DRAM are organized as a rectangular array, and a DRAM access reads an entire row.
Burst mode: supply successive words from a row with reduced latency.
Double data rate (DDR) DRAM: transfer on both the rising and falling clock edges.
Quad data rate (QDR) DRAM: separate DDR inputs and outputs.

29 DRAM Generations A table of DRAM generations listing year introduced, chip capacity, and cost per GB: capacities grow from Kbit-scale chips through the Mbit range to a 1 Gbit chip, while the cost per GB falls to about $50 for the 1 Gbit generation.

30 Measuring Cache Performance (§5.3 Measuring and Improving Cache Performance)
Components of CPU time: program execution cycles (includes cache hit time) and memory stall cycles (mainly from cache misses).
With simplifying assumptions, memory stall cycles = (memory accesses / program) × miss rate × miss penalty = (instructions / program) × (misses / instruction) × miss penalty.

31 Cache Performance Example
Given: I-cache miss rate = 2%, D-cache miss rate = 4%, miss penalty = 100 cycles, base CPI (ideal cache) = 2, and loads & stores are 36% of instructions.
Miss cycles per instruction: I-cache: 0.02 × 100 = 2; D-cache: 0.36 × 0.04 × 100 = 1.44.
Actual CPI = 2 + 2 + 1.44 = 5.44, so the CPU with an ideal cache would be 5.44/2 = 2.72 times faster.
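The same bookkeeping in C (just the slide's numbers plugged into the stall-cycle formula above):

    #include <stdio.h>

    int main(void) {
        double base_cpi     = 2.0;
        double miss_penalty = 100.0;   /* cycles                          */
        double i_miss       = 0.02;    /* I-cache miss rate               */
        double d_miss       = 0.04;    /* D-cache miss rate               */
        double ls_fraction  = 0.36;    /* loads & stores per instruction  */

        double i_stall = i_miss * miss_penalty;               /* 2.00 cycles/instr */
        double d_stall = ls_fraction * d_miss * miss_penalty; /* 1.44 cycles/instr */
        double cpi     = base_cpi + i_stall + d_stall;        /* 5.44              */

        printf("actual CPI = %.2f (%.2fx slower than the ideal cache)\n",
               cpi, cpi / base_cpi);
        return 0;
    }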

32 Average Access Time
Hit time is also important for performance.
Average memory access time (AMAT) = hit time + miss rate × miss penalty.
Example: a CPU with a 1 ns clock, hit time = 1 cycle, miss penalty = 20 cycles, and I-cache miss rate = 5% gives AMAT = 1 + 0.05 × 20 = 2 ns, or 2 cycles per instruction.

33 Performance Summary
As CPU performance increases, the miss penalty becomes more significant.
Decreasing the base CPI means a greater proportion of time is spent on memory stalls; increasing the clock rate means memory stalls account for more CPU cycles.
Can't neglect cache behavior when evaluating system performance.

34 Cache Control (§5.7 Using a Finite State Machine to Control a Simple Cache)
Example cache characteristics: direct-mapped, write-back, write allocate; block size 4 words (16 bytes); cache size 16 KB (1024 blocks); 32-bit byte addresses; a valid bit and dirty bit per block; blocking cache (the CPU waits until the access is complete).
Address fields: Tag (18 bits), Index (10 bits), Offset (4 bits).

35 Interface Signals
Both the CPU–cache interface and the cache–memory interface use the same signals: Read/Write, Valid, Address, Write Data, Read Data, and Ready. Addresses are 32 bits, and each access may take multiple cycles.

36 Interactions with Software Misses depend on memory access patterns: algorithm behavior and compiler optimization for memory access.

37 Multilevel On-Chip Caches (§5.10 Real Stuff: The AMD Opteron X4 and Intel Nehalem)
Intel Nehalem 4-core processor. Per core: 32 KB L1 I-cache, 32 KB L1 D-cache, 512 KB L2 cache.

38 3-Level Cache Organization
Intel Nehalem:
L1 caches (per core): L1 I-cache: 32 KB, 64-byte blocks, 4-way, approx LRU replacement, hit time n/a. L1 D-cache: 32 KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a.
L2 unified cache (per core): 256 KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a.
L3 unified cache (shared): 8 MB, 64-byte blocks, 16-way, replacement n/a, write-back/allocate, hit time n/a.
AMD Opteron X4:
L1 caches (per core): L1 I-cache: 32 KB, 64-byte blocks, 2-way, LRU replacement, hit time 3 cycles. L1 D-cache: 32 KB, 64-byte blocks, 2-way, LRU replacement, write-back/allocate, hit time 9 cycles.
L2 unified cache (per core): 512 KB, 64-byte blocks, 16-way, approx LRU replacement, write-back/allocate, hit time n/a.
L3 unified cache (shared): 2 MB, 64-byte blocks, 32-way, replace the block shared by the fewest cores, write-back/allocate, hit time 32 cycles.
n/a: data not available.

39 Virtual Memory and Cache Memory Any modern computer supports both virtual memory and cache memory. Consider the following example, based on results in previous lectures: byte-addressable memory; a 32-bit logical address, giving a logical address space of 2^32 bytes; 2^24 bytes of physical memory, requiring 24 bits to address; virtual memory implemented using a page size of 2^12 = 4096 bytes; cache memory implemented using a fully associative cache with a cache line size of 16 bytes.

40 The Two Address Spaces
The logical address is divided as follows: bits 31 – 12 are the page number, and bits 11 – 0 are the offset within the page.
The physical address is divided as follows: bits 23 – 4 are the memory tag, and bits 3 – 0 are the offset within the cache line.
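A short C sketch (my own illustration of the field boundaries above, not code from the lecture) pulls the page number and page offset out of a 32-bit logical address, then the memory tag and line offset out of the resulting 24-bit physical address; the frame number used here is a made-up value standing in for a page-table lookup.

    #include <stdio.h>

    int main(void) {
        unsigned logical  = 0x12345678;          /* example 32-bit logical address  */
        unsigned page_num = logical >> 12;       /* bits 31-12: page number         */
        unsigned page_off = logical & 0xFFF;     /* bits 11-0: offset in 4 KB page  */

        /* Hypothetical frame number; in a real system it comes from the page table. */
        unsigned frame    = 0x0AB7;
        unsigned physical = (frame << 12) | page_off;    /* 24-bit physical address */

        unsigned mem_tag  = physical >> 4;       /* bits 23-4: memory tag            */
        unsigned line_off = physical & 0xF;      /* bits 3-0: offset in 16-byte line */

        printf("page 0x%05X, page offset 0x%03X\n", page_num, page_off);
        printf("memory tag 0x%05X, line offset 0x%X\n", mem_tag, line_off);
        return 0;
    }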

41 VM and Cache: The Complete Process

42 The Virtually Mapped Cache

43 Problems with Virtually Mapped Caches Cache misses still have to invoke the virtual memory system (more on that later); this in itself is not a problem. One real problem with virtual mapping is that the translation from virtual addresses to physical addresses varies between processes, so the same virtual address can refer to different data in different processes. This is called the aliasing problem. A solution is to extend the virtual address with a process id.
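One common realization of that idea (a sketch under my own assumptions, using the usual name ASID for the process identifier) is to store the process id alongside the virtual tag in each cache line, so that a hit requires both to match:

    #include <stdbool.h>
    #include <stdio.h>

    /* Tag entry for a virtually mapped cache line: a hit requires both the
       virtual tag and the address-space (process) identifier to match. */
    struct vcache_tag {
        bool     valid;
        unsigned asid;    /* process / address-space id              */
        unsigned vtag;    /* tag bits taken from the virtual address */
    };

    static bool is_hit(const struct vcache_tag *entry, unsigned current_asid, unsigned vtag) {
        return entry->valid && entry->asid == current_asid && entry->vtag == vtag;
    }

    int main(void) {
        struct vcache_tag t = { true, 7, 0xAB7 };
        printf("%d %d\n", is_hit(&t, 7, 0xAB7), is_hit(&t, 8, 0xAB7));   /* prints: 1 0 */
        return 0;
    }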

