Cache Memory.

Outline
Memory Hierarchy
Direct-Mapped Cache
Write-Through, Write-Back
Cache Replacement Policy
Examples

Principle of Locality
Caches work on a principle known as locality of reference. This principle asserts that programs continually use and re-use the same locations.
- Instructions: loops, common subroutines
- Data: look-up tables, data sets (arrays)
Temporal locality (locality in time): the same location will be referenced again soon.
Spatial locality (locality in space): nearby locations will be referenced soon.
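
To make the two kinds of locality concrete, here is a small illustrative C fragment (not from the original slides): the accumulator and the loop body are reused on every iteration (temporal locality), while consecutive array elements sit next to each other in memory (spatial locality).

#include <stddef.h>

double sum_array(const double *a, size_t n)
{
    double sum = 0.0;                 /* reused every iteration: temporal locality        */
    for (size_t i = 0; i < n; i++) {
        sum += a[i];                  /* a[0], a[1], ... are adjacent: spatial locality    */
    }
    return sum;                       /* the loop instructions themselves are re-fetched
                                         each iteration, another form of temporal locality */
}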

The Tradeoff
Moving down the hierarchy (from registers toward disk and virtual memory), each level is larger, slower, and cheaper per byte. (Numbers are for a DEC Alpha 21264 at 700 MHz.)

Level         Size            Access time   Cost         Block size
Registers     608 B           1.4 ns        --           4 B
L1 cache      128 KB          4.2 ns        --           4 B
L2 cache      512 KB - 4 MB   16.8 ns       $90/MB       16 B
Main memory   128 MB          112 ns        $2-6/MB      4-8 KB
Disk          27 GB           9 ms          $0.01/MB     --

Moore's Law and the Processor-DRAM Gap
[Figure: performance (log scale, 1 to 1000) versus time, 1980-2000.] Processor performance ("Moore's Law") grows about 60% per year (2x every 1.5 years), while DRAM performance grows only about 9% per year (2x every 10 years), so the processor-memory performance gap (latency) grows roughly 50% per year. Note that the x86 did not have an on-chip cache until 1989.

Levels in the Memory Hierarchy
[Figure: pyramid with the processor at the top and levels 1, 2, 3, ..., n below it.] Moving from higher to lower levels, the distance from the processor increases and the cost per MB decreases, while the size of memory at each level grows.

Cache Design
How do we organize the cache? Where does each memory address map to? (Remember that the cache holds only a subset of memory, so multiple memory addresses map to the same cache location.) How do we know which elements are in the cache? How do we quickly locate them?

Cache Mappings
There are three methods for mapping main memory addresses to cache addresses:
- Fully associative cache
- Direct-mapped cache
- Set-associative cache

Direct-Mapped Cache
In a direct-mapped cache, each memory address is associated with exactly one possible block within the cache. Therefore, we only need to look in a single location in the cache to see whether the data is present. A block is the unit of transfer between the cache and memory.

Direct-Mapped Cache: 4-Byte Example
Consider the simplest cache one can build: a direct-mapped cache of only 4 bytes, with cache indices 0-3 and memory addresses 0 through F. Cache location 0 can be occupied by data from memory locations 0, 4, 8, C, and so on; cache location 1 can be occupied by data from memory locations 1, 5, 9, D, etc. In general, the cache location a memory address maps to is uniquely determined by the two least significant bits of the address (the cache index); here, cache location 0 can hold any memory location whose address is a multiple of 4. With so many memory locations to choose from, which one should we place in the cache? The one we have read or written most recently, because by the principle of temporal locality it is the one we are most likely to need again soon. And of all the memory locations that can be placed in cache location 0, how can we tell which one is currently there?

Issues with Direct-Mapped
Since multiple memory addresses map to the same cache index, how do we tell which one is in there? And what if we have a block size greater than 1 byte? Result: divide the memory address into three fields:

| tag | index | offset |   (e.g. tttttttttttttttttt iiiiiiiiii oooo)

tag: checked to see whether we have the correct block
index: selects the block (row) in the cache
offset: selects the byte within the block

Direct-Mapped Cache Terminology
All fields are read as unsigned integers.
Index: specifies the cache index (which "row" of the cache we should look in).
Offset: once we have found the correct block, specifies which byte within the block we want.
Tag: the remaining bits after offset and index are determined; these are used to distinguish between all the memory addresses that map to the same location.
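
These three fields can be extracted with a few shifts and masks. The sketch below is illustrative only (the function and parameter names are not from the slides) and assumes the block size and the number of rows are powers of two.

#include <stdint.h>

struct addr_fields { uint32_t tag, index, offset; };

/* Illustrative decomposition of a 32-bit address into tag/index/offset.
   block_bytes and num_rows must be powers of two. */
static struct addr_fields split_address(uint32_t addr,
                                        uint32_t block_bytes,
                                        uint32_t num_rows)
{
    uint32_t offset_bits = 0, index_bits = 0;
    while ((1u << offset_bits) < block_bytes) offset_bits++;
    while ((1u << index_bits) < num_rows) index_bits++;

    struct addr_fields f;
    f.offset = addr & (block_bytes - 1);                /* byte within the block       */
    f.index  = (addr >> offset_bits) & (num_rows - 1);  /* which cache row to look in  */
    f.tag    = addr >> (offset_bits + index_bits);      /* identifies the stored block */
    return f;
}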

Block Size Tradeoff: Conclusions
[Figure: miss rate, miss penalty, and average access time, each plotted against block size.] A larger block size exploits spatial locality, so the miss rate initially falls; but with fewer blocks in the cache it compromises temporal locality, and the miss rate eventually rises again. A larger block size also increases the miss penalty, so the average access time first improves and then worsens as both the miss penalty and the miss rate grow.

Direct-Mapped Cache Example
Suppose we have 16 KB of data in a direct-mapped cache with 4-word blocks. Determine the size of the tag, index, and offset fields if we are using a 32-bit architecture.
Offset: needs to specify the correct byte within a block. A block contains 4 words = 16 bytes = 2^4 bytes, so we need 4 bits to specify the correct byte.

Direct-Mapped Cache Example
Index (an index into an "array of blocks"): needs to specify the correct row in the cache. The cache contains 16 KB = 2^14 bytes and a block contains 2^4 bytes (4 words). Since there is one block per row, rows/cache = blocks/cache = (bytes/cache) / (bytes/row) = 2^14 / 2^4 = 2^10 rows/cache, so we need 10 bits to specify this many rows.

Direct-Mapped Cache Example
Tag: use the remaining bits as the tag. Tag length = memory address length - offset - index = 32 - 4 - 10 = 18 bits, so the tag is the leftmost 18 bits of the memory address. Why not use the full 32-bit address as the tag? All bytes within a block share the same address apart from the offset, so the 4 offset bits are unnecessary; and the index is the same for every address that maps to a given row, so it is redundant in the tag check and can be left off to save storage (10 bits in this example).
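
As a quick sanity check, the field widths for this configuration (16 KB of data, 16-byte blocks, 32-bit addresses) can be computed directly; this small program is only a sketch, with names of my own choosing.

#include <stdio.h>

/* Integer log2 for power-of-two values (helper of my own, not from the slides). */
static unsigned log2u(unsigned x)
{
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void)
{
    unsigned cache_bytes = 16 * 1024;    /* 16 KB of data            */
    unsigned block_bytes = 4 * 4;        /* 4 words * 4 bytes = 16 B */
    unsigned addr_bits   = 32;

    unsigned offset_bits = log2u(block_bytes);                   /* 4  */
    unsigned index_bits  = log2u(cache_bytes / block_bytes);     /* 10 */
    unsigned tag_bits    = addr_bits - index_bits - offset_bits; /* 18 */

    printf("offset = %u bits, index = %u bits, tag = %u bits\n",
           offset_bits, index_bits, tag_bits);
    return 0;
}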

Accessing Data in a Direct-Mapped Cache
Example: 16 KB of data, direct-mapped, 4-word blocks. Read the 4 addresses 0x00000014, 0x0000001C, 0x00000034, 0x00008014. The memory values below show only the cache/memory level of the hierarchy:

Address (hex)   Value of word
0x00000010      a
0x00000014      b
0x00000018      c
0x0000001C      d
0x00000030      e
0x00000034      f
0x00000038      g
0x0000003C      h
0x00008010      i
0x00008014      j
0x00008018      k
0x0000801C      l

Accessing Data in a Direct-Mapped Cache
The 4 addresses 0x00000014, 0x0000001C, 0x00000034, 0x00008014, divided (for convenience) into tag, index, and byte-offset fields:

Tag                 Index       Offset
000000000000000000  0000000001  0100   (0x00000014)
000000000000000000  0000000001  1100   (0x0000001C)
000000000000000000  0000000011  0100   (0x00000034)
000000000000000010  0000000001  0100   (0x00008014)

Accessing Data in a Direct-Mapped Cache
So let's go through accessing some data in this cache (16 KB of data, direct-mapped, 4-word blocks). We will see four types of events:
- Cache loading: before any words and tags have been loaded into the cache, all locations contain invalid information. As lines are fetched from main memory into the cache, cache entries become valid. To represent this, a valid bit is added to each cache entry.
- Cache miss: nothing is in the cache in the appropriate block, so fetch from memory.
- Cache hit: the cache block is valid and contains the proper address, so read the desired word.
- Cache miss, block replacement: the wrong data is in the cache at the appropriate block, so discard it and fetch the desired data from memory.

Cache Loading
Before any words and tags have been loaded into the cache, all locations contain invalid information. As lines are fetched from main memory into the cache, cache entries become valid. To represent this, a valid bit is added to each cache entry; it indicates whether the associated cache line is valid (1) or invalid (0). If the valid bit is 0, a cache miss occurs even if the tag matches the address from the CPU, and the addressed word must be taken from main memory.
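
Putting the valid bit and the tag check together, a lookup for the running example (1024 rows, 16-byte blocks) can be sketched as follows; the structure and helper names are illustrative, not from the slides.

#include <stdbool.h>
#include <stdint.h>

#define NUM_ROWS    1024   /* 2^10 rows, as in the running example */
#define BLOCK_BYTES 16     /* 4-word blocks                        */

struct cache_line {
    bool     valid;                /* set once the line has been loaded     */
    uint32_t tag;                  /* identifies which block is stored here */
    uint8_t  data[BLOCK_BYTES];
};

static struct cache_line cache[NUM_ROWS];

/* A line with valid == 0 always misses, even if its (stale) tag matches. */
static bool is_hit(uint32_t addr)
{
    uint32_t index = (addr >> 4) & (NUM_ROWS - 1);  /* 10 index bits after the 4 offset bits */
    uint32_t tag   = addr >> 14;                    /* remaining 18 bits                     */
    return cache[index].valid && cache[index].tag == tag;
}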

16 KB Direct-Mapped Cache, 16-Byte Blocks
[Figure: cache table with columns Valid, Tag, and data bytes 0x0-3, 0x4-7, 0x8-b, 0xc-f, and rows indexed 0 through 1023.] The valid bit determines whether anything is stored in that row; when the computer is initially turned on, all entries are invalid.

Read 0x00000014 = 0…00 0..001 0100: tag 0…00, index 0000000001, offset 0100.

So we read block 1 (index 0000000001).

There is no valid data there (the valid bit is 0), so this is a miss.

Load that block from memory into the cache, setting the tag and the valid bit: row 1 now holds the words a, b, c, d with tag 0.

Read from the cache at the offset (0100 = byte 4, the second word) and return word b.

Read 0x0000001C = 0…00 0..001 1100: tag 0…00, index 0000000001, offset 1100.

The data in row 1 is valid and the tag matches, so this is a hit: read at offset 1100 (byte 12, the fourth word) and return word d.

Read 0x00000034 = 0…00 0..011 0100: tag 0…00, index 0000000011, offset 0100.

So we read block 3 (index 0000000011).

There is no valid data there, so this is a miss.

Load that cache block (row 3 now holds the words e, f, g, h with tag 0) and return word f.

Read 0x00008014 = 0…10 0..001 0100: tag 0…10, index 0000000001, offset 0100.

So we read cache block 1; its data is valid.

But the tag in block 1 does not match (0 != 2).

This is a miss, so replace block 1 with the new data and tag: row 1 now holds the words i, j, k, l with tag 2.

And return word j.

Vector Product Example

float dot_prod(float x[1024], float y[1024])
{
    float sum = 0.0;
    int i;
    for (i = 0; i < 1024; i++)
        sum += x[i] * y[i];
    return sum;
}

Machine: DECstation 5000, MIPS processor with a 64 KB direct-mapped cache, 16-byte line size.
Performance: good case 24 cycles/element; bad case 66 cycles/element.

Thrashing Example
[Figure: arrays x and y laid out in memory, four consecutive elements (one 16-byte cache line) per group, from x[0]..x[3] and y[0]..y[3] up through x[1020]..x[1023] and y[1020]..y[1023].] We access one element from each array per iteration.

Thrashing Example: Good Case
Access sequence: read x[0] (x[0]..x[3] loaded), read y[0] (y[0]..y[3] loaded), read x[1] (hit), read y[1] (hit), and so on: 2 misses per 8 reads.
Analysis: x[i] and y[i] map to different cache lines, so the miss rate is 25%. With two memory accesses per iteration, every 4th iteration has two misses.

Thrashing Example: Bad Case
Access pattern: read x[0] (x[0]..x[3] loaded), read y[0] (y[0]..y[3] loaded, evicting x's line), read x[1] (miss), read y[1] (miss), and so on: 8 misses per 8 reads.
Analysis: x[i] and y[i] map to the same cache lines, so the miss rate is 100%. With two memory accesses per iteration, every iteration has two misses.
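
Whether the good or the bad case occurs depends only on where the two arrays happen to land in memory. The following sketch (illustrative; it assumes the 64 KB, 16-byte-line direct-mapped cache of the machine above) shows how to check which cache line an element maps to.

#include <stdint.h>
#include <stdio.h>

#define CACHE_BYTES (64 * 1024)   /* 64 KB direct-mapped cache */
#define LINE_BYTES  16            /* 16-byte lines             */
#define NUM_LINES   (CACHE_BYTES / LINE_BYTES)

/* Which direct-mapped cache line does this address fall into? */
static unsigned line_index(const void *p)
{
    return (unsigned)(((uintptr_t)p / LINE_BYTES) % NUM_LINES);
}

int main(void)
{
    static float x[1024], y[1024];
    /* If the arrays sit a multiple of the cache size apart, x[i] and y[i]
       share a line and every iteration thrashes (the bad case). */
    printf("x[0] -> line %u, y[0] -> line %u\n",
           line_index(&x[0]), line_index(&y[0]));
    return 0;
}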

Average Access Time
The average access time, ta, is given by ta = tc + (1 - h)*tm, assuming again that the first access must be to the cache before an access is made to main memory. Only read requests are considered so far. Throughout, tc is the time to access the cache and either get (or write) the data on a hit or recognize a miss; in practice these times could be different.
Example: if the hit ratio is 0.85 (a typical value), main memory access time is 50 ns, and cache access time is 5 ns, then the average access time is 5 + 0.15*50 = 12.5 ns.
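
Checking the example numerically; this tiny program is only a sketch, with variable names of my own choosing.

#include <stdio.h>

int main(void)
{
    double tc = 5.0;    /* cache access time, ns       */
    double tm = 50.0;   /* main memory access time, ns */
    double h  = 0.85;   /* hit ratio                   */
    double ta = tc + (1.0 - h) * tm;   /* 5 + 0.15*50 = 12.5 ns */
    printf("average access time = %.1f ns\n", ta);
    return 0;
}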

Example-1
Assume: hit time tc = 1 cycle, miss rate m = 5%, miss penalty tm = 20 cycles (m: miss rate, h: hit rate, tm: main memory access time, tc: cache access time).
Average memory access time = tc + m*tm = 1 + 0.05*20 = 2 cycles,
or equivalently = tc*h + (tm + tc)*m = 1*0.95 + 21*0.05 = 2 cycles.

Example-2
Assume: L1 hit time = 1 cycle, L1 miss rate = 5%, L2 hit time = 5 cycles, L2 miss rate = 15% (the fraction of L1 misses that also miss in L2), L2 miss penalty = 100 cycles.
L1 miss penalty = 5 + 0.15*100 = 20 cycles.
Average memory access time = 1 + 0.05*20 = 2 cycles.
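
The same two-level arithmetic written out as a small check (a sketch; the variable names are mine, the numbers are those of Example-2).

#include <stdio.h>

int main(void)
{
    /* L1 parameters */
    double l1_hit_time  = 1.0;   /* cycles */
    double l1_miss_rate = 0.05;

    /* L2 parameters: the L1 miss penalty is itself an average over L2 outcomes. */
    double l2_hit_time     = 5.0;    /* cycles                                      */
    double l2_miss_rate    = 0.15;   /* fraction of L1 misses that also miss in L2  */
    double l2_miss_penalty = 100.0;  /* cycles                                      */

    double l1_miss_penalty = l2_hit_time + l2_miss_rate * l2_miss_penalty;  /* 5 + 0.15*100 = 20 */
    double amat = l1_hit_time + l1_miss_rate * l1_miss_penalty;             /* 1 + 0.05*20  = 2  */

    printf("L1 miss penalty = %.0f cycles, AMAT = %.0f cycles\n",
           l1_miss_penalty, amat);
    return 0;
}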

Replacement Algorithms
When a block is fetched, which block in the target set should be replaced?
- Optimal algorithm: replace the block that will not be used for the longest time (requires knowing the future).
- Usage-based algorithms: Least Recently Used (LRU) replaces the block that has been referenced least recently; hard to implement.
- Non-usage-based algorithms: First-In First-Out (FIFO) treats the set as a circular queue and replaces the head of the queue; easy to implement. Random (RAND) replaces a random block in the set; even easier to implement.

Implementing RAND, FIFO, and LRU
FIFO: maintain a modulo-E counter for each set (E = number of blocks per set); the counter in each set points to the next block for replacement and is incremented with each replacement.
RAND: maintain a single modulo-E counter that points to the next block for replacement in any set; increment the counter according to some schedule: each clock cycle, each memory reference, or each replacement.
LRU: need a state machine for each set that encodes the usage ordering of the elements in the set; E! possible orderings ==> roughly E log E bits of state.
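
A per-set FIFO counter takes only a few lines. The sketch below is illustrative (the cache geometry and names are assumptions, not from the slides): each set keeps a modulo-E counter naming its next victim.

#include <stdint.h>

#define NUM_SETS 256   /* illustrative geometry: 256 sets */
#define E        4     /* E blocks (ways) per set         */

/* One modulo-E counter per set; it names the next victim in that set. */
static uint8_t fifo_next[NUM_SETS];

/* Called on a miss: returns which way of 'set' to evict, then advances the
   counter so the set behaves as a circular queue (FIFO). RAND would instead
   use a single global counter, advanced on some schedule. */
static unsigned fifo_pick_victim(unsigned set)
{
    unsigned victim = fifo_next[set];
    fifo_next[set] = (uint8_t)((victim + 1) % E);
    return victim;
}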

What Happens on a Write?
We need to keep the cache consistent with main memory. Reads are easy and require no modification; for writes, the question is when the update occurs.
Write-through: on a cache write, always update main memory as well.
Write-back: on a cache write, remember that the block is modified (dirty) and update main memory only when the dirty block is replaced; sometimes the cache needs to be flushed (I/O, multiprocessing).

What to do on a write hit?
Write-through: update the word in the cache block and the corresponding word in memory.
Write-back: update the word in the cache block and allow the memory word to be "stale"; this requires adding a 'dirty' bit to each line, indicating that memory needs to be updated when the block is replaced, and the OS must flush the cache before I/O.
Performance trade-offs?

Write-Through
A store by the processor updates both the cache and memory, so memory is always consistent with the cache. There are roughly 2x more loads than stores, and write-through is always combined with write buffers so that the processor does not wait for the lower-level memory. [Figure: the processor loads from the cache; stores update both the cache and memory.]

Write-Back
A store by the processor only updates the cache line; a modified line is written to memory only when it is evicted. This requires a "dirty bit" for each line, set when the line in the cache is modified, indicating that the line in memory is stale. Memory is not always consistent with the cache, and repeated writes to the same line do not cause repeated memory writes. [Figure: the processor loads from and stores to the cache; dirty lines are written back to memory on eviction.]

Store Miss?
Write-allocate: bring the written block into the cache and update the word in the block, anticipating further use of the block.
No-write-allocate: main memory is updated directly and the cache contents are unmodified.
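
The two write-hit policies and the write-miss choice can be seen side by side in a toy store routine. This is only an illustrative sketch of a tiny single-level, direct-mapped simulator (geometry and names are my own; a real design would pick one policy, not implement both):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Toy geometry chosen to stay small; not the slides' cache. */
#define MEM_BYTES  16
#define NUM_LINES  4
#define LINE_BYTES 2

static uint8_t memory[MEM_BYTES];

struct line { bool valid, dirty; uint32_t tag; uint8_t data[LINE_BYTES]; };
static struct line cache[NUM_LINES];

static uint32_t line_of(uint32_t a) { return (a / LINE_BYTES) % NUM_LINES; }
static uint32_t tag_of(uint32_t a)  { return a / LINE_BYTES / NUM_LINES; }

static void fill(struct line *l, uint32_t addr)
{
    uint32_t base = addr - addr % LINE_BYTES;
    memcpy(l->data, &memory[base], LINE_BYTES);
    l->valid = true; l->dirty = false; l->tag = tag_of(addr);
}

/* Write-through, no-write-allocate: memory is updated on every store. */
void store_write_through(uint32_t addr, uint8_t v)
{
    struct line *l = &cache[line_of(addr)];
    if (l->valid && l->tag == tag_of(addr))
        l->data[addr % LINE_BYTES] = v;   /* keep the cached copy current */
    memory[addr] = v;                     /* memory always written        */
}

/* Write-back, write-allocate: memory is written only when a dirty line is evicted. */
void store_write_back(uint32_t addr, uint8_t v)
{
    struct line *l = &cache[line_of(addr)];
    if (!(l->valid && l->tag == tag_of(addr))) {        /* store miss              */
        if (l->valid && l->dirty) {                     /* evict: flush stale copy */
            uint32_t old_base = (l->tag * NUM_LINES + line_of(addr)) * LINE_BYTES;
            memcpy(&memory[old_base], l->data, LINE_BYTES);
        }
        fill(l, addr);                                  /* write-allocate          */
    }
    l->data[addr % LINE_BYTES] = v;
    l->dirty = true;                                    /* memory now stale        */
}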

Handling Stores (Write-Through)
[Figure: a processor with registers R0-R3, a small two-line write-through cache with 2-byte blocks (valid bit, tag, and data per line, LRU replacement), and a 16-entry memory holding the values 78, 29, 120, 123, 71, 150, 162, 173, 18, 21, 33, 28, 19, 200, 210, 225 at addresses 0-15.] Reference sequence: Ld R1 ← M[1]; Ld R2 ← M[7]; St R2 → M[0]; St R1 → M[5]; Ld R2 ← M[10]. Initially: Misses: 0, Hits: 0.

Write-through (REF 1): Ld R1 ← M[1] misses; the block holding M[0] and M[1] (78, 29) is loaded and R1 = 29. Misses: 1, Hits: 0.

Write-through (REF 2): Ld R2 ← M[7] misses; the block holding M[6] and M[7] (162, 173) is loaded and R2 = 173. Misses: 2, Hits: 0.

Write-through (REF 3): St R2 → M[0] hits; the cached copy of M[0] is updated to 173 and, because the cache is write-through, memory location 0 is updated to 173 as well. Misses: 2, Hits: 1.

Write-through (REF 4): St R1 → M[5] misses; the block holding M[4] and M[5] (71, 150) is brought in, and both the cached copy of M[5] and memory location 5 are updated to 29. Misses: 3, Hits: 1.

Write-through (REF 5): Ld R2 ← M[10] misses; the block holding M[10] and M[11] (33, 28) is loaded and R2 = 33. Misses: 4, Hits: 1.

How many memory references?
For the write-through cache: each miss reads a block (2 bytes in this cache) and each store writes a byte. Total reads: 8 bytes; total writes: 2 bytes. But caches generally miss less than 20% of the time.

Write-through vs. write-back
Can we design the cache so that it does NOT write all stores to memory immediately? Yes: keep the most current copy in the cache and update memory when that data is evicted from the cache (a write-back policy). Do we write back all evicted lines? No, only blocks that have been stored into. Keep a "dirty bit," reset when the line is allocated and set when the block is stored into; if a block is "dirty" when evicted, write its data back into memory.

Handling Stores (Write-Back)
[Figure: the same processor, memory contents, and reference sequence as before, but each cache line now has a dirty bit (d) in addition to the valid bit and tag.] Reference sequence: Ld R1 ← M[1]; Ld R2 ← M[7]; St R2 → M[0]; St R1 → M[5]; Ld R2 ← M[10]. Initially: Misses: 0, Hits: 0.

Write-back (REF 1): Ld R1 ← M[1] misses; the block holding M[0] and M[1] (78, 29) is loaded and R1 = 29. Misses: 1, Hits: 0.

Write-back (REF 2): Ld R2 ← M[7] misses; the block holding M[6] and M[7] (162, 173) is loaded and R2 = 173. Misses: 2, Hits: 0.

Write-back (REF 3): St R2 → M[0] hits; only the cached copy of M[0] is updated to 173 and the line's dirty bit is set. Memory location 0 still holds the stale value 78. Misses: 2, Hits: 1.

Write-back (REF 4): St R1 → M[5] misses; the block holding M[4] and M[5] is brought in, the cached copy of M[5] is updated to 29, and that line's dirty bit is set. Memory is not updated. Misses: 3, Hits: 1.

Write-back (REF 5): Ld R2 ← M[10] misses. The victim line is dirty (it holds the updated M[0] = 173), so its block is first written back to memory; only then is the block holding M[10] and M[11] (33, 28) loaded, and R2 = 33. Misses: 4, Hits: 1. The remaining line (holding the updated M[5]) is still dirty.

How many memory references?
For the write-back cache: each miss reads a block (2 bytes in this cache) and each evicted dirty cache line writes a block. Total reads: 8 bytes; total writes: 4 bytes (after the final eviction of the remaining dirty line). So would you choose write-back or write-through?