
Lecture Note 8: Memory Hierarchy—Misses, 3 Cs and 7 Ways to Reduce Misses
Pradondet Nilagupta
(Based on notes by Robert F. Hodson, CNU, and notes by Randy H. Katz)

Review: Who Cares About the Memory Hierarchy?
Processor-only focus thus far in the course: CPU cost/performance, ISA, pipelined execution.
CPU-DRAM gap: in 1980, no cache in the µproc; by 1995, a 2-level cache using 60% of the transistors on the Alpha 21164 µproc (150 clock cycles for a miss!).

Review: Four Questions for Memory Hierarchy Designers
Q1: Where can a block be placed in the upper level? (Block placement) Fully associative, set associative, or direct mapped.
Q2: How is a block found if it is in the upper level? (Block identification) Tag/Block.
Q3: Which block should be replaced on a miss? (Block replacement) Random or LRU.
Q4: What happens on a write? (Write strategy) Write back or write through (with a write buffer).

Review: Cache Performance Equations
CPU time = (CPU execution cycles + Memory stall cycles) × Cycle time
Memory stall cycles = Memory accesses × Miss rate × Miss penalty
CPU time = IC × (CPI_execution + Memory accesses per instruction × Miss rate × Miss penalty) × Cycle time
Misses per instruction = Memory accesses per instruction × Miss rate
CPU time = IC × (CPI_execution + Misses per instruction × Miss penalty) × Cycle time
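As a sanity check on these equations, here is a minimal sketch in C that evaluates both forms of the CPU-time formula; the workload numbers (IC, miss rate, and so on) are made-up values for illustration, not figures from the lecture.

#include <stdio.h>

int main(void) {
    /* Hypothetical workload parameters (illustrative only). */
    double ic          = 1e9;   /* instruction count */
    double cpi_exec    = 1.0;   /* base CPI, ignoring memory stalls */
    double mem_per_ins = 1.5;   /* memory accesses per instruction */
    double miss_rate   = 0.02;  /* misses per memory access */
    double miss_pen    = 100.0; /* miss penalty in clock cycles */
    double cycle_time  = 1e-9;  /* seconds per cycle (1 GHz clock) */

    /* Form 1: misses expressed per memory access. */
    double t1 = ic * (cpi_exec + mem_per_ins * miss_rate * miss_pen) * cycle_time;

    /* Form 2: misses expressed per instruction; must give the same answer. */
    double misses_per_ins = mem_per_ins * miss_rate;
    double t2 = ic * (cpi_exec + misses_per_ins * miss_pen) * cycle_time;

    printf("CPU time (form 1) = %.3f s\n", t1);
    printf("CPU time (form 2) = %.3f s\n", t2);
    return 0;
}

Both forms print 4.000 s with these inputs: memory stalls triple the base CPI, which is why miss-rate and miss-penalty reduction matter.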

Improving Memory Performance
Latency: single-trip delay. Bandwidth: maximum throughput.
(Figure: CPU connected to cache.)

Review: Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

Reducing Misses: Classifying Misses, the 3 Cs
Compulsory—The first access to a block is not in the cache, so the block must be brought into the cache. These are also called cold-start misses or first-reference misses. (Misses even in an infinite cache.)
Capacity—If the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved. (Misses in a fully associative cache of the given size.)
Conflict—If the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory and capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. These are also called collision misses or interference misses. (Misses in an N-way associative cache.)

3Cs Absolute Miss Rate
(Figure: absolute miss rate vs. cache size; the labeled region is the conflict-miss component.)

3Cs Relative Miss Rate, scaled to the direct-mapped miss rate
(Figure: relative miss rate vs. cache size; the labeled region is the conflict-miss component.)

How to Reduce Misses?
1. Increase block size
2. Increase associativity
3. Use a victim cache
4. Use a pseudo-associative cache
5. Hardware prefetching
6. Compiler-controlled prefetching
7. Compiler optimizations

1. Increase Block Size
One way to reduce the miss rate is to increase the block size. Larger blocks reduce compulsory misses. Why? They take advantage of spatial locality.
However, larger blocks have disadvantages:
May increase the miss penalty (more data to fetch on a miss)
May increase hit time (more data to read from the cache, and a larger mux)
May increase conflict and capacity misses
Increasing the block size can help, but don't overdo it.

1. Reduce Misses via Larger Block Size
(Figure: miss rate, 0% to 25%, vs. block size from 16 to 256 bytes, with one curve per cache size: 1K, 4K, 16K, 64K, 256K.)

2. Reduce Misses via Higher Associativity
Increasing associativity helps reduce conflict misses.
2:1 Cache Rule: the miss rate of a direct-mapped cache of size N is about equal to the miss rate of a 2-way set-associative cache of size N/2.
Disadvantages of higher associativity:
Need to do a large number of tag comparisons
Need an n-to-1 multiplexer for an n-way set-associative cache
Could increase hit time

Example: Avg. Memory Access Time vs. Associativity
Assume the clock cycle time (CCT) is 1.10 for 2-way, 1.12 for 4-way, and 1.14 for 8-way, relative to the CCT of a direct-mapped cache.

Cache Size (KB)   1-way   2-way   4-way   8-way
  1               7.65    6.60    6.22    5.44
  2               5.90    4.90    4.62    4.09
  4               4.60    3.95    3.57    3.19
  8               3.30    3.00    2.87    2.59
 16               2.45    2.20    2.12    2.04
 32               2.00    1.80    1.77    1.79
 64               1.70    1.60    1.57    1.59
128               1.50    1.45    1.42    1.44

(Red in the original table marks cases where A.M.A.T. is not improved by more associativity.) Does not take into account the effect of the slower clock on the rest of the program.
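A minimal sketch of the computation behind this table: AMAT = hit time × CCT + miss rate × miss penalty. The slide does not list the miss rates or miss penalty, so the inputs below are inferred assumptions (a 1-cycle hit and a 50-cycle miss penalty, with miss rates chosen so the output reproduces the 2 KB row).

#include <stdio.h>

int main(void) {
    /* Clock-cycle-time factors from the slide. */
    double cct[4] = { 1.00, 1.10, 1.12, 1.14 };   /* 1-, 2-, 4-, 8-way */
    const char *name[4] = { "1-way", "2-way", "4-way", "8-way" };

    /* Assumed inputs, not given on the slide. */
    double miss_rate[4] = { 0.098, 0.076, 0.070, 0.059 };
    double miss_penalty = 50.0;   /* cycles */

    for (int i = 0; i < 4; i++) {
        /* Hit time is one (stretched) cycle; misses pay the full penalty. */
        double amat = 1.0 * cct[i] + miss_rate[i] * miss_penalty;
        printf("%s: AMAT = %.2f cycles\n", name[i], amat);
    }
    return 0;
}

With these inputs the program prints 5.90, 4.90, 4.62, and 4.09, matching the 2 KB row above: associativity pays off here because the miss-rate drop outweighs the stretched clock.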

Avg. Memory Access Time vs. Cache Size vs. Associativity (figure)

3. Reducing Misses via Victim Cache
Add a small, fully associative victim cache to hold data discarded from the regular cache. When data is not found in the cache, check the victim cache.
A 4-entry victim cache removed 20% to 95% of the conflict misses for a 4 KB direct-mapped data cache.
Result: the access time of a direct-mapped cache with a reduced miss rate.
(Figure: CPU with tag/data arrays, victim cache, write buffer, and lower-level memory.)
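To make the mechanism concrete, here is a minimal software model of a direct-mapped cache backed by a small fully associative victim cache. It is an illustrative sketch: the 4-entry size and the swap-on-victim-hit behavior follow the description above, while the structure names, the 128-set size, and the FIFO replacement are invented.

#include <stdint.h>
#include <stdbool.h>

#define SETS    128   /* direct-mapped cache: 128 blocks (assumed size) */
#define VICTIMS 4     /* 4-entry fully associative victim cache */

typedef struct { bool valid; uint32_t tag; } Line;

static Line cache[SETS];
static Line victim[VICTIMS];   /* tag field holds the full block address */
static int  victim_next;       /* simple FIFO replacement (assumption) */

/* Returns true on a hit in either structure. */
bool access_block(uint32_t block_addr) {
    uint32_t index = block_addr % SETS;
    uint32_t tag   = block_addr / SETS;

    if (cache[index].valid && cache[index].tag == tag)
        return true;                        /* ordinary (fast) hit */

    /* Miss in the main cache: search the victim cache. */
    for (int i = 0; i < VICTIMS; i++) {
        if (victim[i].valid && victim[i].tag == block_addr) {
            /* Victim hit: swap the victim with the conflicting block. */
            Line evicted = cache[index];
            cache[index].valid = true;
            cache[index].tag   = tag;
            victim[i].valid = evicted.valid;
            victim[i].tag   = evicted.valid ? evicted.tag * SETS + index : 0;
            return true;                    /* slower hit, but no memory access */
        }
    }

    /* True miss: the evicted block (if any) moves to the victim cache. */
    if (cache[index].valid) {
        victim[victim_next].valid = true;
        victim[victim_next].tag   = cache[index].tag * SETS + index;
        victim_next = (victim_next + 1) % VICTIMS;
    }
    cache[index].valid = true;
    cache[index].tag   = tag;
    return false;
}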

4. Reducing Misses via Pseudo-Associativity
How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache?
Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, it is a pseudo-hit (slow hit).
Drawback: CPU pipeline design is hard if a hit can take 1 or 2 cycles. Better for caches not tied directly to the processor.
(Figure: time line showing hit time, pseudo-hit time, and miss penalty.)

Example: compare direct-mapped, 2-way set associative, and pseudo-associative caches using AMAT. It takes two extra cycles to find an entry in the alternative location (one cycle to check and one cycle to swap).
Miss rate_pseudo = Miss rate_2-way
Miss penalty_pseudo = Miss penalty_1-way + 1
Hit time_pseudo = Hit time_1-way + Alternate hit rate_pseudo × 2
Alternate hit rate_pseudo = Hit rate_2-way − Hit rate_1-way = Miss rate_1-way − Miss rate_2-way
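A minimal sketch that plugs numbers into these equations; the slide gives only the relationships, so the miss rates, hit time, and miss penalty below are hypothetical inputs for illustration.

#include <stdio.h>

int main(void) {
    /* Hypothetical inputs, not from the slide. */
    double miss_1way = 0.098;   /* direct-mapped miss rate */
    double miss_2way = 0.076;   /* 2-way miss rate         */
    double hit_time  = 1.0;     /* cycles, direct-mapped   */
    double miss_pen  = 50.0;    /* cycles, direct-mapped   */

    /* Equations from the slide. */
    double miss_pseudo    = miss_2way;
    double penalty_pseudo = miss_pen + 1.0;
    double alt_hit_rate   = miss_1way - miss_2way;
    double hit_pseudo     = hit_time + alt_hit_rate * 2.0;

    double amat_1way   = hit_time + miss_1way * miss_pen;
    double amat_pseudo = hit_pseudo + miss_pseudo * penalty_pseudo;

    printf("AMAT direct-mapped   : %.3f cycles\n", amat_1way);
    printf("AMAT pseudo-associative: %.3f cycles\n", amat_pseudo);
    return 0;
}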

5. Reducing Misses by HW Prefetching of Instructions & Data
E.g., instruction prefetching: the Alpha 21064 fetches 2 blocks on a miss; the extra block is placed in a stream buffer, and on a miss the stream buffer is checked.
Works with data blocks too: Jouppi [1990] found 1 data stream buffer caught 25% of the misses from a 4KB cache, and 4 streams caught 43%. Palacharla & Kessler [1994] found, for scientific programs, that 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches.
Prefetching relies on extra memory bandwidth that can be used without penalty.

6. Reducing Misses by SW Prefetching of Data
The compiler inserts data prefetch instructions:
Register prefetch: load data into a register (HP PA-RISC loads).
Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9).
Special prefetching instructions cannot cause faults; they are a form of speculative execution.
Issuing prefetch instructions takes time. Is the cost of issuing prefetches less than the savings in reduced misses? Wider superscalar issue reduces the difficulty of finding issue bandwidth.
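As a concrete illustration of compiler-visible prefetching, here is a sketch using GCC/Clang's __builtin_prefetch intrinsic (a non-faulting cache-prefetch hint). The prefetch distance of 16 elements is a made-up tuning value, and the function is an invented example, not from the lecture.

#include <stddef.h>

/* Sum an array, issuing prefetches ahead of the loop.
   The builtin is a hint and may be ignored by the hardware. */
double sum_with_prefetch(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            /* args: address, rw (0 = read), temporal locality (3 = high) */
            __builtin_prefetch(&a[i + 16], 0, 3);
        s += a[i];
    }
    return s;
}

The prefetch distance must be tuned so the data arrives just before the loop reaches it; too short hides no latency, too long risks eviction before use.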

7. Reducing Misses by Compiler Optimizations
Instructions: reorder procedures in memory so as to reduce misses, using profiling to look at conflicts between groups of instructions. McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks.
Data:
Merging arrays: improve spatial locality with a single array of compound elements vs. 2 arrays (prevents conflicts when the same indices are used).
Loop interchange: change the nesting of loops to access data in the order it is stored in memory.
Loop fusion: combine 2 independent loops that have the same looping structure and some variables in common.
Blocking: improve temporal locality by accessing "blocks" of data repeatedly vs. going down whole columns or rows.

Merging Arrays Example

/* Before: two separate arrays */
int val[SIZE];
int key[SIZE];

/* After: one array of compound elements */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val & key and improves spatial locality.

Array Organizations
(Figure: a 2-D array X[i][j] laid out in memory in row-major ordering; consecutive elements of a row are adjacent in memory.)

Loop Interchange Example

/* Before */
for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
        x[i][j] = 2 * x[i][j];

/* After: loops interchanged so the inner loop walks along a row */
for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
        x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words.

Loop Fusion Example

/* Before: two separate loop nests */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After: one fused loop nest */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Before: 2 misses per access to a & c; after: one miss per access.

Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

The two inner loops:
Read all N×N elements of z[].
Read N elements of 1 row of y[] repeatedly.
Write N elements of 1 row of x[].
Capacity misses are a function of N & cache size: if the cache can hold all 3 N×N 4-byte matrices, there are no capacity misses; otherwise ...
Idea: compute on a B×B submatrix that fits in the cache.

Blocking Example (continued)

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B-1,N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B-1,N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

Capacity misses drop from 2N³ + N² to 2N³/B + N². B is called the blocking factor. Conflict misses, too?

Snapshot of the Arrays during Matrix Multiply
(Figure: X = Y × Z; blue indicates the most recently accessed values.)

Snapshot of the Arrays during Matrix Multiply with Blocking
(Figure: X = Y × Z; blue indicates the most recently accessed values.)

Reducing Conflict Misses by Blocking (by reducing the number of active words in the cache)
Conflict misses in non-fully-associative caches vs. blocking size: Lam et al. [1991] found that a blocking factor of 24 had one fifth the misses of a factor of 48, despite both fitting in the cache.

Summary of Compiler Optimizations to Reduce Cache Misses

Summary
3 Cs: Compulsory, Capacity, Conflict misses.
Reducing the miss rate:
1. Reduce misses via larger block size
2. Reduce misses via higher associativity
3. Reduce misses via victim cache
4. Reduce misses via pseudo-associativity
5. Reduce misses by HW prefetching of instructions and data
6. Reduce misses by SW prefetching of data
7. Reduce misses by compiler optimizations
Remember the danger of concentrating on just one parameter when evaluating performance.
Next lecture: reducing the miss penalty.

Review: Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

1. Reducing Miss Penalty: Read Priority over Write on Miss
Write through with write buffers: RAW conflicts can arise between pending buffered writes and main-memory reads on cache misses. If we simply wait for the write buffer to empty, we may increase the read-miss penalty. Instead, check the write buffer contents before the read; if there are no conflicts, let the memory access continue.
Write back: consider a read miss replacing a dirty block. The normal approach is to write the dirty block to memory and then do the read. Instead, copy the dirty block to a write buffer, then do the read, and then do the write. The CPU stalls less since it restarts as soon as the read is done.
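A minimal software model of the "check the write buffer before the read" policy; the buffer size, structure names, and forwarding behavior are all invented for illustration.

#include <stdint.h>
#include <stdbool.h>

#define WB_ENTRIES 4   /* assumed write-buffer depth */

/* A tiny model of a write buffer holding pending stores. */
typedef struct { bool valid; uint32_t addr; uint32_t data; } WBEntry;
static WBEntry wb[WB_ENTRIES];

/* On a read miss, scan the write buffer first. If a pending store to
   this address is found (newest first), forward its data; otherwise
   the read may go to memory without waiting for the buffer to drain. */
bool read_miss_check(uint32_t addr, uint32_t *out) {
    for (int i = WB_ENTRIES - 1; i >= 0; i--) {
        if (wb[i].valid && wb[i].addr == addr) {
            *out = wb[i].data;   /* RAW conflict: forward from the buffer */
            return true;
        }
    }
    return false;                /* no conflict: proceed to memory */
}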

2. Subblock Placement to Reduce Miss Penalty
To reduce the memory required for tags: make the tags smaller, so the resulting blocks are large.
To reduce the miss penalty: don't load the full block on a miss; keep a valid bit per subblock.
(Figure: tag array with per-subblock valid bits.)

3. Reduce Miss Penalty: Early Restart and Critical Word First
Don't wait for the full block to be loaded before restarting the CPU.
Early restart—As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
Critical word first—Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while the rest of the words in the block are filled in. Also called wrapped fetch and requested word first.
Generally useful only with large blocks. Spatial locality is a problem for early restart: programs tend to want the next sequential word, so it is not clear how much is gained by restarting early.

4. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses
A non-blocking cache (or lockup-free cache) allows the data cache to continue to supply cache hits during a miss; this requires an out-of-order-execution CPU.
"Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests.
"Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses.
Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses. Requires multiple memory banks (otherwise the overlap cannot be supported). The Pentium Pro allows 4 outstanding memory misses.

Value of Hit Under Miss for SPEC
(Figure: AMAT as "hit under n misses" grows from 0->1, 1->2, and 2->64 outstanding misses, for integer and floating-point programs.)
FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26.
Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19.
8 KB data cache, direct mapped, 32B blocks, 16-cycle miss penalty.

5. Reduce Miss Penalty: Second-Level Cache (L2) Equations
AMAT = Hit Time_L1 + Miss Rate_L1 × Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 × (Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2)
Definitions:
Local miss rate—misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2).
Global miss rate—misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 × Miss Rate_L2).
The global miss rate is what matters.
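A minimal sketch that evaluates the two-level AMAT equation and the global miss rate; the hit times and miss rates below are hypothetical examples, not figures from the lecture.

#include <stdio.h>

int main(void) {
    /* Hypothetical parameters, illustrative only. */
    double hit_l1 = 1.0,  miss_rate_l1 = 0.04;   /* per CPU access  */
    double hit_l2 = 10.0, miss_rate_l2 = 0.25;   /* local L2 rate   */
    double pen_l2 = 100.0;                        /* to main memory  */

    /* Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2 */
    double pen_l1 = hit_l2 + miss_rate_l2 * pen_l2;
    double amat   = hit_l1 + miss_rate_l1 * pen_l1;

    /* Global L2 miss rate = Miss Rate_L1 x Miss Rate_L2 */
    double global_l2 = miss_rate_l1 * miss_rate_l2;

    printf("L1 miss penalty     = %.1f cycles\n", pen_l1);
    printf("AMAT                = %.2f cycles\n", amat);
    printf("Global L2 miss rate = %.3f\n", global_l2);
    return 0;
}

Note how the local L2 miss rate (25%) looks alarming while the global rate (1%) is what actually reaches main memory.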

Local and Global Miss Rates
(Figure: local and global miss rates for a 32 KByte L1 cache.) The global miss rate is close to the single-level-cache miss rate, provided L2 >> L1. Don't use the local miss rate, because it is a function of the miss rate of the first-level cache.
The L2 cache is not tied to the CPU clock cycle! For cost & A.M.A.T. we generally want fast hit times and fewer misses; since L2 hits are few, target miss reduction.

Reducing Misses: Which Techniques Apply to the L2 Cache?
1. Reduce misses via larger block size
2. Reduce conflict misses via higher associativity
3. Reduce conflict misses via victim cache
4. Reduce conflict misses via pseudo-associativity
5. Reduce misses by HW prefetching of instructions and data
6. Reduce misses by SW prefetching of data
7. Reduce capacity/conflict misses by compiler optimizations

L2 cache block size & A.M.A.T.
Assume a 32KB L1, a 512KB L2, a 4-byte path to memory, 1 cycle to send the address, 6 cycles to access the data, and 1 word/cycle to transfer the data.
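From these parameters, the L2 miss penalty for a given block size is 1 cycle to send the address, 6 cycles to access the data, plus 1 cycle per 4-byte word transferred. A minimal sketch follows; the constant L2 miss rate is a made-up value used only to show the penalty term in isolation.

#include <stdio.h>

int main(void) {
    double miss_rate_l2 = 0.04;   /* hypothetical, held constant here */
    for (int block = 16; block <= 256; block *= 2) {
        int words   = block / 4;             /* 4-byte path to memory */
        int penalty = 1 + 6 + words * 1;     /* address + access + transfer */
        printf("block %3dB: miss penalty = %3d cycles, "
               "miss contribution = %.2f cycles\n",
               block, penalty, miss_rate_l2 * penalty);
    }
    return 0;
}

In reality a larger block also lowers the L2 miss rate, so the best block size balances the falling miss rate against the rising transfer time.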

Reducing Miss Penalty Summary
Five techniques: read priority over write on miss; subblock placement; early restart and critical word first on miss; non-blocking caches (hit under miss, miss under miss); second-level cache.
These can be applied recursively to multilevel caches. The danger is that the time to DRAM will grow with multiple levels in between. First attempts at L2 caches can make things worse, since the increased worst case is worse.

Review: Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

Reducing Hit Time
Hit time affects the CPU clock rate, even for machines that take multiple cycles to access the cache.
Techniques:
Small and simple caches
Avoiding address translation
Pipelining writes
Small subblocks

1. Fast Hit Times via Small and Simple Caches
With direct-mapped caches, you can overlap the tag check with transmitting the data to the processor.
Designers almost always keep the L1 cache small enough to remain on the chip.
Example: Alpha 21164
Level 1: 8KB instruction and 8KB data caches
Level 2: 96KB unified cache
Both L1 caches are direct mapped and on-chip.

2. Fast Hits by Avoiding Address Translation
Send the virtual address to the cache? Called a virtually addressed cache, or just virtual cache, vs. a physical cache.
Every time the process is switched, the cache logically must be flushed; otherwise false hits occur. The cost is the time to flush plus "compulsory" misses from the empty cache.
Dealing with aliases (sometimes called synonyms): two different virtual addresses map to the same physical address.
I/O must interact with the cache, so it would need virtual addresses.
Solution to aliases: guarantee that all aliases agree in the address bits covering the index field; with a direct-mapped cache they then map to the same block. This is called page coloring.
Solution to cache flushes: add a process-identifier tag that identifies the process as well as the address within the process; a hit is impossible if the process ID is wrong.

Virtually Addressed Caches
(Figure: three organizations.)
Conventional organization: CPU sends the VA to the TLB; the cache is accessed with the PA.
Virtually addressed cache: the cache is accessed with the VA; translate only on a miss. Synonym problem.
Overlapped organization: overlap the cache access with VA translation; requires the cache index to remain invariant across translation (a physically addressed L2 sits behind it).

2. Fast Cache Hits by Avoiding Translation: Process ID Impact
(Figure: miss rate, up to 20%, vs. cache size from 2 KB to 1024 KB. Purple: uniprocess. Yellow: multiprocess with cache flushing. Red: multiprocess with process-ID tags.)

Avoiding Translation During Indexing: Virtually Indexed and Physically Tagged
If the index comes from the physical part of the address (the page offset), the tag access can start in parallel with translation, and the result can then be compared to the physical tag.
This limits the cache to the page size: what if we want bigger caches using the same trick? Higher associativity moves the barrier to the right; page coloring also helps.
(Figure: address breakdown with the page address [bits 31-12] holding the address tag and the page offset [bits 11-0] holding the index and block offset.)
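A minimal sketch of the size constraint: the index and block-offset bits must fit inside the untranslated page offset, so cache size ≤ page size × associativity. The 4 KB page size is an assumed parameter for illustration.

#include <stdio.h>

int main(void) {
    int page_size = 4096;   /* 4 KB pages -> 12 untranslated offset bits */
    for (int assoc = 1; assoc <= 8; assoc *= 2) {
        /* Each way can span at most one page, since index + block offset
           must come from the page offset; more ways allow a bigger cache. */
        int max_cache = page_size * assoc;
        printf("%d-way: max virtually-indexed, physically-tagged "
               "cache = %d KB\n", assoc, max_cache / 1024);
    }
    return 0;
}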

3. Fast Hit Times via Pipelined Writes
Pipeline the tag check and the cache update as separate stages: the current write does its tag check while the previous write updates the cache.
Only STOREs are in this pipeline; it is empty during a miss.

Pipelined Writes
(Figure: CPU, delayed write buffer, write buffer, tag/data arrays, and lower-level memory.) The shaded "delayed write buffer" must be checked on reads; either complete the write or read from the buffer.

4. Fast Writes on Misses via Small Subblocks
If most writes are 1 word, the subblock size is 1 word, and the cache is write-through, then always write the subblock & tag immediately:
Tag match and valid bit already set: writing the block was proper, and nothing is lost by setting the valid bit on again.
Tag match and valid bit not set: the tag match means that this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on.
Tag mismatch: this is a miss and will modify the data portion of the block. Since this is a write-through cache, no harm was done; memory still has an up-to-date copy of the old value. Only the tag needs to be changed to the address of the write, and only the valid bits of the other subblocks need be cleared, because the valid bit for this subblock has already been set.
Doesn't work with write back, due to the last case.

What is the Impact of What You've Learned About Caches?
1960-1985: Speed = f(no. of operations).
1990s: pipelined execution & fast clock rates, out-of-order execution, superscalar instruction issue.
1998: Speed = f(non-cached memory accesses).
What does this mean for compilers? Operating systems? Algorithms? Data structures?

Cache Optimization Summary
(MR = miss rate, MP = miss penalty, HT = hit time; + improves, - hurts; complexity 0 = trivial, 3 = hard)

Technique                          MR   MP   HT   Complexity
Larger Block Size                  +    -         0
Higher Associativity               +         -    1
Victim Caches                      +              2
Pseudo-Associative Caches          +              2
HW Prefetching of Instr/Data       +              2
Compiler-Controlled Prefetching    +              3
Compiler Reduce Misses             +              0
Priority to Read Misses                 +         1
Subblock Placement                      +    +    1
Early Restart & Critical Word 1st       +         2
Non-Blocking Caches                     +         3
Second-Level Caches                     +         2
Small & Simple Caches              -         +    0
Avoiding Address Translation                 +    2
Pipelining Writes                            +    1