
CS 61C: Great Ideas in Computer Architecture (Machine Structures)
Set-Associative Caches
Instructors: Randy H. Katz and David A. Patterson

Agenda
– Cache Memory Recap
– Administrivia
– Technology Break
– Set-Associative Caches


Recap: Components of a Computer
– Processor: Control and Datapath
– Memory: Cache, Main Memory, Secondary Memory (Disk)
– Devices: Input and Output

Recap: Typical Memory Hierarchy
Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology.
– On-chip components: register file, instruction cache and data cache (with ITLB and DTLB), second-level cache (SRAM)
– Off-chip: main memory (DRAM), secondary memory (disk)
– Speed (cycles): ½'s, 1's, 10's, 100's, 10,000's going down the hierarchy
– Size (bytes): 100's, 10K's, M's, G's, T's
– Cost per byte: highest at the top, lowest at the bottom

Recap: Cache Performance and Average Memory Access Time (AMAT)
– CPU time = IC × CPI_stall × CC = IC × (CPI_ideal + Memory-stall cycles per instruction) × CC
– Memory-stall cycles = Read-stall cycles + Write-stall cycles
– Read-stall cycles = reads/program × read miss rate × read miss penalty
– Write-stall cycles = (writes/program × write miss rate × write miss penalty) + write buffer stalls
– AMAT is the average time to access memory, considering both hits and misses:
AMAT = Time for a hit + Miss rate × Miss penalty
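The AMAT formula is easy to sanity-check with a quick calculation. Below is a minimal sketch in C, assuming made-up numbers (1-cycle hit time, 5% miss rate, 100-cycle miss penalty) chosen only for illustration; none of these figures come from the lecture.

```c
#include <stdio.h>

/* A minimal sketch of the AMAT formula above, with assumed example
 * numbers: 1-cycle hit, 5% miss rate, 100-cycle miss penalty. */
int main(void) {
    double hit_time = 1.0;        /* cycles for a cache hit (assumed)  */
    double miss_rate = 0.05;      /* fraction of accesses that miss    */
    double miss_penalty = 100.0;  /* extra cycles to go to memory      */

    /* AMAT = Time for a hit + Miss rate x Miss penalty */
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f cycles\n", amat);  /* 1 + 0.05*100 = 6.0 */
    return 0;
}
```

With these numbers, AMAT = 1 + 0.05 × 100 = 6 cycles, which shows why even a small miss rate dominates average access time when the miss penalty is large.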

Improving Cache Performance
Reduce the time to hit in the cache
– E.g., smaller cache, direct-mapped cache, smaller blocks, special tricks for handling writes
Reduce the miss rate
– E.g., bigger cache, larger blocks
– More flexible placement (increase associativity)
Reduce the miss penalty
– E.g., smaller blocks, or critical word first in large blocks; special tricks for handling writes; faster/higher-bandwidth memories
– Use multiple cache levels

Sources of Cache Misses: The 3Cs
Compulsory (cold start or process migration; 1st reference):
– First access to a block is impossible to avoid; small effect for long-running programs
– Solution: increase block size (increases miss penalty; very large blocks could increase miss rate)
Capacity:
– Cache cannot contain all blocks accessed by the program
– Solution: increase cache size (may increase access time)
Conflict (collision):
– Multiple memory locations map to the same cache location
– Solution 1: increase cache size
– Solution 2: increase associativity (may increase access time)

Reducing Cache Misses
Allow more flexible block placement:
– Direct-mapped $: a memory block maps to exactly one cache block
– Fully associative $: a memory block may be mapped to any cache block
– Compromise: divide the $ into sets, each consisting of n "ways" (n-way set associative); a memory block maps to a unique set, determined by its index field, and can be placed in any of the n ways of that set
– Calculation: set = (block address) modulo (# sets in the cache), as in the sketch after the placement examples below

Alternative Block Placement Schemes
– DM placement: memory block 12 in an 8-block cache has only one cache block where it can go: (12 modulo 8) = 4
– SA placement: four sets × 2 ways (8 cache blocks); memory block 12 goes in set (12 modulo 4) = 0, in either element of that set
– FA placement: memory block 12 can appear in any cache block
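A minimal sketch of that mapping arithmetic, assuming the 8-block geometry from the slide above; the constants and printout format are my own illustration.

```c
#include <stdio.h>
#include <stdint.h>

/* A minimal sketch of the placement arithmetic above, for a
 * hypothetical 8-block cache: block 12 lands in cache block
 * (12 mod 8) = 4 when direct mapped, and in set (12 mod 4) = 0
 * when 2-way set associative. */
#define NUM_BLOCKS 8
#define WAYS       2
#define NUM_SETS   (NUM_BLOCKS / WAYS)  /* 4 sets */

int main(void) {
    uint32_t block_addr = 12;  /* memory block number from the slide */

    /* Direct mapped: each "set" is a single block, so modulo NUM_BLOCKS. */
    printf("DM:  block %u -> cache block %u\n", block_addr, block_addr % NUM_BLOCKS);

    /* 2-way set associative: modulo the number of sets; the block may
     * occupy either way of that set. */
    printf("2SA: block %u -> set %u\n", block_addr, block_addr % NUM_SETS);

    /* Fully associative: no index computation; any cache block will do. */
    return 0;
}
```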


Late Breaking Results from Project #3, Part I
[Results chart omitted from transcript.] Thanks, TA Andrew!


Administrivia
– Posted optional extra Logisim "at home" lab this week
– Project 3: TLP+DLP+Cache Optimization (due 11/13)
– Project 4: Single-Cycle Processor in Logisim (posted by tonight); Part 2 due Saturday 11/27
– Face-to-face grading: sign up for a 15-minute time slot on 11/30 or 12/2
– EC: fastest Project 3 (due 11/29)
– Final review: Mon Dec 6, 3 hours, afternoon (time TBD)
– Final: Mon Dec 13, 8AM-11AM (location TBD)
– Like the midterm: T/F, M/C, short answers
– Covers the whole course: readings, lectures, projects, labs, HW
– Emphasizes the 2nd half of 61C plus midterm mistakes

Agenda
– Cache Memory Recap
– Administrivia
– Technology Break
– Set-Associative Caches


Example: 4-Word Direct-Mapped $, Worst-Case Reference String
Consider the main-memory word reference string 0, 4, 0, 4, 0, 4, 0, 4. Start with an empty cache, all blocks initially marked as not valid.
Every reference misses: words 0 and 4 both map to cache block 0, so each access evicts the other. 8 requests, 8 misses.
– Ping-pong effect due to conflict misses: two memory locations that map into the same cache block

Example: 2-Way Set-Associative $ (4 words = 2 sets × 2 ways per set)
– One-word blocks; the two low-order address bits define the byte within the word (32-bit words)
– Q: How do we find it? Use the next low-order memory address bit to determine which cache set holds the block (i.e., block address modulo the number of sets in the cache)
– Q: Is it there? Compare all the cache tags in the set to the high-order 3 memory address bits to tell if the memory block is in the cache

Example: 4-Word 2-Way SA $, Same Reference String
Consider the same main-memory word reference string: 0, 4, 0, 4, 0, 4, 0, 4. Start with an empty cache, all blocks initially marked as not valid.
Only the first accesses to 0 and 4 miss; after that, the two words co-reside in the same set, one per way, and every later reference hits. 8 requests, 2 misses.
– Solves the ping-pong effect of a direct-mapped cache due to conflict misses, since two memory locations that map into the same cache set can now co-exist!
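To verify the two miss counts above, here is a minimal sketch, not from the lecture, that simulates the 0, 4, 0, 4, ... reference string on a 4-word cache, first direct mapped and then 2-way set associative with LRU replacement; one-word blocks and full-address tags are simplifying assumptions.

```c
#include <stdio.h>

/* A minimal sketch simulating the two example slides above: a 4-word
 * cache with one-word blocks, run as direct mapped (1 way) and as
 * 2-way set associative with LRU. Tags hold the full word address,
 * a simplification for illustration. */
#define CACHE_BLOCKS 4

static int count_misses(int ways) {
    int sets = CACHE_BLOCKS / ways;
    int valid[CACHE_BLOCKS] = {0}, tag[CACHE_BLOCKS] = {0};
    int last_used[CACHE_BLOCKS] = {0};  /* larger = used more recently */
    int refs[] = {0, 4, 0, 4, 0, 4, 0, 4};
    int misses = 0, clock = 0;

    for (int i = 0; i < 8; i++) {
        int addr = refs[i];
        int set = addr % sets;
        int hit = 0, victim = set * ways;
        for (int w = 0; w < ways; w++) {
            int b = set * ways + w;
            if (valid[b] && tag[b] == addr) {      /* tag match in this way */
                hit = 1;
                last_used[b] = ++clock;
                break;
            }
            if (last_used[b] < last_used[victim])  /* track the LRU way */
                victim = b;
        }
        if (!hit) {                                /* evict the LRU way */
            misses++;
            valid[victim] = 1;
            tag[victim] = addr;
            last_used[victim] = ++clock;
        }
    }
    return misses;
}

int main(void) {
    printf("direct mapped: %d misses of 8\n", count_misses(1));  /* 8 misses */
    printf("2-way SA:      %d misses of 8\n", count_misses(2));  /* 2 misses */
    return 0;
}
```

The direct-mapped run reproduces the 8-miss ping-pong; the 2-way run reproduces the 2 compulsory misses.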

Example: Eight-Block Cache with Different Organizations
– Total size of the $ in blocks equals the number of sets × associativity. For a fixed $ size, increasing associativity decreases the number of sets while increasing the number of elements per set.
– With eight blocks, an 8-way set-associative $ is the same as a fully associative $.

Four-Way Set-Associative Cache
– 2^8 = 256 sets, each with four ways (each way holding one block)
– The address divides into a 22-bit tag, an 8-bit index, and a byte offset
– The index selects the set; the four ways' tags are compared in parallel against the tag field, and on a hit a 4-to-1 multiplexor selects the 32-bit data word from the matching way

Range of Set-Associative Caches
For a fixed-size cache, each increase by a factor of two in associativity doubles the number of blocks per set (i.e., the number of ways) and halves the number of sets; it decreases the size of the index by 1 bit and increases the size of the tag by 1 bit.
Address layout: Tag | Index | Block offset | Byte offset
– Tag: used for the tag compare; Index: selects the set; Block offset: selects the word in the block
– Increasing associativity → fully associative (only one set): the tag is all the bits except the block and byte offsets
– Decreasing associativity → direct mapped (only one way): smaller tags, only a single comparator
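The field widths are easy to compute in code. Below is a minimal sketch that decodes a 32-bit address under one hypothetical geometry; the constants (4-byte words, 4-word blocks, 64 sets) are assumptions chosen for illustration, not from the lecture.

```c
#include <stdio.h>
#include <stdint.h>

/* A minimal sketch of the Tag | Index | Block offset | Byte offset
 * split. The geometry is a made-up example: 4-byte words (2 byte-offset
 * bits), 4-word blocks (2 block-offset bits), 64 sets (6 index bits);
 * the remaining 22 bits are the tag. */
#define BYTE_OFFSET_BITS  2
#define BLOCK_OFFSET_BITS 2
#define INDEX_BITS        6

int main(void) {
    uint32_t addr = 0x12345678;  /* arbitrary example address */

    uint32_t byte_off  = addr & ((1u << BYTE_OFFSET_BITS) - 1);
    uint32_t block_off = (addr >> BYTE_OFFSET_BITS) & ((1u << BLOCK_OFFSET_BITS) - 1);
    uint32_t index     = (addr >> (BYTE_OFFSET_BITS + BLOCK_OFFSET_BITS)) & ((1u << INDEX_BITS) - 1);
    uint32_t tag       = addr >> (BYTE_OFFSET_BITS + BLOCK_OFFSET_BITS + INDEX_BITS);

    printf("tag=0x%x index=%u block_off=%u byte_off=%u\n",
           tag, index, block_off, byte_off);

    /* Doubling associativity halves the number of sets: INDEX_BITS drops
     * by 1 and the tag gains that bit, exactly as the slide says. */
    return 0;
}
```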

Costs of Set-Associative Caches
When a miss occurs, which way's block is selected for replacement?
– Least Recently Used (LRU): the block that has been unused for the longest time
– Must track when each way's block was used relative to the other blocks in the set
– For a 2-way SA $, one bit per set → set the bit when a block is referenced and reset the other way's bit (the bit records which way was "last used")
N-way set-associative cache costs:
– N comparators (delay and area)
– MUX delay (selecting the way within the set) before data is available
– Data available only after the way selection (and Hit/Miss decision); in a DM $, the block is available before the Hit/Miss decision
– So it is not possible to just assume a hit, continue, and recover later if it was a miss

Cache Block Replacement Policies
Random replacement:
– Hardware randomly selects a cache entry and throws it out
Least Recently Used:
– Hardware keeps track of access history and replaces the entry that has not been used for the longest time
– For a 2-way set-associative cache, this needs one bit per set for LRU replacement
Example of a simple "pseudo-LRU" implementation (sketched in code below):
– Assume 64 fully associative entries
– A hardware replacement pointer points to one cache entry
– Whenever an access is made to the entry the pointer points to: move the pointer to the next entry
– Otherwise: do not move the pointer
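A minimal sketch of that replacement pointer, assuming the 64-entry fully associative cache from the slide; the function names are illustrative, not from the lecture. The pointer is pushed away from entries that get touched, so whatever it rests on has not been accessed recently, a cheap approximation of LRU.

```c
#include <stdio.h>

/* A minimal sketch of the "pseudo-LRU" replacement pointer described
 * on the slide: 64 fully associative entries and one pointer. */
#define ENTRIES 64

static int replacement_ptr = 0;

/* Called on every cache access to entry e. */
void note_access(int e) {
    /* An entry we just touched is a poor eviction victim; if the
     * pointer marks it, advance the pointer to the next entry. */
    if (e == replacement_ptr)
        replacement_ptr = (replacement_ptr + 1) % ENTRIES;
    /* Otherwise: do not move the pointer. */
}

/* On a miss, evict whatever the pointer currently marks. */
int choose_victim(void) {
    return replacement_ptr;
}

int main(void) {
    note_access(0);  /* pointer advances 0 -> 1 */
    note_access(5);  /* pointer stays at 1      */
    note_access(1);  /* pointer advances 1 -> 2 */
    printf("victim = %d\n", choose_victim());  /* prints 2 */
    return 0;
}
```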

Benefits of Set-Associative Caches
– The choice of a DM $ or an SA $ depends on the cost of a miss versus the cost of implementation
– The largest gains come from going from direct mapped to 2-way (20%+ reduction in miss rate)

3Cs Revisited
Three sources of misses (SPEC2000 integer and floating-point benchmarks):
– Compulsory misses: 0.006%, too small to be visible on the chart
– Capacity misses: a function of cache size
– Conflict misses: the conflict portion depends on associativity and cache size

Two Machines' Cache Parameters
Intel Nehalem:
– L1 organization & size: split I$ and D$; 32KB each per core; 64B blocks
– L1 associativity: 4-way (I), 8-way (D) set associative; ~LRU replacement
– L1 write policy: write-back, write-allocate
– L2 organization & size: unified; 256KB (0.25MB) per core; 64B blocks
– L2 associativity: 8-way set associative; ~LRU
– L2 write policy: write-back, write-allocate
– L3 organization & size: unified; 8192KB (8MB) shared by cores; 64B blocks
– L3 associativity: 16-way set associative
– L3 write policy: write-back, write-allocate
AMD Barcelona:
– L1 organization & size: split I$ and D$; 64KB each per core; 64B blocks
– L1 associativity: 2-way set associative; LRU replacement
– L1 write policy: write-back, write-allocate
– L2 organization & size: unified; 512KB (0.5MB) per core; 64B blocks
– L2 associativity: 16-way set associative; ~LRU
– L2 write policy: write-back, write-allocate
– L3 organization & size: unified; 2048KB (2MB) shared by cores; 64B blocks
– L3 associativity: 32-way set associative; evicts the block shared by the fewest cores
– L3 write policy: write-back, write-allocate

Summary
Name of the game: reduce cache misses
– Two different memory blocks mapping to the same cache block can knock each other out as a program bounces from one memory location to the next
One way to do it: set associativity
– A memory block maps into more than one cache block
– N-way: N possible places in the cache to hold a given memory block
– A cache of 2^(N+M) blocks can be organized as 2^N ways × 2^M sets