CMPE 421 Advanced Computer Architecture: Accessing a Cache, Part 1


2 Direct Mapped Caching: A Simple First Example

[Figure: a 16-word main memory (word addresses 0000 through 1111, with two low-order bits selecting the byte within each 32-bit word) mapped onto a 4-block, one-word-per-block cache; each cache entry holds a Valid bit, a Tag, and Data.]

A memory block maps to cache block (block address) modulo (# of blocks in the cache).

Q1: How do we find it? Use the next two low-order memory address bits, the index, to determine which cache block to check (i.e., the block address modulo the number of blocks in the cache).

Q2: Is it there? Compare the cache tag to the high-order two memory address bits to tell whether the memory block is in the cache. The valid bit indicates whether an entry contains valid information; if the bit is not set, there cannot be a match for this block.
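The mapping above can be sketched in a few lines of Python. This is a hypothetical helper, not from the slides: it splits a byte address into tag and index for this toy configuration (4 one-word blocks, 4-byte words).

```python
NUM_BLOCKS = 4      # cache blocks in the toy example
BYTES_PER_WORD = 4  # 32-bit words, so the 2 low-order bits are the byte offset

def decompose(byte_addr):
    block_addr = byte_addr // BYTES_PER_WORD  # drop the byte-offset bits
    index = block_addr % NUM_BLOCKS           # which cache block to check (Q1)
    tag = block_addr // NUM_BLOCKS            # high-order bits kept for the match (Q2)
    return tag, index

# Memory word 0b1011 lands in cache block 11 mod 4 = 0b11, with tag 0b10.
print(decompose(0b101100))  # (2, 3)
```

The modulo by a power of two is exactly "take the next two low-order bits," which is why hardware can do it with wires rather than a divider.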

3 Direct Mapped Cache Example 1

8 blocks, 1 word/block, direct mapped. Initial state (all entries invalid):

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

4 Direct Mapped Cache Example 1

Word addr  Binary addr  Hit/miss  Cache block
22         10110        Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

5 Direct Mapped Cache Example 1

Word addr  Binary addr  Hit/miss  Cache block
26         11010        Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

6 Direct Mapped Cache Example 1

Word addr  Binary addr  Hit/miss  Cache block
22         10110        Hit       110
26         11010        Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

7 Direct Mapped Cache Example 1

Word addr  Binary addr  Hit/miss  Cache block
16         10000        Miss      000
3          00011        Miss      011
16         10000        Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

8 Direct Mapped Cache Example 1

Word addr  Binary addr  Hit/miss  Cache block
18         10010        Miss      010  (tag 10 replaces tag 11 in block 010)

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N
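The whole example can be replayed by a minimal direct-mapped cache simulator. This is a sketch, assuming the 8-block, one-word-per-block configuration and the word-address reference string implied by the tags and indices in the tables (22, 26, 22, 26, 16, 3, 16, 18):

```python
NUM_BLOCKS = 8

def simulate(refs):
    cache = [None] * NUM_BLOCKS   # each entry holds a tag, or None if invalid
    results = []
    for addr in refs:             # word addresses
        index = addr % NUM_BLOCKS
        tag = addr // NUM_BLOCKS
        if cache[index] == tag:
            results.append("hit")
        else:
            results.append("miss")
            cache[index] = tag    # allocate on miss, replacing any previous tag
    return results

print(simulate([22, 26, 22, 26, 16, 3, 16, 18]))
# ['miss', 'miss', 'hit', 'hit', 'miss', 'miss', 'hit', 'miss']
```

The final miss (address 18) reproduces the replacement in block 010: addresses 26 and 18 share index 010 but differ in tag.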

9 Direct Mapped Cache Example 2 (4, 1-word blocks)

Consider the main memory word reference string 0 1 2 3 4 3 4 15. Start with an empty cache; all blocks initially marked as not valid.

10 Direct Mapped Cache Example 2 (4, 1-word blocks)

Consider the main memory word reference string 0 1 2 3 4 3 4 15. Start with an empty cache; all blocks initially marked as not valid.

0 miss, 1 miss, 2 miss, 3 miss (compulsory misses filling indices 00 through 11), 4 miss (index 00, tag 01 replaces Mem(0)), 3 hit, 4 hit, 15 miss (index 11, tag 11 replaces Mem(3)).

8 requests, 6 misses, 2 hits.

11 Address Subdivision: Direct Mapped Cache

One word/block, cache size = 1K words. The 32-bit address is split into a 20-bit tag, a 10-bit index that selects one of the 1K cache entries, and a 2-bit byte offset.

FIGURE 7.7: For this cache, the lower portion of the address is used to select a cache entry consisting of a data word and a tag; the stored tag is compared against the upper 20 address bits, and a hit is signaled when the entry is valid and the tags match.

What kind of locality are we taking advantage of?
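The field widths in Figure 7.7 follow directly from the cache parameters; a quick sanity check in Python (a sketch, not part of the slides):

```python
import math

cache_words = 1024                         # 1K one-word entries
byte_offset_bits = 2                       # 32-bit (4-byte) words
index_bits = int(math.log2(cache_words))   # 10 bits select one of 1K entries
tag_bits = 32 - index_bits - byte_offset_bits

print(byte_offset_bits, index_bits, tag_bits)  # 2 10 20
```

Doubling the number of entries would steal one bit from the tag for the index; the three fields always sum to the 32-bit address width.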

12 Another Example for Direct Mapping

Consider the main memory word reference string 0 4 0 4 0 4 0 4. Start with an empty cache; all blocks initially marked as not valid.

13 Another Example for Direct Mapping

Consider the main memory word reference string 0 4 0 4 0 4 0 4. Start with an empty cache; all blocks initially marked as not valid.

0 miss (index 00, tag 00), 4 miss (index 00, tag 01 replaces Mem(0)), 0 miss (tag 00 replaces Mem(4)), 4 miss, and so on: every reference evicts the other word.

8 requests, 8 misses. Ping-pong effect due to conflict misses: two memory locations that map into the same cache block.
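The ping-pong effect is easy to reproduce. A minimal sketch, assuming the 4-block, one-word-per-block cache above, where word addresses 0 and 4 both map to index 0 (0 mod 4 == 4 mod 4):

```python
NUM_BLOCKS = 4
cache = [None] * NUM_BLOCKS   # tag per block, None = invalid
misses = 0
for addr in [0, 4, 0, 4, 0, 4, 0, 4]:
    index, tag = addr % NUM_BLOCKS, addr // NUM_BLOCKS
    if cache[index] != tag:   # conflict: the other word's tag is there
        misses += 1
        cache[index] = tag
print(misses)  # 8 - every single reference misses
```

A 2-way set-associative cache (the subject of Part 2) would turn all but the first two of these references into hits, since both words could reside in the set at once.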

14 Multiword (4-Word) Block Direct Mapped Cache

Four words/block, cache size = 16K words (4K blocks). The 32-bit address splits into a 16-bit tag, a 12-bit index that selects one of the 4K blocks, a 2-bit block offset that selects the word within the block, and a 2-bit byte offset.

What kind of locality are we taking advantage of?
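As before, the field widths can be checked against the cache parameters. A sketch for the multiword configuration above:

```python
import math

words_per_block = 4
cache_words = 16 * 1024
num_blocks = cache_words // words_per_block          # 4K blocks

byte_offset_bits = 2                                 # 4-byte words
block_offset_bits = int(math.log2(words_per_block))  # 2 bits pick the word in a block
index_bits = int(math.log2(num_blocks))              # 12 bits pick the block
tag_bits = 32 - index_bits - block_offset_bits - byte_offset_bits

print(byte_offset_bits, block_offset_bits, index_bits, tag_bits)  # 2 2 12 16
```

Note that growing the block size at fixed capacity moves bits from the index to the block offset; the tag width is unchanged as long as capacity is unchanged.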

15 Taking Advantage of Spatial Locality

Let the cache block hold more than one word: two words per block. Consider the main memory word reference string 0 1 2 3 4 3 4 15. Start with an empty cache; all blocks initially marked as not valid.

16 Taking Advantage of Spatial Locality

Let the cache block hold more than one word: two words per block. Start with an empty cache; all blocks initially marked as not valid.

0 miss (loads Mem(1) Mem(0)), 1 hit, 2 miss (loads Mem(3) Mem(2)), 3 hit, 4 miss (tag 01 loads Mem(5) Mem(4), replacing Mem(1) Mem(0)), 3 hit, 4 hit, 15 miss.

8 requests, 4 misses.
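The improvement over Example 2 (6 misses with one-word blocks) comes from each miss fetching a neighbor. A sketch, assuming the two-block, two-word-per-block cache of the slide above and the same reference string:

```python
NUM_BLOCKS = 2        # two 2-word blocks: 4 words of cache in total
WORDS_PER_BLOCK = 2

cache = [None] * NUM_BLOCKS
misses = 0
for word_addr in [0, 1, 2, 3, 4, 3, 4, 15]:
    block_addr = word_addr // WORDS_PER_BLOCK   # neighbors share a block address
    index, tag = block_addr % NUM_BLOCKS, block_addr // NUM_BLOCKS
    if cache[index] != tag:
        misses += 1
        cache[index] = tag                      # fetch the whole block on a miss
print(misses)  # 4 - fetching whole blocks turns neighboring-word misses into hits
```

References to 1 and 3 hit because the misses on 0 and 2 already brought their blocks in: that is spatial locality at work.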

17 Miss Rate vs. Block Size vs. Cache Size

Miss rate goes up if the block size becomes too large relative to the cache size, because the number of blocks that can be held in a cache of the same size becomes smaller (increasing capacity misses).
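This trade-off can be seen even in a toy simulation. The sketch below uses a made-up reference stream (not from the slides) with both spatial locality and reuse, and counts direct-mapped misses for an 8-word cache as the block size grows:

```python
def misses(refs, cache_words, words_per_block):
    """Count misses for a direct-mapped cache of the given geometry."""
    num_blocks = cache_words // words_per_block
    cache = [None] * num_blocks
    count = 0
    for w in refs:                       # word addresses
        blk = w // words_per_block
        idx, tag = blk % num_blocks, blk // num_blocks
        if cache[idx] != tag:
            count += 1
            cache[idx] = tag
    return count

# Hypothetical stream: sequential runs (spatial locality) plus repeated reuse.
refs = [0, 1, 2, 3, 16, 17, 0, 1, 16, 2, 3, 17] * 4
for bs in (1, 2, 4, 8):
    print(bs, misses(refs, cache_words=8, words_per_block=bs))
```

For this stream the miss count first drops as blocks capture spatial locality, then rises again once the 8-word cache holds too few blocks and reused lines keep evicting each other.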
