Instructor Paul Pearce

Presentation transcript:

inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures, Lecture 22 – Caches I, 2010-07-28. Instructor Paul Pearce. IT'S NOW LEGAL TO JAILBREAK YOUR PHONE! On Monday the Library of Congress added 5 exceptions to the DMCA that, among other things, confirm the legality of jail-breaking devices such as phones (details @ the URL). This means you can actually hack around with devices you already own. Groundbreaking, isn't it? http://preview.tinyurl.com/29sgmra

Review: Pipelining. The pipeline challenge is hazards. Forwarding helps w/ many data hazards. Delayed branch helps with the control hazard in our 5-stage pipeline. Data hazards w/ loads → load delay slot; interlock → a "smart" CPU has HW to detect a conflict with the instruction following a load, and stalls if so. More aggressive performance (discussed in section a bit today): superscalar (parallelism), out-of-order execution.

The Big Picture: Computer = Processor (active) + Memory (passive) + Devices. The Processor consists of Control (the "brain") and the Datapath (the "brawn"). Memory (passive) is where programs and data live when running. Devices handle Input (keyboard, mouse) and Output (display, printer), plus disk and network.

Memory Hierarchy — i.e., storage in computer systems. Processor: holds data in the register file (~100 bytes); registers are accessed on a nanosecond timescale. Memory (we'll call it "main memory"): more capacity than registers (~GBytes); access time ~50-100 ns — hundreds of clock cycles per memory access?! Disk: HUGE capacity (virtually limitless), but VERY slow: ~milliseconds.

Motivation: Why We Use Caches (written $). [Plot of performance vs. year, 1980–2000, log scale 1–1000:] CPU performance grows ~60%/yr while DRAM improves only ~7%/yr, so the processor–memory performance gap grows ~50% / year; the gap is fundamentally a latency gap. Note that x86 didn't have a cache on chip until 1989: 1989 was the first Intel CPU with a cache on chip; by 1998 the Pentium III had two cache levels on chip.

Memory Caching Mismatch between processor and memory speeds leads us to add a new level: a memory cache Implemented with same IC processing technology as the CPU (usually integrated on same chip): faster but more expensive than DRAM memory. Cache is a copy of a subset of main memory. Most processors have separate caches for instructions and data. This is how we solved the single-memory structural hazard yesterday.

Memory Hierarchy: [pyramid diagram] the Processor sits at the top, with Level 1, Level 2, Level 3, ..., Level n below it (Higher → Lower). Increasing distance from the processor means decreasing speed and increasing size of memory at each level. As we move to deeper levels, the latency goes up and the price per bit goes down.

Memory Hierarchy: if a level is closer to the Processor, it is smaller, faster, more expensive, and a subset of the lower levels (it contains the most recently used data). The lowest level (usually disk) contains all available data (does it go beyond the disk? Is it networked? The cloud?). The memory hierarchy presents the processor with the illusion of a very large & fast memory.

Memory Hierarchy Analogy: Library (1/2). You're writing a term paper (the Processor) at a table in your dorm. Doe Library is equivalent to disk: essentially limitless capacity, but very slow to retrieve a book. Your dorm room is main memory: smaller capacity (you must return a book when the dorm room fills up), but easier and faster to find a book there once you've already retrieved it.

Memory Hierarchy Analogy: Library (2/2). Open books on the table are the cache: smaller capacity (only a few open books fit on the table; when the table fills up, you must close a book and put it away in your dorm), but much, much faster to retrieve data from. Illusion created: the whole library is open on the tabletop. Keep as many recently used books open on the table as possible, since you're likely to use them again; also keep as many books in your dorm as possible, since that's faster than going to the library. In reality, disk is SO slow that it's more like having to drive to the Stanford Library. And who wants to do that?

Memory Hierarchy Basis Cache contains copies of data in memory that are being used. Memory contains copies of data on disk that are being used. Caches work on the principles of temporal and spatial locality. Temporal Locality: if we use it now, chances are we’ll want to use it again soon. Spatial Locality: if we use a piece of memory, chances are we’ll use the neighboring pieces soon.
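To make the two kinds of locality concrete, here is a minimal C sketch (my own, not from the slides): both loops compute the same sum, but the row-major version walks memory sequentially, so each fetched cache block serves several accesses (spatial locality), while the column-major version jumps N*sizeof(int) bytes between accesses and tends to miss far more often on large arrays.

#include <stdio.h>

#define N 1024

static int grid[N][N];

/* Row-major: consecutive iterations touch adjacent bytes, so once a cache
   block is fetched, the next several accesses hit in it (spatial locality). */
long sum_row_major(void) {
    long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];
    return sum;
}

/* Column-major: consecutive iterations are N*sizeof(int) bytes apart, so each
   access typically lands in a different block and misses much more often. */
long sum_col_major(void) {
    long sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += grid[i][j];
    return sum;
}

int main(void) {
    printf("%ld %ld\n", sum_row_major(), sum_col_major());
    return 0;
}

Both functions return the same value; only the access order (and hence the miss rate and run time) differs.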

Cache Design: How do we organize the cache? Where does each memory address map to? (Remember that the cache is a subset of memory, so multiple memory addresses map to the same cache location.) How do we know which elements are in the cache? How do we quickly locate them?

Administrivia: Homework 8 is due tonight at midnight. Project 2: find a partner and get going; due Monday. Project 1 and HW4 grades are up now. We had to regrade the project a few times to make sure the distribution was fair, and it took longer than expected. Sorry. HW5 and 6 are in the pipeline; should be done soon. Reminder: the drop deadline is this FRIDAY. I believe you have to drop in person, so don't wait.

Direct-Mapped Cache (1/4) In a direct-mapped cache, each memory address is associated with one possible block within the cache Therefore, we only need to look in a single location in the cache for the data if it exists in the cache Block is the unit of transfer between cache and memory

Direct-Mapped Cache (2/4): consider the simplest cache one can build — a 4-byte direct-mapped cache (4 blocks, block size = 1 byte, cache indices 0–3) in front of a memory with addresses 0, 1, 2, ..., F. Cache location 0 can be occupied by data from memory location 0, 4, 8, C, ... — any memory location that is a multiple of 4 — and cache location 1 by data from memory location 1, 5, 9, ..., etc. In general, the cache location a memory address can map to is determined uniquely by the 2 least-significant bits of the address (the cache index). With so many memory locations to choose from, which one should we place in the cache? The one we read or wrote most recently: by the principle of temporal locality it is the one most likely to be needed again soon. Of all the memory locations that can be placed in cache location 0, how can we tell which one is in the cache? And what if we wanted a block to be bigger than one byte?

Direct-Mapped Cache (3/4): now an 8-byte direct-mapped cache with block size = 2 bytes (so 4 blocks, indices 0–3), in front of a memory with addresses 0, 2, 4, ..., 1E. When we ask for a byte, the system finds the right block and loads all of it. How does it know the right block? How do we select the byte? E.g., memory address 11101? How does it know WHICH colored block it originated from? What do you do at baggage claim?
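One way to work the 11101 example (a sketch, assuming the 8-byte cache with 2-byte blocks just described): the block size is 2 bytes, so the lowest bit (1) is the byte offset; the cache has 4 blocks, so the next 2 bits (10) are the cache index, i.e. the byte lives in cache block 2; the remaining bits (11) identify which memory block — the one starting at address 11100 = 0x1C — the data came from, and that is exactly what the tag on the next slide records.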

Direct-Mapped Cache (4/4): the same 8-byte direct-mapped cache (block size = 2 bytes), now with a tag added: each cache row holds a Tag plus the Data, and memory addresses 0, 2, ..., 1E map onto cache indices 0–3. What should go in the tag? Do we need the entire address? What do all these tags have in common? What did we do with the immediate in branch addressing — did we always count by bytes? Why not count by cache block number here? It's useful to draw memory with the same width as the block size.

Issues with Direct-Mapped: since multiple memory addresses map to the same cache index, how do we tell which one is in there? What if we have a block size > 1 byte? Answer: divide the memory address (bits 31..0) into three fields: tag (ttt...t) — to check if we have the correct block; index (iii...i) — to select the block; offset (oooo) — the byte within the block.

Direct-Mapped Cache Terminology All fields are read as unsigned integers. Index specifies the cache index (which “row”/block of the cache we should look in) Offset once we’ve found correct block, specifies which byte within the block we want Tag the remaining bits after offset and index are determined; these are used to distinguish between all the memory addresses that map to the same location

TIO: Dan's great cache mnemonic. AREA (cache size, in B) = HEIGHT (# of blocks) × WIDTH (size of one block, in B/block): 2^(H+W) = 2^H × 2^W. In the address (usually 32 bits), the fields are Tag | Index | Offset, where the Index is H bits (it selects the row, i.e. the HEIGHT) and the Offset is W bits (it selects the byte within one block, i.e. the WIDTH).
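For example, with the 16 KB cache and 16 B blocks used later in this lecture: AREA = 2^14 B, WIDTH = 2^4 B/block, so HEIGHT = 2^14 / 2^4 = 2^10 = 1024 blocks; that is H = 10 index bits and W = 4 offset bits, leaving 32 − 10 − 4 = 18 tag bits.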

Direct-Mapped Cache Example (1/3): suppose we have 8 B of data in a direct-mapped cache with 2-byte blocks (sound familiar?). Determine the size of the tag, index and offset fields if we're using a 32-bit architecture. Offset: we need to specify the correct byte within a block; a block contains 2 bytes = 2^1 bytes, so we need 1 bit to specify the correct byte.

Direct-Mapped Cache Example (2/3): Index (~index into an "array of blocks"): we need to specify the correct block in the cache. The cache contains 8 B = 2^3 bytes and a block contains 2 B = 2^1 bytes, so # blocks/cache = bytes/cache ÷ bytes/block = 2^3 bytes/cache ÷ 2^1 bytes/block = 2^2 blocks/cache; we need 2 bits to specify this many blocks.

Direct-Mapped Cache Example (3/3): Tag: use the remaining bits as the tag. Tag length = address length − offset − index = 32 − 1 − 2 = 29 bits, so the tag is the leftmost 29 bits of the memory address. Why not use the full 32-bit address as the tag? The offset differs for the bytes within a block, so it shouldn't be included; and the index is the same for every address that maps to a given cache location, so it's redundant in the tag check and can be left off to save memory (here 2 bits × 4 entries = 8 bits).
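As a sanity check, here is a small C sketch (my own, not from the lecture) that pulls the three fields out of a 32-bit address using the widths just derived (1 offset bit, 2 index bits, 29 tag bits):

#include <stdint.h>
#include <stdio.h>

/* Field widths for the example above: 8 B direct-mapped cache, 2 B blocks,
   32-bit addresses => 1 offset bit, 2 index bits, 29 tag bits. */
#define OFFSET_BITS 1
#define INDEX_BITS  2

static uint32_t get_offset(uint32_t addr) { return addr & ((1u << OFFSET_BITS) - 1); }
static uint32_t get_index(uint32_t addr)  { return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); }
static uint32_t get_tag(uint32_t addr)    { return addr >> (OFFSET_BITS + INDEX_BITS); }

int main(void) {
    uint32_t addr = 0x1D;   /* the 11101 example from earlier */
    printf("addr 0x%x -> tag %u, index %u, offset %u\n",
           addr, get_tag(addr), get_index(addr), get_offset(addr));
    return 0;
}

For address 11101 this prints tag 3, index 2, offset 1, matching the hand calculation above.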

Caching Terminology: when reading memory, 3 things can happen. Cache hit — the cache block is valid and contains the proper address, so read the desired word. Cache miss — nothing in the cache at the appropriate block, so fetch from memory. Cache miss, block replacement — wrong data is in the cache at the appropriate block, so discard it and fetch the desired data from memory (the cache always holds a copy).

Accessing data in a direct-mapped cache. Ex.: 16 KB of data, direct-mapped, 4-word blocks. Can you work out the height, width, and area? Read 4 addresses: 0x00000014, 0x0000001C, 0x00000034, 0x00008014. Memory values at the relevant words:

Memory Address (hex)   Value of Word
00000010               a
00000014               b
00000018               c
0000001C               d
...
00000030               e
00000034               f
00000038               g
0000003C               h
...
00008010               i
00008014               j
00008018               k
0000801C               l

Direct-Mapped Cache Example (16 KB, 16 B blocks): Offset — a block contains 16 bytes = 2^4 bytes ⇒ 4 bits for the offset. Index — the cache contains 16 KB = 2^14 bytes and a block contains 16 B = 2^4 bytes, so 2^14 bytes/cache ÷ 2^4 bytes/block = 2^10 blocks/cache ⇒ 10 bits for the index. Tag — 32-bit address ⇒ 32 − 10 − 4 = 18 bits for the tag.

Accessing data in a direct mapped cache 4 Addresses: 0x00000014, 0x0000001C, 0x00000034, 0x00008014 4 Addresses divided (for convenience) into Tag, Index, Byte Offset fields 000000000000000000 0000000001 0100 000000000000000000 0000000001 1100 000000000000000000 0000000011 0100 000000000000000010 0000000001 0100 Tag Index Offset

16 KB Direct-Mapped Cache, 16 B blocks: the cache is drawn as a table with 1024 rows (Index 0–1023), each row holding a Valid bit, a Tag, and the 16 data bytes (columns 0x0-3, 0x4-7, 0x8-b, 0xc-f). The Valid bit determines whether anything is stored in that row (when the computer is initially turned on, all entries are invalid).

1. Load word from 0x00000014: Tag field 000000000000000000, Index field 0000000001, Offset 0100.

So we read block 1 (index 0000000001).

No valid data there yet (valid bit = 0), so this is a miss.

So load that block into the cache from memory, setting the tag and the valid bit: row 1 now holds Valid = 1, Tag = 0, data d c b a (bytes 0xc-f, 0x8-b, 0x4-7, 0x0-3).

Read from the cache at offset 0100, return word b.

2. Load word from 0x0000001C: Tag field 000000000000000000, Index field 0000000001, Offset 1100.

The row at index 1 is valid.

Index valid, and the tag matches.

Index valid, tag matches: return word d.
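Putting the whole access sequence together, here is a minimal C sketch (illustrative only, not the lecture's code) of the 16 KB direct-mapped cache with 16 B blocks: it tracks only valid bits and tags, and classifies each of the four reads as a hit, a miss, or a miss with block replacement.

#include <stdint.h>
#include <stdio.h>

/* 16 KB cache, 16 B blocks => 1024 blocks, 4 offset bits, 10 index bits,
   18 tag bits.  Data bytes are omitted; we only decide hit vs. miss. */
#define NUM_BLOCKS  1024
#define OFFSET_BITS 4
#define INDEX_BITS  10

static int      valid[NUM_BLOCKS];
static uint32_t tags[NUM_BLOCKS];

static const char *access_cache(uint32_t addr) {
    uint32_t idx = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1);
    uint32_t tag = addr >> (OFFSET_BITS + INDEX_BITS);
    if (valid[idx] && tags[idx] == tag)
        return "hit";
    const char *kind = valid[idx] ? "miss, block replacement" : "miss";
    valid[idx] = 1;    /* fetch the block from memory ... */
    tags[idx]  = tag;  /* ... and remember which block now lives here */
    return kind;
}

int main(void) {
    uint32_t addrs[] = { 0x00000014, 0x0000001C, 0x00000034, 0x00008014 };
    for (int i = 0; i < 4; i++)
        printf("0x%08x -> %s\n", addrs[i], access_cache(addrs[i]));
    return 0;
}

Running it, the first read misses and the second hits, exactly as in the slides above; the last address, 0x00008014, maps to the same index as 0x00000014 but with a different tag, so it causes a block replacement.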

Peer Instruction: 1) All caches take advantage of spatial locality. 2) All caches take advantage of temporal locality. 3) If you know your computer's cache size, you can often make your code run faster. Answer choices (123): a) FFT  b) FTF  c) FTT  d) TFT  e) TTT

And in Conclusion… We would like to have the capacity of disk at the speed of the processor; unfortunately this is not feasible. So we create a memory hierarchy: each successively lower level contains the "most used" data from the next higher level. This exploits temporal & spatial locality: do the common case fast, worry less about the exceptions (a design principle of MIPS). Locality of reference is a Big Idea.