CS61C : Machine Structures – Lecture 32: Caches I (Lecturer PSOE Dan Garcia)

Presentation transcript:

inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures, Lecture 32: Caches I (2004-11-12). Lecturer PSOE Dan Garcia (www.cs.berkeley.edu/~ddgarcia). The Incredibles! Is this the beginning of the end for our beloved… ?! http://beta.search.msn.com/

Review: Pipelining. The pipeline challenge is hazards. Forwarding helps with many data hazards. The delayed branch helps with the control hazard in our 5-stage pipeline. Data hazards with loads: the load delay slot, or an interlock, where a "smart" CPU has hardware to detect a conflict with the instruction following a load and stalls if so. More aggressive performance: superscalar (parallelism) and out-of-order execution.

Big Ideas so far: 15 weeks to learn the big ideas in CS&E. Principle of abstraction, used to build systems as layers. Pliable data: a program determines what it is. Stored-program concept: instructions are just data. Compilation vs. interpretation to move down the layers of the system. Greater performance by exploiting parallelism (pipelining). Principle of locality, exploited via a memory hierarchy (caches). Principles and pitfalls of performance measurement.

Where are we now in 61C? Architecture! (aka "Systems"): CPU organization, pipelining, caches, virtual memory, I/O, networks, performance.

The Big Picture. A computer consists of a Processor (active), made up of Control (the "brain") and the Datapath (the "brawn"); Memory (passive), where programs and data live when running; and Devices for Input (keyboard, mouse) and Output (display, printer), with disk and network serving as both.

Memory Hierarchy (1/3). Processor: executes instructions on the order of nanoseconds to picoseconds; holds a small amount of code and data in registers. Memory: more capacity than registers, but still limited; access time ~50-100 ns. Disk: HUGE capacity (virtually limitless); VERY slow: ~milliseconds.

Memory Hierarchy (2/3). [Figure: levels in the memory hierarchy, with the Processor at the top above Level 1, Level 2, Level 3, …, Level n at the bottom; the size of memory grows at each level, and increasing distance from the processor means decreasing speed.] As we move to deeper levels the latency goes up and the price per bit goes down. Q: Can $/bit go up as we move deeper?

Memory Hierarchy (3/3). If a level is closer to the Processor, it must be: smaller, faster, and a subset of the lower levels (it contains the most recently used data). The lowest level (usually disk) contains all available data. What about the other levels?

Memory Caching. We've discussed three levels in the hierarchy: processor, memory, disk. The mismatch between processor and memory speeds leads us to add a new level: a memory cache. It is implemented with SRAM technology: faster but more expensive than DRAM memory. "S" = Static: no need to refresh, ~10 ns. "D" = Dynamic: needs refresh, ~60 ns. arstechnica.com/paedia/r/ram_guide/ram_guide.part1-1.html

Memory Hierarchy Analogy: Library (1/2). You're writing a term paper (the Processor) at a table in Doe. Doe Library is equivalent to disk: essentially limitless capacity, but very slow to retrieve a book. The table is memory: smaller capacity, which means you must return a book when the table fills up, but it is easier and faster to find a book there once you've already retrieved it.

Memory Hierarchy Analogy: Library (2/2). The open books on the table are the cache: smaller capacity (only a few open books fit on the table, and again, when the table fills up you must close a book), but much, much faster to retrieve data. The illusion created: the whole library is open on the tabletop. Keep as many recently used books open on the table as possible, since you are likely to use them again; also keep as many books on the table as possible, since that is faster than going back to the library.

Memory Hierarchy Basis. Disk contains everything. When the Processor needs something, we bring it into all the higher levels of memory. The cache contains copies of data in memory that are being used. Memory contains copies of data on disk that are being used. The entire idea is based on Temporal Locality: if we use it now, we'll want to use it again soon (a Big Idea).
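A minimal sketch of this idea in C (my own illustration, not from the slides): it repeatedly sums an array whose size we vary. While the array fits in the cache, the repeated passes hit in the cache and run fast; once the working set outgrows the cache, every pass goes back to memory and the time per element grows. SIZE and REPS values here are arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sum the array `reps` times: the repeat exploits temporal locality,
 * the sequential walk exploits spatial locality. */
long sum_array(const int *a, int size, int reps) {
    long sum = 0;
    for (int r = 0; r < reps; ++r)
        for (int i = 0; i < size; ++i)
            sum += a[i];
    return sum;
}

int main(void) {
    for (int size = 1 << 10; size <= 1 << 24; size <<= 2) {
        int *a = malloc(size * sizeof(int));
        for (int i = 0; i < size; ++i) a[i] = 1;

        clock_t start = clock();
        long sum = sum_array(a, size, 16);
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("size=%8d ints  sum=%ld  time=%.3fs\n", size, sum, secs);
        free(a);
    }
    return 0;
}
```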

Cache Design. How do we organize the cache? Where does each memory address map to? (Remember that the cache is a subset of memory, so multiple memory addresses map to the same cache location.) How do we know which elements are in the cache? How do we quickly locate them?

Direct-Mapped Cache (1/2). In a direct-mapped cache, each memory address is associated with one possible block within the cache. Therefore, we only need to look in a single location in the cache for the data, if it exists in the cache at all. The block is the unit of transfer between cache and memory.

Direct-Mapped Cache (2/2). [Figure: a 4-byte direct-mapped cache, cache indexes 0-3, drawn against memory addresses 0 through F.] Let's look at the simplest cache one can build: a direct-mapped cache with only 4 bytes. Cache location 0 can be occupied by data from memory locations 0, 4, 8, C, and so on; cache location 1 by data from memory locations 1, 5, 9, and so on. With 4 blocks, any memory location whose address is a multiple of 4 maps to cache location 0. In general, the cache location a memory address maps to is uniquely determined by the 2 least significant bits of the address (the Cache Index). With so many memory locations to choose from, which one should we place in the cache? The one we read or wrote most recently, because by the principle of temporal locality, the one we just touched is the most likely to be needed again soon. And of all the memory locations that can map to cache location 0, how can we tell which one is currently in the cache?
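As a quick sketch (my own, not from the slides), the mapping for this toy 4-byte cache is just the address modulo the number of cache locations, i.e. the 2 least significant bits; the constant below is only an illustration value.

```c
#include <stdio.h>

#define NUM_CACHE_LOCATIONS 4   /* the toy 4-byte direct-mapped cache above */

int main(void) {
    /* For each memory address 0x0 .. 0xF, show which cache location it
     * maps to: index = address % locations = low 2 bits of the address. */
    for (unsigned addr = 0x0; addr <= 0xF; ++addr)
        printf("memory address 0x%X -> cache location %u\n",
               addr, addr % NUM_CACHE_LOCATIONS);
    return 0;
}
```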

Issues with Direct-Mapped. Since multiple memory addresses map to the same cache index, how do we tell which one is in there? And what if we have a block size greater than 1 byte? Answer: divide the memory address into three fields, tag | index | offset (e.g. ttttttttttttttttt iiiiiiiiii oooo). The tag is used to check whether we have the correct block, the index selects the block, and the offset selects the byte within the block.

Direct-Mapped Cache Terminology All fields are read as unsigned integers. Index: specifies the cache index (which “row” of the cache we should look in) Offset: once we’ve found correct block, specifies which byte within the block we want Tag: the remaining bits after offset and index are determined; these are used to distinguish between all the memory addresses that map to the same location
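A minimal sketch (mine, not the course's code) of how the three fields can be peeled off an address with shifts and masks. The widths assume the 16 KB, 4-word-block, 32-bit example worked through on the next slides (4 offset bits, 10 index bits, 18 tag bits); the address value is arbitrary.

```c
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 4    /* byte within a 16-byte block      */
#define INDEX_BITS  10   /* which of the 2^10 cache rows     */

int main(void) {
    uint32_t addr = 0x12345678;   /* arbitrary example address */

    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    printf("addr = 0x%08X  tag = 0x%05X  index = %u  offset = %u\n",
           addr, tag, index, offset);
    return 0;
}
```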

Caching Terminology. When we try to read memory, 3 things can happen. Cache hit: the cache block is valid and contains the proper address, so we read the desired word. Cache miss: nothing is in the cache at the appropriate block, so we fetch it from memory. Cache miss with block replacement: the wrong data is in the cache at the appropriate block, so we discard it and fetch the desired data from memory (the cache always holds a copy).
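A hedged sketch of the lookup logic behind these three cases (my own illustration, not the course's code), simplified to one word per block; the cache size and names are made up for the example.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS 1024          /* illustration size: 1024 one-word blocks */

struct cache_block {
    bool     valid;              /* has this block ever been filled?        */
    uint32_t tag;                /* which memory address the data came from */
    uint32_t data;               /* one word of cached data                 */
};

static struct cache_block cache[NUM_BLOCKS];

/* Pretend "memory": the word stored at address a is just a itself. */
static uint32_t read_memory(uint32_t addr) { return addr; }

/* Read the word at word-address `addr`, reporting hit, miss, or
 * miss with block replacement (tag mismatch on a valid block). */
uint32_t cache_read(uint32_t addr) {
    uint32_t index = addr % NUM_BLOCKS;
    uint32_t tag   = addr / NUM_BLOCKS;
    struct cache_block *b = &cache[index];

    if (b->valid && b->tag == tag) {
        printf("0x%X: cache hit\n", addr);
        return b->data;
    }
    printf("0x%X: cache miss%s\n", addr,
           b->valid ? ", block replacement" : "");
    /* Fetch from memory and keep a copy: the cache always holds a copy. */
    b->valid = true;
    b->tag   = tag;
    b->data  = read_memory(addr);
    return b->data;
}

int main(void) {
    cache_read(0x0000);   /* miss: block 0 is empty                          */
    cache_read(0x0000);   /* hit:  same address again                        */
    cache_read(0x0400);   /* miss + replacement: same index, different tag   */
    return 0;
}
```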

Direct-Mapped Cache Example (1/3). Suppose we have 16 KB of data in a direct-mapped cache with 4-word blocks. Determine the size of the tag, index and offset fields if we're using a 32-bit architecture. Offset: we need to specify the correct byte within a block; a block contains 4 words = 16 bytes = 2^4 bytes, so we need 4 bits to specify the correct byte.

Direct-Mapped Cache Example (2/3). Index (~an index into an "array of blocks"): we need to specify the correct row in the cache. The cache contains 16 KB = 2^14 bytes and a block contains 2^4 bytes (4 words), so # blocks/cache = (bytes/cache) / (bytes/block) = (2^14 bytes/cache) / (2^4 bytes/block) = 2^10 blocks/cache, and we need 10 bits to specify this many rows.

Direct-Mapped Cache Example (3/3). Tag: use the remaining bits as the tag. Tag length = address length - offset - index = 32 - 4 - 10 = 18 bits, so the tag is the leftmost 18 bits of the memory address. Why not use the full 32-bit address as the tag? All bytes within a block share the same block address, so the offset (4 bits) is not needed, and the index must be the same for every address within a block, so it is redundant in the tag check and can be left off to save memory (here 10 bits).
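The same arithmetic as a tiny sketch (my own, not from the slides), computing the field widths from the cache parameters; for this example it should print 4, 10, and 18.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const int addr_bits   = 32;              /* 32-bit architecture       */
    const int cache_bytes = 16 * 1024;       /* 16 KB of data             */
    const int block_bytes = 4 * 4;           /* 4 words * 4 bytes/word    */

    int offset_bits = (int)log2(block_bytes);                 /* 4  */
    int index_bits  = (int)log2(cache_bytes / block_bytes);   /* 10 */
    int tag_bits    = addr_bits - offset_bits - index_bits;   /* 18 */

    printf("offset = %d bits, index = %d bits, tag = %d bits\n",
           offset_bits, index_bits, tag_bits);
    return 0;
}
```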

Peer Instruction. A: Memory hierarchies were invented before 1950 (UNIVAC I wasn't delivered 'til 1951). B: If you know your computer's cache size, you can often make your code run faster. C: Memory hierarchies take advantage of spatial locality by keeping the most recent data items closer to the processor. Answer choices (ABC): 1: FFF, 2: FFT, 3: FTF, 4: FTT, 5: TFF, 6: TFT, 7: TTF, 8: TTT.

Peer Instruction Answer. A: "We are…forced to recognize the possibility of constructing a hierarchy of memories, each of which has greater capacity than the preceding but which is less accessible." (von Neumann, 1946); so memory hierarchies were indeed conceived before 1950. B: Certainly! That's called "tuning": knowing your cache size can often make your code run faster. C: Keeping the "most recent" data items closer to the processor is temporal locality, not spatial locality.

And in conclusion… We would like to have the capacity of disk at the speed of the processor; unfortunately this is not feasible, so we create a memory hierarchy: each level closer to the processor contains the "most used" data from the next level further away. This exploits temporal locality and does the common case fast while worrying less about the exceptions (a design principle of MIPS). Locality of reference is a Big Idea.