CS 152 Computer Architecture and Engineering Lecture 7 - Memory Hierarchy-II Krste Asanovic Electrical Engineering and Computer Sciences University of California at Berkeley

Last time in Lecture 6

Dynamic RAM (DRAM) is the main form of main memory storage in use today
– Holds values on small capacitors, needs refreshing (hence "dynamic")
– Slow multi-step access: precharge, read row, read column

Static RAM (SRAM) is faster but more expensive
– Used to build on-chip memory for caches

Caches exploit two forms of predictability in memory reference streams
– Temporal locality: the same location is likely to be accessed again soon
– Spatial locality: neighboring locations are likely to be accessed soon

A cache holds a small set of values in fast memory (SRAM) close to the processor
– Needs a search scheme to find values in the cache, and a replacement policy to make space for newly accessed locations

Relative Memory Cell Sizes

[Figure: relative cell sizes of DRAM on a memory chip vs. on-chip SRAM in a logic chip. From Foss, "Implementing Application-Specific Memory", ISSCC 1996]

Placement Policy

Where can memory block 12 be placed in an 8-block cache?
– Fully associative: anywhere
– (2-way) set-associative: anywhere in set 0 (12 mod 4)
– Direct mapped: only into block 4 (12 mod 8)

Direct-Mapped Cache

[Figure: a cache of 2^k lines, each holding a valid bit (V), a tag, and a data block. The address is split into Tag (t bits), Index (k bits), and Block Offset (b bits); the index selects one line, the stored tag is compared (=) against the address tag, and on a match HIT is asserted and the selected data word or byte is returned.]
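To make the lookup concrete, here is a minimal C sketch of a direct-mapped access, assuming illustrative parameters (a 1024-line cache with 32-byte blocks and 32-bit addresses); the names and sizes are mine, not the lecture's:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define K 10                          /* index bits: 2^10 = 1024 lines */
#define B 5                           /* offset bits: 32-byte blocks   */
#define NLINES (1u << K)

typedef struct { bool valid; uint32_t tag; } Line;
static Line cache[NLINES];

/* Returns true on hit; on a miss, refills the line (data not modeled). */
bool dm_access(uint32_t addr) {
    uint32_t index = (addr >> B) & (NLINES - 1);  /* k index bits      */
    uint32_t tag   = addr >> (K + B);             /* t = 32-k-b bits   */
    Line *l = &cache[index];
    if (l->valid && l->tag == tag)
        return true;                              /* HIT               */
    l->valid = true;                              /* miss: refill line */
    l->tag   = tag;
    return false;
}

int main(void) {
    printf("%d ",  dm_access(0x1234));  /* 0: cold miss                */
    printf("%d ",  dm_access(0x1234));  /* 1: hit                      */
    printf("%d\n", dm_access(0x1230));  /* 1: same 32-byte block       */
    return 0;
}
```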

2-Way Set-Associative Cache

[Figure: two direct-mapped arrays accessed in parallel. The address Tag/Index/Block Offset fields (t/k/b bits) index both ways; both stored tags are compared (=) simultaneously, and a hit in either way selects that way's data word or byte.]
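A hedged sketch of the same lookup extended to two ways, with a single LRU bit per set standing in for the replacement state; parameters and names are again illustrative:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define K 9                            /* 2^9 = 512 sets               */
#define B 5                            /* 32-byte blocks               */
#define NSETS (1u << K)

typedef struct { bool valid; uint32_t tag; } Way;
typedef struct { Way way[2]; int lru; } Set;    /* lru = way to evict  */
static Set sets[NSETS];

bool sa_access(uint32_t addr) {
    uint32_t index = (addr >> B) & (NSETS - 1);
    uint32_t tag   = addr >> (K + B);
    Set *s = &sets[index];
    for (int w = 0; w < 2; w++)                /* both tags compared   */
        if (s->way[w].valid && s->way[w].tag == tag) {
            s->lru = 1 - w;                    /* other way is now LRU */
            return true;                       /* HIT in way w         */
        }
    int victim = s->lru;                       /* miss: evict LRU way  */
    s->way[victim].valid = true;
    s->way[victim].tag   = tag;
    s->lru = 1 - victim;
    return false;
}

int main(void) {
    printf("%d ",  sa_access(0x0000));  /* 0: cold miss                */
    printf("%d ",  sa_access(0x4000));  /* 0: same set, second way     */
    printf("%d\n", sa_access(0x0000));  /* 1: both blocks coexist      */
    return 0;
}
```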

Fully Associative Cache

[Figure: no index field. The address is split into Tag (t bits) and Block Offset (b bits) only; every line's tag is compared (=) against the address tag in parallel, any match asserts HIT, and the matching line supplies the data word or byte.]

Replacement Policy

In an associative cache, which block from a set should be evicted when the set becomes full?
– Random
– Least Recently Used (LRU): LRU cache state must be updated on every access; a true implementation is only feasible for small sets (2-way); a pseudo-LRU binary tree is often used for 4- to 8-way (see the sketch below)
– First In, First Out (FIFO), a.k.a. Round-Robin: used in highly associative caches
– Not Least Recently Used (NLRU): FIFO with an exception for the most recently used block or blocks

Replacement policy is a second-order effect. Why? Replacement only happens on misses.
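The tree pseudo-LRU mentioned above can be illustrated for one 4-way set: three bits form a binary tree whose nodes point away from the recently used half. This is one common encoding, not necessarily the one any particular machine uses:

```c
#include <stdio.h>

/* bits[0]: root, 0 = evict from left pair {0,1}, 1 = from right {2,3}
 * bits[1]: left pair, 0 = evict way 0; bits[2]: right pair, 0 = way 2 */
typedef struct { int bits[3]; } PLRU4;

void plru_touch(PLRU4 *p, int way) {     /* on every access to 'way'   */
    if (way < 2) { p->bits[0] = 1; p->bits[1] = (way & 1) ? 0 : 1; }
    else         { p->bits[0] = 0; p->bits[2] = (way & 1) ? 0 : 1; }
}

int plru_victim(const PLRU4 *p) {        /* follow the tree pointers   */
    return (p->bits[0] == 0) ? (p->bits[1] == 0 ? 0 : 1)
                             : (p->bits[2] == 0 ? 2 : 3);
}

int main(void) {
    PLRU4 p = {{0, 0, 0}};
    plru_touch(&p, 0);
    plru_touch(&p, 2);
    plru_touch(&p, 1);
    printf("victim = %d\n", plru_victim(&p));  /* 3: the untouched way */
    return 0;
}
```

Three bits approximate the full LRU ordering of four ways (which would need more state); here the approximation happens to agree with true LRU.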

Block Size and Spatial Locality

A block is the unit of transfer between the cache and memory. The CPU address is split into a block address (the upper 32−b bits, i.e., tag and index) and a b-bit offset, where 2^b is the block size, a.k.a. line size, in bytes. [Figure: a 4-word block, Word0–Word3, with b = 2 word-offset bits.]

A larger block size has distinct hardware advantages (quantified in the sketch below):
– less tag overhead
– exploits fast burst transfers from DRAM
– exploits fast burst transfers over wide busses

What are the disadvantages of increasing block size?
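One way to see the tag-overhead advantage is to tabulate total tag storage for a fixed-capacity cache as the block size grows. The following C sketch assumes a hypothetical 32 KB direct-mapped cache with 32-bit addresses (and the GCC/Clang __builtin_ctz intrinsic):

```c
#include <stdio.h>

int main(void) {
    const int addr_bits = 32, cache_bytes = 32 * 1024;
    for (int block = 16; block <= 128; block *= 2) {
        int lines = cache_bytes / block;
        int b = __builtin_ctz(block);    /* offset bits (GCC/Clang)    */
        int k = __builtin_ctz(lines);    /* index bits                 */
        int tag = addr_bits - k - b;     /* tag bits per line          */
        printf("block=%3dB lines=%5d tag=%2db total tag store=%6d bits\n",
               block, lines, tag, tag * lines);
    }
    return 0;
}
```

For a direct-mapped cache of fixed capacity the tag width per line stays constant, but larger blocks mean fewer lines and therefore fewer tags overall, which is where the savings come from.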

CPU-Cache Interaction (5-stage pipeline)

[Figure: a 5-stage pipeline datapath. The primary instruction cache sits in the fetch stage (PC → addr, inst → IR, with a +0x4 adder and PCen gating); the primary data cache sits in the memory stage (addr, wdata, rdata, we), between the ALU/MD registers and writeback. Both caches are refilled with data from lower levels of the memory hierarchy, and each produces a "hit?" signal; a data-cache miss goes to the memory control and inserts nops.]

Stall the entire CPU on a data cache miss.
What about an instruction miss, or writes to the instruction stream?

Improving Cache Performance

Average memory access time = Hit time + Miss rate x Miss penalty

To improve performance:
– reduce the hit time
– reduce the miss rate
– reduce the miss penalty

What is the simplest design strategy?
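The slide's formula as executable arithmetic, with hypothetical numbers (1-cycle hits, 5% miss rate, 20-cycle miss penalty):

```c
#include <stdio.h>

int main(void) {
    double hit_time = 1.0, miss_rate = 0.05, miss_penalty = 20.0;
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.2f cycles\n", amat);    /* 1 + 0.05 * 20 = 2.00 */
    return 0;
}
```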

Causes for Cache Misses

– Compulsory: first reference to a block, a.k.a. cold-start misses – misses that would occur even with an infinite cache
– Capacity: the cache is too small to hold all the data needed by the program – misses that would occur even under a perfect replacement policy
– Conflict: misses that occur because of collisions due to the block-placement strategy – misses that would not occur with full associativity

Effect of Cache Parameters on Performance

– Larger cache size: fewer capacity misses, but potentially longer hit time
– Higher associativity: fewer conflict misses, but potentially longer hit time
– Larger block size: fewer compulsory misses, but a larger miss penalty (and possibly more conflict misses in a small cache)

Write Policy Choices

Cache hit:
– write-through: write both cache & memory
» generally higher traffic, but simplifies cache coherence
– write-back: write cache only (memory is written only when the entry is evicted)
» a dirty bit per block can further reduce the traffic

Cache miss:
– no-write-allocate: only write to main memory
– write-allocate (a.k.a. fetch-on-write): fetch the block into the cache

Common combinations (contrasted in the sketch below):
– write-through and no-write-allocate
– write-back with write-allocate
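A minimal sketch contrasting the two common combinations on a one-line "cache", counting only the writes that reach memory; all names and numbers are illustrative, not from the lecture:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { bool valid, dirty; uint32_t tag, data; } Line;
static Line line;                  /* one-line cache: enough for policy */
static int mem_writes;             /* writes that reach main memory     */

void store_wt_nwa(uint32_t tag, uint32_t v) { /* write-through, no-allocate */
    if (line.valid && line.tag == tag)
        line.data = v;             /* update the cache only on a hit    */
    mem_writes++;                  /* every store also writes memory    */
}

void store_wb_wa(uint32_t tag, uint32_t v) {  /* write-back, write-allocate */
    if (!(line.valid && line.tag == tag)) {   /* store miss:            */
        if (line.valid && line.dirty)
            mem_writes++;                     /* write back dirty victim */
        line = (Line){true, false, tag, 0};   /* fetch block into cache  */
    }
    line.data  = v;
    line.dirty = true;             /* memory updated only on eviction   */
}

int main(void) {
    store_wt_nwa(1, 10); store_wt_nwa(1, 11);
    printf("write-through memory writes: %d\n", mem_writes);     /* 2 */
    mem_writes = 0; line = (Line){false, false, 0, 0};
    store_wb_wa(1, 10); store_wb_wa(1, 11); store_wb_wa(2, 12);
    printf("write-back memory writes:    %d\n", mem_writes);     /* 1 */
    return 0;
}
```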

Write Performance

[Figure: the same direct-mapped datapath as before (Tag/Index/Block Offset, 2^k lines, valid bit, tag compare producing HIT), now with a write-enable (WE) input on the data RAM – the data write can only be enabled once the tag check has determined a hit.]

Reducing Write Hit Time

Problem: writes take two cycles in the memory stage – one cycle for the tag check plus one cycle for the data write, if a hit.

Solutions:
– Design a data RAM that can perform a read and a write in one cycle; restore the old value after a tag miss
– Fully associative (CAM-tag) caches: the word line is only enabled on a hit
– Pipelined writes: hold the store's write data in a single buffer ahead of the cache; write the cache data during the next store's tag check

Pipelining Cache Writes

[Figure: tag and data arrays with a delayed-write address/data buffer between them. The incoming store's address and data arrive from the CPU; its address checks the tags (=? → Hit?) while the previous store's buffered data is written into the data array. Loads read the data array directly and return load data to the CPU.]

Data from a store hit is written into the data portion of the cache during the tag access of the subsequent store.

CS152 Administrivia

– Krste: no office hours Monday 2/18 (Presidents' Day holiday); email for an alternate time
– Henry: office hours, 511 Soda
» none on Monday due to the holiday
» 2:00-3:00 PM Fridays
– In-class quiz dates
» Q1: Tuesday, February 19 (ISAs, microcode, simple pipelining)
» Material covered: Lectures 1-5, PS1, Lab 1
– We're stuck in this room for the semester (nothing else open)

Write Buffer to Reduce Read Miss Penalty

[Figure: CPU and register file with a data cache and unified L2 cache; a write buffer sits between the data cache and L2, holding evicted dirty lines for a write-back cache OR all writes in a write-through cache.]

The processor is not stalled on writes, and read misses can go ahead of writes to main memory.

Problem: the write buffer may hold the updated value of a location needed by a read miss.
– Simple scheme: on a read miss, wait for the write buffer to go empty
– Faster scheme: check write-buffer addresses against the read-miss address; if no match, allow the read miss to go ahead of the writes, else return the value in the write buffer (sketched below)
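A sketch of the faster scheme's address check, assuming a hypothetical 4-entry buffer; on a match the buffered value is forwarded, otherwise the read miss may bypass the queued writes:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WB_ENTRIES 4
typedef struct { bool valid; uint32_t addr, data; } WBEntry;
static WBEntry wb[WB_ENTRIES];

/* On a read miss: true + value if the address matches a buffered write. */
bool wb_forward(uint32_t addr, uint32_t *data) {
    for (int i = 0; i < WB_ENTRIES; i++)
        if (wb[i].valid && wb[i].addr == addr) {
            *data = wb[i].data;           /* return value from buffer   */
            return true;
        }
    return false;                         /* no match: read goes ahead  */
}

int main(void) {
    wb[0] = (WBEntry){true, 0x100, 42};   /* one pending write          */
    uint32_t v = 0;
    if (wb_forward(0x100, &v)) printf("forwarded %u\n", v);     /* 42   */
    if (!wb_forward(0x200, &v)) printf("miss bypasses writes\n");
    return 0;
}
```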

Serial versus Parallel Cache and Memory Access

Let α be the HIT RATIO, the fraction of references that hit in the cache; 1 − α is the MISS RATIO, the remaining references.

Serial search (processor → cache, then main memory on a miss):
Average access time = t_cache + (1 − α) · t_mem

Parallel search (cache and main memory probed simultaneously):
Average access time = α · t_cache + (1 − α) · t_mem

The savings are usually small, since t_mem >> t_cache and the hit ratio α is high. Parallel access also requires high bandwidth on the memory path, and the complexity of handling parallel paths can slow t_cache.
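Plugging in hypothetical numbers (t_cache = 1 cycle, t_mem = 50 cycles, α = 0.95) shows how small the savings are:

```c
#include <stdio.h>

int main(void) {
    double alpha = 0.95, t_cache = 1.0, t_mem = 50.0;
    double serial   = t_cache + (1 - alpha) * t_mem;         /* 3.50 */
    double parallel = alpha * t_cache + (1 - alpha) * t_mem; /* 3.45 */
    printf("serial = %.2f cycles, parallel = %.2f cycles\n",
           serial, parallel);
    return 0;
}
```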

Block-level Optimizations

Tags are too large, i.e., too much overhead
– Simple solution: larger blocks, but the miss penalty could be large

Sub-block placement (a.k.a. sector cache)
– A valid bit is added to units smaller than the full block, called sub-blocks
– Only read a sub-block on a miss
– If a tag matches, is the word in the cache?

Set-Associative RAM-Tag Cache

[Figure: address split into Tag/Index/Offset; the index reads the tag, status, and data RAMs of every way in parallel, with one comparator (=?) per way selecting the hit data.]

Not energy-efficient: a tag and data word are read from every way.

Two-phase approach: first read the tags, then read data only from the selected way.
– More energy-efficient
– Doubles the latency in L1
– OK for L2 and above – why?

Highly-Associative CAM-Tag Caches

For high associativity (e.g., 32-way), use content-addressable memory (CAM) for the tags (ARM, Intel XScale).

[Figure: the address splits into tag, set index, and offset; each set is a column of CAM tag entries (Tag =?) paired with data blocks. The set index enables only one set, and only the hitting entry's data block is accessed, producing Hit? and Data.]

Overhead: a tag+comparator bit takes 2-4x the area of a plain RAM-tag bit.
Only one set is enabled, and only the hit data is accessed – saves energy.

Victim Caches (HP 7200)

[Figure: CPU and register file with an L1 data cache and unified L2 cache; a 4-block fully associative victim cache sits beside the L1, receiving data evicted from L1 and supplying hit data back on an L1 miss.]

A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines:
– First look up in the direct-mapped cache
– If miss, look in the victim cache
– If hit in the victim cache, swap the hit line with the line now evicted from L1
– If miss in the victim cache too: L1 victim → VC, VC victim → ? (sketched below)

Result: the fast hit time of direct-mapped, but with reduced conflict misses.
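A sketch of this protocol with a direct-mapped L1 and a 4-entry FIFO victim cache; only tags are modeled, and the sizes are illustrative rather than the HP 7200's:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define K 8                               /* 256 direct-mapped L1 lines */
#define B 5                               /* 32-byte blocks             */
#define NL1 (1u << K)
#define NVC 4                             /* 4-block victim cache       */

typedef struct { bool valid; uint32_t tag; } L1Line;
typedef struct { bool valid; uint32_t blk; } VCLine; /* full block no.  */
static L1Line l1[NL1];
static VCLine vc[NVC];
static int vc_next;                       /* FIFO replacement pointer   */

/* 0 = L1 hit, 1 = victim-cache hit (lines swapped), 2 = real miss */
int vc_lookup(uint32_t addr) {
    uint32_t blk = addr >> B, idx = blk & (NL1 - 1), tag = blk >> K;
    if (l1[idx].valid && l1[idx].tag == tag) return 0;
    uint32_t evicted = (l1[idx].tag << K) | idx;  /* L1 victim's block  */
    bool had = l1[idx].valid;
    for (int i = 0; i < NVC; i++)
        if (vc[i].valid && vc[i].blk == blk) {    /* hit in VC: swap    */
            if (had) vc[i].blk = evicted;
            else     vc[i].valid = false;
            l1[idx] = (L1Line){true, tag};
            return 1;
        }
    if (had) {                                    /* L1 victim -> VC;   */
        vc[vc_next] = (VCLine){true, evicted};    /* old VC line is the */
        vc_next = (vc_next + 1) % NVC;            /* one discarded      */
    }
    l1[idx] = (L1Line){true, tag};
    return 2;
}

int main(void) {
    printf("%d ",  vc_lookup(0x0000));   /* 2: cold miss                */
    printf("%d ",  vc_lookup(0x2000));   /* 2: conflict, 0x0000 -> VC   */
    printf("%d\n", vc_lookup(0x0000));   /* 1: hit in victim cache      */
    return 0;
}
```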

Way-Predicting Caches (MIPS R10000 L2 cache)

Use the processor address to index into a way-prediction table, then look in the predicted way at the given index:
– HIT in the predicted way: return a copy of the data from the cache
– MISS in the predicted way: look in the other way
» HIT: SLOW HIT (change the entry in the prediction table)
» MISS: read the block of data from the next level of cache

(A sketch of this flow follows.)
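A sketch of the flow above for a hypothetical 2-way cache with a one-entry-per-set prediction table; the return codes distinguish fast hits, slow hits, and misses:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define K 8
#define B 5
#define NSETS (1u << K)

typedef struct { bool valid; uint32_t tag; } Way;
static Way ways[NSETS][2];
static uint8_t pred[NSETS];               /* predicted way per set      */

/* 0 = fast hit, 1 = slow hit (other way), 2 = miss to next level */
int wp_access(uint32_t addr) {
    uint32_t idx = (addr >> B) & (NSETS - 1), tag = addr >> (K + B);
    int p = pred[idx];
    if (ways[idx][p].valid && ways[idx][p].tag == tag)
        return 0;                         /* hit in predicted way       */
    int o = 1 - p;                        /* look in the other way      */
    if (ways[idx][o].valid && ways[idx][o].tag == tag) {
        pred[idx] = o;                    /* change prediction entry    */
        return 1;                         /* SLOW HIT                   */
    }
    ways[idx][o] = (Way){true, tag};      /* refill; predict this way   */
    pred[idx] = o;
    return 2;
}

int main(void) {
    printf("%d ",  wp_access(0x0000));    /* 2: miss                    */
    printf("%d ",  wp_access(0x0000));    /* 0: fast hit                */
    printf("%d ",  wp_access(0x2000));    /* 2: fills the other way     */
    printf("%d\n", wp_access(0x0000));    /* 1: slow hit, fix predictor */
    return 0;
}
```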

Multilevel Caches

A memory cannot be large and fast, so cache sizes increase at each level: CPU → L1 → L2 → DRAM.

– Local miss rate = misses in this cache / accesses to this cache
– Global miss rate = misses in this cache / CPU memory accesses
– Misses per instruction = misses in this cache / number of instructions

(A worked example follows.)
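The three ratios computed from hypothetical event counts (1000 CPU memory accesses, 40 L1 misses, 10 L2 misses); L1's local and global rates coincide because every CPU access reaches L1:

```c
#include <stdio.h>

int main(void) {
    double accesses = 1000, l1_misses = 40, l2_misses = 10;
    printf("L1 local = global miss rate: %.3f\n", l1_misses / accesses);
    printf("L2 local miss rate:          %.3f\n", l2_misses / l1_misses);
    printf("L2 global miss rate:         %.3f\n", l2_misses / accesses);
    return 0;
}
```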

Inclusion Policy

Inclusive multilevel cache:
– The inner cache holds copies of data in the outer cache
– External accesses need only check the outer cache
– The most common case

Exclusive multilevel caches:
– The inner cache may hold data not in the outer cache
– Swap lines between the inner and outer caches on a miss
– Used in the AMD Athlon, with a 64 KB primary and 256 KB secondary cache

Why choose one type or the other?

A Typical Memory Hierarchy c.2008

[Figure: CPU with a multiported register file (part of the CPU), split instruction & data primary caches (on-chip SRAM), a large unified secondary L2 cache (on-chip SRAM), and multiple interleaved memory banks (off-chip DRAM).]

Itanium-2 On-Chip Caches (Intel/HP, 2002)

– Level 1: 16 KB, 4-way set-associative, 64 B line, quad-port (2 load + 2 store), single-cycle latency
– Level 2: 256 KB, 4-way set-associative, 128 B line, quad-port (4 load or 4 store), five-cycle latency
– Level 3: 3 MB, 12-way set-associative, 128 B line, single 32 B port, twelve-cycle latency

Workstation Memory System (Apple PowerMac G5, 2003)

Dual 2 GHz processors, each with:
– 64 KB I-cache, direct-mapped; 32 KB D-cache, 2-way
– 512 KB L2 unified cache, 8-way
– All 128 B lines
– 1 GHz, 2x32-bit bus to the North Bridge chip, 16 GB/s

North Bridge chip connects to:
– Up to 8 GB DDR SDRAM, 400 MHz, 128-bit bus, 6.4 GB/s
– AGP graphics card, 533 MHz, 32-bit bus, 2.1 GB/s
– PCI-X expansion, 133 MHz, 64-bit bus, 1 GB/s

Acknowledgements

These slides contain material developed and copyrighted by:
– Arvind (MIT)
– Krste Asanovic (MIT/UCB)
– Joel Emer (Intel/MIT)
– James Hoe (CMU)
– John Kubiatowicz (UCB)
– David Patterson (UCB)

MIT material derived from course 6.823. UCB material derived from course CS252.