1 1999 ©UCB CS 161 Ch 7: Memory Hierarchy LECTURE 16 Instructor: L.N. Bhuyan www.cs.ucr.edu/~bhuyan
2 Cache Access Time
With load bypass: Average Access Time = Hit Time × (1 − Miss Rate) + Miss Penalty × Miss Rate
Without load bypass: Average Memory Access Time = Hit Time + Miss Rate × Miss Penalty
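The two formulas above can be sketched directly as functions; this is a minimal Python sketch with hypothetical numbers (1-cycle hit, 5% miss rate, 50-cycle penalty), not values from the slide.

```python
def amat_with_bypass(hit_time, miss_rate, miss_penalty):
    # Hits and misses are disjoint: a bypassed miss does not pay the hit time.
    return hit_time * (1 - miss_rate) + miss_penalty * miss_rate

def amat_without_bypass(hit_time, miss_rate, miss_penalty):
    # Every access pays the hit time; a miss adds the penalty on top.
    return hit_time + miss_rate * miss_penalty

# Hypothetical example: 1-cycle hit, 5% miss rate, 50-cycle miss penalty
print(amat_with_bypass(1, 0.05, 50))     # 0.95 + 2.5 = 3.45 cycles
print(amat_without_bypass(1, 0.05, 50))  # 1 + 2.5 = 3.5 cycles
```

Note that the two forms differ only by the hit-time term a bypassed miss avoids.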
3 Unified vs Split Caches
Unified cache: lower miss ratio, because the full capacity is available to either instructions or data; lower bandwidth, because instructions and data cannot be read at the same time through the single port.
Split cache: higher miss ratio, because instructions or data may run out of space even when space is free in the other cache; higher bandwidth, because an instruction and a data word can be accessed at the same time.
Example: 16 KB I-cache and 16 KB D-cache: inst miss rate = 0.64%, data miss rate = 6.47%; 32 KB unified: aggregate miss rate = 1.99%
°Which is better (ignoring the L2 cache)? Assume 33% data ops, so 75% of accesses are instruction fetches (1.0/1.33); hit time = 1, miss penalty = 50. Note that a data hit incurs 1 stall cycle in the unified cache (only one port).
AMAT_Harvard = 75% × (1 + 0.64% × 50) + 25% × (1 + 6.47% × 50) = 2.05
AMAT_Unified = 75% × (1 + 1.99% × 50) + 25% × (1 + 1 + 1.99% × 50) = 2.24
[Figures: split organization (Proc → I-Cache-1 and D-Cache-1 → Unified Cache-2) vs unified organization (Proc → Unified Cache-1 → Unified Cache-2)]
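The worked example above can be reproduced numerically; this is a minimal Python sketch using exactly the miss rates, penalty, and access mix from the slide.

```python
def amat(hit_time, miss_rate, miss_penalty):
    # Without load bypass: every access pays the hit time.
    return hit_time + miss_rate * miss_penalty

instr_frac, data_frac = 0.75, 0.25   # 75% instruction fetches, 25% data ops
miss_penalty = 50

# Split (Harvard): separate 16 KB I- and D-caches, both hit in 1 cycle
split = (instr_frac * amat(1, 0.0064, miss_penalty)
         + data_frac * amat(1, 0.0647, miss_penalty))

# Unified 32 KB: a data access stalls 1 extra cycle behind the instruction fetch
unified = (instr_frac * amat(1, 0.0199, miss_penalty)
           + data_frac * amat(1 + 1, 0.0199, miss_penalty))

print(split, unified)   # split ≈ 2.05, unified ≈ 2.24 cycles
```

The split cache wins here despite its worse aggregate miss ratio, because the unified cache's single port charges every data access an extra stall cycle.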
4 Static RAM (SRAM)
°Six transistors in a cross-coupled fashion
Provides regular AND inverted outputs
Implemented in a CMOS process
[Figure: single-port 6-T SRAM cell]
5 Dynamic Random Access Memory (DRAM)
°DRAM organization is similar to SRAM, except that each bit of DRAM is built from a pass transistor and a capacitor, shown on the next slide.
°Fewer transistors per bit give higher density, but the slow discharge through the capacitor makes access slow.
°The capacitor needs to be recharged, or refreshed, giving rise to a high cycle time. Q: What is the difference between access time and cycle time? (Access time is the delay from starting a read to the data being available; cycle time is the minimum time between the starts of two successive accesses, which for DRAM is longer because of the recharge.)
°Uses a two-level decoder, as shown later. Note that 2048 bits are accessed per row, but only one bit is used.
6 Dynamic RAM
°SRAM cells exhibit high speed but poor density
°DRAM: simple transistor/capacitor pairs in high-density form
[Figure: DRAM cell — word line gating a pass transistor onto the bit line, storage capacitor C, sense amp]
7 DRAM Logical Organization (4 Mbit)
°Square root of bits per RAS/CAS: the 4M array is organized as 2,048 × 2,048, with an 11-bit address A0…A10 shared by the row decoder and the column decoder (sense amps & I/O between array and column decoder; data in/out on D and Q)
Access time of DRAM = row access time + column access time + refreshing
[Figure: memory array with row decoder, column decoder, sense amps & I/O, word line, storage cell]
8 Virtual Memory
°Idea 1: Many programs share DRAM memory, so that context switches can occur
°Idea 2: Allow a program to be written without memory constraints – the program can exceed the size of main memory
°Idea 3: Relocation: parts of the program can be placed at different locations in memory instead of one big chunk
°Virtual Memory: (1) DRAM memory holds many programs running at the same time (processes); (2) DRAM memory is used as a kind of “cache” for disk
9 Disk Technology in Brief
°Disk is mechanical memory
°Disk access time = seek time + rotational delay + transfer time, usually measured in milliseconds
°A “miss” to disk is extremely expensive: typical access time = millions of clock cycles
°Rotation speed: 3600 – 7200 RPM
[Figure: disk platter with tracks and R/W arm]
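The access-time formula above can be made concrete; this is a minimal Python sketch with hypothetical parameters (8 ms seek, 7200 RPM, 0.5 ms transfer), using the standard assumption that the average rotational delay is half a revolution.

```python
def disk_access_ms(seek_ms, rpm, transfer_ms):
    # Average rotational delay: half a revolution at the given spindle speed.
    ms_per_revolution = 60_000 / rpm
    return seek_ms + ms_per_revolution / 2 + transfer_ms

# Hypothetical 7200 RPM disk: 8 ms seek, 0.5 ms transfer
t = disk_access_ms(8, 7200, 0.5)
print(t)  # 8 + 4.1667 + 0.5 ≈ 12.67 ms
```

At ~12.7 ms, a 1 GHz processor would wait on the order of 10 million clock cycles, which is why the slide calls a disk "miss" extremely expensive.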
10 Virtual Memory Has Its Own Terminology
°Each process has its own private “virtual address space” (e.g., 2^32 bytes); the CPU actually generates “virtual addresses”
°Each computer has a “physical address space” (e.g., 128 MB of DRAM), also called “real memory”
°Address translation: mapping virtual addresses to physical addresses
Allows multiple programs to use (different chunks of) physical memory at the same time
Also allows some chunks of virtual memory to be kept on disk, not in main memory (to exploit the memory hierarchy)
11 Mapping Virtual Memory to Physical Memory
°Divide memory into equal-sized “chunks” (say, 4 KB each)
°Any chunk of virtual memory can be assigned to any chunk of physical memory (“page”)
[Figure: a single process’s virtual address space (code, static, heap, stack, starting at 0) mapped page by page into a 64 MB physical memory]
12 Handling Page Faults
°A page fault is like a cache miss: the page must be found in a lower level of the hierarchy (disk)
°If the valid bit is zero, the physical page number field points to a page on disk
°When the OS starts a new process, it creates space on disk for all the pages of the process, sets all valid bits in the page table to zero, and makes all physical page numbers point to disk
This is called demand paging – pages of the process are loaded from disk only as needed
13 Comparing the Two Levels of the Hierarchy

Cache                                        Virtual Memory
Block or line                                Page
Miss                                         Page fault
Block size: 32–64 B                          Page size: 4 KB–16 KB
Placement: direct mapped,                    Placement: fully associative
  N-way set associative
Replacement: LRU or random                   Replacement: LRU approximation
Write-through or write-back                  Write-back
Managed by: hardware                         Managed by: hardware + software (OS)
14 How to Perform Address Translation?
°VM divides memory into equal-sized pages
°Address translation relocates entire pages; offsets within the pages do not change
If the page size is a power of two, the virtual address separates into two fields (like the cache index and offset fields):
Virtual Page Number | Page Offset   ← virtual address
15 Mapping a Virtual to a Physical Address
Virtual address: bits 31–12 = virtual page number, bits 11–0 = page offset
Physical address: bits 29–12 = physical page number, bits 11–0 = page offset (unchanged)
Translation maps the virtual page number to a physical page number; the 12-bit offset implies a 4 KB page size.
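Because the page size is a power of two, the split into page number and offset is just bit-field extraction; this is a minimal Python sketch assuming 4 KB pages and a hypothetical 32-bit virtual address.

```python
PAGE_SIZE = 4096                         # 4 KB pages -> 12 offset bits
OFFSET_BITS = PAGE_SIZE.bit_length() - 1  # log2 of a power of two

def split_va(va):
    # Shift off the offset bits to get the VPN; mask them to get the offset.
    vpn = va >> OFFSET_BITS
    offset = va & (PAGE_SIZE - 1)
    return vpn, offset

vpn, off = split_va(0x00402ABC)          # hypothetical virtual address
print(hex(vpn), hex(off))                # 0x402 0xabc
```

Only the page-number field is translated; the offset bits pass through to the physical address unchanged.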
16 Address Translation
°We want fully associative page placement
°How do we locate the physical page?
°Search is impractical (too many pages)
°A page table is a data structure that contains the mapping of virtual pages to physical pages
There are several different ways, all up to the operating system, to keep this data around
°Each process running in the system has its own page table
17 Address Translation: The Page Table
The virtual address (VA) = virtual page number + offset. A page table register points to the page table, which is located in physical memory; the virtual page number indexes into the table. Each entry holds a valid bit, access rights, and a physical page number (pointing to disk when invalid). Access rights: none, read only, read/write, executable. The physical page number combined with the offset forms the physical memory address (PA).
[Figure: page table indexed by VPN; entries (V, A.R., P.P.N.); invalid entries point to disk]
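The lookup described above can be sketched as a dictionary-based page table; this is a minimal Python sketch with hypothetical VPN/PPN values, assuming 4 KB pages.

```python
OFFSET_BITS = 12  # 4 KB pages

# Hypothetical page table: each entry has a valid bit, access rights,
# and a physical page number (None here stands in for a disk location).
page_table = {
    0x402: {"valid": True,  "rights": "RW", "ppn": 0x1A3},
    0x403: {"valid": False, "rights": "RW", "ppn": None},   # paged out
}

def translate(va):
    vpn = va >> OFFSET_BITS
    offset = va & ((1 << OFFSET_BITS) - 1)
    entry = page_table.get(vpn)
    if entry is None or not entry["valid"]:
        raise RuntimeError("page fault")   # OS would fetch the page from disk
    return (entry["ppn"] << OFFSET_BITS) | offset

print(hex(translate(0x00402ABC)))  # 0x1a3abc
```

A real page table is an array indexed by VPN rather than a hash map, but the valid-bit check and the PPN-plus-offset concatenation work the same way.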
18 Optimizing for Space
°The page table is too big! 4 GB virtual address space ÷ 4 KB pages = 2^20 (~1 million) page table entries – 4 MB just for the page table of a single process!
°There is a variety of solutions that trade page table size for slower performance
°Use a limit register to restrict page table size and let it grow with more pages; multilevel page tables; paging the page tables; etc. (Take an O/S class to learn more)
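The arithmetic above checks out directly; this is a short Python sketch assuming the conventional 4 bytes per page table entry (the slide's 4 MB figure implies this entry size).

```python
va_bits = 32
page_size = 4 * 1024          # 4 KB pages
pte_bytes = 4                 # assumed: one 4-byte entry per virtual page

entries = 2**va_bits // page_size       # 2^20 entries (~1 million)
table_bytes = entries * pte_bytes
print(entries, table_bytes // 2**20)    # 1048576 entries, 4 MB per process
```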
19 How to Translate Fast?
°Problem: virtual memory requires two memory accesses!
one to translate the virtual address into a physical address (page table lookup)
one to transfer the actual data (cache hit)
But the page table is in physical memory!
°Observation: since there is locality in pages of data, there must be locality in the virtual addresses of those pages!
°Why not create a cache of virtual-to-physical address translations to make translation fast? (smaller is faster)
°For historical reasons, such a “page table cache” is called a Translation Lookaside Buffer, or TLB
20 Typical TLB Format
Virtual Page Nbr | Physical Page Nbr | Valid | Ref | Dirty | Access Rights
   (“tag”)            (“data”)
°The TLB is just a cache of the page table mappings
°Dirty: since we use write-back, we need to know whether or not to write the page to disk when it is replaced
°Ref: used to calculate LRU on replacement
°TLB access time is comparable to cache access time (much less than main memory access time)
21 Translation Look-Aside Buffers
°The TLB is usually small, typically 32–4,096 entries
°Like any other cache, the TLB can be fully associative, set associative, or direct mapped
[Figure: the processor issues a virtual address to the TLB; a TLB hit yields the physical address for the cache and main memory; a TLB miss consults the page table; a page fault or protection violation traps to the OS fault handler, which may go to disk]
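The hit/miss flow in the figure above can be sketched as a tiny fully associative TLB in front of a page table; this is a minimal Python sketch with hypothetical mappings, a 4-entry capacity, and a crude replacement policy (evicting an arbitrary entry, standing in for LRU or random).

```python
OFFSET_BITS = 12
page_table = {0x402: 0x1A3, 0x7FF: 0x02C}   # hypothetical VPN -> PPN, all valid
tlb = {}                                     # small cache of translations
TLB_SIZE = 4

def translate(va):
    vpn = va >> OFFSET_BITS
    if vpn in tlb:                           # TLB hit: no page-table access
        ppn = tlb[vpn]
    else:                                    # TLB miss: walk the page table
        ppn = page_table[vpn]                # a KeyError here would be a page fault
        if len(tlb) >= TLB_SIZE:             # full: evict some entry
            tlb.pop(next(iter(tlb)))
        tlb[vpn] = ppn                       # cache the translation
    return (ppn << OFFSET_BITS) | (va & ((1 << OFFSET_BITS) - 1))

translate(0x00402111)                        # first access: miss, fills the TLB
print(hex(translate(0x00402ABC)))            # second access to same page: hit
```

Once a page's translation is cached, every further access to that page skips the extra page-table memory access, which is the whole point of the TLB.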
22 DECStation 3100 / MIPS R2000
[Figure: a 32-bit virtual address (20-bit virtual page number, 12-bit page offset) is translated by a 64-entry fully associative TLB (valid, dirty, tag, and physical page number per entry); the resulting physical address indexes a 16K-entry direct-mapped cache (valid, 16-bit tag, 32-bit data per line; 14-bit cache index, 2-bit byte offset; a tag match signals a cache hit)]
23 Real Stuff: Pentium Pro Memory Hierarchy
°Address size: 32 bits (VA, PA)
°VM page size: 4 KB, 4 MB
°TLB organization: separate i and d TLBs (i-TLB: 32 entries, d-TLB: 64 entries), 4-way set associative, LRU approximated, hardware handles misses
°L1 cache: 8 KB, separate i and d, 4-way set associative, LRU approximated, 32-byte blocks, write-back
°L2 cache: 256 or 512 KB