CS252/Culler Lec 5.1 2/5/02 CS203A Computer Architecture Lecture 15 Cache and Memory Technology and Virtual Memory
Main Memory Background

Random Access Memory (vs. Serial Access Memory) comes in different flavors at different levels:
- Physical makeup (CMOS, DRAM)
- Low-level architectures (FPM, EDO, BEDO, SDRAM)

Cache uses SRAM: Static Random Access Memory
- No refresh needed (6 transistors/bit vs. 1 transistor/bit)
- Size ratio DRAM/SRAM: 4-8; cost and cycle-time ratio SRAM/DRAM: 8-16

Main memory is DRAM: Dynamic Random Access Memory
- "Dynamic" because it must be refreshed periodically
- Addresses are divided into 2 halves (memory as a 2D matrix):
  - RAS, or Row Access Strobe
  - CAS, or Column Access Strobe
Static RAM (SRAM)

- Six transistors in cross-coupled fashion
- Provides regular AND inverted outputs
- Implemented in a CMOS process

[Figure: single-port 6-T SRAM cell]
Dynamic RAM

- SRAM cells exhibit high speed but poor density
- DRAM: simple transistor/capacitor pairs in high-density form

[Figure: DRAM cell — word line, bit line, storage capacitor C, sense amp]
DRAM Logical Organization (4 Mbit)

- Square root of bits per RAS/CAS: the 4M-bit array is organized as 2,048 x 2,048, so 11 multiplexed address bits (A0..A10) select the row and then the column
- Access time of DRAM = row access time + column access time + refresh overhead

[Figure: 2,048 x 2,048 memory array with row decoder, column decoder, sense amps & I/O, word line, and storage cell]
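The access-time formula above can be made concrete with a small sketch. The timing values below are invented for illustration only; real DRAM parts specify their own row, column, and refresh timings.

```python
# Hypothetical timing values (ns), chosen only to illustrate the formula.
ROW_ACCESS_NS = 25       # RAS phase: decode row, read it into the sense amps
COLUMN_ACCESS_NS = 15    # CAS phase: select the column from the open row
REFRESH_OVERHEAD_NS = 5  # amortized cost of periodic refresh cycles

def dram_access_ns(row_ns=ROW_ACCESS_NS,
                   col_ns=COLUMN_ACCESS_NS,
                   refresh_ns=REFRESH_OVERHEAD_NS):
    """Access time of DRAM = row access + column access + refresh overhead."""
    return row_ns + col_ns + refresh_ns

print(dram_access_ns())  # 45
```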
Other Types of DRAM

- Synchronous DRAM (SDRAM): transfers a burst of data given a starting address and a burst length — well suited to transferring a block of data from main memory to cache
- Page Mode DRAM: all bits on the same row (spatial locality)
  - No need to wait for the wordline to recharge
  - Toggle CAS with a new column address
- Extended Data Out (EDO): overlaps data output with the CAS toggle
  - Later sibling: Burst EDO (the CAS toggle is used to get the next address)
- Rambus DRAM (RDRAM): pipelined control
Main Memory Organizations

Since DRAM access time >> bus transfer time, memory can be organized to deliver more than one word per access:
- One-word-wide memory organization: CPU, cache, bus, memory
- Wide memory organization: wide memory and bus, with a multiplexor between cache and CPU
- Interleaved memory organization: several one-word-wide memory banks (banks 0-3) sharing the bus

[Figure: the three organizations side by side]
Memory Interleaving

Interleaved memory is more flexible than wide-access memory in that it can handle multiple independent accesses at once.
Memory Access Time Example

Assume it takes 1 cycle to send the address, 15 cycles for each DRAM access, and 1 cycle to send a word of data. Assume a cache block of 4 words.
- One-word-wide DRAM: miss penalty = 1 + 4x15 + 4x1 = 65 cycles
- Main memory and bus 2 words wide: miss penalty = 1 + 2x15 + 2x1 = 33 cycles
- 4-word-wide memory: miss penalty is 17 cycles, but this is expensive due to the wide bus and control circuits
- Interleaved memory with 4 banks and the same one-word bus: miss penalty = 1 + 1x15 + 4x1 = 20 cycles; the memory controller must supply consecutive addresses to different memory banks

Interleaving is universally adopted in high-performance computers.
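The four miss penalties above can be reproduced with a short sketch. The function name and parameters are invented for illustration; it assumes a one-word bus in the interleaved case, as in the example.

```python
def miss_penalty(block_words, width_words, banks,
                 addr_cycles=1, dram_cycles=15, xfer_cycles=1):
    """Cycles to fetch one cache block from DRAM (illustrative model).

    width_words: memory/bus width in words; banks: number of interleaved banks.
    """
    if banks > 1:
        # Interleaved: the DRAM accesses overlap across banks, then each
        # word is transferred over the one-word bus in turn.
        accesses = -(-block_words // banks)   # ceiling division
        return addr_cycles + accesses * dram_cycles + block_words * xfer_cycles
    # Non-interleaved: each transfer is a full DRAM access plus a bus transfer.
    transfers = -(-block_words // width_words)
    return addr_cycles + transfers * (dram_cycles + xfer_cycles)

print(miss_penalty(4, 1, 1))  # 65: one-word-wide memory
print(miss_penalty(4, 2, 1))  # 33: two-word-wide memory
print(miss_penalty(4, 4, 1))  # 17: four-word-wide memory
print(miss_penalty(4, 1, 4))  # 20: four-way interleaved
```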
Virtual Memory

- Idea 1: many programs share DRAM memory so that context switches can occur
- Idea 2: allow a program to be written without memory constraints — the program can exceed the size of main memory
- Idea 3: relocation — parts of the program can be placed at different locations in memory instead of one big chunk

Virtual memory: (1) DRAM memory holds many programs running at the same time (processes); (2) DRAM memory is used as a kind of "cache" for disk.
Memory Hierarchy: The Big Picture

[Figure: data movement in a memory hierarchy]
Virtual Memory Has Its Own Terminology

- Each process has its own private "virtual address space" (e.g., 2^32 bytes); the CPU actually generates "virtual addresses"
- Each computer has a "physical address space" (e.g., 128 megabytes of DRAM), also called "real memory"
- Address translation: mapping virtual addresses to physical addresses
  - Allows multiple programs to use (different chunks of physical) memory at the same time
  - Also allows some chunks of virtual memory to be kept on disk, not in main memory (to exploit the memory hierarchy)
Mapping Virtual Memory to Physical Memory

- Divide memory into equal-sized "chunks" (say, 4 KB each)
- Any chunk of virtual memory can be assigned to any chunk of physical memory (a "page")

[Figure: a single process's virtual memory (code, static, heap, stack) mapped onto 64 MB of physical memory]
Handling Page Faults

- A page fault is like a cache miss: the page must be found in a lower level of the hierarchy
- If the valid bit is zero, the physical page number points to a page on disk
- When the OS starts a new process, it creates space on disk for all the pages of the process, sets all valid bits in the page table to zero, and makes all physical page numbers point to disk
  - This is called demand paging: pages of the process are loaded from disk only as needed
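Demand paging as described above can be sketched with a toy page table. All names and sizes here are invented for illustration; a real OS would actually read the page from disk into the chosen frame.

```python
# Toy page table: each entry is [valid, location]. When valid is False,
# location names the disk block holding the page (demand paging: every
# page starts out on disk with its valid bit cleared).
page_table = {vpn: [False, f"disk:{vpn}"] for vpn in range(8)}
free_frames = [0, 1, 2, 3]  # physical frames not yet in use

def access(vpn):
    """Return the physical frame for vpn, faulting the page in if needed."""
    valid, loc = page_table[vpn]
    if not valid:                    # page fault: like a cache miss,
        frame = free_frames.pop(0)   # find the page in the lower level
        # (a real OS would copy the page from `loc` on disk into `frame`)
        page_table[vpn] = [True, frame]
        return frame
    return loc                       # page already resident

access(5)              # first touch: page fault, loaded on demand
print(page_table[5])   # [True, 0]
```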
Comparing the 2 Levels of Hierarchy

                  Cache                                  Virtual Memory
  Unit:           Block or line                          Page
  Miss:           Miss                                   Page fault
  Size:           Block size: 32-64 B                    Page size: 4K-16KB
  Placement:      Direct mapped, N-way set associative   Fully associative
  Replacement:    LRU or random                          LRU approximation
  Write policy:   Write-through or write-back            Write-back
  How managed:    Hardware                               Software (operating system)
How to Perform Address Translation?

- VM divides memory into equal-sized pages
- Address translation relocates entire pages; offsets within the pages do not change
- If the page size is made a power of two, the virtual address separates into two fields, like the cache index and offset fields:

  virtual address = | virtual page number | page offset |
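Because the page size is a power of two, the split is just a shift and a mask. A minimal sketch, assuming 4 KB pages (the function name is invented for illustration):

```python
PAGE_SIZE = 4096                          # 4 KB pages, a power of two
OFFSET_BITS = PAGE_SIZE.bit_length() - 1  # 12 offset bits

def split(virtual_address):
    """Separate a virtual address into (virtual page number, page offset)."""
    vpn = virtual_address >> OFFSET_BITS       # upper bits: which page
    offset = virtual_address & (PAGE_SIZE - 1) # lower bits: where in the page
    return vpn, offset

vpn, offset = split(0x12345678)  # VPN 0x12345, offset 0x678
```

The offset field passes through translation unchanged; only the virtual page number is replaced by a physical page number.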
Address Translation

- We want fully associative page placement — but how do we locate the physical page? Searching is impractical (too many pages)
- A page table is a data structure that contains the mapping of virtual pages to physical pages
  - There are several different ways, all up to the operating system, to keep this data around
- Each process running in the system has its own page table
Address Translation: Page Table

- Virtual address (VA) = virtual page number + offset
- The page table register points to the page table, which is located in physical memory; the virtual page number indexes into it
- Each entry holds a valid bit, access rights, and a physical page number (pointing to disk when the page is not resident)
- Access rights: none, read only, read/write, executable

[Figure: virtual address indexing the page table to form the physical memory address (PA)]
Page Tables and Address Translation

[Figure: the role of the page table in the virtual-to-physical address translation process]
Protection and Sharing in Virtual Memory

[Figure: virtual memory as a facilitator of sharing and memory protection]
Optimizing for Space

The page table is too big!
- 4 GB virtual address space / 4 KB page = 2^20 (~1 million) page table entries
- 4 MB just for the page table of a single process!

There is a variety of solutions that trade page table size for slower performance when a miss occurs in the TLB:
- Use a limit register to restrict page table size and let it grow with more pages
- Multilevel page tables, paging the page tables, etc. (take an O/S class to learn more)
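The arithmetic behind "4 MB per process" can be checked directly. The 4-byte entry size is an assumption of this sketch (a typical 32-bit PTE), as are the variable names:

```python
VA_BITS = 32    # 4 GB virtual address space
PAGE_BITS = 12  # 4 KB pages
PTE_BYTES = 4   # assumed size of one page table entry

entries = 2 ** (VA_BITS - PAGE_BITS)  # 2^20, about 1 million entries
table_bytes = entries * PTE_BYTES     # 4 MB for one process's page table

print(entries)                # 1048576
print(table_bytes // 2**20)   # 4 (MB)
```

With many processes resident at once, this flat-table cost multiplies, which is what motivates multilevel and paged page tables.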
How to Translate Fast?

Problem: virtual memory requires two memory accesses!
- One to translate the virtual address into a physical address (page table lookup)
- One to transfer the actual data (cache hit)
- But the page table is in physical memory => 2 main memory accesses!

Observation: since there is locality in pages of data, there must be locality in the virtual addresses of those pages! Why not create a cache of virtual-to-physical address translations to make translation fast? (Smaller is faster.) For historical reasons, such a "page table cache" is called a Translation Lookaside Buffer, or TLB.
Typical TLB Format

  | Virtual Page Nbr ("tag") | Physical Page Nbr ("data") | Valid | Ref | Dirty | Access Rights |

- The TLB is just a cache of the page table mappings
- Dirty: since write-back is used, we need to know whether or not to write the page to disk when it is replaced
- Ref: used to calculate LRU on replacement
- TLB access time is comparable to cache access time (much less than main memory access time)
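The "cache of page table mappings" idea can be sketched in a few lines. This is a toy, fully associative TLB with exact LRU replacement (real TLBs approximate LRU with the Ref bit); the class and names are invented for illustration.

```python
from collections import OrderedDict

class TLB:
    """Toy fully associative TLB with LRU replacement (illustrative only)."""
    def __init__(self, entries=64):
        self.entries = entries
        self.map = OrderedDict()  # virtual page number -> physical page number

    def lookup(self, vpn, page_table):
        if vpn in self.map:               # TLB hit: fast path, no memory access
            self.map.move_to_end(vpn)     # mark as most recently used
            return self.map[vpn]
        ppn = page_table[vpn]             # TLB miss: walk the page table
        if len(self.map) >= self.entries:
            self.map.popitem(last=False)  # evict the least recently used entry
        self.map[vpn] = ppn               # cache the translation
        return ppn

tlb = TLB(entries=2)
pt = {0: 7, 1: 3, 2: 9}                        # toy page table
tlb.lookup(0, pt); tlb.lookup(1, pt); tlb.lookup(2, pt)  # third fill evicts VPN 0
print(list(tlb.map))  # [1, 2]
```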
Translation Look-Aside Buffers

- The TLB is usually small, typically 32-4,096 entries
- Like any other cache, the TLB can be fully associative, set associative, or direct mapped

[Figure: processor -> TLB (hit/miss) -> cache -> main memory; on a TLB miss the page table is consulted; a page fault or protection violation traps to the OS fault handler, possibly going to disk]
Translation Lookaside Buffer

[Figure: virtual-to-physical address translation by a TLB, and how the resulting physical address is used to access the cache memory]
CS252/Culler Lec /5/02 Valid Tag Data Page offset Page offset Virtual page number Physical page numberValid Cache index 32 Data Cache hit 2 Byte offset Dirty Tag TLB hit Physical page number Physical address tag DECStation 3100/ MIPS R2000 Virtual Address TLB Cache 64 entries, fully associative Physical Address 16K entries, direct mapped
Test 2, Dec 5 (Tuesday)

Topics:
1. Compiler + VLIW: Sections , Lecture 9 and paper 9a
2. Multithreading and multimedia: Lecture 10
3. Network processors: Lectures 11, 11a, and 12
4. Cache memory: Sections and Lectures 13, 14
5. Memory organization and virtual memory: Sections and Lecture 15

Project II due on Dec 7 in class.