Constructive Computer Architecture
Virtual Memory: From Address Translation to Demand Paging
Arvind
Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology

Modern Virtual Memory Systems
Illusion of a large, private, uniform store.
- Protection & Privacy: each user has one private and one or more shared address spaces (a page table defines each name space).
- Demand Paging: provides the ability to run programs larger than primary memory, and hides differences in machine configurations.
The price of VM is address translation on each memory reference.
[Figure: the OS maps each user's virtual addresses (VA) to physical addresses (PA) through a TLB; primary memory is backed by a swapping store.]

Names for Memory Locations
- Machine language address: as specified in machine code.
- Virtual address: the ISA specifies the translation of a machine-code address into the virtual address of a program variable (sometimes called an effective address).
- Physical address: the operating system specifies the mapping of a virtual address into the name of a physical memory location.
[Figure: machine language address → (ISA) → virtual address → (address mapping) → physical address in Physical Memory (DRAM).]

Paged Memory Systems
- A processor-generated address can be interpreted as a pair <page number, offset>.
- A page table contains the physical address of the base of each page.
- Page tables make it possible to store the pages of a program non-contiguously.
[Figure: the address space of User 1 is mapped through the page table of User 1 onto scattered physical pages.]
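As a concrete illustration of the <page number, offset> interpretation, here is a minimal Python sketch. It assumes 4 KB pages (a 12-bit offset); the page size and the example address are illustrative choices, not values from the slides.

```python
# Minimal sketch (assumed parameters): splitting a virtual address into
# a <page number, offset> pair, with 4 KB pages (12 offset bits).
PAGE_SHIFT = 12                    # log2(4096)
PAGE_SIZE  = 1 << PAGE_SHIFT

def split_va(va):
    vpn    = va >> PAGE_SHIFT      # virtual page number
    offset = va & (PAGE_SIZE - 1)  # byte offset within the page
    return vpn, offset

# Example: 0x00403ABC -> VPN 0x403, offset 0xABC
print(split_va(0x00403ABC))
```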

Private Address Space per User
[Figure: the same virtual address VA1 in User 1 and User 2 is mapped through each user's page table onto different frames of physical memory, alongside OS pages and free frames.]
- Each user has a page table which contains an entry for each user page.
- The system has a "Page" Table that has an entry for each user (not shown).
- A PT Base register points to the page table of the current user.
- The OS resets the PT Base register whenever the user changes; this requires consulting the System Page Table.

Suppose All Pages and Page Tables Reside in Memory
[Figure: the System PT holds a PT Base for each user; the PT Base Register selects the current page table (PT User 1, PT User 2, ...), which is indexed by the VPN of the virtual address <VPN, offset>.]
- Translation: PPA = Mem[PT Base + VPN]; PA = PPA + offset.
- All links shown are physical; no VA-to-PA translation is needed to reach the page tables.
- On a user switch: PT Base Reg := System PT Base + new User ID.
- It requires two DRAM accesses to access one data word or instruction!
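A minimal sketch of this linear-page-table translation, with memory modeled as a Python dict and illustrative addresses (names such as mem and pt_base are assumptions, not from the slides). It makes the two DRAM accesses explicit: one to fetch the page-table entry, one for the data itself.

```python
# Sketch (assumed names/values): a linear page table held in memory itself,
# so every access costs one extra memory read for the PTE.
PAGE_SHIFT = 12
PAGE_MASK  = (1 << PAGE_SHIFT) - 1

mem = {}                         # toy "DRAM" for this example
pt_base = 0x1000                 # physical base of the current user's page table
mem[pt_base + 0x403] = 0x7A000   # PTE for VPN 0x403 -> physical page address

def load(va):
    vpn, offset = va >> PAGE_SHIFT, va & PAGE_MASK
    ppa = mem[pt_base + vpn]     # DRAM access #1: read the page-table entry
    return mem[ppa + offset]     # DRAM access #2: read the data word

mem[0x7A000 + 0xABC] = 42
print(load(0x00403ABC))          # -> 42
```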

VM Implementation Issues
- How to reduce the memory-access overhead of translation?
- What if all the pages can't fit in DRAM?
- What if a user page table can't fit in DRAM?
- What if the System page table can't fit in DRAM?
A good VM design needs to be fast and space efficient.

Page Table Entries and Swap Space
[Figure: the virtual address <VPN, offset> indexes the page table via the PT Base Reg; each entry holds either a PPN for a resident data page or a DPN for a page in the swap space.]
- DRAM has to be backed up by swap space on disk, because all the pages of all the users cannot fit in DRAM.
- A Page Table Entry (PTE) contains:
  - a bit to indicate whether the page exists,
  - a PPN (physical page number) for a memory-resident page,
  - a DPN (disk page number) for a page on the disk,
  - protection and usage bits.
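A minimal sketch of one possible PTE encoding; the field names and widths are illustrative assumptions, not the slides' exact format.

```python
# Sketch (assumed layout): a PTE with an existence bit, a resident bit,
# protection bits, and either a PPN (resident) or a DPN (on disk).
from dataclasses import dataclass

@dataclass
class PTE:
    exists:   bool   # does this virtual page exist at all?
    resident: bool   # True -> number is a PPN in DRAM; False -> a DPN on disk
    readable: bool
    writable: bool
    number:   int    # PPN or DPN, depending on `resident`

page_table = {
    0x403: PTE(exists=True, resident=True,  readable=True, writable=True,  number=0x07A),
    0x404: PTE(exists=True, resident=False, readable=True, writable=False, number=0x9C2),
}
print(page_table[0x403])
```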

Translation Lookaside Buffers (TLB)
[Figure: the TLB holds entries of (V, R, W, D, tag, PPN); the VPN of the virtual address is matched against the tags, a protection check uses Mode = Kernel/User and Op = Read/Write, and on a hit the PPN is concatenated with the offset to form the physical address; otherwise an exception is raised.]
- Every instruction fetch and data access needs address translation and protection checks.
- Keep some of the (VPN, PPN) translations in a cache: the TLB. There is no need to put (VPN, DPN) pairs in the TLB.
- A TLB miss causes one or more accesses to the page table.
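A minimal sketch of a fully associative TLB lookup with a protection check; the entry format and helper names are assumptions made for illustration.

```python
# Sketch (assumed structure): fully associative TLB; each entry maps a VPN
# tag to a PPN plus valid/read/write bits. Raises on a miss or protection fault.
PAGE_SHIFT = 12

tlb = {  # tag (VPN) -> (valid, readable, writable, PPN)
    0x403: (True, True, True, 0x07A),
}

def tlb_translate(va, is_write):
    vpn, offset = va >> PAGE_SHIFT, va & ((1 << PAGE_SHIFT) - 1)
    entry = tlb.get(vpn)
    if entry is None or not entry[0]:
        raise LookupError("TLB miss: walk the page table and refill")
    valid, readable, writable, ppn = entry
    if (is_write and not writable) or (not is_write and not readable):
        raise PermissionError("protection fault")
    return (ppn << PAGE_SHIFT) | offset

print(hex(tlb_translate(0x00403ABC, is_write=False)))  # -> 0x7aabc
```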

TLB Designs
- Typically a small number of entries (a few dozen to a few hundred), usually fully associative.
  - Each entry maps a large page, hence there is little spatial locality across pages → it is more likely that two entries conflict.
  - Sometimes larger TLBs are 4-8 way set-associative.
- TLB Reach: the size of the largest virtual address space that can be simultaneously mapped by the TLB.
  - Example: 64 TLB entries, 4 KB pages, one page per entry → TLB Reach = 64 entries × 4 KB = 256 KB.
- Replacement policy? Random, FIFO, LRU, ...
- Switching users is expensive because the TLB has to be flushed; alternatively, store User IDs in the TLB entries.
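The reach calculation from the example, spelled out as straightforward arithmetic:

```python
# TLB reach for the example above: number of entries x page size.
entries   = 64
page_size = 4 * 1024             # 4 KB pages, one page per entry
tlb_reach = entries * page_size
print(tlb_reach // 1024, "KB")   # -> 256 KB
```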

Handling a TLB Miss
- Software (MIPS, Alpha): a TLB miss causes an exception, and the operating system walks the page tables and reloads the TLB. A privileged "untranslated" addressing mode is used for the page-table walk.
- Hardware (SPARC v8, x86, PowerPC): a memory management unit (MMU) walks the page tables and reloads the TLB. If a missing (data or PT) page is encountered during the TLB reload, the MMU gives up and signals a Page Fault exception for the original instruction.

Handling a Page Fault
When the referenced page is not in DRAM:
- The missing page is located (or created).
- It is brought in from disk, and the page table is updated. Another job may be run on the CPU while the first job waits for the requested page to be read from disk.
- If no free pages are left, a page is swapped out, using an approximate LRU replacement policy.
Since it takes a long time (milliseconds) to transfer a page, page faults are handled completely in software (the OS).
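A minimal sketch of the steps above; the data structures, the frame/swap dicts, and the simple eviction list are illustrative assumptions, not an actual OS interface.

```python
# Sketch (assumed data structures): a toy page-fault handler. Frames and the
# swap space are plain dicts; eviction uses a simple approximate-LRU stand-in.
class PTE:
    def __init__(self, dpn):
        self.resident, self.ppn, self.dpn, self.dirty = False, None, dpn, False

frames, swap = {}, {7: "contents of page 7"}    # frame# -> data, dpn -> data
free_frames = [0, 1]
page_table = {0x404: PTE(dpn=7)}
lru_order = []                                  # least recently used first

def handle_page_fault(vpn):
    pte = page_table[vpn]
    if free_frames:
        frame = free_frames.pop()
    else:                                       # evict an approximate-LRU victim
        victim = lru_order.pop(0)
        vpte = page_table[victim]
        if vpte.dirty:
            swap[vpte.dpn] = frames[vpte.ppn]   # write back a modified page
        vpte.resident, frame = False, vpte.ppn
    frames[frame] = swap[pte.dpn]               # bring the missing page in from disk
    pte.resident, pte.ppn = True, frame         # update the page table
    lru_order.append(vpn)                       # the faulting instruction then restarts

handle_page_fault(0x404)
print(page_table[0x404].ppn, frames[page_table[0x404].ppn])
```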

Address Translation: Putting It All Together
[Flowchart:] Virtual Address → TLB Lookup (hardware).
- Hit → Protection Check (hardware): permitted → Physical Address (to the cache); denied → Protection Fault → SEGFAULT.
- Miss → Page Table Walk (hardware or software): if the page is in memory → Update TLB (hardware or software) and resume the instruction; if the page is not in memory → Page Fault (software: the OS loads the page), then resume the instruction. Where?
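A compact control-flow sketch of this flowchart; the TLB, page table, and page-fault handler here are trivial stand-ins for the boxes in the diagram, not real interfaces.

```python
# Sketch: the translation flowchart as control flow, with toy data structures.
PAGE_SHIFT = 12
tlb = {}                                   # vpn -> (ppn, writable)
page_table = {0x403: ("mem", 0x07A, True)} # vpn -> (where, ppn_or_dpn, writable)

def translate(va, is_write):
    vpn, offset = va >> PAGE_SHIFT, va & ((1 << PAGE_SHIFT) - 1)
    if vpn not in tlb:                     # TLB miss: page table walk
        where, num, w = page_table[vpn]
        if where != "mem":                 # page not in memory: page fault,
            num, w = page_fault(vpn)       # the OS loads the page...
        tlb[vpn] = (num, w)                # ...then update the TLB and retry
    ppn, writable = tlb[vpn]               # TLB hit: protection check
    if is_write and not writable:
        raise PermissionError("protection fault -> SEGFAULT")
    return (ppn << PAGE_SHIFT) | offset    # physical address, to the cache

def page_fault(vpn):                       # placeholder for the OS handler
    return 0x100, True

print(hex(translate(0x00403ABC, is_write=False)))  # -> 0x7aabc
```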

Size of a Linear Page Table
- With 32-bit addresses, 4 KB pages, and 4-byte PTEs:
  - 2^20 PTEs, i.e., a 4 MB page table per user;
  - 4 GB of swap space needed to back up the full virtual address space.
- Larger pages can reduce the overhead, but cause:
  - internal fragmentation (part of the memory in a page is not used);
  - a larger page-fault penalty (more time to read from disk).
- What about a 64-bit virtual address space? Even 1 MB pages would require an enormous number of 8-byte PTEs (a 35 TB page table!).
- However, page tables are sparsely populated, and hence a hierarchical organization can help.
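The 32-bit arithmetic from the slide, spelled out:

```python
# Linear page table size for the 32-bit case: 4 KB pages, 4-byte PTEs.
va_bits, page_size, pte_size = 32, 4 * 1024, 4
num_ptes = 2**va_bits // page_size        # 2^20 entries
table_sz = num_ptes * pte_size            # 4 MB of page table per user
swap_sz  = 2**va_bits                     # 4 GB to back the full address space
print(num_ptes, table_sz // 2**20, "MB,", swap_sz // 2**30, "GB")
```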

Hierarchical Page Table
[Figure: the virtual address is split into <p1, p2, offset>, with a 10-bit L1 index (p1) and a 10-bit L2 index (p2). The Root of the Page Table, held in a processor register, points to the Level 1 Page Table; its entries point to Level 2 Page Tables, whose entries point to data pages. Pages may be in primary or secondary memory, and an L2 entry may be the PTE of a nonexistent page.]
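A minimal sketch of the two-level walk, assuming the 10/10/12 split of a 32-bit virtual address shown in the figure; the table contents are illustrative.

```python
# Sketch: two-level page-table walk for a 32-bit VA split as
# 10-bit L1 index (p1) | 10-bit L2 index (p2) | 12-bit offset.
PAGE_SHIFT, L2_BITS, L1_BITS = 12, 10, 10

l2_table   = {0x003: 0x07A}        # p2 -> PPN (only populated entries exist)
root_table = {0x001: l2_table}     # p1 -> a level-2 page table

def walk(va):
    offset =  va & ((1 << PAGE_SHIFT) - 1)
    p2     = (va >> PAGE_SHIFT) & ((1 << L2_BITS) - 1)
    p1     = (va >> (PAGE_SHIFT + L2_BITS)) & ((1 << L1_BITS) - 1)
    l2 = root_table.get(p1)        # missing L1 entry -> whole region unmapped
    if l2 is None or p2 not in l2:
        raise LookupError("page fault: PTE of a nonexistent page")
    return (l2[p2] << PAGE_SHIFT) | offset

print(hex(walk(0x00403ABC)))       # p1=0x001, p2=0x003 -> 0x7aabc
```

Because the two levels are sparsely populated dictionaries, L2 tables are only allocated for regions that are actually in use, which is exactly why the hierarchical organization helps with the size problem on the previous slide.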

Swapping a Page of a Page Table
- Hierarchical page tables permit parts of the page table to be kept in the swap space.
- A PTE in DRAM contains DRAM or disk addresses; a PTE on disk contains only disk addresses.
  → A page of a page table can be swapped out only if none of its PTEs point to pages in primary memory.
- Why? We don't want to cause a page fault during translation when the data itself is in memory.

Caching vs. Demand Paging
[Figure: caching sits between the CPU, its cache, and primary memory; demand paging sits between the CPU, primary memory, and secondary memory.]
Caching                              Demand paging
cache slot                           page frame
cache line (~32 bytes)               page (~4 KB)
cache miss rate (1% to 20%)          page miss rate (<0.001%)
cache hit (~1 cycle)                 page hit (~100 cycles)
cache miss (~100 cycles)             page miss (~5M cycles)
miss is handled in hardware          miss is handled mostly in software
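To see why the two regimes feel so different, here is a rough expected-cost calculation using ballpark numbers from the table; the specific rates chosen are illustrative points within the slide's ranges, not figures from the slide itself.

```python
# Rough expected access cost, in cycles, for representative rates from the table.
def expected_cost(hit_cost, miss_cost, miss_rate):
    return (1 - miss_rate) * hit_cost + miss_rate * miss_cost

cache_avg = expected_cost(hit_cost=1,   miss_cost=100,       miss_rate=0.05)
page_avg  = expected_cost(hit_cost=100, miss_cost=5_000_000, miss_rate=0.00001)
print(round(cache_avg, 1), "cycles;", round(page_avg, 1), "cycles")
```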

Address Translation in the CPU Pipeline
[Figure: a five-stage pipeline with an Inst TLB in front of the instruction cache and a Data TLB in front of the data cache; either TLB can raise a TLB miss, page fault, or protection violation.]
- Software handlers need a restartable exception on a page fault or protection violation.
- Handling a TLB miss needs a hardware or software mechanism to refill the TLB.
- Need mechanisms to cope with the additional latency of a TLB:
  - slow down the clock,
  - pipeline the TLB and cache access,
  - virtual-address caches,
  - parallel TLB/cache access.
The rest of the slides are optional.