2015/11/26\course\cpeg323-08F\Topic7e1 Virtual Memory.

Presentation transcript:


Virtual Memory: Motivation
Historically, there were two major motivations for virtual memory: to allow efficient and safe sharing of memory among multiple programs, and to remove the programming burden of a small, limited amount of main memory. [Patt&Henn 04]
"…a system has been devised to make the core-drum combination appear to the programmer as a single level store, the requisite transfers taking place automatically." [Kilburn et al.]

Purpose of Virtual Memory
- Provide sharing.
- Automatically manage the memory hierarchy (as "one level").
- Simplify loading (for relocation).
[Figure: a logical address flows from the main processor through the memory-management unit, which issues physical addresses and control/data signals to the high-speed cache, main memory, and backing store.]

Structure of Virtual Memory
[Figure: an address translator maps each virtual address from the processor to a physical address sent to memory; on a page fault, an elaborate software page-fault handling algorithm takes over.]

A Paging System
[Figure: a 64 KB virtual address space, divided into 4 KB pages, mapped onto a 32 KB main memory.]

Page Table
[Figure: a page table mapping virtual pages to page frames in main memory; a valid bit of 1 means the page is present in main memory, 0 means it is not.]

Address Translation
In virtual memory, blocks of memory (called pages) are mapped from one set of addresses (called virtual addresses) to another set (called physical addresses).
See the address-translation figure in P&H (3rd or 4th Ed.).
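As a concrete sketch of this mapping, the following (hypothetical, not from the slides) Python fragment splits a virtual address into a virtual page number and an offset, then looks the page up in a dict-based page table whose entries carry the valid bit described on the next slide. Names like `translate` and `PageFault` are illustrative.

```python
# Hypothetical single-level address translation, assuming 4 KB pages
# (12 offset bits) and a page table stored as a dict mapping virtual
# page numbers to (valid_bit, physical_frame_number) entries.

PAGE_SIZE = 4096          # 4 KB pages
OFFSET_BITS = 12          # log2(4096)

class PageFault(Exception):
    """Raised when the valid bit of the referenced page is off."""

def translate(page_table, vaddr):
    """Map a virtual address to a physical address, or raise PageFault."""
    vpn = vaddr >> OFFSET_BITS          # virtual page number
    offset = vaddr & (PAGE_SIZE - 1)    # address within the page
    valid, pfn = page_table.get(vpn, (0, None))
    if not valid:
        raise PageFault(f"virtual page {vpn} not in main memory")
    return (pfn << OFFSET_BITS) | offset

# Usage: virtual page 2 is mapped to physical frame 5.
pt = {2: (1, 5)}
paddr = translate(pt, 2 * PAGE_SIZE + 100)   # frame 5, offset 100
```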

Page Faults
If the valid bit for a virtual page is off, a page fault occurs and the operating system is given control. Once the operating system has control, it must find the page in the next level of the hierarchy (usually magnetic disk) and decide where to place the requested page in main memory.
See the page-fault figure in P&H (3rd or 4th Ed.).

Technology in 2008
See the memory-technology figure in P&H (4th Ed.).

Typical ranges of parameters for virtual memory in 2008.
See the corresponding figure in P&H (4th Ed.).

Virtual Address Mapping
[Figure: the virtual address is split into a page number and a displacement; the page map translates the page number into the base address of a page in memory, and the displacement selects the address within that page.]

Terminology
- Page
- Page fault
- Virtual address
- Physical address
- Memory mapping or address translation

VM Simplifies Loading
VM provides a relocation function: address mapping allows programs to be loaded at any location in physical memory. Under VM, relocation does not need the special OS and hardware support it required in the past.

Address Translation Considerations
- Direct mapping using register sets.
- Indirect mapping using tables.
- Associative mapping of frequently used pages.

Design of Virtual Memory
The page table (PT) must have one entry for each page in virtual memory! How many pages are there? How large is the PT?

4 Key Design Decisions in VM Design
- Pages should be large enough to amortize the high access time (4 KB to 16 KB is typical, and some designers are considering sizes as large as 64 KB).
- Organizations that reduce the page-fault rate are attractive. The primary technique used here is to allow flexible placement of pages (e.g., fully associative).

4 Key Design Decisions in VM Design (cont'd)
- Page faults (misses) in a virtual memory system can be handled in software, because the overhead is small compared to the access time to disk. Furthermore, the software can afford to use clever algorithms for choosing how to place pages, because even small reductions in the miss rate will pay for the cost of such algorithms.
- Using write-through to manage writes in virtual memory will not work, since writes take too long. Instead, we need a scheme that reduces the number of disk writes.

What Happens on a Write?
- Write-through to secondary storage is impractical for VM.
- Write-back is used:
  - Advantages: reduces the number of writes to disk, amortizes the cost.
  - A dirty bit records whether the page has been modified.
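A minimal sketch of the dirty-bit mechanism, under assumed names (`Frame`, `DISK` as a stand-in backing store): a page is written back to disk on eviction only if it was modified while resident, which is exactly how write-back avoids unnecessary disk writes.

```python
# Write-back paging with a dirty bit, assuming 512-byte pages and a
# hypothetical DISK dict standing in for secondary storage.

PAGE_SIZE = 512
DISK = {}                         # backing store: vpn -> page contents

class Frame:
    def __init__(self, vpn, data):
        self.vpn = vpn
        self.data = bytearray(data)
        self.dirty = False        # dirty bit: set on the first write

def write_byte(frame, offset, value):
    frame.data[offset] = value
    frame.dirty = True            # the page now differs from its disk copy

def evict(frame):
    """Write the page to disk only if dirty; a clean page is simply
    dropped, since disk already holds the latest copy."""
    if frame.dirty:
        DISK[frame.vpn] = bytes(frame.data)
```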

Page Size Selection Constraints
- Efficiency of the secondary memory device.
- Page table size.
- Internal fragmentation: the last part of the last page is wasted.
- Program logic structure: logical block size of about 1 KB ~ 4 KB.
- Table fragmentation [Kai, p. 68]: the PT itself occupies memory.

Page Size Selection
- PT size.
- Miss ratio.
- Efficiency of PT transfers from disk to memory.
- Internal fragmentation: with separate text, heap, and stack regions, each wastes on average half a page, i.e., 3 x 0.5 = 1.5 pages per process!
- Start-up time of a process: the smaller the page, the faster!

An Example
Case 1:
- VM page size: 512 bytes
- VM address space: 64 KB
- Total virtual pages = 64 KB / 512 = 128 pages

An Example (cont'd)
Case 2:
- VM page size: 512 bytes
- VM address space: 4 GB = 2^32 bytes
- Total virtual pages = 2^32 / 2^9 = 2^23 = 8M pages
- Assuming main memory is 4 MB, it has 4 MB / 512 = 2^13 frames, so each PTE needs 13 bits for the frame number; rounding each entry up to 4 bytes, total PT size ≈ 8M x 4 = 32 MB.

An Example (cont'd)
How about a VM address space of 2^52 bytes (R-6000, 4 petabytes) with a 4 KB page size? The total number of virtual pages is 2^52 / 2^12 = 2^40.
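The arithmetic in both cases can be checked directly; this fragment reproduces the slide's numbers from the stated parameters.

```python
# Verifying the slides' page-table arithmetic: 512-byte pages,
# a 64 KB space (Case 1), a 4 GB space with 4 MB of main memory
# (Case 2), and the R-6000's 2^52-byte space with 4 KB pages.

PAGE = 512                                  # 2^9 bytes per page

pages_case1 = (64 * 1024) // PAGE           # 64 KB space -> 128 pages
pages_case2 = 2**32 // PAGE                 # 4 GB space  -> 2^23 = 8M pages
frames      = (4 * 2**20) // PAGE           # 4 MB memory -> 2^13 frames

pte_bits = frames.bit_length() - 1          # 13 bits to name a frame
pt_bytes = pages_case2 * 4                  # round each PTE up to 4 bytes

pages_r6000 = 2**52 // 4096                 # -> 2^40 virtual pages

print(pages_case1)        # 128
print(frames)             # 8192, i.e. 2^13
print(pt_bytes // 2**20)  # 32 -> the 32 MB page table from the slide
```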

Techniques for Reducing PT Size
- Set a lower limit, and permit dynamic growth.
- Permit growth from both directions.
- Inverted page table (a hash table).
- Multi-level page table (segments and pages).
- The PT itself can be paged, i.e., placed in the virtual address space (note: some small portion of its pages should stay in main memory and never be paged out).

Two-Level Address Mapping
[Figure: the virtual address is split into an 11-bit segment number, an 11-bit page number, and a 10-bit displacement. The segment number indexes the segment table to find the base address of a page table; the page number indexes that page table to find the base address of a page in memory; the displacement selects the address within the page.]
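The two-level walk above can be sketched in a few lines; the segment and page tables are modeled as hypothetical Python dicts, and the 11/11/10 bit split matches the figure.

```python
# Two-level address mapping with an 11-bit segment number, an 11-bit
# page number, and a 10-bit displacement, as on the slide's figure.
# The tables are illustrative dicts, not a real MMU structure.

SEG_BITS, PAGE_BITS, DISP_BITS = 11, 11, 10

def translate2(segment_table, vaddr):
    disp = vaddr & ((1 << DISP_BITS) - 1)                 # within the page
    page = (vaddr >> DISP_BITS) & ((1 << PAGE_BITS) - 1)  # page number
    seg  = vaddr >> (DISP_BITS + PAGE_BITS)               # segment number
    page_table = segment_table[seg]   # base address of the page table
    page_base  = page_table[page]     # base address of the page in memory
    return page_base + disp

# Usage: segment 1, page 3 resides at (hypothetical) physical base 0x40000.
st = {1: {3: 0x40000}}
vaddr = (1 << 21) | (3 << 10) | 17    # seg=1, page=3, disp=17
```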

Placement
OS designers always pick lower miss rates over a simpler placement algorithm. So placement is fully associative: VM pages can go anywhere in main memory (compare with a sector cache).
Question: why not use associative hardware? (The number of PT entries is too big!)

VM: Implementation Issues
- Page fault handling.
- Translation lookaside buffer (TLB).
- Protection issues.

Fast Address Translation
Going through the PT requires at least two memory accesses for each memory reference. Improvements:
- Store the PT in fast registers (example: Xerox, 256 registers?).
- TLB.
For multiprogramming, the pid should be stored as part of the tags in the TLB.
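A toy model of the TLB idea, under assumed names: entries are tagged with the pid (as the slide suggests for multiprogramming), a hit avoids the page-table access entirely, and on a miss an arbitrary entry is evicted, echoing the later "TLB Design" slide's note that even a random policy is sometimes used.

```python
# A toy TLB in front of the page table, assuming 4 KB pages.
# The TLB is a small dict keyed by (pid, vpn) so entries from
# different processes can coexist without flushing.

OFFSET_BITS = 12
TLB_CAPACITY = 64

tlb = {}                          # (pid, vpn) -> pfn
stats = {"hit": 0, "miss": 0}

def lookup(pid, vaddr, page_table):
    vpn = vaddr >> OFFSET_BITS
    if (pid, vpn) in tlb:                     # TLB hit: one fast lookup
        stats["hit"] += 1
        pfn = tlb[(pid, vpn)]
    else:                                     # TLB miss: walk the PT in memory
        stats["miss"] += 1
        pfn = page_table[(pid, vpn)]
        if len(tlb) >= TLB_CAPACITY:          # evict an arbitrary entry
            tlb.pop(next(iter(tlb)))
        tlb[(pid, vpn)] = pfn
    return (pfn << OFFSET_BITS) | (vaddr & ((1 << OFFSET_BITS) - 1))

pt = {(7, 0): 9}
lookup(7, 0x123, pt)              # miss: fills the TLB
lookup(7, 0x456, pt)              # hit: same page, no PT access
```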

Page Fault Handling
When a virtual page number is not in the TLB, the PT in memory is accessed (through the PTBR) to find the PTE. If the PTE indicates that the page is missing, a page fault occurs, and a context switch takes place!

Making Address Translation Fast
The TLB acts as a cache on the page table, holding only entries that map to physical pages.
See the TLB figure in P&H (3rd or 4th Ed.).

Typical values for a TLB in 2008.
See the corresponding figure in P&H (4th Ed.).
Although the range of values is wide, this is partially because many of the values that have shifted over time are related; for example, as caches become larger to overcome larger miss penalties, block sizes also grow.

TLB Design
- Placement policy:
  - Small TLBs: full associativity can be used.
  - Large TLBs: full associativity may be too slow.
- Replacement policy: sometimes even a random policy is used, for speed and simplicity.

Example: FastMATH
Processing a read or a write-through in the Intrinsity FastMATH TLB and cache.
See the corresponding figure in P&H (3rd or 4th Ed.).

Virtual-to-Real Address Translation Using a Page Map
[Figure: the virtual address (pid, page number i, displacement w) indexes the TLB and the page map; the page frame address (PFA) from the matching page map entry (PME) is combined with w to form the physical address. Operation validation compares the requested access type against the PME's RWX bits and the S/U (supervisor/user) mode; a violation raises an access fault, and a missing page raises a page fault handled by the replacement policy.]
PME(x) fields:
- C = 1: the page frame has been modified.
- P = 1: the page is private to the process.
- pid: process identification number.
- PFA: page frame address.
If S/U = 1, the processor is in supervisor mode.

Translation Lookaside Buffer
- The TLB miss rate is low: Clark-Emer data [85] show it is 3~4 times smaller than typical cache miss ratios.
- When a TLB miss occurs, the penalty is relatively low: a TLB miss usually results in a cache fetch.

Translation Lookaside Buffer (cont'd)
- A TLB miss implies a higher miss rate for the main cache.
- TLB translation is process-dependent. Strategies for context switching:
  1. Tagging by context.
  2. Flushing: complete purge by context (shared).
  There is no absolute answer.

Integrating VM, TLBs and Caches
The TLB and cache implement the process of going from a virtual address to a data item in the Intrinsity FastMATH. The figure shows the organization of the TLB and the data cache, assuming a 4 KB page size, and focuses on a read.
See the corresponding figure in P&H (3rd or 4th Ed.).