CHAPTER 3-3: PAGE MAPPING MEMORY MANAGEMENT

VIRTUAL MEMORY
Key Idea: disassociate the addresses referenced by a running process from the addresses available in storage

FEATURES
Can address a storage space larger than primary storage
Creates the illusion that a process is placed contiguously in memory
Two Methods
Paging: memory allocated in fixed-size blocks
Segmentation: memory allocated in variable-size blocks

TERMS
Virtual Addresses: addresses referenced by a running process
Real Addresses: addresses available in primary storage
Virtual Address Space: the range of virtual addresses that a process may reference
Real Address Space: the range of real addresses available on a particular computer
Implication: processes are referenced in virtual memory, but run in real memory

MAPPING
1. Virtual memory is contiguous
2. Physical memory need not be contiguous
3. Virtual memory can exceed physical memory
4. #3 implies that only a part of a process has to be in physical memory
5. Virtual memory is limited by address size
6. Physical memory has (what else?) physical limits

ISSUES
How are addresses mapped?
Page Fault: a virtual memory reference has no corresponding item in physical memory
Physical memory is full, but a process needs something in secondary storage

NL-MAP/MMU: A FIRST LOOK
Suppose every virtual address is mapped to a real address
Far beyond base/limit registers
Problem: the page map table is as large as the process
Solution: break the process into fixed-size blocks, say 512 bytes to 8 KB, and map the blocks in hardware
Virtual memory blocks: pages
Real memory blocks: page frames

PAGE TABLE
Page: chunk of virtual memory
Page Frame: chunk of physical memory
The relation between virtual addresses and physical memory addresses is given by the page table
Every page begins on a multiple of 4096 and ends 4095 addresses higher
So 4K to 8K really means 4096 to 8191
8K to 12K means 8192 to 12287

A LITTLE HELP FROM HARDWARE
The position and function of the MMU
The MMU is shown as part of the CPU chip, but it could just as easily be a separate chip

NL-MAP (1)
NL-MAP is really a look-up into a page table
Each process has its own page table
A register in the CPU is loaded with the real address, A, of the process's page table
The page table contains one entry for each page of the process

NL-MAP (2)
A virtual address has two parts: (p, d)
page number (p)
offset from the start of the page (d)
A page table entry has (at least) these parts:
present/absent bit
protection bits (read/write/execute)
referenced bit
modified (dirty) bit: if set when the page is evicted, the page must be written back to disk
secondary storage address
page frame address

NL-MAP (3)
Consequences
1. The page table can be large
2. Mapping must be fast
Entry bits:
ref: page has been referenced
mod: page has been modified
prot: what kinds of access are permitted
pres/abs: if set to 0, page fault
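
The entry bits above can be sketched in code. The bit positions used here (present in bit 0, protection in bits 1 through 3, referenced in bit 4, modified in bit 5, frame address in the high bits) are an assumed layout for illustration only, not any particular architecture's:

```python
# Decode a page-table entry under an assumed (illustrative) bit layout:
# bit 0 = present/absent, bits 1-3 = protection (r/w/x),
# bit 4 = referenced, bit 5 = modified, bits 12 and up = frame address.
def decode_pte(entry):
    return {
        "present":    bool(entry & 0x1),
        "prot_rwx":   (entry >> 1) & 0x7,
        "referenced": bool(entry & 0x10),
        "modified":   bool(entry & 0x20),
        "frame":      entry >> 12,
    }

# Build an entry for frame 0x2A: present, r-x protection, referenced, modified
pte = (0x2A << 12) | 0x20 | 0x10 | (0b101 << 1) | 0x1
fields = decode_pte(pte)
```

A real MMU does exactly this kind of shift-and-mask extraction, only in hardware.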

SPEED
A direct-mapped page table is kept in main memory
Two memory accesses are required to satisfy a reference:
1. Page table
2. Primary storage

SIZE
Suppose we have 32-bit addresses
Some of the 32 bits will be the index (displacement) within the page table
Some of the 32 bits will be the offset (displacement) within the page frame
Suppose our pages are 4K (4096 bytes)
Offsets from 0 to 4K-1 are necessary
These are displacements from the start of the page frame, the 'd' part of the real address
Since 4K = 2^12, we reserve the low-order 12 bits for this
That leaves 20 bits to address entries in the page table
Each process would have a page table with 2^20 entries
If the page size is smaller, we have more entries!
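
The 12/20 split described above is just a shift and a mask; a small sketch:

```python
# Split a 32-bit virtual address for 4K pages: the low 12 bits are the
# offset within the page, the high 20 bits index the page table.
PAGE_SIZE = 4096        # 4K = 2**12
OFFSET_BITS = 12

def split(vaddr):
    """Return (page number, offset within page) for a virtual address."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

num_entries = 1 << (32 - OFFSET_BITS)   # 2**20 page table entries
```

For example, address 8292 (8192 + 100) lies at offset 100 within page 2.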

IMPLICATIONS OF THE DIRECT MAPPING MODEL
Large page tables are impractical from two perspectives:
1. They require lots of memory
2. Loading an entire page table at each context switch would kill performance

MORE HELP FROM HARDWARE
Translation lookaside buffer (TLB)
Associative memory, searchable in parallel
Very small number of entries, say 8 to 256

TLB

TLB CLOSE-UP

PRINCIPLE OF LOCALITY
A TLB with only 16 entries achieves 90% of the performance of a system with the entire page table in associative memory
Why? A page referenced in the recent past is likely to be referenced again

PAGING WITH TLB (1)
Small associative memory with 16 or 32 registers
Store there the most recently referenced page frames
Technique
1. The process references a virtual address (p, d)
2. Do a parallel search of the TLB for p
   if found, return p', the frame number corresponding to p
   else look up p in the page table
3. Update the TLB with p and p', replacing the least recently used entry
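
The technique above can be modeled in software. This is a minimal sketch, with a plain dict standing in for the per-process page table and an OrderedDict approximating the hardware's LRU replacement; the class and method names are illustrative:

```python
from collections import OrderedDict

class TLB:
    """A software model of a small translation lookaside buffer."""
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.entries = OrderedDict()       # maps page number p -> frame p'

    def translate(self, p, page_table):
        if p in self.entries:              # TLB hit: refresh LRU order
            self.entries.move_to_end(p)
            return self.entries[p]
        frame = page_table[p]              # TLB miss: consult the page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[p] = frame
        return frame

tlb = TLB(capacity=2)
page_table = {0: 7, 1: 3, 2: 9}
```

Translating pages 0, 1, 0, 2 with this 2-entry TLB evicts page 1, since page 0 was referenced more recently.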

PAGING WITH TLB (2)
When p is not found in the TLB, the least recently used slot in the TLB is filled with p's entry from the page table

The Whole Picture

PAGE TABLES CAN BE LARGE
Suppose: 32-bit machine, 4K page size, 12-bit displacement
2^20 page table entries, each (at least) 8 bytes
8 MB page table per process
Now suppose: 64-bit machine, 4K page size, 12-bit displacement
2^52 page table entries, each (at least) 8 bytes
(2^55 bytes)/(2^40 bytes/TB) = 2^15 TB page table per process
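
The arithmetic above, expressed as a function:

```python
# Size of a single-level page table that covers an entire address space.
def page_table_bytes(addr_bits, page_bits=12, entry_bytes=8):
    entries = 1 << (addr_bits - page_bits)   # one entry per page
    return entries * entry_bytes

size32 = page_table_bytes(32)   # 2**20 entries * 8 bytes = 8 MB
size64 = page_table_bytes(64)   # 2**52 entries * 8 bytes = 2**55 bytes
```

The 64-bit table is 2^55 bytes, i.e. 2^15 terabytes per process, which is why single-level tables are untenable there.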

INDIRECTION: TWO LEVELS (THERE COULD BE MORE)
Virtual address = (p, t, d)
p: page number at the first level
t: page number at the second level
d: displacement into the page frame

THE TWO LEVEL MODEL
[Diagram: the first-level page table, at address A, is indexed by p to find p', the address of a second-level table; that table is indexed by t to find p'', the page frame address; adding d to p'' yields the real address. There is one second-level table for each entry in level 1.]

HOW MANY ACTUAL TABLES?
Suppose 32-bit addresses, 4K page size
P (10-bit displacement into the first-level table)
T (10-bit displacement into the second-level table)
D (12-bit displacement into the page frame)
1. P has 2^10 entries
2. T has 2^10 entries
Each entry in the top-level table references 4 MB of memory because
it references a second-level page table of 1024 or 2^10 entries,
each of which points to a 4K page
At most: 1 top-level table with 1024 entries
At most: 1024 second-level tables, each entry pointing to a 4K page
At most: 1024 * 1024 * 4K = 2^32 bytes per process
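
Checking the coverage arithmetic above:

```python
ENTRIES = 1 << 10    # 10-bit index at each level: 1024 entries per table
PAGE = 1 << 12       # 4K pages

per_top_entry = ENTRIES * PAGE     # one full second-level table of 4K pages
total = ENTRIES * per_top_entry    # every top-level entry populated
```

Each top-level entry covers 4 MB, and 1024 of them cover the whole 2^32-byte address space.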

KEY IDEA
Not all of the tables are necessary
With direct mapping, each process requires a page table with 2^20 entries whether they are needed or not
With a two-level page table the savings can be substantial

EXAMPLE
Suppose a 12 MB process
bottom 4 MB for code
next 4 MB for data
top 4 MB for stack
Hole between the top of the data and the bottom of the stack
The top-level page table has 2^10 slots, 2^3 bytes each
Only three are used
These point to three second-level page tables, each with 2^10 slots, each slot requiring 2^3 bytes
Total page table memory = 2^10 slots x 2^3 bytes/slot + 3 x 2^10 slots x 2^3 bytes/slot = 2^15 bytes = 32 KB for all page tables
Direct mapping: 8 MB page table
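
The example's totals can be checked directly:

```python
SLOTS = 1 << 10      # 2**10 slots per table
ENTRY = 1 << 3       # 2**3 bytes per slot

top_level = SLOTS * ENTRY            # the single top-level table: 8 KB
second_level = 3 * SLOTS * ENTRY     # code, data, and stack tables: 24 KB
two_level_total = top_level + second_level   # 2**15 bytes = 32 KB

direct = (1 << 20) * ENTRY           # direct mapping: 2**20 entries
```

Four small tables totaling 32 KB replace an 8 MB direct-mapped table, a saving of roughly 256x for this process.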

MULTILEVEL PAGE TABLES Each arrow on the right points to a 4K page. The low 12 bits of the virtual address are a displacement into the page

SAMPLE PAGE REFERENCE
The MMU receives the virtual address 0x00403004, in binary:
0000|0000|0100|0000|0011|0000|0000|0100
P = 1, T = 3, D = 4
P = 1: 1st entry, starting at 0, in the first-level page table. Find p', the address of the 2nd-level page table
T = 3: 3rd entry, starting at 0, in the 2nd-level page table whose address is p'. Find p'', the address of the page frame
D = 4: 4th byte offset within the page frame

WHERE IN VIRTUAL MEMORY?
1. P indexes the top-level page table. P = 1 corresponds to the 2nd 4M of virtual memory: 4M to 8M-1
2. T indexes the 2nd-level page table. T = 3 corresponds to the 4th 4K chunk:
3 * 4K to 4 * 4K - 1
12K to 16K-1
12,288 to 16,383 within its 4M chunk
But since P = 1, we are in the 2nd 4M chunk:
12,288 + 4M to 16,383 + 4M
or absolute addresses 4,206,592 to 4,210,687
3. The entry found in the 2nd-level page table contains the frame address corresponding to the virtual address
4. To this we add D = 4; within the virtual address space this reference is absolute address 4,206,592 + 4 = 4,206,596
5. If the present/absent bit is 0: page fault
Note:
a) The virtual address space has (2^32 bytes)/(2^12 bytes/page) = 2^20 pages
b) But we are using only 12M/(4K/page) = 3K pages
c) Only 4 page tables are necessary to handle all of the address references
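
The bit-slicing in this example can be verified with a short sketch:

```python
# The slide's example address: P = top 10 bits, T = next 10, D = low 12.
vaddr = 0b0000_0000_0100_0000_0011_0000_0000_0100

p = vaddr >> 22              # first-level index
t = (vaddr >> 12) & 0x3FF    # second-level index
d = vaddr & 0xFFF            # offset within the page frame

page_start = p * (4 << 20) + t * (4 << 10)   # start of this virtual page
absolute = page_start + d                    # the referenced byte
```

This recovers P = 1, T = 3, D = 4 and the absolute virtual address 4,206,596 computed above.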

MORE LEVELS ARE POSSIBLE
If we refer to the 1st-level page table as the page directory, then the scheme we have been describing is Intel's from the 80386 (1985)
The Pentium Pro (1995) added another level, the Page Directory Pointer Table
With 4 entries there, 512-entry page tables, and 4K page frames:
2^2 x 2^9 x 2^9 x 2^12 = 4 GB (as before, but with more flexibility)
x86-64 uses 4 levels of 512-entry page tables:
2^9 x 2^9 x 2^9 x 2^9 x 2^12 = 2^48 = 256 TB

GETTING OUT OF HAND? INVERTED PAGE TABLES
The page table contains 1 entry per page frame of real memory
Suppose we want to address 1 GB of real memory using 4K page frames
The inverted page table requires 2^18 entries because 2^18 x 2^12 = 1 GB
Seems big, but this is for all processes
Now, when process n refers to virtual page p, p is no longer an index into the table
The entire 256K-entry table must be searched for the pair (n, p)

TLB AND HASH TABLES
On a TLB miss, the entire inverted table would have to be searched
Hash Solution
1. Search the TLB
2. If found, retrieve the page frame
3. If not found:
   hash the page number to find an entry in the hash table
   (all pages hashing to the same number are chained as (page, page frame) tuples)
   enter the entry into the TLB
   retrieve the page frame
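
The hash-chain lookup above can be sketched as follows. The list-of-buckets structure, the bucket count, and the function names are illustrative assumptions, not any real system's layout:

```python
# An inverted page table with hash chains: entries that hash to the same
# bucket are chained as ((process, page), frame) tuples, so a lookup scans
# only one chain instead of the whole table.
NBUCKETS = 64

def make_table():
    return [[] for _ in range(NBUCKETS)]

def insert(table, process, page, frame):
    table[hash((process, page)) % NBUCKETS].append(((process, page), frame))

def lookup(table, process, page):
    for key, frame in table[hash((process, page)) % NBUCKETS]:
        if key == (process, page):
            return frame
    return None   # no mapping: this reference would page-fault

ipt = make_table()
insert(ipt, 1, 5, 42)   # process 1, page 5 -> frame 42
insert(ipt, 2, 5, 7)    # process 2, page 5 -> frame 7
```

Note that (process, page) pairs are the keys, so two processes can map the same page number to different frames without colliding.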

INVERTED PAGE TABLES
a) Direct mapping
b) Inverted page table
c) Inverted page table with hash chains