© Avi Mendelson, 5/ Lecture

MAMAS – Computer Architecture
Address Spaces and Memory Management
Dr. Avi Mendelson
Some of the slides were taken from (1) Jim Smith and (2) Patterson presentations.
Memory System Problems
- Different programs have different memory requirements – how to manage program placement?
- Different machines have different amounts of memory – how to run the same program on many different machines?
- At any given time each machine runs a different set of programs – how to fit the program mix into memory? Reclaiming unused memory? Moving code around?
- The amount of memory consumed by each program is dynamic (it changes over time) – how to effect changes in memory allocation: add or subtract space?
- Program bugs can cause a program to generate reads and writes outside the program's address space – how to protect one program from another?
Address Spaces
Address Space of a Single Process (Linux)
Each process has a fixed-size address space that can be divided into:
- Symbol tables and other management areas
- Code
- Data
  - "static" data allocated by the compiler
  - dynamic data allocated by "malloc"
- Stack – managed automatically
[Figure: process memory layout – program text (.text), initialized data (.data), uninitialized data (.bss), the runtime heap (grown via malloc up to the "brk" pointer), the stack (top marked by %esp), and kernel virtual memory, which is invisible to user code]
Virtual Memory
Main memory can act as a cache for the secondary storage (disk).
- Divide memory (virtual and physical) into fixed-size blocks: Pages in the virtual space, Frames in the physical space.
- Page size = frame size; the page size is a power of 2: page size = 2^k.
- All pages in the virtual address space are contiguous.
- Pages can be mapped into physical frames in any order.
- Some of the pages are in main memory (DRAM); some of the pages are on disk.
[Figure: virtual addresses pass through address translation to physical addresses or disk addresses]
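Because the page size is a power of two, splitting a virtual address into a page number and an offset is just a shift and a mask. A minimal sketch (the 4 KB page size is an example value, not fixed by the slide):

```python
PAGE_SHIFT = 12                       # assume 4 KB pages: page size = 2**12
PAGE_SIZE = 1 << PAGE_SHIFT

def split_vaddr(vaddr):
    """Split a virtual address into (virtual page number, page offset)."""
    vpn = vaddr >> PAGE_SHIFT         # high bits select the page
    offset = vaddr & (PAGE_SIZE - 1)  # low 12 bits pass through translation unchanged
    return vpn, offset
```

For example, `split_vaddr(0x12345678)` yields VPN `0x12345` and offset `0x678`.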
Managing Multiple Virtual Address Spaces
- Each process "sees" a private address space of 4 GB:
  - 2 GB of private address space the process can use
  - 2 GB of system address space, shared by all processes, which can be accessed only when the system is in "supervisor" mode (kernel mode)
- All processes share the same (small) physical memory: a small part of the address space is kept in main memory (DRAM); most of the address space is either not mapped or is kept on disk.
- All programs are written using the virtual memory address space.
- The hardware does on-the-fly translation between virtual and physical address spaces.
[Figure: per-process 2 GB virtual address spaces, including the U area, mapped onto the shared physical memory]
How a Process Is Created (exec)
- The executable file contains the code, the initial values for the data area, and initial tables (symbol tables etc.).
- A page table is created to map the virtual addresses to physical addresses.
- At start time no page is in main memory, so:
  - Code pages point to the place on disk where the code section of the file is.
  - The initial data pages (.data) are mapped to the .data section within the executable file.
  - The system may allocate pages for initial stack space, data space (.bss), and the heap area (this is a performance optimization; the system can avoid it).
- As execution continues, the system allocates pages in main memory, copies their content from disk, and changes the pointers to point to main memory.
- When a page is replaced from main memory, the system keeps it in a special place on disk, called the swap area.
Sharing Address Spaces
Virtual memory can help to share address spaces: if several processes map different pages to the same physical page, the data/code of that page is shared – e.g., read-only data or library code (we will discuss how the OS takes advantage of this feature later on).
[Figure: the virtual address spaces of process 1 and process 2 (each with pages VP 1, VP 2) are translated into the physical address space (DRAM); pages from both processes can map to the same physical page, e.g. PP 7]
VM Can Help Process Protection
The page table entry contains access-rights information, and the hardware enforces this protection (a trap into the OS occurs if a violation is detected).
[Figure: per-process page tables with access bits – e.g., process i maps VP 0 → PP 9 (read: yes, write: no) and VP 1 → PP 4 (read: yes), with VP 2 invalid; process j maps VP 0 → PP 6 (read: yes) and VP 1 → PP 9 (read: yes, write: no), with VP 2 invalid]
Virtual Memory Implementation
Page Tables
[Figure: a page table indexed by the virtual page number; each entry holds a valid bit and either a physical page address (valid = 1, the page is in physical memory) or a disk address (valid = 0, the page is on disk)]
Virtual-to-Physical Address Translation
[Figure: the virtual address is split into a virtual page number and a page offset; the page table maps the virtual page number to a physical page number, which is concatenated with the unchanged page offset to form the physical address]
Page size: 2^12 bytes = 4 KB.
The Page Table
[Figure: a 32-bit virtual address is split into a virtual page number (bits 31–12) and a page offset (bits 11–0); the physical address is a physical frame number (bits 29–12) concatenated with the same 12-bit page offset. A page-table base register points to the page table; each entry holds a valid (V) bit, a dirty (D) bit, access-control (AC) bits, and the frame number]
Address Mapping Algorithm
- If V = 1, the page is in main memory at the frame address stored in the table → access the data.
- Else (page fault): the page needs to be fetched from disk. This causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from disk.
- Access control (R = read-only, R/W = read/write, X = execute only): if the kind of access is not compatible with the specified access rights, a protection_violation_fault causes a trap to a hardware or software fault handler.
- A missing item is fetched from secondary memory only on the occurrence of a fault – the demand-load policy.
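The algorithm above can be sketched as follows; the PTE representation and the exception names are illustrative assumptions, not a real hardware encoding:

```python
PAGE_SHIFT = 12                       # assume 4 KB pages for illustration

class PageFault(Exception):
    """Raised when V = 0: the OS must fetch the page from disk."""

class ProtectionViolationFault(Exception):
    """Raised when the access kind is incompatible with the access rights."""

def translate(page_table, vaddr, access):
    """Map a virtual address to a physical one per the algorithm above.

    page_table maps VPN -> {'valid': bool, 'frame': int, 'rights': str},
    where rights is 'R', 'R/W', or 'X'; access is 'read', 'write', or 'exec'.
    """
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    pte = page_table.get(vpn)
    if pte is None or not pte['valid']:
        raise PageFault(vpn)          # trap: demand-load the missing page
    allowed = {'R': {'read'}, 'R/W': {'read', 'write'}, 'X': {'exec'}}
    if access not in allowed[pte['rights']]:
        raise ProtectionViolationFault(vpn)
    return (pte['frame'] << PAGE_SHIFT) | offset
```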
Managing Virtual Addresses
Virtual memory management is mainly done by the OS:
- Deciding which pages should be in memory and which on disk.
- Deciding when to write pages to disk.
- Deciding what access rights are assigned to pages.
- Et cetera – the exact algorithms are out of the scope of this course.
Here we study two hardware mechanisms:
- A pseudo-LRU mechanism – to decide which pages to keep
- The TLB – to perform fast lookup
Page Replacement Algorithm (Pseudo-LRU)
Not Recently Used (NRU):
- Associated with each page is a reference flag, such that ref flag = 1 if the page has been referenced in the recent past.
- If replacement is needed, choose any page frame whose reference bit is 0 – a page that has not been referenced in the recent past.
Clock implementation of NRU:
- Keep a last-replaced pointer (lrp) into the page-table entries.
- If replacement is to take place, advance lrp to the next entry (mod table size) until one with a 0 reference bit is found; this is the target for replacement.
- As a side effect, all examined PTEs have their reference bits set to zero.
Possible optimization: search for a page that is both not recently referenced AND not dirty.
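The clock scheme above fits in a few lines; representing the reference bits as a plain list is an assumption for illustration:

```python
def clock_select(ref_bits, lrp):
    """Clock implementation of NRU.

    Advance the last-replaced pointer (mod table size) until a frame with
    reference bit 0 is found; as a side effect, clear the reference bit of
    every examined frame. Returns (victim frame index, new lrp);
    ref_bits is mutated in place.
    """
    n = len(ref_bits)
    while True:
        lrp = (lrp + 1) % n           # advance pointer mod table size
        if ref_bits[lrp] == 0:
            return lrp, lrp           # not recently used: replace this frame
        ref_bits[lrp] = 0             # examined and skipped: clear its bit
```

Even if every bit is 1 the loop terminates: the first sweep clears the bits, and the second sweep finds a 0.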
Page Faults
A page fault means the data is not in memory and must be retrieved from disk:
- The CPU must detect the situation, but it cannot remedy it (it has no knowledge of the disk).
- The CPU must trap to the operating system so that it can remedy the situation:
  - Pick a page to discard (possibly writing it to disk)
  - Load the page in from disk
  - Update the page table
  - Resume the program, so the hardware will retry the access and succeed!
A page fault incurs a huge miss penalty, therefore:
- Pages should be fairly large (e.g., 4 KB).
- The faults can be handled in software instead of hardware.
- A page fault causes a context switch.
- Using write-through is too expensive, so we use write-back.
Virtual Memory in Unix (VAX)
- The VAX was one of the first systems to run UNIX with virtual memory, and so many of the UNIX VM mechanisms are based on the VAX implementation.
- Unix distinguishes between 3 different address spaces, each controlled by a separate page table:
  - The system address space – one per system
  - The user code + data address space – one per process
  - The user stack address space – one per process
- Only the system page table (one table) resides in main memory; all the other tables and address spaces are pageable.
[Figure: user space (code, stack) and system space (kernel, run-process table, system page table, user page tables) with the swappable area]
VM in VAX: Address Format
Page size: 2^9 = 512 bytes.
[Figure: a 32-bit virtual address is split into a virtual page number (bits 31–9) and a page offset (bits 8–0); the top bits select the space: P0 – process space, code and data; P1 – process space, stack; S0 – system space; S1 – not used. The physical address is a physical frame number (bits 29–9) concatenated with the 9-bit page offset]
Page Table Entry (PTE)
This is the structure of all the entries in each of the page tables:
[Figure: PTE fields – V (valid bit), PROT (4 protection bits), M (modified bit), Z (indicates whether the page was cleaned, i.e. zeroed), OWN (3 ownership bits), S bits, and the physical frame number]
Valid bit = 1 if the page is mapped to main memory; otherwise the page is in the swap area and the address indicates where the page can be found on disk.
Address Translation
- The system address space is divided into a "resident" part that always exists in main memory; the rest of the system's 2 GB address space is swappable. User space is always swappable.
- The system's page table is part of the resident area, and the SBR register points to its physical base address.
- For each process there are two dedicated registers:
  - P0BR: points to the virtual address (in the system's address space) of the page table of the user's code segment
  - P1BR: points to the virtual address (in the system's address space) of the page table of the user's stack segment
System Space – Address Translation
- From a virtual address in the system's address space we extract the virtual page number (VPN).
- Assuming that each entry in the system page table is 4 bytes, the PTE that contains the physical address of the page is located at SBR + VPN*4.
- If V = 1, we can extract the physical address of the page. If V = 0, a page fault occurs and we need to bring the page in from disk.
[Figure: the VPN field of the virtual address indexes the system page table at SBR + VPN*4, yielding the PFN that is concatenated with the offset]
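The lookup above is a single indexed read of the system page table. A sketch assuming 512-byte VAX pages, 4-byte PTEs, and physical memory modeled as a Python dict (a real PTE packs the valid bit and PFN into one word; here they are a tuple for clarity):

```python
PAGE_SHIFT = 9          # VAX page size: 2**9 = 512 bytes
PTE_SIZE = 4            # each page-table entry is 4 bytes

def sys_translate(sbr, memory, vaddr):
    """Translate a system-space virtual address via the table at SBR.

    memory maps a physical byte address to a (valid, pfn) pair that
    stands in for the packed PTE word.
    """
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    valid, pfn = memory[sbr + vpn * PTE_SIZE]   # the PTE lives at SBR + VPN*4
    if not valid:
        raise RuntimeError('page fault: bring the page in from disk')
    return (pfn << PAGE_SHIFT) | offset
```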
P0/P1 Address Translation
Address translation needs to be done in a few phases:
1. Find the address of the proper PTE in the user-level page table.
2. Since that PTE address is within the system's address space, check whether the page that contains the PTE is in main memory.
3. If that page exists, check whether the user page is in main memory.
4. If the page does not exist, cause a page fault to bring in the value of the PTE, and only then can we check whether the address exists in memory or not.
We may have up to 2 page faults in the translation process of a single access to a user-level address.
P0/P1 Space Address Translation (cont.)
[Figure: the user virtual address (VPN, offset) yields the virtual address of its PTE, P0BR + VPN*4, which decomposes as (VPN', offset'); the system page table at SBR is indexed at SBR + VPN'*4 to get PFN'; (PFN', offset') is the physical address of the PTE, which supplies the PFN that is concatenated with the user offset to form the final physical address]
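The phases listed above compose two table lookups: one through the system page table (to find the page holding the user PTE) and one through the user PTE itself. A sketch under the same 512-byte-page, 4-byte-PTE assumptions, again modeling PTEs as (valid, pfn) tuples in a dict keyed by physical address:

```python
PAGE_SHIFT = 9          # VAX pages: 2**9 = 512 bytes
PTE_SIZE = 4

def split(vaddr):
    return vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)

def p0_translate(vaddr, p0br, sbr, phys_ptes):
    """Two-phase P0 translation; each invalid PTE found along the way
    is one of the up-to-two possible page faults."""
    vpn, offset = split(vaddr)
    # Phase 1: the user PTE lives at system-space virtual address P0BR + VPN*4.
    pte_vaddr = p0br + vpn * PTE_SIZE
    sys_vpn, sys_off = split(pte_vaddr)
    valid, sys_pfn = phys_ptes[sbr + sys_vpn * PTE_SIZE]   # system page table
    if not valid:
        raise RuntimeError('page fault #1: the page holding the user PTE')
    # Phase 2: read the user PTE from its physical address.
    valid, pfn = phys_ptes[(sys_pfn << PAGE_SHIFT) | sys_off]
    if not valid:
        raise RuntimeError('page fault #2: the user page itself')
    return (pfn << PAGE_SHIFT) | offset
```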
Handling a Large Address Space
For a large virtual address space we may get a large page table. For example:
- For a 2 GB virtual address space and a page size of 4 KB, we get 2^31 / 2^12 = 2^19 entries.
- Assuming each entry is 4 bytes wide, the page table alone will occupy 2 MB of main memory.
Two proposed techniques to handle it:
- Hierarchical page tables
- "Segmentation"
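The arithmetic on the slide is easy to verify:

```python
virtual_space = 2**31    # 2 GB of virtual address space
page_size = 2**12        # 4 KB pages
pte_size = 4             # bytes per page-table entry

entries = virtual_space // page_size   # number of pages = number of PTEs
table_bytes = entries * pte_size       # flat page-table size, per process

assert entries == 2**19                # 524,288 entries
assert table_bytes == 2 * 2**20        # 2 MB just for the table
```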
Large Address Spaces
Solution: use two-level page tables.
- The master page table resides in physical memory.
- Secondary page tables reside in the virtual address space.
- At the lower levels of the tree we keep only the segments of the tables we actually use.
[Figure: virtual address format – P1 index | P2 index | page offset; the master table (1K PTEs) selects a 4 KB secondary table]
Virtual Memory – Memory Management
- We do not need to map the addresses between the TOS (top of stack) and the break point.
- The system needs to know how to map new regions when needed.
[Figure: virtual addresses pass through address translation to physical addresses or disk addresses]
The VAX Solution: Segmentation
- Map only the sections the program uses.
- The mapped regions need to be extended at run time:
  - Stack manipulation
  - The SBRK command
[Figure: the P0 (code/data) and P1 (stack) regions grow toward each other]
How a Process Is Forked
- When a process is forked, the new process and the old process start at the same point with the same state. The only difference between the two processes is that the fork command returns the PID of the child to the original process and 0 to the "new-born" process.
- When a process is forked, we try to avoid copying physical pages:
  - At the initial point, only the translation tables are copied.
  - All pages in memory that belong to the original process are marked "read-only".
  - Only when one of the processes tries to change a page do we copy it, so that each process gets a private writable copy (this mechanism is called COW – Copy On Write).
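Copy-on-write can be sketched with a tiny model; the `Page` class and reference counting are illustrative assumptions, not a real kernel's data structures:

```python
import copy

class Page:
    """A physical page, possibly shared between address spaces via COW."""
    def __init__(self, data):
        self.data = data
        self.refs = 1                  # how many page tables map this page

def fork_table(parent_table):
    """Fork: copy only the translation table; every mapped page becomes
    shared (and, conceptually, read-only) in both processes."""
    for page in parent_table.values():
        page.refs += 1
    return dict(parent_table)          # child gets its own table, same pages

def write(table, vpn, data):
    """A write to a shared page triggers the copy: the writer gets a
    private, writable duplicate; other mappers keep the original."""
    page = table[vpn]
    if page.refs > 1:                  # still shared: copy on write
        page.refs -= 1
        page = Page(copy.deepcopy(page.data))
        table[vpn] = page
    page.data = data
```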
TLB – Hardware Support for Address Translation
- The TLB is a cache that keeps the last translations the system made, using the virtual address as the index.
- If the translation is found in the TLB, no further translation is needed. If it misses, we need to continue the translation process as described before.
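A TLB behaves like a small map from VPN to PFN consulted before the full page-table walk. A minimal sketch; the fully-associative organization and FIFO eviction are assumptions for illustration (real TLBs are typically set-associative), and `walk_page_table` stands for whatever fallback translation the system uses:

```python
class TLB:
    """A tiny fully-associative TLB with FIFO eviction."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = {}              # VPN -> PFN, in insertion order
        self.hits = self.misses = 0

    def lookup(self, vpn, walk_page_table):
        if vpn in self.entries:        # hit: no further translation needed
            self.hits += 1
            return self.entries[vpn]
        self.misses += 1               # miss: do the full walk, then cache it
        pfn = walk_page_table(vpn)
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict the oldest
        self.entries[vpn] = pfn
        return pfn
```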
Making Address Translation Fast
The TLB (Translation Lookaside Buffer) is a cache for recent address translations.
[Figure: the virtual page number is looked up in the TLB (valid bit, tag, physical page); on a miss, the page table is consulted, whose entries hold a valid bit and either a physical page address or a disk address]
P0 Space Address Translation Using the TLB
[Flowchart: access the process TLB with the VPN. On a process-TLB hit, get the PTE of the requested page from the process TLB. On a miss, calculate the PTE's virtual address in S0 (P0BR + 4*VPN) and access the system TLB; on a system-TLB hit, get the PTE from the system TLB, otherwise access the system page table at SBR + 4*VPN(PTE). Then get the PTE of the requested page from the process page table, calculate the physical address (PFN + offset), and access memory]
Virtual Memory and Cache
[Flowchart: the virtual address goes to the TLB; on a TLB miss, access the page table. The resulting physical address goes to the cache; on a cache hit, return the data, otherwise access memory]
TLB access is serial with cache access.
Overlapped TLB & Cache Access
[Figure: the virtual-memory view of a physical address – physical page number | page offset – against the cache view – tag | #Set | Disp]
In this example the #Set is not contained within the page offset, so the #Set is not known until the physical page number is known: the cache can be accessed only after the address translation is done.
Overlapped TLB & Cache Access (cont.)
[Figure: the same two address views, but now the #Set field falls entirely within the page offset]
In this example the #Set is contained within the page offset, so the #Set is known immediately and the cache can be accessed in parallel with the address translation: first the tags from the appropriate set are read out, and the tag comparison takes place only after the physical page number is known (i.e., after the address translation is done).
Overlapped TLB & Cache Access (cont.)
First stage:
- The virtual page number goes to the TLB for translation.
- The page offset goes to the cache to get all the tags from the appropriate set.
Second stage:
- The physical page number, obtained from the TLB, goes to the cache for the tag comparison.
Limitation: cache size ≤ page size * associativity.
How can we overcome this limitation?
- Fetch 2 sets and mux after the translation.
- Increase the associativity.
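The limitation follows from requiring every set-index bit to come from the page offset, which is equivalent to cache size ≤ page size × associativity. A quick check (the cache parameters in the test are example values):

```python
def index_fits_in_offset(cache_bytes, line_bytes, assoc, page_bytes):
    """True iff the cache can be indexed in parallel with the TLB, i.e.
    the set index + displacement bits all lie within the page offset.
    Equivalent to: cache_bytes <= page_bytes * assoc.
    All sizes are assumed to be powers of two.
    """
    sets = cache_bytes // (line_bytes * assoc)
    index_plus_disp_bits = (sets * line_bytes).bit_length() - 1  # log2
    offset_bits = page_bytes.bit_length() - 1                    # log2
    return index_plus_disp_bits <= offset_bits
```

For instance, an 8 KB direct-mapped cache with 4 KB pages needs 13 index+displacement bits against a 12-bit offset, so overlap fails; making it 2-way brings it within the limit.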
Virtual Memory and Process Switch
Whenever we switch execution between processes we need to:
- Save the state of the old process.
- Load the state of the new process.
- Make sure that the P0BR and P1BR registers point to the new translation tables.
- Clean (flush) the TLB.
- We do not need to clean the cache (why?).