MAMAS – Computer Architecture
Address spaces and Memory management
Dr. Avi Mendelson
© Avi Mendelson, 5/
Some of the slides were taken from the presentations of (1) Jim Smith and (2) Patterson.
Memory System Problems
– Different programs have different memory requirements. How to manage program placement?
– Different machines have different amounts of memory. How to run the same program on many different machines?
– At any given time each machine runs a different set of programs. How to fit the program mix into memory? Reclaiming unused memory? Moving code around?
– The amount of memory consumed by each program is dynamic (changes over time). How to effect changes in memory allocation: add or subtract space?
– Program bugs can cause a program to generate reads and writes outside the program's address space. How to protect one program from another?
Address Spaces
Address Space of a single process (Linux)
Each process has a fixed-size address space that can be divided into:
– Symbol tables and other management areas
– Code
– Data: “static data” allocated by the compiler; “dynamic data” allocated by malloc
– Stack: managed automatically
[Figure: layout from address 0 upward – forbidden region, program text (.text), initialized data (.data), uninitialized data (.bss), runtime heap (via malloc, bounded by the “brk” pointer), user stack (%esp), and kernel virtual memory, invisible to user code.]
Virtual Memory
Main memory can act as a cache for the secondary storage (disk).
Divide memory (virtual and physical) into fixed-size blocks (pages, frames):
– Pages in virtual space, frames in physical space
– Page size = frame size
– Page size is a power of 2: page size = 2^k
All pages in the virtual address space are contiguous.
Pages can be mapped into physical frames in any order.
Some of the pages are in main memory (DRAM), some of the pages are on disk.
[Figure: virtual addresses pass through address translation to physical addresses or disk addresses.]
Managing Multiple Virtual Address spaces
Each process “sees” a private address space of 4G:
– 2G private address space the process can use
– 2G system address space, shared by all processes, which can be accessed only if the system is in “supervisor” mode (kernel mode)
All processes share the same (small) physical memory:
– A small part of the address space is kept in main memory (DRAM); most of the address space is either not mapped or is kept on disk.
All programs are written using the virtual memory address space.
The hardware does on-the-fly translation between virtual and physical address spaces.
[Figure: per-process 2GB virtual address spaces, each with its U area, alongside the shared system space.]
How a process is created (exec)
The executable file contains the code, the initial values for the data area, and initial tables (symbol tables etc.).
A page table is created to map the virtual addresses to the physical addresses.
At start time, no page is in main memory, so:
– Code pages point to the disk location of the code section within the file
– The initial data pages (.data) are mapped to the .data section within the executable file
– The system may allocate pages for initial stack space, data space (.bss) and for the heap area (this is a performance optimization; the system can avoid it)
As execution continues, the system allocates pages in main memory, copies their content from the disk, and changes the pointers to point to main memory.
When a page is replaced from main memory, the system keeps it in a special place on the disk, called the swap area.
Virtual Memory can help to share address spaces
– If several processes map different pages to the same physical page, the data/code of that page is shared (e.g., read-only data or library code). We will discuss how the OS takes advantage of this feature later on.
[Figure: the virtual address space of Process 1 (0..N-1) and of Process 2 (0..M-1) each map pages VP 1 and VP 2 through address translation into the physical address space (DRAM), e.g., to PP 2, PP 7, and PP 10; a frame mapped by both processes is shared.]
VM can help process protection
The page table entry contains access-rights information; the hardware enforces this protection (trap into the OS if a violation occurs).
[Figure: per-process page tables in memory.
Process i: VP 0 → PP 9 (Read: Yes, Write: No); VP 1 → PP 4 (Read: Yes); VP 2 → XXXXXXX (invalid, no access).
Process j: VP 0 → PP 6 (Read: Yes); VP 1 → PP 9 (Read: Yes, Write: No); VP 2 → XXXXXXX (invalid, no access).]
Virtual Memory Implementation
Page Tables
[Figure: the virtual page number indexes the page table; each entry holds a valid bit together with a physical page number or a disk address. Entries with valid = 1 point into physical memory; entries with valid = 0 point to pages on disk.]
Virtual to Physical Address translation
Page size: 2^12 bytes = 4K bytes
[Figure: the virtual address is split into a virtual page number and a page offset; the page table maps the virtual page number to a physical page number, which is concatenated with the unchanged page offset to form the physical address.]
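The translation above is simple arithmetic once the page table is known. A minimal sketch in Python (the dict-based page table and the `translate` helper are illustrative assumptions, not the hardware structure), with the 4KB pages used here:

```python
PAGE_SIZE = 1 << 12   # 2^12 = 4K-byte pages, so 12 offset bits

def translate(vaddr, page_table):
    """Split the virtual address, look up the frame, rebuild the physical address."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # virtual page number / page offset
    pfn = page_table[vpn]                    # physical page (frame) number
    return pfn * PAGE_SIZE + offset          # the offset passes through unchanged

# Virtual page 5 mapped to physical frame 2:
assert translate(5 * PAGE_SIZE + 16, {5: 2}) == 2 * PAGE_SIZE + 16
```

Note that only the upper bits change; the offset within the page is identical on both sides.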
The Page Table
[Figure: the 32-bit virtual address is split into a virtual page number (bits 31–12) and a page offset (bits 11–0); the physical address is a physical frame number (bits 29–12) concatenated with the same page offset. A page-table base register points to the table; each entry holds a valid bit (V), a dirty bit (D), access-control bits (AC), and the frame number.]
Address Mapping Algorithm
If V = 1 then
– the page is in main memory at the frame address stored in the table → access the data
else (page fault)
– need to fetch the page from disk
– causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from disk
Access Control (R = read-only, R/W = read/write, X = execute only):
– If the kind of access is not compatible with the specified access rights, a protection_violation_fault causes a trap to a hardware or software fault handler
A missing item is fetched from secondary memory only on the occurrence of a fault: demand-load policy
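The algorithm above can be sketched in Python; the PTE layout (valid flag, frame number, rights set) and the exception names are assumptions for illustration, standing in for the hardware traps:

```python
class PageFault(Exception): pass          # trap: OS must fetch the page from disk
class ProtectionFault(Exception): pass    # trap: access kind violates the rights

PAGE_SIZE = 4096

def map_address(vpn, offset, page_table, access):
    """page_table[vpn] = (valid, frame, rights), e.g. (1, 9, {'R', 'W'})."""
    valid, frame, rights = page_table[vpn]
    if not valid:
        raise PageFault(vpn)              # demand load: fetch only on a fault
    if access not in rights:
        raise ProtectionFault(vpn)
    return frame * PAGE_SIZE + offset
```

For example, a write ('W') to a page whose rights are {'R'} raises ProtectionFault, while any access to an invalid entry raises PageFault.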
Managing virtual addresses
Virtual memory management is mainly done by the OS:
– Deciding which pages should be in memory and which on disk
– Deciding when to write pages to disk
– Deciding what access rights are assigned to pages
– Et cetera, et cetera
– The exact algorithms are out of the scope of this course
Here, we study two hardware mechanisms:
– A pseudo-LRU mechanism, to decide which pages to keep
– The TLB, to perform fast lookup
Page Replacement Algorithm (pseudo LRU)
Not Recently Used (NRU):
– Associated with each page is a reference flag, such that ref flag = 1 if the page has been referenced in the recent past
– If replacement is needed, choose any page frame whose reference bit is 0; this is a page that has not been referenced in the recent past
Clock implementation of NRU:
– Keep a last-replaced pointer (lrp)
– If replacement is to take place, advance the lrp to the next entry (mod table size) until an entry with a 0 reference bit is found; this is the target for replacement
– As a side effect, all examined PTEs have their reference bits set to zero
[Figure: page-table entries with their ref bits, scanned in a circle by the clock hand.]
Possible optimization: search for a page that is both not recently referenced AND not dirty
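The clock scan can be sketched as a toy model (the function name and list-of-bits representation are assumptions; the real mechanism operates on page-table entries):

```python
def clock_replace(ref_bits, lrp):
    """Advance lrp (mod table size) until an entry with ref bit 0 is found.
    Every examined entry has its reference bit cleared as a side effect."""
    n = len(ref_bits)
    while ref_bits[lrp] == 1:
        ref_bits[lrp] = 0             # give the page a "second chance"
        lrp = (lrp + 1) % n
    return lrp                        # the victim frame

refs = [1, 1, 0, 1]
assert clock_replace(refs, 0) == 2    # frames 0 and 1 had their bits cleared
assert refs == [0, 0, 0, 1]
```

Even if every bit is 1, the first sweep clears them all, so the scan is guaranteed to terminate within two passes.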
Page Faults
Page fault: the data is not in memory → retrieve it from disk
– The CPU must detect the situation
– The CPU cannot remedy the situation (it has no knowledge of the disk)
– The CPU must trap to the operating system so that it can remedy the situation:
– Pick a page to discard (possibly writing it to disk)
– Load the page in from disk
– Update the page table
– Resume the program so the HW will retry and succeed!
A page fault incurs a huge miss penalty, therefore:
– Pages should be fairly large (e.g., 4KB)
– We can handle the faults in software instead of hardware
– A page fault causes a context switch
– Using write-through is too expensive, so we use write-back
Virtual Memory in Unix (VAX)
The VAX was an early and influential UNIX platform, so many of the UNIX mechanisms are based on the VAX implementation.
Unix distinguishes between 3 different address spaces, each of them controlled by a separate page table:
– The system address space – one per system
– The user code + data address space – one per process
– The user stack address space – one per process
Only the system page table (one table) resides in main memory; all the other tables and address spaces are pageable.
[Figure: user space (code, stack) and system space (kernel, system page table, per-process user page tables); the system page table is resident, the user page tables live in the swappable area.]
VM in VAX: Address Format
Page size: 2^9 = 512 bytes
[Figure: a 32-bit virtual address holds a virtual page number (bits 31–9) and a page offset (bits 8–0); the top two bits select the region: P0 process space (code and data), P1 process space (stack), S0 system space, S1 not used. The physical address holds a physical frame number (bits 29–9) and the same page offset.]
Page Table Entry (PTE)
[Figure: PTE fields – V (valid bit), PROT (4 protection bits), M (modified bit), Z (indicates whether the page was cleaned, i.e., zeroed), OWN (3 ownership bits), and the physical frame number.]
This is the structure of all the entries in each of the page tables.
Valid bit = 1 if the page is mapped to main memory; otherwise the page is in the swap area and the address indicates where to find the page on the disk.
Address Translation
The system address space is divided into a “resident” part that always exists in main memory; the rest of the system's 2G address space is swappable. User space is always swappable.
The system's page table is part of the resident area, and the SBR register points to its physical base address.
For each process, there are two dedicated registers:
– P0BR: points to the virtual address (in the system's address space) of the page table of the user's code segment
– P1BR: points to the virtual address (in the system's address space) of the page table of the user's stack segment
System Space - Address Translation
From a virtual address in the system's address space we extract the virtual page number (VPN).
Assuming that each entry in the system page table is 4 bytes, the PTE that contains the physical address of the page is at SBR + VPN*4.
If V = 1, we can extract the physical address of the page.
If V = 0, a page fault occurs and we need to bring the page from the disk.
[Figure: the VPN indexes the system page table at SBR + VPN*4; the PFN from the PTE is concatenated with the page offset to form the physical address.]
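The S0 lookup is a single indexed access at SBR + VPN*4. A sketch under that model (the dict standing in for physical memory and the function name are assumptions):

```python
PAGE = 1 << 9        # VAX page size: 2^9 = 512 bytes
PTE_SIZE = 4

def s0_translate(vaddr, sbr, memory):
    """memory maps a physical PTE address to a (valid, pfn) pair (toy model)."""
    vpn = (vaddr & 0x3FFFFFFF) >> 9      # drop the 2 space-select bits, keep VPN
    offset = vaddr & (PAGE - 1)
    valid, pfn = memory[sbr + vpn * PTE_SIZE]
    if not valid:
        raise RuntimeError("page fault: bring the page in from disk")
    return pfn * PAGE + offset

# S0 address (top bits 10) with VPN 3, offset 5, mapped to frame 7:
memory = {0x1000 + 3 * PTE_SIZE: (1, 7)}
assert s0_translate((0b10 << 30) | (3 << 9) | 5, 0x1000, memory) == 7 * PAGE + 5
```

Because SBR holds a physical address, this lookup itself never page-faults; that is exactly what makes it the base case for the P0/P1 translation on the next slides.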
P0/P1 Address Translation
Address translation is done in a few phases:
1. Find the address of the proper PTE in the user-level page table.
2. Since the PTE address is within the system's address space, check whether the page that contains the PTE is in main memory.
3. If that page exists, check whether the user page is in main memory.
4. If the page does not exist, a page fault is taken to bring in the value of the PTE, and only then can we check whether the address exists in memory or not.
We may have up to 2 page faults in the translation process of a single access to a user-level address.
P0/P1 Space Address Translation (cont)
[Figure: the user virtual address (VPN, offset) yields a PTE virtual address P0BR + VPN*4 in system space, itself split into (VPN', offset'); that address is translated through the system page table at SBR + VPN'*4 into the physical address of the PTE; the PFN read from that PTE is concatenated with the original offset to form the final physical address.]
Handling large address space
For a large virtual address space, we may get a large page table. For example:
– For a 2 GB virtual address space and a page size of 4KB, we get 2^31/2^12 = 2^19 entries
– Assuming each entry is 4 bytes wide, the page table alone will occupy 2MB in main memory
Two proposed techniques to handle it:
– Hierarchical page tables
– “Segmentation”
Large Address Spaces
Solution: use two-level Page Tables
– The master page table resides in physical memory
– Secondary page tables reside in the virtual address space
– At the lower levels of the tree we keep only the segments of the tables we use
Virtual address format: P1 index | P2 index | page offset
[Figure: each secondary table is a 4KB page holding 1K PTEs.]
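A sketch of the two-level walk with a 10/10/12 bit split (list-based tables with `None` for unmapped entries are an illustrative assumption):

```python
P1_BITS, P2_BITS, OFFSET_BITS = 10, 10, 12   # 4KB pages, 1K PTEs per table

def translate_2level(vaddr, master):
    """master[p1] is a secondary table (or None); secondary[p2] is a frame number."""
    p1 = vaddr >> (P2_BITS + OFFSET_BITS)
    p2 = (vaddr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    secondary = master[p1]
    if secondary is None or secondary[p2] is None:
        raise RuntimeError("page fault")     # region not mapped or not resident
    return secondary[p2] * (1 << OFFSET_BITS) + offset

# Only one secondary table exists; the other 1023 master slots cost nothing.
master = [None] * 1024
master[0] = [None] * 1024
master[0][1] = 9
assert translate_2level((0 << 22) | (1 << 12) | 7, master) == 9 * 4096 + 7
```

The memory saving is visible in the sketch: an unused 4MB region costs one None slot in the master table rather than 1K resident PTEs.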
Virtual Memory – Memory management
We do not need to map the addresses between the TOS (Top Of Stack) and the break point.
The system needs to know how to map new regions when needed.
[Figure: virtual addresses pass through address translation to physical addresses or disk addresses; the hole between the stack and the heap stays unmapped.]
The VAX Solution: Segmentation
– Map only the sections the program uses
– Need to extend at run time: stack manipulation, the sbrk call
[Figure: the P0 (code/data) and P1 (stack) regions, each mapped and extended separately.]
How a process is forked
When a process is forked, the new process and the old process start at the same point with the same state.
The only difference between the two processes is that the fork call returns the child's PID to the original process and 0 to the “new born” process.
When a process is forked, we try to avoid copying physical pages:
– At the initial point, only the translation tables are copied
– All pages in memory that belong to the original process are marked as “read-only”
– Only when one of the processes tries to change a page do we copy it, giving each side a private writable copy (this mechanism is called COW – Copy On Write)
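The COW bookkeeping can be sketched as a toy model (the `Page` class, the reference counts, and the helper names are all illustrative assumptions; real kernels do this per page-table entry, driven by hardware write-protection faults):

```python
class Page:
    def __init__(self, data):
        self.data, self.writable, self.refs = data, False, 1

def fork_pages(parent_table):
    """Share every page read-only instead of copying its contents."""
    for page in parent_table:
        page.writable = False          # both sides will now fault on a write
        page.refs += 1
    return list(parent_table)          # child's table points at the same pages

def write(table, i, value):
    """The 'write-protection fault' handler: copy only if still shared."""
    page = table[i]
    if not page.writable:
        if page.refs > 1:              # still shared: make a private copy
            page.refs -= 1
            page = Page(page.data)
            table[i] = page
        page.writable = True
    page.data = value

parent = [Page("old")]
child = fork_pages(parent)
write(child, 0, "new")
assert child[0].data == "new" and parent[0].data == "old"   # copied on write
```

The copy happens only for the page actually written; pages that are never modified stay shared for the lifetime of both processes.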
TLB – hardware support for address translation
The TLB is a cache that keeps the last translations the system made, using the virtual address as an index.
If the translation is found in the TLB, no further translation is needed. If it misses, we need to continue the process as described before.
Making Address Translation Fast
The TLB (Translation Lookaside Buffer) is a cache for recent address translations.
[Figure: the virtual page number is first looked up in the TLB (valid bit, tag, physical page); on a miss, the page table is consulted, whose entries hold a valid bit and either a physical page number or a disk address.]
P0 space Address translation Using TLB
[Flowchart:
1. Access the process TLB with the VPN. On a hit, get the PTE of the requested page from the process TLB and go to step 5.
2. On a miss, calculate the PTE virtual address (in S0): P0BR + 4*VPN, and access the system TLB.
3. On a system-TLB hit, get the PTE from the system TLB; otherwise access the system page table in memory at SBR + 4*VPN(PTE).
4. Using that PTE, get the PTE of the requested page from the process page table.
5. Calculate the physical address from the PFN and access memory.]
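The hit path and the miss-then-refill path of the flow above can be sketched as follows (the fully associative, dict-based TLB is a toy assumption; steps 2–4 are collapsed into a single table lookup):

```python
PAGE_SIZE = 4096

def translate_with_tlb(vaddr, tlb, page_table):
    """tlb and page_table both map VPN -> PFN; the TLB is checked first."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                 # TLB hit: no page-table walk needed
        pfn = tlb[vpn]
    else:                          # TLB miss: walk the table, refill the TLB
        pfn = page_table[vpn]
        tlb[vpn] = pfn
    return pfn * PAGE_SIZE + offset

tlb = {}
assert translate_with_tlb(1 * PAGE_SIZE + 8, tlb, {1: 4}) == 4 * PAGE_SIZE + 8
assert 1 in tlb                    # the miss filled the TLB for next time
```

A second access to the same page now hits in the TLB and never touches the page table.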
Virtual Memory And Cache
[Flowchart: the virtual address first accesses the TLB; on a TLB miss the page table is accessed; the resulting physical address accesses the cache; on a cache miss, memory is accessed to deliver the data.]
TLB access is serial with cache access.
Overlapped TLB & Cache Access
[Figure: virtual-memory view of a physical address – physical page number | page offset; cache view of the same address – tag | #set | disp.]
In this example the #set field is not contained within the page offset:
– The #set is not known until the physical page number is known
– The cache can be accessed only after the address translation is done
Overlapped TLB & Cache Access (cont)
[Figure: the same address split, but now the #set field falls entirely within the page offset.]
In this example the #set is contained within the page offset:
– The #set is known immediately, so the cache can be accessed in parallel with the address translation
– First the tags from the appropriate set are read
– The tag comparison takes place only after the physical page number is known (after the address translation is done)
Overlapped TLB & Cache Access (cont)
First stage:
– The virtual page number goes to the TLB for translation
– The page offset goes to the cache to read all tags from the appropriate set
Second stage:
– The physical page number, obtained from the TLB, goes to the cache for tag comparison
Limitation: cache size ≤ page size * associativity
How can we overcome this limitation?
– Fetch 2 sets and mux after translation
– Increase associativity
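The limitation is pure arithmetic: the set-index bits must fit inside the page offset. A quick check in Python (the function name and the 4KB page default are assumptions):

```python
from math import log2

def can_overlap(cache_bytes, block_bytes, ways, page_bytes=4096):
    """Overlap works iff block bits + set bits <= offset bits,
    which is equivalent to: cache size <= page size * associativity."""
    sets = cache_bytes // (block_bytes * ways)
    return int(log2(block_bytes)) + int(log2(sets)) <= int(log2(page_bytes))

assert can_overlap(4096, 32, 1)        # 4KB direct-mapped: the index just fits
assert not can_overlap(8192, 32, 1)    # 8KB direct-mapped: one index bit too many
assert can_overlap(8192, 32, 2)        # doubling associativity restores overlap
```

The last two lines show the slide's own remedy: the 8KB cache fails direct-mapped but works 2-way, because doubling the ways halves the number of sets and frees one index bit.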
Virtual memory and process switch
Whenever we switch execution between processes we need to:
– Save the state of the old process
– Load the state of the new process
– Make sure that the P0BR and P1BR registers point to the new translation tables
– Clean the TLB, since its entries are indexed by virtual addresses that now translate differently
– We do not need to clean the cache (Why????)