Memory management Lecture 7 ~ Winter, 2007 ~

Contents Context and definition Basic memory management Swapping Virtual memory Paging Page replacement algorithms Segmentation

The context: The need for a memory that is very large, very fast, and nonvolatile. Types of memory – hierarchy: small, very fast, expensive, volatile cache memory; hundreds of MBs of medium-speed, medium-priced, volatile main memory (RAM); tens or hundreds of GBs of slow, cheap, nonvolatile disk storage.

Definition: The memory manager is the OS component that manages the main memory of a system (memory management). Its role is to coordinate how the different types of memory are used: keep track of which parts of memory are in use and which are not; allocate and release areas of main memory to processes; manage swapping between main memory and disk when main memory is too small to hold all the processes.

Basic memory management Mono-programming (1): No swapping or paging; only one program is run at a time. Memory holds only two things: the program currently running and the OS. Ways of organizing memory: OS at the bottom of memory and the user program above it; OS at the top of memory (in ROM) and the user program below it; OS at the bottom of memory, device drivers in ROM (mapped at the top of memory – BIOS), and the user program in between.

Basic memory management Mono-programming (2)

Basic memory management Multi-programming: Several programs are loaded into memory at the same time; this increases CPU utilization. Multi-programming with fixed memory partitions: divide memory up into n partitions of fixed, not necessarily equal, sizes → lost space. Waiting queues for partitions: separate waiting queues for different partitions, or one global waiting queue. Different strategies to choose a process that fits a free partition: the first that fits → waste of space; the largest that fits → discriminates against small processes; a process may not be skipped over more than k times.

Basic memory management Multiprogr. with fixed partitions

Basic memory management Relocation and protection (1): Context of multiprogramming: several processes are in memory at the same time, and different processes will be loaded and run at different addresses. Need for relocation: the address locations of variables and code routines cannot be absolute, only relative; the relative addresses used in a process must be translated into real addresses. Need for protection: protect the OS code against the processes, and protect each process against the other processes.

Basic memory management Relocation and protection (2): Linker–loader method: relocate the addresses as the program is loaded into memory; the linker has to generate a list of the addresses that have to be relocated. Base and limit registers: add the base register value to every address and compare every address with the value of the limit register.
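A minimal sketch of the base-and-limit idea (not from the slides; the structure and register names are assumptions): every address issued by the process is checked against the limit and then offset by the base.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical MMU state for one process: base and limit registers. */
struct mmu_regs {
    uint32_t base;   /* physical address where the process is loaded  */
    uint32_t limit;  /* size of the process's address space, in bytes */
};

/* Translate a process-relative address; returns -1 on a protection fault. */
static int64_t translate(const struct mmu_regs *mmu, uint32_t rel_addr)
{
    if (rel_addr >= mmu->limit)
        return -1;                         /* address outside the process -> fault */
    return (int64_t)mmu->base + rel_addr;  /* relocation: add the base register    */
}

int main(void)
{
    struct mmu_regs p = { .base = 0x40000, .limit = 0x10000 };
    printf("%lld\n", (long long)translate(&p, 0x1234));   /* valid reference */
    printf("%lld\n", (long long)translate(&p, 0x20000));  /* protection fault */
    return 0;
}
```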

Swapping Description: The reason: there is no more space in memory to keep all the active processes. The technique: some processes are kept on the disk and brought in to run dynamically; swap out (memory → disk), swap in (disk → memory); at any moment a process is either entirely in memory, so it can run, or entirely on the disk.

Swapping An example

Swapping Advantages and disadvantages: Similar to the technique of fixed-size partitions, but with a variable number of partitions and variable partition sizes. Improves memory utilization. Allocating and deallocating memory is more complicated. Memory compaction eliminates holes. More space than needed is pre-allocated, for segments that may grow. Keeping track of free memory is more complicated.

Swapping Preallocation of space

Swapping Memory management with bitmaps (1): Memory is divided up into allocation units of the same size; each allocation unit has a corresponding bit in the bitmap: 0 → the unit is free, 1 → the unit is allocated. The size of the allocation unit is important: the smaller the unit, the larger the bitmap; the larger the unit, the smaller the bitmap, but the more memory wasted (internal fragmentation). Simple to use and implement, but the search for k consecutive free units is slow.
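A minimal sketch of the slow part, searching the bitmap for k consecutive free units (the array layout and helper names are illustrative, not from the lecture):

```c
#include <stddef.h>
#include <stdint.h>

/* 1 bit per allocation unit: 0 = free, 1 = allocated. */
static int bit_is_set(const uint8_t *bitmap, size_t i)
{
    return (bitmap[i / 8] >> (i % 8)) & 1;
}

/* Return the index of the first run of k consecutive free units, or -1.
 * This linear scan is exactly why bitmap allocation is considered slow. */
long find_free_run(const uint8_t *bitmap, size_t total_units, size_t k)
{
    size_t run = 0;
    for (size_t i = 0; i < total_units; i++) {
        if (!bit_is_set(bitmap, i)) {
            if (++run == k)
                return (long)(i - k + 1);   /* start of the free run */
        } else {
            run = 0;
        }
    }
    return -1;  /* no hole of k units exists */
}
```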

Swapping Memory management with bitmaps (2)

Swapping Memory management with linked lists (1): A single list of allocated and free segments (process or hole) of memory. The list is kept sorted by memory address, so updating it is simple and fast.

Swapping Memory management with linked lists (2): Allocation of memory: first fit – fast; next fit – slightly worse performance than first fit; best fit – slower and results in more wasted memory; worst fit. Separate lists for processes and holes: speeds up the search for a hole at allocation, but complicates the release of memory; the hole list can be sorted by size. Quick fit.
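A minimal first-fit sketch over a singly linked hole list (the node layout and the split-from-the-front policy are assumptions for illustration, not the lecture's data structures):

```c
#include <stddef.h>

/* One free segment (hole) in a list kept sorted by address. */
struct hole {
    size_t start;        /* start address of the hole   */
    size_t len;          /* length of the hole in bytes */
    struct hole *next;
};

/* First fit: walk the list and take the first hole that is big enough.
 * Any remainder of the hole stays on the list. Returns the allocated
 * start address, or (size_t)-1 if no hole fits. */
size_t first_fit_alloc(struct hole **list, size_t request)
{
    for (struct hole **pp = list; *pp != NULL; pp = &(*pp)->next) {
        struct hole *h = *pp;
        if (h->len < request)
            continue;
        size_t addr = h->start;
        if (h->len == request) {
            *pp = h->next;          /* exact fit: unlink the hole (freeing the node is omitted) */
        } else {
            h->start += request;    /* shrink the hole from the front */
            h->len   -= request;
        }
        return addr;
    }
    return (size_t)-1;
}
```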

Virtual Memory Definition and terms (1): The context: the programs (code, data, stack) exceed the amount of physical memory available to them. The technique: programs are not entirely loaded in memory; the OS keeps in main memory only those parts of a program that are currently in use, and the rest on disk; swapping is used between main memory and disk. The result: the illusion that the computer has more memory than it actually has; each process has the illusion that it is the only process loaded in memory and can access the entire memory.

Virtual Memory Definition and terms (2): Virtual addresses: the memory addresses used by programs. Virtual address space: all the (virtual) addresses a program can generate; its size is given by the number of bits used to specify an address. Physical (real) addresses: addresses in main memory (on the memory bus), limited by the physical memory available. Memory Management Unit (MMU): the unit that maps virtual addresses onto physical addresses.

Virtual Memory Memory Management Unit (MMU)

Paging Definition: The virtual address space is divided up into units of the same size called pages, numbered from 0 to AddressSpaceSize / PageSize - 1. The physical memory is divided up into units of the same size called page frames, numbered from 0 to PhysicalMemSize / PageSize - 1. Pages and page frames are the same size; PageSize is typically a value between 512 bytes and 64KB. Transfers between RAM and disk are done in units of a page.

Paging Mapping virtual onto physical (4KB pages): move REG, 0 becomes move REG, 8192: virtual address 0 = (virtual page 0, offset 0); virtual page 0 → page frame 2; physical address 8192 = (page frame 2, offset 0). move REG, 20500 becomes move REG, 12308: virtual address 20500 = (virtual page 5, offset 20); virtual page 5 → page frame 3; physical address 12308 = (page frame 3, offset 20).
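A minimal sketch of that translation in code, assuming 4KB pages and a simple flat page-table array (the array and names are illustrative; entries 0 and 5 reproduce the slide's example):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 16

/* Toy page table: page_table[virtual page] = page frame, -1 if not present. */
static int page_table[NUM_PAGES];

static long translate(uint32_t vaddr)
{
    uint32_t vpage  = vaddr / PAGE_SIZE;   /* upper bits: virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* lower bits: offset within page  */
    if (vpage >= NUM_PAGES || page_table[vpage] < 0)
        return -1;                         /* would trigger a page fault */
    return (long)page_table[vpage] * PAGE_SIZE + offset;
}

int main(void)
{
    for (int i = 0; i < NUM_PAGES; i++)
        page_table[i] = -1;                /* everything unmapped by default          */
    page_table[0] = 2;                     /* virtual page 0 -> frame 2 (slide example) */
    page_table[5] = 3;                     /* virtual page 5 -> frame 3 (slide example) */

    printf("%ld\n", translate(0));         /* prints 8192  */
    printf("%ld\n", translate(20500));     /* prints 12308 */
    return 0;
}
```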

Paging Providing virtual memory: Only some virtual pages are mapped onto physical memory; the other virtual pages are kept on disk. The page table is the mapping unit; each entry carries a Present/Absent bit. Page fault: a trap into the OS caused by a reference to an address located in a page that is not in memory; it is generated by the MMU and results in a transfer between disk and physical memory: the referenced page is loaded from disk into memory and the trapped instruction is re-executed.

Paging Page tables (1)

Paging Page tables (2): Role: each process has its own page table to map its virtual pages onto physical page frames. Page tables can be extremely large: 32-bit addresses with 4KB pages => 1,048,576 (2^20) entries. Mapping must be fast: it is done on every memory reference.

Paging Page tables (3): An array of fast registers, with one entry per virtual page: the page table is copied from memory into the registers, so no further memory references are needed for it, but a context switch is expensive. A single register with the page table kept in memory: the register points to the start of the page table; a context switch is fast, but reading page table entries requires extra memory references.

Paging Multilevel Page Tables: 32-bit virtual addresses split as 10 bits – PT1, 10 bits – PT2, 12 bits – offset. Page size = 4KB, number of pages = 2^20. Top-level page table – 1024 entries; each entry covers 4MB of virtual address space. Each second-level page table – 1024 entries.
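A minimal sketch of how a 32-bit virtual address would be split and walked under this 10/10/12 layout (the two-level structures are illustrative, not a real MMU's tables):

```c
#include <stdint.h>

/* 10-bit top-level index, 10-bit second-level index, 12-bit offset. */
#define PT1(va)    (((va) >> 22) & 0x3FF)
#define PT2(va)    (((va) >> 12) & 0x3FF)
#define OFFSET(va) ((va) & 0xFFF)

/* Illustrative second-level table: 1024 frame numbers, -1 = page absent. */
typedef struct { int32_t frame[1024]; } pt2_t;

/* Walk a two-level table: top[i] points to a second-level table for one
 * 4MB region, or is NULL if that whole region is unmapped. */
long walk(pt2_t *top[1024], uint32_t va)
{
    pt2_t *second = top[PT1(va)];
    if (second == NULL)
        return -1;                       /* whole 4MB region absent -> fault */
    int32_t frame = second->frame[PT2(va)];
    if (frame < 0)
        return -1;                       /* single page absent -> fault */
    return ((long)frame << 12) | OFFSET(va);
}
```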

Paging Structure of a page table entry: The size is computer dependent; 32 bits is commonly used. Typical fields: page frame number; Present/Absent bit; protection bits (e.g., read, write, read-only); Modified bit (dirty bit); Referenced bit.
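One way to picture such an entry is a packed 32-bit word; the field widths below are an assumption for illustration only (real hardware defines its own exact layout):

```c
#include <stdint.h>

/* Illustrative 32-bit page table entry.  The field widths are arbitrary
 * but add up to 32 bits; real MMUs use their own layouts. */
struct pte {
    uint32_t frame      : 20;  /* page frame number                      */
    uint32_t present    : 1;   /* 1 = page is in memory                  */
    uint32_t referenced : 1;   /* set by hardware on any access (R bit)  */
    uint32_t modified   : 1;   /* set by hardware on a write (dirty / M) */
    uint32_t protection : 3;   /* e.g. read / write / execute            */
    uint32_t unused     : 6;
};
```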

Paging Translation Lookaside Buffers (TLB) (1): Observations: keeping the page tables in memory drastically reduces performance; programs make a large number of references to a small number of pages. TLB (associative memory): a small, fast hardware device for mapping virtual addresses to physical addresses; a sort of table with a small number of entries (usually fewer than 64), so it maps only a small number of virtual pages; a TLB entry contains the information about one page; a virtual page is looked up simultaneously in all the entries of the TLB; it can also be implemented in software.

Paging Translation Lookaside Buffers (TLB) (2): example TLB contents (columns: Valid, Virtual page, Modified, Protection, Page frame) – valid entries mapping virtual page 140 → frame 31 (RW), 20 → 38 (R X), 130 → 29, 129 → 62, 19 → 50, 21 → 45, 860 → 14, 861 → 75.
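A minimal software model of that lookup, checking every entry for the virtual page and signalling a miss otherwise (the structure, size and names are assumptions; real hardware checks all entries in parallel):

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 8

struct tlb_entry {
    bool     valid;
    uint32_t vpage;       /* virtual page number */
    uint32_t frame;       /* page frame number   */
    bool     modified;
    uint8_t  protection;
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns the frame on a hit, -1 on a miss (a page table walk is then
 * needed and one TLB entry is refilled with the result). */
long tlb_lookup(uint32_t vpage)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpage == vpage)
            return tlb[i].frame;
    return -1;  /* TLB miss */
}
```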

Paging Inverted page tables (1): A solution for handling large address spaces: a 64-bit computer with 4KB pages would need 2^52 page table entries; at 8 bytes per entry, that is a page table of over 30 million GB. Instead, there is one table per system, with one entry for each page frame; an entry contains the pair (process, virtual page) mapped onto the corresponding page frame. The virtual-to-physical translation becomes much harder and slower: in principle the entire table must be searched at every memory reference. In practice: use the TLB and hash tables.
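A minimal sketch of the hashed lookup: hash the pair (process, virtual page), then follow the chain of frames sharing that hash (the structures, sizes and hash function are assumptions for illustration):

```c
#include <stdint.h>

#define NUM_FRAMES   4096
#define HASH_BUCKETS 4096

/* One entry per physical page frame, indexed by frame number. */
struct ipt_entry {
    uint32_t pid;     /* owning process                              */
    uint32_t vpage;   /* virtual page held in this frame             */
    int32_t  next;    /* next frame in the same hash chain, -1 = end */
};

static struct ipt_entry ipt[NUM_FRAMES];
static int32_t bucket[HASH_BUCKETS];   /* head frame of each chain, -1 = empty */

void ipt_init(void)
{
    for (int b = 0; b < HASH_BUCKETS; b++)
        bucket[b] = -1;                /* all chains start out empty */
}

static uint32_t hash(uint32_t pid, uint32_t vpage)
{
    return (pid * 31u + vpage) % HASH_BUCKETS;
}

/* Find the frame holding (pid, vpage), or -1 if the page is not resident. */
long ipt_lookup(uint32_t pid, uint32_t vpage)
{
    for (int32_t f = bucket[hash(pid, vpage)]; f != -1; f = ipt[f].next)
        if (ipt[f].pid == pid && ipt[f].vpage == vpage)
            return f;
    return -1;   /* page fault: the pair is not in any frame */
}
```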

Paging Inverted page tables (2)

Page replacement algorithms The context: At a page fault with physical memory full, space has to be made: a currently loaded virtual page has to be evicted from memory. Choosing the page to be evicted: not a heavily used page → reduce the number of page faults. Page replacement: the old page has to be written to the disk if it was modified; the new virtual page overwrites the old one in the page frame.

Page replacement algorithms The optimal algorithm: Evict the page that will be referenced furthest in the future among all the pages currently in memory. Very simple and efficient (optimal), but impossible to implement in practice: there is no way to know when each page will be referenced next. It can be simulated: on a first run, collect information about page references; on a second run (with the same input), use the results of the first run. It is used to evaluate the performance of other, practically usable, algorithms.

Page replacement algorithms Not Recently Used (NRU) (1): Each page has two associated status bits: the Referenced bit (R) and the Modified bit (M). The two bits are updated by the hardware at each memory reference; once set to 1, they remain so until they are reset by the OS; they can also be simulated in software when the mechanism is not supported by hardware.

Page replacement algorithms Not Recently Used (NRU) (2): At process start both bits are 0 for all pages. Periodically (on each clock interrupt) the R bit is cleared. For page replacement, pages are classified: Class 0: not referenced, not modified; Class 1: not referenced, modified; Class 2: referenced, not modified; Class 3: referenced, modified. The algorithm removes a page at random from the lowest-numbered nonempty class. It is easy to understand, moderately efficient to implement, and gives adequate performance.
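A minimal sketch of the class computation and victim choice (the page array is illustrative; true NRU picks at random within the class, while this sketch takes the first page found to stay short):

```c
struct page {
    int referenced;  /* R bit, cleared on every clock interrupt            */
    int modified;    /* M bit, cleared only when the page is written back  */
};

/* NRU class: 2*R + M, so class 0 = (0,0) ... class 3 = (1,1). */
static int nru_class(const struct page *p)
{
    return 2 * p->referenced + p->modified;
}

/* Pick a page from the lowest-numbered nonempty class. */
int nru_victim(const struct page *pages, int n)
{
    for (int cls = 0; cls <= 3; cls++)
        for (int i = 0; i < n; i++)
            if (nru_class(&pages[i]) == cls)
                return i;
    return -1;  /* no pages at all */
}
```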

Page replacement algorithms First-In, First-Out (FIFO): worked example tracing a reference string (pages P1–P4) through Frame 1, Frame 2 and Frame 3, with each page fault marked *.

Page replacement algorithms The second chance (1): A modification of FIFO that avoids throwing out a heavily used page. Inspect the R bit of the oldest page: 0 → the page is old and unused → replace it; 1 → the page is old but used → clear its R bit and move the page to the end of the queue as if it had just arrived. In effect, look for an old page that has not been referenced in the previous clock interval; if all the pages have been referenced, second chance degenerates into pure FIFO.
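A minimal sketch of second chance over an array used as a FIFO queue (the queue representation is an assumption; a real implementation would use a linked list or the clock variant below):

```c
struct sc_page {
    int number;      /* which virtual page sits here */
    int referenced;  /* R bit                        */
};

/* queue[0] is the oldest page.  Returns the number of the page to evict;
 * pages given a second chance have R cleared and are rotated to the back,
 * so the loop always terminates.  Replacing the victim is left to the caller. */
int second_chance_victim(struct sc_page *queue, int n)
{
    for (;;) {
        if (queue[0].referenced == 0)
            return queue[0].number;        /* old and unused: evict it */

        /* Old but used: clear R and move it to the tail of the queue. */
        struct sc_page head = queue[0];
        head.referenced = 0;
        for (int i = 1; i < n; i++)
            queue[i - 1] = queue[i];
        queue[n - 1] = head;
    }
}
```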

Page replacement algorithms The second chance (2)

Page replacement algorithms The Clock algorithm

Page replacement algorithms Least Recently Used (LRU) (1): Based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. Throw out the page that has been unused for the longest time. The algorithm keeps a linked list: the referenced page is moved to the front of the list; the page at the end of the list is replaced; the list must be updated at each memory reference → costly.
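A minimal sketch of that bookkeeping, using a small array ordered from most to least recently used (per-reference work like this is exactly what makes exact LRU too costly for a real kernel):

```c
#include <string.h>

#define FRAMES 3

/* recency[0] = most recently used page, recency[FRAMES-1] = LRU page; -1 = empty. */
static int recency[FRAMES] = { -1, -1, -1 };

/* Record a reference: move the page to the front of the ordering.
 * Returns the evicted page number, or -1 if no eviction was needed. */
int lru_reference(int page)
{
    int evicted = -1;
    int pos = FRAMES - 1;                 /* default: reuse the LRU slot */
    for (int i = 0; i < FRAMES; i++)
        if (recency[i] == page) { pos = i; break; }
    if (recency[pos] != page)
        evicted = recency[pos];           /* page was not resident */
    memmove(&recency[1], &recency[0], pos * sizeof recency[0]);
    recency[0] = page;                    /* referenced page goes to the front */
    return evicted;
}
```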

Page replacement algorithms Least Recently Used (LRU) (2): worked example tracing a reference string (pages P1–P4) through Frame 1, Frame 2 and Frame 3, with each page fault marked *.

Page replacement algorithms Least Recently Used (LRU) (3): Implementing LRU with a hardware counter: keep a 64-bit counter that is incremented after each instruction; each page table entry has a field large enough to hold the counter; the counter is stored in the page table entry of the page just referenced; the page with the lowest value in its counter field is replaced. Implementing LRU with a hardware bit matrix: N page frames → an N x N bit matrix; when virtual page k is referenced, the bits of row k are set to 1 and then the bits of column k are set to 0; the page whose row has the lowest value is removed.

Page replacement algorithms Least Recently Used (LRU) (4) Reference string: 0 1 2 3 2 1 0 3 2 3

Page replacement algorithms Not Frequently Used and Aging (1): A software approximation of LRU. NFU uses a software counter associated with each page: at each clock tick the R bit is added to the counter of every page in memory, and then the R bits are reset to 0; the page with the lowest counter is chosen. Problem: NFU never forgets anything; it does not evict pages that were heavily used in the past but are no longer used (their counters remain high). Aging – a modification of NFU: shift the counter right one position and insert the R bit as the new leftmost bit. Differences from LRU: it cannot tell which of two pages was referenced first within one tick interval (for example pages 3 and 5 at step (e)); the finite number of counter bits means it cannot differentiate between pages whose counters have both reached 0.
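A minimal sketch of the aging update and victim choice with 8-bit counters (the counter width and array layout are assumptions):

```c
#include <stdint.h>

#define NUM_PAGES 64

static uint8_t age[NUM_PAGES];   /* aging counter, one per page        */
static uint8_t rbit[NUM_PAGES];  /* R bit gathered since the last tick */

/* Run once per clock tick: shift right, insert R as the top bit, clear R. */
void aging_tick(void)
{
    for (int i = 0; i < NUM_PAGES; i++) {
        age[i] = (uint8_t)((age[i] >> 1) | (rbit[i] << 7));
        rbit[i] = 0;
    }
}

/* The page with the smallest counter is the best estimate of the LRU page. */
int aging_victim(void)
{
    int victim = 0;
    for (int i = 1; i < NUM_PAGES; i++)
        if (age[i] < age[victim])
            victim = i;
    return victim;
}
```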

Page replacement algorithms Illustration of aging

Modeling Page Replacement Algorithms Belady's Anomaly: the FIFO algorithm traced with 3 page frames versus with 4 page frames (with FIFO, giving a process more page frames can paradoxically produce more page faults).

Design Issues for Paging Local versus Global Allocation Policy (1): The context: a page fault requires a page replacement. Question: which pages are taken into account? Local: only pages of the current process; global: pages of all processes. The answer depends on the strategy used to allocate memory among the competing runnable processes: local → every process has a fixed fraction of memory allocated; global → page frames are dynamically allocated among the runnable processes.

Design Issues for Paging Local versus Global Allocation Policy (2): Global algorithms generally work better, especially when the working set size can vary over a process's lifetime: a working set larger than the allocated size → thrashing; smaller than the allocated size → wasted memory. Strategies for a global policy: monitor the size of the working set of every process based on the age of its pages; use a page frame allocation algorithm: allocate frames proportionally to each process's size, give each process a minimum number of frames, and update the allocation dynamically, for example with the PFF (Page Fault Frequency) algorithm. Some page replacement algorithms can work with both policies (FIFO, LRU); others can work only with a local policy (WSClock).

Design Issues for Paging Load Control: When the combined working sets of all processes exceed the capacity of memory → thrashing; the PFF algorithm may indicate that some processes need more memory while no process needs less. The remedy is to swap some processes out of memory: keep the page-fault rate acceptable, taking into account the degree of multiprogramming and the other characteristics of the processes.

Design Issues for Paging Page size: Balancing several competing factors. Arguments for a small page size: reduced internal fragmentation. Arguments for a large page size: smaller page tables. Example: s = average process size in bytes, p = page size in bytes, e = number of bytes per page table entry; the memory overhead for a process is overhead = s*e/p + p/2, due to page table size and internal fragmentation; the optimum is found by setting the first derivative with respect to p to 0, which gives p = sqrt(2*s*e).
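As a hedged numerical illustration (the concrete values of s and e are assumptions, not from the lecture): d/dp (s*e/p + p/2) = -s*e/p^2 + 1/2 = 0 gives p = sqrt(2*s*e); for s = 1 MB (2^20 bytes) and e = 8 bytes per entry, p = sqrt(2 * 2^20 * 8) = sqrt(2^24) = 4096 bytes, so a 4KB page would minimize the combined page-table and internal-fragmentation overhead for these numbers.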

Design Issues for Paging Shared Pages: Read-only pages (code) are normally shared. Problems arise when one of two processes that share pages is swapped out or terminated. Sharing writable pages (data) uses the copy-on-write strategy.

Implementation Issues Page Fault Handling (1): Trap to the kernel; save the PC on the stack; save information about the state of the current instruction. Save the general registers and other volatile information that the OS will alter. Find the virtual page that is needed. Check that the address is valid and that the corresponding access is allowed. Check whether there is a free page frame; if not, run the page replacement algorithm. If the victim page is dirty, schedule a disk write operation and suspend the faulting process (waiting for I/O); the page frame is marked as busy.

Implementation Issues Page Fault Handling (2): Once the page frame is clean, the OS finds the needed page on the disk and schedules a disk read operation, again suspending the faulting process. Update the page table and mark the page frame as normal. Back up the faulting instruction to the state it had when it began and set the PC to point to it. Schedule the faulting process. Restore the registers and the other saved information. Switch to user space and continue execution normally, as if no fault had occurred.

Bibliography [Tann01] Andrew S. Tanenbaum, "Modern Operating Systems", second edition, Prentice Hall, 2001, pp. 190-263.