
OSes: 8. Mem. Mgmt. 1 Memory Management (Ch. 8, S&G; ch. 9 in the 6th ed.)
Operating Systems
v Objectives
–describe some of the memory management schemes used by an OS so that several processes can be in memory (RAM) at once
Certificate Program in Software Development, CSE-TC and CSIM, AIT, September -- November

OSes: 8. Mem. Mgmt. 2 Contents
1. Background
2. Logical vs. Physical Address Spaces
3. Swapping
4. Partition Allocation
5. Paging
6. Segmentation

OSes: 8. Mem. Mgmt. 3 1. Background
[Fig. 8.1, p.241; VUW CS 305: source code is compiled into an object module, linked (with object libraries) into a load module, and loaded (with system libraries) into an executable image in memory; dynamic libraries are bound during execution. The stages correspond to compile time, load time and execution time.]

OSes: 8. Mem. Mgmt. 4 Address Binding
v compile time
–the compiler knows where a process will reside in memory, so generates absolute code (code that starts at a fixed address)
v load time
–the compiler does not know where a process will reside, so generates relocatable code (its starting address is determined at load time)
continued

OSes: 8. Mem. Mgmt. 5
v execution time
–the process can move during its execution, so its starting address is determined at run time

OSes: 8. Mem. Mgmt. 6 Dynamic Loading
v A routine (function, procedure, library, etc.) is not loaded until it is called by another program
–the routine must be stored as relocatable code
v Unused routines are never loaded
–saves space

OSes: 8. Mem. Mgmt. 7 Dynamic Linking
v Often used for system libraries
–e.g. Dynamically Linked Libraries (DLLs)
v The first call to a DLL from a program causes the linking in and loading of the libraries
–i.e. linking/loading is determined at run time
continued

OSes: 8. Mem. Mgmt. 8 v Allows libraries to be changed/moved since linking/loading information is not fixed (as much) in the calling program.

OSes: 8. Mem. Mgmt. 9 Overlays
v Keep in memory only those pieces of code that are needed at any given time
–saves space
v Example: a two-pass assembler
–Pass 1: 70K
–Pass 2: 80K
–Symbol table: 20K
–Common routines: 30K
continued

OSes: 8. Mem. Mgmt. 10
v Loading everything requires 200K
v Use two overlays:
–A: symbol table, common routines, pass 1
u requires 120K
–B: symbol table, common routines, pass 2
u requires 130K

OSes: 8. Mem. Mgmt. 11 2. Logical vs. Physical Address Space
v The user sees a logical view of the memory used by their code
–in one piece, unmoving
–sometimes called a virtual address space
v This is mapped to the physical address space
–the code may be located in parts/partitions
–some parts may be in RAM, others in backing store
continued

OSes: 8. Mem. Mgmt. 12 v The mapping from logical to physical is done by the Memory Management Unit (MMU). v Hiding the mapping is one of the main aims of memory management schemes.

OSes: 8. Mem. Mgmt. 13 Example: Dynamic Relocation
[Fig. 8.3, p.246: the CPU issues a logical address (e.g. 346); the MMU adds the contents of the relocation register to it, and the resulting physical address is sent to memory.]
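The relocation step can be sketched in a couple of lines (the register value 14000 below is an assumed example, not taken from the slide):

```python
RELOCATION_REGISTER = 14000  # assumed base address for this process

def mmu_relocate(logical_address):
    """The MMU adds the relocation register to every logical address."""
    return RELOCATION_REGISTER + logical_address

physical = mmu_relocate(346)  # logical address 346 -> physical 14346
```

The process itself only ever sees logical addresses; moving it just means changing the single register value.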

OSes: 8. Mem. Mgmt. 14 3. Swapping
v If there is not enough memory (RAM) for all the ready processes, then some of them may be swapped out to backing store.
v They will be swapped back in when the OS can find enough memory for them.
continued

OSes: 8. Mem. Mgmt. 15 Diagram
[Fig. 8.4, p.247: swapping. P1 is swapped out of user space to backing store; P2 is swapped in. The OS occupies the remainder of memory.]
continued

OSes: 8. Mem. Mgmt. 16
v With compile time / load time address binding, the process must be swapped back to its old location
–no such need for processes with execution time address binding
v Must be careful not to swap out a process that is waiting for I/O.

OSes: 8. Mem. Mgmt. 17 The Ready Queue
v The ready queue consists of:
–processes in memory that are ready to run
–processes swapped out to backing store that are ready to run
u they will need to be swapped back in if they are chosen by the CPU scheduler

OSes: 8. Mem. Mgmt. 18 Swap Times v The speed of swapping affects the design of the CPU scheduler –the amount of execution time it gives to a process should be much greater than the swap time to bring it into memory

OSes: 8. Mem. Mgmt. 19 Swap Time Example
v Assume a transfer rate of 1 MB/sec and a disk latency of 8 ms.
v Transferring a 100K process will take: 100K / 1000K per sec = 100 ms
v Total swap time
= transfer out + transfer in + (2 * latency)
= 100 + 100 + (2 * 8) = 216 ms
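The swap-time arithmetic can be reproduced directly (a sketch; sizes in KB, rate in KB per second, latency in ms):

```python
def swap_time_ms(process_kb, rate_kb_per_sec, latency_ms):
    """Total swap time = transfer out + transfer in + 2 * disk latency."""
    transfer_ms = process_kb / rate_kb_per_sec * 1000  # one direction
    return 2 * transfer_ms + 2 * latency_ms

total = swap_time_ms(100, 1000, 8)  # 100K process, 1 MB/sec, 8 ms latency
```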

OSes: 8. Mem. Mgmt. 20 4. Partition Allocation
v Divide memory into a fixed no. of partitions, and allocate a process to each.
v Partitions can be different fixed sizes.
continued

OSes: 8. Mem. Mgmt. 21
v Processes are allocated to the smallest available partition.
v Internal fragmentation will occur: a process rarely fills its entire partition, and the unused part of the partition is wasted.

OSes: 8. Mem. Mgmt. 22 Variable-size Partitions
[Fig. 8.7, p.252: 2560K of memory; the OS occupies 0–400K, leaving 2160K of user space.]
Job Queue:
process  memory  time
P1       600K    10
P2       1000K   5
P3       300K    20
P4       700K    8
P5       500K    15

OSes: 8. Mem. Mgmt. 23 Memory Allocation
[Fig. 8.8, p.253: OS in 0–400K; P1 in 400K–1000K, P2 in 1000K–2000K, P3 in 2000K–2300K. When P2 ends, its hole at 1000K–2000K is freed, and P4 (700K) is allocated into it at 1000K–1700K.]
continued

OSes: 8. Mem. Mgmt. 24
[Fig. 8.8 continued: when P1 ends, its hole at 400K–1000K is freed; P5 (500K) is then allocated into it at 400K–900K.]
continued

OSes: 8. Mem. Mgmt. 25
[Fig. 8.8 continued: with P5 at 400K–900K, P4 at 1000K–1700K and P3 at 2000K–2300K, external fragmentation develops: the free memory is split across holes at 900K–1000K, 1700K–2000K and 2300K–2560K.]

OSes: 8. Mem. Mgmt. 26 Dynamic Storage Allocation
v Where should a process of size N be stored when there is a selection of partitions/holes to choose from?
v First fit
–allocate the first hole that is big enough
–fastest choice to carry out
continued

OSes: 8. Mem. Mgmt. 27
v Best fit
–allocate the smallest hole that is big enough
–leaves the smallest leftover hole
v Worst fit
–allocate the largest hole
–leaves the largest leftover hole
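The three placement strategies can be sketched over a list of hole sizes (the hole list and the 212K request below are illustrative):

```python
def first_fit(holes, n):
    """Return the index of the first hole that is big enough, or None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Return the index of the smallest hole that is big enough, or None."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Return the index of the largest hole, if it is big enough, or None."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= n else None

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB
first = first_fit(holes, 212)       # the 500K hole at index 1
best = best_fit(holes, 212)         # the 300K hole at index 3
worst = worst_fit(holes, 212)       # the 600K hole at index 4
```

First fit only scans until it finds a hole; best and worst fit must examine every hole (or keep the list sorted), which is why first fit is the fastest to carry out.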

OSes: 8. Mem. Mgmt. 28 Compaction
[Fig. 8.10, p.255: before compaction the OS occupies 0–400K, with P5 at 400K–900K, P4 at 1000K–1700K and P3 at 2000K–2300K; after compaction P5, P4 and P3 are packed at 400K–900K, 900K–1600K and 1600K–1900K, leaving a single hole from 1900K to 2560K.]

OSes: 8. Mem. Mgmt. 29 Different Strategies
[Fig. 8.11, p.256: three ways to compact the same original allocation (OS at 0–300K, P1 ending at 500K, P2, P3 and P4 in 2100K of memory). Version 1 moves 600K of data, version 2 moves 400K, and version 3 moves 200K; all three end with one large hole, but they differ in how much data must be copied.]

OSes: 8. Mem. Mgmt. 32 5. Paging
v Divide up the logical address space of a process into fixed-size pages.
v These pages are mapped to same-size frames in physical memory
–the frames may be located anywhere in memory

OSes: 8. Mem. Mgmt. 33 The Basic Method
v Each logical address has two parts: a page number p and a page offset d.
v A page table contains the mapping from a page number to the base address of its corresponding frame.
v Each process has its own page table
–stored in its PCB

OSes: 8. Mem. Mgmt. 34 Paging Hardware
[Fig. 8.12, p.258: the CPU issues a logical address (p, d); the page table maps page p to frame f, and the physical address (f, d) is sent to physical memory.]

OSes: 8. Mem. Mgmt. 35 Size of a Page
v The size of a page is typically a power of 2:
–512 (2^9) to 8192 (2^13) bytes
v This makes it easy to split a machine address into page number and offset parts.
continued

OSes: 8. Mem. Mgmt. 36
v For example, assume:
–the address space is 2^m bytes large
–a page can be 2^n bytes in size (n < m)
v The logical address format becomes:
page number p (m - n bits) | page offset d (n bits)

OSes: 8. Mem. Mgmt. 37 Example
v Address space is 32 bytes (2^5)
v Page size: 4 bytes (2^2)
v Therefore, there can be 8 pages (2^3)
v Logical address format (p.258):
page number p (3 bits) | page offset d (2 bits)

OSes: 8. Mem. Mgmt. 38
[Fig. 8.14: logical memory holds the bytes a–p in pages 0–3; the page table maps page 0 to frame 5, page 1 to frame 6, page 2 to frame 1 and page 3 to frame 2; physical memory therefore holds i j k l at address 4, m n o p at 8, a b c d at 20 and e f g h at 24.]

OSes: 8. Mem. Mgmt. 39 Using the Page Table
v Logical Address  Physical Address
0   (5*4) + 0 = 20
3   (5*4) + 3 = 23
5   (6*4) + 1 = 25
10  (2*4) + 2 = 10

OSes: 8. Mem. Mgmt. 40 Features of Paging
v No external fragmentation
–any free frame can be used by a process
v Internal fragmentation can occur
–small pages or large pages?
v There is a clear separation between logical memory (the user’s view) and physical memory (the OS/hardware view).

OSes: 8. Mem. Mgmt. 41 Performance Issues
v Every access must go through a page table
v Small page table: keep it in registers
v Large page table: keep the table in memory
–a memory access then requires indexing into the page table (one memory access) plus the actual access
v Translation Look-aside Buffer (TLB)
–sometimes called an “associative cache”

OSes: 8. Mem. Mgmt. 42 Paging with a TLB
[Fig. 8.16, p.264: the CPU issues (p, d); on a TLB hit, the frame number f comes straight from the TLB; on a TLB miss, the page table supplies f and the (p, f) pair is added to the TLB; the physical address (f, d) then goes to physical memory.]

OSes: 8. Mem. Mgmt. 43 Performance
v Assume:
–a memory access takes 100 nsec
–a TLB access takes 20 nsec
v An 80% hit rate:
–effective access time = (0.8 * 120) + (0.2 * 220) = 140 nsec
(a hit costs TLB + memory access = 120; a miss costs TLB + page table + memory access = 220)
–40% slowdown in the memory access time
continued

OSes: 8. Mem. Mgmt. 44
v A 98% hit rate:
–effective access time = (0.98 * 120) + (0.02 * 220) = 122 nsec
–22% slowdown in the memory access time
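Both effective-access-time figures can be reproduced with a small sketch (times in nsec; a miss is assumed to cost one extra memory access for the page-table lookup):

```python
def effective_access_ns(hit_rate, mem_ns=100, tlb_ns=20):
    """A TLB hit costs TLB + memory; a miss adds one page-table access."""
    hit_cost = tlb_ns + mem_ns             # 120 nsec
    miss_cost = tlb_ns + mem_ns + mem_ns   # 220 nsec
    return hit_rate * hit_cost + (1 - hit_rate) * miss_cost

eat_80 = effective_access_ns(0.8)    # 140 nsec
eat_98 = effective_access_ns(0.98)   # 122 nsec
```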

OSes: 8. Mem. Mgmt. 45 Multilevel Paging
v In modern systems, the logical address space for a process is very large:
–2^32 or 2^64 bytes
–virtual memory allows it to be bigger than physical memory (explained later)
–the page table becomes too large

OSes: 8. Mem. Mgmt. 46 Example
v Assume:
–a 32-bit logical address space
–page size is 4K bytes (2^12)
v Logical address format:
page number p (20 bits) | page offset d (12 bits)
continued

OSes: 8. Mem. Mgmt. 47
v The page table must store 2^20 addresses (~1 million), each of size 32 bits (4 bytes)
–4 MB page table
–too big
v Solution: use a two-level paging scheme to make the page tables smaller.

OSes: 8. Mem. Mgmt. 48 Two-level Paging Scheme
v In essence: “page the page table”
–divide the page table into two levels
v Logical address format:
page number p1 | page number p2 | page offset d
–p1 = index into the outer page table
–p2 = index into the page obtained from the outer page table

OSes: 8. Mem. Mgmt. 49 Diagram
[Fig. 8.18, p.266: the outer page table’s entries point to page tables, whose entries in turn point to the process’s memory pages (pg 0, pg 1, pg 100, pg 500, pg 708, pg 900, pg 929, ...) scattered through memory.]

OSes: 8. Mem. Mgmt. 50 Address Translation
[Fig. 8.19, p.267: for a logical address (p1, p2, d), p1 indexes the outer page table to locate a page table, p2 indexes that page table to locate the desired page, and d is the offset within that page.]

OSes: 8. Mem. Mgmt. 51 Two-level Page Table Sizes
v Assume the logical address format:
page number p1 (10 bits) | page number p2 (10 bits) | page offset d (12 bits)
v The outer page table (and each of the other page tables) will contain 2^10 addresses (~1000), each of size 32 bits (4 bytes)
–table size = 4K
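Splitting a 32-bit logical address into p1, p2 and d with a 10/10/12-bit format is just shifting and masking (a sketch):

```python
def split_two_level(addr):
    """Split a 32-bit address into (p1, p2, d): 10, 10 and 12 bits."""
    d = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # middle 10 bits: index into a page table
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: index into the outer page table
    return p1, p2, d

TABLE_SIZE = (2 ** 10) * 4      # 1024 four-byte entries = 4K per table
```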

OSes: 8. Mem. Mgmt. 52 Three-level Paging Scheme
v For a 64-bit logical address space, three levels may be required to make the page table sizes manageable.
v Possible logical address format:
page number p1 (32 bits) | p2 (10 bits) | p3 (10 bits) | page offset d (12 bits)
continued

OSes: 8. Mem. Mgmt. 53
v But now the second outer page table (the new p1) will have to store 2^32 addresses
–go to a four-level paging scheme!

OSes: 8. Mem. Mgmt. 54 Three-level Paging Slowdown
v Assume:
–there is a TLB cache, with access time 20 nsec
–a memory access takes 100 nsec
v A 98% hit rate:
–effective access time = (0.98 * 120) + (0.02 * 420) = 126 nsec
(a miss costs TLB + 3 page-table levels + memory access = 420)
–26% slowdown in the memory access time
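The slowdown calculation generalizes to any number of page-table levels (a sketch; times in nsec, with one memory access assumed per level on a miss):

```python
def eat_ns(hit_rate, levels, mem_ns=100, tlb_ns=20):
    """Miss cost: TLB probe + one memory access per table level + the access."""
    hit_cost = tlb_ns + mem_ns
    miss_cost = tlb_ns + levels * mem_ns + mem_ns
    return hit_rate * hit_cost + (1 - hit_rate) * miss_cost

three_level = eat_ns(0.98, levels=3)  # miss cost is 20 + 300 + 100 = 420 nsec
```

With a good TLB hit rate, even three extra levels of lookup add only a few nanoseconds to the effective access time.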

OSes: 8. Mem. Mgmt. 55 Inverted Page Table
v A page table maps each page of a process to a physical frame.
v If virtual memory is used, many tables may refer to the same frames
u virtual memory is explained in the next chapter
v Each table may have millions of entries.
continued

OSes: 8. Mem. Mgmt. 56
v An inverted page table has one entry for each frame, which says which PID (process ID) and page are currently using it
–reduces the amount of physical memory required to store page-to-frame mappings
v A logical address is represented by: <process-id, page number, offset>

OSes: 8. Mem. Mgmt. 57 Inverted Page Table Diagram
[Fig. 8.20, p.270: the CPU issues (pid, p, d); the inverted page table is searched for the entry matching (pid, p); that entry’s index i is the frame number, so the physical address (i, d) is sent to physical memory.]

OSes: 8. Mem. Mgmt. 58 Drawbacks
v Slow linear search time over the inverted page table
–use hashing
–use TLBs for recent accesses
v Still need (something like) ordinary page tables to record which pages are currently swapped out to backing store.
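A hash-based inverted table can be sketched with a dict keyed on (pid, page); the page size, PIDs and frame numbers below are illustrative:

```python
PAGE_SIZE = 4096  # assumed page size for this sketch

# one entry per occupied physical frame, keyed by (pid, page)
inverted = {("P1", 0): 0, ("P2", 0): 1, ("P1", 1): 3}

def translate(pid, logical):
    """Hash straight to the frame holding (pid, page); keep the offset."""
    page, offset = divmod(logical, PAGE_SIZE)
    frame = inverted[(pid, page)]  # a KeyError here would mean a page fault
    return frame * PAGE_SIZE + offset
```

Hashing turns the linear search of the basic scheme into an expected O(1) lookup, which is why real inverted-page-table systems pair the table with a hash function.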

OSes: 8. Mem. Mgmt. 59 Shared Pages
v Ordinary page tables allow frames to be shared
–useful for reusing reentrant code (i.e. code that does not modify itself)
v Example
–three users of an editor (150K, split into three pages) and their data (50K each, one page each)

OSes: 8. Mem. Mgmt. 60 Editor Usage
[Fig. 8.21, p.271: the page tables of P1, P2 and P3 all map ed 1, ed 2 and ed 3 to the same three frames, so one copy of the editor code serves all three processes; each page table also maps one private data page (data 1, data 2, data 3).]

OSes: 8. Mem. Mgmt. 61 Memory Savings
v Total physical memory usage (with sharing):
= 150 + (3 * 50) = 300K
v Total physical memory usage (without sharing):
= 3 * (150 + 50) = 600K
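The savings calculation as a sketch (150K of shared editor code plus 50K of private data per user):

```python
EDITOR_KB = 150   # shared editor code (three 50K pages)
DATA_KB = 50      # private data per user
USERS = 3

with_sharing = EDITOR_KB + USERS * DATA_KB       # code stored once: 300K
without_sharing = USERS * (EDITOR_KB + DATA_KB)  # a full copy each: 600K
saved = without_sharing - with_sharing
```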

OSes: 8. Mem. Mgmt. 62 Problems
v Sharing relies on being able to map several pages (logical memory) to a single frame (physical memory).
–not possible with an inverted page table, which only allows one page to be associated with each frame

OSes: 8. Mem. Mgmt. 63 6. Segmentation
v A user’s view of memory:
[Fig. 8.22, p.272: a program seen as a collection of segments: main program, subroutine, sqrt(), stack and symbol table.]
continued

OSes: 8. Mem. Mgmt. 64
v A logical address consists of: <segment-number s, offset d>
v A compiler can create separate segments for the distinct parts of a program:
–e.g. global variables, call stack, code for each function

OSes: 8. Mem. Mgmt. 65 Segmentation Hardware
[Fig. 8.23, p.274: the CPU issues (s, d); s indexes the segment table to fetch (limit, base); if d < limit, the physical address base + d goes to physical memory, otherwise a trap (addressing error) is raised.]
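The limit/base check of Fig. 8.23 as a sketch (the segment table values below are made up for illustration, not taken from the figure):

```python
# segment table: s -> (limit, base); the values are illustrative
SEGMENT_TABLE = [(1000, 1400), (400, 6300), (400, 4300)]

def translate(s, d):
    """Check the offset against the segment limit, then add the base."""
    limit, base = SEGMENT_TABLE[s]
    if d >= limit:
        raise MemoryError("trap: addressing error")
    return base + d
```

Unlike a page offset, the segment offset d must be checked explicitly, because segments have variable lengths.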

OSes: 8. Mem. Mgmt. 66 Example
[Fig. 8.24, p.275: the segments S0–S4 of the logical address space (main program, subroutine, sqrt(), stack, symbol table) are placed at scattered locations in physical memory via the (limit, base) entries of the segment table.]

OSes: 8. Mem. Mgmt. 67 Advantages v Segments can be used to store distinct parts of a program, so it is easier to protect them –e.g. make function code read-only

OSes: 8. Mem. Mgmt. 68 Sharing of Segments
[Fig. 8.25, p.277: P1 and P2 each have two segments: S0 (editor) and S1 (data). Both segment tables give S0 the same base, so the editor code is shared, while S1 maps to the separate data 1 and data 2 segments.]