
MSJ-1 Roadmap
- Contiguous memory management and its limitations
- Paged memory management
  - Motivation and overview
  - Binding, the page table, and the MMU
- Segmented memory management
  - Motivation and overview
  - The segment table

MSJ-2 Summary of Contiguous Memory Management
- Contiguous memory management means that the physical address space of a process must be a single contiguous block (sometimes known as a partition) in physical memory.
- External fragmentation is intrinsic to contiguous memory management: the variable length holes in memory lead to external fragmentation, and since processes are variable length, so too are the holes.
- Even if we quantize our allocations, and hence our holes, there can still be a variable number of quanta per hole, so we will still eventually have external fragmentation.
- Compaction is in fact a solution, but not for real time systems.
- To take the next step we're going to have to drop the requirement that the physical address space of a process be contiguous.

MSJ-3 Introduction to Paged Memory Management

[Figure: a load module divided into pages (page 0 through page 3) loaded into main memory frames whose base addresses run 0x0000, 0x1000, ..., 0xb000.]

- Paged memory management divides the logical address space of the load module into fixed length chunks (quanta, now called pages) before loading them independently into "frames": evenly spaced main memory addresses that appear to divide physical memory into fixed length chunks exactly the same size as pages. Any page can go to any frame.
- Here, for example, we see the OS view of memory for a page/frame size of 2^12 = (2^4)^3 = 16^3 bytes = 0x1000 bytes (that's a hex "1K"; it would be 4096 in decimal).
- So frame 4 starts at physical address 4 * (frame size) = 4 * 0x1000 = 0x4000 and runs up to, but not including, the start of frame 5 at 0x5000, i.e., physical addresses 0x4000 through 0x4fff, inclusive.
- Pages are real enough: the loader does indeed chop up the load module's logical address space into pages as it loads the process from disk into memory. Frames are a fiction, however, merely a consequence of the way the OS allocates physical memory; the memory itself knows nothing about frames and there are no changes to the memory hardware itself.
- The loader just makes sure that whenever it loads a page, it loads it starting at a base address that is some multiple of the page/frame size, i.e., the base address of frame #k is k * f, where f is the page/frame size, so the frame number is in fact nothing more than the most significant bits of its base address.
- The physical memory doesn't see any of this; it just continues its boring life of receiving a physical address plus a control bit which tells it whether to read or write that address.
- It's up to the loader to load pages into the correct physical addresses, and it's up to the MMU to properly bind a logical address to a physical address before it's sent to the memory (we'll see how shortly).
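To make the frame arithmetic concrete, here is a minimal sketch in C (not from the slides; the frame size constant and variable names are illustrative). It computes the first and last addresses of frame 4 and recovers a frame number from an arbitrary physical address:

```c
#include <stdio.h>

#define FRAME_SIZE 0x1000u   /* 2^12 = 4096 bytes, the slide's page/frame size */

int main(void) {
    unsigned frame = 4;
    unsigned base  = frame * FRAME_SIZE;       /* 0x4000 */
    unsigned last  = base + FRAME_SIZE - 1;    /* 0x4fff */
    unsigned addr  = 0x4a3c;                   /* an arbitrary address inside frame 4 */
    printf("frame %u spans 0x%04x-0x%04x\n", frame, base, last);
    printf("address 0x%04x lies in frame %u\n", addr, addr / FRAME_SIZE);
    return 0;
}
```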

MSJ-4 The Long Term Scheduler, Admission, the Free Frame List, and the Page Table

[Figure: the free frame list (FFL) holding frame numbers such as 0x220, 0x3e8, 0x99, 0xa2, 0x2f5, and 0xaa, with frames being moved from the FFL into a new process's page table (PT).]

- The long term scheduler (in charge of admission) asks the memory manager whether there are enough free frames on the free frame list for the new process. If so, the requisite number of frames are removed from the FFL and inserted into the new process's page table.
- For paged memory management, the free space list is simplified to a free frame list (FFL) whose management is much simpler than the older free space list: it doesn't need to be ordered, and since frames are fixed length, any frame is as good as any other for any use whatsoever.
- The page table (PT) is a data structure used for the execution time binding of paged memory management. Each process must have its own page table. A PT contains the frame numbers assigned to (containing) the process, and a process's page table is created when it (a new process) is admitted and loaded.
- Remember: all a number such as 0x220 means in the FFL is that the physical memory from address 0x220 * (pageSize) up to, but not including, 0x221 * (pageSize) is currently not used, so it's free for assignment to any page of any process that needs it.
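A rough sketch of this admission path in C (purely illustrative; the structure layouts and the MAX_FRAMES bound are assumptions, not the slides' data structures):

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_FRAMES 4096

typedef struct {
    unsigned frames[MAX_FRAMES];
    size_t   count;                 /* number of free frames remaining */
} FreeFrameList;

typedef struct {
    unsigned frame[MAX_FRAMES];     /* frame[p] = frame holding page p */
    size_t   numPages;
} PageTable;

/* Returns true and fills pt if the FFL can satisfy the request;
 * otherwise the process is not admitted and nothing changes. */
bool admit_process(FreeFrameList *ffl, PageTable *pt, size_t pagesNeeded) {
    if (ffl->count < pagesNeeded)
        return false;                               /* not enough free frames */
    for (size_t p = 0; p < pagesNeeded; p++)
        pt->frame[p] = ffl->frames[--ffl->count];   /* any free frame will do */
    pt->numPages = pagesNeeded;
    return true;
}
```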

MSJ-5 Execution Time Binding for Paged Memory Management

[Figure: the CPU emits a logical address (p, d); the MMU uses p as an index into the page table (PT), reads frame number f_p, and forms the physical address sent to main memory.]

- The memory management unit (MMU) uses the page table to bind each logical address emitted by the CPU to the correct physical address in main memory.
- The MMU circuitry splits up the logical address into a page number, p, and the displacement from the beginning of that page, d. p is used as an offset into the page table.
- The frame number the MMU finds there, in PT[p], is the frame in main memory that the OS assigned to hold page p of the process's logical address space.
- Pages and frames are the same size, so the displacement from the beginning of a page is the same as the displacement from the base address of the frame that holds that page. The MMU therefore appends the displacement (from the logical address) to the frame number from the page table, and the result is the physical address to be sent to main memory: PA = f_p * (pageFrameSize) + d.
- Pretend, for illustrative purposes, that all the pages of a book contained exactly 1,000 characters. Then character #35879 in the book would be the 879th character from the beginning of page 35 in the book.
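The binding rule PA = f_p * (pageFrameSize) + d is easy to express in code. Below is a minimal sketch (illustrative names and an assumed 4 KiB page size; a real MMU does this in hardware, not software):

```c
#include <stdio.h>

#define PAGE_SIZE  0x1000u   /* 4 KiB pages/frames */
#define PAGE_SHIFT 12        /* log2(PAGE_SIZE)    */

unsigned bind(const unsigned *pageTable, unsigned logicalAddr) {
    unsigned p = logicalAddr >> PAGE_SHIFT;       /* page number              */
    unsigned d = logicalAddr & (PAGE_SIZE - 1);   /* displacement within page */
    unsigned f = pageTable[p];                    /* frame holding page p     */
    return (f << PAGE_SHIFT) | d;                 /* PA = f_p*pageSize + d    */
}

int main(void) {
    unsigned pt[4] = { 7, 2, 9, 4 };   /* page p is held in frame pt[p]       */
    unsigned la = 0x2a3c;              /* page 2, displacement 0xa3c          */
    printf("LA 0x%04x -> PA 0x%04x\n", la, bind(pt, la));   /* expect 0x9a3c  */
    return 0;
}
```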

MSJ-6 So Where is the Page Table? Page Table in the MMU

[Figure: the MMU holds the page table in 2^n page registers, each with a frame number and a valid bit, where n is the number of page-number bits in the logical address.]

- When a process is not running, its page table is kept in memory. There are two possibilities for how the MMU will access the page table for the currently running process: copy it into the MMU as part of the context switch, or use the one in main memory. There are advantages and disadvantages to each choice.
- If the page table for the running process is going to be in the MMU, it must be read in to what are called the MMU page registers as part of the context switch whenever the process is dispatched. The bigger the page table, the longer the context switch time.
- There must be 2^n page registers, where n is the number of bits devoted to the page number in the MMU's decoding of a logical address. Fewer bits for the page number, p, means a faster context switch, since the page table will be smaller. But since the total number of bits in a logical address is fixed, it also means more bits for the displacement, d, which means that the page size is larger and internal fragmentation becomes more of an issue.
- Each page register also needs an extra bit to indicate whether or not it is valid. So, for example, for a process whose logical address space was only 5 pages, only the first 5 page registers would be marked valid. A CPU reference to an invalid page would cause the MMU to generate a hardware trap, and the OS would terminate the process.
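A small worked calculation of the trade-off (assuming, for illustration, a 32-bit logical address; the slides don't fix an address width): each extra page-number bit doubles the number of page registers and halves the page size.

```c
#include <stdio.h>

int main(void) {
    const unsigned addrBits = 32;   /* assumed logical address width */
    for (unsigned n = 10; n <= 22; n += 4) {
        unsigned long long registers = 1ull << n;              /* 2^n page registers  */
        unsigned long long pageSize  = 1ull << (addrBits - n); /* 2^(32-n) byte pages */
        printf("n = %2u: %8llu page registers, page size %8llu bytes\n",
               n, registers, pageSize);
    }
    return 0;
}
```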

MSJ-7 So Where is the Page Table? Page Table Only in Main Memory

[Figure: the MMU adds the page number p to the page table base register (PTBR) to locate PT[p] in main memory, after first checking p against the page table limit register (PTLR); an out-of-range p traps to the OS. The page table has at most 2^n entries.]

- If the MMU doesn't have page registers, it will need to access the page table in memory, and so will need a special purpose register called the page table base register (PTBR) to contain the base address of the page table for the running process (each process has its own page table, remember). One PTBR is faster to context switch than a whole bunch of page registers.
- To find the physical address in memory of the desired entry in the page table, the page number is simply added to the PTBR. The frame number there, in PT[p], is then read into the MMU and concatenated with the displacement to complete the binding.
- A page table limit register (PTLR) in the MMU can be used to avoid the (fairly obvious) problems with keeping a valid/invalid bit with each page table entry in memory. The PTLR and the PTBR are both part of the context of a running process.
- Note that using p as an offset in main memory means that the page table must be contiguous in memory. If the largest possible page table fits into a single frame, that's guaranteed, of course, since memory within a frame is contiguous. Otherwise, it will be necessary to page the page table!
- Note also that now every memory reference by the CPU results in two physical memory references: the first into the page table to obtain f_p for binding, and the second, after the binding, to actually satisfy the CPU's request. The result is to double the time it takes to satisfy the CPU's memory request.
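Here is a minimal sketch of that translation path (a software model with illustrative names, not real MMU hardware): physical memory is modeled as an array, and the "two memory references" show up as the read of PT[p] plus the caller's eventual access to the returned physical address.

```c
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  0x1000u
#define PAGE_SHIFT 12

/* Toy model: "physical memory" is an array of words, and the running
 * process's page table lives somewhere inside it.                     */
static unsigned mem[1u << 16];

static unsigned ptbr;   /* index of PT[0] in mem[]; part of the process context */
static unsigned ptlr;   /* number of valid page table entries; also per-process */

unsigned translate(unsigned la) {
    unsigned p = la >> PAGE_SHIFT;
    unsigned d = la & (PAGE_SIZE - 1);
    if (p >= ptlr) {                     /* invalid page number: trap to the OS */
        fprintf(stderr, "trap: page %u out of range\n", p);
        exit(EXIT_FAILURE);
    }
    unsigned f = mem[ptbr + p];          /* memory reference #1: fetch PT[p]    */
    return (f << PAGE_SHIFT) | d;        /* the caller's access is reference #2 */
}
```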

MSJ-8 Translation Look Aside Buffer (TLAB)

[Figure: the MMU presents the page number to the TLAB first; on a hit the frame number comes straight from the TLAB, on a miss it is fetched from the page table in main memory via the PTBR, after the PTLR check.]

- A translation look aside buffer (TLAB) in the MMU can speed things up by eliminating the binding access(es) to physical memory most of the time.
- The TLAB is an associative cache of recently referenced page numbers and their corresponding frame numbers. When a page number is presented to the TLAB, it is searched associatively, and if there's a hit (the presented page number is found), its frame number is available for binding without the necessity of retrieving it from memory.
- In the event the page number is not found in the TLAB (a TLAB miss), the required frame number is retrieved from the page table in memory as before.
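A minimal sketch of the TLAB lookup path (illustrative choices throughout: a 16-entry table, a linear search standing in for the hardware's associative search, and naive round-robin replacement):

```c
#include <stdbool.h>

#define TLAB_ENTRIES 16

typedef struct {
    bool     valid;
    unsigned page;
    unsigned frame;
} TlabEntry;

static TlabEntry tlab[TLAB_ENTRIES];
static unsigned  nextVictim;           /* trivial round-robin replacement */

unsigned lookup_frame(const unsigned *pageTable, unsigned p) {
    for (unsigned i = 0; i < TLAB_ENTRIES; i++)     /* TLAB hit: no memory access */
        if (tlab[i].valid && tlab[i].page == p)
            return tlab[i].frame;

    unsigned f = pageTable[p];                      /* TLAB miss: go to memory    */
    tlab[nextVictim] = (TlabEntry){ .valid = true, .page = p, .frame = f };
    nextVictim = (nextVictim + 1) % TLAB_ENTRIES;
    return f;
}
```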

MSJ-9 A Problem With Paging: Protection Bits Cannot Really be Applied to Pages

[Figure: a page table with read/write/execute protection bits on each entry, and a contiguous logical address space in which the boundary between code and data falls in the middle of a page.]

- From an MMU design standpoint, it's easy enough: just add protection bits to each entry in the page table.
- In theory, the page table could help an OS to enforce a security policy so that, for example, users don't try to overwrite code in library files that they in fact have a right to execute (but not to change).
- A page/frame marked execute-only looks like it contains code: the CPU can fetch instructions from there for execution, but other accesses are not permitted. A page marked read-only holds data the CPU can read but can't write to; this might be a frame shared by some other process that owns the data, wants to share it, but doesn't want it altered by this process.
- Remember, each process has its own page table, so to share memory, the same frame number can appear in the page tables of multiple processes, and it will usually be in different pages in the different processes, each with its own protection bits.
- But the program's logical address space as produced by the compiler and linker is contiguous, and the boundaries between the various code and data structures usually won't fall neatly on page boundaries that the compiler doesn't know anything about. What protection should be applied to the page that contains the end of the code and the beginning of the data?
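A minimal sketch of a per-page protection check (the bit layout and names are illustrative, not any real MMU's page table entry format):

```c
#include <stdbool.h>

enum { PROT_R = 1u << 0, PROT_W = 1u << 1, PROT_X = 1u << 2 };

typedef struct {
    unsigned frame;
    unsigned prot;      /* some combination of PROT_R / PROT_W / PROT_X */
} PageTableEntry;

/* Returns true if the access may proceed; false models a protection trap. */
bool access_ok(const PageTableEntry *pte, unsigned requested) {
    return (pte->prot & requested) == requested;
}
```

With this model, a frame shared read-only with this process would carry just PROT_R in this process's page table even if the owning process maps the same frame with PROT_R | PROT_W in its own.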

MSJ-10 Roadmap
- Contiguous memory management and its limitations
- Paged memory management
- Segmented memory management
  - Motivation and overview
  - The segment table

MSJ-11 Segmented Memory Management
- Segments are logically distinct chunks of the overall logical address space of a process: user code, library code, shared memory, the stack, global data, the heap.
- Each segment is logically contiguous and is assigned (bound) to physical memory by the OS and the MMU independently of other segments.

MSJ-12 The Segment Table

[Figure: the CPU's logical address is split into a segment number s and a displacement d_s; the MMU indexes the segment table, checks the protection bits and the size limit, then adds d_s to the segment's base address to form the physical address. An invalid segment or displacement produces "segmentation error, core dumped".]

- The segment table is the key data structure used by segmented memory management, analogous to the page table for paged memory management. Each entry in the segment table records the protection bits, size, and base address for a segment.
- It is where the protection bits applicable to each segment are stored; in fact, that's one indicator of a segment: something that is coherent in terms of its protection requirements and independent of other segments' protection requirements.
- The MMU splits up the logical address it receives from the CPU into two fields: a segment number, s, and a displacement within that segment, d_s. The segment number is used as an offset into the segment table.
- Depending on where the segment table is stored (same choices as for a page table), the MMU can use either a valid/invalid bit for each segment register or a segment table limit register (STLR) to determine if the segment number from a logical address is valid, just as was done for the page number by a paging MMU. If the segment number is invalid, the MMU raises a segmentation error trap to the OS.
- For a valid segment, the protection bits are then checked to determine if the operation requested by the CPU is legal for that segment. If not, the MMU raises a protection violation trap.
- The displacement within the segment, d_s, is then checked against the size limit of the segment from the segment table. We didn't need to do this check when paging, since all pages were the same size and all possible values of d_p were always legal, but segments are variable length, so a given segment may, and in general will, be less than the maximum possible segment size of 2^n bytes, where n is the number of bits for the d_s field of the logical address.
- To complete the binding, the displacement is added to the base address of the segment: PA = base_s + d_s.
- Note that this binding mechanism means that a segment must be stored contiguously in main memory! That's not progress; we're back to contiguous memory management and external fragmentation again!
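A minimal sketch of the whole check-and-bind sequence for a segmented MMU (an illustrative C model; the entry layout and trap handling are assumptions, not real hardware):

```c
#include <stdio.h>
#include <stdlib.h>

enum { PROT_R = 1, PROT_W = 2, PROT_X = 4 };

typedef struct {
    unsigned prot;      /* r/w/e protection bits   */
    unsigned size;      /* segment length in bytes */
    unsigned base;      /* base physical address   */
} SegmentTableEntry;

unsigned bind_segmented(const SegmentTableEntry *st, unsigned stlr,
                        unsigned s, unsigned ds, unsigned requested) {
    if (s >= stlr) {                                  /* invalid segment number */
        fprintf(stderr, "segmentation error\n");
        exit(EXIT_FAILURE);
    }
    if ((st[s].prot & requested) != requested) {      /* illegal operation      */
        fprintf(stderr, "protection violation\n");
        exit(EXIT_FAILURE);
    }
    if (ds >= st[s].size) {                           /* past end of segment    */
        fprintf(stderr, "segmentation error\n");
        exit(EXIT_FAILURE);
    }
    return st[s].base + ds;                           /* PA = base_s + d_s      */
}
```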

MSJ-13 Paging Within a Segment Gets the Best of Both Worlds

[Figure: the MMU splits the logical address into a segment number s and displacement d_s, checks the segment table as before, then splits d_s into a page number p and displacement d_p and completes the binding through the segment's own page table, PT_s.]

- Each segment has its own page table. The only change to the segment table is that the base address is now for the segment's page table, not for the segment itself.
- The MMU checks for protection and segment size violations as before.
- Once the legality of the logical address and the CPU's requested operation are determined, the displacement within the segment, d_s, is split into two fields: a page number within the segment, p, and a displacement within the page, d_p. Binding is then completed via the segment's page table, PT_s, using p and d_p just as in ordinary paging.
- If the segment table were stored in memory, as opposed to segment registers in the MMU, the CPU's requested memory operation would take 3 accesses to physical memory: one to retrieve the segment table entry for segment #s, one to retrieve the page table entry for page p of segment s, and the access to the final, bound physical address. A TLAB would be essentially mandatory here.
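And a minimal sketch of paging within a segment (again an illustrative model: d_s is assumed to split into a page number plus a 12-bit page displacement; no real architecture's field widths are implied):

```c
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  0x1000u
#define PAGE_SHIFT 12

enum { PROT_R = 1, PROT_W = 2, PROT_X = 4 };

typedef struct {
    unsigned        prot;        /* r/w/e bits for the whole segment      */
    unsigned        size;        /* segment length in bytes               */
    const unsigned *pageTable;   /* base now points to the segment's PT_s */
} SegmentTableEntry;

unsigned bind_paged_segment(const SegmentTableEntry *st, unsigned stlr,
                            unsigned s, unsigned ds, unsigned requested) {
    if (s >= stlr || ds >= st[s].size) {              /* bad segment or offset */
        fprintf(stderr, "segmentation error\n");
        exit(EXIT_FAILURE);
    }
    if ((st[s].prot & requested) != requested) {      /* illegal operation     */
        fprintf(stderr, "protection violation\n");
        exit(EXIT_FAILURE);
    }
    unsigned p  = ds >> PAGE_SHIFT;                   /* page within segment   */
    unsigned dp = ds & (PAGE_SIZE - 1);               /* displacement in page  */
    unsigned f  = st[s].pageTable[p];                 /* frame from PT_s       */
    return (f << PAGE_SHIFT) | dp;
}
```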