1 Address Translation

Memory Allocation
  – Linked lists
  – Bit maps
Options for managing memory
  – Base and Bound
  – Segmentation
  – Paging (paged page tables, inverted page tables)
  – Segmentation with Paging

2 Memory Management with Linked Lists

Hole – a block of available memory; holes of various sizes are scattered throughout memory.
When a process arrives, it is allocated memory from a hole large enough to accommodate it.
The operating system maintains two linked lists: a) allocated partitions, b) free partitions (holes).

[Figure: a sequence of memory snapshots – initially the OS plus processes 5, 8, and 2; process 8 then leaves, creating a hole, and processes 9 and 10 are later allocated into the freed space.]

3 Dynamic Storage-Allocation Problem

How do we satisfy a request of size n from a list of free holes?
First-fit: allocate the first hole that is big enough.
Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
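To make slides 2–3 concrete, here is a minimal C sketch of a first-fit scan over a linked list of holes. The struct layout and the names hole, free_list, and first_fit are illustrative assumptions, not from the slides; a real allocator would also unlink emptied holes, coalesce neighboring holes, and keep a second list of allocated partitions.

```c
#include <stddef.h>
#include <stdint.h>

/* One free partition (hole): base address and length in bytes. */
struct hole {
    uintptr_t    base;
    size_t       size;
    struct hole *next;
};

static struct hole *free_list;   /* list of holes, e.g. kept sorted by address */

/* First fit: take the first hole on the list that is big enough.
 * Returns the allocated base address, or 0 if no hole can satisfy n bytes. */
uintptr_t first_fit(size_t n)
{
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= n) {
            uintptr_t addr = h->base;
            h->base += n;        /* carve the request off the front of the hole...   */
            h->size -= n;        /* ...leaving the remainder as a smaller hole        */
            return addr;         /* (real code would unlink the hole if size hits 0)  */
        }
    }
    return 0;                    /* external fragmentation: no single hole fits */
}
```

Best fit would instead scan the whole list and remember the smallest hole with size >= n; worst fit would remember the largest.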

4 Memory Allocation with Bit Maps

[Figure] Part of memory with 5 processes and 3 holes:
  – tick marks show allocation units
  – shaded regions are free
The corresponding bit map carries the same information as the linked list.
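As a complement to the figure above, here is a minimal C sketch of bit-map allocation, assuming one bit per allocation unit (1 = allocated, 0 = free). The array size, helper names, and the first-fit-style scan for a run of free bits are illustrative assumptions, not from the slides.

```c
#include <stddef.h>
#include <stdint.h>

#define NUNITS 1024                       /* allocation units being tracked */
static uint8_t bitmap[NUNITS / 8];        /* 1 bit per unit: 1 = allocated  */

static int  test_bit(size_t i)  { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(size_t i)   { bitmap[i / 8] |=  (uint8_t)(1u << (i % 8)); }
static void clear_bit(size_t i) { bitmap[i / 8] &= (uint8_t)~(1u << (i % 8)); }

/* Find and claim k consecutive free units; return the first unit, or -1. */
long bitmap_alloc(size_t k)
{
    size_t run = 0;
    for (size_t i = 0; i < NUNITS; i++) {
        run = test_bit(i) ? 0 : run + 1;          /* count consecutive 0 bits */
        if (run == k) {
            size_t start = i - k + 1;
            for (size_t j = start; j <= i; j++)
                set_bit(j);                       /* mark the run allocated   */
            return (long)start;
        }
    }
    return -1;                                    /* no run of k free units   */
}

/* Freeing is just clearing the bits again. */
void bitmap_free(size_t start, size_t k)
{
    for (size_t j = start; j < start + k; j++)
        clear_bit(j);
}
```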

5 Fragmentation

External fragmentation – enough total memory exists, but it is not contiguous, so requests cannot be satisfied.
  – Example: first fit wastes roughly 1/3 of memory on average.
Internal fragmentation – allocated memory may be slightly larger than the requested memory; the size difference is memory internal to a partition that is not being used.
Reduce external fragmentation by compaction – shuffle memory contents to place all free memory together in one large block.

6 Segmentation

A segment is a region of logically contiguous memory.
The idea is to generalize base and bounds by allowing a table of base-and-bound pairs:
  – break virtual memory into segments
  – use contiguous physical memory for each segment
Divide each virtual address into a segment number and a segment offset.
  – Note: the compiler does this split, not the hardware.

7 Segmentation – Address Translation

Use the segment number to index into the segment table (a table of base/limit pairs).
Compare the offset to the limit, then add the base – exactly the same as the base/bound scheme.
If the offset is greater than the limit (or the access is otherwise illegal), the result is a segmentation fault.

8 Segmentation Hardware

9 For example, what does this look like in virtual memory and in physical memory with the following segment table? Assume a 2-bit segment number and a 12-bit segment offset.

virtual segment   physical segment start   segment size
code              0x4000                   0x700
data              0x0                      0x500
stack             0x2000                   0x1000

10 Resulting layout (from the segment table above):

virtual memory                       physical memory
0x0    – 0x6ff    code               0x0    – 0x4ff    data
0x1000 – 0x14ff   data               0x2000 – 0x2fff   stack
0x3000 – 0x3fff   stack              0x4000 – 0x46ff   code
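Putting slides 7, 9, and 10 together, here is a minimal C sketch of the translation, assuming the 2-bit segment number sits in the top bits above the 12-bit offset. The segment indices (code = 0, data = 1, stack = 3) are inferred from the virtual layout on slide 10, and the fault handling is simplified.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Segment table from slide 9: physical base and limit (size). */
struct seg { uint32_t base, limit; int valid; };

static const struct seg seg_table[4] = {
    { 0x4000, 0x0700, 1 },   /* 0: code  (virtual 0x0000-0x06ff)  */
    { 0x0000, 0x0500, 1 },   /* 1: data  (virtual 0x1000-0x14ff)  */
    { 0x0000, 0x0000, 0 },   /* 2: unused                          */
    { 0x2000, 0x1000, 1 },   /* 3: stack (virtual 0x3000-0x3fff)  */
};

/* 2-bit segment number, 12-bit offset. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t seg    = (vaddr >> 12) & 0x3;
    uint32_t offset =  vaddr        & 0xfff;

    if (!seg_table[seg].valid || offset >= seg_table[seg].limit) {
        fprintf(stderr, "segmentation fault at 0x%04x\n", (unsigned)vaddr);
        exit(1);
    }
    return seg_table[seg].base + offset;   /* add the base after the limit check */
}

int main(void)
{
    printf("0x%04x -> 0x%04x\n", 0x0010u, (unsigned)translate(0x0010)); /* code:  0x4010 */
    printf("0x%04x -> 0x%04x\n", 0x1005u, (unsigned)translate(0x1005)); /* data:  0x0005 */
    printf("0x%04x -> 0x%04x\n", 0x3fffu, (unsigned)translate(0x3fff)); /* stack: 0x2fff */
    return 0;
}
```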

11 Segmentation (cont.)

This should seem a bit strange: the virtual address space has gaps in it! Each segment is mapped to contiguous locations in physical memory, but there may be gaps between segments.
A correct program will never address the gaps; if it does, trap to the kernel and core dump.
Minor exception: the stack and heap can grow. In UNIX, sbrk() increases the size of the heap segment. For the stack, just take the fault and the system automatically increases the size of the stack segment.
Detail: the segment table needs a protection mode per segment. For example, the code segment would be read-only; the data and stack segments would be read-write.

12 Segmentation Pros & Cons

+ Efficient for sparse address spaces (the segment table can be kept in registers)
+ Easy to share whole segments (for example, the code segment)
+ Easy for the address space to grow
– Complicated memory allocation: still need first fit, best fit, etc., and re-shuffling to coalesce free fragments if no single free space is big enough for a new segment
How do we make memory allocation simple and easy?

13 Paging

Allocate physical memory in terms of fixed-size chunks of memory, or pages.
Simpler, because this allows the use of a bitmap. What is a bitmap? Each bit represents one page of physical memory – 1 means allocated, 0 means unallocated.
Much simpler than base & bounds or segmentation.
The OS controls the mapping: any page of virtual memory can go anywhere in physical memory.

14 Paging (cont.)

Avoids fitting variable-sized memory units.
Break physical memory into frames – the size is determined by the hardware.
Break virtual memory into pages – pages and frames are the same size.
Divide each virtual address into a page number and a page offset.
Note: with paging, the hardware splits the address; with segmentation, the compiler generates segmented code.

15 Paging – Address Translation

Index into the page table with the high-order bits of the virtual address to get the physical frame number.
Append the page offset to that frame number, then present the resulting address to memory.
Note: the kernel keeps track of free frames; this can be done with a bitmap.
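A minimal sketch of that split-and-lookup, assuming 4 KB pages and a flat in-memory page table indexed by virtual page number; the sizes and names are illustrative, and valid/protection bits are omitted.

```c
#include <stdint.h>

#define PAGE_SHIFT 12                        /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NPAGES     16                        /* tiny address space for the sketch */

/* Page table: index = virtual page number, value = physical frame number. */
static uint32_t page_table[NPAGES];

/* Assumes vaddr lies within the NPAGES-page address space. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;          /* high-order bits: page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);      /* low-order bits: page offset  */
    uint32_t frame  = page_table[vpn];              /* look up the frame            */
    return (frame << PAGE_SHIFT) | offset;          /* append the offset            */
}
```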

16 Address Translation Architecture

17 Paging Example

[Figure: a small virtual memory whose pages hold the bytes A–D, E–H, and I–L, the page table, and the physical memory in which those pages land in non-contiguous frames.]
Where is virtual address 6? Virtual address 9? Note: the page size is 4 bytes.
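Working the split out: with a 4-byte page size, virtual address 6 is page 1, offset 2 (6 = 1·4 + 2), and virtual address 9 is page 2, offset 1 (9 = 2·4 + 1). The page table in the original figure supplies the frame for each page, and the physical address is that frame number times 4 plus the same offset.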

18 Paging Issues

Fragmentation
Page size
Protection and sharing
Page table size – what if the page table is too big for main memory?

19 Fragmentation and Page Size

Fragmentation
  – No external fragmentation: a page can go anywhere in main memory.
  – Internal fragmentation: on average, ½ page wasted per address space.
Page size
  – Small pages reduce (on average) internal fragmentation.
  – Large pages are better for page table size, for disk I/O, and for the number of page faults.
  – Typical page sizes: 4K, 8K, 16K.

20 Protection and Sharing

Page protection – use protection bits per page:
  – code: read only
  – data: read/write
Sharing – just "map pages in" to your address space.
  – Example: if "vi" occupies frames 0 through 10, all processes can adjust their page tables to point to those frames.

21 Page Tables Can Be Large

The page table is kept in main memory.
  – The page-table base register (PTBR) points to the page table.
  – The page-table length register (PTLR) indicates the size of the page table.
In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
The page table can be huge (a million or more entries).
Solution: use a multi-level page table – two page numbers and one offset.
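For scale: with 32-bit virtual addresses and 4 KB pages (the layout on the next slide), there are 2^20 ≈ one million pages, so a flat page table with 4-byte entries is about 4 MB per process – hence the multi-level scheme below, which pages the page table itself.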

22 Two-Level Paging Example

A logical address (on a 32-bit machine with a 4K page size) is divided into:
  – a page number consisting of 20 bits
  – a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
  – a 10-bit page number
  – a 10-bit page offset
Thus, a logical address looks like:

  | p1 (10 bits) | p2 (10 bits) | d (12 bits) |

where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.
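A minimal C sketch of the 10/10/12 split and the resulting two-level walk. Representing the outer table as an array of pointers to inner frame-number arrays is an assumption for illustration, not any particular architecture's page-table entry format.

```c
#include <stddef.h>
#include <stdint.h>

/* 32-bit virtual address: | p1 (10 bits) | p2 (10 bits) | d (12 bits) | */
#define P1(va)  (((va) >> 22) & 0x3ffu)       /* index into the outer page table */
#define P2(va)  (((va) >> 12) & 0x3ffu)       /* index into the inner page table */
#define OFF(va) ((va) & 0xfffu)               /* offset within the page          */

/* Outer table: 1024 pointers to inner tables (NULL = not present).
 * Each inner table: 1024 frame numbers. */
static uint32_t *outer_table[1024];

/* Returns the physical address, or 0 if the mapping is missing
 * (a real system would take a page fault instead). */
uint32_t translate(uint32_t vaddr)
{
    uint32_t *inner = outer_table[P1(vaddr)];     /* memory access #1: outer table */
    if (inner == NULL)
        return 0;
    uint32_t frame = inner[P2(vaddr)];            /* memory access #2: inner table */
    return (frame << 12) | OFF(vaddr);            /* then the actual data access   */
}
```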

23 Address-Translation Scheme

Address-translation scheme for a two-level 32-bit paging architecture.

24 Inverted Page Table ("Core Map")

One entry for each real page (frame) of memory.
Each entry holds the virtual address of the page stored in that real memory location, plus information about the process that owns the page.
Decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs.
Use a hash table to limit the search to one – or at most a few – page-table entries.
Also convenient for page replacement.
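A minimal C sketch of the hashed lookup described above: one entry per physical frame, keyed by (pid, virtual page number), with colliding entries chained together by frame index. The sizes, hash function, and field names are illustrative assumptions.

```c
#include <stdint.h>

#define NFRAMES    4096
#define HASH_SIZE  4096

/* One entry per physical frame: which (pid, vpn) currently occupies it. */
struct ipt_entry {
    int      pid;
    uint32_t vpn;
    int      next;        /* next frame index in the same hash chain, or -1 */
    int      used;
};

static struct ipt_entry ipt[NFRAMES];
static int hash_head[HASH_SIZE];          /* bucket -> first frame index, -1 if empty */

void ipt_init(void)
{
    for (int b = 0; b < HASH_SIZE; b++)
        hash_head[b] = -1;                /* all buckets start empty */
}

static unsigned hash(int pid, uint32_t vpn)
{
    return ((unsigned)pid * 31u + vpn) % HASH_SIZE;
}

/* Search the chain for (pid, vpn); the matching index *is* the frame number.
 * Insertion (not shown) would push a new entry onto the bucket's chain. */
int ipt_lookup(int pid, uint32_t vpn)
{
    for (int f = hash_head[hash(pid, vpn)]; f != -1; f = ipt[f].next)
        if (ipt[f].used && ipt[f].pid == pid && ipt[f].vpn == vpn)
            return f;
    return -1;                            /* not resident: page fault */
}
```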

25 Inverted Page Table Architecture

26 Paging the Segments

Divide the address into three parts: segment number, page number, and offset.
The segment table entry contains the address of that segment's page table.
Use that page table and the page number to get the frame number.
Combine the frame number and the offset to form the physical address.
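Combining the steps on this slide, a hedged C sketch assuming a 24-bit virtual address with a 2-bit segment number, a 10-bit page number, and a 12-bit offset; the field widths and structures are illustrative, not the MULTICS format shown later.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

struct segment {
    uint32_t *page_table;   /* address of this segment's page table */
    uint32_t  npages;       /* segment length, in pages             */
};

static struct segment seg_table[4];

/* 24-bit virtual address: | segment (2 bits) | page (10 bits) | offset (12 bits) | */
uint32_t translate(uint32_t vaddr)
{
    uint32_t seg    = (vaddr >> 22) & 0x3;
    uint32_t page   = (vaddr >> PAGE_SHIFT) & 0x3ffu;
    uint32_t offset =  vaddr & PAGE_MASK;

    if (seg_table[seg].page_table == NULL || page >= seg_table[seg].npages)
        return 0;                                     /* segment fault in a real system   */

    uint32_t frame = seg_table[seg].page_table[page]; /* segment table + page table reads */
    return (frame << PAGE_SHIFT) | offset;            /* then the data access itself      */
}
```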

27 What does this buy you?

Simple management of physical memory – paging (just a bitmap).
Maintains the logical structure of the address space – segmentation.
However – possibly 3 memory accesses to reach memory!

28 MULTICS Address Translation Scheme