Virtual Memory Prof. Sin-Min Lee Department of Computer Science

Karnaugh Map Method of Multiplexer Implementation Consider the function: A is taken to be the data variable and B,C to be the select variables.

Example of MUX combo circuit: F(X,Y,Z) = Σm(1, 2, 6, 7)

Parkinson's Law: "Programs expand to fill the memory available to hold them." Idea: manage the available storage efficiently among the available programs.

Before VM…
Programmers tried to shrink programs to fit tiny memories.
Result:
– Small
– Inefficient algorithms

Solution to Memory Constraints
Use a secondary memory such as disk.
Divide the disk into pieces that fit memory (RAM).
– Called Virtual Memory

Implementations of VM
Paging
– Disk broken up into regular-sized pages
Segmentation
– Disk broken up into variable-sized segments

Memory Issues
Idea: separate the concepts of
– address space (disk)
– memory locations (RAM)
Example:
– Address field = 2^16 = 65,536 memory cells
– Memory size = 4,096 memory cells
So the program can name 16 times more cells than main memory holds. How can we fit the address space into main memory?

Paging
Break memories into pages.
1 page = 4096 bytes
NOTE: normally main memory has thousands of pages.
New issue: how to manage addressing?

Address Mapping
Mapping secondary-memory (program/virtual) addresses to main-memory (physical) addresses.
– Virtual address: used by the program
– Physical address: used by the hardware
1 page = 4096 bytes

Paging
Illusion that main memory is:
– Large
– Contiguous
– Linear
– Size(MM) = Size(secondary memory)
Transparent to the programmer.

Paging Implementation
Virtual address space (program) & physical address space (MM)
– Broken up into equal pages (just like cache & MM!!)
Page size is always a power of 2.
Common sizes: 512 bytes to 64 KB.

Memory Mapping
Memory Management Unit (MMU): device that performs the virtual-to-physical mapping.
Example: a 32-bit VM address goes into the MMU; a 15-bit physical address comes out.

Memory Management Unit
The 32-bit virtual address is broken into two portions:
– 20-bit virtual page #
– 12-bit offset in the page (since our pages are 4 KB)
How to determine whether the page is in MM? A present/absent bit in the page table entry.
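A minimal C sketch (not from the original slides) of the translation the MMU performs under the split above: a 20-bit virtual page number indexes a hypothetical one-level page table whose entries hold a present/absent bit and a frame number, and the 12-bit offset is copied through unchanged.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT  12                         /* 4 KB pages -> 12 offset bits */
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define NUM_VPAGES  (1u << 20)                 /* 20-bit virtual page number   */

typedef struct {
    unsigned present : 1;                      /* present/absent bit           */
    unsigned frame   : 20;                     /* physical frame number        */
} pte_t;

static pte_t page_table[NUM_VPAGES];           /* hypothetical one-level table */

/* Translate a 32-bit virtual address into a physical address. */
unsigned translate(unsigned vaddr)
{
    unsigned vpn    = vaddr >> PAGE_SHIFT;     /* upper 20 bits: page number   */
    unsigned offset = vaddr & (PAGE_SIZE - 1); /* lower 12 bits: offset        */

    if (!page_table[vpn].present) {
        fprintf(stderr, "page fault on page %u\n", vpn);
        exit(1);                               /* a real MMU traps to the OS   */
    }
    return (page_table[vpn].frame << PAGE_SHIFT) | offset;
}

int main(void)
{
    page_table[5].present = 1;                 /* pretend page 5 sits in frame 2 */
    page_table[5].frame   = 2;
    printf("0x%08x\n", translate((5u << PAGE_SHIFT) + 0x123));  /* 0x00002123  */
    return 0;
}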

Demand Paging
Possible mapping of pages.
Page fault: the requested page is not in MM.
Demand paging:
– The page is demanded by the program.
– The page is loaded into MM.
But… what to bring in for a program on start-up?

Working Set
– The set of pages used by a process.
– Each process has a unique memory map.
– Important for a multitasking OS.
At time t, there is a set of all pages used in the k most recent references; references tend to cluster on a small number of pages.
Put this set to work: store & load it during process switching.
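A small C sketch (an illustration, not from the slides) of one common reading of that definition: the working set W(t, k) is the set of distinct pages among the last k references before time t.

#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES 16

/* W(t, k): the distinct pages among the last k references before time t. */
int working_set(const int *refs, int t, int k, bool in_set[NUM_PAGES])
{
    int size = 0;
    for (int p = 0; p < NUM_PAGES; p++)
        in_set[p] = false;
    for (int i = (t - k < 0 ? 0 : t - k); i < t; i++)
        if (!in_set[refs[i]]) {
            in_set[refs[i]] = true;
            size++;
        }
    return size;
}

int main(void)
{
    int refs[] = { 1, 2, 1, 3, 2, 2, 1, 4 };   /* references cluster on a few pages */
    bool ws[NUM_PAGES];
    /* The last 5 references before t = 8 are 3, 2, 2, 1, 4 -> working set {1, 2, 3, 4}. */
    printf("working set size = %d\n", working_set(refs, 8, 5, ws));
    return 0;
}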

Page Replacement Policy
Working set:
– Set of pages used actively & heavily
– Kept in memory to reduce page faults
– Found/maintained dynamically by the OS
Replacement: the OS tries to predict which page would have the least impact on the running program.
Common replacement schemes:
– Least Recently Used (LRU)
– First-In-First-Out (FIFO)

Page Replacement Policies
Least Recently Used (LRU)
– Generally works well.
– TROUBLE: when the working set is larger than main memory.
Example: working set = 9 pages; the pages are executed in sequence (0 → 8, repeat) → THRASHING.
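A short C sketch (illustrative; the 8-frame memory size is an assumption, chosen to be one frame smaller than the working set) that reproduces the trouble case: with pages touched in the sequence 0..8 over and over, LRU always evicts exactly the page that will be needed next, so every single reference faults.

#include <stdio.h>

#define FRAMES 8
#define PAGES  9

int main(void)
{
    int  frame_page[FRAMES];                  /* which page each frame holds   */
    long last_use[FRAMES];                    /* timestamp of last reference   */
    long faults = 0, now = 0;

    for (int f = 0; f < FRAMES; f++) { frame_page[f] = -1; last_use[f] = -1; }

    for (int pass = 0; pass < 100; pass++) {
        for (int page = 0; page < PAGES; page++, now++) {
            int hit = -1, victim = 0;
            for (int f = 0; f < FRAMES; f++) {
                if (frame_page[f] == page) hit = f;
                if (last_use[f] < last_use[victim]) victim = f;
            }
            if (hit >= 0) {
                last_use[hit] = now;          /* page already resident         */
            } else {
                faults++;                     /* evict the least recently used */
                frame_page[victim] = page;
                last_use[victim]   = now;
            }
        }
    }
    printf("%ld faults in %ld references\n", faults, now);  /* every reference faults */
    return 0;
}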

Page Replacement Policies
First-In-First-Out (FIFO)
– Removes the least recently loaded page.
– Does not depend on use.
– Determined by the number of page faults seen by a page.

Page Replacement Policies
Upon replacement:
– Need to know whether to write the data back.
– Add a dirty bit:
  Dirty bit = 0: the page is clean; no writing needed.
  Dirty bit = 1: the page is dirty; write it back.
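A sketch that ties the last two slides together (a minimal illustration, not the slides' code): frames record when they were loaded, FIFO evicts the least recently *loaded* frame regardless of use, and the victim's page is written back to disk only when its dirty bit is set.

#include <stdbool.h>
#include <stdio.h>

#define FRAMES 4

typedef struct {
    int  page;        /* virtual page held in this frame (-1 = empty) */
    long loaded_at;   /* load order, used by FIFO                     */
    bool dirty;       /* set when the page has been written           */
} frame_t;

static frame_t frames[FRAMES];
static long load_clock = 0;

/* Reference a page; 'write' marks it dirty. Returns 1 on a page fault. */
int reference(int page, bool write)
{
    int oldest = 0;
    for (int f = 0; f < FRAMES; f++) {
        if (frames[f].page == page) {                  /* hit */
            frames[f].dirty = frames[f].dirty || write;
            return 0;
        }
        if (frames[f].loaded_at < frames[oldest].loaded_at)
            oldest = f;
    }
    /* Page fault: evict the least recently loaded frame (FIFO). */
    if (frames[oldest].page != -1 && frames[oldest].dirty)
        printf("write page %d back to disk\n", frames[oldest].page);
    frames[oldest].page      = page;                   /* (re)load from disk   */
    frames[oldest].loaded_at = load_clock++;
    frames[oldest].dirty     = write;
    return 1;
}

int main(void)
{
    for (int f = 0; f < FRAMES; f++) { frames[f].page = -1; frames[f].loaded_at = -1; }
    int faults = 0;
    faults += reference(1, true);    /* page 1 is written, so it becomes dirty */
    faults += reference(2, false);
    faults += reference(1, false);   /* hit: no fault                          */
    faults += reference(3, false);
    faults += reference(4, false);
    faults += reference(5, false);   /* evicts page 1 (loaded first); dirty -> write back */
    printf("%d faults\n", faults);
    return 0;
}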

Fragmentation
Generally, a process (program + data) is not an integral number of pages, so space is wasted on the last page.
Example:
– Program + data = 26,000 bytes
– Page = 4,096 bytes
– Result = 2,672 bytes wasted on the last page
Internal fragmentation: fragments within a page.
How do we solve this problem?
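A tiny C check of that arithmetic (taking the program-plus-data size to be 26,000 bytes, which is consistent with the 2,672-byte remainder): round the size up to whole pages and see what is left over on the last one.

#include <stdio.h>

int main(void)
{
    const int page_size    = 4096;
    const int program_size = 26000;                        /* program + data   */

    int pages  = (program_size + page_size - 1) / page_size;  /* round up -> 7 */
    int wasted = pages * page_size - program_size;             /* 2672 bytes   */

    printf("%d pages, %d bytes of internal fragmentation\n", pages, wasted);
    return 0;
}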

Page Size
A smaller page size allows for less wasted space.
Benefits:
– Less internal fragmentation
– Less thrashing
Drawbacks:
– Larger page table (more storage for it)
– Greater cost
– More time to load pages (more time spent at the disk)
– Higher miss rate

Segmentation
An alternative to the one-dimensional view of paging: create more than one address space.
Example: the compilation process, with one segment for each table, each running from 0 up to a large address:
– Symbol table
– Source text
– Constants
– Parse tree
– Stack

Segmentation
Example: the compilation process, one segment for each table.
In a 1-D address space the tables (symbol table, source text, constants, parse tree, stack) sit end to end:
– Some grow continuously.
– Some grow unpredictably!!
So one table can collide with the next.

Segmentation
Example: the compilation process, one segment for each table.
In a segmented address space, each table (symbol table, source text, constants, parse tree, stack) gets its own segment and can shrink/grow independently.

Segmentation
Virtual address = segment # + offset within the segment.
The programmer is aware of segments:
– May contain a procedure, an array, a stack
– Generally not a mixture of types
Each segment is a logical entity; it is generally not paged.
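A minimal C sketch (illustrative; the segment table contents are invented) of translating a (segment #, offset) pair: each segment-table entry holds a base and a limit, the offset is checked against the limit, and the physical address is base plus offset.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned base;    /* where the segment starts in physical memory */
    unsigned limit;   /* segment length in bytes                     */
} segment_t;

/* Hypothetical segment table: e.g. 0 = source text, 1 = symbol table, 2 = stack. */
static segment_t seg_table[] = {
    { 0x00000, 0x4000 },
    { 0x10000, 0x2000 },
    { 0x20000, 0x1000 },
};

/* Virtual address = (segment #, offset in segment). */
unsigned translate(unsigned seg, unsigned offset)
{
    if (seg >= sizeof seg_table / sizeof seg_table[0] ||
        offset >= seg_table[seg].limit) {
        fprintf(stderr, "segmentation violation\n");
        exit(1);
    }
    return seg_table[seg].base + offset;      /* segments are not paged here */
}

int main(void)
{
    printf("0x%05x\n", translate(1, 0x0123)); /* prints 0x10123 */
    return 0;
}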

Segmentation
Benefits
– Eases linking of procedures.
– Facilitates sharing of procedures & data.
– Protection: since the user creates the segment, the user knows what is stored there and can specify:
  EXECUTE for procedure code
  READ/WRITE for an array

Implementation of Segmentation
Two ways to implement:
– Swapping
– Paging (a little bit of both)

Implementation of Segmentation
Swapping
– Like demand paging: move segments out to make room for a new one.
– Example: a request for segment 7 moves out segment 1.

Implementation of Segmentation
Swapping
– Like demand paging.
– Over a period of time: wasted space!!! External fragmentation.

Elimination of External Fragmentation
Two methods:
– Compaction
– Fit segments into existing holes

Elimination of External Fragmentation
Compaction
– Moving segments closer to zero to eliminate the wasted space between them.

Elimination of External Fragmentation
Fit segments into existing holes
– Maintain a list of addresses & hole sizes.
– Algorithms:
  BEST FIT: choose the smallest hole that fits.
  FIRST FIT: scan circularly & choose the first hole that fits.

Elimination of External Fragmentation
Fit segments into existing holes
– BEST FIT: choose the smallest hole that fits.

Elimination of External Fragmentation
Fit segments into existing holes
– FIRST FIT: scan circularly & choose the first hole that fits.
– Empirically proven to give the best results.

Elimination of External Fragmentation
Best of both worlds: hole coalescing
– Use best fit as your de-fragmentation algorithm.
– Upon removal of a segment: COALESCE ANY NEIGHBORING HOLES.

Simplified Fixed Partition Memory Table

Job list: J1 30K, J2 50K, J3 30K, J4 25K

              Original state    After job entry
Partition 1:  100K              Job 1 (30K)
Partition 2:  25K               Job 4 (25K)
Partition 3:  25K               (empty)
Partition 4:  50K               Job 2 (50K)

Main memory use during fixed partition allocation of Table 2.1. Job 3 must wait.

Dynamic Partitions
– Available memory is kept in contiguous blocks, and jobs are given only as much memory as they request when loaded.
– Improves memory use over fixed partitions.
– Performance deteriorates as new jobs enter the system: fragments of free memory are created between blocks of allocated memory (external fragmentation).

Dynamic Partitioning of Main Memory & Fragmentation

Dynamic Partition Allocation Schemes
First-fit: allocate the first partition that is big enough.
– Keep free/busy lists organized by memory location (low-order to high-order).
– Faster in making the allocation.
Best-fit: allocate the smallest partition that is big enough.
– Keep free/busy lists ordered by size (smallest to largest).
– Produces the smallest leftover partition.
– Makes best use of memory.
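A C sketch of the two selection rules (an illustration, not the textbook's code), using the same free-block sizes as the example tables that follow: first-fit scans the list in address order and takes the first hole that is big enough; best-fit examines every hole and takes the smallest one that still fits.

#include <stdio.h>

#define NBLOCKS 4

/* Free blocks listed in memory-location order; sizes in KB. */
static int free_kb[NBLOCKS] = { 30, 15, 50, 20 };

int first_fit(int request_kb)
{
    for (int i = 0; i < NBLOCKS; i++)
        if (free_kb[i] >= request_kb)
            return i;                         /* first hole that fits          */
    return -1;                                /* job must wait                 */
}

int best_fit(int request_kb)
{
    int best = -1;
    for (int i = 0; i < NBLOCKS; i++)
        if (free_kb[i] >= request_kb &&
            (best == -1 || free_kb[i] < free_kb[best]))
            best = i;                         /* smallest hole that fits       */
    return best;
}

int main(void)
{
    /* A 20K request: first-fit takes block 0 (30K), best-fit takes block 3 (20K). */
    printf("20K job: first-fit -> block %d, best-fit -> block %d\n",
           first_fit(20), best_fit(20));
    return 0;
}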

First-Fit Allocation Example

Job list: J1 10K, J2 20K, J3 30K (must wait), J4 10K

Memory location   Memory block size   Job number   Job size   Status   Internal fragmentation
10240             30K                 J1           10K        Busy     20K
40960             15K                 J4           10K        Busy     5K
56320             50K                 J2           20K        Busy     30K
107520            20K                 (none)       (none)     Free     (none)

Total available: 115K    Total used: 40K

Best-Fit Allocation Example

Job list: J1 10K, J2 20K, J3 30K, J4 10K

Memory location   Memory block size   Job number   Job size   Status   Internal fragmentation
40960             15K                 J1           10K        Busy     5K
107520            20K                 J2           20K        Busy     None
10240             30K                 J3           30K        Busy     None
56320             50K                 J4           10K        Busy     40K

Total available: 115K    Total used: 70K

First-Fit Memory Request

Best-Fit Memory Request

Best-Fit vs. First-Fit
First-fit:
– Increases memory use.
– Memory allocation takes less time.
– Increases internal fragmentation.
– Discriminates against large jobs.
Best-fit:
– More complex algorithm.
– Searches the entire table before allocating memory.
– Results in a smaller leftover "free" space (sliver).

Release of Memory Space: Deallocation
Deallocation for fixed partitions is simple:
– The Memory Manager resets the status of the memory block to "free".
Deallocation for dynamic partitions tries to combine free areas of memory whenever possible:
– Is the block adjacent to another free block?
– Is the block between 2 free blocks?
– Is the block isolated from other free blocks?
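A C sketch of those three cases over a free list kept sorted by address (the data structure and names are invented for illustration): a returned block is merged with the hole below it, the hole above it, both, or inserted as an isolated hole.

#include <stdio.h>

#define MAX_FREE 16

typedef struct { int start, size; } hole_t;

static hole_t holes[MAX_FREE];                /* kept sorted by start address  */
static int nholes = 0;

/* Return a block to the free list, coalescing with neighbors if adjacent. */
void deallocate(int start, int size)
{
    int i = 0;
    while (i < nholes && holes[i].start < start) i++;

    int joins_prev = i > 0      && holes[i-1].start + holes[i-1].size == start;
    int joins_next = i < nholes && start + size == holes[i].start;

    if (joins_prev && joins_next) {           /* between 2 free blocks: join 3 */
        holes[i-1].size += size + holes[i].size;
        for (int j = i; j < nholes - 1; j++) holes[j] = holes[j+1];
        nholes--;
    } else if (joins_prev) {                  /* adjacent to the block below   */
        holes[i-1].size += size;
    } else if (joins_next) {                  /* adjacent to the block above   */
        holes[i].start = start;
        holes[i].size += size;
    } else {                                  /* isolated: insert a new hole   */
        for (int j = nholes; j > i; j--) holes[j] = holes[j-1];
        holes[i].start = start;
        holes[i].size  = size;
        nholes++;
    }
}

int main(void)
{
    deallocate(0, 100);
    deallocate(300, 50);
    deallocate(100, 200);                     /* joins both neighbors          */
    printf("%d hole(s): start %d, size %d\n", nholes, holes[0].start, holes[0].size);
    return 0;
}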

Case 1: Joining 2 Free Blocks

Case 2: Joining 3 Free Blocks

Case 3: Deallocating an Isolated Block

Relocatable Dynamic Partitions
– The Memory Manager relocates programs to gather all empty blocks and compact them into one memory block.
– Memory compaction (garbage collection, defragmentation) is performed by the OS to reclaim fragmented sections of memory space.
– The Memory Manager optimizes the use of memory & improves throughput by compacting & relocating.

Compaction Steps
1. Relocate every program in memory so they're contiguous.
2. Adjust every address, and every reference to an address, within each program to account for the program's new location in memory.
3. Leave alone all other values within the program (e.g., data values).

Program in Memory During Compaction & Relocation
– The free list & busy list are updated:
  the free list shows the partition for the new block of free memory;
  the busy list shows the new locations of all relocated jobs.
– The bounds register stores the highest location in memory accessible by each program.
– The relocation register contains the value that must be added to each address referenced in the program so it can access correct memory addresses after relocation.
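A minimal C sketch of that idea (the register names, the reading of the bounds check, and the example numbers are all assumptions for illustration, not the values from Figure 2.6): every address the job issues is checked against its bounds register and then has the relocation register added to it, which is negative when compaction moved the job toward zero.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int relocation;   /* amount the job moved (negative if moved toward zero) */
    int bounds;       /* highest address the job may access                   */
} job_regs_t;

/* Map a job-issued address to a physical address after relocation. */
int effective_address(const job_regs_t *job, int addr)
{
    if (addr > job->bounds) {
        fprintf(stderr, "address out of bounds\n");
        exit(1);
    }
    return addr + job->relocation;
}

int main(void)
{
    /* Hypothetical job: compacted 12288 bytes (12K) toward zero. */
    job_regs_t job = { .relocation = -12288, .bounds = 32767 };
    printf("%d\n", effective_address(&job, 31744));   /* prints 19456 */
    return 0;
}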

Memory Before & After Compaction (Figure 2.5)

Contents of relocation register & close-up of Job 4 memory area (a) before relocation & (b) after relocation and compaction (Figure 2.6)

More Overhead is a Problem with Compaction & Relocation
The timing of compaction (when, how often) is crucial. Approaches to timing of compaction:
1. Compact when a certain percentage of memory is busy (e.g., 75%).
2. Compact only when jobs are waiting.
3. Compact after a certain amount of time.