Virtual Memory Prof. Sin-Min Lee Department of Computer Science.


Parkinson's Law: "Programs expand to fill the memory available to hold them." Idea: manage the available storage efficiently among the competing programs.

Before VM… Programmers tried to shrink programs to fit tiny memories Result: –Small, inefficient algorithms

Solution to Memory Constraints Use secondary storage such as disk Divide programs into pieces that fit memory (RAM) –Called Virtual Memory

Implementations of VM Paging –Address space broken up into fixed-size pages Segmentation –Address space broken up into variable-size segments

Memory Issues Idea: Separate the concepts of –address space (disk) –memory locations (RAM) Example: –Address field = 16 bits, so 2^16 = 65,536 addressable cells –Memory size = 4096 memory cells How can we fit the Address Space into Main Memory?

Paging Break memory into pages (1 page = 4096 bytes) NOTE: normally Main Memory holds thousands of pages New issue: How to manage addressing?

Address Mapping Mapping secondary-memory addresses to Main Memory addresses (1 page = 4096 bytes): virtual address → physical address

Address Mapping Mapping secondary-memory (program/virtual) addresses to Main Memory (physical) addresses (1 page = 4096 bytes) –virtual address: used by the program –physical address: used by the hardware

Paging Illusion that Main Memory is –Large –Contiguous –Linear –Size(MM) = Size(2ndry M) Transparent to the programmer

Paging Implementation Virtual Address Space (program) & Physical Address Space (MM) –Broken up into equal pages (just like cache & MM!!) Page size: always a power of 2 Common sizes: 512 bytes to 64 KB

Paging Implementation Page Frames Page Tables Programs use Virtual Addresses

Memory Mapping Note: 2ndry Mem = 64K; Main Mem = 32K Page Frame: home of VM pages in MM Page Table: home of mappings for VM pages (Page # → Page Frame #)

Memory Mapping Memory Management Unit (MMU): device that performs virtual-to-physical mapping, translating a 32-bit VM address into a 15-bit physical address

Memory Management Unit 32-bit Virtual Address broken into 2 portions: –20-bit virtual page # –12-bit offset in page (since our pages are 4 KB) How to determine if the page is in MM? Present/Absent bit in the Page Table Entry
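The 20/12-bit split above can be sketched with a shift and a mask (a minimal illustration; the sample address is made up):

```python
PAGE_SIZE = 4096       # 2**12 bytes, so the offset field is 12 bits wide
OFFSET_BITS = 12

def split_virtual_address(va):
    """Split a 32-bit virtual address into (virtual page #, offset)."""
    page = va >> OFFSET_BITS        # top 20 bits select the virtual page
    offset = va & (PAGE_SIZE - 1)   # bottom 12 bits locate the byte in the page
    return page, offset

# Example: top 20 bits of 0x00403A7F give page 0x403; the low 12 give 0xA7F
page, offset = split_virtual_address(0x00403A7F)
```

Recombining `(page << OFFSET_BITS) | offset` reproduces the original address, which is a handy sanity check.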

Demand Paging Possible Mapping of pages Page Fault: Requested page is not in MM Demand Paging: Page is demanded by program Page is loaded into MM

Demand Paging Possible Mapping of pages Page Fault: Requested page is not in MM Demand Paging: Page is demanded by program Page is loaded into MM But… What to bring in for a program on start up?
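A minimal sketch of the lookup just described, assuming a present/absent bit per page table entry (class and field names are illustrative, not from any real OS):

```python
# Demand-paging sketch: an absent page triggers a "page fault" and is
# loaded into a free frame on demand; a present page is a hit.

class PageTableEntry:
    def __init__(self):
        self.present = False   # present/absent bit
        self.frame = None      # page frame # once the page is loaded

class Memory:
    def __init__(self, num_pages, num_frames):
        self.table = [PageTableEntry() for _ in range(num_pages)]
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def translate(self, page, offset, page_size=4096):
        entry = self.table[page]
        if not entry.present:                      # page fault
            self.faults += 1
            entry.frame = self.free_frames.pop(0)  # "load" page from disk
            entry.present = True
        return entry.frame * page_size + offset

mem = Memory(num_pages=16, num_frames=8)
addr1 = mem.translate(page=3, offset=100)   # fault: page 3 gets a frame
addr2 = mem.translate(page=3, offset=200)   # hit: same frame, no new fault
```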

Working Set Set of pages used by a process Each process has a unique memory map Important for a multitasking OS At time t, there is a set of pages touched by the k most recent references References tend to cluster on a small number of pages Put this set to work!!! Store & load it during process switching
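One common formalization of the idea above is W(t, k): the distinct pages among the k most recent references at time t. A small sketch (the reference string is invented):

```python
def working_set(references, t, k):
    """W(t, k): distinct pages in the k references ending at index t."""
    window = references[max(0, t - k + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 2, 2, 4, 1]
ws = working_set(refs, t=7, k=4)   # last 4 references are 2, 2, 4, 1
```

Because references cluster, the working set is usually much smaller than k, which is what makes saving and restoring it on a process switch practical.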

Page Replacement Policy Working Set: –Set of pages used actively & heavily –Kept in memory to reduce Page Faults Set is found/maintained dynamically by OS Replacement: OS tries to predict which page would have least impact on the running program Common Replacement Schemes: Least Recently Used (LRU) First-In-First-Out (FIFO)

Replacement Policy (the placement policy, which decides where a piece is loaded, is trivial with pure paging) –Which page is replaced? –The page removed should be the page least likely to be referenced in the near future –Most policies predict future behavior on the basis of past behavior

Replacement Policy Frame Locking –If frame is locked, it may not be replaced –Kernel of the operating system –Control structures –I/O buffers –Associate a lock bit with each frame

Basic Replacement Algorithms Optimal policy –Selects for replacement that page for which the time to the next reference is the longest –Impossible to have perfect knowledge of future events

Basic Replacement Algorithms Least Recently Used (LRU) –Replaces the page that has not been referenced for the longest time –By the principle of locality, this should be the page least likely to be referenced in the near future –Each page could be tagged with the time of last reference. This would require a great deal of overhead.

Basic Replacement Algorithms First-in, first-out (FIFO) –Treats page frames allocated to a process as a circular buffer –Pages are removed in round-robin style –Simplest replacement policy to implement –Page that has been in memory the longest is replaced –These pages may be needed again very soon

Basic Replacement Algorithms Clock Policy –Additional bit called a use bit –When a page is first loaded in memory, the use bit is set to 1 –When the page is referenced, the use bit is set to 1 –When it is time to replace a page, the first frame encountered with the use bit set to 0 is replaced. –During the search for replacement, each use bit set to 1 is changed to 0
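The three practical policies above (FIFO, LRU, Clock) can be compared by counting page faults on a reference string. This sketch is a simplified model, with empty frames filled before any eviction:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, q, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(q.popleft())   # evict the oldest arrival
            mem.add(p); q.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)            # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)   # evict least recently used
            mem[p] = True
    return faults

def clock_faults(refs, num_frames):
    frames = [None] * num_frames
    use = [0] * num_frames                # use bits
    where = {}                            # page -> frame index
    hand, faults = 0, 0
    for p in refs:
        if p in where:
            use[where[p]] = 1             # a reference sets the use bit
            continue
        faults += 1
        while frames[hand] is not None and use[hand] == 1:
            use[hand] = 0                 # clear the bit: a second chance
            hand = (hand + 1) % num_frames
        if frames[hand] is not None:
            del where[frames[hand]]       # evict the page with use bit 0
        frames[hand] = p
        where[p] = hand
        use[hand] = 1
        hand = (hand + 1) % num_frames
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

On this string with 3 frames, FIFO and Clock each take 9 faults while LRU takes 10; which policy wins depends on the reference pattern.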

Page Replacement Policies Upon Replacement –Need to know whether to write data back –Add a Dirty-Bit Dirty Bit = 0; Page is clean; No writing Dirty Bit = 1; Page is dirty; Write back

Fragmentation Generally… –Process: Program + Data != integral # of pages Wasted space on the last page Example: –Program + Data = 26,000 bytes –Page = 4096 bytes –Result = 2672 bytes wasted on the last page (6 full pages + 1424 bytes on a 7th) Internal Fragmentation: fragments within a page How do we solve this problem?
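The arithmetic on this slide can be checked directly (the 26,000-byte program size is consistent with the stated 2,672 bytes of waste: six full 4,096-byte pages plus 1,424 bytes on a seventh):

```python
PAGE_SIZE = 4096

def internal_fragmentation(size_bytes, page_size=PAGE_SIZE):
    """Bytes wasted on the last page (0 if the size fills pages exactly)."""
    remainder = size_bytes % page_size
    return 0 if remainder == 0 else page_size - remainder

waste = internal_fragmentation(26_000)   # the slide's program + data size
```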

Page Size Smaller page size allows for less wasted space Benefits: –Less internal fragmentation –Less thrashing Drawbacks: –Larger page table –More storage for mappings –Greater hardware cost –Longer load time (more time spent at disk) –Possibly higher miss rate

Segmentation Alternative to the 1-D view of Paging Create more than 1 address space Example –Compilation process: 1 segment for each table Symbol Table, Source Text, Constants, Parse Tree, Stack

Segmentation Example –Compilation process: 1 segment for each table In a 1-D Address Space, the Symbol Table, Source Text, Constants, Parse Tree, and Stack share one space –One table grows continuously, another grows unpredictably!!

Segmentation Example –Compilation Process 1 segment for each table In a Segmented Address Space Symbol Table Source Text Constants Parse Tree Stack Shrink/grow independently

Segmentation Virtual Address = Segment # + Offset in Segment Programmer is aware of Segments –May contain procedure, array, stack –Generally not a mixture of types Each Segment is a Logical Entity –It is generally not paged

Segmentation Benefits –Eases Linking of Procedures –Facilitates sharing of Procedures & Data –Protection Since user creates –User knows what is stored –Can specify »EXECUTE: procedure code »READ/WRITE: array

Implementation of Segmentation Two Ways to Implement –Swapping –Paging (a little bit of both)

Implementation of Segmentation Swapping –Like Demand Paging: move out segments to make room for a new one (e.g., a request for segment 7 moves out segment 1)

Implementation of Segmentation Swapping –Like Demand Paging Over a Period of Time Wasted Space!!! External Fragmentation

Elimination of External Fragmentation 2 Methods –Compaction –Fit Segments in Existing Holes

Elimination of External Fragmentation Compaction –Moving segments closer to zero to eliminate wasted space Compaction

Fit Segments in Existing Holes –Maintain a list of Addresses & Hole Sizes –Algorithms: BEST FIT: choose the smallest hole that fits FIRST FIT: scan circularly & choose the first hole that fits Elimination of External Fragmentation

Fit Segments in Existing Holes –Maintain a list of Addresses & Hole Size –Algorithms: BEST FIT : choose smallest hole Elimination of External Fragmentation

Fit Segments in Existing Holes –Maintain a list of Addresses & Hole Size –Algorithms: FIRST FIT : scan circularly & choose which fits first First Hole Elimination of External Fragmentation Empirically proven to give best results

Elimination of External Fragmentation Best of Both Worlds –Hole Coalescing Use Best Fit as your De-Fragmentation Algorithm Upon Removal: –COALESCE ANY NEIGHBORING HOLES

Memory Management, Early Systems Single-User Contiguous Scheme Fixed Partitions Dynamic Partitions Deallocation Relocatable Dynamic Partitions Conclusion

Single-User Contiguous Scheme Each program is loaded in its entirety into memory and allocated as much contiguous memory space as needed. If the program is too large, it can't be executed. Minimal amount of work done by the Memory Manager. Hardware needed: 1) register to store the base address; 2) accumulator to track the size of the program as it is loaded into memory.

Algorithm to Load a Job in a Single-user System
1. Store first memory location of program into base register
2. Set program counter equal to address of first memory location
3. Load instructions of program
4. Increment program counter by number of bytes in instructions
5. Has the last instruction been reached? If yes, then stop loading program. If no, then continue with step 6
6. Is program counter greater than memory size? If yes, then stop loading. If no, then continue with step 7
7. Load instruction in memory
8. Go to step 3

Fixed (Static) Partitions Attempt at multiprogramming using fixed partitions –one partition for each job –size of partition designated by reconfiguring the system –partitions can’t be too small or too large. Critical to protect job’s memory space. Entire program stored contiguously in memory during entire execution. Internal fragmentation is a problem.

Simplified Fixed Partition Memory Table

Job List: J1 30K, J2 50K, J3 30K, J4 25K

Original State → After Job Entry:
Partition 1 (100K) → Job 1 (30K)
Partition 2 (25K) → Job 4 (25K)
Partition 3 (25K) → (empty)
Partition 4 (50K) → Job 2 (50K)

Main memory use during fixed partition allocation of Table 2.1. Job 3 must wait.

Dynamic Partitions Available memory kept in contiguous blocks and jobs given only as much memory as they request when loaded. Improves memory use over fixed partitions. Performance deteriorates as new jobs enter the system –fragments of free memory are created between blocks of allocated memory (external fragmentation).

Dynamic Partitioning of Main Memory & Fragmentation

Dynamic Partition Allocation Schemes First-fit: Allocate the first partition that is big enough. –Keep free/busy lists organized by memory location (low-order to high-order). –Faster in making the allocation. Best-fit: Allocate the smallest partition that is big enough. –Keep free/busy lists ordered by size (smallest to largest). –Produces the smallest leftover partition. –Makes best use of memory.

First-Fit Allocation Example Job List: J1 10K, J2 20K, J3 30K (must wait), J4 10K

Memory block size | Job # | Job size | Status | Internal fragmentation
30K | J1 | 10K | Busy | 20K
15K | J4 | 10K | Busy | 5K
50K | J2 | 20K | Busy | 30K
20K | – | – | Free | –

Total Available: 115K; Total Used: 40K

Best-Fit Allocation Example Job List: J1 10K, J2 20K, J3 30K, J4 10K

Memory block size | Job # | Job size | Status | Internal fragmentation
15K | J1 | 10K | Busy | 5K
20K | J2 | 20K | Busy | None
30K | J3 | 30K | Busy | None
50K | J4 | 10K | Busy | 40K

Total Available: 115K; Total Used: 70K

First-Fit Memory Request

Best-Fit Memory Request

Best-Fit vs. First-Fit First-Fit –Increases memory use –Memory allocation takes less time –Increases internal fragmentation –Discriminates against large jobs Best-Fit –More complex algorithm –Searches entire table before allocating memory –Results in a smaller leftover "free" space (sliver)
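A sketch of both schemes, using the block sizes implied by the allocation tables above (30K, 15K, 50K, 20K in location order); the helper names are made up:

```python
def first_fit(blocks, job):
    """Index of the first free block large enough, else None."""
    for i, (size, free) in enumerate(blocks):
        if free and size >= job:
            return i
    return None

def best_fit(blocks, job):
    """Index of the smallest free block large enough, else None."""
    best = None
    for i, (size, free) in enumerate(blocks):
        if free and size >= job and (best is None or size < blocks[best][0]):
            best = i
    return best

def allocate(block_sizes, jobs, policy):
    blocks = [[size, True] for size in block_sizes]   # [size, free?]
    placements = {}
    for name, size in jobs:
        i = policy(blocks, size)
        if i is not None:                # a job with no fitting block waits
            blocks[i][1] = False
            placements[name] = blocks[i][0]   # record the block size used
    return placements

block_sizes = [30, 15, 50, 20]           # sizes in K, in location order
jobs = [("J1", 10), ("J2", 20), ("J3", 30), ("J4", 10)]
ff = allocate(block_sizes, jobs, first_fit)   # J3 finds no block and waits
bf = allocate(block_sizes, jobs, best_fit)    # every job fits
```

This reproduces the tables above: first-fit strands J3 while best-fit places all four jobs.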

Release of Memory Space : Deallocation Deallocation for fixed partitions is simple –Memory Manager resets status of memory block to “free”. Deallocation for dynamic partitions tries to combine free areas of memory whenever possible –Is the block adjacent to another free block? –Is the block between 2 free blocks? –Is the block isolated from other free blocks?
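The three deallocation cases reduce to one rule: mark the block free, then merge it with any adjacent free neighbor. A sketch (the block layout is invented):

```python
# Blocks are (start, size, free) tuples kept sorted by start address.

def deallocate(blocks, start):
    """Free the block at `start`, then coalesce adjacent free blocks."""
    blocks = [list(b) for b in blocks]
    for b in blocks:
        if b[0] == start:
            b[2] = True                      # mark the block free
    merged = []
    for b in blocks:
        prev = merged[-1] if merged else None
        if prev and prev[2] and b[2] and prev[0] + prev[1] == b[0]:
            prev[1] += b[1]                  # coalesce with the free neighbor
        else:
            merged.append(b)
    return [tuple(b) for b in merged]

# Case 2 from the slides: a busy block sits between two free holes,
# so freeing it joins all three into one block.
blocks = [(0, 100, True), (100, 50, False), (150, 70, True)]
after = deallocate(blocks, 100)
```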

Case 1: Joining 2 Free Blocks

Case 2: Joining 3 Free Blocks

Case 3: Deallocating an Isolated Block

Relocatable Dynamic Partitions Memory Manager relocates programs to gather all empty blocks and compact them to make 1 memory block. Memory compaction (garbage collection, defragmentation) performed by OS to reclaim fragmented sections of memory space. Memory Manager optimizes use of memory & improves throughput by compacting & relocating.

Compaction Steps Relocate every program in memory so they’re contiguous. Adjust every address, and every reference to an address, within each program to account for program’s new location in memory. Must leave alone all other values within the program (e.g., data values).

Program in Memory During Compaction & Relocation Free list & busy list are updated –free list shows partition for new block of free memory –busy list shows new locations for all relocated jobs Bounds register stores highest location in memory accessible by each program. Relocation register contains value that must be added to each address referenced in program so it can access correct memory addresses after relocation.
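The relocation register's job can be shown in one line (the numbers are hypothetical: a job compacted 12K lower in memory gets a relocation value of -12288):

```python
def relocated(address, relocation_register):
    """Physical address after compaction: the relocation register's
    value is added to every address the program references."""
    return address + relocation_register

def within_bounds(address, bounds_register):
    """Bounds register holds the highest location the program may access."""
    return address <= bounds_register

new_addr = relocated(30720, -12288)   # old reference now maps 12K lower
```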

Memory Before & After Compaction (Figure 2.5)

Contents of relocation register & close-up of Job 4 memory area (a) before relocation & (b) after relocation and compaction (Figure 2.6)