Memory Management
Memory Management Issues Data/instructions must be resident in order to be acted upon Each process needs separate memory space OS needs to efficiently determine range of legal addresses
Base and Limit Registers Define the starting address and length of legal memory OS must check each memory reference to be sure it’s within the limit
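The legality check described above can be sketched in Python (a minimal sketch; the register values below are made-up examples, not from the slides):

```python
def is_legal(address, base, limit):
    """A reference is legal iff base <= address < base + limit."""
    return base <= address < base + limit

# Example: a process loaded at base 300040 with limit 120900
# may touch addresses 300040 .. 420939 only.
```

Every memory reference the process generates is checked this way, usually in hardware, before it reaches memory.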
Addresses Absolute Relative Logical – generated by CPU (virtual) Physical (memory unit)
Binding Compile-time – Absolute Addresses Load-time – relocatable at load time; not known at compile time Execution-time – Allows process to be moved during execution Early binding allows for more error checking but is rigid; late binding is flexible but more prone to problems. (why?)
The MMU Memory Management Unit Maps virtual addresses to physical addresses Relocation register contents are added to each address The user does not know where the process will actually reside during execution
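The relocation step is just an addition; a one-line sketch (the register value is a made-up example):

```python
RELOCATION_REGISTER = 14000  # example base chosen by the OS at load time

def mmu_translate(logical_address, relocation=RELOCATION_REGISTER):
    """The MMU adds the relocation register to every logical address."""
    return logical_address + relocation
```

A process that "thinks" it starts at logical address 0 actually runs at physical address 14000 here, without ever knowing it.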
Dynamic Loading Code segments are not loaded into memory until needed Efficient, don’t need to load code that is never used
Swapping Temporarily remove a process from memory and move to backing store (quantum expires) Roll out, roll in is a variation of swapping for priority-based systems (lower priority process swapped out for a higher priority to run) Most swap time is transfer time – the more memory in use, the more time to swap
Concerns with Swapping Must be sure the process being swapped out is idle (for example, not waiting on a pending I/O operation). Either don't swap, or route I/O through OS buffers If the system supports relocatable addressing, a swapped-out process does not need to be swapped back into the same memory addresses Must have enough swap space for the largest executing process
When A Process Is Swapped, Where Does It Go? “Swapped out of memory” – what happens to it? –Goes to “swap space,” which could be a huge unstructured file Relies on the file system – lets the file manager worry about management, but it’s slow –Could be a disk partition – a swap space manager handles it instead of the file system –Could be > 1 partition or > 1 disk –Dedicated swap space (a whole disk). Faster hardware, most expensive
Consider: RR Scheduling Time quantum = 500 ms Swap time for one process (average) is 250ms. Total swap time (in/out) is 500ms How much system time is spent working vs swapping?
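Working through the numbers on this slide (a quick sketch of the arithmetic):

```python
quantum_ms = 500            # time quantum from the slide
swap_one_way_ms = 250       # average swap time for one process
swap_total_ms = 2 * swap_one_way_ms  # roll out + roll in = 500 ms

# If every quantum forces a full swap in/out, only half the
# elapsed time is useful work.
working_fraction = quantum_ms / (quantum_ms + swap_total_ms)
```

So in the worst case the system spends as much time swapping as working, which is why pure swap-per-quantum round robin is impractical.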
Contiguous Memory Allocation Partition memory (OS in low memory, rest for user programs) Define fixed partition sizes The degree of multiprogramming = the number of partitions Popular with older (batch) systems but rarely seen today
Dynamic Memory Allocation Start with memory of size N Memory is allocated to processes as needed; reclaimed when process completes Still contiguous allocation
Allocation Strategies First fit Best fit Worst fit Data structures? Overhead?
Performance of Allocation Strategies First fit – generally fastest First and best are better than worst fit None significantly better in terms of storage utilization
Fragmentation Internal – process is allocated N bytes, needed something < N External – memory holes too small to be usable Compaction –Must support relocation –Pending I/O Problem –Pure overhead Fragmentation happens in memory and on the backing store device
Paging Allows a process’s memory allocation to be non-contiguous Memory is broken into fixed-size blocks called frames Executable image of the process is broken into same-sized blocks called pages Using paging, the entire process is still resident
Paging (2) Address through a page number and displacement into the page A page table is maintained, with one entry per page of the process’s logical address space Each process has its own associated page table Since pages and frames are all the same size, locating the start of a frame is easy (frame # * page size); to locate a specific address, add the displacement into the page
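The translation described above can be sketched as (a minimal sketch; the page-table contents in the comments are made-up examples):

```python
PAGE_SIZE = 4096  # 4K pages: the offset fits in 12 bits

def split_address(logical_address, page_size=PAGE_SIZE):
    """Split a logical address into (page number, offset)."""
    return logical_address // page_size, logical_address % page_size

def translate(logical_address, page_table, page_size=PAGE_SIZE):
    """Look up the frame for the page, then add the offset back in."""
    page, offset = split_address(logical_address, page_size)
    frame = page_table[page]          # e.g. {0: 5, 1: 2} maps page 1 -> frame 2
    return frame * page_size + offset
```

Because the page size is a power of two, real hardware does the split with bit masks and shifts rather than division.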
Paging (3) Internal (to the page) fragmentation –For each process, how often does this occur? –Suppose a 4K page frame, what is the maximum amount of fragmentation? Is it better to make the page size small or large?
Paging (4) Frame maintenance –What’s in each frame –Empty frames –How many frames are available (so arriving jobs can be appropriately handled)
Paging (5) Merits and Disadvantages –Increases context-switching time – address translation and loading new processes –All address references must be looked up and translated –Internal fragmentation –Frame table maintenance –Process PCB must include page information –Others?
Paging (6) If small # of pages, can keep paging info in registers If large # of pages, keep in memory –Keeping in memory is slow – requires a memory access to find table entry, plus another to find the actual data/instruction
Translation Look-Aside Buffer (TLB) Essentially a hardware cache Small, expensive (why it’s small) but very fast Look in TLB first, then go to memory if not found Various algorithms to determine what is kept in TLB; also can wire down frequently used so they are never removed
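The TLB-first lookup described above can be sketched as follows (a simplified sketch; the FIFO eviction policy here is just one possibility, and real TLBs are hardware, not software):

```python
class TLB:
    """A tiny fully associative TLB with FIFO eviction."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}   # page -> frame
        self.order = []     # insertion order, for eviction

    def lookup(self, page, page_table):
        if page in self.entries:        # TLB hit: no extra memory access
            return self.entries[page]
        frame = page_table[page]        # TLB miss: walk the page table in memory
        if len(self.entries) >= self.capacity:
            victim = self.order.pop(0)  # evict the oldest entry
            del self.entries[victim]
        self.entries[page] = frame
        self.order.append(page)
        return frame
```

"Wiring down" an entry would simply mean excluding it from the eviction step.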
Protection in Paging Essentially each frame has a base and limit register Pages can be designated as read only Each process’ page table carries a valid/invalid bit indicating if the frame has been allocated to that process Addresses in the last frame may be invalid (due to internal fragmentation)
Storing the Page Table With a large logical address space, the page table itself gets big (32-bit machine with 4K page size = 2^20 entries; at 32 bits per entry* that is about 4 MB per process) Often too large to keep contiguously in main memory Solution: page the page table *4096-byte pages = 12 bits for displacement; the remaining 32 − 12 = 20 bits form the page #, giving 2^20 entries
Two-Level Page Table Divide the page # into its own page # and offset; the outer part indexes the outer page table This can be further divided by using a three-level table Adapted from Silberschatz, Galvin and Gagne, 2005
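The two-level split can be sketched with the usual 32-bit / 4K-page layout (12-bit offset, with the 20-bit page number divided 10/10 – a common textbook example):

```python
OFFSET_BITS = 12   # 4K pages
P2_BITS = 10       # index into the inner page table
# remaining high 10 bits index the outer page table

def split_two_level(addr):
    """Split a 32-bit address into (outer index p1, inner index p2, offset d)."""
    d  = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (OFFSET_BITS + P2_BITS)
    return p1, p2, d
```

Translation then costs two table lookups (outer table with p1, inner table with p2) before the offset is added.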
Hashed Page Tables Another common solution is to hash the page table. Hashing works as you learned it in CS2/CS3: the key (the page number) is run through a hash function to find the entry Chaining is used to resolve collisions Faster than a sequential search
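A sketch of the hash-with-chaining idea (the trivial modulo hash here is just for illustration):

```python
class HashedPageTable:
    """Hashed page table; each bucket chains (page, frame) pairs to resolve collisions."""
    def __init__(self, buckets=8):
        self.table = [[] for _ in range(buckets)]

    def insert(self, page, frame):
        self.table[page % len(self.table)].append((page, frame))

    def lookup(self, page):
        for p, frame in self.table[page % len(self.table)]:
            if p == page:
                return frame
        return None  # not resident -> page fault
```

With a good hash function the chains stay short, so a lookup touches only a handful of entries instead of scanning the whole table.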
Inverted Page Table One entry for each physical frame of memory; each entry stores the process id and the virtual page # held in that frame Removes the need for each process to store its own page table Because of the ordering (by frame, not by page #), lookups may take longer, so the table is often hashed
Page Table (left) and Inverted Page Table (right)
Segmentation Compiler-generated segments of a program –Code –Globals – Library Functions –…
Segmentation (2) With paging, pages are partitioned by hardware and user has no knowledge In segmentation, address space consists of several logical segments Each segment has a name and length. References are made through the segment and an offset into the segment
Segmentation (3) Allows easy sharing of code segments while keeping data segments private
Virtual Memory Only part of a program needs to be resident during execution The logical (virtual) address space can be much larger than the physical address space
VM Programs are no longer constrained by the size of physical memory More programs can be executing at the same time – all do not need to be fully resident Swapping occurs only when something is needed
Demand Paging Pages loaded as needed Lazy swapper – only swaps what’s needed Book points out that “swapper” is technically incorrect (since the process isn’t swapped, only individual pages) p.319
Page Table A page table is maintained for each executing process Tracks all pages – where they are located in memory, or an invalid bit if not currently resident When an address is referenced, the page table is checked. If the page is resident, access proceeds; if not, a page fault is issued
Page Fault Find a free frame in memory Schedule the disk read to get the page Move the page into the selected frame Update the page table in the PCB Continue the interrupted instruction It sounds so easy!
Problems (it’s not that easy) There are no free frames currently available –Need a replacement policy Frame you select contains data that hasn’t been written back to the disk Frame you select is waiting for an I/O operation to complete
Replacement Strategies Look at access counts Look at time since last usage Reset usage flags at certain time intervals – this prevents a page from being spared because it was heavily used early on in the processing, but not used recently
Replacement Strategies (2) Locality of Reference – clustering of references Global or Local replacement – select from all resident pages or only those belonging to the process
Replacement Algorithms FIFO LRU – select because page hasn’t been used very recently (based on time) LFU – select because page hasn’t been used much (based on access counts)
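FIFO and LRU can be compared with a small fault-counting sketch (the reference string in the test is the classic textbook example):

```python
def count_faults(refs, num_frames, policy):
    """Count page faults for a reference string; policy is 'FIFO' or 'LRU'."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            if policy == 'LRU':       # on a hit, refresh recency
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) >= num_frames:
            frames.pop(0)             # evict oldest (FIFO) / least recent (LRU)
        frames.append(page)
    return faults
```

LRU usually beats FIFO on realistic reference strings because recently used pages are likely to be used again (locality of reference).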
Replacement Algorithms (2) Second Chance. By-pass page once, then evict if selected the second time MRU – select because page was just used (assumes page has been used and is done) Recent activity counts (if pages tend to be used a lot at first, then infrequently thereafter)
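The second-chance (clock) policy above can be sketched like this (a simplified software model; real systems use a hardware-set reference bit):

```python
def second_chance(refs, num_frames):
    """Count page faults under second-chance replacement."""
    frames, ref_bit, hand, faults = [], {}, 0, 0
    for page in refs:
        if page in frames:
            ref_bit[page] = 1                 # hit: set the reference bit
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            while ref_bit[frames[hand]]:      # bit set: give a second chance
                ref_bit[frames[hand]] = 0
                hand = (hand + 1) % num_frames
            del ref_bit[frames[hand]]         # bit clear: evict this page
            frames[hand] = page
            hand = (hand + 1) % num_frames
        ref_bit[page] = 1                     # newly loaded page was just referenced
    return faults
```

A page is evicted only if its reference bit is clear when the clock hand reaches it; a set bit is cleared and the hand moves on.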
Belady’s Anomaly X page faults with N frames Increase the number of frames and, counter-intuitively, the number of page faults can also increase (seen with FIFO)
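The anomaly can be demonstrated with FIFO and the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5 (a self-contained sketch):

```python
def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) >= num_frames:
                frames.pop(0)   # evict the page resident the longest
            frames.append(page)
    return faults

# The classic string that exhibits Belady's anomaly under FIFO
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

With 3 frames this string produces 9 faults, but with 4 frames it produces 10 – more memory, more faults. Stack algorithms such as LRU cannot exhibit the anomaly.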
Dirty Bit A page is selected for removal, but contains unwritten data The context switch must now include a write of the page’s contents We don’t want to write every page being paged out – the dirty bit marks only the pages that have been modified
Thrashing We expect lots of page faults at the beginning of execution – this isn’t thrashing How many frames should be allocated to each process? How many frames can be supported? What size? How do you prevent a thrashing process from causing other processes to thrash? Don’t allow a thrashing process to steal frames from another.
The Working Set Based on locality of reference Set of pages most recently used, and expected to be used in the near future OS monitors each process and allocates enough frames to accommodate the working set If a process starts to thrash, remove it entirely and restart later
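The working set over a window of the most recent Δ references can be sketched as (Δ and the reference string are illustrative choices):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])
```

The OS would size each process's frame allocation to (at least) the size of this set; if the sum of working-set sizes exceeds available frames, some process must be suspended to stop thrashing.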
How many frames? Allocate too many frames: –Few page faults –Unwise allocation of resources Allocate too few frames: –High number of page faults; could lead to thrashing –Low utilization – too many context switches
Prepaging Predict when pages will be needed Easy to do on startup where frequent faults are common Is the prepaging overhead > the fault overhead?
Page Size Tradeoffs Smaller page –Larger page table –Shorter transfer time into memory –Shorter write time for dirty pages –Less internal fragmentation Larger page –Smaller page table –Longer transfer time into memory –Longer write time for dirty pages –More internal fragmentation
Programmers and Page Faults Think about locality of references Array accesses Stacks – good since references always to the top Hash table – scattered (Compilers) Separating code and data (code can be read only – no dirty pages)
Programmers and Page Faults (2) Pointers Dynamic Memory Allocation
Page Fault on a Page Waiting for I/O Lock it in; don’t allow the fault until the I/O completes Require that all I/O go through system memory