Operating Systems Principles Memory Management Lecture 8: Virtual Memory (Lecturer: 虞台文)


Operating Systems Principles Memory Management Lecture 8: Virtual Memory (Lecturer: 虞台文)

Content Principles of Virtual Memory Implementations of Virtual Memory – Paging – Segmentation – Paging With Segmentation – Paging of System Tables – Translation Look-Aside Buffers Memory Allocation in Paged Systems – Global Page Replacement Algorithms – Local Page Replacement Algorithms – Load Control and Thrashing – Evaluation of Paging

Operating Systems Principles Memory Management Lecture 8: Virtual Memory Principles of Virtual Memory

The Virtual Memory Virtual memory is a technique that allows processes that may not be entirely in memory to execute, by means of automatic storage allocation upon request. The term virtual memory refers to the abstraction of separating LOGICAL memory (memory as seen by the process) from PHYSICAL memory (memory as seen by the processor). The programmer needs to be aware of only the logical memory space, while the operating system maintains two or more levels of physical memory space.

Principles of Virtual Memory [figure: a 4 GB virtual memory (addresses 0…FFFFFFFF) mapped by the Address Map onto a 64 MB physical memory] For each process, the system creates the illusion of large contiguous memory space(s). Relevant portions of Virtual Memory (VM) are loaded automatically and transparently. The Address Map translates Virtual Addresses to Physical Addresses.

Approaches Single-segment Virtual Memory: – One area of 0…n−1 words – Divided into fixed-size pages. Multiple-Segment Virtual Memory: – Multiple areas of up to 0…n−1 words – Each holds a logical segment (e.g., function, data structure) – Each is contiguous or divided into pages

Main Issues in VM Design Address mapping – How to translate virtual addresses to physical addresses? Placement – Where to place a portion of VM needed by a process? Replacement – Which portion of VM to remove when space is needed? Load control – How many processes can be active at any one time? Sharing – How can processes share portions of their VMs?

Operating Systems Principles Memory Management Lecture 8: Virtual Memory Implementations of Virtual Memory

Implementations of VM Paging Segmentation Paging With Segmentation Paging of System Tables Translation Look-aside Buffers

Paged Virtual Memory Virtual memory is divided into fixed-size pages of 2^n words. A virtual address is a pair (p, w): page number p and offset w within the page. [figure: pages 0 … P−1 of virtual memory, page size 2^n]

Physical Memory Physical memory is divided into frames of the same size 2^n. A physical address is a pair (f, w): frame number f and offset w within the frame. The size of physical memory is usually much smaller than the size of virtual memory. [figure: frames 0 … F−1 of physical memory]

Virtual & Physical Addresses A virtual address va consists of a page number p (|p| bits) followed by an offset w (|w| bits); a physical address pa consists of a frame number f (|f| bits) followed by the same offset w (|w| bits). The size of physical memory is usually much smaller than the size of virtual memory.
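The bit-level split described above can be sketched as follows; the 32-bit address width and the 4 KB (2^12) page size are illustrative assumptions, not values fixed by the slides.

```python
# Sketch: splitting a virtual address into (page number, offset),
# assuming 32-bit addresses and a 4 KB (2**12) page size.
W_BITS = 12                       # |w|: offset bits
PAGE_SIZE = 1 << W_BITS           # page size = 2**12 words

def split_va(va):
    """Return (p, w): the high bits are the page number, the low bits the offset."""
    return va >> W_BITS, va & (PAGE_SIZE - 1)

def make_pa(f, w):
    """Compose a physical address from frame number f and offset w."""
    return (f << W_BITS) | w

p, w = split_va(0x00403A17)       # page 0x403, offset 0xA17
# if the address map sends page 0x403 to (hypothetical) frame 5:
pa = make_pa(5, w)
```

Note that the offset w is carried over unchanged; only the page number is translated.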

Page No p P1P1 Virtual Memory Address Mapping Frame No f F1F1 Physical Memory (p, w) (f, w) Address Map Given (p, w), how to determine f from p ?

Page No p P1P1 Virtual Memory Address Mapping Frame No f F1F1 Physical Memory (id, p, w) (f, w) Address Map Each process (pid = id) has its own virtual memory. Given (id, p, w), how to determine f from (id, p) ?

Frame Tables A frame table FT has one entry per physical frame f, recording the process (pid) and page currently occupying that frame. Each process (pid = id) has its own virtual memory. Given (id, p, w), f is found by searching FT for the entry with pid == id and page == p. [figure: frame table entries (pid, page) alongside physical memory frames]

Address Translation via Frame Table
address_map(id,p,w){ pa = UNDEFINED; for(f=0; f<F; f++) if(FT[f].pid==id && FT[f].page==p) pa = f*2^|w| + w; /* concatenate frame number and offset */ return pa; }
Each process (pid = id) has its own virtual memory. Given (id, p, w), the frame table is searched for the entry matching (id, p).

Disadvantages Inefficient: mapping must be performed for every memory reference. Costly: Search must be done in parallel in hardware (i.e., associative memory). Sharing of pages difficult or not possible.

Associative Memories as Translation Look-Aside Buffers When memory size is large, frame tables tend to be quite large and cannot be kept in associative memory entirely. To alleviate this, associative memories are used as translation look-aside buffers. – To be detailed shortly.

Page Tables [figure] The Page Table Register (PTR) points to the page table of the current process; entry p of the page table yields the frame, and the offset w selects the word within that frame.

Page Tables A page table keeps track of the current locations of all pages belonging to a given process. The OS makes PTR point at the PT of the current process at run time. Drawback: an extra memory access is needed for every read/write operation. Solution: – Translation Look-Aside Buffer
address_map(p, w) { pa = *(PTR+p)+w; return pa; }

Demand Paging Pure Paging – All pages of VM are loaded initially – Simple, but maximum size of VM = size of PM Demand Paging – Pages are loaded as needed: “on demand” – Additional bit in PT indicates a page’s presence/absence in memory – “Page fault” occurs when page is absent

Demand Paging Pure Paging – All pages of VM are loaded initially – Simple, but maximum size of VM = size of PM. Demand Paging – Pages are loaded as needed: “on demand” – Additional bit in PT indicates a page’s presence/absence in memory – “Page fault” occurs when the page is absent
address_map(p, w) { if (resident(*(PTR+p))) { pa = *(PTR+p)+w; return pa; } else page_fault; }
resident(m) is true if the m-th page is in memory, false if it is missing.

Segmentation Multiple contiguous spaces (“segments”) – More natural match to program/data structure – Easier sharing (Chapter 9). va = (s, w) is mapped to pa (but no frames). Where/how are segments placed in PM? – Contiguous versus paged allocation

Contiguous Allocation Per Segment Segment Tables [figure] The Segment Table Register (STR) points to the segment table of the current process; entry s yields the starting location of segment s in physical memory, and the offset w selects the word within the segment.

Contiguous Allocation Per Segment Each segment is contiguous in PM Segment Table (ST) tracks starting locations STR points to ST Address translation: Drawback: External fragmentation address_map(s, w) { if (resident(*(STR+s))) { pa = *(STR+s)+w; return pa; } else segment_fault; }

Paging with segmentation

Each segment is divided into fixed-size pages. va = (s, p, w) – |s| determines # of segments (size of ST) – |p| determines # of pages per segment (size of PT) – |w| determines page size. Address Translation: Drawback: 2 extra memory references
address_map(s, p, w) { pa = *(*(STR+s)+p)+w; return pa; }
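The two-level lookup above can be sketched with dictionaries standing in for the in-memory segment and page tables; the table contents and the 8-bit offset are hypothetical.

```python
# Sketch of address_map(s, p, w) for paging with segmentation.
# Python dicts stand in for the segment table and per-segment page
# tables; all sizes and table contents are illustrative assumptions.
W_BITS = 8                          # page size 2**8 = 256 words

# Segment table: segment number -> page table;
# page table: page number -> frame number.
segment_table = {
    0: {0: 7, 1: 3},                # segment 0 has pages 0 and 1
    1: {0: 2},                      # segment 1 has one page
}

def address_map(s, p, w):
    page_table = segment_table[s]   # 1st extra memory reference (ST entry)
    f = page_table[p]               # 2nd extra memory reference (PT entry)
    return (f << W_BITS) | w        # final reference fetches the word itself

pa = address_map(0, 1, 0x2A)        # page 1 of segment 0 maps to frame 3
```

This makes the drawback visible: each user reference costs two table lookups before the word itself can be fetched.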

Paging of System Tables

ST or PT may be too large to keep in PM – Divide ST or PT into pages – Keep track by additional page table Paging of ST – ST divided into pages – Segment directory keeps track of ST pages Address Translation: address_map(s1, s2, p, w) { pa = *(*(*(STR+s1)+s2)+p)+w; return pa; } Drawback: 3 extra memory references.

Translation Look-Aside Buffers (TLB) Advantage of VM – Users view memory in logical sense. Disadvantage of VM – Extra memory accesses needed. Solution – Translation Look-aside Buffers (TLB) A special high-speed memory Basic idea of TLB – Keep the most recent translation of virtual to physical addresses readily available for possible future use. – An associative memory is employed as a buffer.

Translation Look-Aside Buffers (TLB)

When the search of (s, p) fails, the complete address translation is needed. Replacement strategy: LRU.

Translation Look-Aside Buffers (TLB) The buffer is searched associatively (in parallel)
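A minimal sketch of such a buffer, assuming a small fully associative TLB with LRU replacement; the capacity and the (s, p) keys used below are illustrative.

```python
# Sketch of a translation look-aside buffer with LRU replacement.
# An OrderedDict models the associative store: keys are (segment, page)
# pairs, values are frame numbers, oldest entry first.
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()        # (s, p) -> f

    def lookup(self, s, p):
        """Return the frame number, or None on a TLB miss."""
        key = (s, p)
        if key in self.entries:
            self.entries.move_to_end(key)   # mark as most recently used
            return self.entries[key]
        return None

    def insert(self, s, p, f):
        """Install a translation after the full table walk."""
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[(s, p)] = f

tlb = TLB(capacity=2)
tlb.insert(0, 1, 3)
tlb.insert(0, 2, 5)
hit = tlb.lookup(0, 1)      # hit: frame 3, and (0, 1) becomes most recent
tlb.insert(0, 3, 9)         # evicts (0, 2), the least recently used entry
miss = tlb.lookup(0, 2)     # miss: the full translation would now be needed
```

A real TLB performs the search in parallel in hardware; the dictionary lookup only models the behavior, not the cost.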

Operating Systems Principles Memory Management Lecture 8: Virtual Memory Memory Allocation in Paged Systems

Memory Allocation with Paging Placement policy – where to allocate memory on request? – Any free frame is OK (no external fragmentation) – Keeping track of available space is sufficient, both for statically and dynamically allocated memory systems. Replacement policy – which page(s) to replace on a page fault? – Goal: minimize the number of page faults and/or the total number of pages loaded.

Global/Local Replacement Policies Global replacement: – Consider all resident pages (regardless of owner). Local replacement – Consider only pages of faulting process, i.e., the working set of the process.

Criteria for Performance Comparison Tool: Reference String (RS) r_0 r_1 … r_t … r_T, where r_t is the page number referenced at time t. Criteria: 1. The number of page faults 2. The total number of pages loaded

Criteria for Performance Comparison Tool: Reference String (RS) r_0 r_1 … r_t … r_T, where r_t is the page number referenced at time t. Criteria: 1. The number of page faults 2. The total number of pages loaded. The two criteria are equivalent when each page fault loads exactly one page; criterion 1 is used in the following discussions.

Global Page Replacement Algorithms Optimal Replacement Algorithm (MIN) – Accurate prediction needed – Unrealizable. Random Replacement Algorithm – Like playing a roulette wheel. FIFO Replacement Algorithm – Simple and efficient – Suffers from Belady’s anomaly. Least Recently Used Algorithm (LRU) – Doesn’t suffer from Belady’s anomaly, but high overhead. Second-Chance Replacement Algorithm – Economical version of LRU. Third-Chance Replacement Algorithm – Economical version of LRU – Considers dirty pages

Optimal Replacement Algorithm (MIN) Replace page that will not be referenced for the longest time in the future. Problem: Reference String not known in advance.

Example: Optimal Replacement Algorithm (MIN) RS = c a d b e b a b c d, with four frames initially holding a, b, c, d. Replace the page that will not be referenced for the longest time in the future. At t=5, e faults; among the resident pages, d’s next reference (t=10) is farthest in the future, so d is replaced. At t=10, d faults again and replaces a. Only 2 page faults. Problem: the reference string is not known in advance.
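The example can be checked with a small simulation of the MIN policy; the function name and structure are our own.

```python
# MIN (optimal) replacement: on a fault, evict the resident page whose
# next reference lies farthest in the future (or that is never used again).
def min_faults(rs, frames):
    resident = list(frames)          # initially resident pages
    faults = 0
    for t, page in enumerate(rs):
        if page in resident:
            continue
        faults += 1
        future = rs[t + 1:]
        # victim: page not referenced again, or referenced latest
        victim = max(resident,
                     key=lambda q: future.index(q) if q in future else len(future))
        resident[resident.index(victim)] = page
    return faults

rs = list("cadbebabcd")
n_faults = min_faults(rs, list("abcd"))   # 2, matching the slide
```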

Random Replacement A program’s reference string is never known in advance. Without any prior knowledge, a random replacement strategy can be applied. Is there any prior knowledge available for common programs? – Yes, the locality of reference. Random replacement is simple but ignores this property.

The Principle of Locality Most instructions are sequential – Except for branch instructions. Most loops are short – for-loops – while-loops. Many data structures are accessed sequentially – Arrays – Files of records

FIFO Replacement Algorithm FIFO: Replace the oldest page – Assumes that pages residing the longest in memory are least likely to be referenced in the future. Easy, but may exhibit Belady’s anomaly, i.e., – Increasing the available memory can result in more page faults.

Example: FIFO Replacement Algorithm RS = c a d b e b a b c d, with four frames; pages a, b, c, d were loaded in that order, so a is the oldest. Replace the oldest page. Faults occur at e (out a), a (out b), b (out c), c (out d), and d (out e): 5 page faults.
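A short simulation reproduces the slide’s count; the function name and structure are our own.

```python
# FIFO replacement on the slide's reference string; pages a,b,c,d were
# loaded in that order, so page a is the oldest at the start.
from collections import deque

def fifo_faults(rs, initial):
    queue = deque(initial)           # oldest page at the left
    faults, out = 0, []
    for page in rs:
        if page in queue:
            continue                 # hit: FIFO ignores references
        faults += 1
        out.append(queue.popleft())  # evict the oldest page
        queue.append(page)
    return faults, out

faults, evicted = fifo_faults(list("cadbebabcd"), list("abcd"))
# 5 faults; eviction order a, b, c, d, e, as in the slide's OUT row
```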

Example: Belady’s Anomaly RS = d c b a d c e d c b a e c d c b a e. With 2 frames, FIFO incurs 17 page faults; with 3 frames, 14 page faults.

Example: Belady’s Anomaly On the same reference string, FIFO with 3 frames incurs 14 page faults, but with 4 frames it incurs 15: increasing the available memory results in more page faults.
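The anomaly can be verified by simulating FIFO with 2, 3, and 4 frames on the slide’s string; the helper name is our own.

```python
# Demonstrating Belady's anomaly with FIFO: on this reference string,
# going from 3 to 4 frames *increases* the fault count from 14 to 15.
from collections import deque

def fifo_faults(rs, nframes):
    queue, faults = deque(), 0       # oldest page at the left
    for page in rs:
        if page in queue:
            continue
        faults += 1
        if len(queue) == nframes:
            queue.popleft()          # evict the oldest page
        queue.append(page)
    return faults

rs = list("dcbadcedcbaecdcbae")
f2, f3, f4 = (fifo_faults(rs, n) for n in (2, 3, 4))
# f2=17, f3=14, f4=15: more frames, more faults
```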

Least Recently Used Replacement (LRU) Replace the Least Recently Used page – Remove the page which has not been referenced for the longest time – Complies with the principle of locality. Doesn’t suffer from Belady’s anomaly.

Example: Least Recently Used Replacement (LRU) RS = c a d b e b a b c d, with four frames initially holding a, b, c, d. Replace the least recently used page. Faults occur at e (out c), c (out d), and d (out e): 3 page faults.
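The LRU example can likewise be checked by simulation; the recency-list representation is our own.

```python
# LRU replacement: the least recently used page sits at the front of
# the recency list; every reference moves a page to the back.
def lru_faults(rs, initial):
    recency = list(initial)             # least recently used first
    faults, out = 0, []
    for page in rs:
        if page in recency:
            recency.remove(page)        # hit: re-append as most recent
        else:
            faults += 1
            out.append(recency.pop(0))  # evict the LRU page
        recency.append(page)
    return faults, out

faults, evicted = lru_faults(list("cadbebabcd"), list("abcd"))
# 3 faults; eviction order c, d, e, as in the slide
```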

LRU Implementation Software queue: too expensive. Time-stamping – Stamp each referenced page with the current time – Replace the page with the oldest stamp. Hardware capacitor with each frame – Charge at reference – Replace the page with the smallest charge. n-bit aging register R = R_{n−1} R_{n−2} … R_1 R_0 with each frame – Set the left-most bit of a referenced page to 1 – Shift all registers to the right at every reference or periodically – Replace the page with the smallest value.
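The aging-register approximation can be sketched as follows; the 8-bit register width and the dictionary bookkeeping are assumptions.

```python
# Sketch of the n-bit aging-register approximation of LRU (n = 8
# assumed): referenced pages get their leftmost bit set, all registers
# shift right each period, and the smallest value marks the LRU victim.
N_BITS = 8
TOP_BIT = 1 << (N_BITS - 1)

def age_tick(registers, referenced):
    """One aging period: shift every register right, then record uses."""
    for page in registers:
        registers[page] >>= 1
    for page in referenced:
        registers[page] |= TOP_BIT      # set leftmost bit R_{n-1}
    return registers

def lru_victim(registers):
    """The page with the smallest register value is the LRU candidate."""
    return min(registers, key=registers.get)

regs = {"a": 0, "b": 0, "c": 0}
age_tick(regs, {"a", "b"})      # a and b referenced this period
age_tick(regs, {"b"})           # only b referenced this period
victim = lru_victim(regs)       # c was never referenced
```

Recently referenced pages accumulate high-order bits, so the register value orders pages roughly by recency of use at a fraction of true LRU’s cost.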

Second-Chance Algorithm Approximates LRU Implement use-bit u with each frame Set u=1 when page referenced To select a page: – If u==0, select page – Else, set u=0 and consider next frame Used page gets a second chance to stay in PM Also called clock algorithm since search cycles through page frames.

Example: Second-Chance Algorithm RS = c a d b e b a b c d, with four frames holding a, b, c, d, all with u=1, and the pointer at the first frame. To select a page: if u==0, select it; else set u=0 and consider the next frame, cycling through the page frames. Faults occur at e (out a), a (out c), c (out d), and d (out e): 4 page faults.
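The clock behavior in the example can be reproduced with a direct simulation; the function name and structure are our own.

```python
# Clock (second-chance) algorithm on the slide's example: four frames
# holding a,b,c,d with use bits set, pointer starting at frame 0.
def clock_run(rs, pages):
    frames = list(pages)
    use = [1] * len(frames)          # u-bit per frame
    hand = 0                         # the clock pointer
    faults, out = 0, []
    for page in rs:
        if page in frames:
            use[frames.index(page)] = 1   # hit: set the use bit
            continue
        faults += 1
        while use[hand] == 1:        # give used pages a second chance
            use[hand] = 0
            hand = (hand + 1) % len(frames)
        out.append(frames[hand])     # u == 0: replace this page
        frames[hand], use[hand] = page, 1
        hand = (hand + 1) % len(frames)
    return faults, out

faults, evicted = clock_run(list("cadbebabcd"), list("abcd"))
# 4 faults; eviction order a, c, d, e, as in the slide
```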

Third-Chance Algorithm The second-chance algorithm does not distinguish between read and write access, but write access is more expensive. Give modified pages a third chance: – the u-bit is set at every reference (read and write) – the w-bit is set at a write reference – to select a page, cycle through the frames, resetting bits until uw == 00: 1 1 → 0 1; 0 1 → 0 0* (the * remembers the modification, implementable by an additional bit); 1 0 → 0 0; 0 0 → select.

Example: Third-Chance Algorithm RS = c a^w d b^w e b a^w b c d (superscript w marks a write access), with four frames holding a, b, c, d, all with u=1, w=0. Faults occur at e (out c), c (out d), and d (out b, whose remembered modification forces a write-back): 3 page faults.

Local Page Replacement Algorithms Measurements indicate that every program needs a “minimum” set of pages – If too few, thrashing occurs – If too many, page frames are wasted. The “minimum” varies over time. How to determine and implement this “minimum”? – It should depend only on the behavior of the process itself.

Local Page Replacement Algorithms Optimal Page Replacement Algorithm (VMIN) The Working Set Model (WS) Page Fault Frequency Replacement Algorithm (PFF)

Optimal Page Replacement Algorithm (VMIN) The method – Define a sliding window (t, t+τ) of width τ+1 – τ is a parameter (a system constant) – At any time t, maintain as resident all pages visible in the window. Guaranteed to generate the smallest number of page faults for a given window width.

Example: Optimal Page Replacement Algorithm (VMIN) RS = c c d b c e c e a d, τ = 3: 5 page faults. A page is kept resident only while it remains visible in the forward window (t, t+τ) and is removed as soon as it drops out of the window.


Example: Optimal Page Replacement Algorithm (VMIN) By increasing τ, the number of page faults can be reduced arbitrarily, at the expense of using more page frames. VMIN is unrealizable since the reference string is unavailable in advance.

Working Set Model Use a trailing window (instead of a future window). The working set W(t, τ) is the set of all pages referenced during the interval (t − τ, t) (instead of (t, t + τ)). At time t: – Remove all pages not in W(t, τ) – A process may run only if its entire W(t, τ) is resident.

Example: Working Set Model RS = e d a c c d b c e c e a d, τ = 3. The working set at time t is the set of pages referenced in the last τ+1 references. After the warm-up references e, d, a, faults occur at c, b, e, a, and d: 5 page faults.

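One way to formalize the trailing window is sketched below; treating the window as the last τ+1 references and counting faults only after the warm-up prefix e, d, a is our reading of the slide’s table.

```python
# Working-set policy on the slide's example (tau = 3): a page is
# resident iff it appears among the last tau+1 references.
# Faults are counted from index `start`, i.e., after the warm-up refs.
def working_set_faults(rs, tau, start):
    faults = 0
    for i, page in enumerate(rs):
        resident = set(rs[max(0, i - (tau + 1)):i])  # last tau+1 refs
        if i >= start and page not in resident:
            faults += 1
    return faults

rs = list("edaccdbcecead")    # e d a c c d b c e c e a d
faults = working_set_faults(rs, tau=3, start=3)   # 5, as in the slide
```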

Approximate Working Set Model Drawback: costly to implement. Approximations: 1. Each page frame with an aging register – Set the left-most bit to 1 whenever referenced – Periodically shift right all aging registers – Remove pages which reach zero from the working set. 2. Each page frame with a use bit and a time stamp – The use bit is turned on by hardware whenever referenced – Periodically do the following for each page frame: turn off the use bit if it is on and set the current time as its time stamp; compute the turn-off time t_off of the page frame; remove the page from the working set if t_off > t_max.

Page Fault Frequency (PFF) Replacement Main objective: keep the page fault rate low. Basic principle: – If the time between the current (t_c) and the previous (t_{c−1}) page faults exceeds a critical value τ, all pages not referenced during that interval are removed from memory. The algorithm of PFF: – If the time between page faults ≤ τ, grow the resident set by adding the new page – If the time between page faults > τ, shrink the resident set by adding the new page and removing all pages not referenced since the last page fault.

Example: Page Fault Frequency (PFF) Replacement RS = c c d b c e c e a d, τ = 2: 5 page faults. Whenever two consecutive faults are more than τ references apart, the pages not referenced between them are removed from the resident set.


Load Control and Thrashing Main issues: – How to choose the degree of multiprogramming? Decrease? Increase? – When the level is decreased, which process should be deactivated? – When a process is created or a suspended one reactivated, which of its pages should be loaded? One or many? Load Control – Policy to set the number & type of concurrent processes. Thrashing – The system spends most of its effort moving pages between main and secondary memory, resulting in low CPU utilization.

Load control  Choosing Degree of Multiprogramming Local replacement: – Each process has a well-defined resident set, e.g., Working set model & PFF replacement – This automatically imposes a limit, i.e., up to the point where total memory is allocated. Global replacement – No working set concept – Use CPU utilization as a criterion – With too many processes, thrashing occurs

Load control  Choosing Degree of Multiprogramming Local replacement: – Each process has a well-defined resident set, e.g., Working set model & PFF replacement – This automatically imposes a limit, i.e., up to the point where total memory is allocated. Global replacement – No working set concept – Use CPU utilization as a criterion – With too many processes, thrashing occurs L = mean time between faults S = mean page fault service time

Load control  Choosing Degree of Multiprogramming Local replacement: – Each process has a well-defined resident set, e.g., Working set model & PFF replacement – This automatically imposes a limit, i.e., up to the point where total memory is allocated. Global replacement – No working set concept – Use CPU utilization as a criterion – With too many processes, thrashing occurs L = mean time between faults S = mean page fault service time Thrashing How to determine the optimum, i.e., N max

Load control  Choosing Degree of Multiprogramming L=S criterion: – Page fault service S needs to keep up with mean time between faults L. 50% criterion: – CPU utilization is highest when paging disk 50% busy (found experimentally). Clock load control – Scan the list of page frames to find replaced page. – If the pointer advance rate is too low, increase multiprogramming level. How to determine N max ?

Load control  Choosing the Process to Deactivate Lowest priority process – Consistent with scheduling policy Faulting process – Eliminate the process that would be blocked Last process activated – Most recently activated process is considered least important. Smallest process – Least expensive to swap in and out Largest process – Free up the largest number of frames

Load control  Prepaging Which pages to load when process activated – Prepage last resident set

Evaluation of Paging Advantages of paging – Simple placement strategy – No external fragmentation Parameters affecting the dynamic behavior of paged systems – Page size – Available memory

Evaluation of Paging

A process references a large percentage of its pages within a short time after activation. Prepaging is important.

Evaluation of Paging A smaller page size is beneficial. Another advantage of a small page size: reduced memory waste due to internal fragmentation. However, small pages require larger page tables.
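The trade-off can be illustrated with a small calculation; the 1 MiB process size and the half-page estimate of internal fragmentation are assumptions made for the example.

```python
# Illustration of the page-size trade-off stated above, for a
# hypothetical 1 MiB process: smaller pages waste less memory to
# internal fragmentation (on average about half a page) but need
# more page-table entries.
PROCESS_SIZE = 1 << 20        # 1 MiB, an assumed example size

def overheads(page_size):
    entries = PROCESS_SIZE // page_size   # page-table entries needed
    waste = page_size // 2                # expected internal fragmentation
    return entries, waste

small = overheads(512)        # small pages: many entries, little waste
large = overheads(4096)       # large pages: few entries, more waste
```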

Evaluation of Paging [graph] W: the minimum amount of memory needed to avoid thrashing. Load control is important.