Modeling Page Replacement Algorithms


Modeling Page Replacement Algorithms
Reference text: Tanenbaum, ch. 3.4.5

Paging in Practice
Newer UNIX systems are based on both swapping and demand paging; the kernel and the page daemon process perform the paging. Main memory is divided into the kernel, the core map, and the page frames.
(Figure: layout of main memory showing the kernel, the core map, and the page frames.)
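As an illustration of the bookkeeping this implies, here is a minimal sketch of a core map: one entry per physical page frame recording which process and virtual page currently occupy it. The structure and field names (owner, vpage, usage, dirty) are hypothetical, not taken from any particular UNIX source.

```c
/* Minimal sketch of a core map: one entry per physical page frame.
 * Field and type names are illustrative, not from a real kernel. */
#include <stdio.h>

#define NFRAMES 16          /* number of physical page frames (toy value) */

struct cmap_entry {
    int owner;              /* pid of the process using this frame, -1 if free */
    int vpage;              /* virtual page number mapped into this frame */
    int usage;              /* "referenced" bit, cleared/checked by the clock hands */
    int dirty;              /* frame must be written to disk before reuse */
};

static struct cmap_entry coremap[NFRAMES];

int main(void)
{
    /* Initially every frame is free. */
    for (int i = 0; i < NFRAMES; i++)
        coremap[i] = (struct cmap_entry){ .owner = -1, .vpage = -1 };

    /* Example: frame 3 now holds virtual page 7 of process 42. */
    coremap[3] = (struct cmap_entry){ .owner = 42, .vpage = 7, .usage = 1 };

    printf("frame 3 -> pid %d, vpage %d\n", coremap[3].owner, coremap[3].vpage);
    return 0;
}
```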

Two-Handed Clock Algorithm
Every 250 ms the page replacement algorithm (the page daemon) wakes up to see whether the number of free page frames is at least a set value (roughly 1/4 of memory). If it is less, pages are transferred from memory to disk.
The page daemon maintains two pointers (hands) into the core map:
 - 1st hand: clears the usage bit of the i-th page at the front end.
 - 2nd hand: checks the usage bit of the (i - HANDSPREADPAGES)-th page at the back end. If the bit is not set, the page is added to the free list.
Pages whose usage bit is still 0 when the second hand reaches them are put on the free list.
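A minimal, self-contained simulation of the two-handed sweep (not the actual BSD/SVR4 code) is sketched below: the front hand clears usage bits, and the back hand, HANDSPREAD frames behind, frees any frame whose bit is still clear.

```c
/* Toy simulation of the two-handed clock page replacement scheme.
 * All names and sizes are illustrative, not taken from a real kernel. */
#include <stdio.h>

#define NFRAMES     16
#define HANDSPREAD  4    /* distance between the two hands, in frames */

static int usage[NFRAMES];   /* "referenced" bit per frame */
static int freed[NFRAMES];   /* 1 if the frame has been put on the free list */

/* One sweep step: advance both hands by one frame. */
static void clock_tick(int front)
{
    int back = (front - HANDSPREAD + NFRAMES) % NFRAMES;

    usage[front] = 0;          /* front hand: clear the usage bit */

    if (usage[back] == 0 && !freed[back]) {
        freed[back] = 1;       /* back hand: bit still clear => frame is reclaimable */
        printf("frame %d added to free list\n", back);
    }
}

int main(void)
{
    /* Pretend some frames were referenced since the last sweep. */
    for (int i = 0; i < NFRAMES; i++)
        usage[i] = (i % 3 == 0);

    /* Simulate one full sweep of the core map. */
    for (int front = 0; front < NFRAMES; front++)
        clock_tick(front);

    return 0;
}
```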

UNIX Commands: vmstat and uptime
uptime: shows how long the system has been up.
vmstat -s: shows virtual memory statistics.

Swapping in UNIX
If the paging rate is too high and the number of free pages stays well below the threshold, the swapper is used to remove one or more processes from memory entirely.
Processes that have been idle for more than 20 seconds are swapped out first.
Otherwise, processes that are the largest and have been idle the longest are swapped out next.
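A sketch of that victim-selection policy follows, assuming a hypothetical process table with idle-time and resident-size fields; it illustrates the two rules above rather than the real UNIX swapper.

```c
/* Toy victim selection for the swapper, following the two rules above:
 * 1) prefer a process that has been idle for more than 20 seconds;
 * 2) otherwise prefer the largest process, breaking ties by idle time.
 * The process-table layout and field names are illustrative only. */
#include <stdio.h>
#include <stddef.h>

struct proc {
    int  pid;
    int  idle_secs;    /* seconds since the process last ran */
    long rss_pages;    /* resident size in page frames */
};

static int pick_swap_victim(const struct proc *p, size_t n)
{
    int victim = -1;

    /* Rule 1: longest-idle process among those idle for more than 20 s. */
    for (size_t i = 0; i < n; i++)
        if (p[i].idle_secs > 20 &&
            (victim < 0 || p[i].idle_secs > p[victim].idle_secs))
            victim = (int)i;
    if (victim >= 0)
        return p[victim].pid;

    /* Rule 2: largest process, breaking ties by idle time. */
    for (size_t i = 0; i < n; i++)
        if (victim < 0 ||
            p[i].rss_pages > p[victim].rss_pages ||
            (p[i].rss_pages == p[victim].rss_pages &&
             p[i].idle_secs > p[victim].idle_secs))
            victim = (int)i;

    return victim >= 0 ? p[victim].pid : -1;
}

int main(void)
{
    struct proc table[] = {
        { .pid = 10, .idle_secs =  5, .rss_pages = 900 },
        { .pid = 11, .idle_secs = 30, .rss_pages = 100 },   /* idle > 20 s */
        { .pid = 12, .idle_secs =  2, .rss_pages = 400 },
    };
    printf("swap out pid %d\n", pick_swap_victim(table, 3));
    return 0;
}
```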

Paging Implementation Issues: Operating System Involvement
1. When a new process is created (see the sketch after this list):
 - determine how large the program and data will be and create a page table for it
 - the page table need not be in memory while the process is swapped out
 - allocate space on disk for the swap area
2. When the process is scheduled for execution:
 - reset the MMU for the new process (make its page table the current one) and flush the TLB
 - bring some of the process's pages into memory to reduce the initial page faults
3. When a page fault occurs:
 - read a hardware register (CR2 on x86 processors) to determine the faulting address
 - compute which page is needed and find a page frame to accommodate it
 - back up the PC so the faulting instruction can be executed again
4. When a process exits:
 - the OS releases its page table, its pages, and its disk swap space
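As a rough illustration of item 1, the sketch below shows the per-process paging state an OS might set up at process creation: a page table sized to the image and a reserved swap area. All types, sizes, and names here are hypothetical.

```c
/* Hypothetical per-process paging state set up at process creation.
 * Sizes, types, and helper names are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

struct pte {                 /* one page-table entry */
    uint32_t present   : 1;  /* page is in a physical frame */
    uint32_t dirty     : 1;
    uint32_t frame     : 20; /* physical frame number when present */
    uint32_t swap_slot : 10; /* where the page lives on disk otherwise */
};

struct process {
    struct pte *page_table;  /* one entry per virtual page of the image */
    size_t      npages;
    long        swap_base;   /* first disk block of the reserved swap area */
};

/* Step 1: size the image, build an (initially empty) page table,
 * and reserve swap space for the whole image. */
static int setup_paging(struct process *p, size_t image_bytes, long swap_base)
{
    p->npages = (image_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    p->page_table = calloc(p->npages, sizeof *p->page_table);
    if (!p->page_table)
        return -1;
    p->swap_base = swap_base;   /* swap area allocated up front */
    return 0;
}

int main(void)
{
    struct process p;
    if (setup_paging(&p, 3 * PAGE_SIZE + 100, /* swap_base */ 2048) == 0)
        printf("page table with %zu entries created\n", p.npages);
    free(p.page_table);
    return 0;
}
```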

Page Fault Handling
1. Hardware traps to the kernel - the PC is saved on the stack
2. General registers are saved - by an assembly-language routine
3. OS determines which virtual page is needed - by reading a hardware register (CR2 on x86)
4. OS checks the validity of the address and seeks a page frame - the address must be valid and cause no protection fault - if no page frame is free, the page replacement algorithm is invoked
5. If the selected frame is dirty, write it to disk - a context switch takes place: the faulting process is suspended while the disk transfer runs - the frame is marked busy

Page Fault Handling (cont'd)
6. OS schedules the new page to be read in from disk - it looks up the disk address of the needed page and schedules a disk read
7. Page tables are updated - the page frame is marked as being in its normal state
8. The faulting instruction is backed up to the state it had when it began
9. The faulting process is scheduled to run again
10. Registers are reloaded and the program continues
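Below is a small user-space simulation of the overall flow (fault, optional eviction and write-back, read-in, page-table update, retry). The sizes, names, and trivial replacement policy are illustrative only, not a real kernel implementation.

```c
/* User-space simulation of the page-fault-handling steps above.
 * All names, sizes, and policies are illustrative, not a real kernel. */
#include <stdio.h>
#include <string.h>

#define NPAGES   8      /* virtual pages per process  */
#define NFRAMES  4      /* physical page frames       */
#define PAGESZ   4      /* tiny pages for the demo    */

static char frames[NFRAMES][PAGESZ];        /* "physical memory" */
static char disk[NPAGES][PAGESZ];           /* "swap space"      */
static int  pt[NPAGES];                     /* page table: frame number or -1 */
static int  owner[NFRAMES];                 /* reverse map: vpage held by each frame */
static int  dirty[NFRAMES];
static int  hand;                           /* trivial round-robin victim pointer */

static void page_fault(int vpage)
{
    /* Steps 1-3: trap taken, registers saved, faulting page identified. */
    printf("page fault on vpage %d\n", vpage);

    /* Step 4: validity check; here every vpage < NPAGES is valid. */

    /* Find a free frame, or run "replacement" to evict a victim. */
    int frame = -1;
    for (int f = 0; f < NFRAMES; f++)
        if (owner[f] < 0) { frame = f; break; }
    if (frame < 0) {
        frame = hand; hand = (hand + 1) % NFRAMES;
        /* Step 5: write the victim back if it is dirty. */
        if (dirty[frame])
            memcpy(disk[owner[frame]], frames[frame], PAGESZ);
        pt[owner[frame]] = -1;
    }

    /* Step 6: "read" the needed page in from disk. */
    memcpy(frames[frame], disk[vpage], PAGESZ);

    /* Step 7: update the page table and the reverse map. */
    pt[vpage] = frame;  owner[frame] = vpage;  dirty[frame] = 0;

    /* Steps 8-10: the caller simply retries the access. */
}

static char read_byte(int vpage, int off)
{
    if (pt[vpage] < 0)                      /* "MMU" finds the page absent */
        page_fault(vpage);
    return frames[pt[vpage]][off];          /* retried access now succeeds */
}

int main(void)
{
    for (int p = 0; p < NPAGES; p++) { pt[p] = -1; disk[p][0] = 'A' + p; }
    for (int f = 0; f < NFRAMES; f++) owner[f] = -1;

    for (int p = 0; p < NPAGES; p++)        /* touch every page once */
        printf("vpage %d byte 0 = %c\n", p, read_byte(p, 0));
    return 0;
}
```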

Page Fault Handling on x86
The CPU register CR3 contains the physical address of the page directory. Each process has its own page directory, plus the page tables and pages it points to.
At a process switch, CR3 is loaded with the physical address of the scheduled process's page directory; this causes the whole process image to be mapped in.
Loading CR3 flushes the TLBs, discarding the cached translations of the previous process.
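A minimal sketch of the CR3 reload at context-switch time, using GCC-style inline assembly for 32-bit x86; load_cr3 and switch_address_space are hypothetical names, not from a particular kernel.

```c
/* Hypothetical context-switch helper for 32-bit x86: point CR3 at the
 * scheduled process's page directory. Writing CR3 discards the stale
 * TLB entries, so the new process's translations take effect. */
#include <stdint.h>

struct process {
    uint32_t pgdir_phys;    /* physical address of this process's page directory */
};

static inline void load_cr3(uint32_t pgdir_phys)
{
    __asm__ volatile("mov %0, %%cr3" : : "r"(pgdir_phys) : "memory");
}

void switch_address_space(struct process *next)
{
    load_cr3(next->pgdir_phys);   /* maps in the whole image of 'next' */
}
```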

Page Fault Handling on x86 (cont'd)
1. The MMU sees V=0 in a needed PTE and causes a trap via vector number 14 (0x0E); IDT[14] holds the fault-handling routine. The MMU puts the faulting linear address in CR2.
2. An assembly routine saves the registers (this is part of the OS).
3. The page-fault handler (in C) figures out which page is not present.
4. The OS checks whether this page is a valid page of this process (if not, it usually kills the process) and gets a page frame from the free list.
5. This step is dropped (the free-list handling gets dirty pages written out earlier).
6. The OS locates the needed data, often in the executable file or in swap space, schedules disk I/O, and blocks this process.

Page Fault Handling on x86 (cont'd)
7. A disk interrupt signals that the page is in; the PTE is updated and the process is woken up.
8. The faulting instruction needs re-executing. The process is still in the kernel after being scheduled, back in the PF handler, with the user PC at the bottom of the stack where it can be adjusted (backed up). This is done in the PF handler, not in the disk interrupt handler (see the trap-frame sketch below).
9. The PF handler returns to the assembly routine.
10. The assembly routine executes the iret, using the backed-up PC, and resumes user execution.
11. (added) The user code re-executes the instruction that caused the PF, more successfully this time.
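The sketch below illustrates step 8: the saved user EIP sits in the trap frame built by the assembly stub, so the C handler can inspect or rewind it before the iret. The trapframe layout and names are hypothetical, loosely modeled on a typical 32-bit x86 stub.

```c
/* Hypothetical 32-bit x86 trap frame as pushed by an assembly page-fault
 * stub: general registers first, then the error code and the frame the
 * CPU pushed (EIP, CS, EFLAGS, and user ESP/SS). Layout is illustrative. */
#include <stdint.h>

struct trapframe {
    /* pushed by the stub (e.g. with pusha) */
    uint32_t edi, esi, ebp, esp_dummy, ebx, edx, ecx, eax;
    /* pushed by the CPU on a page fault from user mode */
    uint32_t err;        /* page-fault error code */
    uint32_t eip;        /* user PC to return to (adjustable here) */
    uint32_t cs;
    uint32_t eflags;
    uint32_t esp;
    uint32_t ss;
};

/* The C handler gets a pointer to the frame; after the page is resident,
 * it leaves (or rewinds) tf->eip so the iret re-executes the instruction. */
void pf_handler(struct trapframe *tf, uint32_t faulting_linear_addr)
{
    (void)faulting_linear_addr;   /* the stub read this from CR2 */
    /* ... bring the page in (steps 4-7 above) ... */
    /* tf->eip points at the faulting instruction, so the iret in the
     * assembly stub will retry it once the handler returns. */
    (void)tf;
}
```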

Example of Page Faults
1. First reference to a data page - the page contents are in the executable file; the PF handler blocks while they are read in.
2. First reference to a BSS page (uninitialized data) - no blocking; just assign a zero-filled page from the free list.
3. A reference that extends the user stack - same as 2.
4. First reference to a text page (code) - as in 1, or if this program is in use by another process, arrange sharing of the code page already in memory.
5. A re-reference after page-out to swap space - block while the page is read in from swap space.
6. A reference to an address outside the program image - fails the validity test in step 4 above, causes a "segmentation violation" in Solaris, and usually kills the process.
7. A reference to malloc'd memory (the heap) - malloc itself only allocates swap space, not real memory, so the memory is added by page faults, as in 2.
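A sketch of how a fault handler might classify several of these cases, given a hypothetical region list describing the process image (text, data, BSS, heap, stack); the structure and names are illustrative, not a real implementation.

```c
/* Hypothetical classification of a page fault by the region it falls in,
 * mirroring several of cases 1-7 above. Region list is illustrative. */
#include <stdio.h>

enum region_kind { R_TEXT, R_DATA, R_BSS, R_STACK, R_HEAP };

struct region {
    unsigned long start, end;   /* [start, end) in the virtual address space */
    enum region_kind kind;
};

static const char *handle_fault(unsigned long addr,
                                const struct region *map, int nregions,
                                unsigned long stack_low)
{
    for (int i = 0; i < nregions; i++) {
        if (addr < map[i].start || addr >= map[i].end)
            continue;
        switch (map[i].kind) {
        case R_TEXT:  return "read code page from executable (or share it)";
        case R_DATA:  return "read data page from executable (handler blocks)";
        case R_BSS:
        case R_HEAP:  return "assign a zero-filled page from the free list";
        case R_STACK: return "stack page: zero-filled page, no blocking";
        }
    }
    /* just below the stack region: treat as automatic stack extension */
    if (addr >= stack_low)
        return "extend the user stack with a zero-filled page";
    return "invalid address: segmentation violation, kill the process";
}

int main(void)
{
    struct region map[] = {
        { 0x00001000, 0x00005000, R_TEXT },
        { 0x00005000, 0x00007000, R_DATA },
        { 0x00007000, 0x0000a000, R_BSS  },
        { 0x0000a000, 0x0000f000, R_HEAP },
        { 0x7ffff000, 0x80000000, R_STACK },
    };
    printf("%s\n", handle_fault(0x00005123, map, 5, 0x7fff0000));  /* data page */
    printf("%s\n", handle_fault(0x7fffe000, map, 5, 0x7fff0000));  /* stack growth */
    printf("%s\n", handle_fault(0x40000000, map, 5, 0x7fff0000));  /* invalid */
    return 0;
}
```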