8.1 Demand Paging
Paging as described in Chapter 7 implied that the whole program was in memory. But does it have to be? On average, roughly 30% of a program's memory footprint is the primary logic of the program; the remaining 70% is little-used error handling. It is therefore prudent for the memory manager not to load the entire program into memory at startup. The basic idea is to load the parts of the program that are not in memory on demand. This technique, referred to as demand paging, results in better memory utilization.
What would be the main advantage of demand paging?
8.1.1 Hardware for demand paging
[Figure: CPU with a PTBR pointing to the page table in memory; each page table entry carries a valid bit (v) indicating whether the virtual page is currently resident in a physical frame.]
8.1.1 Hardware for demand paging (continued)
[Figure: five-stage pipeline (IF, ID/RR, EX, MEM, WB) with instructions I1-I5 in flight; the IF and MEM stages can incur a page fault.]
Suppose I2 page faults in the MEM stage. Let I1 complete, and squash I3-I5 before taking the INT. The INT needs to save the PC corresponding to I2, so that I2 can be restarted after the page fault is serviced. Note that there is no harm in squashing instructions I3-I5, since they have not modified the permanent state of the program. If I5 page faults (in the IF stage), it is handled in the same manner.
8.1.2 Page fault handler
1. Find a free page frame.
2. Load the faulting virtual page from the disk into the free page frame.
3. Update the page table for the faulting process.
4. Place the PCB of the process back in the ready queue of the scheduler.
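The four steps above can be sketched in code. This is a minimal illustration, not a real kernel: all the names (free_list, disk_map, page_tables, ready_q, read_from_disk) are assumptions for the example.

```python
# Hedged sketch of the four-step page fault handler.
from collections import deque

free_list = deque([7])                 # free physical frames (assumed state)
disk_map = {("P1", 3): "disk:0x40"}    # (PID, VPN) -> disk address
page_tables = {"P1": {}}               # per-process: VPN -> (frame, valid)
ready_q = deque()                      # scheduler's ready queue

def read_from_disk(addr, frame):
    pass  # placeholder for the actual disk I/O

def page_fault_handler(pid, vpn):
    frame = free_list.popleft()                   # 1. find a free frame
    read_from_disk(disk_map[(pid, vpn)], frame)   # 2. load page from disk
    page_tables[pid][vpn] = (frame, True)         # 3. update page table (set valid bit)
    ready_q.append(pid)                           # 4. PCB back on the ready queue
    return frame

page_fault_handler("P1", 3)
```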
8.1.3 Data structures for demand-paged memory management
- Free-list of page frames
- Frame table
- Disk map
Free-list of page frames:
[Figure: linked list of free page frames, e.g. pframe 52 -> pframe 20 -> pframe 200 -> ... -> pframe 8.]
Frame table:
[Figure: table indexed by pframe number (1-7 shown); each entry records the <PID, VPN> of the page occupying that frame, or "free", e.g. <P2, 20>, free, <P5, 15>, <P1, 32>, <P3, 0>, <P4, 0>. Note: there are different ways of implementing this.]
Disk map:
[Figure: per-process disk map; for each VPN of P1 (and similarly for P2 through Pn), the disk address of that page in the swap space.]
8.1.4 Anatomy of a Page Fault
1. Find a free page frame; if none is available, pick a victim page and evict it.
2. Load the faulting page.
3. Update the page table for the faulting process and the frame table.
4. Restart the faulting process.
Eviction
When evicting a page we must consider its status:
- Clean: the page has not been written to and thus matches its counterpart on disk.
- Dirty: the page has been written to and no longer matches what is on disk.
Clean pages may simply be evicted; dirty pages must first be written back to disk.
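The clean/dirty distinction can be sketched as follows. The Frame type and writeback function are illustrative assumptions, not from any particular OS.

```python
# Illustrative sketch of eviction with a dirty bit.
from dataclasses import dataclass

@dataclass
class Frame:
    pid: str
    vpn: int
    dirty: bool

writebacks = []   # record of pages flushed to disk, for illustration

def writeback(frame):
    # Flush the frame's contents to its disk-map slot before reuse.
    writebacks.append((frame.pid, frame.vpn))

def evict(frame):
    if frame.dirty:
        writeback(frame)   # dirty: must be written back to disk first
    # a clean (or now-flushed) frame can go straight to the free list
    return frame

evict(Frame("P1", 3, dirty=True))    # triggers a write-back
evict(Frame("P2", 7, dirty=False))   # clean: no write-back needed
```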
8.2 Interaction between the Process Scheduler and Memory Manager
[Figure: user-level processes 1 through n above the kernel. Inside the kernel, the CPU scheduler (with ready_q holding PCB1, PCB2, ...) and the memory manager (with the free-list of pframes, frame table FT, per-process page tables PT1, PT2, ... and disk maps DM1, DM2, ...). A timer interrupt causes upcall (1) to the CPU scheduler; a page fault causes upcall (2) to the memory manager; the scheduler dispatches processes onto the CPU.]
Once the CPU scheduler dispatches a process, it runs until one of the following happens:
- The hardware timer interrupts the CPU, causing upcall (1) to the CPU scheduler, which may result in a process switch. The CPU scheduler takes the appropriate action to schedule the next process on the CPU.
- The process incurs a page fault, resulting in upcall (2) to the memory manager, which handles the page fault.
- The process makes a system call, resulting in another subsystem (not shown) getting an upcall.
8.3 Page Replacement Policies
How do we pick a victim page to evict from physical memory when a page fault occurs and the free-list is empty? For a given string of page references, the policy should result in the least number of page faults; this attribute ensures that the amount of time spent in the OS dealing with page faults is minimized. Ideally, once a particular page has been brought into physical memory, the policy should not incur a page fault for the same page again; this attribute ensures that the page fault handler attempts to respect the reference pattern of the user programs.
A replacement policy may use:
- Local victim selection: simple; no frame table needed; poor memory utilization.
- Global victim selection: better memory utilization; the norm.
Ideally there are no page faults and the memory manager never runs. The goal is to minimize (or eliminate) page faults.
8.3.1 Belady's Min
In 1966, Laszlo Belady proposed an optimal page replacement algorithm that requires knowing the page reference string in advance. Obviously this is impossible in practice, but the performance of Belady's Min may be used as a reference standard against which to compare the performance of other policies.
8.3.2 First In First Out (FIFO)
- Affix a timestamp when a page is brought into physical memory.
- If a page has to be replaced, choose the longest-resident page as the victim.
- No special hardware needed.
[Figure: circular queue of <PID, VPN> entries with head and tail pointers; the queue length equals the number of physical frames.]
FIFO maintains a queue: as a page is read in, enqueue it; use the page at the head of the queue as the frame to replace.
Sample reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
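A short simulation of FIFO on this sample string makes the "anomalous behavior" noted later concrete: with this particular reference string, FIFO incurs more faults with 4 frames than with 3 (Belady's anomaly).

```python
# FIFO page-replacement simulation on the sample reference string.
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()   # head of the queue = longest-resident page
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()   # evict the longest-resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more frames, MORE faults
```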
8.3.3 Least Recently Used (LRU)
The LRU policy makes the assumption that if a page has not been referenced in a long time, there is a good chance it will not be referenced in the near future either. Thus, the victim page under LRU is the page that has not been used for the longest time.
[Figure: push-down stack of <PID, VPN> entries; the most recently used page is at the top, the least recently used page at the bottom.]
[Figure: contents of the physical frames and the push-down stack over time for the sample reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5; each reference moves the referenced page to the top of the stack.]
LRU is appealing but not actually feasible in hardware:
- The stack has as many entries as the number of physical frames. For a physical memory of 64 MB and an 8 KB page size, the stack would need 8 K entries -- far too big to put in the datapath.
- On every access, the hardware has to modify the stack to place the current reference on top. Too slow.
- LRU may also be a bad choice in certain situations, e.g. cyclically accessing N+1 pages on a machine with N frames available.
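A small simulation shows both points: in the cyclic N+1-pages-on-N-frames case every reference faults, and on the earlier sample string LRU is actually no better than FIFO.

```python
# LRU simulation using an OrderedDict as the push-down stack
# (least recently used at the front, most recently used at the end).
from collections import OrderedDict

def lru_faults(refs, nframes):
    stack = OrderedDict()   # keys in LRU -> MRU order
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # hit: move to top of stack
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)  # evict least recently used
            stack[page] = True
    return faults

# Pathological case: cycling through N+1 pages with N frames
# makes every single reference a fault under LRU.
print(lru_faults([1, 2, 3, 4] * 2, 3))                        # 8 refs, 8 faults
print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))    # 10 faults
```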
8.3.3.1 Approximate LRU: A Small Hardware Stack
- Add a hardware stack with ~16 entries.
- Push references onto the stack; if a reference is already in the stack, bring it to the top.
- The bottom reference falls out of the stack.
- When a free frame is needed, randomly select a frame whose page is not in the stack.
- Shown to be successful in some applications, but probably not fast enough for a high-speed pipelined processor.
8.3.3.2 Approximate LRU: Reference bit per page frame
- Associate a bit with each frame: the hardware sets it on reference; software reads and clears it.
- Keep an n-bit counter register for each frame.
- Periodically, a daemon right-shifts all counters and puts each frame's reference bit into the high-order bit of its counter.
- Frames with the highest counter values have been used recently; frames with the lowest counter values are the LRU frames.
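This shift-and-insert scheme is often called "aging". A sketch, assuming 8-bit counters (ref_bits stands in for what the hardware would set):

```python
# Sketch of the reference-bit "aging" approximation of LRU.
NBITS = 8   # assumed counter width

def age(counters, ref_bits):
    """One daemon pass: shift each counter right, insert ref bit on the left."""
    for frame in range(len(counters)):
        counters[frame] = (counters[frame] >> 1) | (ref_bits[frame] << (NBITS - 1))
        ref_bits[frame] = 0   # software clears the reference bits

counters = [0, 0, 0]
ref_bits = [1, 0, 1]   # frames 0 and 2 referenced in this interval
age(counters, ref_bits)
ref_bits[2] = 1        # only frame 2 referenced in the next interval
age(counters, ref_bits)

# Frame 2 (referenced in both intervals) now has the highest counter;
# frame 1 (never referenced) has the lowest, so it is the LRU candidate.
print(counters)   # [64, 0, 192]
```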
8.3.4 Second chance page replacement algorithm
- Initially, the OS clears the reference bits of all frames. As the program executes, the hardware sets the reference bits of the pages the program references.
- If a page has to be replaced, the memory manager chooses the replacement candidate in FIFO manner.
- If the chosen victim's reference bit is set, the manager clears the bit and moves the page to the end of the FIFO queue.
- The victim is the first candidate in FIFO order whose reference bit is not set.
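A minimal sketch of the victim-selection loop (the queue-of-pairs representation is an assumption for this example):

```python
# Second chance: a FIFO queue of [page, ref_bit] pairs; a page with its
# reference bit set gets one more trip through the queue.
from collections import deque

def pick_victim(queue):
    """queue holds [page, ref_bit] pairs in FIFO order; mutates it."""
    while True:
        page, ref = queue.popleft()
        if ref:
            queue.append([page, 0])   # clear the bit: a second chance
        else:
            return page               # first unreferenced page in FIFO order

q = deque([[10, 1], [11, 0], [12, 1]])
print(pick_victim(q))   # 11: page 10 was referenced, so it is skipped
```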
8.3.5 Review of page replacement algorithms
- FIFO (hardware assist: none): could lead to anomalous behavior.
- Belady's MIN (hardware assist: an oracle): provably optimal performance; not realizable in hardware; useful as a standard for performance comparison.
- True LRU (hardware assist: push-down stack): expected performance close to optimal; infeasible for hardware implementation due to space and time complexity; worst-case performance may be similar to, or even worse than, FIFO.
- Approximate LRU #1 (hardware assist: a small hardware stack): expected performance close to optimal; worst-case performance may be similar to, or even worse than, FIFO.
- Approximate LRU #2 (hardware assist: reference bit per page): expected performance close to optimal; moderate hardware complexity; worst-case performance may be similar to, or even worse than, FIFO.
- Second chance replacement (hardware assist: reference bit per page): expected performance better than FIFO; memory manager implementation simpler than the LRU schemes.
8.3.6 Optimizing Memory Management
Beyond the basic techniques presented so far, additional optimizations are possible; they layer on top of the techniques already described.
8.3.7 Pool of free page frames
Instead of waiting for the free page count to reach zero, periodically run a daemon that evicts pages so as to keep a pool of n free page frames at all times.
8.3.7.1 Overlapping I/O with Processing
- Upon eviction, add the evicted frame to the free list.
- If the frame was dirty, schedule it for write-back.
- When a frame is needed, select only clean frames, skipping over dirty frames still awaiting write-back.
8.3.7.2 Reverse Mapping to Page Tables
When the daemon runs to maintain the free list at a certain level, it may take pages from processes that immediately turn around and page fault on those very pages. If we maintain additional information in the free list, we can detect this and give the page frame back to the process, since its data is still intact.
[Figure: free list whose entries record the page frame, its clean/dirty status, and the <PID, VPN> it last held, e.g. pframe 52 (dirty), pframe 22 (clean), pframe 200, ...]
8.3.8 Thrashing
Suppose many processes are in memory but CPU utilization is low. What could cause this? Too many I/O-bound processes? Too many CPU-bound processes? Should we add more processes into memory?
If some processes do not really have enough pages to support their current needs, they will constantly page fault and try to grab frames from other processes. This may lead those other processes to also start page faulting and grabbing frames. Does this sound like the ideal moment to introduce more processes into memory?
Controlling thrashing starts with understanding temporal locality: the tendency for the same memory location to be accessed repeatedly over a short period of time. During a given time period t, certain pages will be accessed and others will not. If, during t, the pages that need to be accessed are all in memory, then no page faults will occur.
8.3.9 Working set
The working set is the set of pages that defines the locus of activity of a program. The working set size (WSS) denotes the number of distinct pages touched by a process in a window of time. The total memory pressure (TMP) exerted on the system is the sum of the WSS of all the processes currently competing for resources.
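These two definitions translate directly into code. The reference strings below are made-up examples:

```python
# WSS = number of DISTINCT pages referenced in the last `window` references;
# TMP = sum of the WSS of all processes competing for memory.

def wss(refs, window):
    return len(set(refs[-window:]))   # distinct pages in the window

refs_p1 = [1, 2, 1, 3, 1, 2]   # P1's locus of activity: pages {1, 2, 3}
refs_p2 = [7, 7, 8, 7, 8, 7]   # P2's locus of activity: pages {7, 8}

tmp = wss(refs_p1, 6) + wss(refs_p2, 6)
print(tmp)   # 3 + 2 = 5 frames of total memory pressure
```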
8.3.10 Controlling thrashing
- If TMP > physical memory: decrease the degree of multiprogramming.
- Else: increase the number of physical frames per process.
Alternatively, monitor the page fault rate against a high and a low watermark:
- If the page fault rate rises above the high watermark, decrease the number of programs in memory.
- If it falls below the low watermark, increase the number of programs.
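The watermark policy fits in a few lines; the watermark values below are arbitrary assumptions for illustration.

```python
# Sketch of the page-fault-rate watermark scheme.
HIGH, LOW = 0.10, 0.02   # assumed watermarks (faults per reference)

def adjust_multiprogramming(pfr, nprogs):
    if pfr > HIGH:
        return nprogs - 1   # thrashing risk: swap a process out
    if pfr < LOW:
        return nprogs + 1   # memory to spare: admit another process
    return nprogs           # between the watermarks: leave it alone

print(adjust_multiprogramming(0.15, 5))   # 4
print(adjust_multiprogramming(0.01, 5))   # 6
```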
8.4 Other considerations
- Prepaging: after swapping a process out, bring its entire working set back in when swapping it back in.
- The memory manager and the I/O system must work together. Suppose the I/O system is writing data into a page that is being swapped out by the memory manager! The solution is pinning: a pinned page cannot be evicted.
- What about the pages where the memory manager itself lives? They are pinned.
8.5 Speeding up Address Translation
We will do whatever we can to reduce the page fault rate:
- Context switch time: < 100 instructions.
- Page fault time without disk I/O: < 100 instructions.
- Disk I/O to handle a page fault: ~1,000,000 instructions.
Nevertheless, a big penalty remains: each processor memory reference actually requires two memory accesses -- one to the page table, and one to the actual location desired.
Consider spatial locality: the tendency to access nearby locations over a short period of time. If we access some address 0x…0, how likely is it that we will soon access the nearby address 0x…C? What frame will we access in each case? Recall that addresses differing only in their low-order offset bits fall in the same page, and hence the same frame.
8.5.1 Address Translation with TLB
The solution: install a hardware device -- the translation look-aside buffer (TLB) -- that stores recently accessed page table entries. It is divided into two sections, one for user entries and one for kernel entries; on a context switch, the user portion may be flushed quickly with a special kernel-mode instruction.
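A worked effective-access-time calculation shows why the TLB matters. The latency figures below are assumptions for the arithmetic, not from the source:

```python
# Effective access time (EAT) with and without a TLB.
mem = 100   # ns per memory access (assumed)
tlb = 1     # ns per TLB lookup (assumed)

no_tlb = mem + mem   # page table access + data access on EVERY reference

def eat(hit_rate):
    # On a hit: TLB lookup + data access; on a miss: TLB lookup +
    # page table access + data access.
    return tlb + hit_rate * mem + (1 - hit_rate) * (mem + mem)

print(no_tlb)      # 200 ns: a 2x penalty on every reference
print(eat(0.99))   # 102.0 ns: close to the bare 100 ns access
```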
8.6 Advanced topics in memory management
How big is a page table? Assume a 32-bit address, a page size of 4 KB, and a page table entry that is 32 bits long. And each process has its own page table! How can we make this work?
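Working the arithmetic out: 4 KB pages need 12 offset bits, leaving 20 bits of virtual page number, so each per-process table has 2^20 four-byte entries -- 4 MB.

```python
# Worked arithmetic for the page table size question above.
addr_bits = 32
page_size = 4 * 1024   # 4 KB
pte_size = 4           # 32-bit page table entry

offset_bits = page_size.bit_length() - 1    # 12 bits of page offset
entries = 2 ** (addr_bits - offset_bits)    # 2^20 = 1,048,576 entries
table_bytes = entries * pte_size            # 4 MB -- per process!

print(offset_bits, entries, table_bytes)    # 12 1048576 4194304
```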
8.6.1 Multi-level Page Tables
To deal with excessively large page tables, we can page the page table itself.
[Figure: the virtual address is split into two page-table indices plus an offset; the PTBR points to the outer page table (entries 0-1023), whose entry selects a page of the page table (entries 0-1023), whose entry in turn gives the physical frame in memory.]
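A two-level walk can be sketched for a 32-bit address with 4 KB pages: 10 bits index the outer table, 10 bits index the inner table, 12 bits are the offset. The dict-of-dicts representation is purely illustrative.

```python
# Sketch of a two-level page table translation (10 + 10 + 12 bit split).

def translate(vaddr, outer_table):
    idx1 = (vaddr >> 22) & 0x3FF    # top 10 bits: outer table index
    idx2 = (vaddr >> 12) & 0x3FF    # next 10 bits: inner table index
    offset = vaddr & 0xFFF          # low 12 bits: page offset
    inner_table = outer_table[idx1] # a "page of the page table"
    frame = inner_table[idx2]
    return (frame << 12) | offset

outer = {1: {2: 0x3E}}              # VPN split (1, 2) -> physical frame 0x3E
vaddr = (1 << 22) | (2 << 12) | 0x123
print(hex(translate(vaddr, outer)))   # 0x3e123
```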
8.6.2 Sophisticated use of the page table entry
Page table entries may contain more than just a physical frame number; they may also contain mode information:
- RW: read/write
- RO: read only
- CW: copy on write
This is useful for items such as static read-only areas of memory and process forking.
8.6.3 Inverted page tables
Virtual memory is usually much larger than physical memory. Some architectures (e.g. IBM Power processors) use an inverted page table, essentially a frame table. The inverted page table eliminates the need for a per-process page table: its size is proportional to the size of physical memory (in frames) rather than virtual memory. Unfortunately, inverted page tables complicate the logical-to-physical address translation done by the hardware:
- The hardware handles address translation through the TLB mechanism.
- On a TLB miss, the hardware hands over control (through a trap) to the OS to resolve the translation in software. The OS is responsible for updating the TLB as well.
- The architecture usually provides special instructions for reading, writing, and purging TLB entries in privileged mode.
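The essence of the structure can be sketched as one entry per physical frame. Real designs use hashing to find the matching entry; the linear scan below is only to keep the sketch short, and all names are illustrative.

```python
# Sketch of an inverted page table: index = physical frame number,
# entry = the (PID, VPN) occupying that frame (or None if free).

inverted = [("P2", 20), None, ("P5", 15), ("P1", 32)]

def translate(pid, vpn, offset):
    for frame, owner in enumerate(inverted):
        if owner == (pid, vpn):
            return frame * 4096 + offset   # assume 4 KB frames
    return None   # not resident: trap to the OS to resolve in software

print(translate("P5", 15, 0x10))   # frame 2 -> physical address 8208
print(translate("P9", 1, 0))       # None: would fault
```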
8.7 Summary
Topics covered:
- Demand paging basics, including hardware support and the OS data structures for demand paging.
- Interaction between the CPU scheduler and the memory manager in dealing with page faults.
- Page replacement policies, including FIFO, LRU, and second chance replacement.
- Techniques for reducing the page fault penalty, including keeping a pool of page frames ready for allocation on page faults, performing any necessary writes of replaced pages to disk lazily, and reverse mapping replaced page frames to the displaced pages.
- Thrashing, and the use of a process's working set to control thrashing.
- The translation look-aside buffer for speeding up address translation to keep the pipelined processor humming along.
- Advanced topics in memory management, including multi-level page tables and inverted page tables.