Virtual Memory III
Copyright ©: Lawrence Angrave, Vikram Adve, Caccamo
Working set (1968, Denning)
Main idea: figure out how much memory a process needs to keep most of its recent computation in memory with very few page faults.
How? The working set model assumes temporal locality: recently accessed pages are more likely to be accessed again. Thus, as the number of page frames increases above some threshold, the page fault rate drops dramatically.
Working set (1968, Denning)
What we want to know: the collection of pages a process must have in memory in order to avoid thrashing. This requires knowing the future. And our trick is?
Intuition of the working set: the pages referenced by the process during the last T units of execution are considered to comprise its working set, where T is the working set parameter.
Uses of the working set:
Cache partitioning: give each application enough space for its WS
Page replacement: preferably discard non-WS pages
Scheduling: a process is not executed unless its WS is in memory
Working set in details
At virtual time vt, the working set of a process, W(vt, T), is the set of pages that the process has referenced during the past T units of virtual time. Virtual time vt is measured in terms of the sequence of memory references. It is easy to see that the size of the working set grows as the window size T is increased, bounded above by the process's total number of pages.
Limitations of the working set:
High overhead to maintain a moving window over memory references
The past does not always predict the future correctly
It is hard to identify the best value for the window size T
Calculating Working Set (worked example figure): a reference string of 12 references incurs 8 page faults with the given window size.
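As a concrete illustration (not from the slide's figure; the reference string, window size, and page numbers below are made up), here is a minimal C sketch that computes W(vt, T) at each reference and counts page faults, assuming the resident set is kept equal to the current working set:

#include <stdio.h>

#define MAX_PAGES 16   /* illustrative upper bound on page numbers */

int main(void) {
    /* Made-up reference string and window size, only for illustration. */
    int refs[] = {1, 2, 3, 2, 1, 4, 5, 4, 5, 6, 1, 6};   /* 12 references */
    int n = (int)(sizeof refs / sizeof refs[0]);
    int T = 4;                         /* working set window (virtual time units) */
    int faults = 0;

    for (int vt = 0; vt < n; vt++) {
        int in_ws[MAX_PAGES] = {0};

        /* W(vt, T): pages referenced during the last T references before vt. */
        for (int k = vt - T; k < vt; k++)
            if (k >= 0) in_ws[refs[k]] = 1;

        /* Fault if the referenced page is not resident,
           assuming the resident set equals the working set. */
        if (!in_ws[refs[vt]]) faults++;
        in_ws[refs[vt]] = 1;           /* the page (re)joins the working set */

        printf("vt=%2d ref=%d  W = {", vt, refs[vt]);
        for (int p = 0; p < MAX_PAGES; p++)
            if (in_ws[p]) printf(" %d", p);
        printf(" }\n");
    }
    printf("total faults: %d\n", faults);
    return 0;
}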
Working set in details
Strategy for sizing the resident set of a process based on the working set:
Keep track of the working set of each process
Periodically remove from the resident set the pages that no longer belong to the working set
A process is scheduled for execution only if its working set is in main memory
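A minimal sketch of this bookkeeping, assuming an illustrative per-page structure and function name that are not from any real kernel:

#include <stdbool.h>

/* Illustrative per-page bookkeeping for one process (not real kernel code). */
struct page_info {
    bool resident;    /* the page currently holds a physical frame   */
    long last_ref;    /* virtual time of the most recent reference   */
};

/* Called periodically: shrink the resident set back to W(now, T) by
 * releasing the frames of pages not referenced during the last T units. */
void trim_resident_set(struct page_info *pages, int npages, long now, long T) {
    for (int i = 0; i < npages; i++) {
        if (pages[i].resident && now - pages[i].last_ref > T)
            pages[i].resident = false;   /* frame can go to another process */
    }
}

/* Scheduling rule: the process is dispatched only if every page of its
 * working set is resident; otherwise the OS pages the working set in first. */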
Working set of real programs: typical programs have phases. [Figure: working-set size over time, stable within each phase with transitions between phases; a curve labeled "sum of both" shows the combined working set.]
Page Fault Frequency (PFF) algorithm: an approximation of the pure working set policy.
Assume that the working set strategy is valid; hence, properly sizing the resident set will reduce the page fault rate. Let's focus on the process's fault rate rather than its exact page references:
If the process's page fault rate increases beyond a maximum threshold, increase its resident set size.
If the page fault rate decreases below a minimum threshold, decrease its resident set size. Without harming the process, the OS can free some frames and allocate them to other processes suffering a higher PFF.
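A hedged sketch of the PFF adjustment step; the threshold values, structure, and function name are assumptions made for illustration:

/* Illustrative PFF bookkeeping; the thresholds are made-up example values. */
#define PFF_HIGH 0.05   /* faults per reference: grow the resident set above this  */
#define PFF_LOW  0.01   /* faults per reference: shrink the resident set below this */

struct proc_mm {
    long faults_in_interval;   /* page faults since the last adjustment        */
    long refs_in_interval;     /* memory references since the last adjustment  */
    long resident_frames;      /* current resident-set size (in frames)        */
};

/* Called periodically for each process to resize its resident set. */
void pff_adjust(struct proc_mm *p) {
    if (p->refs_in_interval == 0)
        return;                          /* nothing measured yet */

    double rate = (double)p->faults_in_interval / (double)p->refs_in_interval;

    if (rate > PFF_HIGH)
        p->resident_frames++;            /* thrashing risk: grant one more frame        */
    else if (rate < PFF_LOW && p->resident_frames > 1)
        p->resident_frames--;            /* headroom: free a frame for other processes  */

    p->faults_in_interval = 0;           /* start a new measurement interval */
    p->refs_in_interval = 0;
}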
Data structures for UNIX memory management
To support paged virtual memory, UNIX uses the following data structures:
Page table (one per process)
Disk block descriptor table (one per process): each virtual page has a disk block descriptor that maps it to a specific block on a swap device
Page frame data table: one entry per memory frame, used by the replacement algorithm
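The declarations below are only an illustrative sketch of these three structures; the field names and widths are assumptions, not the actual UNIX sources:

/* Page table entry: one per virtual page, one table per process. */
struct pte {
    unsigned frame      : 20;  /* physical frame number                  */
    unsigned valid      : 1;   /* page is resident in main memory        */
    unsigned protect    : 3;   /* protection bits (e.g., Write, eXecute) */
    unsigned modify     : 1;   /* page written since it was loaded       */
    unsigned referenced : 1;   /* page referenced recently               */
};

/* Disk block descriptor: one per virtual page, one table per process;
 * maps the page to a specific block on a swap device. */
struct disk_block_descriptor {
    int  swap_device;          /* which swap device holds the page       */
    long block_number;         /* block on that device                   */
};

/* Page frame data table entry: one per physical frame;
 * consulted by the page replacement algorithm. */
struct pfdata {
    int  state;                /* e.g., free, in use, on the free list   */
    int  ref_count;            /* processes currently mapping the frame  */
    int  swap_device;          /* where to fetch/flush the page          */
    long block_number;
};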
Page protection
Page protection is implemented using the "protect" field in the PTE. For instance, the Write and eXecute bits enforce write and execute permissions at the page level. The check is done by hardware on each access; an illegal access generates SIGSEGV.
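A small user-level C demonstration of page-level protection, assuming a POSIX system with mmap and mprotect: once the Write permission is removed, the next write to the page fails the hardware check and the kernel delivers SIGSEGV.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long pagesz = sysconf(_SC_PAGESIZE);

    /* One anonymous page, initially readable and writable. */
    char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }
    p[0] = 'x';                           /* allowed: Write bit is set */

    /* Clear the Write permission in the page's protection bits. */
    if (mprotect(p, pagesz, PROT_READ) != 0) { perror("mprotect"); exit(1); }

    printf("read still works: %c\n", p[0]);
    p[0] = 'y';   /* illegal write: the hardware check fails and the
                     kernel delivers SIGSEGV to the process */
    return 0;
}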
The size of the swapping device
We have seen that a 32-bit address space corresponds to 4 GB of virtual memory (for example, 2^20 virtual pages of 4 KiB each). It is important to notice that the size of the swap device sets a hard limit on the total number of virtual pages that can be allocated to the currently running processes.
Loader and Virtual Memory
At process creation, the loader allocates a contiguous chunk of virtual pages for the .text and .data sections of the process's ELF executable file (stored on disk). Notably, the loader does not copy any data from disk to main memory.
Quiz: what mechanism can be used to physically load the needed blocks of the executable file into main memory?
Loader and Virtual Memory
At process creation, the loader allocates a contiguous chunk of virtual pages for the .text and .data sections of the process's ELF executable file (stored on disk). Notably, the loader does not copy any data from disk to main memory.
Quiz: what mechanism can be used to physically load the needed blocks of the executable file into main memory?
File memory mapping (do you remember mmap?) takes care of loading pages of the executable file on demand during process execution.
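A user-level sketch of the same idea, assuming a POSIX system (the path /bin/ls is just an example of an executable to map): mmap establishes the mapping without copying any data, and the kernel reads file blocks only when the mapped pages are first touched.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/bin/ls", O_RDONLY);   /* any executable file; path is an example */
    if (fd < 0) { perror("open"); exit(1); }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); exit(1); }

    /* Set up the mapping: no data is copied from disk at this point. */
    unsigned char *img = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (img == MAP_FAILED) { perror("mmap"); exit(1); }

    /* First touch of a mapped page causes a page fault; the kernel then
       reads the corresponding file block into a frame and updates the PTE. */
    printf("ELF magic: %02x %c%c%c\n", img[0], img[1], img[2], img[3]);

    munmap(img, st.st_size);
    close(fd);
    return 0;
}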
Revisiting memory mapping
The OS can initialize a virtual memory area by mapping it to a file on disk (see mmap). This process is called memory mapping. Memory mapping is an efficient mechanism for loading an executable file into main memory and for mapping shared objects across multiple processes.
Can you suggest any example of shared objects?
Revisiting memory mapping
The OS can initialize a virtual memory area by mapping it to a file on disk (see mmap). This process is called memory mapping. Memory mapping is an efficient mechanism for loading an executable into main memory and for mapping shared objects across multiple processes.
Can you suggest any example of shared objects?
Many processes have identical read-only text areas (e.g., running multiple instances of the UNIX shell tcsh).
A shared library, such as the standard C library, is another example of a memory-mapped object. It would be extremely inefficient to duplicate copies of shared code in physical memory.
Shared memory regions set up by cooperating processes are a third example.
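A minimal sketch of the shared-memory case, assuming a POSIX system with fork and MAP_SHARED | MAP_ANONYMOUS (the buffer size and message are arbitrary): a shared mapping created before fork is backed by the same physical pages in parent and child, so a write by one is visible to the other.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* One page shared between parent and child. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); exit(1); }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {                        /* child writes into the shared page */
        strcpy(shared, "hello from the child");
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent reads: %s\n", shared);  /* same physical page, no copying */
    return 0;
}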
Revisiting memory mapping
[Figure: processes X and Y each have virtual pages VPN0–VPN8; their page tables map a shared object on disk into the same physical memory, at virtual address 1 in process X and at virtual address 3 in process Y.]
The shared physical page does not have to be mapped at the same virtual address for all processes sharing it.
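The figure's point can also be observed from user space; the sketch below (purely illustrative, the file path is arbitrary) maps the same file twice and gets two different virtual addresses that refer to the same underlying data.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/passwd", O_RDONLY);   /* any readable file; path is an example */
    if (fd < 0) { perror("open"); exit(1); }

    /* Two read-only shared mappings of the same file object. */
    char *a = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    char *b = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED || b == MAP_FAILED) { perror("mmap"); exit(1); }

    /* Different virtual addresses, same underlying data (for a read-only
       shared file mapping, typically the same physical page as well). */
    printf("mapping 1 at %p, mapping 2 at %p\n", (void *)a, (void *)b);
    printf("first byte: %c == %c\n", a[0], b[0]);

    close(fd);
    return 0;
}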