Memory Management: Review of Designs and Issues


1 Memory Management: Review of Designs and Issues
– Basic memory management
– Swapping
– Virtual memory
– Page replacement algorithms
– Design issues for paging systems
– Implementation issues
– Segmentation

2 Memory Management
Ideally programmers want memory that is
– large
– fast
– non-volatile
Memory hierarchy
– small amount of fast, expensive memory: cache
– some medium-speed, medium-price main memory
– gigabytes of slow, cheap disk storage
The memory manager handles the memory hierarchy

3 Multiprogramming with Fixed Partitions
Fixed memory partitions
– separate input queues for each partition
– single input queue

4 Modeling Multiprogramming
CPU utilization as a function of the number of processes in memory (the degree of multiprogramming)
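The utilization curve on this slide follows the standard probabilistic model: if each process spends a fraction p of its time waiting for I/O, the CPU is idle only when all n processes wait at once, so utilization is 1 − pⁿ. A minimal sketch (assuming I/O waits are independent):

```python
def cpu_utilization(io_wait, n):
    """CPU utilization with n processes, each waiting on I/O a
    fraction io_wait of the time: 1 - io_wait**n."""
    return 1 - io_wait ** n

# With 80% I/O wait, as on the next slide:
for n in (1, 2, 4):
    print(n, round(cpu_utilization(0.8, n), 2))  # 0.2, 0.36, 0.59
```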

5 Analysis of Multiprogramming System Performance
– Arrival and work requirements of 4 jobs
– CPU utilization for 1–4 jobs with 80% I/O wait
– Sequence of events as jobs arrive and finish (the numbers show the amount of CPU time jobs get in each interval)

6 Relocation and Protection
Cannot be sure where a program will be loaded in memory
– address locations of variables and code routines cannot be absolute
– must keep a program out of other processes' partitions
Use base and limit values
– address locations are added to the base value to map to a physical address
– an address location larger than the limit value is an error
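A minimal sketch of the base-and-limit check described above (the register values here are hypothetical):

```python
def translate(virtual_addr, base, limit):
    """Relocate a program-relative address using base and limit values."""
    if virtual_addr >= limit:
        raise MemoryError("limit violation")   # outside this partition
    return base + virtual_addr                 # add base to get the physical address

print(translate(100, base=0x4000, limit=0x1000))  # 16484 (0x4064)
```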

7 Swapping (1)
Memory allocation changes as processes come into memory and leave it
Shaded regions are unused memory

8 Swapping (2)
– Allocating space for a growing data segment
– Allocating space for a growing stack and data segment

9 Memory Management with Bitmaps
Part of memory with 5 processes and 3 holes
– tick marks show allocation units
– shaded regions are free
The corresponding bitmap carries the same information as a list
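Allocating with a bitmap means scanning for a run of consecutive 0 bits; a sketch (the bitmap contents are made up):

```python
def find_run(bitmap, k):
    """Return the index of the first run of k free (0) allocation
    units in the bitmap, or -1 if no such run exists."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == k:
            return i - k + 1               # start of the run
    return -1

bitmap = [1, 1, 0, 0, 0, 1, 0, 0]          # 1 = allocated, 0 = free
print(find_run(bitmap, 3))                 # 2: units 2-4 are free
```

This linear scan is the main drawback of bitmaps: searching for a run of a given length is slow.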

10 Memory Management with Linked Lists
Four neighbor combinations for the terminating process X

11 Virtual Memory: Paging (1)
The position and function of the MMU

12 Paging (2)
The relation between virtual addresses and physical memory addresses is given by the page table

13 Page Tables (1)
Internal operation of the MMU with 16 4-KB pages
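With 16 pages of 4 KB each, the MMU splits a 16-bit virtual address into a 4-bit virtual page number and a 12-bit offset; a software sketch (the example mapping is hypothetical):

```python
OFFSET_BITS = 12                           # 4 KB pages
PAGE_SIZE = 1 << OFFSET_BITS

def mmu_translate(vaddr, page_table):
    """page_table maps virtual page -> (present bit, frame number)."""
    page = vaddr >> OFFSET_BITS            # virtual page number
    offset = vaddr & (PAGE_SIZE - 1)       # offset is copied unchanged
    present, frame = page_table.get(page, (False, None))
    if not present:
        raise RuntimeError("page fault")   # trap to the OS
    return (frame << OFFSET_BITS) | offset

table = {2: (True, 6)}                     # virtual page 2 -> frame 6
print(hex(mmu_translate(0x2ABC, table)))   # 0x6abc
```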

14 Page Tables (2)
– 32-bit address with 2 page table fields
– Two-level page tables
– Second-level page tables
– Top-level page table
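For a 32-bit address with 4 KB pages, a common two-level split is 10 bits of top-level index (PT1), 10 bits of second-level index (PT2), and 12 bits of offset; the exact widths below are an assumption, not taken from the slide:

```python
def split_vaddr(vaddr):
    """Split a 32-bit virtual address into top-level index, second-level
    index and page offset (10 + 10 + 12 bits)."""
    pt1 = (vaddr >> 22) & 0x3FF            # index into the top-level table
    pt2 = (vaddr >> 12) & 0x3FF            # index into a second-level table
    offset = vaddr & 0xFFF                 # byte within the page
    return pt1, pt2, offset

print(split_vaddr(0x00403004))             # (1, 3, 4)
```

The payoff is that second-level tables for unused regions of the address space need not exist at all.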

15 Page Tables (3)
A typical page table entry

16 TLBs: Translation Lookaside Buffers
A TLB to speed up paging
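A TLB is a small associative cache of recent page-to-frame translations; a toy software model (the capacity and eviction choice here are arbitrary assumptions):

```python
class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}                  # virtual page -> frame

    def lookup(self, page):
        return self.entries.get(page)      # None means a TLB miss

    def insert(self, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict oldest entry
        self.entries[page] = frame

tlb = TLB()
tlb.insert(2, 6)
print(tlb.lookup(2))                       # 6: hit, no page table walk needed
print(tlb.lookup(5))                       # None: miss, walk the page table
```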

17 Inverted Page Tables
Comparison of a traditional page table with an inverted page table

18 Page Replacement Algorithms
A page fault forces a choice
– which page must be removed
– to make room for the incoming page
A modified page must first be saved
– an unmodified page is just overwritten
Better not to choose an often-used page
– it will probably need to be brought back in soon

19 Optimal Page Replacement Algorithm
Replace the page needed at the farthest point in the future
– optimal but unrealizable
Estimate by logging page use on previous runs of the process
– although this is impractical

20 Not Recently Used (NRU) Page Replacement Algorithm
Each page has a Referenced bit and a Modified bit
– the bits are set when the page is referenced or modified
Pages are classified:
1. not referenced, not modified
2. not referenced, modified
3. referenced, not modified
4. referenced, modified
NRU removes a page at random from the lowest-numbered non-empty class
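The NRU class of a page can be computed as 2·R + M (indices 0–3, in the same order as the four classes above); a sketch with made-up page states:

```python
import random

def nru_victim(pages):
    """pages: list of (name, R, M) tuples. Picks a random page from
    the lowest-numbered non-empty class, where class = 2*R + M."""
    classes = {0: [], 1: [], 2: [], 3: []}
    for name, r, m in pages:
        classes[2 * r + m].append(name)
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])

pages = [("A", 1, 1), ("B", 0, 1), ("C", 1, 0)]
print(nru_victim(pages))                   # "B": the only page with R = 0
```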

21 FIFO Page Replacement Algorithm
Maintain a linked list of all pages
– in the order they came into memory
The page at the beginning of the list is replaced
Disadvantage
– the page that has been in memory the longest may be an often-used page
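A short simulation of FIFO replacement (the reference string is invented):

```python
from collections import deque

def fifo_faults(num_frames, refs):
    """Count page faults under FIFO replacement."""
    memory = deque()                       # front = oldest page
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == num_frames:
                memory.popleft()           # evict the longest-resident page
            memory.append(page)
    return faults

print(fifo_faults(3, [1, 2, 3, 4, 1, 2])) # 6: every reference faults
```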

22 Second Chance Page Replacement Algorithm
Operation of second chance
– pages sorted in FIFO order
– page list if a fault occurs at time 20 and A has its R bit set (the numbers above the pages are loading times)

23 The Clock Page Replacement Algorithm
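Clock is second chance with the pages kept in a circular list and a hand instead of list movement; a sketch (the frame contents are invented):

```python
def clock_victim(frames, hand):
    """frames: list of {'page', 'R'} dicts forming a circular list.
    Advances the hand, clearing R bits, until a page with R == 0 is
    found; returns (victim index, new hand position)."""
    while True:
        if frames[hand]['R'] == 0:
            return hand, (hand + 1) % len(frames)
        frames[hand]['R'] = 0              # second chance: clear R, move on
        hand = (hand + 1) % len(frames)

frames = [{'page': 'A', 'R': 1}, {'page': 'B', 'R': 0}, {'page': 'C', 'R': 1}]
victim, hand = clock_victim(frames, 0)
print(frames[victim]['page'])              # B: A had its R bit cleared and was skipped
```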

24 Least Recently Used (LRU)
Assume pages used recently will be used again soon
– throw out the page that has been unused for the longest time
Must keep a linked list of pages
– most recently used at the front, least recently used at the rear
– update this list on every memory reference!
Alternatively, keep a counter in each page table entry
– choose the page with the lowest counter value
– periodically zero the counters
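The linked-list variant can be modeled with an ordered dictionary; a sketch (the reference string is invented):

```python
from collections import OrderedDict

def lru_faults(num_frames, refs):
    """Count page faults under LRU; the OrderedDict keeps the most
    recently used page at the end, the least recently used at the front."""
    memory = OrderedDict()
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)       # touched: now most recent
        else:
            faults += 1
            if len(memory) == num_frames:
                memory.popitem(last=False) # evict the least recently used
            memory[page] = True
    return faults

print(lru_faults(3, [1, 2, 3, 1, 4, 2]))  # 5: only the second reference to 1 hits
```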

25 Review of Page Replacement Algorithms

26 Replacement Scope
The scope of the replacement policy can be:
– Local replacement policy: chooses only among the resident pages of the process that generated the page fault when selecting a page to replace
– Global replacement policy: considers all unlocked pages in main memory as candidates for replacement, regardless of which process owns a particular page
Global policies are attractive because of their simplicity of implementation and minimal overhead

27 Replacement Scope (2)
Fixed Allocation, Local Replacement Scope
– the number of frames allocated to a process is fixed in advance
– too small an allocation leads to thrashing; too big an allocation wastes memory and reduces the number of processes that can be in memory
– the operating system chooses the page to be replaced from among the frames allocated to that process
– a global replacement scope is not possible with fixed allocation

28 Replacement Scope (3)
Variable Allocation, Global Replacement Scope
– the easiest combination to implement, and a common technique in operating systems
– when a page fault occurs, a free frame is added to the resident set of the faulting process and the page is brought into that frame
– it is harder to determine who should lose a page: the selection is made among all the frames in memory (except locked ones), and there is no discipline to determine which process should lose a page from its resident set
– may remove a page from a process that needs it

29 Replacement Scope (4)
Variable Allocation, Local Replacement Scope
– concept:
– when loading a process, allocate it a set of page frames
– after a page fault, replace one of those pages
– periodically reevaluate the set size and adjust it to improve performance
– for this method it is important to determine the resident set size and the timing of changes; the working set strategy is one method of dealing with those aspects

30 Load Control
Determines the number of processes that will be resident in main memory
– referred to as the multiprogramming level
With too few processes, there will be many occasions when all processes are blocked, and much time will be spent in swapping
Too many processes will lead to thrashing (the page fault rate will be very high)

31 Load Control (2)
Despite good designs, the system may still thrash
When a page fault frequency algorithm indicates that
– some processes need more memory
– but no processes need less
Solution: reduce the number of processes competing for memory
– swap one or more to disk and divide up the pages they held
– reconsider the degree of multiprogramming

32 Multiprogramming Level
Solutions:
– a page fault frequency algorithm implicitly incorporates load control (only those processes that have a sufficiently large resident set are allowed to execute); in providing the required resident set size for each active process, the replacement policy automatically and dynamically determines the number of active processes
– adjust the multiprogramming level dynamically so that the mean time between page faults equals the mean time required to process a page fault; studies show that this is the point at which processor utilization reaches its maximum

33 Page Size
Small page size
Advantages
– less internal fragmentation
– better fit for various data structures and code sections
– less unused program in memory
Disadvantages
– programs need many pages, hence larger page tables
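The trade-off above can be quantified with the classic overhead model: for average process size s, page size p, and e bytes per page table entry, the per-process overhead is se/p (page table space) plus p/2 (average internal fragmentation in the last page), minimized at p = √(2se). A sketch with an illustrative 1 MB process:

```python
import math

def paging_overhead(s, e, p):
    """Per-process overhead: page-table space s*e/p plus an average
    of half a page (p/2) lost to internal fragmentation."""
    return s * e / p + p / 2

def optimal_page_size(s, e):
    """The p that minimises paging_overhead (derivative set to zero)."""
    return math.sqrt(2 * s * e)

# For a 1 MB average process and 8-byte page table entries:
print(round(optimal_page_size(1 << 20, 8)))  # 4096
```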

34 Separate Instruction and Data Spaces
– One address space
– Separate I and D spaces

35 Shared Pages
Two processes sharing the same program and its page table

36 Cleaning Policy
Need for a background process, the paging daemon
– periodically inspects the state of memory
When too few frames are free
– it selects pages to evict using a replacement algorithm
It can use the same circular list as the regular page replacement algorithm, but with a different pointer

37 Cleaning Policy (2)
Demand cleaning
– a page is written out only when it has been selected for replacement
Pre-cleaning
– writes modified pages before their associated frames are needed, so that pages can be written out in batches
The best approach uses page buffering
– replaced pages are placed in two lists: modified and unmodified
– pages in the modified list are periodically written out in batches
– pages in the unmodified list are either reclaimed if referenced again or lost when their frame is assigned to another page

38 Implementation Issues: Operating System Involvement with Paging
Four times when the OS is involved with paging
1. Process creation
– determine program size
– create page table
2. Process execution
– MMU reset for new process
– TLB flushed
3. Page fault time
– determine virtual address causing fault
– swap target page out, bring needed page in
4. Process termination
– release page table and pages

39 Locking Pages in Memory
Virtual memory and I/O occasionally interact
A process issues a call to read from a device into a buffer
– while it waits for the I/O, another process starts up
– and has a page fault
– the buffer of the first process may be chosen to be paged out
Need to be able to lock some pages in memory
– exempting them from being target pages

40 Backing Store
(a) Paging to a static swap area
(b) Backing up pages dynamically

41 Separation of Policy and Mechanism
Page fault handling with an external pager

42 Segmentation (1)
In a one-dimensional address space with growing tables, one table may bump into another

43 Segmentation (2)
Segmentation allows each table to grow or shrink independently

44 Segmentation (3)
Comparison of paging and segmentation

45 Implementation of Pure Segmentation
(a)–(d) Development of checkerboarding
(e) Removal of the checkerboarding by compaction

46 Segmentation with Paging
– The descriptor segment points to the page tables
– Segment descriptor (the numbers are field lengths)

47 Segmentation with Paging (2)
A 34-bit virtual address

48 Segmentation with Paging (3)
Protection levels

49 Segmentation Summary
A process is divided into a number of segments that need not be equal in size
When a process is brought into main memory, usually all of its segments are brought in and a process segment table is set up
Advantages:
– the virtual address space of a process is divided into logically distinct units which correspond to constituent parts of the process
– segments are the natural units of access control; a process may have different access rights for different segments
– segments are the natural units for sharing code and data objects with other processes
Disadvantages:
– it is inconvenient for the operating system to manage storage allocation for variable-sized segments, because each segment held in physical memory must be contiguous; after the system has been running for a while, the free memory can become fragmented
– this is known as external fragmentation: even though the total free memory may be far greater than the size of some segment that must be loaded, no single free area is large enough to load it
– external fragmentation can be addressed by moving segments that are not needed out to disk and moving them back when they are needed; this is called swapping

50 Paging Summary
Advantages: by using fixed-size pages in the virtual address space and fixed-size frames in the physical address space, paging addresses some of the problems with segmentation:
– external fragmentation is no longer a problem (all frames in physical memory are the same size)
– transfers to/from disk can be performed at the granularity of individual pages
Disadvantages:
– the page size is a choice made by the CPU or OS designer; it may not fit the size of program data structures and can lead to internal fragmentation, since storage allocation requests must be rounded up to an integral number of pages
– there may be no correspondence between page protection settings and application data structures; if two processes are to share data structures, they may do so only at the level of sharing entire pages
– because per-process page tables are required, the OS is likely to need more storage for its internal data structures

51 Combined Paging and Segmentation
Both paging and segmentation have their strengths
– Paging: transparent to the programmer, eliminates external fragmentation, thus providing efficient use of main memory
– Segmentation: visible to the programmer, but with the ability to handle growing data structures, modularity, and support for sharing and protection
Some systems are equipped with hardware (processor) and software (operating system) to provide both
– the user address space is broken up into a number of segments, at the discretion of the programmer
– each segment is broken up into a number of fixed-size pages, which are equal in length to a main memory frame
– from the programmer's point of view, a logical address still consists of a segment number and a segment offset
– from the system's point of view, the segment offset is seen as a page number and a page offset within the specified segment
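The two views of the same address can be sketched as a pure bit-field split; the 8/12/12 widths below are an illustrative assumption, not taken from the slides:

```python
SEG_BITS, PAGE_BITS, OFFSET_BITS = 8, 12, 12   # hypothetical 32-bit layout

def decompose(vaddr):
    """Programmer's view: (segment, segment offset).
    System's view: the segment offset is itself (page, page offset)."""
    seg = vaddr >> (PAGE_BITS + OFFSET_BITS)
    seg_offset = vaddr & ((1 << (PAGE_BITS + OFFSET_BITS)) - 1)
    page = seg_offset >> OFFSET_BITS
    page_offset = seg_offset & ((1 << OFFSET_BITS) - 1)
    return seg, page, page_offset

seg, page, off = decompose(0x02345678)
print(seg, hex(page), hex(off))                # 2 0x345 0x678
```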

52 Memory Hierarchy Review
It is a tradeoff between size, speed, and cost
– Register: the fastest memory element, but small storage and very expensive
– Cache: fast and small compared to main memory; acts as a buffer between the CPU and main memory; it contains the most recently used memory locations (address and contents are recorded here)
– Main memory: the RAM of the system
– Disk storage: HDD

53 Cache Review
Every address reference goes first to the cache; if the desired address is not there, we have a cache miss: the contents are fetched from main memory into the indicated CPU register, and the contents are also saved into the cache
Most software exhibits temporal locality of access, meaning the same address is likely to be used again soon; if so, the address will be found in the cache and we have a cache hit

54 Address Binding
An address used in an instruction can point anywhere in the virtual address space of the process; it still must be bound to a physical memory address
Programs are made of modules. Compilers or assemblers translating a module don't know where the module will be loaded in physical memory
– one way to deal with this is to assume that the code they output will start at address zero in memory
– a linker can take a sequence of such modules and create a single composite module by adjusting the (relative) addresses in all but the first module; the addresses in the resulting module are still relative to its start
– compilers have the whole virtual address space at their disposal
Address translation can be dynamic or static

55 Static Address Binding
The OS is responsible for managing memory, so it gives the loader a base address at which to load the module
– the loader adjusts all the relative addresses in the module, converting them to absolute physical addresses
– this is called static relocation or static binding
Problems with static binding:
– once loaded, the code or data of the program can't be moved in memory without further relocation
– all processes executing in such a system share the same physical address space; they are not protected from each other if addressing errors occur; even the OS code is exposed to addressing errors

56 Dynamic Address Binding
Advantages of dynamic address binding:
– a given program can run anywhere in physical memory and can be moved around by the operating system; all the addresses it uses are relative to its own virtual address space, so it is unaware of the physical locations at which it happens to have been placed
– it is possible to protect processes from each other, and to protect the operating system from application processes, by a mechanism that isolates the addresses seen by each process
To realize these advantages we need a mechanism that binds the virtual addresses within the loaded instructions to physical addresses when the instructions are executed

57 Memory Management Design Issues
The design of a memory management system for an OS depends on three areas of choice:
– whether or not to use virtual memory techniques (any new OS provides support for them)
– whether to use paging, segmentation, or both (pure segmentation is very rare; when segmentation is combined with paging, most of the design issues are in the area of paging)
– the algorithms employed for various aspects of memory management (the idea is to try to minimize the rate at which page faults occur, because page faults cause software overhead)
The performance of any set of policies depends on main memory size, the relative speed of main and secondary memory, the size and number of processes competing for resources, and the execution behavior of individual processes

58 Operating System Policies for VM
Fetch policy
– when a page should be brought into main memory
– demand paging, pre-paging
Placement policy
– where in real memory a process piece is to reside
Replacement policy
– which page is to be replaced when a new one is brought into main memory
– basic algorithms: optimal, least recently used (LRU), first-in first-out (FIFO), clock

59 Operating System Policies for VM (2)
Resident set management
– not all pages of a process need to (or can) be brought into main memory; how much memory to allocate to a process?
– resident set size: fixed, variable
– replacement scope: global, local
Cleaning policy
– the opposite of the fetch policy: determining when a modified page should be written to secondary memory
– demand, pre-cleaning
Load control
– determining the number of processes that will be resident in main memory
– degree of multiprogramming

60 Fetch Policy
When to bring a page into memory
Demand paging: load the page when a process tries to reference it
– tends to produce a large number of page faults when the process starts; the page fault rate then settles down
Pre-paging: bring in pages that are likely to be used in the near future
– tries to take advantage of disk characteristics: it is generally more efficient to load several consecutive sectors/pages than individual sectors, due to seek and rotational latency
– hard to guess correctly which pages will be referenced (easier to guess at program startup); may load unnecessary pages
– its usefulness is not clear

61 Placement Policy
Determines where in real memory a process piece is to reside
– in pure segmentation systems it is an important design issue, but most modern operating systems are not based on pure segmentation VM techniques
– in pure paging, or segmentation combined with paging, placement is irrelevant, since the address translation hardware and the main memory access hardware can perform their functions for any page-frame combination with equal efficiency

62 Replacement Policy
Which page is to be replaced when a new one is brought into main memory?
Several interrelated concepts are involved:
– Resident set management: how many page frames are to be allocated to each active process? Should the set of pages considered for replacement be limited to the process that caused the page fault, or include all page frames in main memory?
– Replacement policy: among the considered set of pages, which particular page should be selected for replacement? The page removed should be the page least likely to be referenced in the near future; most policies predict future behavior on the basis of past behavior

63 Resident Set Size
How many pages to bring into main memory for a process; how much memory to allocate?
– the smaller the memory allocation per process, the more processes can reside in main memory (this increases the probability that the OS will find at least one ready process at any given time, reducing swapping time)
– with a relatively small number of pages per process, the rate of page faults will be higher
– beyond a certain size, additional allocation of main memory to a process has no noticeable effect on its page fault rate, because of the principle of locality
Two policies are found in operating systems:
– fixed allocation
– variable allocation

64 Resident Set Size (2)
Fixed allocation
– gives a process a fixed number of pages within which to execute; the number is decided at initial process loading time and may be a function of the process type (interactive, batch, etc.) or may be based on guidance from the programmer or system manager
– when a page fault occurs, one of the pages of that process must be replaced
Variable allocation
– the number of pages allocated to a process varies over the lifetime of the process
– a process suffering persistently high page fault rates will be given additional frames to reduce its page fault rate
– a process with a very low page fault rate will be given a reduced frame allocation
– requires the operating system to continuously assess the behavior of the process, leading to software overhead and complexity

65 No More Memory
We are done now!

