Memory Management.

1 Memory Management

2 Motivation, when and where primary and secondary memory management is needed,
compiled code and memory relocation, linking and loading, processes and primary memory management, static and dynamic partitioning using MFT and MVT algorithms, memory allocation policies, critique of various policies like first fit, best fit, internal and external fragmentation, secondary memory management, fixed and variable partitions, virtual memory concept, paging and page replacement policies, page faults, thrashing, hardware support for paging, segmentation, segmentation with paging.

3 Motivation for Memory mgt.
A program must be brought into memory and placed within a process for it to be run.
Input queue – the collection of processes on disk waiting to be brought into memory to run.
User programs go through several steps before being run.

4 Motivation for Memory mgt.
The main motivation for management of main memory comes from the support for multiprogramming. Several executable processes reside in main memory at any given time. In other words, there are several programs using the main memory as their address space. Also, programs move into, and out of, the main memory as they terminate, get suspended for some IO, or as new executables are required to be loaded in main memory.

5 when and where primary and secondary memory management is needed
The von Neumann principle for the design and operation of computers requires that a program be primary-memory resident to execute. Also, a user needs to revisit his programs often during their evolution. However, because primary memory is volatile, a user needs to store his programs in some non-volatile store. Therefore, some form of memory management is needed at both primary and secondary memory levels.

6 Memory Management Subdividing memory to accommodate multiple processes
Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time

7 Memory Management Requirements
Relocation: The programmer does not know where the program will be placed in memory when it is executed. While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated). Memory references in the code must be translated to actual physical memory addresses.

8 Main Memory Management
Allocation: First of all, the processes that are scheduled to run must be resident in memory; these processes must be allocated space in main memory.

9 Why Main Memory Management
Swapping, fragmentation and compaction: If a program is moved out or terminates, it creates a hole, (i.e. a contiguous unused area) in main memory. When a new process is to be moved in, it may be allocated one of the available holes. It is quite possible that main memory has far too many small holes at a certain time. In such a situation none of these holes is really large enough to be allocated to a new process that may be moving in. The main memory is too fragmented. It is, therefore, essential to attempt compaction. Compaction means OS re-allocates the existing programs in contiguous regions and creates a large enough free area for allocation to a new process.

10 Why Main Memory Management
Garbage collection: Some programs use dynamic data structures, dynamically using and discarding memory space. Technically, the deleted data items (from a dynamic data structure) release memory locations. In practice, however, the OS does not collect such free space immediately for allocation, because doing so affects performance. Such areas are therefore called garbage. When garbage exceeds a certain threshold, the OS would not have enough memory available for any further allocation. This entails compaction (or garbage collection) without severely affecting performance.

11 Protection: With many programs residing in main memory it can happen that due
to a programming error (or with malice) some process writes into the data or instruction area of some other process. The OS ensures that each process accesses only its own allocated area, i.e. each process is protected from other processes.

12 Swapping A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
Roll out, roll in – a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
The system maintains a ready queue of ready-to-run processes which have memory images on disk.

13 Schematic View of Swapping

14 Virtual memory: Often a processor sees a large logical storage space (a virtual storage space) though the actual main memory may not be that large. So some facility needs to be provided to translate a logical address available to a processor into a physical address to access the desired data or instruction.

15 Dynamic Loading Routine is not loaded until it is called
Better memory-space utilization; an unused routine is never loaded. Useful when large amounts of code are needed to handle infrequently occurring cases. No special support from the operating system is required; implemented through program design.

16 Dynamic Linking Linking postponed until execution time.
A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine, and executes the routine. The operating system is needed to check whether the routine is in the process's memory address space. Dynamic linking is particularly useful for libraries.

17 Multistep Processing of a User Program

18 Binding of Instructions and Data to Memory
Address binding of instructions and data to memory addresses can happen at three different stages.
Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: Relocatable code must be generated if the memory location is not known at compile time.
Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another; this needs hardware support for address maps (e.g., base and limit registers).

19 Logical vs. Physical Address Space
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. Logical address – generated by the CPU; also referred to as virtual address. Physical address – address seen by the memory unit. Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme.

20 Memory-Management Unit (MMU)
Hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses.
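The relocation-register scheme can be sketched in a few lines. This is an illustrative model only, not a real MMU interface; the register values below (a process loaded at 14000 with a 3000-byte limit) are assumed for the example.

```python
# Minimal sketch of dynamic relocation with a relocation (base) register
# and a limit register. Names and values are illustrative.

def translate(logical_addr, relocation_reg, limit_reg):
    """Map a logical address to a physical address, trapping on overflow."""
    if logical_addr >= limit_reg:
        raise MemoryError("addressing error: logical address beyond limit")
    return relocation_reg + logical_addr

# A process loaded at physical address 14000 with a 3000-byte image:
assert translate(346, 14000, 3000) == 14346
```

The user program only ever manipulates the logical address (346 here); the physical address (14346) is produced by the hardware on the way to memory.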

21 Dynamic relocation using a relocation register

22 Memory-Management Unit (MMU)
The Memory-Management Unit (MMU) is hardware that maps logical to physical addresses. With the MMU, the value in the relocation register is added to every address generated by a user process when it is sent to memory. The user program deals with logical addresses; it never sees real physical addresses.

23 Memory Partitioning
Fixed partitioning: equal size, unequal size.
Dynamic partitioning.

24 Fixed Partitioning Equal-size partitions
Any process whose size is less than or equal to the partition size can be loaded into an available partition. If all partitions are full, the operating system can swap a process out of a partition. A program may not fit in a partition; the programmer must then design the program with overlays.

25 Fixed Partitioning Main memory use is inefficient
Any program, no matter how small, occupies an entire partition This is called internal fragmentation

26

27 Placement Algorithm with Partitions
Equal-size partitions: placement is trivial; because all partitions are of equal size, it does not matter which partition is used.
Unequal-size partitions: each process can be assigned to the smallest partition within which it will fit, with a queue for each partition; processes are assigned so as to minimize wasted memory within a partition.

28 IBM Mainframe OS – OS/MFT

29 Dynamic Partitioning Partitions are of variable length and number
A process is allocated exactly as much memory as required. Starts off well, but eventually leads to a lot of holes in memory: external fragmentation. Must use compaction to shift processes so they are contiguous and all free memory is in one block.
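The compaction step can be sketched as follows: a minimal model, assuming memory is a list of (name, size) blocks where a name of None marks a hole. Compaction slides the allocated blocks together and merges all holes into one.

```python
# Hedged sketch of compaction for dynamic partitioning; the block layout
# and job names are illustrative.

def compact(blocks):
    """Return blocks with all allocations contiguous and one merged hole."""
    allocated = [(n, s) for n, s in blocks if n is not None]
    free = sum(s for n, s in blocks if n is None)
    return allocated + ([(None, free)] if free else [])

mem = [("J2", 15), (None, 10), ("J3", 20), (None, 45), ("J6", 30)]
assert compact(mem) == [("J2", 15), ("J3", 20), ("J6", 30), (None, 55)]
```

The two holes of 10k and 45k become one 55k free area large enough for a new process that neither hole could hold alone.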

30 Fixed Partition
The sizes of the partitions are decided at design time by the designer. A job has to be placed in a partition equal to or bigger than the job size; this may cause internal fragmentation that cannot be used by any other job until the current job finishes. (Diagram: jobs J1=30k, J2=50k, J3=30k, J4=25k placed into the fixed partitions of a 100k memory; the unused part of each partition is internal fragmentation.)

31 Fixed Partitions (continued)
Disadvantages: Requires entire program to be stored contiguously Jobs are allocated space on the basis of first available partition of required size Works well only if all of the jobs are of the same size or if the sizes are known ahead of time Arbitrary partition sizes lead to undesired results Too small a partition size results in large jobs having longer turnaround time Too large a partition size results in memory waste or internal fragmentation

32 Dynamic Partition
Partitions are made when a job is to be placed; the size of a partition depends on the size of the job. After a while there will be parts that cannot be used, since they are too small for any job, i.e. external fragmentation. (Diagram: in a 95k memory, J1 10k, J2 15k, J3 20k and J4 50k are allocated in turn; when J1 and J4 end, J5 5k and J6 30k are placed into the holes, leaving small unusable fragments.)

33 Dynamic Partition –cont.
External fragmentation

34 Allocation Technique The OS must keep lists of memory locations, noting which are free and which are busy. When new jobs come into the system, free partitions must be allocated to them. Allocation policies:
First-fit memory allocation – the first partition fitting the requirement.
Best-fit memory allocation – the closest fit, i.e. the smallest partition fitting the requirement.
Worst-fit – the biggest partition fitting the requirement.
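The three policies can be compared on a free list. A minimal sketch, assuming `free` holds hole sizes in KB (the 20k/10k/50k/30k partition sizes mirror the worked examples on the following slides); each function returns the index of the chosen hole, or None if nothing fits.

```python
# Illustrative first-fit / best-fit / worst-fit selection over a free list.

def first_fit(free, size):
    return next((i for i, h in enumerate(free) if h >= size), None)

def best_fit(free, size):
    fits = [(h, i) for i, h in enumerate(free) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(free, size):
    fits = [(h, i) for i, h in enumerate(free) if h >= size]
    return max(fits)[1] if fits else None

holes = [20, 10, 50, 30]           # hole sizes in KB, illustrative
assert first_fit(holes, 25) == 2   # the 50k hole: first one that fits
assert best_fit(holes, 25) == 3    # the 30k hole: smallest that fits
assert worst_fit(holes, 25) == 2   # the 50k hole: biggest available
```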

35 Allocation Technique
First-fit example: consider jobs J1 25k, J2 45k, J3 15k, J4 10k in this order, with fixed partitions of 20k, 10k, 50k and 30k. First fit places J1 in the 50k partition, J3 in the 20k partition and J4 in the 10k partition; J2 has to wait until J1 finishes (diagram).

36 Allocation Technique
Best-fit example (same jobs and partitions): J1 25k goes into the 30k partition, J2 45k into the 50k partition, J3 15k into the 20k partition, and J4 10k into the 10k partition; every job is placed (diagram).

37 Allocation Technique
Worst-fit example (same jobs and partitions): J1 25k goes into the 50k partition (leaving 25k), J3 15k into the 30k partition (leaving 15k), and J4 10k into the 20k partition (leaving 10k); J2 45k cannot be placed (diagram).

38 Allocation Technique
Side-by-side comparison of the first-fit, best-fit and worst-fit placements from the previous three slides (diagram).

39 PAGING The logical address space of a process can be noncontiguous; a process is allocated physical memory wherever the latter is available.
Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8192 bytes).
Divide logical memory into blocks of the same size called pages.
Keep track of all free frames.
To run a program of size n pages, find n free frames and load the program.
Set up a page table to translate logical to physical addresses.
Internal fragmentation can still occur.

40 PAGING Address generated by CPU is divided into:
Page number (p) – used as an index into a page table which contains base address of each page in physical memory. Page offset (d) – combined with base address to define the physical memory address that is sent to the memory unit.
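Because the page size is a power of two, the split into page number and offset is just a shift and a mask. A minimal sketch, assuming a 4 KB page (12-bit offset) and a hand-made page table; the frame numbers are illustrative.

```python
# Splitting a CPU-generated address into (page number, offset) and
# translating it through a toy page table.

PAGE_SIZE = 4096
OFFSET_BITS = 12                     # log2(PAGE_SIZE)

def split(addr):
    """Return (page number p, page offset d) for a logical address."""
    return addr >> OFFSET_BITS, addr & (PAGE_SIZE - 1)

def physical(addr, page_table):
    """Combine the frame's base address with the offset d."""
    p, d = split(addr)
    return (page_table[p] << OFFSET_BITS) | d

page_table = {0: 5, 1: 6, 2: 1}      # page -> frame, illustrative
assert split(4097) == (1, 1)         # one byte into page 1
assert physical(4097, page_table) == 6 * 4096 + 1
```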

41 PAGING

42 PAGING EXAMPLES

43 PAGING Before allocation After allocation

44 Implementation of Page Table
The page table is kept in main memory. The page-table base register (PTBR) points to the page table. The page-table length register (PTLR) indicates the size of the page table. In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. The two-memory-access problem can be solved by a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

45 Associative Memory Associative memory – parallel search
Address translation (A´, A´´): if A´ is in an associative register, get the frame # out; otherwise get the frame # from the page table in memory. Each associative register holds a (page #, frame #) pair.

46 Paging Hardware With TLB

47 Memory Protection Memory protection is implemented by associating a protection bit with each frame. A valid-invalid bit is attached to each entry in the page table: "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page; "invalid" indicates that the page is not in the process's logical address space.

48 Valid (v) or Invalid (i) Bit In A Page Table

49 Segmentation Memory-management scheme that supports user view of memory A program is a collection of segments. A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays

50 User’s View of a Program
(Diagram: segments 1–4 in user space mapped to noncontiguous regions of physical memory space.)

51 Segmentation Architecture
A logical address consists of a two-tuple: <segment-number, offset>. Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has: base – contains the starting physical address where the segment resides in memory; limit – specifies the length of the segment. The segment-table base register (STBR) points to the segment table's location in memory. The segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR.
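The base/limit lookup can be sketched directly. A minimal model, assuming a hand-made segment table of (base, limit) pairs; the values (segment 2 starting at 4300 with limit 1100) are illustrative example data, not from any real system.

```python
# Sketch of segment-table translation with bounds checks. The length of
# the table stands in for the STLR check.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit)

def seg_translate(s, offset):
    """Translate <segment-number s, offset> to a physical address."""
    if s >= len(segment_table):          # s must satisfy s < STLR
        raise MemoryError("illegal segment number")
    base, limit = segment_table[s]
    if offset >= limit:                  # offset must lie within the segment
        raise MemoryError("addressing error: offset beyond segment limit")
    return base + offset

assert seg_translate(2, 53) == 4353     # segment 2 starts at 4300
```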

52 Segmentation Architecture
Relocation: dynamic, by segment table.
Sharing: shared segments; same segment number.
Allocation: first fit / best fit; external fragmentation.
Protection: with each entry in the segment table associate a validation bit (= 0 → illegal segment) and read/write/execute privileges.
Protection bits are associated with segments; code sharing occurs at the segment level.

53 Address Translation Architecture

54 EXAMPLE

55 Sharing of Segments

56 Segmentation with Paging – MULTICS
The MULTICS system solved problems of external fragmentation and lengthy search times by paging the segments Solution differs from pure segmentation in that the segment-table entry contains not the base address of the segment, but rather the base address of a page table for this segment

57 MULTICS Address Translation Scheme

58 Virtual Memory Virtual memory – separation of user logical memory from physical memory.
Only part of the program needs to be in memory for execution: some functions are almost never executed, and data structures in a program are often allocated more memory than they actually use.
The logical address space can be much larger than the physical address space.
Allows address spaces to be shared by several processes.
Virtual memory can be implemented via: demand paging, demand segmentation.

59 Virtual Memory That is Larger Than Physical Memory
(Diagram: a virtual-address space larger than physical memory.)

60 Transfer of a Paged Memory to Contiguous Disk Space
Demand paging: pages are only loaded into memory when they are demanded during execution. Benefits: less I/O needed, less memory needed, a higher degree of multiprogramming, faster response. The pager (lazy swapper) never swaps a page into memory unless that page will be needed. An extreme case, pure demand paging, starts a process with no pages in memory. (Diagram: transfer of a paged memory to contiguous disk space.)

61 Valid-Invalid Bit The V/I bit is hardware support associated with each page table entry (PTE): 1 means the page is legal and in memory; 0 means the page is invalid, or valid but on disk → page fault. (Diagram: page table with frame # and valid-invalid bit columns.)

62 PAGE FAULT If there is ever a reference to a page, the first reference will trap to the OS → page fault. The OS looks at another table to decide: invalid reference → abort; just not in memory → continue. Get an empty frame. Swap the page into the frame. Reset the tables, set validation bit = 1. Restart the instruction (tricky cases: block move, auto increment/decrement addressing).

63 Handling a Page Fault If we access a PTE and find the valid bit is not set
→ a page-fault trap. Check a table in the PCB to determine: invalid reference → abort the process; valid page, but the page is on disk → continue. Find a free frame from the free-frame list, and then schedule a disk I/O to read the desired page into that free frame; if there is no free frame, find a page in memory to swap out (page replacement algorithms). Restart the instruction that was interrupted by the page fault.
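The fault-handling steps can be sketched as a toy handler. This is a hedged model only: the free-frame list, the `"invalid"`/`"on-disk"` markers and the `read_from_disk` callback are stand-ins for kernel machinery, not a real interface; the no-free-frame path is left to the replacement algorithms of the later slides.

```python
# Toy page-fault handler following the slide's steps: classify the fault,
# grab a free frame, read the page in, update the table, return for restart.

def handle_fault(page, page_table, free_frames, read_from_disk):
    entry = page_table.get(page)
    if entry == "invalid":
        raise MemoryError("invalid reference: abort process")
    if not free_frames:
        raise NotImplementedError("no free frame: run a replacement algorithm")
    frame = free_frames.pop()
    read_from_disk(page, frame)     # schedule disk I/O into the free frame
    page_table[page] = frame        # reset tables, validation bit = 1
    return frame                    # then restart the faulting instruction

pt = {3: "on-disk"}
loads = []
assert handle_fault(3, pt, [7], lambda p, f: loads.append((p, f))) == 7
assert pt[3] == 7 and loads == [(3, 7)]
```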

64 What happens if there is no free frame?
Page replacement – find some page in memory that is not really in use, and swap it out. Algorithm performance – we want an algorithm which will result in the minimum number of page faults. The same page may be brought into memory several times.

65 Performance of Demand Paging
Page fault rate: 0 ≤ p ≤ 1.0.
Effective Access Time (EAT): EAT = (1 – p) × memory access time + p × page fault time.
e.g. p = 0.1, memory access time = 200 ns, page fault time = 8 ms:
EAT = (1 – 0.1) × 200 ns + 0.1 × 8 ms = 180 ns + 800,000 ns ≈ 0.8 ms.
Why is a page fault so expensive? The page fault rate significantly affects system performance, so reduce the page fault rate p!
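The EAT arithmetic above can be reproduced directly (the 200 ns and 8 ms figures are the slide's example values):

```python
# Effective access time for demand paging:
#   EAT = (1 - p) * memory access time + p * page fault service time

def eat_ns(p, mem_ns=200, fault_ms=8):
    """Return the effective access time in nanoseconds."""
    return (1 - p) * mem_ns + p * fault_ms * 1_000_000  # ms -> ns

assert round(eat_ns(0.1)) == 800_180   # roughly 0.8 ms per access
assert eat_ns(0.0) == 200.0            # no faults: plain memory access
```

Even a 10% fault rate makes the average access about 4000 times slower than a plain memory reference, which is why reducing p dominates everything else.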

66 Page Replacement Prevent over-allocation of memory by modifying page-fault service routine to include page replacement. Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk. Page replacement completes separation between logical memory and physical memory – large virtual memory can be provided on a smaller physical memory.

67 Need For Page Replacement

68 Page Replacement Basic page replacement algorithm
When no free frame exists there are 3 possible solutions: terminate the process; swap out the process and decrease the multiprogramming degree; or page replacement.
Basic page replacement:
Find the location of the desired page on disk.
Find a free frame: if there is a free frame, use it; if there is no free frame, use a page replacement algorithm to select a victim frame. Check whether the victim frame is modified ("dirty"); if it is dirty, write it back to disk.
Load the demanded page from disk into the (newly) free frame. Update the page and frame tables.
Restart the interrupted process. (Page replacement diagram is next.)

69 Page Replacement Diagram

70 Page replacement Want lowest page-fault rate.
Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string.

71 First-In-First-Out (FIFO) Algorithm

72 First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
With 3 frames (3 pages can be in memory at a time per process): 9 page faults.
With 4 frames: 10 page faults.
FIFO replacement exhibits Belady's anomaly: more frames can mean more page faults, but only in some cases, not all, as the example above shows.
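A short simulation makes the fault counts checkable. This is a sketch of FIFO replacement, not any particular kernel's implementation; running the slide's reference string reproduces Belady's anomaly.

```python
# FIFO page replacement: on a fault, evict the page that has been
# resident the longest (the front of the queue).
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # memory full: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10   # Belady's anomaly: more frames, more faults
```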

73 FIFO Page Replacement

74 FIFO Illustrating Belady's Anomaly

75 Optimal Algorithm Replace the page that will not be used for the longest period of time.
4-frames example, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults.
How do you know the future reference string? You don't; the algorithm is used for measuring how well other algorithms perform.

76 Optimal Page Replacement

77 Least Recently Used (LRU) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Counter implementation: every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter. When a page needs to be replaced, look at the counters to find the smallest value, i.e. the least recently used page.
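The counter scheme can be sketched with a dictionary standing in for the per-entry counters (a model, not hardware): each reference copies the "clock" into the page's counter, and the victim is the smallest counter.

```python
# LRU via the counter implementation described above.

def lru_faults(refs, nframes):
    last_use, faults = {}, 0           # page -> time of last reference
    for clock, page in enumerate(refs):
        if page not in last_use:
            faults += 1
            if len(last_use) == nframes:
                victim = min(last_use, key=last_use.get)  # smallest counter
                del last_use[victim]
        last_use[page] = clock         # copy the clock into the counter
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert lru_faults(refs, 4) == 8        # vs. 10 faults for FIFO on 4 frames
```

Unlike FIFO, LRU (and all stack algorithms) cannot exhibit Belady's anomaly.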

78 LRU Page Replacement

79 LRU Algorithm (Cont.) Stack implementation – keep a stack of page numbers in doubly linked form. When a page is referenced, move it to the top; this requires 6 pointers to be changed. No search is needed for replacement.

80 Use Of A Stack to Record The Most Recent Page References

81 Counting Algorithms Keep a counter of the number of references that have been made to each page. LFU Algorithm: replaces page with smallest count. MFU Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used.

82 Global vs. Local Allocation
Global replacement – process selects a replacement frame from the set of all frames; one process can take a frame from another. Local replacement – each process selects from only its own set of allocated frames.

83 Thrashing If a process does not have "enough" pages, the page-fault rate is very high. This leads to: low CPU utilization; the operating system thinks that it needs to increase the degree of multiprogramming; another process is added to the system. Thrashing = a process is busy swapping pages in and out.

84 Thrashing Why does paging work? Locality model
A process migrates from one locality to another; localities may overlap. Why does thrashing occur? Thrashing occurs when the total size of the localities is greater than the total memory size.

85 Thank You

