
1 Lecture 5 Memory Management Part I

2 Lecture Highlights
- Introduction to Memory Management
- What is memory management
- Related Problems of Redundancy, Fragmentation and Synchronization
- Memory Placement Algorithms
- Continuous Memory Allocation Scheme
- Parameters Involved
- Parameter-Performance Relationships
- Some Sample Results

3 Introduction: What is memory management?
Memory management primarily deals with space multiplexing: all processes must be scheduled in such a way that every user gets the illusion that their processes reside in main memory (RAM). The job of the memory manager is to:
- keep track of which parts of memory are in use and which are free;
- allocate memory to processes when they need it and deallocate it when they are done;
- manage swapping between main memory and disk when main memory is not big enough to hold all the processes.
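To make the bookkeeping concrete, here is a minimal sketch (purely illustrative, not a prescribed design) that models memory as a list of (pid, start, size) segments, where a segment with no owner is a free hole; the 2560 KB layout is borrowed from the hole-generation example later in the lecture.

```python
# Minimal bookkeeping sketch: memory modeled as a list of (pid, start, size)
# segments with sizes in KB, where pid is None for a free hole.

def free_holes(memory):
    """Return the (start, size) pairs of the free holes the manager tracks."""
    return [(start, size) for pid, start, size in memory if pid is None]

def allocated(memory):
    """Return the segments currently owned by processes (or the OS)."""
    return [seg for seg in memory if seg[0] is not None]

# Example: a 2560 KB machine with the OS resident in the first 400 KB and the
# rest of user space currently free.
memory = [("OS", 0, 400), (None, 400, 2160)]
print(free_holes(memory))   # [(400, 2160)] -- one big hole of user space
```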

4 What is memory management: Visual Representation
Figure: main memory (operating system area plus user space holding processes P1 and P2) alongside the hard disc, with P1 being swapped out and P2 being swapped in.

5 Memory Management: An Example
This example illustrates the basic concept of memory management. We consider a "mickey mouse" (toy) system where:
- Memory size: 16 MB
- Transfer rate: 2 MB/ms
- RR time quantum: 2 ms
We'll use the process mix on the next slide and follow the RAM configuration before and after each time slot, as well as the action taking place during the slot, for five time slots.

6 Memory Management: An Example – The Process Mix

Process ID | Execution Time (ms) | Size (MB) | Transfer Time Needed (ms)
P1         | 4                   | 2         | 1
P2         | 2                   | 6         | 3
P3         | 6                   | 4         | 2
P4         | 8                   | 4         | 2
P5         | 2                   | 2         | 1
P6         | 10                  | 4         | 2
P7         | 2                   | 2         | 1
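The same mix can be written down as data, with the transfer time recomputed from the toy system's 2 MB/ms transfer rate (transfer time = size / rate); a small sketch with names of my own choosing:

```python
# The process mix as data; transfer time is derived from the 2 MB/ms rate.
TRANSFER_RATE_MB_PER_MS = 2

process_mix = {
    # pid: (execution time in ms, size in MB)
    "P1": (4, 2), "P2": (2, 6), "P3": (6, 4), "P4": (8, 4),
    "P5": (2, 2), "P6": (10, 4), "P7": (2, 2),
}

for pid, (exec_ms, size_mb) in process_mix.items():
    transfer_ms = size_mb / TRANSFER_RATE_MB_PER_MS
    print(f"{pid}: exec {exec_ms} ms, size {size_mb} MB, transfer {transfer_ms:.0f} ms")
```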

7 Memory Management: An Example – Time Slot 1
RAM configuration before: P1 (4 ms), P2 (2 ms), P3 (6 ms), P4 (8 ms)
During the slot: P1 executes
RAM configuration after: P1 (2 ms), P2 (2 ms), P3 (6 ms), P4 (8 ms)

8 Memory Management: An Example – Time Slot 2
RAM configuration before: P1 (2 ms), P2 (2 ms), P3 (6 ms), P4 (8 ms)
During the slot: P2 executes (P2 done); P1 spooled out in 1 ms; P5 spooled in in 1 ms
RAM configuration after: P5 (2 ms), P2 (0 ms), P3 (6 ms), P4 (8 ms)

9 Memory Management: An Example – Time Slot 3
RAM configuration before: P5 (2 ms), P2 (0 ms), P3 (6 ms), P4 (8 ms)
During the slot: P3 executes; P2 spooled out in 2 ms
RAM configuration after: P5 (2 ms), P2 (0 ms), P3 (4 ms), P4 (8 ms)

10 Memory Management: An Example – Time Slot 4
RAM configuration before: P5 (2 ms), P2 (0 ms), P3 (4 ms), P4 (8 ms)
During the slot: P4 executes; P2 spooled out in 1 ms; P6 spooled in in 1 ms
RAM configuration after: P5 (2 ms), P3 (4 ms), P4 (6 ms), P6 (10 ms), 2 MB hole

11 Memory Management: An Example – Time Slot 5
RAM configuration before: P5 (2 ms), P3 (4 ms), P4 (6 ms), 2 MB hole
During the slot: P5 executes (P5 done); P6 spooled in in 1 ms; P7 spooled in in 1 ms
RAM configuration after: P5 (0 ms), P3 (4 ms), P4 (6 ms), P7 (2 ms), P6 (10 ms)

12 Memory Management: An Example
The previous slides started a stepwise walk-through of the mickey mouse system. Try to complete the walk-through from this point on.

13 Related Problems: Synchronization problem in spooling
Spooling enables the transfer of one process while another process is executing. It aims to prevent the CPU from being idle, thus managing CPU utilization more efficiently.
The processes being transferred to main memory can be of different sizes. When transferring a very large process, it is possible that the transfer time exceeds the combined remaining execution time of the processes in RAM. This leaves the CPU idle, which is exactly the problem spooling was invented to avoid.
This is termed the synchronization problem. It arises because the variance in process sizes means that transfers and execution cannot be guaranteed to stay in step.
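A rough numeric illustration of the problem, using the toy system's 2 MB/ms transfer rate but otherwise hypothetical numbers:

```python
# If the incoming process is large enough, its transfer outlasts all remaining
# execution in RAM and the CPU goes idle. Numbers here are hypothetical.
transfer_rate = 2           # MB/ms, as in the toy system
incoming_size = 12          # MB, a deliberately large process
remaining_exec_in_ram = 4   # ms of execution left among resident processes

transfer_time = incoming_size / transfer_rate          # 6 ms
idle_time = max(0, transfer_time - remaining_exec_in_ram)
print(f"transfer takes {transfer_time} ms, CPU idle for {idle_time} ms")  # idle 2 ms
```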

14 Related Problems: Redundancy Problem
Usually the combined size of all processes is much bigger than the RAM size, and for this reason processes are swapped in and out continuously. One issue with this is: what is the use of transferring the entire process when only part of its code is executed in a given time slot? This is termed the redundancy problem.

15 Related Problems: Fragmentation
Fragmentation is encountered when free memory space is broken into little pieces as processes are loaded into and removed from memory. Fragmentation is of two types:
- External fragmentation
- Internal fragmentation
In the present context, we are concerned with external fragmentation and shall explore it in greater detail in the following slides.

16 Generation of Holes In A System: An Example
Figure (three memory maps over a 2560K memory, with the OS below 400K):
(a) OS, P1 (400K-1000K), P2 (1000K-2000K), P3 (2000K-2300K), and a 260K hole above P3
(b) P2 terminates, leaving a 1000K hole between P1 and P3
(c) P4 is allocated at 1000K-1700K, leaving a 300K hole below P3
Caption: P5 of size 500K cannot be allocated in part (c).

17 Generation of Holes In A System: An Example
In the previous visual representation, we see that initially P1, P2 and P3 are in RAM and the remaining 260K is not enough for P4 (700K). (part a)
When P2 terminates, it is spooled out, leaving behind a hole of size 1000K. So now we have two holes, of sizes 1000K and 260K respectively. (part b)
At this point we have a hole big enough to spool in P4, which leaves us with two holes of sizes 300K and 260K. (part c)
Thus, we see that holes are generated because the size of the spooled-out process is not the same as the size of the process waiting to be spooled in.
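A short sketch replaying this walkthrough with the segment-list representation from the earlier bookkeeping sketch (sizes in KB, illustrative only):

```python
# Part (a): OS, P1, P2, P3 resident; a 260K hole remains at the top.
memory = [("OS", 0, 400), ("P1", 400, 600), ("P2", 1000, 1000),
          ("P3", 2000, 300), (None, 2300, 260)]

# Part (b): P2 terminates and is spooled out, leaving a 1000K hole.
memory[2] = (None, 1000, 1000)

# Part (c): P4 (700K) is spooled into that hole; a 300K hole remains behind it.
memory[2:3] = [("P4", 1000, 700), (None, 1700, 300)]

holes = [size for pid, start, size in memory if pid is None]
print(holes)        # [300, 260] -- neither hole can hold P5 (500K)
print(sum(holes))   # 560 -- enough space in total, but not contiguous
```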

18 Related Problems: Fragmentation – External Fragmentation
External fragmentation exists when enough total memory space exists to satisfy a request but it is not contiguous; storage is fragmented into a large number of small holes. Referring to the figure of the hole-generation example, repeated on the next slide, two such cases can be observed.

19 Related Problems: Fragmentation – External Fragmentation
Figure (the same three memory maps as slide 16): (a) OS, P1, P2, P3 with a 260K hole; (b) P2 terminates, leaving a 1000K hole; (c) P4 allocated, leaving 300K and 260K holes.
Caption: P5 of size 500K cannot be allocated due to external fragmentation.

20 Related Problems: Fragmentation – External Fragmentation
From the figure on the last slide, we see:
- In part (a), there is a total external fragmentation of 260K, a space too small to satisfy the requests of either of the two remaining processes, P4 and P5.
- In part (c), there is a total external fragmentation of 560K. This space would be large enough to run process P5, except that the free memory is not contiguous: it is fragmented into two pieces, neither of which is large enough by itself to satisfy P5's memory request.

21 Related Problems Fragmentation – External Fragmentation This fragmentation problem can be severe. In the worst case, there could be a block of free (wasted) memory between every two processes. If all this memory were in one big free block, a few more processes could be run. Depending on the total amount of memory storage and the average process size, external fragmentation may be either a minor or major problem.

22 Related Problems: Fragmentation – External Fragmentation
One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory contents so as to place all free memory together in one large block. The simplest compaction algorithm moves all processes toward one end of memory and all holes toward the other, producing one large hole of available memory. This scheme can be quite expensive. The figure on the following slide shows different ways to compact memory; selecting an optimal compaction strategy is quite difficult.
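A minimal sketch of that simplest strategy, reusing the (pid, start, size) segment representation from the earlier sketches; the user-space base and memory top are taken from the hole-generation example and are illustrative assumptions:

```python
def compact(user_segments, user_base=400, mem_top=2560):
    """Pack all processes at user_base and leave one large hole at the top.
    user_segments: list of (pid, start, size) in KB; pid None marks a hole."""
    compacted, addr = [], user_base
    for pid, start, size in user_segments:
        if pid is not None:                 # keep processes, drop old holes
            compacted.append((pid, addr, size))
            addr += size
    if addr < mem_top:
        compacted.append((None, addr, mem_top - addr))   # the single big hole
    return compacted

# The fragmented layout from the hole-generation example:
user = [("P1", 400, 600), ("P4", 1000, 700), (None, 1700, 300),
        ("P3", 2000, 300), (None, 2300, 260)]
print(compact(user))   # P1, P4, P3 packed low; one 560 KB hole at the top
```

After compaction the two scattered holes (300K and 260K) merge into a single 560K hole, so a request like P5's 500K can now be satisfied.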

23 Related Problems: Fragmentation – External Fragmentation
Figure: Different ways to compact memory (addresses from 0 to 2100K, with the OS below 300K; P1, P2, P3, P4 are resident and the holes total 900K):
- Original allocation: OS, P1, P2, 400K hole, P3, 300K hole, P4, 200K hole
- Moved 600K: P1, P2, P3, P4 packed together, one 900K hole at the top
- Moved 400K: P1, P2, P4, P3, one 900K hole at the top
- Moved 200K: P1, P2, one 900K hole, P4, P3

24 Related Problems: Fragmentation – External Fragmentation
As mentioned earlier, compaction is an expensive scheme. The following example gives a more concrete idea of the cost. Given:
- RAM size = 128 MB
- Access time for 1 byte of RAM = 10 ns
- Each byte needs to be accessed twice during compaction.
Thus, compaction time = 2 × (10 × 10^-9 s) × (128 × 10^6 bytes) = 2560 × 10^-3 s = 2560 ms ≈ 3 s.
Supposing we are using RR scheduling with a time quantum of 2 ms, the compaction time is equivalent to 1280 time slots.

25 Related Problems: Fragmentation – External Fragmentation
Compaction is usually governed by the following two thresholds:
- Memory hole-size threshold: if every hole is at most a predefined size, the main memory undergoes compaction. This predefined size is termed the hole-size threshold. E.g., if we have two holes of size x and size y respectively and the hole-size threshold is 4 KB, then compaction is done provided x <= 4 KB and y <= 4 KB.
- Total hole percentage: the total hole percentage is the total hole size as a percentage of memory size. Compaction is undertaken only if it exceeds the designated threshold. E.g., taking the two holes of size x and size y and a total hole percentage threshold of 6%, then for a RAM size of 32 MB compaction is done only if (x + y) >= 6% of 32 MB.
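A sketch of how the two triggers might be checked together; the threshold defaults below are only example values, and combining the two conditions with a logical AND follows the rule the assignment states later (compact when fragmentation exceeds 6% and every hole is at most 50 KB):

```python
def should_compact(hole_sizes_kb, memory_size_kb,
                   hole_size_threshold_kb=50, total_hole_pct_threshold=6.0):
    """Trigger compaction only when every hole is at or below the hole-size
    threshold AND the holes together exceed the given percentage of memory."""
    if not hole_sizes_kb:
        return False
    all_small = all(h <= hole_size_threshold_kb for h in hole_sizes_kb)
    hole_pct = 100.0 * sum(hole_sizes_kb) / memory_size_kb
    return all_small and hole_pct >= total_hole_pct_threshold

# 45 holes of 50 KB in a 32 MB (32768 KB) memory: every hole is small, and
# together they exceed 6% of memory, so compaction is triggered.
print(should_compact([50] * 45, 32 * 1024))   # True
```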

26 Related Problems: Fragmentation – External Fragmentation
Another possible solution to the external fragmentation problem is to permit the physical address space of a process to be noncontiguous, allowing a process to be allocated physical memory wherever it is available. One way of implementing this solution is through a paging scheme. Paging divides physical memory into many small, equal-sized frames; logical memory is broken into blocks of the same size, called pages. When a process is to be executed, its pages are loaded into any available memory frames. With a paging scheme, external fragmentation can be eliminated entirely. Paging is discussed in detail in the next lecture.

27 Related Problems: Fragmentation – Internal Fragmentation
Consider a hole of 18,464 bytes, as shown in the figure. Suppose that the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes. The overhead to keep track of this hole will be substantially larger than the hole itself. The general approach is to allocate such very small holes as part of the larger request.
Figure: a memory map showing the operating system, allocated processes, a hole of 18,464 bytes, and a next request of 18,462 bytes; the leftover 2 bytes are internal fragmentation.

28 Related Problems: Fragmentation – Internal Fragmentation
As illustrated on the previous slide, the allocated memory may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation: memory that is internal to a partition but is not being used. In other words, unused memory within an allocated block is called internal fragmentation.

29 Memory Placement Algorithms
As seen earlier, holes are created while swapping processes in and out of RAM. In general, there is at any time a set of holes of various sizes scattered throughout memory. When a process arrives and needs memory, we search the set of holes for one that is best suited to the process. The following slide describes three algorithms used to select a free hole.

30 Memory Placement Algorithms
The three placement algorithms are:
- First-fit: allocate the first hole that is big enough.
- Best-fit: allocate the smallest hole that is big enough.
- Worst-fit: allocate the largest hole.
Simulations have shown that both first-fit and best-fit are better than worst-fit in terms of decreasing both time and storage utilization. Neither first-fit nor best-fit is clearly better in terms of storage utilization, but first-fit is usually faster.
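Sketches of the three algorithms over a list of holes, each hole a (start, size) pair in KB; the helper names are mine, and a real allocator would also split the chosen hole and update the free list:

```python
def first_fit(holes, request):
    """Index of the first hole big enough for the request, else None."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest hole that is big enough, else None."""
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Index of the largest hole that is big enough, else None."""
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None

# The two holes from the earlier example, and a 200 KB request:
holes = [(1700, 300), (2300, 260)]
print(first_fit(holes, 200), best_fit(holes, 200), worst_fit(holes, 200))  # 0 1 0
```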

31 Continuous Memory Allocation Scheme
The continuous memory allocation scheme entails loading processes into memory in sequential order. When a process is removed from main memory, a new process is loaded if there is a hole big enough to hold it. This algorithm is easy to implement; however, it suffers from the drawback of external fragmentation. Compaction, consequently, becomes an inevitable part of the scheme.

32 Continuous Memory Allocation Scheme: Parameters Involved
- Memory size
- RAM access time
- Disc access time
- Compaction thresholds
  - Memory hole-size threshold
  - Total hole percentage
- Memory placement algorithms
- Round robin time slot

33 Continuous Memory Allocation Scheme: Effect of Memory Size
As anticipated, the greater the amount of memory available, the higher the system performance.

34 Continuous Memory Allocation Scheme: Effect of RAM and disc access times
RAM access time and disc access time together define the transfer rate of a system. A higher transfer rate means it takes less time to move processes between main memory and secondary memory, thus increasing the efficiency of the operating system. Since compaction involves accessing the entire RAM twice, a lower RAM access time also translates to lower compaction times.

35 Continuous Memory Allocation Scheme: Effect of Compaction Thresholds
Optimal values of the hole-size threshold largely depend on the sizes of the processes, since it is these processes that have to fit in the holes. Thresholds that lead to frequent compaction can bring down performance at an accelerating rate, since compaction is quite expensive in terms of time. Threshold values also play a key role in determining the state of fragmentation present. Their effect on system performance is not straightforward and has seldom been the focus of studies in this field.

36 Continuous Memory Allocation Scheme: Effect of Memory Placement Algorithms
Simulations have shown that both first-fit and best-fit are better than worst-fit in terms of decreasing both time and storage utilization. Neither first-fit nor best-fit is clearly better in terms of storage utilization, but first-fit is generally faster.

37 Continuous Memory Allocation Scheme: Effect of Round Robin Time Slot
As depicted in the figures on the next slide, the best choice for the time slot value corresponds to the transfer time of a typical process. For example, if most processes required 2 ms to be transferred, then a time slot of 2 ms would be ideal: while one process completes its execution slice, another can be transferred. However, the transfer times of the processes in question seldom follow a normal or uniform distribution, because there are many different types of processes in a system. In a real system the variance, as depicted in the figure, is too large, making the choice of time slot a difficult decision.

38 Continuous Memory Allocation Scheme: Effect of Round Robin Time Slot
Figure: two histograms of number of processes versus process size. In the ideal graph, most processes cluster around one size, whose transfer time suggests the time slot; in the realistic graph, process sizes are widely spread.

39 Continuous Memory Allocation Scheme: Performance Measures
- Average waiting time
- Average turnaround time
- CPU utilization
- CPU throughput
- Memory fragmentation percentage over time: this is a new performance measure that quantifies compaction cost. It is calculated as the percentage of time spent on compaction out of the total time.

40 Continuous Memory Allocation: Implementation
As part of Assignment 3, you'll implement a memory manager system within an operating system satisfying the given requirements. (For complete details, refer to Assignment 3.)
We'll see a brief explanation of the assignment in the following slides.

41 Continuous Memory Allocation: Implementation Details
Following are some specifications of the memory manager system you'll implement:
- A continuous memory allocation scheme is used.
- The PCBs are executed based on a round robin mechanism.
- The main memory size is 32 MB.
- The job sizes vary between 20 KB and 2 MB (uniform random distribution, multiples of 20 KB).
- The disc capacity is 500 MB, initially 50% full with jobs.

42 Continuous Memory Allocation: Implementation Details
- Use first-fit, best-fit, and worst-fit techniques (selectable via a variable).
- Do compaction when fragmentation is more than 6% and all holes are 50 KB or less (assume memory access time = 14 × 10^-9 seconds).
- Use a varying time slot (a variable parameter, a multiple of 1 ms).
- Disc access time = 1 ms + (job size in bytes / 500000) ms.
- Job execution time ranges between 2 ms and 10 ms (a multiple of 1 ms).
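A small sketch of the workload and timing formulas above; the function names are mine, and I treat 1 KB as 1000 bytes and 2 MB as 2000 KB so job sizes stay exact multiples of 20 KB (adjust if the assignment intends binary units):

```python
import random

def random_job_size_kb():
    """Uniformly random job size between 20 KB and 2 MB, a multiple of 20 KB."""
    return 20 * random.randint(1, 100)

def random_job_exec_time_ms():
    """Job execution time between 2 ms and 10 ms, a multiple of 1 ms."""
    return random.randint(2, 10)

def disc_access_time_ms(job_size_bytes):
    """Disc access time from the spec: 1 ms + job size (bytes) / 500000 ms."""
    return 1.0 + job_size_bytes / 500_000

size_kb = random_job_size_kb()
print(size_kb, "KB job,", disc_access_time_ms(size_kb * 1000), "ms disc access")
```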

43 Continuous Memory Allocation: Implementation Details
Once you're done with the implementation, think of the problem from an algorithmic design point of view. The implementation involves many parameters, such as:
- Memory size
- Disc access time
- Time slot for RR
- Compaction thresholds
- RAM access time
- Fitting algorithm

44 Continuous Memory Allocation: Implementation Details
The eventual goal is to optimize the several performance measures listed earlier. Perform several test runs and write a summary indicating how sensitive some of the performance measures are to some of the above parameters.

45 Continuous Memory Allocation Sample Screenshots of Simulation Setting variable parameters

46 Continuous Memory Allocation Sample Screenshots of Simulation Initial Hard Disc Configuration

47 Continuous Memory Allocation Sample Screenshots of Simulation Initial RAM Configuration

48 Continuous Memory Allocation Sample Screenshots of Simulation Memory Manager In Execution

49 Continuous Memory Allocation Sample Screenshots of Simulation Compaction Scenario

50 Continuous Memory Allocation Sample Screenshots of Simulation Final Performance Measures For The Run

51 Continuous Memory Allocation: Sample tabulated data from simulation
TABLE: Round Robin Time Quantum vs. Performance Measures. Columns: time slot (2 to 5 ms), average waiting time, average turnaround time, CPU utilization, throughput measure, and memory fragmentation percentage.

52 Continuous Memory Allocation: Sample tabulated data from simulation
TABLE: RR time slot (2 to 5 ms) vs. average turnaround time, average waiting time, CPU utilization, throughput, and fragmentation percentage, reported separately for the first-fit, best-fit, and worst-fit placement algorithms.

53 Continuous Memory Allocation Sample Graph (using data from simulation)

54 Continuous Memory Allocation: Sample Graph (comparing memory algorithms)
Comparing Memory Placement Algorithms: Average Turnaround Time (RR time slots 2 to 5)

55 Continuous Memory Allocation: Sample Graph (comparing memory algorithms)
Comparing Memory Placement Algorithms: Average Waiting Time (RR time slots 2 to 5)

56 Continuous Memory Allocation: Sample Graph (comparing memory algorithms)
Comparing Memory Placement Algorithms: CPU Utilization (RR time slots 2 to 5)

57 Continuous Memory Allocation: Sample Graph (comparing memory algorithms)
Comparing Memory Placement Algorithms: Throughput (RR time slots 2 to 5)

58 Continuous Memory Allocation: Sample Graph (comparing memory algorithms)
Comparing Memory Placement Algorithms: % Fragmentation (RR time slots 2 to 5)

59 Continuous Memory Allocation Fragmentation percentage over time

60 Continuous Memory Allocation: Conclusions from the sample simulation
The following emerged from studying the optimizing parameters:
- An optimal value of the round robin quantum could be identified.
- None of the memory placement algorithms could be termed optimal.
- Studying the fragmentation percentage over time revealed the probable time windows in which compaction was undertaken.

61 Lecture Summary
- Introduction to Memory Management
- What is memory management
- Related Problems of Redundancy, Fragmentation and Synchronization
- Memory Placement Algorithms
- Continuous Memory Allocation Scheme
- Parameters Involved
- Parameter-Performance Relationships
- Some Sample Results

62 Preview of next lecture
The following topics shall be covered in the next lecture:
- Introduction to Paging
- Paging Hardware & Page Tables
- Paging model of memory
- Page Size
- Paging versus Continuous Allocation Scheme
- Multilevel Paging
- Page Replacement & Page Anticipation Algorithms
- Parameters Involved
- Parameter-Performance Relationships
- Sample Results

