Presentation on theme: "Virtual Memory Prof. Sin-Min Lee Department of Computer Science."— Presentation transcript:

1 Virtual Memory Prof. Sin-Min Lee Department of Computer Science


4 Karnaugh Map Method of Multiplexer Implementation Consider the function shown in the map, where A is taken to be the data variable and B, C the select variables.

5 Example of a MUX combinational circuit: F(X,Y,Z) = Σm(1,2,6,7)


10 Parkinson's law: "Programs expand to fill the memory available to hold them." Idea: manage the available storage efficiently among the competing programs.

11 Before VM… Programmers tried to shrink programs to fit tiny memories. Result: small, inefficient algorithms.

12 Solution to Memory Constraints Use a secondary memory such as disk. Divide the disk into pieces that fit into memory (RAM) – this is called Virtual Memory.


14 Implementations of VM Paging – disk broken up into regular-sized pages. Segmentation – disk broken up into variable-sized segments.


25 Memory Issues Idea: separate the concepts of address space (disk) and memory locations (RAM). Example: address field = 2^16 = 65,536 memory cells; memory size = 4,096 memory cells. How can we fit the address space into main memory?

26 Paging Break memories into pages (1 page = 4096 bytes). NOTE: normally main memory has thousands of pages. New issue: how to manage addressing?

27 Address Mapping Mapping secondary-memory addresses to main-memory addresses (1 page = 4096 bytes): each virtual address is mapped to a physical address.

28 Address Mapping Mapping secondary-memory (program/virtual) addresses to main-memory (physical) addresses. The physical address is used by the hardware; the virtual address is used by the program. With 4096-byte pages, virtual addresses 0–4095 form one page and 4096–8191 the next, each mapped to some physical page.
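As a sketch of this mapping: with 4096-byte pages, the low 12 bits of a virtual address are the offset within the page and the remaining high bits are the page number. A minimal Python illustration (the function name is mine, not from the slides):

```python
PAGE_SIZE = 4096  # bytes per page, as on the slide (a power of 2)

def split_address(virtual_address):
    """Split a virtual address into (virtual page number, offset in page)."""
    page = virtual_address // PAGE_SIZE    # equivalently: virtual_address >> 12
    offset = virtual_address % PAGE_SIZE   # equivalently: virtual_address & 0xFFF
    return page, offset

# Addresses 0..4095 fall in virtual page 0, addresses 4096..8191 in page 1.
print(split_address(4095))  # (0, 4095)
print(split_address(4096))  # (1, 0)
```

Because the page size is a power of 2, the split is just a shift and a mask; no division hardware is needed.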

29 Paging Paging creates the illusion that main memory is large, contiguous, and linear – as if Size(MM) = Size(secondary memory) – and it is transparent to the programmer.

30 Paging Implementation The virtual address space (program) and the physical address space (MM) are broken up into equal pages (just like cache & MM!!). Page size is always a power of 2; common sizes range from 512 bytes to 64 KB.

31 Memory Mapping Memory Management Unit (MMU): the device that performs the virtual-to-physical mapping, e.g., translating a 32-bit virtual address into a 15-bit physical address.

32 Memory Management Unit A 32-bit virtual address is broken into two portions: a 20-bit virtual page # and a 12-bit offset in the page (since our pages are 4 KB). How does the MMU determine whether a page is in MM? A present/absent bit in the page table entry.
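The lookup described on this slide can be sketched as follows. The page-table contents here are hypothetical, invented for illustration; only the 12-bit offset and the present/absent-bit mechanics come from the slide:

```python
PAGE_SIZE = 4096   # 2**12 bytes, so the offset occupies 12 bits
OFFSET_BITS = 12

class PageFault(Exception):
    """Raised when the requested virtual page is not in main memory."""

# Hypothetical page table: virtual page number -> (present bit, physical frame).
page_table = {
    0: (1, 2),     # virtual page 0 is present, in physical frame 2
    1: (0, None),  # virtual page 1 is absent (on disk)
}

def translate(virtual_address):
    vpn = virtual_address >> OFFSET_BITS        # upper bits: virtual page #
    offset = virtual_address & (PAGE_SIZE - 1)  # lower 12 bits: offset in page
    present, frame = page_table.get(vpn, (0, None))
    if not present:
        raise PageFault(f"virtual page {vpn} not in main memory")
    return (frame << OFFSET_BITS) | offset

print(hex(translate(5)))  # virtual page 0 -> frame 2: 0x2005
```

On a page fault the OS would load the missing page from disk, update the page table entry, and retry the translation.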

33 Demand Paging Possible mapping of pages. Page fault: the requested page is not in MM. Demand paging: a page is loaded into MM when it is demanded by the program.

34 Demand Paging Possible mapping of pages. Page fault: the requested page is not in MM. Demand paging: a page is loaded into MM when it is demanded by the program. But… what should be brought in for a program on start-up?

35 Working Set The set of pages used by a process. Each process has a unique memory map – important in regard to a multi-tasked OS. At time t there is a set of the k most recently used pages, and references tend to cluster on a small number of pages. Put this set to work: store & load it during process switching.

36 Page Replacement Policy Working set: the set of pages used actively & heavily, kept in memory to reduce page faults; the set is found/maintained dynamically by the OS. Replacement: the OS tries to predict which page would have the least impact on the running program. Common replacement schemes: Least Recently Used (LRU) and First-In-First-Out (FIFO).

37 Page Replacement Policies Least Recently Used (LRU) generally works well. TROUBLE: when the working set is larger than main memory. Example: working set = 9 pages, accessed in sequence (0 → 8, repeating); with fewer frames than pages, every reference faults – THRASHING.

38 Page Replacement Policies First-In-First-Out (FIFO) removes the least recently loaded page. It does not depend on use: the victim is determined purely by how long ago the page was loaded, regardless of how often it is referenced.
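The two policies on these slides can be compared with a small simulation. This is a sketch (the function name is mine) assuming the simple textbook forms of LRU and FIFO:

```python
from collections import OrderedDict

def count_faults(refs, frames, policy):
    """Count page faults for a reference string given a number of frames.
    policy: 'LRU' evicts the least recently used page; 'FIFO' the oldest load."""
    memory = OrderedDict()  # insertion order doubles as the age/recency queue
    faults = 0
    for page in refs:
        if page in memory:
            if policy == "LRU":
                memory.move_to_end(page)  # a hit refreshes recency; FIFO ignores use
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the front of the queue
            memory[page] = True
    return faults

# The thrashing case from the LRU slide: a working set of 9 pages but only
# 8 frames, pages touched in sequence 0..8 and repeated -- every reference faults.
refs = list(range(9)) * 3
print(count_faults(refs, frames=8, policy="LRU"))   # 27: all 27 references fault
print(count_faults(refs, frames=8, policy="FIFO"))  # 27: FIFO thrashes here too
```

The cyclic-sequential pattern is a worst case for both policies; with one more frame (frames=9) the same reference string would fault only 9 times, once per page.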

39 Page Replacement Policies Upon replacement we need to know whether to write data back, so add a dirty bit: dirty bit = 0 – page is clean, no writing needed; dirty bit = 1 – page is dirty, write it back.

40 Fragmentation Generally, a process (program + data) is not an integral # of pages, so space is wasted on the last page. Example: program + data = 26000 bytes; page = 4096 bytes; result = 2672 bytes wasted on the last page. Internal fragmentation: fragments within a page. How do we solve this problem?
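The slide's arithmetic can be checked directly; this helper is illustrative, not part of the lecture:

```python
import math

PAGE_SIZE = 4096

def internal_fragmentation(size_bytes):
    """Bytes wasted on the last page when a program is rounded up to whole pages."""
    pages = math.ceil(size_bytes / PAGE_SIZE)
    return pages * PAGE_SIZE - size_bytes

# The slide's example: 26000 bytes needs 7 pages = 28672 bytes of memory.
print(internal_fragmentation(26000))  # 2672 bytes wasted
```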

41 Page Size A smaller page size allows for less wasted space. Benefits: less internal fragmentation; less thrashing. Drawbacks: a larger page table (more storage and higher computer cost); more time to load a program (more time spent at the disk); a higher miss rate.

42 Segmentation An alternative to the 1-D view of paging: create more than one address space. Example – the compilation process, with one segment for each table (symbol table, source text, constants, parse tree, stack), each running from 0 up to some large address.

43 Segmentation Example – the compilation process, one segment for each table, in a 1-D address space: the symbol table, source text, constants, parse tree, and stack must share one space; some tables grow continuously, and some grow unpredictably!!

44 Segmentation Example – the compilation process, one segment for each table, in a segmented address space: the symbol table, source text, constants, parse tree, and stack can shrink/grow independently.

45 Segmentation Virtual address = segment # + offset in segment. The programmer is aware of segments; a segment may contain a procedure, an array, or a stack – generally not a mixture of types. Each segment is a logical entity and is generally not paged.

46 Segmentation Benefits: eases linking of procedures; facilitates sharing of procedures & data; protection – since the user creates the segment, the user knows what is stored in it and can specify EXECUTE for procedure code or READ/WRITE for an array.

47 Implementation of Segmentation Two ways to implement: swapping, or paging (a little bit of both).

48 Implementation of Segmentation Swapping – like demand paging: move segments out to make room for a new one (e.g., a request for segment 7 moves out segment 1).

49 Implementation of Segmentation Swapping – like demand paging: over a period of time, wasted space accumulates between segments – external fragmentation!!!

50 Elimination of External Fragmentation Two methods: compaction, or fitting segments into existing holes.

51 Elimination of External Fragmentation Compaction – moving segments closer to address zero to eliminate the wasted space between them.

52 Elimination of External Fragmentation Fit segments into existing holes: maintain a list of addresses & hole sizes. Algorithms: BEST FIT – choose the smallest hole that fits; FIRST FIT – scan circularly & choose the first hole that fits.

53 Elimination of External Fragmentation Fit segments into existing holes – maintain a list of addresses & hole sizes. BEST FIT: choose the smallest hole that fits.

54 Elimination of External Fragmentation Fit segments into existing holes – maintain a list of addresses & hole sizes. FIRST FIT: scan circularly & choose the first hole that fits – empirically proven to give the best results.

55 Elimination of External Fragmentation Best of both worlds – hole coalescing: use best fit as your de-fragmentation algorithm, and upon removal of a segment, COALESCE ANY NEIGHBORING HOLES.
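A minimal sketch of a best-fit hole list with coalescing on free, assuming holes are tracked as (start, length) pairs; the function names and sample numbers are mine:

```python
def best_fit(holes, size):
    """holes: list of (start, length) pairs. Return the start address of the
    allocation, shrinking or removing the chosen hole; None if nothing fits."""
    candidates = [h for h in holes if h[1] >= size]
    if not candidates:
        return None
    start, length = min(candidates, key=lambda h: h[1])  # smallest fitting hole
    holes.remove((start, length))
    if length > size:
        holes.append((start + size, length - size))  # leftover sliver stays a hole
    return start

def free(holes, start, length):
    """Return a block to the hole list, coalescing with adjacent holes."""
    holes.append((start, length))
    holes.sort()
    merged = [holes[0]]
    for s, l in holes[1:]:
        prev_s, prev_l = merged[-1]
        if prev_s + prev_l == s:            # neighboring holes touch: coalesce
            merged[-1] = (prev_s, prev_l + l)
        else:
            merged.append((s, l))
    holes[:] = merged

holes = [(0, 30), (50, 15), (80, 50)]
print(best_fit(holes, 12))  # 50 -- the 15-unit hole is the smallest that fits
```

Note the trade-off the slides describe: best fit minimizes the leftover sliver per allocation, while coalescing on free keeps those slivers from accumulating into unusable fragments.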

56 Simplified Fixed Partition Memory Table

57 Original State / After Job Entry

Job list: J1 = 30K, J2 = 50K, J3 = 30K, J4 = 25K

Partition   Size   After job entry
1           100K   Job 1 (30K)
2           25K    Job 4 (25K)
3           25K    (free)
4           50K    Job 2 (50K)

Main memory use during fixed partition allocation of Table 2.1. Job 3 must wait.

58 Dynamic Partitions Available memory is kept in contiguous blocks, and jobs are given only as much memory as they request when loaded. This improves memory use over fixed partitions, but performance deteriorates as new jobs enter the system: fragments of free memory are created between blocks of allocated memory (external fragmentation).

59 Dynamic Partitioning of Main Memory & Fragmentation

60 Dynamic Partition Allocation Schemes First-fit: allocate the first partition that is big enough. –Keep free/busy lists organized by memory location (low-order to high-order). –Faster in making the allocation. Best-fit: allocate the smallest partition that is big enough. –Keep free/busy lists ordered by size (smallest to largest). –Produces the smallest leftover partition. –Makes best use of memory.
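The first-fit scheme can be sketched directly, using the block locations and sizes from this deck's worked first-fit example; the code itself is illustrative:

```python
# Memory blocks in address order: (location, size in K), free at the start.
blocks = [
    {"loc": 10240,  "size": 30, "job": None},
    {"loc": 40960,  "size": 15, "job": None},
    {"loc": 56320,  "size": 50, "job": None},
    {"loc": 107520, "size": 20, "job": None},
]

def first_fit(blocks, job, size):
    """Scan blocks in memory-location order; take the first free one big enough."""
    for b in blocks:
        if b["job"] is None and b["size"] >= size:
            b["job"] = job
            return b["loc"]
    return None  # no block fits: the job must wait

for job, size in [("J1", 10), ("J2", 20), ("J3", 30), ("J4", 10)]:
    print(job, "->", first_fit(blocks, job, size))
# J1 -> 10240, J2 -> 56320, J3 -> None (waits), J4 -> 40960
```

Best-fit would instead pick min over the fitting free blocks by size, trading a full scan of the list for a smaller leftover fragment.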

61 First-Fit Allocation Example

Job list: J1 = 10K, J2 = 20K, J3 = 30K (must wait), J4 = 10K

Memory     Memory      Job      Job             Internal
location   block size  number   size   Status   fragmentation
10240      30K         J1       10K    Busy     20K
40960      15K         J4       10K    Busy     5K
56320      50K         J2       20K    Busy     30K
107520     20K                         Free

Total Available: 115K    Total Used: 40K

62 Best-Fit Allocation Example

Job list: J1 = 10K, J2 = 20K, J3 = 30K, J4 = 10K

Memory     Memory      Job      Job             Internal
location   block size  number   size   Status   fragmentation
40960      15K         J1       10K    Busy     5K
107520     20K         J2       20K    Busy     None
10240      30K         J3       30K    Busy     None
56320      50K         J4       10K    Busy     40K

Total Available: 115K    Total Used: 70K

63 First-Fit Memory Request

64 Best-Fit Memory Request

65 Best-Fit vs. First-Fit First-fit: increases memory use; memory allocation takes less time; increases internal fragmentation; discriminates against large jobs. Best-fit: a more complex algorithm; searches the entire table before allocating memory; results in a smaller leftover "free" space (sliver).

66 Release of Memory Space: Deallocation Deallocation for fixed partitions is simple: the Memory Manager resets the status of the memory block to "free". Deallocation for dynamic partitions tries to combine free areas of memory whenever possible: Is the block adjacent to another free block? Is the block between 2 free blocks? Is the block isolated from other free blocks?

67 Case 1: Joining 2 Free Blocks

68 Case 2: Joining 3 Free Blocks

69 Case 3: Deallocating an Isolated Block

70 Relocatable Dynamic Partitions Memory Manager relocates programs to gather all empty blocks and compact them to make 1 memory block. Memory compaction (garbage collection, defragmentation) performed by OS to reclaim fragmented sections of memory space. Memory Manager optimizes use of memory & improves throughput by compacting & relocating.

71 Compaction Steps Relocate every program in memory so they’re contiguous. Adjust every address, and every reference to an address, within each program to account for program’s new location in memory. Must leave alone all other values within the program (e.g., data values).

72 Program in Memory During Compaction & Relocation The free list & busy list are updated: the free list shows the partition for the new block of free memory, and the busy list shows the new locations for all relocated jobs. The bounds register stores the highest location in memory accessible by each program. The relocation register contains the value that must be added to each address referenced in the program so it can access correct memory addresses after relocation.
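A hypothetical sketch of how the relocation and bounds registers might be modeled; the class, method names, and numbers are invented for illustration:

```python
class Job:
    """Toy model of one job's relocation state after compaction."""
    def __init__(self, base, limit):
        self.base = base       # original load address of the job
        self.limit = limit     # bounds: the job may touch base..base+limit-1
        self.relocation = 0    # relocation register: added to every address

    def compact_to(self, new_base):
        """Compaction moved the job; record the offset in the relocation register."""
        self.relocation = new_base - self.base

    def physical(self, address):
        """Translate an address the program references to its post-compaction home."""
        if not (self.base <= address < self.base + self.limit):
            raise ValueError("address outside the job's bounds")
        return address + self.relocation

job4 = Job(base=30000, limit=8000)
job4.compact_to(18000)        # compaction slid the job down by 12000
print(job4.physical(31000))   # 31000 + (-12000) = 19000
```

This mirrors the slide's point: the data values inside the program are untouched; only addresses get the relocation offset applied, and the bounds check keeps a job from referencing outside its own area.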

73 Memory Before & After Compaction (Figure 2.5)

74 Contents of relocation register & close-up of Job 4 memory area (a) before relocation & (b) after relocation and compaction (Figure 2.6)

75 More Overhead is a Problem with Compaction & Relocation The timing of compaction (when, how often) is crucial. Approaches to the timing of compaction: 1. Compact when a certain percentage of memory is busy (e.g., 75%). 2. Compact only when jobs are waiting. 3. Compact after a certain amount of time.

