
1 Memory Management

2 Memory Management Issues. Data/instructions must be resident in memory in order to be acted upon. Each process needs its own separate memory space. The OS needs an efficient way to determine the range of legal addresses.

3 Base and Limit Registers define the starting address and length of legal memory. The OS must check each memory reference to be sure it falls within the limit.

4 Addresses. Absolute; relative; logical – generated by the CPU (also called virtual); physical – seen by the memory unit.

5 Binding. Compile-time – absolute addresses. Load-time – relocatable code; final addresses not known until load time. Execution-time – allows the process to be moved during execution. Early binding allows for more error checking but is rigid; late binding is flexible but more prone to run-time problems. (Why?)

6 The MMU (Memory Management Unit) maps virtual to physical addresses. The relocation register's contents are added to each address. The user does not know where the process will actually reside during execution.
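
A minimal sketch of this translation, assuming a single relocation (base) register and a limit register per process; the register values and the translate() helper are illustrative, not from the slides:

```c
/* Hardware-style check-and-relocate: every logical address is compared against
   the limit and, if legal, relocated by adding the relocation register. */
#include <stdio.h>
#include <stdlib.h>

#define LIMIT      0x4000u    /* length of the process's legal address range (assumed) */
#define RELOCATION 0x14000u   /* where the process actually sits in physical memory    */

unsigned translate(unsigned logical) {
    if (logical >= LIMIT) {                     /* out of range: trap to the OS */
        fprintf(stderr, "trap: addressing error %#x\n", logical);
        exit(1);
    }
    return logical + RELOCATION;                /* user code never sees this value */
}

int main(void) {
    printf("logical %#x -> physical %#x\n", 0x2f4u, translate(0x2f4u));
    return 0;
}
```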

7 Dynamic Loading. Code segments are not loaded into memory until they are needed. Efficient – code that is never used is never loaded.
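
At user level, the same idea shows up as run-time loading of a shared library. A small POSIX sketch (the library name and symbol are only examples; build with -ldl on Linux):

```c
#include <dlfcn.h>    /* dlopen, dlsym, dlclose */
#include <stdio.h>

int main(void) {
    /* Nothing from libm is mapped until we explicitly ask for it. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine) printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```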

8 Swapping. Temporarily remove a process from memory and move it to a backing store (e.g., when its quantum expires). Roll out, roll in is a variation used in priority-based systems: a lower-priority process is swapped out so a higher-priority process can run. Most swap time is transfer time – the more memory in use, the longer the swap takes.

9 Concerns with Swapping. Must be sure the process being swapped out is idle (for example, not waiting on a pending I/O operation); either don't swap such a process or route the I/O through OS buffers. If the system supports relocatable addressing, a swapped-out process does not need to be swapped back into the same memory addresses. Must have enough swap space for the largest executing process.

10 When a Process Is Swapped, Where Does It Go? "Swapped out of memory" – what happens to it? –It goes to "swap space," which could be a huge unstructured file. This relies on the file system – the file manager worries about the management, but it's slow. –It could be a disk partition, with a swap-space manager instead of the file system. –It could be more than one partition or more than one disk. –Dedicated swap space (an entire disk): the fastest hardware, but the most expensive.

11 Consider: RR Scheduling. Time quantum = 500 ms. Average swap time for one process is 250 ms in each direction, so the total swap time (out plus in) is 500 ms. How much system time is spent working vs. swapping?
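
One possible reading of these numbers: each 500 ms quantum of useful work is accompanied by 500 ms of swap transfer, so if the swapping cannot be overlapped with computation only about half of the elapsed time is spent doing real work. The quantum would have to be much larger than the swap time for swapping on every quantum expiration to be worthwhile.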

12 Contiguous Memory Allocation. Partition memory (OS in low memory, the rest for user programs). Define fixed partition sizes. The degree of multiprogramming = the number of partitions. Popular with older (batch) systems but rarely seen today.

13 Dynamic Memory Allocation. Start with memory of size N. Memory is allocated to processes as needed and reclaimed when a process completes. Still contiguous allocation.

14 Allocation Strategies. First fit, best fit, worst fit. What data structures are needed? What overhead do they incur?
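
A minimal first-fit sketch over a linked list of free holes; the hole structure and function name are illustrative, not a specific implementation:

```c
#include <stddef.h>

struct hole {
    size_t       start;   /* starting address of the hole */
    size_t       size;    /* length of the hole in bytes  */
    struct hole *next;
};

/* First fit: take the first hole big enough and carve the request off its front.
   Returns the start address of the allocation, or (size_t)-1 if no hole fits. */
size_t first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= request) {
            size_t addr = h->start;
            h->start += request;   /* shrink the hole; an empty hole would normally */
            h->size  -= request;   /* be unlinked from the list                     */
            return addr;
        }
    }
    return (size_t)-1;             /* nothing large enough: external fragmentation */
}
```

Best fit differs only in scanning the whole list for the smallest adequate hole (worst fit, the largest), which is part of why first fit is generally fastest.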

15 Performance of Allocation Strategies. First fit is generally the fastest. First fit and best fit are both better than worst fit. Neither first nor best fit is significantly better than the other in terms of storage utilization.

16 Fragmentation. Internal – the process is allocated N bytes but needed less than N. External – free memory exists only as holes too small to be usable. Compaction: –Must support relocation –Pending I/O is a problem –Pure overhead. Fragmentation happens both in memory and on the backing-store device.

17 Paging. Allows a process's memory to be allocated non-contiguously. Physical memory is broken into fixed-size blocks called frames. The executable image of the process is broken into blocks of the same size called pages. With simple paging, the entire process is still resident.

18 Paging (2). Addressing is through a page number and a displacement into the page. A page table is maintained with one entry per page of the process's logical address space; each process has its own page table. Since pages and frames are all the same size, locating the start of a frame is easy (frame # * page size); to locate a specific address, add the displacement into the page.
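
A sketch of that calculation, assuming 4 KB pages and a simple one-level table (the page_table array is illustrative):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u                /* 2^12 bytes, so a 12-bit displacement */

extern uint32_t page_table[];          /* page # -> frame #, one entry per logical page */

uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;   /* equivalently, logical >> 12    */
    uint32_t offset = logical % PAGE_SIZE;   /* equivalently, logical & 0xFFF  */
    uint32_t frame  = page_table[page];
    return frame * PAGE_SIZE + offset;       /* rebuild the physical address   */
}
```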

19 Paging (3). Internal (to the page) fragmentation. –For each process, how often does this occur? –Given a 4K page frame, what is the maximum amount of fragmentation? Is it better to make the page size small or large?
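
For the 4K question: the worst case is a process that needs only one byte of its final page, wasting 4095 bytes; averaged over many processes one would expect roughly half a page (about 2 KB here) of internal fragmentation per process, which is one argument for smaller pages.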

20 Paging (4) Frame maintenance –What’s in each frame –Empty frames –How many frames are available (so arriving jobs can be appropriately handled)

21 Paging (5). Merits and disadvantages: –Increases context-switching time (address translation and loading a new process's paging state) –All address references must be looked up and translated –Internal fragmentation –Frame-table maintenance –The process PCB must include paging information –Others?

22 Paging (6). If the number of pages is small, the paging information can be kept in registers. If the number of pages is large, it is kept in memory. –Keeping it in memory is slow: each reference requires one memory access to find the table entry, plus another to fetch the actual data/instruction.

23 Translation Look-Aside Buffer (TLB). Essentially a hardware cache. Small and expensive (which is why it's small) but very fast. Look in the TLB first; go to the in-memory page table only on a miss. Various algorithms determine what is kept in the TLB; frequently used entries can also be wired down so they are never removed.
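
A worked example with illustrative numbers: assume a 20 ns TLB lookup, a 100 ns memory access, and a 90% hit ratio. The effective access time is 0.90 × (20 + 100) + 0.10 × (20 + 100 + 100) = 108 + 22 = 130 ns, compared with 200 ns if every reference had to go through an in-memory page table.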

24 Protection in Paging. In effect, each frame carries its own protection (like a base and limit register). Pages can be designated read-only. Each entry in a process's page table carries a valid/invalid bit indicating whether that page belongs to the process. Some addresses in the last frame may be invalid (due to internal fragmentation).

25 Storing the Page Table. With a large logical address space and a small page size, the page table itself can be quite large (on a 32-bit machine with a 4K page size there are 2^20 entries of roughly 32 bits each, about 4 MB per process*). That is often too large to keep contiguously in main memory. Solution: page the page table. *A 4096-byte page uses 12 bits for the displacement; the remaining 20 bits of a 32-bit address form the page number, giving 2^20 entries.

26 Two-Level Page Table. Divide the page number into its own page number and offset; the first part indexes the outer page table. This can be carried further with a three-level table. Adapted from Silberschatz, Galvin and Gagne, 2005.
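
A lookup sketch for one common 32-bit split (10-bit outer index, 10-bit inner index, 12-bit offset); the table types and names are illustrative:

```c
#include <stdint.h>

typedef uint32_t pte_t;                      /* inner-table entry: a frame number      */
extern pte_t *outer_page_table[1u << 10];    /* outer entry: pointer to an inner table */

uint32_t translate(uint32_t logical) {
    uint32_t p1     = (logical >> 22) & 0x3FFu;  /* index into the outer page table    */
    uint32_t p2     = (logical >> 12) & 0x3FFu;  /* index into the chosen inner table  */
    uint32_t offset =  logical        & 0xFFFu;  /* displacement within the page       */

    pte_t *inner   = outer_page_table[p1];
    uint32_t frame = inner[p2];
    return (frame << 12) | offset;
}
```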

27 Hashed Page Tables. Another common solution is to hash the page table. Hashing works as you learned it in CS2/CS3: the key (the page number) is run through a hash function to find the entry. Chaining is used to resolve collisions. Faster than a sequential search.
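
A hashed-lookup sketch with chaining; the bucket count, hash function, and entry layout are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define BUCKETS 1024u

struct hpt_entry {
    uint32_t          vpage;   /* virtual page number (the key)    */
    uint32_t          frame;   /* physical frame holding that page */
    struct hpt_entry *next;    /* chain used to resolve collisions */
};

extern struct hpt_entry *hash_table[BUCKETS];

/* Returns the frame for vpage, or -1 if the page has no entry. */
long lookup(uint32_t vpage) {
    for (struct hpt_entry *e = hash_table[vpage % BUCKETS]; e != NULL; e = e->next)
        if (e->vpage == vpage)
            return (long) e->frame;
    return -1;
}
```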

28 Inverted Page Table. One entry for each physical frame of memory; each entry records the process and the virtual page number stored in that frame. Reduces the need for each process to store its own large page table. Because of the ordering, lookups may take longer, so the table can itself be hashed.

29 Page Table (left) and Inverted Page Table (right)

30 Segmentation Compiler-generated segments of a program –Code –Globals – Library Functions –…

31 Segmentation (2). With paging, the division into pages is done by the hardware and the user has no knowledge of it. In segmentation, the address space consists of several logical segments. Each segment has a name and a length; references are made through a segment and an offset into that segment.
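
The translation parallels the earlier base/limit check, but with one base/limit pair per segment. A sketch with an illustrative segment-table layout:

```c
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base; unsigned limit; };   /* one pair per logical segment */

extern struct segment seg_table[];

unsigned translate(unsigned seg, unsigned offset) {
    if (offset >= seg_table[seg].limit) {     /* offset must fall inside the segment */
        fprintf(stderr, "trap: segment %u, offset %u out of range\n", seg, offset);
        exit(1);
    }
    return seg_table[seg].base + offset;
}
```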

32 Segmentation (3). Allows easy sharing of code segments while keeping data segments private (not shared).

33 Segmentation Example

34 Virtual Memory. Only part of the program needs to be resident during execution. The logical (virtual) address space can be much larger than the physical address space.

35 VM. Programs are no longer constrained by the size of physical memory. More programs can be executing at the same time, since not all of them need to be fully resident. Pages are brought in only when something is actually needed.

36 Demand Paging. Pages are loaded only as needed. Lazy swapper – only brings in what's needed. The book points out that "swapper" is technically incorrect here (since the whole process isn't swapped, only individual pages), p. 319.

37 Page Table. A page table is maintained for each executing process. It tracks all pages – where each is located in memory, or an invalid bit if it is not currently resident. When an address is referenced, the page table (reached through the PCB) is checked: if the page is resident, access proceeds; if not, a page fault is issued.

38 Page Fault. Find a free frame in memory. Schedule a disk read to bring in the page. Move the page into the selected frame. Update the page table in the PCB. Continue the interrupted instruction. It sounds so easy!
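
A sketch of that service path. Every type and helper below is a hypothetical placeholder standing in for real kernel machinery, not an actual API:

```c
struct pte     { unsigned frame; unsigned valid; };
struct process { struct pte *page_table; };

extern int  find_free_frame(void);       /* returns a free frame #, or -1            */
extern int  evict_some_page(void);       /* replacement policy (see the next slide)  */
extern void read_page_from_store(struct process *p, unsigned vpage, int frame);
extern void restart_instruction(struct process *p);

void handle_page_fault(struct process *p, unsigned vpage) {
    int frame = find_free_frame();             /* 1. find a free frame in memory      */
    if (frame < 0)
        frame = evict_some_page();             /*    ...or make one (not so easy)     */

    read_page_from_store(p, vpage, frame);     /* 2-3. schedule the disk read and     */
                                               /*      move the page into the frame   */
    p->page_table[vpage].frame = frame;        /* 4. update the page table in the PCB */
    p->page_table[vpage].valid = 1;

    restart_instruction(p);                    /* 5. continue the interrupted instr.  */
}
```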

39 Problems (it's not that easy). There may be no free frames currently available – a replacement policy is needed. The frame you select may contain data that hasn't been written back to disk. The frame you select may be waiting for an I/O operation to complete.

40 Replacement Strategies. Look at access counts. Look at time since last use. Reset usage flags at certain intervals – this prevents a page from being spared only because it was heavily used early in the run but not recently.

41 Replacement Strategies (2). Locality of reference – references tend to cluster. Global vs. local replacement – select a victim from all resident pages, or only from those belonging to the faulting process.

42 Replacement Algorithms. FIFO. LRU – evict the page that hasn't been used for the longest time (based on time). LFU – evict the page that hasn't been used much (based on access counts).

43 Replacement Algorithms (2). Second chance – bypass a page once, then evict it if it is selected a second time. MRU – evict the page that was just used (assumes the page has been used and is done with). Recent-activity counts (useful if pages tend to be used heavily at first and only infrequently thereafter).
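
A sketch of the second-chance (clock) variant, sweeping a circular set of frames; the structures are illustrative:

```c
#define NFRAMES 64

struct frame { int page; int referenced; };   /* referenced = hardware reference bit */

static struct frame frames[NFRAMES];
static int hand = 0;                          /* the "clock hand" */

/* Returns the index of the frame to evict. */
int choose_victim(void) {
    for (;;) {
        if (frames[hand].referenced) {
            frames[hand].referenced = 0;      /* first pass: give it a second chance  */
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = hand;                /* not referenced since its last chance */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
    }
}
```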

44 Belady's Anomaly. A reference string causes X page faults with N frames; increasing the number of frames can increase the number of page faults.
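
The classic FIFO illustration: the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 produces 9 page faults with 3 frames but 10 page faults with 4 frames.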

45 Dirty Bit. A page is selected for removal but contains data that has not been written back to disk. Replacing it must now include writing the page's contents out. We don't want to write out every page being replaced, so the dirty bit records whether a page has been modified.

46 Thrashing. We expect lots of page faults at the beginning of execution – this isn't thrashing. How many frames should be allocated to each process? How many frames can be supported, and of what size? How do you prevent a thrashing process from causing other processes to thrash? Don't allow a thrashing process to steal frames from another.

47 The Working Set. Based on locality of reference: the set of pages most recently used, and expected to be used again in the near future. The OS monitors each process and allocates enough frames to hold its working set. If a process starts to thrash, suspend it entirely (swap it out) and restart it later.

48 How many frames? Allocate too many frames: –Few page faults –Unwise allocation of resources Allocate too few frames: –High number of page faults; could lead to thrashing –Low utilization – too many context switches

49 Prepaging. Predict when pages will be needed. Easy to do at startup, where frequent faults are common. Is the prepaging overhead > the fault overhead?

50 Page Size Tradeoffs. Smaller pages: larger page table; shorter transfer time into memory; shorter write time for dirty pages; less internal fragmentation. Larger pages: smaller page table; longer transfer time into memory; longer write time for dirty pages; more internal fragmentation.

51 Programmers and Page Faults. Think about locality of reference: array access order, stacks (good, since references are always near the top), hash tables (scattered references). (Compilers) Separating code and data helps – code can be read-only, so it produces no dirty pages.
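
A small C illustration of array-access order: both loops touch every element of the same array, but the row-major loop walks memory sequentially while the column-major loop jumps a whole row (8 KB, about two 4 KB pages here) between consecutive references, so it can touch a different page on nearly every access. Sizes are illustrative:

```c
#define N 1024
static double a[N][N];          /* C stores this row by row */

void row_major(void) {          /* page-friendly: strong locality of reference */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 0.0;
}

void column_major(void) {       /* page-hostile: scattered references */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = 0.0;
}
```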

52 Programmers and Page Faults (2) Pointers Dynamic Memory Allocation

53 Page Fault on a Page Waiting for I/O. Lock (pin) the page in memory – don't allow it to be replaced until the I/O completes. Or require that all I/O go through system memory (kernel buffers).

