
UNIT–IV: Memory Management


1 UNIT–IV: Memory Management
Logical & Physical Address Space
Swapping
Memory Management Techniques
  Contiguous Memory Allocation
  Non–Contiguous Memory Allocation
    Paging
    Segmentation
    Segmentation with Paging
Virtual Memory Management
  Demand Paging
  Page Replacement Algorithms
  Demand Segmentation
  Thrashing
Case Studies
  Linux
  Windows
Exam Questions

2 Logical & Physical Address Space
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
Logical address – generated by the CPU; also referred to as a virtual address.
Physical address – the address seen by the memory unit.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

3 Swapping Schematic View

4 Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
Roll out, roll in – a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
The system maintains a ready queue of ready-to-run processes which have memory images on disk.
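Since the slide notes that total transfer time is directly proportional to the amount of memory swapped, a back-of-envelope estimate can be sketched as follows. The disk rate and latency figures are made-up illustration values, not from the slides:

```python
# Back-of-envelope swap cost (disk numbers are made up, not from the
# slides): transfer time dominates and scales with the image size.
def swap_transfer_time(image_bytes, disk_rate, latency=0.008):
    """Seconds to roll one memory image out (or in)."""
    return latency + image_bytes / disk_rate

# A 100 MB process image over a 50 MB/s backing store:
print(f"{swap_transfer_time(100 * 2**20, 50 * 2**20):.3f} s")  # 2.008 s
```

Doubling the image size roughly doubles the swap time, which is why swapping whole processes is expensive compared with paging in only what is needed.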

5 Memory Management Techniques
Single Contiguous Memory Management
Partitioned Memory Management
Relocation Partitioned Memory Management
Paged Memory Management
Segmented Memory Management
Demand–Paged Memory Management
Page Replacement Algorithms
Overlay Memory Management

6 Single Contiguous Memory Management
Simplicity
Available memory is not fully utilised
Limited job size (job must be smaller than available memory)

7 Partitioned Memory Management

8

9 Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes:
First-fit: allocate the first hole that is big enough.
Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
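The three placement strategies above can be sketched as a small simulation (illustrative code, not from the slides; a plain list of hole sizes stands in for a real free list):

```python
# Sketch of the three placement strategies (hole sizes in KB, memory order).
def allocate(holes, request, strategy):
    """Pick a hole for `request` KB; returns the hole's index or None.
    The chosen hole shrinks by `request`, modelling the leftover hole."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        i = min(c[1] for c in candidates)        # lowest address that fits
    elif strategy == "best":
        i = min(candidates)[1]                   # smallest adequate hole
    else:                                        # "worst"
        i = max(candidates)[1]                   # largest hole
    holes[i] -= request
    return i

holes = [10, 4, 20, 18, 7, 9, 12, 15]            # slide 10's memory order
picks = [allocate(holes, r, "first") for r in (12, 10, 9)]
print(picks)  # [2, 0, 3]: the 20KB, 10KB and 18KB holes
```

Re-running with `"best"` or `"worst"` on a fresh hole list reproduces the other two answers to the exercise on slide 10.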

10 First Fit / Best Fit / Worst Fit Example
1. Consider a swapping system in which memory consists of the following hole sizes in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB, and 15KB. Which hole is taken for successive segment requests of 12KB, 10KB, and 9KB for first fit? Now repeat the question for best fit and worst fit.
First fit: the 12KB request (Job 1) takes the 20KB hole (8KB left over), the 10KB request (Job 2) takes the 10KB hole exactly, and the 9KB request (Job 3) takes the 18KB hole (9KB left over).
Best fit: Job 1, Job 2, and Job 3 take the 12KB, 10KB, and 9KB holes respectively, each an exact fit.
Worst fit: Job 1 takes the 20KB hole (8KB left over), Job 2 takes the 18KB hole (8KB left over), and Job 3 takes the 15KB hole (6KB left over).

11 2. Given memory partitions of 12KB, 7KB, 15KB, 20KB, 9KB, 4KB, 10KB, and 18KB (in order), how would each of the first-fit and best-fit algorithms place processes of 10KB, 12KB, 6KB, and 9KB (in order)?

12 Relocation Partitioned Memory Management
Compaction (also called burping, recompaction, or reburping): periodically combine all free areas between partitions into one contiguous area by moving the contents of all allocated partitions so that they become one contiguous block.

13 Fragmentation
External fragmentation – total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
Reduce external fragmentation by compaction: shuffle memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic, and it is done at execution time.
I/O problem: latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.
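Compaction as described above can be sketched as sliding every allocated partition toward address 0 so the free areas merge into one hole. The partition names and sizes here are our own example, not from the slides:

```python
# Compaction sketch: relocate allocated partitions downward so the
# free areas coalesce into a single contiguous hole at the top.
def compact(partitions):
    """partitions: list of (name, size); name is None for a free hole.
    Returns the relocated layout and the size of the single merged hole."""
    base, layout, free = 0, [], 0
    for name, size in partitions:
        if name is None:
            free += size                       # hole: absorb into the big one
        else:
            layout.append((name, base, size))  # new base after relocation
            base += size
    return layout, free

mem = [("A", 30), (None, 10), ("B", 20), (None, 25), ("C", 15)]
print(compact(mem))  # ([('A', 0, 30), ('B', 30, 20), ('C', 50, 15)], 35)
```

Note that each partition's new base differs from its old one, which is why compaction requires dynamic relocation.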

14 Relocation Partitioned Memory Management

15 Paging Physical address space of a process can be noncontiguous.
A process is allocated physical memory whenever the latter is available.
Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8,192 bytes).
Divide logical memory into blocks of the same size called pages.
Keep track of all free frames.
To run a program of size n pages, find n free frames and load the program.
Set up a page table to translate logical to physical addresses.
Internal fragmentation can still occur (in the last frame of a process).

16 Address Translation Scheme
The address generated by the CPU is divided into:
Page number (p) – used as an index into a page table which contains the base address of each page in physical memory.
Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.
For a given logical address space of size 2^m and page size 2^n, the page number occupies the high-order m − n bits and the page offset the low-order n bits.
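The p/d split described above is plain bit manipulation. A minimal sketch, assuming m = 16 and n = 10 (our own example values: a 64 KB logical address space with 1 KB pages):

```python
# Bit-level page-number/offset split, assuming n = 10 offset bits
# (page size = 2**10 = 1024 bytes); example values, not from the slides.
N = 10

def split(addr):
    p = addr >> N                     # high-order m - n bits: page number
    d = addr & ((1 << N) - 1)         # low-order n bits: page offset
    return p, d

def translate(addr, page_table):
    p, d = split(addr)
    frame = page_table[p]             # frame holding page p
    return (frame << N) | d           # physical address

print(split(3 * 1024 + 5))            # (3, 5): page 3, offset 5
```

The dictionary `page_table` is a stand-in for the hardware page table; a real MMU does the same shift-and-mask in hardware.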

17 Paging Hardware

18 Paging Model of Logical and Physical Memory

19 Paging Example: n = 2 and m = 4; 32-byte memory and 4-byte pages

20 Paging (Cont.) Calculating internal fragmentation
Page size = 2,048 bytes
Process size = 72,766 bytes = 35 pages + 1,086 bytes
Internal fragmentation of 2,048 − 1,086 = 962 bytes
Worst-case fragmentation = 1 frame − 1 byte
Average fragmentation = 1/2 frame size
So are small frame sizes desirable? But each page table entry takes memory to track, and page sizes have been growing over time.
Solaris supports two page sizes – 8 KB and 4 MB.
The process view and physical memory are now very different; by implementation a process can only access its own memory.
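The slide's arithmetic can be checked directly:

```python
import math

# Re-deriving the slide's numbers: 2,048-byte pages, 72,766-byte process.
page = 2048
proc = 72766
pages_needed = math.ceil(proc / page)   # 36 frames: 35 full pages + 1 partial
used_in_last = proc % page              # 1,086 bytes used in the last frame
internal_frag = page - used_in_last     # 2,048 - 1,086 = 962 wasted bytes
print(pages_needed, internal_frag)      # 36 962
```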

21 Segmentation
A program is a collection of segments. A segment is a logical unit such as:
main program, procedure, function, method, object, local/global variables, common block, stack, symbol table, arrays.

22 User’s View of a Program

23 Logical View of Segmentation
[Figure: segments 1–4 of the user space mapped to noncontiguous blocks of physical memory space]

24 Segmentation Architecture
A logical address consists of a two-tuple: <segment-number, offset>.
Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
The segment-table base register (STBR) points to the segment table's location in memory.
The segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR.
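The base/limit lookup described above can be sketched as follows; the segment-table contents here are made-up illustration values, not from the slides:

```python
# Segment-table lookup sketch; (base, limit) entries are invented
# for illustration.
SEG_TABLE = [(1400, 1000), (6300, 400), (4300, 1100)]
STLR = len(SEG_TABLE)                     # number of segments in use

def seg_translate(s, offset):
    if s >= STLR:                         # s must satisfy s < STLR
        raise MemoryError("invalid segment number")
    base, limit = SEG_TABLE[s]
    if offset >= limit:                   # offset past the segment's end
        raise MemoryError("segment overflow (trap to the OS)")
    return base + offset                  # physical address

print(seg_translate(1, 53))               # 6353
```

Both checks model hardware traps: an out-of-range segment number or an offset beyond the limit raises instead of producing an address.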

25 Segmentation Architecture (Cont.)
Protection: with each entry in the segment table associate:
a validation bit (validation bit = 0 ⇒ illegal segment)
read/write/execute privileges
Protection bits are associated with segments; code sharing occurs at the segment level.
Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
A segmentation example is shown in the following diagram.

26 Segmentation Hardware

27 Example of Segmentation

28 Virtual Memory – Background
Separation of user logical memory from physical memory:
Only part of the program needs to be in memory for execution.
The logical address space can therefore be much larger than the physical address space.
Allows address spaces to be shared by several processes.
Allows for more efficient process creation.
Virtual memory can be implemented via:
Demand paging
Demand segmentation

29 Virtual Memory That is Larger Than Physical Memory

30 Demand Paging Bring a page into memory only when it is needed
Less I/O needed
Less memory needed
Faster response
More users
Lazy swapper – never swaps a page into memory unless the page will be needed. A swapper that deals with pages is a pager.

31 Demand Paging

32 Page Replacement

33 Page Replacement Algorithms
Want the lowest page-fault rate.
Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.

34 Page Replacement Algorithms
First–In First–Out (FIFO)
Optimal
Least Recently Used (LRU)
Second Chance (Clock)

35 FIFO Page Replacement Algorithm
Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

36 First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 3 frames (3 pages can be in memory at a time per process): 9 page faults.
With 4 frames: 10 page faults.
Belady's Anomaly: more frames ⇒ more page faults.
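A short FIFO simulation (illustrative code, not from the slides) reproduces Belady's anomaly on this reference string:

```python
from collections import deque

# FIFO page replacement: evict the page that has been resident longest.
def fifo_faults(refs, nframes):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(order.popleft())   # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10: Belady's anomaly
```

Adding a fourth frame makes this particular string fault more often, exactly the anomaly named on the slide.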

37 Optimal Algorithm
Replace the page that will not be used for the longest period of time.
Example: reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 frames: 6 page faults.
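The optimal rule above can be simulated by looking ahead in the reference string (a sketch, not from the slides; the algorithm is unrealizable in practice since it needs future knowledge, so it serves as a lower bound for comparing real algorithms):

```python
# Optimal (Belady) replacement: evict the page whose next use is
# farthest in the future, or that is never used again.
def optimal_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        def next_use(p):
            rest = refs[i + 1:]
            return rest.index(p) if p in rest else float("inf")
        victim = max(frames, key=next_use)    # used farthest away (or never)
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 6
```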

38 Optimal Page Replacement
Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

39 Least Recently Used (LRU) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Counter implementation: every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter. When a page needs to be replaced, look at the counters to determine which page was least recently used.

40 LRU Page Replacement
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, move it to the top; this requires 6 pointers to be changed. No search is needed for replacement: the victim is always at the bottom of the stack.
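The stack implementation above can be sketched with a Python list standing in for the doubly linked list (illustrative code, not from the slides):

```python
# LRU via the stack idea: most recently used page on top, victim at
# the bottom.
def lru_faults(refs, nframes):
    stack, faults = [], 0          # stack[-1] is the most recently used page
    for page in refs:
        if page in stack:
            stack.remove(page)     # referenced: move it to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.pop(0)       # bottom of the stack is the LRU victim
        stack.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))         # 12 page faults with 3 frames
```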

41 Second Chance (Clock) Algorithms
Reference bit: with each page associate a bit, initially 0; when the page is referenced, the bit is set to 1. Replace a page whose bit is 0 (if one exists); we do not know the order, however.
Second chance: needs the reference bit; uses clock replacement. If the page to be replaced (in clock order) has reference bit = 1, then: set the reference bit to 0, leave the page in memory, and consider the next page (in clock order), subject to the same rules.
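The clock rule above can be sketched as follows (our own minimal simulation, not from the slides):

```python
# Second-chance (clock): pages sit in a circular list with a reference
# bit; the hand clears set bits and evicts the first page whose bit is 0.
def second_chance_faults(refs, nframes):
    frames = []                        # entries are [page, ref_bit]
    hand, faults = 0, 0
    for page in refs:
        hit = next((e for e in frames if e[0] == page), None)
        if hit:
            hit[1] = 1                 # referenced again: set the bit
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append([page, 1])
            continue
        while frames[hand][1] == 1:    # second chance: clear bit, move on
            frames[hand][1] = 0
            hand = (hand + 1) % nframes
        frames[hand] = [page, 1]       # evict the first page with bit 0
        hand = (hand + 1) % nframes
    return faults
```

When every resident page's bit is set, the hand sweeps the whole circle clearing bits and the algorithm degenerates to plain FIFO.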

42 Second-Chance (clock) Page-Replacement Algorithm

43 Overlay Memory Management

44 Thrashing
If a process does not have “enough” pages, the page-fault rate is very high. This leads to:
low CPU utilization
the operating system thinking that it needs to increase the degree of multiprogramming
another process being added to the system
Thrashing ≡ a process is busy swapping pages in and out.

45 Windows
Uses demand paging with clustering; clustering brings in the pages surrounding the faulting page.
Processes are assigned a working set minimum and a working set maximum.
The working set minimum is the minimum number of pages the process is guaranteed to have in memory.
A process may be assigned pages up to its working set maximum.
When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory.
Working set trimming removes pages from processes that have pages in excess of their working set minimum.

46 Linux
Linux address translation: Linux uses paging to translate virtual addresses to physical addresses; it does not use segmentation.
Advantages: more portable, since some RISC architectures don't support segmentation, and hierarchical paging is flexible enough.
Intel x86 processors have segments, but Linux tries to avoid using segmentation: memory management is simpler when all processes use the same segment register values, and using segment registers is not portable to other processors.
Linux paging: 4 KB page size; a three-level page table handles 64-bit addresses. On x86 processors only a two-level page table is actually used; paging is supported in hardware, and a TLB is provided as well.

47 Exam Questions
Explain briefly about free space management.
Explain the paging concept. What is paging? Why is paging used? Explain the paging technique with an example.
What is fragmentation? Describe the different types of fragmentation. Explain internal fragmentation.
Explain briefly about: a. fragmentation b. swapping c. thrashing
Write short notes on segmentation.
What is virtual memory?

48 Exam Questions
Briefly explain the implementation of virtual memory.
Discuss the issues when a page fault occurs.
What is the need for page replacement algorithms? Explain any two page replacement algorithms.
Write short notes on page replacement algorithms and their types.
Explain the following page replacement algorithms: a. LRU replacement b. FIFO replacement c. Optimal replacement d. Second chance replacement

