Memory Management.

1 Memory Management

2 Multistep Processing of a User Program

3 Swapping
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
- Backing store: a fast disk large enough to accommodate copies of all memory images for all users.
- Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.

4 Contiguous Allocation (Cont.)
Multiple-partition allocation:
- Hole: a block of available memory; holes of various sizes are scattered throughout memory.
- When a process arrives, it is allocated memory from a hole large enough to accommodate it.
- The operating system maintains information about (a) allocated partitions and (b) free partitions (holes).
[Figure: four snapshots of memory with the OS at the bottom; as processes 8, 9, and 10 arrive and depart, holes open and close around processes 2 and 5.]

5 Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes:
- First-fit: allocate the first hole that is big enough.
- Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
- Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
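The three placement strategies can be sketched in Python; the hole sizes and the 212 KB request below are illustrative, not from the slides:

```python
def first_fit(holes, n):
    """Index of the first hole big enough for a request of size n."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that still fits; must scan the whole list."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole; also scans the whole list."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]    # free-hole sizes in KB (illustrative)
print(first_fit(holes, 212))   # 1: the 500 KB hole is the first that fits
print(best_fit(holes, 212))    # 3: the 300 KB hole leaves the smallest remainder
print(worst_fit(holes, 212))   # 4: the 600 KB hole leaves the largest remainder
```

Note that best-fit and worst-fit pay for a full scan of the free list on every request, which is why first-fit is often preferred in practice.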

6 Fragmentation Issues
- External fragmentation: enough total memory space exists to satisfy a request, but it is not contiguous.
- Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
- External fragmentation can be reduced by compaction: shuffle memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic and is done at execution time.
- I/O problem: either latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.

7 Paging
- Divide physical memory into fixed-sized blocks called frames, and keep track of all free frames.
- Divide logical memory into blocks of the same size called pages.
- To run a program of size n pages, find n free frames and set up a page table to translate logical addresses to physical addresses.
- Paging removes/reduces external fragmentation, but internal fragmentation still exists.
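A minimal Python sketch of the translation the page table performs; the page size and page-table contents below are assumptions for illustration:

```python
PAGE_SIZE = 4096  # assumed 4 KB pages, for illustration

# Hypothetical page table: logical page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr, table, page_size=PAGE_SIZE):
    """Split a logical address into (page, offset) and map the page to its frame."""
    page, offset = divmod(logical_addr, page_size)
    frame = table[page]               # KeyError here models an invalid reference
    return frame * page_size + offset

print(translate(4100, page_table))   # page 1, offset 4 -> frame 2 -> 8196
```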

8 Address translation

9 Free Frames
[Figure: free-frame list before allocation and after allocation.]

10 Implementation of Page Table
- The page table is kept in main memory.
- The page-table base register (PTBR) points to the page table.
- The page-table length register (PTLR) indicates the size of the page table.
- In this scheme every data/instruction access requires two memory accesses: one for the page-table entry and one for the data/instruction itself.
- The two-memory-access problem can be solved by a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

11 Paging Hardware With TLB

12 Performance Characteristics of TLB
Typical TLB size: 8 - 4,096 entries
Hit time: clock cycle
Miss penalty: clock cycles
Miss rate: %
Example: if a TLB hit takes 1 clock cycle, a miss costs an additional 30 clock cycles, and the miss rate is 1%, then the effective cost of page mapping is 0.99 × 1 + 0.01 × (1 + 30) = 1.30 clock cycles per memory access.
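The effective-cycle calculation above can be checked with a small Python helper; the 1-cycle hit, 30-cycle miss penalty, and 1% miss rate are the slide's example values:

```python
def effective_cycles(hit_time, miss_penalty, miss_rate):
    """Weighted cost: hits pay hit_time, misses pay hit_time + miss_penalty."""
    return (1 - miss_rate) * hit_time + miss_rate * (hit_time + miss_penalty)

print(round(effective_cycles(1, 30, 0.01), 2))  # 1.3 cycles per access
```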

13 Effective Access Time
- Associative lookup = ε time units.
- Assume the memory cycle time is 1 time unit (e.g., in ns).
- Hit ratio (α): percentage of times that a page number is found in the associative registers; the ratio is related to the number of associative registers.
- Effective Access Time (EAT):
  EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α

14 Shared Pages
Shared code:
- One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems).
- Shared code must appear in the same location in the logical address space of all processes.
Private code and data:
- Each process keeps a separate copy of the code and data.
- The pages for the private code and data can appear anywhere in the logical address space.

15 Shared Pages Example

16 Hierarchical Page Tables
Motivating example: a 32-bit address space with 4 KB per page. The page table would contain 2^32 / 2^12 = 2^20, about 1 million, entries; at 4 bytes per entry, that is a 4 MB page table requiring contiguous space.
Can we divide this page table into smaller pieces? Break up the logical address space into multiple page tables. A simple technique is a two-level page table.
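The size arithmetic above can be reproduced directly; the 32-bit, 4 KB, 4-byte parameters are the slide's own:

```python
def page_table_stats(address_bits, page_size, entry_size):
    """Number of page-table entries and total table size in bytes."""
    entries = 2 ** address_bits // page_size
    return entries, entries * entry_size

entries, table_bytes = page_table_stats(32, 4 * 1024, 4)
print(entries)       # 1048576: 2^20, about 1 million entries
print(table_bytes)   # 4194304: a 4 MB table that must be contiguous
```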

17 Two-Level Page-Table Scheme

18 Two-Level Paging Example
A logical address (on a 32-bit machine with a 4 KB page size) is divided into:
- a page number consisting of 20 bits
- a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
- a 10-bit outer page number (p1)
- a 10-bit inner page number (p2)
Thus, a logical address is as follows, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table:

  page number        page offset
  p1 (10 bits) | p2 (10 bits) | d (12 bits)
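The 10/10/12 split above amounts to a few shifts and masks; a Python sketch (the sample address is illustrative):

```python
def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d): 10 | 10 | 12 bits."""
    d = addr & 0xFFF             # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into an inner page table
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_address(0x00403004))  # -> (1, 3, 4)
```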

19 Address-Translation Scheme

20 Linear Address in Linux
Broken into four parts:

21 Three-level Paging in Linux

22 Segmentation
- A memory-management scheme that supports the user's view of memory.
- A program is a collection of segments. A segment is a logical unit such as: a main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, or arrays.

23 User’s View of a Program

24 Logical View of Segmentation
[Figure: segments 1-4 in user space mapped to non-contiguous regions of physical memory space.]

25 Segmentation Architecture
- A logical address consists of a two-tuple: <segment-number, offset>.
- Segment table: maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
  - base: the starting physical address where the segment resides in memory
  - limit: the length of the segment
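A minimal Python sketch of the base/limit translation described above; the segment-table values are illustrative, not from the slides:

```python
# Hypothetical segment table: entry s -> (base, limit)
seg_table = [(1400, 1000), (6300, 400), (4300, 400)]

def seg_translate(s, offset):
    """Map a <segment-number, offset> pair to a physical address."""
    base, limit = seg_table[s]
    if offset >= limit:                 # offset must fall within the segment
        raise MemoryError("addressing error: offset beyond segment limit")
    return base + offset

print(seg_translate(2, 53))   # segment 2 starts at 4300 -> 4353
```

An out-of-range reference such as seg_translate(1, 500) raises the addressing error, which is the check the segmentation hardware performs on every access.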

26 Segmentation Hardware

27 Example of Segmentation
