
1 Chapter 8 Memory Management Dr. Yingwu Zhu

2 Outline Background Basic Concepts Memory Allocation

3 Background Program must be brought into memory and placed within a process for it to be run User programs go through several steps before being run An assumption for this discussion: ◦ Physical memory is large enough to hold a process of any size (VM is discussed in Ch. 9)

4 Multi-step processing of a user program

5 Outline Background Basic Concepts ◦ Logical address vs. physical address ◦ MMU Memory Allocation

6 Logical vs. Physical Address Space Logical address (virtual address) ◦ Generated by the CPU, always starting from 0 Physical address ◦ Address seen/required by the memory unit Binding a logical address space to a physical address space is central to proper memory management

7 Binding logical address space to physical address space Binding of instructions & data to memory addresses can happen at three different stages ◦ Compile time: if the memory location is known a priori, absolute code can be generated; must recompile the code if the starting location changes ◦ Load time: must generate relocatable code if the memory location is not known at compile time ◦ Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; needs hardware support for address maps (e.g., relocation and limit registers) ◦ Logical address = physical address for compile-time and load-time binding; logical address != physical address for execution-time binding

8 Memory Management Unit (MMU) Hardware device ◦ logical address → physical address (mapping) Simplest MMU scheme: relocation register The user program deals with logical addresses; it never sees the real physical addresses

9 Outline Background Basic Concepts Memory Allocation

10 Memory Allocation, How? In the context of ◦ Multiprocessing, competing for resources ◦ Memory is a scarce resource shared by processes Questions: ◦ #1: How to allocate memory to processes? ◦ #2: What considerations need to be taken in memory allocation? ◦ #3: How to manage free space?

11 Memory Allocation Contiguous allocation Non-contiguous allocation: Paging

12 Contiguous Allocation Fact: memory is usually split into 2 partitions ◦ Low end for OS (e.g., interrupt vector) ◦ High end for user processes → where allocation happens

13 Contiguous Allocation Definition: each process is placed in a single contiguous section of memory ◦ Single partition allocation ◦ Multiple-partition allocation

14 Contiguous Allocation Single partition allocation, needs hardware support ◦ Relocation register: the base physical address ◦ Limit register: the range of logical address

15 Contiguous Allocation Base register + limit register define a logical address space in memory!

16 Contiguous Allocation Single partition allocation (high-end partition), needs hardware support ◦ Relocation register: the base physical address ◦ Limit register: the range of logical address ◦ Protect user processes from each other, and from changing OS code & data

17 Protection by Base + Limit Registers [Figure: the logical address generated by the CPU is compared with the limit register; if it is within range, the relocation register is added to form the physical address sent to memory; otherwise the hardware traps to the OS with an addressing error.]
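
The check in this figure is simple enough to sketch in C. A minimal sketch of what the hardware does on every reference, assuming illustrative register values and a hypothetical translate() helper (a real MMU traps to the OS rather than exiting):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative values; the OS loads these registers on every context switch. */
static uint32_t limit_register      = 0x4000;   /* size of the process's logical space */
static uint32_t relocation_register = 0x14000;  /* base physical address of its partition */

/* What the MMU does for every memory reference. */
uint32_t translate(uint32_t logical)
{
    if (logical >= limit_register) {
        fprintf(stderr, "trap: addressing error (logical 0x%x)\n", logical);
        exit(EXIT_FAILURE);                      /* in hardware: trap to the OS */
    }
    return logical + relocation_register;        /* physical address sent to memory */
}

int main(void)
{
    printf("0x%x -> 0x%x\n", 0x100u, translate(0x100u)); /* within limit: relocated */
    translate(0x5000u);                                   /* beyond limit: traps */
    return 0;
}
```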

18 Contiguous Allocation Multiple-partition allocation ◦ Divide memory into multiple fixed-sized partitions ◦ Hole: block of free/available memory  Holes of various sizes are scattered throughout memory ◦ When a process arrives, it is allocated from a hole large enough to hold it  May create a new hole ◦ OS manages  Allocated blocks & holes

19 Multiple-Partition Allocation [Figure: four memory snapshots. Starting layout: OS | process 5 | process 8 | process 2; removing process 8 (rm p8) opens a hole between process 5 and process 2; adding process 9 (add p9) fills part of that hole; adding process 10 (add p10) fills part of the remaining hole.]

20 Multiple-Partition Allocation Dynamic storage allocation problem ◦ How to satisfy a request of size n from a list of holes? Three strategies ◦ First fit: allocate the first hole large enough ◦ Best fit: allocate the smallest hole that is big enough ◦ Worst fit: allocate the largest hole ◦ First & best fit outperform worst fit (in storage utilization)
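
A minimal sketch of first fit over a linked list of holes; the struct hole layout and the first_fit() name are hypothetical, and best fit / worst fit differ only in which candidate hole they keep while scanning:

```c
#include <stddef.h>

/* Hypothetical free-list node: a hole is a base address plus a size. */
struct hole {
    size_t base;
    size_t size;
    struct hole *next;
};

/* First fit: take the first hole large enough and carve the request off its
 * front.  Returns the allocated base, or (size_t)-1 if no hole is big enough
 * (external fragmentation: total free space may still exceed n). */
size_t first_fit(struct hole *free_list, size_t n)
{
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= n) {
            size_t base = h->base;
            h->base += n;      /* the rest of the hole stays on the list */
            h->size -= n;      /* (a real allocator would unlink empty holes) */
            return base;
        }
    }
    return (size_t)-1;         /* request cannot be satisfied */
}
```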

21 Fragmentation Storage allocation produces fragmentation! External fragmentation ◦ Enough total memory space exists to satisfy a request, but it is not contiguous Internal fragmentation ◦ Allocated memory may be slightly larger than the requested memory; the difference is memory internal to a partition that is not being used ◦ Why? Memory is allocated in fixed block sizes rather than byte by byte

22 Thinking What fragmentation is produced by multiple-partition allocation? What fragmentation is produced by single-partition allocation? Any solution to eliminate fragmentation?

23 Memory Allocation Contiguous allocation Non-contiguous allocation: Paging

24 Paging Memory allocated to a process is not necessarily contiguous ◦ Divide physical memory into fixed-sized blocks, called frames (size is a power of 2, e.g., 512B – 16MB) ◦ Divide the logical address space into blocks of the same size, called pages ◦ Memory is allocated to a process in one or more frames ◦ Page table: maps pages to frames, a per-process structure ◦ Keep track of all free frames

25 Paging Example

26 Address Translation Scheme Logical address → Physical address Address generated by the CPU is divided into ◦ Page number (p): used as an index into a page table which contains base address of each page in physical memory ◦ Page offset (d): combined with base address to define the physical memory address that is sent to the memory unit

27 Logical Address [Figure: a logical address of m bits is split into a page number p in the high m−n bits and a page offset d in the low n bits; the logical address space has size 2^m.]
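
A small C sketch of this split plus the page-table lookup, assuming a 32-bit logical address, a 12-bit offset (4 KB pages), and a flat in-memory page table; all names and sizes here are illustrative:

```c
#include <stdint.h>

#define OFFSET_BITS 12u                          /* n: 2^12 = 4 KB pages */
#define NUM_PAGES   (1u << (32 - OFFSET_BITS))   /* 2^(m-n) pages for m = 32 */

/* Per-process page table: page_table[p] holds the frame number of page p. */
static uint32_t page_table[NUM_PAGES];

uint32_t translate_paged(uint32_t logical)
{
    uint32_t p     = logical >> OFFSET_BITS;              /* page number: high m-n bits */
    uint32_t d     = logical & ((1u << OFFSET_BITS) - 1); /* offset: low n bits */
    uint32_t frame = page_table[p];                       /* page-table lookup */
    return (frame << OFFSET_BITS) | d;                    /* physical address */
}
```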

28 Address Translation

29 Exercise Consider a process of size 72,776 bytes and page size of 2048 bytes  How many entries are in the page table?  What is the internal fragmentation size?
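
One possible working (not on the slide): 72,776 / 2,048 = 35 full pages with 1,096 bytes left over, so the process needs 36 pages and hence 36 page-table entries; the last page holds only 1,096 bytes, leaving 2,048 − 1,096 = 952 bytes of internal fragmentation.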

30 Discussion How to implement page tables? Where to maintain page tables?

31 Implementation of Page Tables (1) Option 1: hardware support, using a set of dedicated registers Case study  16-bit address, 8KB page size, how many registers needed for the page table? Using dedicated registers ◦ Pros ◦ Cons
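
One possible working for the case study (not on the slide): with a 16-bit address and 8KB (2^13-byte) pages, there are 2^16 / 2^13 = 8 pages, so the page table has 8 entries and needs 8 dedicated registers.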

32 Implementation of Page Tables (2) Option 2: kept in main memory ◦ Page-table base register (PTBR) points to the page table ◦ Page-table length register (PTLR) indicates size of the page table ◦ Problem? Every data/instruction access requires 2 memory accesses: One for the page table and one for the data/instruction.

33 Option 2: Using memory to keep page tables How to handle the two-memory-access problem?

34 Option 2: Using memory to keep page tables How to handle the two-memory-access problem? Caching + hardware support ◦ Use of a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB) ◦ Caches page-table entries (replaced with LRU, etc.) ◦ Expensive but fast ◦ Small: 64 – 1024 entries
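
A sketch of the lookup a TLB performs, assuming a small fixed-size table of (page, frame) pairs; a real TLB compares all entries in parallel in hardware, and the structure and function names here are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64               /* typical sizes: 64 - 1024 entries */

struct tlb_entry {
    bool     valid;
    uint32_t page;                   /* page number (the tag) */
    uint32_t frame;                  /* cached frame number */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns true on a hit and stores the frame number; on a miss the caller
 * walks the page table in memory (one extra access) and refills the TLB. */
bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {      /* done in parallel in hardware */
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return true;                         /* hit: skip the page table */
        }
    }
    return false;                                /* miss */
}
```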

35 Associative Memory Associative memory – parallel search Address translation (A´, A´´) ◦ If A´ is in associative register, get frame # out ◦ Otherwise get frame # from page table in memory [Table: TLB entries are (page #, frame #) pairs]

36 Paging with TLB

37 Effective Memory-Access Time Associative lookup = b time units Assume one memory access takes x time units Hit ratio – percentage of times that a page number is found in the associative registers; related to the number of associative registers Hit ratio = α Effective Access Time (EAT): EAT = (x + b)α + (2x + b)(1 − α) Example: a memory access takes 100ns, a TLB lookup takes 20ns, hit ratio is 80%; what is the EAT?
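
Working the example on the slide: EAT = 0.80 × (100 + 20) + 0.20 × (2 × 100 + 20) = 96 + 44 = 140 ns, i.e., a 40% slowdown over a plain 100 ns memory access even with an 80% hit ratio.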

38 Memory Protection Memory protection implemented by associating protection bit with each frame Valid-invalid bit attached to each entry in the page table: ◦ “valid” indicates that the associated page is in the process’s logical address space, and is thus a legal page ◦ “invalid” indicates that the page is not in the process’s logical address space

39 Memory Protection

40 Page Table Structure Hierarchical Paging Hashed page tables Inverted page tables

41 Why Hierarchical Paging? Most modern computer systems support a large logical address space, 2^32 – 2^64 Large page tables ◦ Example: a 32-bit logical address space with 4KB pages gives 2^20 page-table entries. If each entry takes 4 bytes, the page table alone costs 4MB ◦ Contiguous memory allocation for large page tables may be a problem! ◦ Physical memory may not hold a single large page table!

42 Hierarchical Paging Break up the logical address space into multiple page tables ◦ Page table is also paged! A simple technique is a two-level page table

43 Two-Level Paging Example A logical address (on a 32-bit machine with 4K page size) is divided into: ◦ a page number consisting of 20 bits ( what's the page table size in bytes?) ◦ a page offset consisting of 12 bits Since the page table is paged, the page number is further divided into: ◦ a 10-bit page number ◦ a 10-bit page offset Thus, a logical address is laid out as | p1 (10 bits) | p2 (10 bits) | d (12 bits) |, where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table
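
A sketch of extracting the three fields of this 10/10/12 split from a 32-bit logical address (function and parameter names are illustrative):

```c
#include <stdint.h>

/* 32-bit logical address, 4 KB pages: | p1 (10 bits) | p2 (10 bits) | d (12 bits) | */
void split_two_level(uint32_t logical, uint32_t *p1, uint32_t *p2, uint32_t *d)
{
    *d  = logical & 0xFFFu;           /* low 12 bits: offset within the page  */
    *p2 = (logical >> 12) & 0x3FFu;   /* next 10 bits: index into inner table */
    *p1 = (logical >> 22) & 0x3FFu;   /* high 10 bits: index into outer table */
}
```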

44 Address Translation 2-level 32-bit paging architecture

45 Address Translation Example

46 Page Table Structure Hierarchical Paging Hashed page tables Inverted page tables

47 Hashed Page Tables A common approach for handling address spaces larger than 32 bits ◦ The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location. ◦ Virtual page numbers are compared along this chain searching for a match. If a match is found, the corresponding physical frame is extracted.
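
A sketch of the hashed lookup, assuming chained buckets keyed by virtual page number; the bucket count, structure, and function names are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define HASH_BUCKETS 1024u            /* illustrative table size */

/* Each bucket chains all pages that hash to the same slot. */
struct hpt_entry {
    uint64_t vpn;                     /* virtual page number (the key) */
    uint32_t frame;                   /* physical frame number */
    struct hpt_entry *next;           /* next element in the chain */
};

static struct hpt_entry *hash_table[HASH_BUCKETS];

/* Returns the frame for vpn, or -1 if the page is not mapped. */
int64_t hashed_lookup(uint64_t vpn)
{
    struct hpt_entry *e = hash_table[vpn % HASH_BUCKETS];  /* hash the VPN */
    for (; e != NULL; e = e->next)                         /* walk the chain */
        if (e->vpn == vpn)
            return e->frame;                               /* match found */
    return -1;                                             /* not mapped: fault */
}
```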

48 Hashed Page Tables

49 Page Table Structure Hierarchical Paging Hashed page tables Inverted page tables

50 Inverted Page Tables Why is it needed? How does it work? ◦ One entry for each memory frame ◦ Each entry consists of the virtual address of the page stored in that memory frame, together with information about the process that owns the page ◦ One page table for the whole system Pros & Cons

51 Inverted Page Tables Pros: reduce memory consumption for page tables Cons: linear search performance!
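
A sketch of why the lookup is a linear search: the frame number is the index of the matching entry, so in the worst case the whole table is scanned (systems typically add a hash table on top to speed this up); the entry layout and names below are illustrative:

```c
#include <stdint.h>

#define NUM_FRAMES (1u << 18)         /* illustrative: 2^18 physical frames */

/* One entry per physical frame: which process/page currently occupies it. */
struct ipt_entry {
    int32_t  pid;                     /* owning process, -1 if the frame is free */
    uint32_t vpn;                     /* virtual page number held in this frame */
};

static struct ipt_entry ipt[NUM_FRAMES];

/* Search the whole table for (pid, vpn); the index of the match *is* the
 * frame number.  Returns -1 if the page is not resident. */
int64_t inverted_lookup(int32_t pid, uint32_t vpn)
{
    for (uint32_t frame = 0; frame < NUM_FRAMES; frame++)  /* linear search */
        if (ipt[frame].pid == pid && ipt[frame].vpn == vpn)
            return frame;
    return -1;
}
```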

52 Exercise Consider a system with 32GB virtual memory, page size is 2KB. It uses 2-level paging. The size of the page directory (outer page table) is 4KB. The physical memory is 512MB.  Show how the virtual memory address is split in page directory, page table and offset.  How many (2nd level) page tables are there in this system (per process)?  How many entries are there in the (2nd level) page table?  What is the size of the frame number (in bits) needed for implementing this?
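
One possible working, assuming 4-byte page-table entries (the entry size is not given on the slide): 32 GB = 2^35 and 2 KB pages = 2^11, so the offset is 11 bits and the virtual page number is 35 − 11 = 24 bits. A 4 KB directory with 4-byte entries holds 2^10 entries, giving a 10 (directory) / 14 (page table) / 11 (offset) split. There are therefore up to 2^10 = 1,024 second-level page tables per process, each with 2^14 = 16,384 entries. Physical memory of 512 MB = 2^29 divided into 2 KB frames gives 2^18 frames, so the frame number needs 18 bits.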

53 Summary Basic concepts MMU: logical addr. → physical addr. Memory Allocation ◦ Contiguous ◦ Non-contiguous: paging  Implementation of page tables  Hierarchical paging  Hashed page tables  Inverted page tables  TLB & effective memory-access time

54 Notes: Shared Memory ipcs -m (list the shared memory segments on the system) ipcrm (remove an IPC object, e.g., a shared memory segment by id)

