The memory allocation problem
Define the memory allocation problem
Memory organization and memory allocation schemes


Up to this point:
Process management provides the illusion of an infinite number of CPUs
–Even on a single-processor machine
Memory management provides a different set of illusions
–Protected memory
–Infinite amount of memory

Memory allocation problem
A sub-problem in memory management:
–A large block of memory (e.g. physical memory), say N bytes.
–Requests for sub-blocks arrive in an unpredictable fashion (e.g. processes start and need memory). Each request may ask for 1 to N bytes.
–Contiguous memory is allocated to a request when available. The memory block is in use for a while and is then returned to the system (freed).
–Goal: satisfy as many requests as possible.
–Constraint: CPU and memory overheads should be low.

Memory allocation problem: example
–10KB of memory to be managed
–r1 = req(1K);
–r2 = req(2K);
–r3 = req(4K);
–free(r2);
–free(r1);
–r4 = req(4K);
How the allocation is done makes a difference!
Internal fragmentation: unused memory within an allocated block
–e.g. asking for 100 bytes and getting a 512-byte block
External fragmentation: unused memory between allocated blocks
–Even when the total available memory exceeds a request, the request cannot be satisfied, as for r4 in the example above.
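A minimal, runnable sketch of this example (the 1KB-slot array, the first-fit scan, and the names req/release are illustrative assumptions, not part of the slides):

#include <stdio.h>

#define SLOTS 10                 /* 10KB region tracked in 1KB slots */
static int used[SLOTS];          /* 0 = free, 1 = allocated          */

/* First fit: find 'n' contiguous free 1KB slots, mark them used,
 * and return the starting slot, or -1 if no contiguous run exists. */
static int req(int n) {
    for (int start = 0; start + n <= SLOTS; start++) {
        int ok = 1;
        for (int i = start; i < start + n; i++)
            if (used[i]) { ok = 0; break; }
        if (ok) {
            for (int i = start; i < start + n; i++) used[i] = 1;
            return start;
        }
    }
    return -1;
}

static void release(int start, int n) {      /* free n slots at 'start' */
    for (int i = start; i < start + n; i++) used[i] = 0;
}

int main(void) {
    int r1 = req(1);             /* slot  0             */
    int r2 = req(2);             /* slots 1..2          */
    int r3 = req(4);             /* slots 3..6          */
    (void)r3;
    release(r2, 2);              /* hole: slots 1..2    */
    release(r1, 1);              /* hole grows: 0..2    */
    int r4 = req(4);             /* 6KB free in total (slots 0..2 and 7..9),
                                    but no 4KB contiguous run               */
    printf("r4 = %d\n", r4);     /* prints -1: external fragmentation      */
    return 0;
}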

Variations of the memory allocation problem occur in different situations:
–Disk space management
–Heap management: new and delete in C++, malloc and free in C

Two issues in the memory allocation problem:
–Memory organization: how should the memory be divided into blocks for allocation?
Static method: divide the memory once, before any memory is allocated.
Dynamic method: divide it up as the memory is allocated.
–Memory allocation: which piece of memory should be selected for a request?
–Memory organization and memory allocation are closely related.

Static memory organization:
–Statically divide memory into fixed-size sub-blocks, one per request.
–Advantage: easy to implement. Good when the sizes of memory requests are fixed.
–Can be extended to handle memory requests of different sizes known a priori. Example: 500,000 bytes, each request for either 50,000 or 200,000 bytes; keep two 50,000-byte blocks and two 200,000-byte blocks.
–Data structure: a linked list of free blocks for each block size.
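A sketch of the two-size example above, with one free list per block size. The helper names (static_init, static_alloc, static_free) and the choice to chain free blocks through their own first bytes are assumptions made for illustration:

#include <stddef.h>

/* Free blocks of each fixed size are chained through their first bytes. */
struct free_block { struct free_block *next; };

static _Alignas(max_align_t) unsigned char pool[500000];  /* memory to manage  */
static struct free_block *free_50k;     /* free list for 50,000-byte blocks    */
static struct free_block *free_200k;    /* free list for 200,000-byte blocks   */

static void push(struct free_block **list, void *p) {
    struct free_block *b = p;
    b->next = *list;
    *list = b;
}

/* Divide the pool once, before any allocation: two 50K and two 200K blocks. */
void static_init(void) {
    push(&free_50k,  pool);             /* bytes [0, 50000)        */
    push(&free_50k,  pool + 50000);     /* bytes [50000, 100000)   */
    push(&free_200k, pool + 100000);    /* bytes [100000, 300000)  */
    push(&free_200k, pool + 300000);    /* bytes [300000, 500000)  */
}

/* O(1) allocation: pop the head of the matching free list (NULL if empty). */
void *static_alloc(size_t size) {
    struct free_block **list = (size <= 50000) ? &free_50k : &free_200k;
    struct free_block *b = *list;
    if (b != NULL)
        *list = b->next;
    return b;
}

/* O(1) free: push the block back onto the list it came from. */
void static_free(void *p, size_t size) {
    push((size <= 50000) ? &free_50k : &free_200k, p);
}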

Static memory organization:
–Worst-case complexity: O(1) for both new and free.
–Disadvantage: cannot handle variable-size requests effectively. A large block might be needed to satisfy a small request.
–Internal fragmentation

Buddy system:
–Allows larger blocks to be used to satisfy smaller requests without wasting more than half of each block.
–Maintain a free-block list for each power-of-two size. E.g. if memory has 512KB, the system maintains free-block lists for blocks of 512KB, 256KB, 128KB, 64KB, 32KB, 16KB, 8KB, 4KB, 2KB, 1KB, 512 bytes, 256 bytes, 128 bytes, 64 bytes, 32 bytes, 16 bytes, 8 bytes, 4 bytes, 2 bytes, and 1 byte.
–All free lists start off empty except the list for the largest block size, which contains one free block (the entire memory).

Buddy system:
–When a request arrives, round it up to the next power of two and look at that free list. If that list is empty, go to the next larger power of two and split a free block there into two buddies. If that also fails, keep going up until a free block is found.
–When freeing a block, check whether its buddy is free: if so, merge the two (and repeat the check at the next larger size); otherwise, insert the freed block into the appropriate free-block list.

Example:
–Memory = 32 bytes, with free-block lists for 32, 16, 8, 4, 2, and 1 bytes.
–What is the initial state of the free-block lists?
–What is the memory state after the sequence: request 1 (1 byte), free(request 1), request 2 (4 bytes), request 3 (8 bytes), free(request 2), request 4 (1 byte), request 5 (2 bytes), free(request 3)?

–The buddy system is a semi-dynamic method: it keeps the simplicity of statically allocated blocks while handling the difficult case of different request sizes.
–Worst-case complexity?
–Drawback: can still have internal fragmentation (requests are rounded up to the next power of two).
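A compact sketch of the buddy algorithm just described, using the 32-byte region from the example. Keeping the free lists as plain offset arrays outside the managed memory, and all function names, are simplifications assumed here for brevity:

#include <stdio.h>

#define TOTAL     32             /* managed region size (a power of two)      */
#define MAX_ORDER 5              /* 2^5 = 32; orders run 0 (1 byte) .. 5      */

/* free_list[k] holds the byte offsets of free blocks of size 2^k. */
static int free_list[MAX_ORDER + 1][TOTAL];
static int free_count[MAX_ORDER + 1];

static void push_free(int order, int offset) {
    free_list[order][free_count[order]++] = offset;
}

static int pop_free(int order) {
    return free_count[order] > 0 ? free_list[order][--free_count[order]] : -1;
}

/* Smallest order whose block size (2^order) can hold 'size' bytes. */
static int order_for(int size) {
    int order = 0;
    while ((1 << order) < size)
        order++;
    return order;
}

/* Allocate: take a block of the requested order, splitting a larger
 * free block into buddies as many times as necessary.              */
int buddy_alloc(int size) {
    int order = order_for(size), k = order;
    while (k <= MAX_ORDER && free_count[k] == 0)
        k++;                                   /* find a big-enough free block */
    if (k > MAX_ORDER)
        return -1;                             /* out of memory                */
    int offset = pop_free(k);
    while (k > order) {                        /* split down to the right size */
        k--;
        push_free(k, offset + (1 << k));       /* the upper half becomes free  */
    }
    return offset;
}

/* Free: while the buddy (offset XOR block size) is also free, merge with it. */
void buddy_free(int offset, int size) {
    int order = order_for(size);
    while (order < MAX_ORDER) {
        int buddy = offset ^ (1 << order), i;
        for (i = 0; i < free_count[order]; i++)
            if (free_list[order][i] == buddy)
                break;
        if (i == free_count[order])
            break;                             /* buddy is not free: stop      */
        free_list[order][i] = free_list[order][--free_count[order]];
        if (buddy < offset)
            offset = buddy;                    /* merged block starts lower    */
        order++;
    }
    push_free(order, offset);
}

int main(void) {
    push_free(MAX_ORDER, 0);                   /* initially one free 32-byte block */
    int a = buddy_alloc(1);                    /* splits 32 into 16, 8, 4, 2, 1, 1 */
    buddy_free(a, 1);                          /* merges all the way back to 32    */
    printf("free 32-byte blocks after merging: %d\n", free_count[MAX_ORDER]);
    return 0;
}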

Dynamic memory allocation:
–Grant only the size requested (unlike static memory organization and the buddy system).
–Example: total 512 bytes: allocate(r1, 100), allocate(r2, 200), allocate(r3, 200), free(r2), allocate(r4, 10), free(r1), allocate(r5, 200)
–Fragmentation: memory is divided up into small blocks, none of which can be used to satisfy any request. Static allocation suffers internal fragmentation; dynamic allocation suffers external fragmentation.

Issues in dynamic memory allocation:
–Where are the free memory blocks? How do we keep track of the free and used memory blocks?
–Which memory block should be allocated? Multiple free blocks may be able to satisfy a request; which one should be used?
–Fragmentation must be kept low.

–Keeping track of the blocks: the list method. Keep a linked list of all blocks of memory (the block list).
Example: total 512 bytes: allocate(r1, 100), allocate(r2, 200), allocate(r3, 200), free(r2), allocate(r4, 10), free(r1), allocate(r5, 200)
How is the allocation operation done? How is the free operation done? What information needs to be kept in each list node?

–What information is kept in each list node?
address: start address of the block
size: size of the block
allocated: whether the block is free or allocated

struct list_node {
    int address;                 /* start address of the block              */
    int size;                    /* size of the block in bytes              */
    int allocated;               /* 1 if the block is allocated, 0 if free  */
    struct list_node *next;
    struct list_node *prev;
};

–Do we have to keep track of allocated blocks in the list? Not necessarily. What is good and bad about keeping information on allocated blocks?
–Good: invalid free operations can be detected.
–Where should the block list be kept?
Reserve space for the block list at the beginning of the memory: a limited number of block-list nodes, constant space overhead (memory taken away for memory management).
Or use space within each block: each block begins with a header holding its block_list node.
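A sketch of the header-within-block variant: every block begins with its block-list node, allocation splits a free block with first fit, and freeing coalesces with free neighbors. The names and the omission of alignment handling for split headers are assumptions made to keep the sketch short:

#include <stddef.h>

/* Each block starts with a header; the usable payload follows it directly. */
struct header {
    size_t size;                  /* payload size in bytes                   */
    int allocated;                /* 1 = allocated, 0 = free                 */
    struct header *next;          /* next block in address order             */
    struct header *prev;          /* previous block in address order         */
};

static _Alignas(max_align_t) unsigned char pool[4096];
static struct header *list_head; /* the block list, kept in address order    */

/* Start with one big free block covering the whole pool. */
void list_init(void) {
    list_head = (struct header *)pool;
    list_head->size = sizeof(pool) - sizeof(struct header);
    list_head->allocated = 0;
    list_head->next = list_head->prev = NULL;
}

/* First-fit allocation: find a free block that is large enough, split the
 * remainder off as a new free block when it can hold another header, and
 * mark the chosen block allocated.                                          */
void *list_alloc(size_t size) {
    for (struct header *h = list_head; h != NULL; h = h->next) {
        if (h->allocated || h->size < size)
            continue;
        if (h->size > size + sizeof(struct header)) {             /* split */
            struct header *rest = (struct header *)((char *)(h + 1) + size);
            rest->size = h->size - size - sizeof(struct header);
            rest->allocated = 0;
            rest->next = h->next;
            rest->prev = h;
            if (h->next) h->next->prev = rest;
            h->next = rest;
            h->size = size;
        }
        h->allocated = 1;
        return h + 1;             /* the payload starts right after the header */
    }
    return NULL;                  /* no free block is large enough             */
}

/* Free: mark the block free and coalesce with free neighbors on both sides. */
void list_free(void *p) {
    struct header *h = (struct header *)p - 1;
    h->allocated = 0;
    if (h->next && !h->next->allocated) {       /* absorb the next block      */
        h->size += sizeof(struct header) + h->next->size;
        h->next = h->next->next;
        if (h->next) h->next->prev = h;
    }
    if (h->prev && !h->prev->allocated) {       /* fold this block into prev  */
        h->prev->size += sizeof(struct header) + h->size;
        h->prev->next = h->next;
        if (h->next) h->next->prev = h->prev;
    }
}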

–Keeping track of the memory blocks: the bitmap method. Memory is divided into fixed-size blocks, and one bit records whether each block is allocated or not. A bitmap is a string of such bits.
–E.g. 4MB of memory where each block is 1 byte: how many bits are needed to keep track of the status of the memory? What if each block is 1KB instead? How is allocation done? How is freeing done?
–Commonly used for disk space management.
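A sketch of bitmap-based tracking, assuming 1KB blocks over a small region; allocation is the "string matching" scan for a run of clear bits mentioned in the comparison below, and all names are illustrative. (For the question above: 4MB of memory with 1-byte blocks needs 4M bits, i.e. 512KB of bitmap; with 1KB blocks it needs only 4K bits, i.e. 512 bytes.)

#include <stdint.h>

#define BLOCK_SIZE 1024
#define NBLOCKS    256                        /* 256KB region, 1 bit per block */

static uint8_t bitmap[NBLOCKS / 8];          /* bit set = block allocated      */

static int  test_bit(int i)  { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(int i)   { bitmap[i / 8] |=  (uint8_t)(1 << (i % 8)); }
static void clear_bit(int i) { bitmap[i / 8] &= (uint8_t)~(1 << (i % 8)); }

/* Allocate: scan the bitmap for 'n' consecutive clear bits,
 * set them, and return the first block number (or -1).       */
int bitmap_alloc(int n) {
    for (int start = 0; start + n <= NBLOCKS; start++) {
        int run_ok = 1;
        for (int i = start; i < start + n; i++)
            if (test_bit(i)) { run_ok = 0; start = i; break; }  /* skip past used bit */
        if (run_ok) {
            for (int i = start; i < start + n; i++) set_bit(i);
            return start;
        }
    }
    return -1;
}

/* Free: just clear the bits for the blocks being released. */
void bitmap_free(int start, int n) {
    for (int i = start; i < start + n; i++) clear_bit(i);
}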

Comparing the list method and the bitmap method:

                            list method (block header)     bitmap method
space                       proportional to # of blocks    fixed (static)
time to find free blocks    O(B)                           string matching
time to free a block        O(B)                           O(1)

Which free block should be allocated?
–First fit: search from the beginning; use the first free block that is large enough.
–Next fit: search from the current position; use the first free block that is large enough.
–Best fit: use the smallest free block that can fit the request.
–Worst fit: use the largest free block that can fit the request.
Which algorithm is faster? Which is slower? Which one is best (satisfies the most requests)?
–It depends on the programs (the workload).
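Assuming the block-header list from the earlier sketch, the policies differ only in how the scan chooses among the free blocks that fit; a sketch of first fit and best fit:

#include <stddef.h>

struct header {                  /* same block header as in the earlier sketch */
    size_t size;
    int allocated;
    struct header *next;
    struct header *prev;
};

/* First fit: return the first free block that is large enough.
 * Fast, but tends to chop up the blocks near the front of the list. */
struct header *find_first_fit(struct header *list, size_t size) {
    for (struct header *h = list; h != NULL; h = h->next)
        if (!h->allocated && h->size >= size)
            return h;
    return NULL;
}

/* Best fit: scan the whole list and return the smallest free block that
 * still fits.  Slower, and the leftover pieces tend to be tiny slivers. */
struct header *find_best_fit(struct header *list, size_t size) {
    struct header *best = NULL;
    for (struct header *h = list; h != NULL; h = h->next)
        if (!h->allocated && h->size >= size &&
            (best == NULL || h->size < best->size))
            best = h;
    return best;
}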

One question: what would be a simple and effective heap management scheme for most applications?
–Hint: most applications do not use the heap that much!
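One scheme consistent with the hint (an assumption here, not an answer stated in the slides) is a simple bump allocator: reserve a region once, hand out pieces by advancing an offset, and reclaim everything only when the whole region is released. A minimal sketch:

#include <stddef.h>

#define HEAP_SIZE (64 * 1024)

static _Alignas(max_align_t) unsigned char heap[HEAP_SIZE];
static size_t heap_top;          /* offset of the next unused byte            */

/* Allocate by "bumping" the top offset; individual frees are not supported. */
void *bump_alloc(size_t size) {
    size = (size + 7) & ~(size_t)7;          /* round up for 8-byte alignment */
    if (heap_top + size > HEAP_SIZE)
        return NULL;                         /* out of heap space             */
    void *p = heap + heap_top;
    heap_top += size;
    return p;
}

/* Reclaim everything at once, e.g. when the program or a phase finishes. */
void bump_reset(void) {
    heap_top = 0;
}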