Memory Management II: Dynamic Storage Allocation Mar 7, 2000 Topics Segregated free lists –Buddy system Garbage collection –Mark and Sweep –Copying –Reference counting


Memory Management II: Dynamic Storage Allocation Mar 7, 2000 Topics Segregated free lists –Buddy system Garbage collection –Mark and Sweep –Copying –Reference counting class15.ppt “The course that gives CMU its Zip!”

CS 213 S'00 – 2 – class15.ppt
Basic allocator mechanisms
Sequential fits (implicit or explicit single free list)
–best fit, first fit, or next fit placement
–various splitting and coalescing options: splitting thresholds, immediate or deferred coalescing
Segregated free lists
–simple segregated storage: separate heap for each size class
–segregated fits: separate linked list for each size class
–buddy systems

CS 213 S'00 – 3 – class15.ppt
Segregated storage
Each size "class" has its own collection of blocks
Often have a separate collection for every small size (2, 3, 4, …)
For larger sizes, typically have a collection for each power of 2

CS 213 S'00 – 4 – class15.ppt
Simple segregated storage
Separate heap and free list for each size class
No splitting
To allocate a block of size n:
–if the free list for size n is not empty, allocate the first block on the list (the list can be implicit or explicit)
–if the free list is empty, get a new page, create a new free list from all blocks in the page, and allocate the first block on the list
–constant time
To free a block:
–add to free list
–if the page is empty, return the page for use by another size (optional)
Tradeoffs: fast, but can fragment badly

CS 213 S'00 – 5 – class15.ppt
Segregated fits
Array of free lists, each one for some size class
To allocate a block of size n:
–search the appropriate free list for a block of size m >= n
–if an appropriate block is found: split the block and place the fragment on the appropriate list (optional)
–if no block is found, try the next larger class; repeat until a block is found
To free a block:
–coalesce and place on the appropriate list (optional)
Tradeoffs
–faster search than sequential fits (i.e., log time for power-of-two size classes)
–controls the fragmentation of simple segregated storage
–coalescing can increase search times; deferred coalescing can help

CS 213 S'00 – 6 – class15.ppt
Buddy systems
Special case of segregated fits: all blocks are power-of-two sizes
Basic idea:
–Heap is 2^m words
–Maintain separate free lists of each size 2^k, 0 <= k <= m
–Requested block sizes are rounded up to the nearest power of 2
–Originally, one free block of size 2^m

CS 213 S'00 – 7 – class15.ppt
Buddy systems (cont)
To allocate a block of size 2^k:
–Find the first available block of size 2^j s.t. k <= j <= m
–if j == k then done
–otherwise recursively split the block until j == k
–Each remaining half is called a "buddy" and is placed on the appropriate free list
[Figure: a 2^m block split recursively in half; each half left over from a split is a buddy on its free list]

CS 213 S'00 – 8 – class15.ppt
Buddy systems (cont)
To free a block of size 2^k:
–continue coalescing with buddies while the buddies are free
[Figure: the freed block merges with its free buddy; the next buddy up is not free, so the merged block is added to the appropriate free list]

CS 213 S'00 – 9 – class15.ppt
Buddy systems (cont)
Key fact about buddy systems: given the address and size of a block, it is easy to compute the address of its buddy
–e.g., a block of size 32 with address xxx...x00000 has buddy xxx...x10000
Tradeoffs:
–fast search and coalesce
–subject to internal fragmentation

CS 213 S'00 – 10 – class15.ppt
Internal fragmentation
Internal fragmentation is wasted space inside allocated blocks:
–minimum block size larger than requested amount (e.g., due to minimum free block size, free list overhead)
–policy decision not to split blocks (e.g., buddy system)
Much easier to define and measure than external fragmentation.

CS 213 S'00 – 11 – class15.ppt
Implicit memory management: garbage collection
Garbage collection: automatic reclamation of heap-allocated storage -- the application never has to free
Common in functional languages, scripting languages, and modern object-oriented languages: Lisp, ML, Java, Perl, Mathematica
Variants (conservative garbage collectors) exist for C and C++, but they cannot collect all garbage

void foo() {
    int *p = malloc(128);
    return; /* p block is now garbage */
}

CS 213 S'00 – 12 – class15.ppt
Garbage collection
How does the memory manager know when memory can be freed?
–In general we cannot know what is going to be used in the future, since it depends on conditionals
–But we can tell that certain blocks cannot be used if there are no pointers to them
Need to make certain assumptions about pointers:
–The memory manager can distinguish pointers from non-pointers
–All pointers point to the start of a block
–Pointers cannot be hidden (e.g., by coercing them to an int and back again)

CS 213 S'00 – 13 – class15.ppt
Classical GC algorithms
Mark-and-sweep collection (McCarthy, 1960)
–Does not move blocks (unless you also "compact")
Reference counting (Collins, 1960)
–Does not move blocks
Copying collection (Minsky, 1963)
–Moves blocks
For more information, see Jones and Lins, "Garbage Collection: Algorithms for Automatic Dynamic Memory Management", John Wiley & Sons, 1996.

CS 213 S'00 – 14 – class15.ppt
Memory as a graph
We view memory as a directed graph
–Each block is a node in the graph
–Each pointer is an edge in the graph
–Locations not in the heap that contain pointers into the heap are called root nodes (e.g., registers, locations on the stack, global variables)
A node (block) is reachable if there is a path from any root to that node. Non-reachable nodes are garbage (never needed by the application)
[Figure: root nodes with edges into heap nodes; reachable blocks vs. non-reachable (garbage) blocks]

CS 213 S'00 – 15 – class15.ppt
Assumptions for this lecture
Application
–new(n): returns a pointer to a new block with all locations cleared
–read(b,i): read location i of block b into a register
–write(b,i,v): write v into location i of block b
Each block will have a header word, addressed as b[-1] for a block b
–Used for different purposes in different collectors
Instructions used by the garbage collector
–is_ptr(p): determines whether p is a pointer
–length(b): returns the length of block b, not including the header
–get_roots(): returns all the roots

CS 213 S'00 – 16 – class15.ppt
Mark and sweep collecting
Can build on top of a malloc/free package
–Allocate using malloc until you "run out of space"
When out of space:
–Use an extra "mark bit" in the header of each block
–Mark: start at the roots and set the mark bit on all reachable memory
–Sweep: scan all blocks and free the blocks that are not marked
[Figure: heap before mark; after mark, with mark bits set on blocks reachable from the root; after sweep, with unmarked blocks freed]

CS 213 S'00 – 17 – class15.ppt
Mark and sweep (cont.)
Mark using depth-first traversal of the memory graph

ptr mark(ptr p) {
    if (!is_ptr(p)) return;        // do nothing if not pointer
    if (markBitSet(p)) return;     // check if already marked
    setMarkBit(p);                 // set the mark bit
    for (i=0; i < length(p); i++)  // mark all children
        mark(p[i]);
    return;
}

Sweep using lengths to find the next block

ptr sweep(ptr p, ptr end) {
    while (p < end) {
        if (markBitSet(p))
            clearMarkBit(p);
        else if (allocateBitSet(p))
            free(p);
        p += length(p);
    }
}

CS 213 S'00 – 18 – class15.ppt
Mark and sweep in C: a conservative collector
is_ptr() can determine whether a word is a pointer by checking whether it points to an allocated block of memory
But in C, pointers can point to the middle of a block -- so how do we find the beginning of the block?
–Can use a balanced tree to keep track of all allocated blocks, where the key is the location
–Balanced-tree pointers can be stored in the header (uses two additional words per block)
[Figure: block layout with a header holding the size plus left and right tree pointers, followed by the data]

CS 213 S'00 – 19 – class15.ppt
Copying collection
Keep two equal-sized spaces, from-space and to-space
Repeat until the application finishes:
–Application allocates in one space contiguously until the space is full
–Stop the application and copy all reachable blocks to contiguous locations in the other space
–Flip the roles of the two spaces and restart the application
The copy does not necessarily keep the order of the blocks
Has the effect of removing all fragments
[Figure: before the copy, from-space holds reachable blocks interleaved with garbage; after the copy, to-space holds the reachable blocks compacted together]

CS 213 S'00 – 20 – class15.ppt
Copying collection (new)
All new blocks are allocated in to-space, one after the other
An extra word is allocated for the header
The garbage collector starts (flips) when we reach top

ptr new(int n) {
    if (free+n+1 > top) flip();
    newblock = free;
    free += (n+1);
    for (i=0; i < n+1; i++) newblock[i] = 0;
    return newblock+1;
}

[Figure: to-space with the free and top pointers, from-space alongside]

CS 213 S'00 – 21 – class15.ppt
Copying collection (flip)

void flip() {
    swap(&fromspace, &tospace);
    top = tospace + size;
    free = tospace;
    for (r in roots) r = copy(r);
}

[Figure: after the first three lines of flip (before the copy), the spaces have swapped roles; the new from-space holds both reachable and non-reachable blocks]

CS 213 S'00 – 22 – class15.ppt
Copying collection (copy)

ptr copy(ptr p) {
    if (!is_ptr(p)) return p;     // do nothing if not pointer
    if (p[-1] != 0) return p[-1]; // check if already forwarded
    new = free+1;                 // location for the copy
    p[-1] = new;                  // set the forwarding pointer
    new[-1] = 0;                  // clear forward in new copy
    free += length(p)+1;          // increment free pointer
    for (i=0; i < length(p); i++) // copy all children and
        new[i] = copy(p[i]);      //   update pointers to new loc
    return new;
}

[Figure: reachable from-space blocks copied into to-space, each leaving a forwarding pointer behind in its old header]

CS 213 S'00 – 23 – class15.ppt
Reference counting
Basic algorithm
–Keep a count on each block of how many pointers point to the block
–When a count goes to zero, the block can be freed
Data structures
–Can be built on top of an existing explicit allocator: allocate(n), free(p)
–Add an additional header word for the "reference count"
Keeping the count updated requires that we modify every read and write (can be optimized out in some cases)

CS 213 S'00 – 24 – class15.ppt
Reference counting
Set the reference count to one when creating a new block
When reading a value, increment its reference counter
When writing, decrement the old value and increment the new value

ptr new(int n) {
    newblock = allocate(n+1);
    newblock[0] = 1;
    return newblock+1;
}

void write(ptr b, int i, val v) {
    decrement(b[i]);
    if (is_ptr(v)) v[-1]++;
    b[i] = v;
}

val read(ptr b, int i) {
    v = b[i];
    if (is_ptr(v)) v[-1]++;
    return v;
}

CS 213 S'00 – 25 – class15.ppt
Reference counting: decrement
If a counter decrements to zero, then the block can be freed
When freeing a block, the algorithm must decrement the counters of everything pointed to by the block -- this might in turn recursively free more blocks

void decrement(ptr p) {
    if (!is_ptr(p)) return;
    p[-1]--;
    if (p[-1] == 0) {
        for (i=0; i<length(p); i++)
            decrement(p[i]);
        free(p-1);
    }
}

CS 213 S'00 – 26 – class15.ppt
Reference counting example
[Figure: initial state, with root R pointing into the heap; counts: T=2, S=1, U=1, V=1]
Now consider: write(R,1,NULL)
This will execute a: decrement(S)

CS 213 S'00 – 27 – class15.ppt
Reference counting example
After the counter on S is decremented
[Figure: counts now T=2, S=0, U=1, V=1]

CS 213 S'00 – 28 – class15.ppt
Reference counting example
After: decrement(S[0])
[Figure: counts now T=1, S=0, U=1, V=1]

CS 213 S'00 – 29 – class15.ppt
Reference counting example
After: decrement(S[1])
[Figure: counts now T=1, S=0, U=0, V=0; S, U, and V are freed]

CS 213 S'00 – 30 – class15.ppt
Reference counting: cyclic data structures
Reference counting cannot reclaim cycles: even after the last external pointer into a cycle is removed, the counts inside the cycle stay nonzero, so none of its blocks is ever freed
[Figure: before write(R,1,NULL), counts are S=2, T=2, U=1; after, S=1, T=2, U=1 -- S, T, and U form a cycle that keeps every count above zero]

CS 213 S'00 – 31 – class15.ppt
Garbage collection summary
Copying collection
–Pros: prevents fragmentation, and allocation is very cheap
–Cons: requires twice the space (from and to), and stops the application to collect
Mark and sweep
–Pros: requires little extra memory (assuming low fragmentation) and does not move data
–Cons: allocation is somewhat slower, and all memory needs to be scanned when sweeping
Reference counting
–Pros: requires little extra memory (assuming low fragmentation) and does not move data
–Cons: reads and writes are more expensive, and it is difficult to deal with cyclic data structures
Some collectors use a combination (e.g., copying for small objects and reference counting for large objects)