Memory Management
Tom Roeder, CS215 2006fa

Motivation
Recall unmanaged code, e.g. C:

    {
        double* A = malloc(sizeof(double) * M * N);
        for (int i = 0; i < M*N; i++) {
            A[i] = i;
        }
    }

What's wrong? A memory leak: we forgot to call free(A). This is a common problem in C.

Motivation
What's wrong here?

    char* f() {
        char c[100];
        for (int i = 0; i < 100; i++) {
            c[i] = i;
        }
        return c;
    }

The function returns memory allocated on the stack, which is reclaimed as soon as f returns. Can you still do this in C#? No: array sizes must be specified in "new" expressions, so arrays live on the managed heap.

Motivation
The solution: no explicit malloc/free (new/delete), e.g. in Java/C#:

    {
        double[] A = new double[M*N];
        for (int i = 0; i < M*N; i++) {
            A[i] = i;
        }
    }

No leak: the memory is "lost" but freed later. A garbage collector tries to free memory; it keeps track of which memory is in use somehow.

COM's Solution: Reference Counting
AddRef/Release:
- each time a new reference is created, call AddRef
- each time a reference is released, call Release
Both must be called by the programmer, which leads to difficult bugs:
- forgot to AddRef: the object disappears underneath you
- forgot to Release: memory leaks
Entirely manual solutions are unacceptable.

Garbage Collection
Why must this be done manually in COM? There is no way to tell what points to what: C/C++ pointers can point to anything. C#/Java have a managed runtime in which all pointer types are known at runtime, so reference counting can be done in the CLR. Garbage collection is program analysis: it figures out properties of code automatically. There are two types of analysis: dynamic and static.

Soundness and Completeness
For any program analysis:
- Sound? Are the operations always correct? Usually an absolute requirement.
- Complete? Does the analysis capture all possible instances?
For garbage collection:
- sound = does it ever delete memory that is still in use?
- complete = does it delete all unused memory?

Reference Counting
As in COM, keep a count of references. How?
- on assignment, increment the count of the new target and decrement that of the old
- when variables are removed, decrement, e.g. local variables being popped off the stack
- the runtime knows where all objects live
- at ref count 0, reclaim the object's space
Advantage: incremental (the program doesn't stop). Is this safe? Yes: no references means not reachable. A minimal sketch of the mechanics follows.
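The sketch below makes the bookkeeping concrete in C#; RefCounted, AddRef, and Release are hypothetical names (echoing COM's), and reclamation is simulated with a flag rather than by actually freeing memory.

    // Minimal reference-counting sketch; not how the CLR manages memory.
    class RefCounted
    {
        private int count = 1;              // one reference exists at creation
        public bool Reclaimed { get; private set; }

        public void AddRef() { count++; }   // a new reference was created

        public void Release()               // a reference went away
        {
            if (--count == 0)
            {
                Reclaimed = true;           // stand-in for freeing the object's space
            }
        }
    }

    class RefCountDemo
    {
        static void Main()
        {
            var obj = new RefCounted();     // count = 1
            obj.AddRef();                   // a second reference: count = 2
            obj.Release();                  // count = 1: still reachable
            obj.Release();                  // count = 0: reclaimed
            System.Console.WriteLine(obj.Reclaimed);   // True
        }
    }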

Reference Counting
Disadvantages:
- constant cost on every assignment, even when there is lots of free space (optimize the common case!)
- can't detect cycles
Reference counting has fallen out of favor.
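For instance, two objects that refer to each other keep each other's count at one even after the program has dropped both, so a pure reference counter never reclaims them; Node here is a hypothetical illustration.

    class Node
    {
        public Node Next;         // reference to another node
    }

    class CycleDemo
    {
        static void Main()
        {
            var a = new Node();
            var b = new Node();
            a.Next = b;           // under ref counting, b's count becomes 2
            b.Next = a;           // and a's count becomes 2
            a = null;             // a's count drops to 1, held by b.Next
            b = null;             // b's count drops to 1, held by a.Next
            // Neither count ever reaches 0, yet neither object is reachable:
            // a tracing collector reclaims both; a reference counter leaks them.
        }
    }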

Trees
Instead of counting references:
- keep track of some top-level objects and trace out the reachable objects
- only clean up the heap when out of space
This is much better for low-memory programs. There are two major types of algorithm:
- Mark and Sweep
- Copy Collectors

Trees
The top-level objects are managed by the CLR:
- local variables on the stack
- registers pointing to objects
The garbage collector starts from these top-level objects and builds a graph of the reachable objects.

Mark and Sweep
A two-pass algorithm:
- First pass: walk the graph and mark all reachable objects (everything starts unmarked).
- Second pass: sweep the heap and remove unmarked objects; not reachable implies garbage.
Soundness? Yes: any object that is not marked is not reachable. Completeness? Yes, since any unreachable object is never marked, but only complete eventually: garbage is reclaimed at the next collection, not the moment it becomes unreachable. A sketch of the two passes follows.
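A minimal sketch of the two passes over a toy heap; Obj, the heap list, and the root list are hypothetical stand-ins for the runtime's real data structures.

    using System.Collections.Generic;

    class Obj
    {
        public bool Marked;
        public List<Obj> Children = new List<Obj>();
    }

    class MarkSweep
    {
        // Pass 1: walk the graph from a root, marking every reachable object.
        static void Mark(Obj o)
        {
            if (o == null || o.Marked) return;
            o.Marked = true;
            foreach (var child in o.Children) Mark(child);
        }

        // Pass 2: sweep the whole heap, removing anything unmarked.
        static void Sweep(List<Obj> heap)
        {
            heap.RemoveAll(o => !o.Marked);           // unmarked = unreachable = garbage
            foreach (var o in heap) o.Marked = false; // reset marks for the next collection
        }

        public static void Collect(List<Obj> heap, List<Obj> roots)
        {
            foreach (var r in roots) Mark(r);
            Sweep(heap);
        }
    }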

Mark and Sweep
This can be expensive, e.g. in emacs: everything stops while collection happens. This is a general problem for garbage collection. At the end of the first phase, we know all the reachable objects; we should use this information. How could we use it?

Copy Collectors
Instead of just marking as we trace, copy each reachable object to a new part of the heap (there must be enough space to do this). Then there is no need for a second pass.
Advantages: one pass; compaction.
Disadvantages: higher memory requirements.

Fragmentation
A common problem in memory schemes: there is enough free memory in total, but not enough of it is contiguous. Consider the allocator in an OS. (diagram: a scattered heap in which no single hole can satisfy a request of size 10)

Unmanaged Algorithms
- best-fit: search the heap for the closest fit; takes time, and causes external fragmentation (as we saw)
- first-fit: choose the first fit found, starting from the beginning of the heap
- next-fit: first-fit with a pointer to the last place searched
A sketch of the three policies appears below.
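A minimal sketch of the three policies over a toy free list of holes; FreeBlock and the list are hypothetical, and splitting and coalescing of holes are omitted.

    using System.Collections.Generic;

    class FreeBlock { public int Offset, Size; }

    class FitPolicies
    {
        // best-fit: the smallest hole that is still big enough.
        public static FreeBlock BestFit(List<FreeBlock> free, int size)
        {
            FreeBlock best = null;
            foreach (var b in free)
                if (b.Size >= size && (best == null || b.Size < best.Size))
                    best = b;
            return best;
        }

        // first-fit: the first hole from the start of the heap that is big enough.
        public static FreeBlock FirstFit(List<FreeBlock> free, int size)
        {
            foreach (var b in free)
                if (b.Size >= size) return b;
            return null;
        }

        // next-fit: like first-fit, but resume from where the last search stopped.
        private static int cursor = 0;
        public static FreeBlock NextFit(List<FreeBlock> free, int size)
        {
            if (free.Count == 0) return null;
            for (int i = 0; i < free.Count; i++)
            {
                var b = free[(cursor + i) % free.Count];
                if (b.Size >= size)
                {
                    cursor = (cursor + i) % free.Count;
                    return b;
                }
            }
            return null;
        }
    }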

Unmanaged Algorithms
- worst-fit: put the object in the largest available hole. Under what workload is this good? When objects need to grow, e.g. database construction, or a network connection table.
Different algorithms are appropriate in different settings, and so are designed differently. In a compiler/runtime, we want access speed.

Heap Allocation Algorithms
What is best for the managed heap? Allocation must usually be O(1), so neither best-fit nor first-fit will do; use next-fit, walking along the edge of the last allocated chunk. The general idea: allocate contiguously, moving forwards until out of memory. A bump-pointer sketch follows.
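A minimal sketch of this contiguous, O(1) "bump-pointer" allocation; BumpAllocator is hypothetical, and a real runtime would trigger a collection rather than simply fail when space runs out.

    class BumpAllocator
    {
        private readonly byte[] heap;
        private int next;                 // the edge of the last allocated chunk

        public BumpAllocator(int size) { heap = new byte[size]; }

        // O(1): allocation is just a pointer bump.
        public int Allocate(int size)
        {
            if (next + size > heap.Length)
                return -1;                // out of memory; a real runtime would collect here
            int offset = next;
            next += size;                 // bump forwards
            return offset;
        }
    }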

Compacting Copy Collector
Move live objects to the bottom of the heap:
- leaves more free space on top
- contiguous allocation allows faster access; the cache works better with locality
We must then modify references: recall that references are really pointers, so the stored location must be updated in each object that refers to a moved one. This can be made very fast.

Compacting Copy Collector
Another possible collector: divide memory into two halves.
- fill up one half before doing any collection
- when it is full: walk the trees and copy reachable objects to the other side, then work from the new side
This needs twice the memory of the other collectors, but there is no need to find space in the old side: contiguous allocation is easy. A sketch follows.
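A minimal sketch of this two-half (semi-space) copying, reusing a toy Obj graph; the CopyOf field is a hypothetical stand-in for the forwarding pointer real collectors store in moved objects.

    using System.Collections.Generic;

    class Obj
    {
        public Obj CopyOf;                 // forwarding pointer: where this object moved
        public List<Obj> Children = new List<Obj>();
    }

    class SemiSpace
    {
        // Copy one object into to-space; the forwarding pointer ensures
        // shared objects are copied only once.
        static Obj Copy(Obj o, List<Obj> toSpace)
        {
            if (o == null) return null;
            if (o.CopyOf != null) return o.CopyOf;    // already moved
            var copy = new Obj();
            o.CopyOf = copy;
            toSpace.Add(copy);                        // contiguous allocation in to-space
            foreach (var child in o.Children)
                copy.Children.Add(Copy(child, toSpace));
            return copy;
        }

        // When from-space fills up: walk from the roots, copy what is reachable,
        // and work from the new side. Everything left behind is garbage.
        public static List<Obj> Collect(List<Obj> roots)
        {
            var toSpace = new List<Obj>();
            for (int i = 0; i < roots.Count; i++)
                roots[i] = Copy(roots[i], toSpace);
            return toSpace;
        }
    }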

C# Memory Management
Related to next-fit and copy collection:
- keep a NextObjPointer to the next free space
- use it for new objects until there is no more space
Keep knowledge of the root objects:
- global and static object pointers
- local variables on every thread's stack
- registers pointing to objects
These are maintained by the JIT compiler and the runtime, e.g. the JIT keeps a table of roots.

C# Memory Management
On traversal:
- walk from the roots to find all live objects
- make a linear pass through the heap; on finding a gap, compact higher objects down
- fix object references to make this work
This is very fast in general. Further speedups come from treating different kinds of objects differently.

Generations
Current .NET uses 3 generations:
- 0: recently created objects, yet to survive a GC
- 1: survived 1 GC pass
- 2: survived more than 1 GC pass
Assumption: the longer an object has lived, the longer it will live. Is this a good assumption? It is for many applications and many systems (e.g. P2P). Older objects end up lower in the heap. Generations are observable through the GC class, as below.
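These generations can be observed with the real System.GC API; a small demo follows (exact promotion behavior can vary with runtime and GC mode, so the comments describe the typical outcome).

    using System;

    class GenerationDemo
    {
        static void Main()
        {
            var obj = new object();
            Console.WriteLine(GC.GetGeneration(obj));  // 0: freshly allocated

            GC.Collect();                              // force a collection
            Console.WriteLine(GC.GetGeneration(obj));  // typically 1: survived one pass

            GC.Collect();
            Console.WriteLine(GC.GetGeneration(obj));  // typically 2: survived again

            Console.WriteLine(GC.MaxGeneration);       // 2: the oldest generation
        }
    }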

Generations
During compaction, objects are promoted between generations, e.g. a reachable Generation 1 object goes to Generation 2. (diagram: braces dividing the heap into three regions labeled Generation 0, Generation 1, and Generation 2)

More Generation Optimization
Don't trace the references held in old objects. Why? It is a speed improvement, but an old object could refer to young objects. Use write-watch support. How? Note when an old object has had some field set; only then trace through its references.

Large Object Heap
An area of the heap is dedicated to large objects (20k or more). It is never compacted. Why?
- the copy cost outweighs any locality benefit
- it is automatically generation 2, and rarely collected
- large objects are likely to have long lifetimes
Commonly used for DataGrid objects holding results from database queries. The example below makes the generation-2 behavior visible.
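A small demo of that automatic generation-2 placement, using the real GC API; the exact size threshold is an implementation detail, so the array here is made comfortably large, and the reported generation is the expected behavior rather than a guarantee.

    using System;

    class LargeObjectDemo
    {
        static void Main()
        {
            var small = new byte[100];
            var large = new byte[1000000];   // well above any large-object threshold

            Console.WriteLine(GC.GetGeneration(small));  // 0: an ordinary young object
            Console.WriteLine(GC.GetGeneration(large));  // 2: large objects start old
        }
    }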

Object Pinning
We can require that an object not move:
- this could hurt GC performance
- it is useful for unsafe operations; in fact, it is needed to make pointers work
Syntax: fixed(...) { ... } will not move the objects named in the declaration while control is in the block. An example follows.
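A small example of the fixed syntax; it must appear in an unsafe context, so the project needs unsafe code enabled.

    class PinningDemo
    {
        static unsafe void Main()
        {
            int[] data = { 1, 2, 3 };

            // Pin data: the GC will not move the array within this block.
            fixed (int* p = data)
            {
                for (int i = 0; i < 3; i++)
                {
                    p[i] *= 2;    // raw pointer arithmetic on the pinned array
                }
            }
            // After the block, data is unpinned and free to move again.
        }
    }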

Finalization
Recall C++ destructors:

    ~MyClass() {
        // cleanup
    }

The destructor is called when the object is deleted, and does cleanup for this object. Don't rely on this in C# (or Java): a similar construct exists, but it is only called on GC, with no guarantees about when.

Finalization
The more common idiom is a finalizer that delegates to Dispose. In C#, destructor syntax compiles to a Finalize override that calls base.Finalize for you:

    ~MyClass() {
        Dispose(false);
    }

This may be needed for unmanaged resources, but it slows down GC significantly. Finalization in the GC:
- when an object with a Finalize method is created, it is added to the Finalization Queue
- when it is about to be GC'ed, it is moved to the Freachable Queue instead
The full pattern is sketched below.
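A sketch of the standard dispose pattern this idiom belongs to; MyClass and its resource slots are hypothetical placeholders, while the shape (Dispose, Dispose(bool), SuppressFinalize, finalizer) is the documented .NET convention.

    using System;

    class MyClass : IDisposable
    {
        private bool disposed;

        // Called by user code: clean up now and skip finalization entirely.
        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);   // the finalizer no longer needs to run
        }

        // disposing == true: called via Dispose; managed objects may be touched.
        // disposing == false: called from the finalizer; touch only unmanaged state.
        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            if (disposing)
            {
                // release managed resources here
            }
            // release unmanaged resources here
            disposed = true;
        }

        ~MyClass()
        {
            Dispose(false);   // GC-triggered cleanup path
        }
    }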

Finalization
(diagrams of the Finalization Queue and the Freachable Queue, from MSDN, Nov 2000)