Runtime
The optimized program is ready to run … What sorts of facilities are available at runtime?

Compiler Passes
Analysis of the input program (front-end):
–character stream → Lexical Analysis → token stream
–token stream → Syntactic Analysis → abstract syntax tree
–abstract syntax tree → Semantic Analysis → annotated AST
Synthesis of the output program (back-end):
–annotated AST → Intermediate Code Generation → intermediate form
–intermediate form → Optimization → intermediate form
–intermediate form → Code Generation → target language

Runtime Systems
Compiled code + runtime system = executable
The runtime system can include library functions for:
–I/O: console, files, networking, etc.
–graphics libraries, other third-party libraries
–reflection: examining the static code & dynamic state of the running program itself
–threads, synchronization
–memory management
–system access, e.g. system calls
More development effort can go into the runtime system than into the compiler!

Memory Management
Support for:
–allocating a new (heap) memory block
–deallocating a memory block when it’s done; deallocated blocks are recycled
Manual memory management: the programmer decides when memory blocks are done and explicitly deallocates them
Automatic memory management: the system automatically detects when memory blocks are done and automatically deallocates them

Manual Memory Management
Typically uses "free lists": the runtime system maintains a linked list of free blocks
–to allocate a new block of memory, scan the list to find a block that’s big enough
–if there are no free blocks, allocate a large chunk of new memory from the OS; put any unused part of the newly-allocated chunk back on the free list
–to deallocate a memory block, add it to the free list
–store the free-list links in the free blocks themselves
Lots of interesting engineering details:
–allocate blocks using first fit or best fit?
–maintain multiple free lists, each for different size(s) of block?
–combine adjacent free blocks into one larger block, to avoid fragmentation of memory into lots of little blocks?
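The free-list mechanics above can be simulated in a few lines. This is a minimal sketch, not a real allocator: the class and method names (`FreeListAllocator`, `alloc`, `free`) are illustrative, the "heap" is just a sorted list of `(offset, size)` pairs, and it uses first fit with coalescing of adjacent free blocks.

```python
class FreeListAllocator:
    """Toy first-fit free-list allocator over a simulated heap."""

    def __init__(self, heap_size):
        # Start with one big free block; list of (offset, size), sorted by offset.
        self.free_list = [(0, heap_size)]

    def alloc(self, size):
        """First fit: scan for the first free block big enough; return its offset."""
        for i, (off, blk_size) in enumerate(self.free_list):
            if blk_size >= size:
                if blk_size == size:
                    del self.free_list[i]                          # exact fit
                else:
                    self.free_list[i] = (off + size, blk_size - size)  # split
                return off
        return None  # out of memory (a real allocator would ask the OS for more)

    def free(self, off, size):
        """Return a block to the free list, coalescing adjacent free blocks."""
        self.free_list.append((off, size))
        self.free_list.sort()
        merged = []
        for off2, sz2 in self.free_list:
            if merged and merged[-1][0] + merged[-1][1] == off2:
                merged[-1] = (merged[-1][0], merged[-1][1] + sz2)  # coalesce
            else:
                merged.append((off2, sz2))
        self.free_list = merged
```

Swapping the `for` loop's exit condition for "scan the whole list and keep the smallest block that fits" would turn this into best fit, which is exactly the engineering trade-off the slide asks about.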

Regions
A different interface for manual memory management. Support for:
–creating a new (heap) memory region
–allocating a new (heap) memory block from a region
–deallocating an entire region when all blocks in that region are done
Deallocating a region is much faster than deallocating all its blocks individually
Less need for complicated reasoning about when individual blocks are done
But the entire region must stay allocated as long as any block in the region is still allocated
Best for applications with "phased allocations":
–create a region at the start of a "phase"
–allocate data used only in that phase from the region
–deallocate the region when the phase completes
(What applications have significant phased allocation?)
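A region (often called an arena) can be sketched as a bump-pointer allocator: allocation is just advancing an offset, and deallocation frees everything at once. The names (`Region`, `alloc`, `dealloc`) are illustrative, not from the slides.

```python
class Region:
    """Toy bump-pointer region: per-block alloc, whole-region dealloc."""

    def __init__(self, size):
        self.size = size
        self.top = 0          # bump pointer: next free offset
        self.live = True

    def alloc(self, n):
        """Allocate n bytes by bumping the pointer; there is no per-block free."""
        assert self.live, "region already deallocated"
        if self.top + n > self.size:
            return None       # region full (a real arena would chain a new chunk)
        off = self.top
        self.top += n
        return off

    def dealloc(self):
        """Free every block in the region at once, in O(1)."""
        self.live = False
        self.top = 0
```

This makes the slide's trade-off concrete: `dealloc` is constant time no matter how many blocks were allocated, but nothing inside the region can be reclaimed until the whole region dies.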

Automatic Memory Management
A.k.a. garbage collection: automatically identify blocks that are "dead" and deallocate them
–ensures no dangling pointers, no storage leaks
–can have faster allocation, better memory locality
General styles:
–reference counting
–tracing: mark/sweep, copying
Options: generational; incremental, parallel, distributed
Accurate vs. conservative vs. hybrid

Reference Counting
For each heap-allocated block, maintain a count of the number of pointers to the block:
–when the block is created, its ref count = 0
–when a new ref to the block is created, increment its ref count
–when a ref to the block is removed, decrement its ref count
–if the ref count goes to zero, delete the block
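The four rules above can be sketched directly. This is a toy model, not a real runtime: `Obj`, `add_ref`, and `drop_ref` are invented names, and "deleting" a block just sets a `freed` flag so the behavior is observable.

```python
class Obj:
    """A toy heap block: a ref count plus outgoing pointers."""
    def __init__(self):
        self.rc = 0        # created with ref count 0, as on the slide
        self.fields = []   # outgoing pointers to other blocks
        self.freed = False

def add_ref(obj):
    """A new reference to obj was created."""
    obj.rc += 1

def drop_ref(obj):
    """A reference to obj was removed; reclaim at zero."""
    obj.rc -= 1
    if obj.rc == 0:
        obj.freed = True
        for child in obj.fields:   # deleting the block removes its outgoing refs
            drop_ref(child)
```

Running this on a cycle (a → b → a) shows the classic failure: dropping the last outside reference leaves both counts at 1, so neither block is ever freed, which is the "cannot reclaim cyclic structures" point on the next evaluation slide.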

[Diagram: reference counting — roots point at live blocks; dead blocks have count 0; an unreachable cyclic structure keeps nonzero counts]

Evaluation of Reference Counting
+ local, incremental work
+ little/no language support required
+ local, which implies it is feasible for distributed systems
- cannot reclaim cyclic structures
- uses a malloc/free back-end => the heap gets fragmented
- high run-time overhead (10–20%)
–can delay processing of pointers from the stack (deferred reference counting)
- space cost of the counts
- no bound on the time to reclaim a block
- thread-safety of count updates?
But: a surprising resurgence in recent research papers fixes almost all of these problems

Tracing Collectors
Start with a set of root pointers:
–global vars
–contents of the stack and registers
Follow pointers in blocks transitively, starting from the blocks pointed at by roots
–identifies all reachable blocks
–all unreachable blocks are garbage: unreachable implies the block cannot be accessed by the program
A question: how to identify pointers?
–which globals, stack slots, and registers hold pointers?
–which slots of heap-allocated memory hold pointers?

Identifying Pointers
"Accurate": always know unambiguously where pointers are. Use some subset of the following:
–static type info & compiler support
–a run-time tagging scheme
–run-time conventions about where pointers can be
Conservative: assume anything that looks like a pointer might be a pointer, and mark its target block reachable
+ supports GC in "uncooperative environments", e.g. C, C++
–What "looks" like a pointer? Most optimistic: only aligned pointers to the beginning of blocks. What about interior pointers? Off-the-end pointers? Unaligned pointers?
–Misses encoded pointers (e.g. xor’d ptrs), ptrs in files, …

Mark/Sweep Collection
Stop the application when the heap fills
Phase 1: trace reachable blocks, using e.g. depth-first traversal
–set the mark bit in each block
Phase 2: sweep through all of memory
–add unmarked blocks to the free list
–clear the marks of marked blocks, to prepare for the next GC
Restart the application
–allocate new (unmarked) blocks using the free list

[Diagram: mark/sweep collection — blocks reachable from the roots are live; unmarked blocks are dead and are swept]

Evaluation of Mark/Sweep
+ collects cyclic structures
+ simple to implement
+ no overhead during program execution
- "embarrassing pause" problem
- not suitable for distributed systems

Copying Collection
Divide the heap into two equal-sized semi-spaces:
–the application allocates in from-space
–to-space is empty
When from-space fills, do a GC:
–visit the blocks referenced by the roots
–when visiting a block from a pointer: copy the block to to-space, redirect the pointer to the copy, and leave a forwarding pointer in the from-space version … if the block is visited again, just redirect
–scan to-space linearly to visit reachable blocks; this may copy more blocks to the end of to-space, a la BFS
–when done scanning to-space: reset from-space to be empty, then flip (swap the roles of to-space and from-space)
–restart the application
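The copy/forward/scan steps above are essentially Cheney's algorithm, which can be sketched on toy objects modeled as dicts with a `fields` list. Names (`copy_collect`, `fwd`) are illustrative; to-space is a plain Python list, and the index `scan` plays the role of the linear scan pointer.

```python
def copy_collect(roots):
    """Toy Cheney-style copying collection.
    Copies everything reachable from roots into a fresh to-space;
    returns the redirected roots and the to-space list."""
    to_space = []

    def forward(obj):
        if 'fwd' in obj:                  # already copied: just redirect
            return obj['fwd']
        copy = {'fields': list(obj['fields'])}
        to_space.append(copy)             # copy to the end of to-space
        obj['fwd'] = copy                 # leave forwarding ptr in from-space
        return copy

    new_roots = [forward(r) for r in roots]
    scan = 0                              # linear scan of to-space (BFS order)
    while scan < len(to_space):
        obj = to_space[scan]
        obj['fields'] = [forward(f) for f in obj['fields']]
        scan += 1
    return new_roots, to_space
```

Unreachable blocks are simply never touched: they stay in from-space and disappear at the flip, which is why copying's cost is proportional to the live data, not the heap size.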

[Diagram: copying collection — live blocks are copied from from-space to to-space; dead blocks are left behind in from-space]

Evaluation of Copying
+ collects cyclic structures
+ allocates directly from the end of from-space: no free list needed, implying very fast allocation
+ memory is implicitly compacted on each collection, implying better memory locality and no fragmentation problems
+ no separate traversal stack required
+ only visits reachable blocks, ignores unreachable blocks
- requires twice the (virtual) memory; physical memory sloshes back and forth (could benefit from OS support)
- "embarrassing pause" problem remains
- copying can be slower than marking
- redirects pointers, implying the need for accurate pointer info

Generational GC
Hypothesis: most blocks die soon after allocation
–e.g. closures, cons cells, stack frames, …
Idea: concentrate GC effort on young blocks
–divide the heap into 2 or more generations
–GC each generation with a different frequency and algorithm

A Generational Collector
2 generations: new-space and old-space
–new-space managed using copying
–old-space managed using mark/sweep
To keep pauses low, make new-space relatively small: it will need frequent, but short, collections
If a block survives many new-space collections, promote it to old-space
–no more load on new-space collections
If old-space fills, do a full GC of both generations

[Diagram: generational collector — roots point into a small nursery and a main heap; blocks that survive nursery collections are promoted to the main heap]

Roots for Generational GC
When collecting new-space, pointers from old-space to new-space must be included as roots
How to find these?
1. Scan old-space at each scavenge
2. Track pointers from old-space to new-space

Tracking Old→New Pointers
How to keep track of pointers from old-space to new-space?
–need a data structure to record them in
–need a strategy to update the data structure
1. Use a purely functional language! (no assignment, so an old block can never be updated to point at a newer one)
2. Keep a list of all locations in old-space with cross-generational pointers
–instrument all assignments to update the list (a write barrier)
–the write barrier can be implemented in software or using page-protection hardware
–expensive: duplicates? space?
3. Same, but only track blocks containing such locations
–lower time and space costs, higher root-scanning costs
4. Track fixed-size cards containing such locations
–use a bit-map as the "list", which is very efficient to maintain
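Option 4 (card marking) can be sketched in a few lines. This is a simulation, not a real barrier: the names (`CARD_SIZE`, `make_card_table`, `write_barrier`, `dirty_cards`) and the 512-byte card size are illustrative assumptions.

```python
CARD_SIZE = 512                       # bytes per card (an assumed power of two)

def make_card_table(heap_size):
    """One byte per card; a bit-map would be denser but this keeps it simple."""
    return bytearray((heap_size + CARD_SIZE - 1) // CARD_SIZE)

def write_barrier(card_table, addr):
    """Run on every pointer store into old-space: mark the written card dirty.
    Just a shift and a store, which is why this barrier is cheap."""
    card_table[addr // CARD_SIZE] = 1

def dirty_cards(card_table):
    """At a new-space collection, only the dirty cards need to be scanned
    for old->new pointers, instead of all of old-space."""
    return [i for i, d in enumerate(card_table) if d]
```

Note the barrier is idempotent (re-marking a dirty card is harmless), so duplicates cost nothing; the trade-off versus option 2 is that the collector must re-scan each dirty card to find which words actually hold cross-generational pointers.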

Evaluation of Generation Scavenging
+ new-space collections are short: a fraction of a second
+ vs. pure copying: less copying of long-lived blocks, less virtual memory space
+ vs. pure mark/sweep: faster allocation, better memory locality
- requires a write barrier
- still has infrequent full GCs with embarrassing pauses