
1 On-the-Fly Garbage Collection Using Sliding Views. Erez Petrank, Technion – Israel Institute of Technology. Joint work with Yossi Levanoni, Hezi Azatchi, and Harel Paz.

2 Garbage Collection. The user allocates space dynamically; the garbage collector automatically frees the space when it is "no longer needed". Usually "no longer needed" = unreachable by a path of pointers from the program's local references (roots). The programmer does not have to decide when to free an object (no memory leaks, no dereferencing of freed objects). Built into Java and C#.

3 Garbage Collection: Two Classic Approaches. Reference counting [Collins 1960]: keep a reference count for each object; reclaim objects whose count is 0. Tracing [McCarthy 1960]: trace the reachable objects; reclaim objects not traced. Traditional wisdom: tracing is good, reference counting is problematic.

4 What (was) Bad about RC? It does not reclaim cycles. It imposes a heavy overhead on pointer modifications. Traditional belief: "cannot be used efficiently with parallel processing". (Figure: a cycle of two objects, A and B.)

5 What's Good about RC? Reference counting work is proportional to the work on creations and modifications. Can tracing deal with tomorrow's huge heaps? Reference counting has good locality. The challenge: RC overhead on pointer modification seems too expensive, and RC seems impossible to "parallelize".

6 Garbage Collection Today. Today's advanced environments: multiprocessors + large memories. Dealing with multiprocessors: single-threaded stop-the-world collection.

7 Garbage Collection Today. Today's advanced environments: multiprocessors + large memories. Dealing with multiprocessors: concurrent collection and parallel collection.

8 Terminology (stop-the-world, parallel, concurrent, on-the-fly). (Diagram: program vs. GC activity timelines for stop-the-world, parallel (STW), concurrent, and on-the-fly collection.)

9 Benefits & Costs (informal). (Diagram: the same four collector types, with pause times of roughly 200 ms, 20 ms, and 2 ms and a throughput loss of about 10–20%.)

10 This Talk. Introduction: RC and tracing, coping with SMPs; RC introduction and the parallelization problem. Main focus: a novel concurrent reference counting algorithm (suitable for Java); the concurrent collector is made on-the-fly using "sliding views". Extensions: cycle collection, mark and sweep, generations, age-oriented collection. Implementation and measurements on Jikes: extremely short pauses, good throughput.

11 Basic Reference Counting. Each object has an RC field; new objects get o.rc := 1. When a pointer p that points to o1 is modified to point to o2, execute: o2.rc++, o1.rc--. If o1.rc then drops to 0: delete o1, decrement o.rc for every child o of o1, and recursively delete objects whose rc is decremented to 0. (Figure: p retargeted from o1 to o2.)
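To make the bookkeeping concrete, here is a minimal C sketch of this basic, non-deferred scheme; the Object layout, the fixed children array, and the helper names are illustrative assumptions, not the structures of the actual collector.

    #include <stdlib.h>

    /* Minimal sketch of basic (non-deferred) reference counting.
     * The Object layout and helpers are illustrative assumptions. */
    typedef struct Object Object;
    struct Object {
        int      rc;          /* the RC field                   */
        int      nfields;
        Object  *fields[4];   /* outgoing pointers (children)   */
    };

    /* o.rc--, with recursive deletion when the count reaches 0. */
    static void rc_release(Object *o) {
        if (o != NULL && --o->rc == 0) {
            for (int i = 0; i < o->nfields; i++)
                rc_release(o->fields[i]);   /* decrement children */
            free(o);                        /* delete o           */
        }
    }

    /* Executed whenever a pointer slot is retargeted from o1 to o2:
     * o2.rc++, o1.rc-- (increment first, so a self-assignment is safe). */
    static void rc_update(Object **slot, Object *o2) {
        Object *o1 = *slot;
        if (o2 != NULL) o2->rc++;
        *slot = o2;
        rc_release(o1);
    }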

12 An Important Term. A write barrier is a piece of code executed with each pointer update. "p ← o2" implies: read p (see o1); p ← o2; o2.rc++; o1.rc--.

13 Deferred Reference Counting. Problem: the overhead of updating rc for program variables (locals) is too high. Solution [Deutsch & Bobrow 76]: do not update rc for local variables (roots). "Once in a while": collect all objects with o.rc = 0 that are not referenced from local variables. Deferred RC reduces the overhead by 80% and is used in most modern RC systems. Still, the "heap" write barrier is too costly.
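A hedged sketch of the Deutsch–Bobrow idea, reusing the Object type from the sketch above: heap stores still pay the RC barrier, stores to locals do not, and objects whose count drops to zero are parked in a zero-count table (ZCT) to be reclaimed later only if no root references them. The ZCT array and the roots_reference / reclaim helpers are assumptions for illustration.

    #include <stdbool.h>

    /* Deferred RC sketch in the style of Deutsch & Bobrow (1976).
     * The ZCT representation and helpers are illustrative assumptions. */
    #define ZCT_MAX 4096
    static Object *zct[ZCT_MAX];             /* zero-count table         */
    static int     zct_size = 0;

    extern bool roots_reference(Object *o);  /* scan of locals (assumed) */
    extern void reclaim(Object *o);          /* recursive delete (assumed) */

    /* Heap stores still run the RC write barrier. */
    void heap_write_barrier(Object **slot, Object *new_val) {
        Object *old = *slot;
        if (new_val != NULL) new_val->rc++;
        *slot = new_val;
        if (old != NULL && --old->rc == 0 && zct_size < ZCT_MAX)
            zct[zct_size++] = old;           /* maybe garbage; decide later */
    }

    /* Stores to local variables (roots) are plain stores: no rc work. */

    /* "Once in a while": free ZCT objects not referenced from roots. */
    void process_zct(void) {
        for (int i = 0; i < zct_size; i++)
            if (zct[i]->rc == 0 && !roots_reference(zct[i]))
                reclaim(zct[i]);
        zct_size = 0;
    }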

14 Multithreaded RC? Traditional wisdom: the write barrier must be synchronized!

15 Multithreaded RC? Problem 1: reference-count updates must be atomic. Fortunately, this can be easily solved: each thread logs the required updates in a local buffer, and the collector applies all the updates during GC (as a single thread).

16 Multithreaded RC? Problem 2: parallel updates confuse the counters. Thread 1: read A.next (see B); A.next ← C; B.rc--; C.rc++. Thread 2: read A.next (see B); A.next ← D; B.rc--; D.rc++. (Figure: objects A, B, C, D; A.next initially points to B.)

17 Known Multithreaded RC. [DeTreville 1990, Bacon et al. 2001]: a compare-and-swap for each pointer modification; each thread records its updates in a buffer.

18 To Summarize the Problems… The write barrier overhead is high, even with deferred RC. Using RC with multithreading seems to bear a high synchronization cost: a lock or "compare & swap" with each pointer update.

19 Reducing RC Overhead. We start by looking at the "parent's point of view": we count rc for the child, but rc changes when a parent's pointer is modified. (Figure: a parent object pointing to a child object.)

20 An Observation. Consider a pointer p that takes the following values between GCs: O0, O1, O2, …, On. All RC algorithms perform 2n operations: O0.rc--; O1.rc++; O1.rc--; O2.rc++; O2.rc--; …; On.rc++. But only two operations are needed: O0.rc-- and On.rc++. (Figure: p sliding over O0, O1, …, On.)

21 Use of Observation. Only the first modification of each pointer between garbage collections is logged. During the program run: p ← O1 (record p's previous value O0); p ← O2 (do nothing); …; p ← On (do nothing). At garbage collection, for each modified slot p: read p to get On, read the records to get O0, then O0.rc--, On.rc++.
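A small sketch of the collection-time side of this scheme: for every slot whose first modification was logged, the collector fetches the recorded old value O0 and the current value On and performs only the two required counter updates. The LogEntry layout is an assumption, reusing the Object type from the earlier sketch.

    /* Sketch: applying the deferred updates at collection time.
     * LogEntry is an illustrative assumption.                    */
    typedef struct {
        Object **slot;     /* the modified pointer slot           */
        Object  *old_val;  /* value held when first logged (O_0)  */
    } LogEntry;

    void process_log(LogEntry *log, int n) {
        for (int i = 0; i < n; i++) {
            Object *cur = *log[i].slot;    /* current value O_n   */
            Object *old = log[i].old_val;  /* recorded value O_0  */
            if (cur != NULL) cur->rc++;    /* O_n.rc++            */
            if (old != NULL) old->rc--;    /* O_0.rc--            */
            /* Objects whose rc drops to 0 are reclaimed later,
             * after checking that no root references them.       */
        }
    }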

22 Some Technical Remarks. When a pointer is first modified, it is marked "dirty" and its previous value is logged. We actually log each object that gets modified (not just a single pointer). Reason 1: we do not want a dirty bit per pointer. Reason 2: an object's pointers tend to be modified together. Only non-null pointer fields are logged. New objects are "born dirty".

23 Effects of Optimization. RC work is significantly reduced: the number of logging and counter updates drops by a factor of 100–1000 for typical Java benchmarks!

24 Elimination of RC Updates.

    Benchmark   | No. of stores | No. of "first" stores | Ratio of "first" stores
    jbb         | 71,011,357    | 264,115               | 1/269
    Compress    | 64,905        | 51                    | 1/1273
    Db          | 33,124,780    | 30,696                | 1/1079
    Jack        | 135,174,775   | 1,546                 | 1/87435
    Javac       | 22,042,028    | 535,296               | 1/41
    Jess        | 26,258,107    | 27,333                | 1/961
    Mpegaudio   | 5,517,795     | 51                    | 1/108192

25 Effects of Optimization. RC work is significantly reduced: the number of logging and counter updates drops by a factor of 100–1000 for typical Java benchmarks! The write barrier overhead is dramatically reduced: the vast majority of write barriers run a single "if". Last but not least, the task has changed: we need to record the first update.

26 Reducing Synchronization Overhead. Our second contribution: a carefully designed write barrier (plus an observation) does not require any synchronization operation.

27 The Write Barrier.

    Update(Object **slot, Object *new) {
        Object *old = *slot;
        if (!IsDirty(slot)) {
            log(slot, old);
            SetDirty(slot);
        }
        *slot = new;
    }

Observation: if two threads (1) invoke the write barrier in parallel, and (2) both log an old value, then both record the same old value.

28 Running the Write Barrier Concurrently.

Thread 1:

    Update(Object **slot, Object *new) {
        Object *old = *slot;
        if (!IsDirty(slot)) {
            /* If we got here, Thread 2 has not yet set  */
            /* the dirty bit, and thus has not yet       */
            /* modified the slot.                        */
            log(slot, old);
            SetDirty(slot);
        }
        *slot = new;
    }

Thread 2:

    Update(Object **slot, Object *new) {
        Object *old = *slot;
        if (!IsDirty(slot)) {
            /* If we got here, Thread 1 has not yet set  */
            /* the dirty bit, and thus has not yet       */
            /* modified the slot.                        */
            log(slot, old);
            SetDirty(slot);
        }
        *slot = new;
    }

29 Concurrent Algorithm. Use the write barrier with the program threads. To collect: stop all threads; scan the roots (local variables); get the buffers with the modified slots; clear all dirty bits; resume the threads. Then, for each modified slot: decrement rc for the old value (recorded in the buffer) and increment rc for the current value (read from the heap). Reclaim non-local objects with rc 0.
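A compressed sketch of one such collection cycle, assuming runtime hooks (stop_all_threads, scan_roots, steal_buffers, clear_dirty_bits, resume_all_threads) and the process_log helper sketched earlier; none of these names come from the actual Jikes implementation.

    /* One concurrent-RC collection cycle (sketch).
     * All hooks and types below are illustrative assumptions;
     * LogEntry and process_log are from the earlier sketch.      */
    typedef struct LogBuffer {
        LogEntry         *entries;
        int               count;
        struct LogBuffer *next;      /* one buffer per thread      */
    } LogBuffer;

    typedef struct RootSet RootSet;  /* opaque set of local refs   */

    extern void       stop_all_threads(void);
    extern void       resume_all_threads(void);
    extern RootSet   *scan_roots(void);
    extern LogBuffer *steal_buffers(void);
    extern void       clear_dirty_bits(LogBuffer *logs);
    extern void       reclaim_zero_rc_not_in(RootSet *roots);

    void collect(void) {
        stop_all_threads();                    /* the short pause          */
        RootSet   *roots = scan_roots();       /* locals of all threads    */
        LogBuffer *logs  = steal_buffers();    /* per-thread update logs   */
        clear_dirty_bits(logs);
        resume_all_threads();                  /* program runs again       */

        for (LogBuffer *b = logs; b != NULL; b = b->next)
            process_log(b->entries, b->count); /* O_0.rc--, O_n.rc++       */

        reclaim_zero_rc_not_in(roots);         /* rc == 0 and not a root   */
    }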

30 Timeline. Stop threads → scan roots, get buffers, erase dirty bits → resume threads → decrement the values recorded in the buffers, increment the "current" values, collect dead objects.

31 Timeline. Stop threads → scan roots, get buffers, erase dirty bits → resume threads → decrement the values recorded in the buffers, increment the "current" values, collect dead objects. Note: unmodified current values are in the heap; modified ones are in the new buffers.

32 Concurrent Algorithm. Use the write barrier with the program threads. To collect: stop all threads; scan the roots (local variables); get the buffers with the modified slots; clear all dirty bits; resume the threads. Then, for each modified slot: decrement rc for the old value (recorded in the buffer) and increment rc for the current value (read from the heap). Reclaim non-local objects with rc 0. Goal 1: clear the dirty bits during the program run. Goal 2: stop one thread at a time.

33 The Sliding Views "Framework". Develop a concurrent algorithm in which there is a short time during which all the threads are stopped simultaneously to perform some task. Then avoid stopping the threads together: instead, stop one thread at a time. The tricky part is to "fix" the problems created by this modification. The idea is borrowed from the distributed computing community [Lamport].

34 Graphically. (Figure, heap address vs. time: a snapshot is a view of the whole heap at a single time t; a sliding view observes different heap addresses at different times between t1 and t2.)

35 Fixing Correctness. The way to do this in our algorithm is to use snooping: while collecting the roots, record the objects that get a new pointer, and do not reclaim these objects. (No further details here.)
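A hedged sketch of how snooping might be folded into the write barrier from slide 27: while the collector is gathering roots it raises a per-thread snoop flag, and any object that receives a new heap pointer during that window is marked so it will not be reclaimed in the current cycle. The snoop flag, mark_snooped, and the lowercase stand-ins for the slide's IsDirty/SetDirty/log helpers are illustrative assumptions, not the paper's exact mechanism.

    #include <stdbool.h>

    /* Snooping sketch: keep alive any object that gains a heap
     * pointer while the collector is collecting the roots.
     * snoop and mark_snooped are illustrative assumptions.       */
    static _Thread_local bool snoop = false;   /* raised by the collector */

    extern bool is_dirty(Object **slot);
    extern void set_dirty(Object **slot);
    extern void log_slot(Object **slot, Object *old);
    extern void mark_snooped(Object *o);       /* spare it this cycle */

    void update(Object **slot, Object *new_val) {
        Object *old = *slot;
        if (!is_dirty(slot)) {
            log_slot(slot, old);
            set_dirty(slot);
        }
        *slot = new_val;
        if (snoop && new_val != NULL)
            mark_snooped(new_val);             /* do not reclaim now */
    }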

36 Cycle Collection. Our initial solution: run a tracing algorithm infrequently. More about this tracing collector and about cycle collectors later…

37 Performance Measurements. Implementation for Java on the Jikes Research JVM. Compared collectors: the Jikes parallel stop-the-world collector (STW) and the Jikes concurrent RC collector (Jikes concurrent). Benchmarks: SPECjbb2000, a server benchmark that simulates business-like transactions; SPECjvm98, a client suite of mostly single-threaded benchmarks.

38 Pause Times vs. STW

39 Pause Times vs. Jikes Concurrent

40 SPECjbb2000 Throughput

41 SPECjvm98 Throughput

42 SPECjbb2000 Throughput

43 A Glimpse into Subsequent Work: SPECjbb2000 Throughput

44 Subsequent Work. Cycle collection [CC'05]. A mark-and-sweep collector [OOPSLA'03]. A generational collector [CC'03]. An age-oriented collector [CC'05].

45 Related Work. It is not clear where to start: RC, concurrent, generational, etc. Some of the more relevant work was mentioned along the way.

46 Conclusions. A study of concurrent garbage collection with a focus on RC. Novel techniques obtain short pauses and high efficiency. The best approach: age-oriented collection with concurrent RC for the old generation and concurrent tracing for the young generation. Implementation and measurements on Jikes demonstrate non-obtrusiveness and high efficiency.

47 Project Building Blocks. A novel reference counting algorithm. State-of-the-art cycle collection. Generational collection: RC for the old generation and tracing for the young. A concurrent tracing collector. An age-oriented collector: fitting generations to concurrent collectors.

