The Transactional Memory / Garbage Collection Analogy Dan Grossman University of Washington Microsoft Programming Languages TCN September 14, 2010

Today
Short overview of my history and research agenda
The TM/GC Analogy: My perspective on
– Why high-level languages benefit from transactions
– What the key design dimensions are
– How to think about the software-engineering benefits
Plenty of time for discussion

Biography / group names
Programming-languages academic researcher, 1997–
PhD for Cyclone → UW faculty
– Type system, compiler for memory-safe C dialect
30% → 85% focus on multithreading
Co-advising 3–4 students with computer architect Luis Ceze
Two groups for “marketing purposes”
– WASP, wasp.cs.washington.edu
– SAMPA, sampa.cs.washington.edu
September–December 2010: Sabbatical visiting RiSE in MSR

Some things not in today’s talk
Work at lower levels of the system stack
– Compiler/run-time for deterministic multithreading [ASPLOS10]
– Lock prediction [HotPar10]
Other approaches to improve shared-memory concurrency
– Code-centric partial specifications [OOPSLA10]
Better PL support for web-application extensibility (with MSR)
– Aspects for JavaScript [OOPSLA10]
Progress-estimation for MapReduce ensembles [SIGMOD10]
Better type-checker error messages [PLDI07]
Older work on memory-safe C
…

TM at Univ. Washington
I come at transactions from the programming-languages side
– Formal semantics, language design, and efficient implementation for atomic blocks
– Software-development benefits
– Interaction with other sophisticated features of modern PLs
[ICFP05][MSPC06][PLDI07][OOPSLA07][SCHEME07][POPL08][ICFP09]

  transfer(from, to, amt) {
    atomic {
      deposit(to, amt);
      withdraw(from, amt);
    }
  }

An easier-to-use and harder-to-implement synchronization primitive

A key question
Why exactly are atomic blocks better than locks?
– Good science/engineering demands an answer
Answers I wasn’t happy with:
– “Just seems easier” [non-answer]
– “More declarative” [means what?]
– “Deadlock impossible” [only in an unhelpful technical sense]
– “Easier for idiom X” [not a general principle]
So I came up with another answer I still deeply believe years later…

The analogy
“Transactional memory is to shared-memory concurrency as garbage collection is to memory management”
Understand TM and GC better by explaining remarkable similarities
– Benefits, limitations, and implementations
– A technical description / framework with explanatory power
– Not a sales pitch

Outline
Why an analogy helps
Brief separate overview of GC and TM
The core technical analogy (but read the essay)
– And why concurrency is still harder
Provocative questions based on the analogy

Two bags of concepts
GC bag: reachability, dangling pointers, reference counting, space exhaustion, weak pointers, real-time guarantees, liveness analysis, conservative collection, finalization
TM bag: races, eager update, deadlock, obstruction-freedom, open nesting, false sharing, memory conflicts, escape analysis

Interbag connections
[same two concept bags as the previous slide, now with analogous GC/TM pairs linked]

Analogies help organize
[same concept bags, with the linked GC/TM pairs grouped side by side]

Analogies help organize
[same grouped pairs as the previous slide, adding commit handlers to the TM bag]

So the goals are…
Leverage the design trade-offs of GC to guide TM
– And vice versa?
Identify open research
Motivate TM
– TM improves concurrency as GC improves memory
– GC is a huge help despite its imperfections
– So TM is a huge help despite its imperfections

Outline
“TM is to shared-memory concurrency as GC is to memory management”
Why an analogy helps
Brief separate overview of GC and TM
The core technical analogy (but read the essay)
– And why concurrency is still harder
Provocative questions based on the analogy

Memory management
Allocate objects in the heap
Deallocate objects to reuse heap space
– If too soon, dangling-pointer dereferences
– If too late, poor performance / space exhaustion

GC Basics
Automate deallocation via a reachability approximation
– The approximation can be terrible in theory
Reachability via tracing or reference counting
– Duals [Bacon et al., OOPSLA04]
Lots of bit-level tricks for simple ideas
– And high-level ideas like a nursery for new objects
[diagram: roots pointing into heap objects]
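To make the tracing approach on this slide concrete, here is a minimal sketch (not from the talk; the names Obj, markLive, and the example graph are illustrative) of the mark phase of a tracing collector: everything reachable from the roots is kept, everything else is garbage — including cycles, which pure reference counting would miss.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

class TraceDemo {
    // A heap object is just a node with outgoing pointers.
    static class Obj {
        final String name;
        final List<Obj> fields = new ArrayList<>();
        Obj(String name) { this.name = name; }
    }

    // Depth-first mark: returns the set of objects reachable from the roots.
    static Set<Obj> markLive(List<Obj> roots) {
        Set<Obj> live = new HashSet<>();
        Deque<Obj> work = new ArrayDeque<>(roots);
        while (!work.isEmpty()) {
            Obj o = work.pop();
            if (live.add(o)) {        // first visit: scan its fields
                work.addAll(o.fields);
            }
        }
        return live;
    }

    // A heap with a garbage cycle: c and d point at each other but are
    // unreachable from the only root, a. Tracing correctly drops them.
    static Set<String> demo() {
        Obj a = new Obj("a"), b = new Obj("b"), c = new Obj("c"), d = new Obj("d");
        a.fields.add(b);
        c.fields.add(d);
        d.fields.add(c);              // cycle, but no path from the roots
        Set<String> names = new TreeSet<>();
        for (Obj o : markLive(List.of(a))) names.add(o.name);
        return names;
    }
}
```

Running demo() yields only a and b as live; the c/d cycle is garbage even though each has a nonzero in-degree, which is exactly the cycle problem for reference counting mentioned later in the talk.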

A few GC issues
Weak pointers
– Let programmers overcome the reachability approximation
Accurate vs. conservative
– Conservative can be unusable (only) in theory
Real-time guarantees for responsiveness

GC bottom line
Established technology with widely accepted benefits
Even though it can perform terribly in theory
Even though you cannot always ignore how GC works (at a high level)
Even though it is still an active research area after 50 years

Concurrency
Restrict attention to explicit threads communicating via shared memory
Synchronization mechanisms coordinate access to shared memory
– Bad synchronization can lead to races or a lack of parallelism (even deadlock)

Atomic
An easier-to-use and harder-to-implement primitive

With locks (explicit acquire/release):
  void deposit(int x) {
    synchronized(this) {
      int tmp = balance;
      tmp += x;
      balance = tmp;
    }
  }

With atomic (behaves as if there is no interleaved computation; no unfair starvation):
  void deposit(int x) {
    atomic {
      int tmp = balance;
      tmp += x;
      balance = tmp;
    }
  }

TM basics
atomic (or related constructs) implemented via transactional memory
Preserve parallelism as long as there are no memory conflicts
– Can lead to unnecessary loss of parallelism
If a conflict is detected, abort and retry
Lots of complicated details
– All updates must appear to happen at once
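The abort-and-retry shape described on this slide can be sketched for a single memory cell. This is not any real TM design from the talk — a real TM tracks read/write sets over many locations — but a one-cell toy (names TinyTm, atomically are illustrative) showing the optimistic pattern: record a version, compute off to the side, and commit only if no conflicting update happened in between; otherwise abort and retry.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongUnaryOperator;

class TinyTm {
    private final AtomicLong version = new AtomicLong(0);
    private volatile long value;

    TinyTm(long init) { value = init; }

    long get() { return value; }

    // "Transactionally" apply f to the cell.
    void atomically(LongUnaryOperator f) {
        while (true) {
            long v0 = version.get();        // transaction start: record version
            long read = value;              // "read set": just this one cell
            long newVal = f.applyAsLong(read);
            synchronized (this) {           // commit point
                if (version.get() == v0) {  // no conflicting commit in between
                    value = newVal;
                    version.incrementAndGet();
                    return;                 // commit succeeds
                }
            }
            // conflict detected: abort (discard newVal) and retry
        }
    }
}
```

The key property is that a failed transaction has no visible effect — aborting is just discarding the tentative value — which is what "all updates must appear to happen at once" requires.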

A few TM issues
Open nesting:
  atomic { … open { s; } … }
Granularity (potential false conflicts):
  atomic { … x.f++; … }    atomic { … x.g++; … }
Update-on-commit vs. update-in-place
Obstruction-freedom
…

Advantages
So atomic “sure feels better than locks”
But the crisp reasons I’ve seen are all (great) examples
– Personal favorite from Flanagan et al.
– Same issue as Java’s StringBuffer.append
– (see essay for close 2nds)

Code evolution
  void deposit(…)  { synchronized(this) { … }}
  void withdraw(…) { synchronized(this) { … }}
  int balance(…)   { synchronized(this) { … }}

Code evolution
  void deposit(…)  { synchronized(this) { … }}
  void withdraw(…) { synchronized(this) { … }}
  int balance(…)   { synchronized(this) { … }}

  void transfer(Acct from, int amt) {
    if(from.balance() >= amt && amt < maxXfer) {
      from.withdraw(amt);
      this.deposit(amt);
    }
  }

Code evolution
  void deposit(…)  { synchronized(this) { … }}
  void withdraw(…) { synchronized(this) { … }}
  int balance(…)   { synchronized(this) { … }}

  void transfer(Acct from, int amt) {
    synchronized(this) { // race
      if(from.balance() >= amt && amt < maxXfer) {
        from.withdraw(amt);
        this.deposit(amt);
      }
    }
  }

Code evolution
  void deposit(…)  { synchronized(this) { … }}
  void withdraw(…) { synchronized(this) { … }}
  int balance(…)   { synchronized(this) { … }}

  void transfer(Acct from, int amt) {
    synchronized(this) {
      synchronized(from) { // deadlock (still)
        if(from.balance() >= amt && amt < maxXfer) {
          from.withdraw(amt);
          this.deposit(amt);
        }
      }
    }
  }

Code evolution
  void deposit(…)  { atomic { … }}
  void withdraw(…) { atomic { … }}
  int balance(…)   { atomic { … }}

Code evolution
  void deposit(…)  { atomic { … }}
  void withdraw(…) { atomic { … }}
  int balance(…)   { atomic { … }}

  void transfer(Acct from, int amt) { // race
    if(from.balance() >= amt && amt < maxXfer) {
      from.withdraw(amt);
      this.deposit(amt);
    }
  }

Code evolution
  void deposit(…)  { atomic { … }}
  void withdraw(…) { atomic { … }}
  int balance(…)   { atomic { … }}

  void transfer(Acct from, int amt) {
    atomic { // correct and parallelism-preserving!
      if(from.balance() >= amt && amt < maxXfer) {
        from.withdraw(amt);
        this.deposit(amt);
      }
    }
  }

But can we generalize?
So TM sure looks appealing…
But what is the essence of the benefit?
You know my answer…

Outline
“TM is to shared-memory concurrency as GC is to memory management”
Why an analogy helps
Brief separate overview of GC and TM
The core technical analogy (but read the essay)
– And why concurrency is still harder
Provocative questions based on the analogy

The problem, part 1
Why memory management is hard:
Balance correctness (avoid dangling pointers) and performance (no space waste or exhaustion)
Manual approaches require whole-program protocols
– Example: Manual reference count for each object; must avoid garbage cycles
[on the slide, the analogous concurrency terms are overlaid: concurrent programming, race conditions, loss of parallelism, deadlock, lock, lock acquisition]
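The "manual reference count" example on this slide can be made concrete. The sketch below (names RcNode, retain, release are illustrative, not from the talk) shows the whole-program protocol: every owner must call retain/release correctly, everywhere, or objects are freed too soon or never — and cycles are never freed at all.

```java
import java.util.ArrayList;
import java.util.List;

class RcNode {
    private int count = 1;      // starts owned by its creator
    final List<RcNode> fields = new ArrayList<>();
    boolean freed = false;      // stand-in for actual deallocation

    void retain() { count++; }

    void release() {
        if (--count == 0) {
            freed = true;
            for (RcNode f : fields) f.release();  // drop our references
        }
    }
}
```

With an acyclic chain a → b, releasing the last reference to a frees both. But in a cycle c ↔ d, after all external references are released each count sticks at 1, so neither is ever freed — the garbage-cycle pitfall the slide names, and one reason the protocol cannot stay local to one module.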

The problem, part 2
Manual memory management is non-modular:
Caller and callee must know what each other access or deallocate to ensure the right memory is live
A small change can require wide-scale changes to code
– Correctness requires knowing what data subsequent computation will access
[overlaid concurrency terms: synchronization, locks are held, release, concurrent]

The solution
Move the whole-program protocol to the language implementation
One-size-fits-most implemented by experts
– Usually a combination of compiler and run-time
The GC system uses subtle invariants, e.g.:
– Object header-word bits
– No unknown mature pointers to nursery objects
In theory, object relocation can improve performance by increasing spatial locality
– In practice, some performance loss is worth the convenience
[overlaid TM terms: thread-shared, thread-local, TM, optimistic concurrency, parallelism]

Two basic approaches
Tracing: assume all data is live, detect garbage later
Reference counting: can detect garbage immediately
– Often defer some counting to trade immediacy for performance (e.g., trace the stack)
[overlaid TM terms: update-on-commit, conflict-free, conflicts, update-in-place, conflict detection, optimistic reads]
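The TM side of this eager/lazy split can be sketched for the lazy case. The toy below (names WriteBuffer, heapValue are illustrative; real commit also needs conflict detection and an atomic publish) shows update-on-commit: a transaction buffers writes in a private log and only publishes them at commit, so an abort is just discarding the log — analogous to tracing's "assume live, clean up later". Update-in-place would instead write the heap eagerly and keep an undo log for rollback.

```java
import java.util.HashMap;
import java.util.Map;

class WriteBuffer {
    private final Map<String, Integer> heap = new HashMap<>(); // shared memory
    private final Map<String, Integer> log = new HashMap<>();  // private write log

    WriteBuffer(Map<String, Integer> init) { heap.putAll(init); }

    // Reads must see the transaction's own buffered writes first.
    int read(String addr) {
        Integer buffered = log.get(addr);
        return buffered != null ? buffered : heap.get(addr);
    }

    void write(String addr, int v) { log.put(addr, v); }  // heap untouched

    void abort()  { log.clear(); }                    // cheap: discard the log
    void commit() { heap.putAll(log); log.clear(); }  // publish all at once

    int heapValue(String addr) { return heap.get(addr); }  // what others see
}
```

Until commit, other threads would observe the old heap values; after an abort there is nothing to undo. That asymmetry (cheap abort, deferred publish) versus update-in-place (cheap commit, costly rollback) mirrors the tracing-versus-refcounting trade in the analogy table that follows.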

So far…

                        memory management        concurrency
  correctness           dangling pointers        races
  performance           space exhaustion         deadlock
  automation            garbage collection       transactional memory
  new objects           nursery data             thread-local data
  eager approach        reference counting       update-in-place
  lazy approach         tracing                  update-on-commit

Incomplete solution
GC is a bad idea when “reachable” is a bad approximation of “cannot-be-deallocated”
Weak pointers overcome this fundamental limitation
– Best used by experts for well-recognized idioms (e.g., software caches)
In the extreme, programmers can encode manual memory management on top of GC
– Destroys most of GC’s advantages…
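In Java the weak pointers this slide refers to are java.lang.ref.WeakReference (and WeakHashMap for the software-cache idiom). A small sketch — the class and method names here are illustrative — of the contract: a weak reference alone does not keep its referent alive, but while any strong reference exists the weak one still resolves.

```java
import java.lang.ref.WeakReference;

class WeakDemo {
    static String describe() {
        Object cached = new Object();
        WeakReference<Object> ref = new WeakReference<>(cached);
        // While a strong reference (cached) exists, get() must return it:
        boolean aliveWhileStrong = (ref.get() == cached);
        cached = null;   // drop the strong reference...
        System.gc();     // ...after which the collector MAY reclaim it
        Object afterGc = ref.get();  // may be null or not: deliberately unspecified
        return aliveWhileStrong ? "alive-while-strong" : "broken";
    }
}
```

This is exactly "overcoming the reachability approximation": the programmer tells the collector that reachability through this pointer should not count, which is why a cache built on weak references can shrink under memory pressure instead of pinning its entries forever.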

Circumventing GC
  class Allocator {
    private SomeObjectType[] buf = …;
    private boolean[] avail = …;
    Allocator() { /* initialize arrays */ }
    SomeObjectType malloc() { /* find available index */ }
    void free(SomeObjectType o) { /* set corresponding index available */ }
  }

Incomplete solution
TM is a bad idea when “memory conflict” is a bad approximation of “cannot run in parallel”
Open nested transactions overcome this fundamental limitation
– Best used by experts for well-recognized idioms (e.g., unique-id generation)
In the extreme, programmers can encode locking on top of TM
– Destroys most of TM’s advantages…

Circumventing TM
  class SpinLock {
    private boolean b = false;
    void acquire() {
      while(true)
        atomic {
          if(b) continue;
          b = true;
          return;
        }
    }
    void release() {
      atomic { b = false; }
    }
  }

Programmer control
For performance and simplicity, GC treats entire objects as reachable, which can lead to more space
Space-conscious programmers can reorganize data accordingly
But with conservative collection, programmers cannot completely control what appears reachable
– Arbitrarily bad in theory
[overlaid TM terms: (some) TM, accessed, less parallelism, parallelism(-conscious), coarser granularity (e.g., cache lines), conflicting]

So far…

                          memory management        concurrency
  correctness             dangling pointers        races
  performance             space exhaustion         deadlock
  automation              garbage collection       transactional memory
  new objects             nursery data             thread-local data
  eager approach          reference counting       update-in-place
  lazy approach           tracing                  update-on-commit
  key approximation       reachability             memory conflicts
  manual circumvention    weak pointers            open nesting
  uncontrollable approx.  conservative collection  false memory conflicts

More
I/O: output after input of pointers can cause incorrect behavior due to dangling pointers
Real-time guarantees doable but costly
Static analysis can avoid overhead
– Example: liveness analysis for fewer root locations
– Example: remove write-barriers on nursery data
[overlaid TM terms: irreversible actions in transactions, obstruction-freedom, thread-local, escape, potential conflicts]

One more
Finalization allows arbitrary code to run when an object successfully gets collected
– But bizarre semantic rules result, since a finalizer could cause the object to become reachable
Analogously, a commit handler allows arbitrary code to run when a transaction successfully commits
– But a commit handler could cause the transaction to have memory conflicts

Too much coincidence!

                          memory management        concurrency
  correctness             dangling pointers        races
  performance             space exhaustion         deadlock
  automation              garbage collection       transactional memory
  new objects             nursery data             thread-local data
  eager approach          reference counting       update-in-place
  lazy approach           tracing                  update-on-commit
  key approximation       reachability             memory conflicts
  manual circumvention    weak pointers            open nesting
  uncontrollable approx.  conservative collection  false memory conflicts
  more…                   I/O of pointers          I/O in transactions
                          real-time                obstruction-free
                          liveness analysis        escape analysis
  …                       …                        …

Outline
“TM is to shared-memory concurrency as GC is to memory management”
Why an analogy helps
Brief separate overview of GC and TM
The core technical analogy (but read the essay)
– And why concurrency is still harder
Provocative questions based on the analogy

Concurrency is hard!
I never said the analogy means concurrent programming with TM is as easy as sequential programming with GC
By moving low-level protocols to the language run-time, TM lets programmers just declare where critical sections should be
But that is still very hard and – by definition – unnecessary in sequential programming
Huge step forward ≠ panacea

Stirring things up
I can defend the technical analogy on solid ground
Then push things (perhaps) too far…
1. Many used to think GC was too slow without hardware
2. Many used to think GC was “about to take over” (decades before it did)
3. Many used to think we needed a “back door” for when GC was too approximate
GC was invented in 1960 and went mainstream in the mid-1990s

Next steps?
Push the analogy further or discredit it
– Generational GC? Contention management?
Inspire and guide new language design and implementation
Teach programming with TM as we teach programming with GC
– First? Understand that TM solves some problems for some multithreading patterns: it doesn’t have to be all things for all people
Find other useful analogies

Thank you
Full essay in OOPSLA 2007