Software Transactions: A Programming-Languages Perspective Dan Grossman University of Washington 28 February 2008.

Slide 2: Atomic

An easier-to-use and harder-to-implement primitive: lock acquire/release vs. (behave as if) no interleaved computation.

    // with locks                      // with transactions
    void deposit(int x) {              void deposit(int x) {
      synchronized(this) {               atomic {
        int tmp = balance;                 int tmp = balance;
        tmp += x;                          tmp += x;
        balance = tmp;                     balance = tmp;
      }                                  }
    }                                  }

Slide 3: Relation to “The View”

Focus assumes explicit threads + shared memory
– Will be one of a few widespread models
Seems well suited for “branch-and-bound”
– And other less-structured data-access patterns
A better synchronization primitive is a great thing
– Much of today will motivate…
– … without overselling

Slide 4: Viewpoints

Software transactions good for:
Software engineering (avoid races & deadlocks)
Performance (optimistic “no conflict” without locks)

Research should be guiding:
New hardware with transactional support
Software support
– Semantic mismatch between language & hardware
– Prediction: hardware for the common/simple case
– May be fast enough without hardware
– Lots of nontransactional hardware exists

Slide 5: PL Perspective

Complementary to lower-level implementation work
Motivation:
– What is the essence of the advantage over locks?
Language design:
– Rigorous high-level semantics
– Interaction with rest of the language
Language implementation:
– Interaction with modern compilers
– New optimization needs

A key improvement for the shared-memory puzzle piece

Slide 6: Today, part 1

Language design, semantics:
1. Motivation: Example + the GC analogy [OOPSLA07]
2. Semantics: strong vs. weak isolation [PLDI07]* [POPL08]
3. Interaction w/ other features [ICFP05] [SCHEME07] [POPL08]

* Joint work with Intel PSL

Slide 7: Today, part 2

Implementation:
4. On one core [ICFP05] [SCHEME07]
5. Static optimizations for strong isolation [PLDI07]*
6. Multithreaded transactions

* Joint work with Intel PSL

Slide 8: Code evolution

    void deposit(…)  { synchronized(this) { … }}
    void withdraw(…) { synchronized(this) { … }}
    int  balance(…)  { synchronized(this) { … }}

Slide 9: Code evolution

    void deposit(…)  { synchronized(this) { … }}
    void withdraw(…) { synchronized(this) { … }}
    int  balance(…)  { synchronized(this) { … }}

    void transfer(Acct from, int amt) {
      if(from.balance() >= amt && amt < maxXfer) {
        from.withdraw(amt);
        this.deposit(amt);
      }
    }

Slide 10: Code evolution

    void deposit(…)  { synchronized(this) { … }}
    void withdraw(…) { synchronized(this) { … }}
    int  balance(…)  { synchronized(this) { … }}

    void transfer(Acct from, int amt) {
      synchronized(this) { //race
        if(from.balance() >= amt && amt < maxXfer) {
          from.withdraw(amt);
          this.deposit(amt);
        }
      }
    }

Slide 11: Code evolution

    void deposit(…)  { synchronized(this) { … }}
    void withdraw(…) { synchronized(this) { … }}
    int  balance(…)  { synchronized(this) { … }}

    void transfer(Acct from, int amt) {
      synchronized(this) {
        synchronized(from) { //deadlock (still)
          if(from.balance() >= amt && amt < maxXfer) {
            from.withdraw(amt);
            this.deposit(amt);
          }
        }
      }
    }

Slide 12: Code evolution

    void deposit(…)  { atomic { … }}
    void withdraw(…) { atomic { … }}
    int  balance(…)  { atomic { … }}

Slide 13: Code evolution

    void deposit(…)  { atomic { … }}
    void withdraw(…) { atomic { … }}
    int  balance(…)  { atomic { … }}

    void transfer(Acct from, int amt) { //race
      if(from.balance() >= amt && amt < maxXfer) {
        from.withdraw(amt);
        this.deposit(amt);
      }
    }

Slide 14: Code evolution

    void deposit(…)  { atomic { … }}
    void withdraw(…) { atomic { … }}
    int  balance(…)  { atomic { … }}

    void transfer(Acct from, int amt) {
      atomic { //correct and parallelism-preserving!
        if(from.balance() >= amt && amt < maxXfer) {
          from.withdraw(amt);
          this.deposit(amt);
        }
      }
    }

Slide 15: But can we generalize?

So transactions sure look appealing…
But what is the essence of the benefit?

Transactional Memory (TM) is to shared-memory concurrency as Garbage Collection (GC) is to memory management

Slide 16: GC in 60 seconds

Automate deallocation via reachability approximation
[diagram: roots → heap objects]
Allocate objects in the heap
Deallocate objects to reuse heap space
– If too soon, dangling-pointer dereferences
– If too late, poor performance / space exhaustion

Slide 17: GC Bottom-line

Established technology with widely accepted benefits
Even though it can perform arbitrarily badly in theory
Even though you can’t always ignore how GC works (at a high level)
Even though an active research area after 40 years

Now about that analogy…

Slide 18: The problem, part 1

Why memory management is hard:
Balance correctness (avoid dangling pointers)
And performance (space waste or exhaustion)
Manual approaches require whole-program protocols
Example: Manual reference count for each object
– Must avoid garbage cycles

[slide animation substitutes the concurrency terms: concurrent programming, race conditions, loss of parallelism, lock, lock acquisition, deadlock]

Slide 19: The problem, part 2

Manual memory management is non-modular:
Caller and callee must know what each other access or deallocate to ensure the right memory is live
A small change can require wide-scale code changes
– Correctness requires knowing what data subsequent computation will access

[slide animation substitutes the concurrency terms: synchronization, locks are held, release, concurrent]

Slide 20: The solution

Move the whole-program protocol to the language implementation
One-size-fits-most implemented by experts
– Usually inside the compiler and run-time
The GC system uses subtle invariants, e.g.:
– Object header-word bits
– No unknown mature pointers to nursery objects

[slide animation substitutes: TM, thread-shared, thread-local]

Slide 21: So far…

                    memory management     concurrency
    correctness     dangling pointers     races
    performance     space exhaustion      deadlock
    automation      garbage collection    transactional memory
    new objects     nursery data          thread-local data

Slide 22: Incomplete solution

GC is a bad idea when “reachable” is a bad approximation of “cannot-be-deallocated”
Weak pointers overcome this fundamental limitation
– Best used by experts for well-recognized idioms (e.g., software caches)
In the extreme, programmers can encode manual memory management on top of GC
– Destroys most of GC’s advantages

[slide animation substitutes: TM, memory conflict, run-in-parallel, open nested txns, unique-id generation, locking]

Slide 23: Circumventing TM

    class SpinLock {
      private boolean b = false;
      void acquire() {
        while(true)
          atomic {
            if(b) continue;
            b = true;
            return;
          }
      }
      void release() {
        atomic { b = false; }
      }
    }

Slide 24: It really keeps going (see the essay)

                            memory management        concurrency
    correctness             dangling pointers        races
    performance             space exhaustion         deadlock
    automation              garbage collection       transactional memory
    new objects             nursery data             thread-local data
    key approximation       reachability             memory conflicts
    manual circumvention    weak pointers            open nesting
    complete avoidance      object pooling           lock library
    uncontrollable approx.  conservative collection  false memory conflicts
    eager approach          reference-counting       update-in-place
    lazy approach           tracing                  update-on-commit
    external data           I/O of pointers          I/O in transactions
    forward progress        real-time                obstruction-free
    static optimizations    liveness analysis        escape analysis

Slide 25: Lesson

Transactional memory is to shared-memory concurrency as garbage collection is to memory management

Huge but incomplete help for correct, efficient software
The analogy should help guide transactions research

Slide 26: Today, part 1

Language design, semantics:
1. Motivation: Example + the GC analogy [OOPSLA07]
2. Semantics: strong vs. weak isolation [PLDI07]* [POPL08] [Katherine Moore]
3. Interaction w/ other features [ICFP05] [SCHEME07] [POPL08]

* Joint work with Intel PSL

Slide 27: “Weak” isolation

Widespread misconception: “Weak” isolation violates the “all-at-once” property only if the corresponding lock code has a race. (May still be a bad thing, but smart people disagree.)

    initially y == 0

    // Thread 1          // Thread 2
    atomic {             x = 2;
      y = 1;             print(y); // 1? 2? 666?
      x = 3;
      y = x;
    }
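The slide's question marks can be made concrete with a small interleaving enumerator. This is an illustrative model written for this transcript, not the talk's formal semantics: it runs Thread A's three writes against Thread B's two operations under every schedule, optionally forcing A's block to run contiguously (strong isolation).

```python
from itertools import combinations

# Thread A runs  atomic { y = 1; x = 3; y = x; }  and Thread B runs
# x = 2; print(y);  with x and y initially 0. Each operation is a
# function mutating a shared-memory dict.
def thread_a():
    return [
        lambda m: m.update(y=1),
        lambda m: m.update(x=3),
        lambda m: m.update(y=m['x']),
    ]

def thread_b(observed):
    return [
        lambda m: m.update(x=2),
        lambda m: observed.add(m['y']),  # stands in for print(y)
    ]

def outcomes(strong):
    """Collect every value print(y) can observe. Under strong
    isolation A's three operations run contiguously; under weak
    isolation B may interleave between them."""
    observed = set()
    a, b = thread_a(), thread_b(observed)
    n = len(a) + len(b)
    for pos in combinations(range(n), len(a)):  # A's slots, in order
        if strong and pos != tuple(range(pos[0], pos[0] + len(a))):
            continue  # skip schedules that interleave into A's block
        mem, ai, bi = {'x': 0, 'y': 0}, iter(a), iter(b)
        for i in range(n):
            op = next(ai) if i in pos else next(bi)
            op(mem)
    return observed
```

Under the strong rule print(y) can only observe 0 or 3; under weak isolation it can also observe 1 and 2 (though still not 666).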

Slide 28: It’s worse

Privatization: One of several examples where lock code works and weak-isolation transactions do not
(Example adapted from [Rajwar/Larus] and [Hudson et al])

    initially ptr.f == ptr.g

    // Thread 1             // Thread 2
    sync(lk) {              sync(lk) {
      r = ptr;                ++ptr.f;
      ptr = new C();          ++ptr.g;
    }                       }
    assert(r.f==r.g);

[diagram: ptr → object with fields f, g]

Slide 29: It’s worse

(Almost?) every published weak-isolation system lets the assertion fail! Eager-update or lazy-update

    initially ptr.f == ptr.g

    // Thread 1             // Thread 2
    atomic {                atomic {
      r = ptr;                ++ptr.f;
      ptr = new C();          ++ptr.g;
    }                       }
    assert(r.f==r.g);

[diagram: ptr → object with fields f, g]

Slide 30: The need for semantics

Which is wrong: the privatization code or the transactions implementation?
What other “gotchas” exist?
– What language/coding restrictions suffice to avoid them?
Can programmers correctly use transactions without understanding their implementation?
– What makes an implementation correct?

Only rigorous source-level semantics can answer

Slides 31-34: What we did

Formal operational semantics for a collection of similar languages that have different isolation properties

Program state allows at most one live transaction:

    a;H;e1 || … || en  →  a';H';e1' || … || en'

Multiple languages, including:
1. “Strong”: If one thread is in a transaction, no other thread may use shared memory or enter a transaction
2. “Weak-1-lock”: If one thread is in a transaction, no other thread may enter a transaction
3. “Weak-undo”: Like weak-1-lock, plus a transaction may abort at any point, undoing its changes and restarting

Slide 35: A family

Now we have a family of languages:
“Strong”: … other threads can’t use memory or start transactions
“Weak-1-lock”: … other threads can’t start transactions
“Weak-undo”: like weak-1-lock, plus undo/restart

So we can study how family members differ and the conditions under which they are the same

Oh, and we have a kooky, ooky name: The AtomsFamily

Slide 36: Easy Theorems

Theorem: Every program behavior in strong is possible in weak-1-lock
Theorem: weak-1-lock allows behaviors strong does not
Theorem: Every program behavior in weak-1-lock is possible in weak-undo
Theorem (slightly more surprising): weak-undo allows behavior weak-1-lock does not

Slide 37: Hard theorems

Consider a (formally defined) type system that ensures any mutable memory is either:
– Only accessed in transactions
– Only accessed outside transactions

Theorem: If a program type-checks, it has the same possible behaviors under strong and weak-1-lock
Theorem: If a program type-checks, it has the same possible behaviors under weak-1-lock and weak-undo

Slide 38: A few months in 1 picture

[diagram relating strong, weak-1-lock, weak-undo, and strong-undo]

Slide 39: Lesson

Weak isolation has surprising behavior; formal semantics lets us model the behavior and prove sufficient conditions for avoiding it

In other words: With a (too) restrictive type system, we get the semantics of strong and the performance of weak

Slide 40: Today, part 1

Language design, semantics:
1. Motivation: Example + the GC analogy [OOPSLA07]
2. Semantics: strong vs. weak isolation [PLDI07]* [POPL08]
3. Interaction w/ other features [ICFP05] [SCHEME07] [POPL08]

* Joint work with Intel PSL

Slide 41: What if…

Real languages need precise semantics for all feature interactions. For example:
Native calls [Ringenburg]
Exceptions [Ringenburg, Kimball]
First-class continuations [Kimball]
Thread creation [Moore]
Java-style class-loading [Hindman]

Open: Bad interactions with the memory-consistency model
See joint work with Manson and Pugh [MSPC06]

Slide 42: Today, part 2

Implementation:
4. On one core [ICFP05] [SCHEME07] [Michael Ringenburg, Aaron Kimball]
5. Static optimizations for strong isolation [PLDI07]*
6. Multithreaded transactions

* Joint work with Intel PSL

Slide 43: Interleaved execution

The “uniprocessor (and then some)” assumption: Threads communicating via shared memory don't execute in “true parallel”

Important special case:
Uniprocessors still exist
Many language implementations assume it (e.g., OCaml, Scheme48)
Multicore may assign one core to an application

Slide 44: Implementing atomic

Key pieces:
Execution of an atomic block logs writes
If the scheduler pre-empts during atomic, roll back the thread
Duplicate code or bytecode-interpreter dispatch so non-atomic code is not slowed by logging

Slide 45: Logging example

    int x=0, y=0;
    void f() { int z = y+1; x = z; }
    void g() { y = x+1; }
    void h() {
      atomic {
        y = 2;
        f();
        g();
      }
    }

Executing the atomic block builds a LIFO log of old values: y:0, z:?, x:0, y:2

Rollback on pre-emption:
Pop the log, doing assignments
Set the program counter and stack to the beginning of atomic

On exit from atomic: drop the log
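A minimal sketch of this rollback scheme (hypothetical class and names, with a dictionary standing in for memory): each transactional write pushes the location's old value, pre-emption pops the log in LIFO order, and normal exit just drops the log.

```python
# Hypothetical sketch of the slide's undo-log scheme, not AtomCaml's
# actual implementation.
class UndoLog:
    def __init__(self, mem):
        self.mem = mem   # shared memory: name -> value
        self.log = []    # LIFO list of (name, old_value)

    def write(self, name, value):
        # Log the old value before every transactional write.
        self.log.append((name, self.mem.get(name)))
        self.mem[name] = value

    def rollback(self):
        # Pop in LIFO order, restoring each old value.
        while self.log:
            name, old = self.log.pop()
            if old is None:
                del self.mem[name]  # location created inside the block
            else:
                self.mem[name] = old

    def commit(self):
        self.log.clear()  # on normal exit from atomic, drop the log

mem = {'x': 0, 'y': 0}
txn = UndoLog(mem)
txn.write('y', 2)                  # logs y:0, like h() on the slide
txn.write('z', 1)                  # logs z:? (no prior value)
txn.write('x', txn.mem['y'] + 1)   # logs x:0
txn.rollback()                     # simulate a pre-emption
```

After the rollback, `mem` is back to its initial state and `z` is gone.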

Slide 46: Logging efficiency

Keep the log small:
Don’t log reads (key uniprocessor advantage)
Need not log memory allocated after atomic was entered
– Particularly initialization writes
Need not log an address more than once
– To keep logging fast, switch from an array to a hashtable after “many” (50) log entries
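The array-to-hashtable switch can be sketched like this (assumed threshold and names; real systems log machine addresses, not Python keys): a plain list with a linear duplicate scan while the log is small, then a dict index so "already logged?" stays fast.

```python
SWITCH_THRESHOLD = 50  # "many" entries, per the slide

class AdaptiveLog:
    def __init__(self):
        self.entries = []  # LIFO list of (addr, old_value)
        self.index = None  # addr -> entry position, built when large

    def log(self, addr, old):
        if self.index is None:
            # Small log: a linear scan for duplicates is cheap.
            if any(a == addr for a, _ in self.entries):
                return  # address already logged once; nothing to do
            self.entries.append((addr, old))
            if len(self.entries) > SWITCH_THRESHOLD:
                # Switch representation: build the hashtable index.
                self.index = {a: i for i, (a, _) in enumerate(self.entries)}
        else:
            # Large log: constant-time duplicate check via the index.
            if addr not in self.index:
                self.index[addr] = len(self.entries)
                self.entries.append((addr, old))

log = AdaptiveLog()
for a in range(60):   # 60 distinct addresses forces the switch
    log.log(a, 0)
log.log(5, 99)        # duplicate: must not be logged again
```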

Slide 47: Evaluation

Strong isolation on uniprocessors at little cost
– See papers for “in the noise” performance

Memory-access overhead:

                 not in atomic    in atomic
    read         none             none
    write        none             log (2 more writes)

Recall initialization writes need not be logged
Rare rollback

Slide 48: Lesson

Implementing transactions in software for a uniprocessor is so efficient it deserves special-casing

Note: Don’t run other multicore services on a uniprocessor either

Slide 49: Today, part 2

Implementation:
4. On one core [ICFP05] [SCHEME07]
5. Static optimizations for strong isolation [PLDI07]* [Steven Balensiefer, Benjamin Hindman]
6. Multithreaded transactions

* Joint work with Intel PSL

Slide 50: Strong performance problem

Recall the uniprocessor overhead:

                 not in atomic    in atomic
    read         none             none
    write        none             some

With parallelism:

                 not in atomic    in atomic
    read         none iff weak    some
    write        none iff weak    some

Slide 51: Optimizing away strong’s cost

Thread-local
Immutable
Not accessed in transaction

New: static analysis for not-accessed-in-transaction

Slide 52: Not-accessed-in-transaction

Revisit the overhead of not-in-atomic for strong isolation, given information about how data is used in atomic
Yet another client of pointer analysis

    not in atomic:
                 no atomic access   no atomic write   atomic write
    read         none               none              some
    write        none               some              some

Slide 53: Analysis details

Whole-program, context-insensitive, flow-insensitive
– Scalable, but needs the whole program
Can be done before method duplication
– Keep lazy code generation without losing precision

Given pointer information, just two more passes:
1. How is an “abstract object” accessed transactionally?
2. What “abstract objects” might a non-transactional access use?
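Those two passes might look like the following sketch (hypothetical data representation; the real implementation works over compiler IR and a points-to analysis such as Paddle): pass 1 records how each abstract object is accessed inside transactions, and pass 2 keeps strong-isolation barriers only on non-transactional accesses that can alias transactionally-used data.

```python
# Assumed inputs: `accesses` is a list of (site, kind, in_txn) tuples,
# and `points_to` maps each access site to its set of abstract objects.
def classify(accesses, points_to):
    # Pass 1: how is each abstract object accessed transactionally?
    tx_read, tx_written = set(), set()
    for site, kind, in_txn in accesses:
        if in_txn:
            for obj in points_to[site]:
                (tx_written if kind == 'write' else tx_read).add(obj)
    # Pass 2: which non-transactional accesses still need barriers?
    # A read needs one only if some aliased object is written in a
    # transaction; a write needs one if aliased data gets any
    # transactional access (matching the slide-52 table).
    needs_barrier = set()
    for site, kind, in_txn in accesses:
        if in_txn:
            continue
        objs = points_to[site]
        if kind == 'read' and objs & tx_written:
            needs_barrier.add(site)
        if kind == 'write' and objs & (tx_written | tx_read):
            needs_barrier.add(site)
    return needs_barrier

accesses = [('s1', 'write', True),   # transactional write through s1
            ('s2', 'read', False),   # non-tx read that aliases it
            ('s3', 'write', False),  # non-tx write to unrelated data
            ('s4', 'read', False)]
points_to = {'s1': {'A'}, 's2': {'A'}, 's3': {'B'}, 's4': {'B'}}
```

Only `s2` keeps its barrier; the back-end can omit the instrumentation at `s3` and `s4`.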

Slide 54: Collaborative effort

UW: static analysis using pointer analysis
– Via Paddle/Soot from McGill
Intel PSL: high-performance STM
– Via compiler and run-time

The static analysis annotates bytecodes, so the compiler back-end knows what it can omit

Slide 55: Benchmarks

Tsp [performance chart]

Slide 56: Benchmarks

JBB [performance chart]

Slide 57: Lesson

The cost of strong isolation is in the nontransactional code; compiler optimizations help a lot

Slide 58: Today, part 2

Implementation:
4. On one core [ICFP05] [SCHEME07]
5. Static optimizations for strong isolation [PLDI07]*
6. Multithreaded transactions [Aaron Kimball] (caveat: ongoing work)

* Joint work with Intel PSL

Slide 59: Multithreaded Transactions

Most implementations (hw or sw) assume code inside a transaction is single-threaded
– But isolation and parallelism are orthogonal
– And Amdahl’s Law will strike with manycore

Language design: need nested transactions
Currently modifying Microsoft’s Bartok STM
– Key: correct logging without sacrificing parallelism

Work perhaps ahead of the technology curve
– Like concurrent garbage collection

Slide 60: Credit

Semantics: Katherine Moore
Uniprocessor: Michael Ringenburg, Aaron Kimball
Optimizations: Steven Balensiefer, Ben Hindman
Implementing multithreaded transactions: Aaron Kimball
Memory-model issues: Jeremy Manson, Bill Pugh
High-performance strong STM: Tatiana Shpeisman, Vijay Menon, Ali-Reza Adl-Tabatabai, Richard Hudson, Bratin Saha

wasp.cs.washington.edu

Slide 61: Please read

High-Level Small-Step Operational Semantics for Transactions (POPL08). Katherine F. Moore and Dan Grossman
The Transactional Memory / Garbage Collection Analogy (OOPSLA07). Dan Grossman
Software Transactions Meet First-Class Continuations (SCHEME07). Aaron Kimball and Dan Grossman
Enforcing Isolation and Ordering in STM (PLDI07). Tatiana Shpeisman, Vijay Menon, Ali-Reza Adl-Tabatabai, Steve Balensiefer, Dan Grossman, Richard Hudson, Katherine F. Moore, and Bratin Saha
Atomicity via Source-to-Source Translation (MSPC06). Benjamin Hindman and Dan Grossman
What Do High-Level Memory Models Mean for Transactions? (MSPC06). Dan Grossman, Jeremy Manson, and William Pugh
AtomCaml: First-Class Atomicity via Rollback (ICFP05). Michael F. Ringenburg and Dan Grossman

Slide 62: Lessons

1. Transactions: the garbage collection of shared memory
2. Semantics lets us prove sufficient conditions for avoiding weak-isolation anomalies
3. Must define interaction with features like exceptions
4. Uniprocessor implementations are worth special-casing
5. Compiler optimizations help remove the overhead in nontransactional code resulting from strong isolation
6. Amdahl’s Law suggests multithreaded transactions, which we believe we can make scalable