CS510 Concurrent Systems Jonathan Walpole. A Lock-Free Multiprocessor OS Kernel.

Presentation transcript:

CS510 Concurrent Systems Jonathan Walpole

A Lock-Free Multiprocessor OS Kernel

The Synthesis Kernel

A research project at Columbia University
- Synthesis V.0
  - Uniprocessor (Motorola 68020)
  - No virtual memory
- Synthesis V.1
  - Dual 68030s
  - Virtual memory, threads, etc.
  - Lock-free kernel

Locking

Why do kernels normally use locks?
Locks support a concurrent programming style based on mutual exclusion:
- Acquire lock on entry to critical sections
- Release lock on exit
- Block or spin if lock is held
- Only one thread at a time executes the critical section
Locks prevent concurrent access and enable sequential reasoning about critical section code. (A minimal spinlock sketch follows below.)
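
For concreteness, here is what the "block or spin" primitive might look like in C11. This is a minimal test-and-set spinlock sketch, not code from the Synthesis paper; the names spin_lock/spin_unlock are just the conventional ones, chosen to match the comparison slide near the end.

    #include <stdatomic.h>

    /* Minimal test-and-set spinlock: spin until the flag was clear,
       thereby acquiring it; clearing the flag releases the lock. */
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(atomic_flag *l)
    {
        while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
            ;  /* spin: someone else holds the lock */
    }

    void spin_unlock(atomic_flag *l)
    {
        atomic_flag_clear_explicit(l, memory_order_release);
    }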

Why Not Use Locking?

Granularity decisions
- Simplicity vs. performance
- Increasingly poor performance (superscalar CPUs)
Complicates composition
- Need to know which locks I'm holding before calling a function
- Need to know whether it's safe to call while holding those locks
Risk of deadlock
Propagates thread failures to other threads
- What if I crash while holding a lock?

Is There an Alternative?

Use lock-free, "optimistic" synchronization:
- Execute the critical section unconstrained, and check at the end to see if you were the only one
- If so, continue. If not, roll back and retry
Synthesis uses no locks at all!
Goal: show that lock-free synchronization is...
- Sufficient for all OS synchronization needs
- Practical
- High performance

Locking is Pessimistic

Murphy's law:
- "If it can go wrong, it will..."
In concurrent programming:
- "If we can have a race condition, we will..."
- "If another thread could mess us up, it will..."
Solution:
- Hide the resources behind locked doors
- Make everyone wait until we're done
- That is... if there was anyone at all
- We pay the same cost either way

Optimistic Synchronization

The common case is often little or no contention
- Or at least it should be!
- Do we really need to shut out the whole world?
- Why not proceed optimistically and only incur cost if we encounter contention?
If there's little contention, there's no starvation
- So we don't need to be "wait-free", which guarantees no starvation
- Lock-free is easier and cheaper than wait-free
Small critical sections really help performance.

How Does It Work?

Copy
- Write down any state we need in order to retry
Do the work
- Perform the computation
Atomically "test and commit" or retry
- Compare saved assumptions with the actual state of the world
- If different, undo the work and start over with the new state
- If the preconditions still hold, commit the results and continue
- This is where the work becomes visible to the world (ideally)

Example – Stack Pop

    Pop() {
    retry:
        old_SP = SP;           /* copy: locals won't change */
        new_SP = old_SP + 1;   /* do the work */
        elem = *old_SP;
        /* SP is global: it may change at any time.
           CAS is an "atomic" read-modify-write instruction. */
        if (CAS(old_SP, new_SP, &SP) == FAIL)
            goto retry;        /* loop until the commit succeeds */
        return elem;
    }

CAS

CAS – single-word Compare-and-Swap
- An atomic read-modify-write instruction
- Semantics of the single atomic instruction are:

    CAS(copy, update, mem_addr) {
        if (*mem_addr == copy) {
            *mem_addr = update;
            return SUCCESS;
        } else
            return FAIL;
    }
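
Modern hardware exposes the same primitive (for example, x86's cmpxchg) through C11's <stdatomic.h>. As a point of comparison, here is the pop loop written against that standard API. This is an illustrative sketch, not the Synthesis code; it omits empty-stack handling and the ABA issue discussed below.

    #include <stdatomic.h>
    #include <stdint.h>

    /* Assumed layout, matching the pseudocode: SP points at the
       current top element and popping moves it upward. */
    static _Atomic(intptr_t *) SP;

    intptr_t pop(void)
    {
        intptr_t *old_sp, *new_sp;
        intptr_t elem;
        do {
            old_sp = atomic_load(&SP);   /* copy */
            new_sp = old_sp + 1;         /* do the work */
            elem = *old_sp;
            /* commit: succeeds only if SP still equals old_sp */
        } while (!atomic_compare_exchange_strong(&SP, &old_sp, new_sp));
        return elem;
    }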

What Made It Work?

It works because we can atomically commit the new stack pointer value and compare the old stack pointer with the one at commit time. This allows us to verify that no other thread has accessed the stack concurrently with our operation
- i.e., since we took the copy
- Well, at least we know the address in the stack pointer is the same as it was when we started
- Does this guarantee there was no concurrent activity?
- Does it matter?
- We have to be careful!
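
The hazard these questions point at is usually called the ABA problem: SP can return to its old value even though the stack changed in between. One possible interleaving, written out as a comment (a hypothetical scenario, not from the slides):

    /* Thread 1: old_SP = SP; elem = *old_SP;  (reads value v)
     * Thread 2: Pop()     -- takes v, SP moves up
     * Thread 2: Push(x)   -- writes x where v was, SP moves back down
     * Thread 1: CAS succeeds, since SP equals old_SP again,
     *           and returns the stale value v -- x is lost.
     * A common fix is to pair the pointer with a version tag that
     * changes on every update (see the CAS2 discussion below). */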

Stack Push

    Push(elem) {
    retry:
        old_SP = SP;            /* copy */
        new_SP = old_SP - 1;    /* do the work */
        old_val = *new_SP;
        /* Note: this is a double compare-and-swap!  It is needed to
           atomically update both the new item and the new stack
           pointer.  (The compare on old_val is unnecessary.) */
        if (CAS2(old_SP, old_val, new_SP, elem, &SP, new_SP) == FAIL)
            goto retry;
    }

CAS2

CAS2 = double compare-and-swap
- Sometimes referred to as DCAS

    CAS2(copy1, copy2, update1, update2, addr1, addr2) {
        if (*addr1 == copy1 && *addr2 == copy2) {
            *addr1 = update1;
            *addr2 = update2;
            return SUCCESS;
        } else
            return FAIL;
    }
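
The 68030 that Synthesis V.1 ran on provides CAS2 in hardware, but most modern ISAs offer only single-word CAS (the Conclusions slide returns to this). A common workaround packs the two words into one word that a single CAS covers. The sketch below uses illustrative assumptions (a 32-bit index plus a 32-bit version tag in one 64-bit word); the versioning also defends against the ABA interleaving shown earlier.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical packing: top-of-stack index in the low 32 bits,
       a version tag in the high 32 bits.  The tag changes on every
       update, so an index that returns to an old value can no longer
       produce a false CAS success. */
    static _Atomic uint64_t top;  /* (tag << 32) | index */

    static bool cas2_emulated(uint32_t old_idx, uint32_t old_tag,
                              uint32_t new_idx)
    {
        uint64_t expected = ((uint64_t)old_tag << 32) | old_idx;
        uint64_t desired = ((uint64_t)(old_tag + 1) << 32) | new_idx;
        return atomic_compare_exchange_strong(&top, &expected, desired);
    }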

Synchronization in Synthesis

Saved state is only one or two words
Commit is done via
- Compare-and-Swap (CAS), or
- Double-Compare-and-Swap (CAS2 or DCAS)
Can we really do everything in only two words?
- Every synchronization problem in the Synthesis kernel is reduced to only needing to atomically touch two words at a time!
- Requires some very clever kernel architecture

Approach

Build data structures that work concurrently
- Stacks
- Queues (array-based, to avoid allocations)
- Linked lists
Then build the OS around these data structures. Concurrency is a first-class concern. (A sketch of one such queue follows.)
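
One reason array-based queues are attractive: when a queue has exactly one producer and one consumer, no atomic read-modify-write is needed at all, since each side writes only its own index. The C11 sketch below shows that idea; it is illustrative (the names, the fixed capacity, and the power-of-two size are all assumptions), not the kernel's actual queue code.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define QSIZE 64  /* capacity; a power of two, chosen arbitrarily */

    /* Single-producer/single-consumer ring: the producer writes only
       head, the consumer writes only tail, so release/acquire loads
       and stores suffice. */
    struct spsc_queue {
        void *slots[QSIZE];
        _Atomic size_t head;  /* next slot to fill  (producer-owned) */
        _Atomic size_t tail;  /* next slot to drain (consumer-owned) */
    };

    bool enqueue(struct spsc_queue *q, void *item)
    {
        size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
        size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
        if (h - t == QSIZE)
            return false;  /* full */
        q->slots[h % QSIZE] = item;
        atomic_store_explicit(&q->head, h + 1, memory_order_release);
        return true;
    }

    bool dequeue(struct spsc_queue *q, void **item)
    {
        size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
        size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
        if (t == h)
            return false;  /* empty */
        *item = q->slots[t % QSIZE];
        atomic_store_explicit(&q->tail, t + 1, memory_order_release);
        return true;
    }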

It's Trickier Than It Seems

List operations show insert and delete at the head
- This is the easy case
- What about insert and delete of interior nodes?
- Next pointers of deletable nodes are not safe to traverse, even the first time!
- Need reference counts and DCAS to atomically compare and update the count and pointer values (sketched below)
- This is expensive, so we may choose to defer deletes instead (more on this later in the course)
Specialized list and queue implementations can reduce the overheads.
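
To see what "reference counts and DCAS" might mean in practice, here is one hypothetical shape for it, written in the same pseudocode style as the slides. It assumes nodes are type-stable (never returned to the general allocator), since otherwise even reading n->refcnt could touch freed memory; the names acquire_head, head, and refcnt are illustrative.

    /* Atomically (1) re-check that list->head still points at n and
       (2) increment n's reference count, so n cannot be reclaimed
       between reading the pointer and taking the reference. */
    acquire_head(list) {
    retry:
        n = list->head;            /* copy */
        if (n == NULL) return NULL;
        c = n->refcnt;
        /* commit: both the pointer and the count must be unchanged */
        if (CAS2(n, c, n, c + 1, &list->head, &n->refcnt) == FAIL)
            goto retry;
        return n;
    }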

The Fall-Back Position

If you can't reduce the work such that it requires atomic updates to two or fewer words:
- Create a single server thread and do the work sequentially on a single CPU
- Why is this faster than letting multiple CPUs try to do it concurrently?
Callers pack the requested operation into a message (sketched below)
- Send it to the server (using lock-free queues!)
- Wait for a response/callback/...
- The queue effectively serializes the operations
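
As a concrete (and entirely hypothetical) shape for this, the server loop below drains the SPSC queue sketched earlier; callers enqueue a request and spin or sleep on its done flag. None of these names come from the Synthesis paper.

    #include <stdatomic.h>

    /* Request format the caller packs: which operation, its
       arguments, and a flag the server raises when finished. */
    struct request {
        int op;            /* operation code */
        void *args;        /* operation arguments */
        _Atomic int done;  /* server sets to 1 when complete */
    };

    void server_loop(struct spsc_queue *q)
    {
        void *msg;
        for (;;) {
            if (!dequeue(q, &msg))
                continue;  /* or block/yield until work arrives */
            struct request *r = msg;
            /* ... perform r->op sequentially; only this thread
               touches the protected state, so no locks are needed ... */
            atomic_store(&r->done, 1);
        }
    }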

Lock vs Lock-Free Critical Sections

    Lock_based_Pop() {
        spin_lock(&lock);
        elem = *SP;
        SP = SP + 1;
        spin_unlock(&lock);
        return elem;
    }

    Lock_free_Pop() {
    retry:
        old_SP = SP;
        new_SP = old_SP + 1;
        elem = *old_SP;
        if (CAS(old_SP, new_SP, &SP) == FAIL)
            goto retry;
        return elem;
    }

Conclusions

This is really intriguing! It's possible to build an entire OS without locks! But do you really want to?
- Does it add or remove complexity?
- What if hardware only gives you CAS and no DCAS?
- What if critical sections are large or long-lived?
- What if contention is high?
- What if we can't undo the work?
- ... ?