Threads and Synchronization


Threads and Synchronization Jeff Chase Duke University

Portrait of a thread. A thread has a name/status record, saved machine state, and a stack (growing from high toward low addresses; 0xdeadbeef marks the unused region in the sketch). Thread operations, a rough sketch: t = create(); t.start(proc, arg); t.alert(); (optional) result = t.join(). Self operations, a rough sketch: exit(result); t = self(); setdata(ptr); ptr = selfdata(); alertwait(); (optional). Details vary.
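The create/start/join operations sketched above map closely onto Python's threading module. A minimal illustration (the names proc and run_thread are illustrative, not from the slides; Python's join() returns None, so the result is passed out-of-band):

```python
import threading

def run_thread(arg):
    """Sketch of: t = create(); t.start(proc, arg); result = t.join()."""
    out = []

    def proc(a):
        out.append(a * 2)    # the thread's work; result returned via shared list

    t = threading.Thread(target=proc, args=(arg,))  # create the thread
    t.start()                                       # start(proc, arg)
    t.join()                                        # wait for the thread to exit
    return out[0]
```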

A thread: closer look. At user level: a TCB in the thread library, a user stack, and a thread API (e.g., pthreads or Java threads: threads, mutexes, condition variables, ...). The kernel interface for thread libraries (not for end users) gives a 1-to-1 mapping of user threads to dedicated kernel-supported "vessels", e.g., Linux CLONE_THREAD + "futex". Each kernel TCB has a kernel stack and saved context. (Some older user-level "green" thread libraries may multiplex multiple user threads over each "vessel".) Threads can enter the kernel (fault or trap) and block, so they need a k-stack.

Kernel-based vs. user-level threads, take 2. A thread system schedules threads over a pool of "logical cores" or "vessels" for threads to run in. The kernel provides the vessels: they are either classic processes or "lightweight" processes, e.g., via CLONE_THREAD. The kernel scheduler schedules/multiplexes vessels on core slots: it selects at most one vessel to occupy each core slot at any given time, and each vessel occupies at most one core slot at any given time. Vessels have k-stacks and can block independently in the kernel. A "kernel-based thread system" maintains a stable 1-1 mapping of threads to dedicated vessels. There is no user-level thread scheduler, since the mapping is stable. For simplicity we just call each (thread, vessel) pair a "thread". A thread library can always choose to multiplex N threads over M vessels. It used to be necessary, but it's not anymore. It causes problems with I/O because the threads cannot block independently in the kernel and the kernel does not know about the threads. There might still be performance-related reasons to do it in some scenarios, but we IGNORE THAT CASE from now on.

Thread models illustrated. The kernel scheduler (not the library) decides which thread/vessel to run next, with a 1-to-1 mapping of user threads to dedicated kernel-supported "vessels". Threads/vessels block via kernel syscalls: they block in the kernel, not in user space, and each has a kernel stack, so they can block independently. The syscall interface for "vessels" is the foundation for thread API libraries; the vessels might be called "threads" or "lightweight processes". Optional add-on: a library that multiplexes N user-level threads over M kernel thread vessels (N > M), with a ready list and a scheduler loop: while(1) { t = get next ready thread; scheduler->Run(t); }

Andrew Birrell

Synchronization. The scheduler (and the machine) select the execution order of threads. Each thread executes a sequence of instructions, but their sequences may be arbitrarily interleaved, e.g., from the point of view of loads/stores on memory. Each possible execution order is a schedule. It is the program's responsibility to exclude schedules that lead to incorrect behavior. The programmer has some tools to do this, and we must use those tools correctly. This is called synchronization or concurrency control.

Resource Trajectory Graphs. Resource trajectory graphs (RTGs) depict the "random walk" through the space of possible program states (Sm, Sn, So, ...). An RTG for N threads is N-dimensional; thread i advances along axis i. Each point represents one state in the set of all possible system states: the cross-product of the possible states of all threads in the system. (But not all states in the cross-product are legally reachable.)

Resource Trajectory Graphs. This RTG depicts a schedule within the space of possible schedules for a simple program of two threads sharing one core. Blue advances along the y-axis; purple advances along the x-axis. Every schedule starts at the origin and ends at EXIT. The diagonal is an idealized parallel execution (two cores). The scheduler chooses the path (schedule, event order, or interleaving) via context switches. From the point of view of the program, the chosen path is nondeterministic.

Interleaving matters. Two threads execute this code section; x is a shared variable:

load x, R2      ; load global variable x
add R2, 1, R2   ; increment: x = x + 1
store R2, x     ; store global variable x

In an interleaved schedule (load, load, add, add, store, store), x is incremented only once: the last writer wins. The program breaks under this schedule. This bug is a race.
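The broken interleaving can be reproduced deterministically by simulating the scheduler's choices. A sketch (run_schedule and the schedule lists are illustrative, not from the slides):

```python
def run_schedule(schedule):
    """Simulate two threads doing load/add/store on shared x.
    schedule is a list of (thread_id, op) pairs chosen by the 'scheduler'."""
    x = 0                       # shared global variable
    regs = {0: 0, 1: 0}         # each thread's private register R2
    for tid, op in schedule:
        if op == "load":
            regs[tid] = x       # load x, R2
        elif op == "add":
            regs[tid] += 1      # add R2, 1, R2
        elif op == "store":
            x = regs[tid]       # store R2, x
    return x

# Serialized schedule: each increment takes effect, so x ends at 2.
serialized = [(0, "load"), (0, "add"), (0, "store"),
              (1, "load"), (1, "add"), (1, "store")]

# Interleaved schedule: both threads load x == 0; last writer wins; x ends at 1.
interleaved = [(0, "load"), (1, "load"), (0, "add"),
               (1, "add"), (0, "store"), (1, "store")]
```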

This is not a game. But we can think of it as a game. You write your program. The game begins when you submit your program to your adversary: the scheduler. The scheduler chooses all the moves while you watch. Your program may constrain the set of legal moves. The scheduler searches for a legal schedule that breaks your program. If it succeeds, then you lose (your program has a race). You win by not losing.

A picture of a race. Two threads each execute x = x + 1 (load, add, store). Events in different threads may be interleaved, and each schedule may be different. These code sections are concurrent in this execution: no ordering is defined among them. They are conflicting: they access a shared variable (global or heap), and at least one access is a write. An execution with concurrent conflicting accesses has a race: the result depends on the schedule.

Possible interleavings? Schedules 1 and 2 interleave the two threads' load/add/store sequences: the increment takes effect only once, so the program breaks (marked X). Schedules 3 and 4 run one thread's load/add/store sequence entirely before the other's, so both increments take effect (correct).

Critical sections. Schedules 1 and 2 are concurrent/interleaved: each is a race bug (marked X), with x = x + 1 taking effect only once. The serialized schedules (one thread's sequence after the other's) are correct. This code sequence is a critical section: the program fails if more than one thread executes in the critical section concurrently; that constitutes a race, a bug.

The need for mutual exclusion. The program may fail if the schedule enters the grey box (i.e., if two threads execute the critical section concurrently): the result is indeterminate (x = ???). The two threads must not both operate on the shared global x "at the same time".

A Lock or Mutex. Locks are the basic tool for enforcing mutual exclusion in conflicting critical sections. A lock is an object, a data item in memory, with two API methods: Acquire and Release, also called Lock() and Unlock(). Threads pair calls to Acquire and Release: Acquire upon entering a critical section, Release upon leaving it. Between Acquire and Release, the thread holds the lock. Acquire does not pass until any previous holder releases. Waiting locks can spin (a spinlock) or block (a mutex).

Definition of a lock (mutex) Acquire + release ops on L are strictly paired. After acquire completes, the caller holds (owns) the lock L until the matching release. Acquire + release pairs on each L are ordered. Total order: each lock L has at most one holder at any given time. That property is mutual exclusion; L is a mutex.

Locking a critical section.

mx->Acquire();
x = x + 1;
mx->Release();

Holding a shared mutex prevents competing threads from entering a critical section. If the critical section code acquires the mutex, then its execution is serialized and atomic: only one thread runs it at a time (schedules 3 and 4).
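The same pattern in Python, using threading.Lock as the mutex. A sketch (locked_increments is an illustrative name): with the lock held around x = x + 1, the final count is exact; without it, the load/add/store sequences could interleave and lose updates.

```python
import threading

def locked_increments(nthreads=2, n=100_000):
    x = 0
    mx = threading.Lock()

    def worker():
        nonlocal x
        for _ in range(n):
            with mx:           # mx->Acquire() ... mx->Release()
                x = x + 1      # critical section, now serialized/atomic

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```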

Portrait of a Lock in Motion. The program may fail if it enters the grey box. A lock (mutex) prevents the schedule from ever entering the grey box, ever: both threads would have to hold the same lock at the same time, and locks don't allow that.

Handing off a lock. First I go: release. Then you go: acquire. The handoff serializes the two threads' critical sections (one after the other): the nth release is followed by the (n+1)th acquire.

A peek at some deep tech. An execution schedule defines a partial order of program events. The ordering relation (<) is called happens-before. Two events are concurrent if neither happens-before the other: they might execute in some order, but only by luck, and the next schedule may reorder them. Just three rules govern happens-before order: (1) Events within a thread are ordered. (2) Mutex handoff orders events across threads: release #N happens-before acquire #N+1. (3) Happens-before is transitive: if (A < B) and (B < C) then (A < C). Machines may reorder concurrent events, but they always respect happens-before ordering.

How about this? One thread (section A) executes x = x + 1 (load/add/store) with no locking. The other (section B) executes mx->Acquire(); x = x + 1; mx->Release();

How about this? The locking discipline is not followed: purple fails to acquire the lock mx. Or rather: purple accesses the variable x through another program section A that is mutually critical with B, but does not acquire the mutex. A locking scheme is a convention that the entire program must follow.

How about this? One thread (section A) executes lock->Acquire(); x = x + 1; lock->Release(); The other (section B) executes mx->Acquire(); x = x + 1; mx->Release();

How about this? This guy is not acquiring the right lock. Or whatever: they're not using the same lock, and that's what matters. A locking scheme is a convention that the entire program must follow.

Mutual exclusion in Java. Mutexes are built in to every Java object; there are no separate classes. Every Java object is/has a monitor. At most one thread may "own" a monitor at any given time. A thread becomes the owner of an object's monitor by executing an object method declared as synchronized, or by executing a block that is synchronized on the object.

public synchronized void increment() { x = x + 1; }

public void increment() { synchronized(this) { x = x + 1; } }

Roots: monitors. A monitor is a module in which execution is serialized. A module is a set of procedures (P1, P2, P3, P4, ...) with some private state. [Brinch Hansen 1973] [C.A.R. Hoare 1974] At most one thread runs in the monitor at a time; other threads wait (blocked) until the monitor is free. Java synchronized just allows finer control over the entry/exit points. Also, each Java object is its own "module": objects of a Java class share the methods of the class but have private state and a private monitor.

Monitors and mutexes are "equivalent". Entry to a monitor (e.g., a Java synchronized block) is equivalent to Acquire of an associated mutex: lock on entry. Exit of a monitor is equivalent to Release: unlock on exit (or at least "return the key"...). Note: exit/release is implicit and automatic if the thread exits monitored code by a Java exception. Much less error-prone than explicit release.

Monitors and mutexes are “equivalent” Well: mutexes are more flexible because we can choose which mutex controls a given piece of state. E.g., in Java we can use one object’s monitor to control access to state in some other object. Perfectly legal! So “monitors” in Java are more properly thought of as mutexes. Caution: this flexibility is also more dangerous! It violates modularity: can code “know” what locks are held by the thread that is executing it? Nested locks may cause deadlock (later). Keep your locking scheme simple and local! Java ensures that each Acquire/Release pair (synchronized block) is contained within a method, which is good practice.

Using monitors/mutexes. Each monitor/mutex protects specific data structures (state) in the program. Threads hold the mutex when operating on that state. The state is consistent iff certain well-defined invariant conditions are true. A condition is a logical predicate over the state. Example invariant condition: suppose the state has a doubly linked list. Then for any element e, either e.next is null or e.next.prev == e. Threads hold the mutex when transitioning the structures from one consistent state to another, and restore the invariants before releasing the mutex.
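A sketch of the doubly linked list example in Python (DList, push_front, and invariant_holds are illustrative names, not from the slides). The mutex is held across the whole transition, so the invariant holds whenever the lock is free:

```python
import threading

class DList:
    """Doubly linked list guarded by a mutex.
    Invariant: for any element e, e.next is None or e.next.prev is e."""

    class Node:
        def __init__(self, val):
            self.val, self.prev, self.next = val, None, None

    def __init__(self):
        self.mx = threading.Lock()
        self.head = None

    def push_front(self, val):
        with self.mx:                   # hold the mutex for the whole transition
            n = DList.Node(val)
            n.next = self.head
            if self.head is not None:
                self.head.prev = n      # invariant briefly broken here...
            self.head = n               # ...and restored before release

    def invariant_holds(self):
        with self.mx:
            e = self.head
            while e is not None:
                if e.next is not None and e.next.prev is not e:
                    return False
                e = e.next
            return True
```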

New Problem: Ping-Pong

void PingPong() {
  while(not done) {
    ...
    if (blue) switch to purple;
    if (purple) switch to blue;
  }
}

Ping-Pong with Mutexes?

void PingPong() {
  while(not done) {
    Mx->Acquire();
    ...
    Mx->Release();
  }
}

???

Mutexes don't work for ping-pong. A mutex enforces mutual exclusion, not ordering: nothing forces the two threads to alternate. A thread that releases the mutex may simply reacquire it before the other thread runs.

Monitor wait/signal. We need a way for a thread to wait for some condition to become true, e.g., until another thread runs and/or changes the state somehow. At most one thread runs in the monitor at a time. A thread may wait (sleep) in the monitor, allowing another thread to enter. A thread may signal in the monitor. Signal means: wake one waiting thread, if there is one, else do nothing. The awakened thread returns from its wait.

Condition variables are equivalent. A condition variable (CV) is an object with an API. A CV implements the behavior of monitor conditions; the interface to a CV is wait and signal (also called notify). Every CV is bound to exactly one mutex, which is necessary for safe use of the CV: "holding the mutex" = "in the monitor". A mutex may have any number of CVs bound to it. (But not in Java: only one CV per mutex in Java.) CVs also define a broadcast (notifyAll) primitive: signal all waiters.

Ping-Pong using a condition variable

void PingPong() {
  mx->Acquire();
  while(not done) {
    ...
    cv->Signal();
    cv->Wait();
  }
  mx->Release();
}

Ping-Pong using a condition variable: the timeline alternates between the two threads, each signal followed by a wait (wait, signal, wait, signal, wait, signal).
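The alternation above can be sketched with Python's threading.Condition (a hedged illustration using illustrative names, not the slides' Nachos-style API; in Python the Condition owns its mutex). Note the while loop around wait, anticipating Mesa semantics:

```python
import threading

def ping_pong(rounds):
    cv = threading.Condition()      # CV with its own built-in mutex
    turn = ["ping"]                 # whose turn it is; shared state under cv's lock
    log = []

    def player(me, other):
        for _ in range(rounds):
            with cv:                     # mx->Acquire() ... mx->Release()
                while turn[0] != me:     # loop before you leap
                    cv.wait()            # releases the lock, sleeps, reacquires
                log.append(me)
                turn[0] = other
                cv.notify()              # cv->Signal(): wake the other player

    t1 = threading.Thread(target=player, args=("ping", "pong"))
    t2 = threading.Thread(target=player, args=("pong", "ping"))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return log
```

Unlike the slide's sketch, this version terminates cleanly: each player checks whose turn it is before waiting, so the final signal is never lost.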

Example: Wait/Notify in Java. Every Java object may be treated as a condition variable for threads using its monitor; there is no condition class.

public class PingPong /* extends Object */ {
  public synchronized void PingPong() {
    while(true) {
      notify();
      wait();
    }
  }
}

public class Object {
  void notify();      /* signal */
  void notifyAll();   /* broadcast */
  void wait();
  void wait(long timeout);
}

A thread must own an object's monitor to call wait/notify, else the method raises an IllegalMonitorStateException. wait(timeout) waits until the timeout elapses or another thread notifies.

Using condition variables. In typical use a condition variable is associated with some logical condition or predicate on the state protected by its mutex. E.g., queue is empty, buffer is full, message in the mailbox. Note: CVs are not variables. You can associate them with whatever data you want, i.e., the state protected by the mutex. A caller of CV wait must hold its mutex (be "in the monitor"). This is crucial because it means that a waiter can wait on a logical condition and know that it won't change until the waiter is safely asleep. Otherwise, another thread might change the condition and signal before the waiter is asleep. Signals do not stack! The waiter would sleep forever: the missed wakeup or wake-up waiter problem. The wait releases the mutex to sleep, and reacquires the mutex before returning. But another thread could have beaten the waiter to the mutex and messed with the condition: loop before you leap!

Example: event/request queue. We can implement an event queue with a mutex/CV pair. Protect the event queue data structure itself with the mutex. Workers wait on the CV for the next event if the event queue is empty; signal the CV when a new event arrives. The worker loop: dispatch an event from the incoming queue, handle it (blocking as necessary), and when the handler is complete, return to the worker pool.
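A sketch of the worker-pool queue with one mutex/CV pair (EventQueue and its method names are illustrative, not from the slides):

```python
import threading
from collections import deque

class EventQueue:
    def __init__(self):
        self.mx = threading.Lock()
        self.cv = threading.Condition(self.mx)  # CV bound to exactly one mutex
        self.events = deque()

    def put(self, ev):
        with self.mx:
            self.events.append(ev)
            self.cv.notify()        # signal: wake one waiting worker, if any

    def get(self):
        with self.mx:
            while not self.events:  # loop before you leap: recheck after waking
                self.cv.wait()
            return self.events.popleft()
```

Workers simply loop on get(), handle the event, and come back for the next one; the while loop makes the code correct under Mesa semantics even if several workers are awakened.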

Monitor wait/signal. Design question: when a waiting thread is awakened by signal, must it start running immediately, back in the monitor where it called wait? At most one thread runs in the monitor at a time. Two choices: yes or no. If yes, what happens to the thread that called signal within the monitor? Does it just hang there? They can't both be in the monitor. If no, can't other threads get into the monitor first and change the state, causing the condition to become false again?

Mesa semantics: Just say no. Design question: when a waiting thread is awakened by signal, must it start running immediately, back in the monitor where it called wait? Mesa semantics: no. An awakened waiter gets back in line (ready to re-enter); the signal caller keeps the monitor. So, can't other threads get into the monitor first and change the state, causing the condition to become false again? Yes. So the waiter must recheck the condition: "Loop before you leap".

Alternative: Hoare semantics. As originally defined (Hoare, 1974), monitors chose "yes": Hoare semantics. Signal suspends; the awakened waiter gets the monitor. Monitors with Hoare semantics might be easier to program, somebody might think. Maybe. I suppose. But monitors with Hoare semantics are difficult to implement efficiently on multiprocessors. Lampson and Redell determined this when they built monitors for the Mesa programming language in the 1970s, so they changed the rules: Mesa semantics. Java uses Mesa semantics. Everybody uses Mesa semantics. Hoare semantics are of historical interest only. Loop before you leap!