A Sophomoric Introduction to Shared-Memory Parallelism and Concurrency
Lecture 4: Shared-Memory Concurrency & Mutual Exclusion
Dan Grossman
Last Updated: May 2012
For more information, see

Toward sharing resources (memory)

Have been studying parallel algorithms using fork-join
– Lower span via parallel tasks

Algorithms all had a very simple structure to avoid race conditions
– Each thread had memory “only it accessed”
  Example: array sub-range
– On fork, “loan” some memory to the “forkee” and do not access that memory again until after join on the “forkee”

Strategy won’t work well when:
– Memory accessed by threads is overlapping or unpredictable
– Threads are doing independent tasks needing access to the same resources (rather than implementing the same algorithm)
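A minimal sketch of this discipline (not from the original slides): two threads each sum a disjoint half of an array and write only to their own result slot, so no memory is shared while both run, and the “loaned” memory is read back only after join.

import java.util.Arrays;

class DisjointSum {
  public static void main(String[] args) throws InterruptedException {
    int[] arr = new int[1000];
    Arrays.fill(arr, 1);
    long[] partial = new long[2]; // each thread writes only its own slot
    Thread left = new Thread(() -> {
      for (int i = 0; i < 500; i++) partial[0] += arr[i];
    });
    Thread right = new Thread(() -> {
      for (int i = 500; i < 1000; i++) partial[1] += arr[i];
    });
    left.start(); right.start();
    left.join(); right.join(); // safe to touch the "loaned" memory again
    System.out.println(partial[0] + partial[1]); // 1000
  }
}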

Concurrent Programming

Concurrency: Correctly and efficiently managing access to shared resources from multiple possibly-simultaneous clients

Requires coordination, particularly synchronization to avoid incorrect simultaneous access: make somebody block
– join is not what we want
– Want to block until another thread is “done using what we need” not “completely done executing”

Even correct concurrent applications are usually highly non-deterministic: how threads are scheduled affects what operations from other threads they see when
– non-repeatability complicates testing and debugging

Examples

Multiple threads:
1. Processing different bank-account operations
   – What if 2 threads change the same account at the same time?
2. Using a shared cache of recent files (e.g., hashtable)
   – What if 2 threads insert the same file at the same time?
3. Creating a pipeline (think assembly line) with a queue for handing work to the next thread in sequence
   – What if the enqueuer and dequeuer adjust a circular-array queue at the same time?
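To make example 3 concrete, here is a sketch (not from the slides) of such a pipeline; it sidesteps the enqueue/dequeue race by using java.util.concurrent.ArrayBlockingQueue, whose put and take coordinate the two threads internally:

import java.util.concurrent.ArrayBlockingQueue;

class Pipeline {
  public static void main(String[] args) throws InterruptedException {
    ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
    Thread producer = new Thread(() -> {
      try {
        for (int i = 0; i < 5; i++) queue.put("item " + i); // blocks if full
      } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    });
    Thread consumer = new Thread(() -> {
      try {
        for (int i = 0; i < 5; i++) System.out.println(queue.take()); // blocks if empty
      } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    });
    producer.start(); consumer.start();
    producer.join(); consumer.join();
  }
}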

Why threads?

Unlike parallelism, not about implementing algorithms faster

But threads still useful for:

Code structure for responsiveness
– Example: Respond to GUI events in one thread while another thread is performing an expensive computation

Processor utilization (mask I/O latency)
– If 1 thread “goes to disk,” have something else to do

Failure isolation
– Convenient structure if want to interleave multiple tasks and do not want an exception in one to stop the other
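A minimal responsiveness sketch (not from the slides): the expensive computation runs in a worker thread, leaving the main thread free; join is called only when the result is actually needed.

class Responsive {
  public static void main(String[] args) throws InterruptedException {
    Thread worker = new Thread(() -> {
      long sum = 0;
      for (long i = 0; i < 2_000_000_000L; i++) sum += i; // expensive work
      System.out.println("worker done: " + sum);
    });
    worker.start(); // runs concurrently with main
    System.out.println("main thread still responsive while the worker computes");
    worker.join();  // block only when we actually need the result
  }
}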

Sharing, again

It is common in concurrent programs that:
– Different threads might access the same resources in an unpredictable order or even at about the same time
– Program correctness requires that simultaneous access be prevented using synchronization
– Simultaneous access is rare
  – Makes testing difficult
  – Must be much more disciplined when designing / implementing a concurrent program
  – Will discuss common idioms known to work

Canonical example

Correct code in a single-threaded world:

class BankAccount {
  private int balance = 0;
  int getBalance() { return balance; }
  void setBalance(int x) { balance = x; }
  void withdraw(int amount) {
    int b = getBalance();
    if(amount > b)
      throw new WithdrawTooLargeException();
    setBalance(b - amount);
  }
  … // other operations like deposit, etc.
}

Interleaving

Suppose:
– Thread T1 calls x.withdraw(100)
– Thread T2 calls y.withdraw(100)

If the second call starts before the first finishes, we say the calls interleave
– Could happen even with one processor since a thread can be pre-empted at any point for time-slicing

If x and y refer to different accounts, no problem
– “You cook in your kitchen while I cook in mine”
– But if x and y alias, possible trouble…

A bad interleaving

Interleaved withdraw(100) calls on the same account
– Assume initial balance == 150

Time flows downward:

  Thread 1                       Thread 2
  --------                       --------
  int b = getBalance();
                                 int b = getBalance();
                                 if(amount > b)
                                   throw new …;
                                 setBalance(b - amount);
  if(amount > b)
    throw new …;
  setBalance(b - amount);

“Lost withdraw” – unhappy bank
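A sketch of a test harness (not from the slides, and assuming the unsynchronized BankAccount class above is compiled in the same file) that makes the lost withdraw likely to show up by running many interleaved withdrawals:

class RaceDemo {
  public static void main(String[] args) throws InterruptedException {
    BankAccount acct = new BankAccount();
    acct.setBalance(10000);
    Runnable task = () -> {
      for (int i = 0; i < 100; i++) acct.withdraw(1);
    };
    Thread t1 = new Thread(task), t2 = new Thread(task);
    t1.start(); t2.start();
    t1.join(); t2.join();
    // Correct final balance is 9800; anything larger means some
    // withdraws were lost to a bad interleaving
    System.out.println(acct.getBalance());
  }
}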

Incorrect “fix”

It is tempting and almost always wrong to fix a bad interleaving by rearranging or repeating operations, such as:

void withdraw(int amount) {
  if(amount > getBalance())
    throw new WithdrawTooLargeException();
  // maybe balance changed
  setBalance(getBalance() - amount);
}

This fixes nothing!
– Narrows the problem by one statement
– (Not even that, since the compiler could turn it back into the old version because you didn’t indicate the need to synchronize)
– And now a negative balance is possible – why?

Mutual exclusion

Sane fix: Allow at most one thread to withdraw from account A at a time
– Exclude other simultaneous operations on A too (e.g., deposit)

Called mutual exclusion: One thread using a resource (here: an account) means another thread must wait
– a.k.a. critical sections, which technically have other requirements

Programmer must implement critical sections
– “The compiler” has no idea what interleavings should or should not be allowed in your program
– But you need language primitives to do it!

Wrong!

Why can’t we implement our own mutual-exclusion protocol?
– It’s technically possible under certain assumptions, but won’t work in real languages anyway

class BankAccount {
  private int balance = 0;
  private boolean busy = false;
  void withdraw(int amount) {
    while(busy) { /* “spin-wait” */ }
    busy = true;
    int b = getBalance();
    if(amount > b)
      throw new WithdrawTooLargeException();
    setBalance(b - amount);
    busy = false;
  }
  // deposit would spin on same boolean
}

Just moved the problem!

Time flows downward:

  Thread 1                       Thread 2
  --------                       --------
  while(busy) { }
                                 while(busy) { }
  busy = true;
                                 busy = true;
  int b = getBalance();
                                 int b = getBalance();
                                 if(amount > b)
                                   throw new …;
                                 setBalance(b - amount);
  if(amount > b)
    throw new …;
  setBalance(b - amount);

“Lost withdraw” – unhappy bank

What we need

There are many ways out of this conundrum, but we need help from the language

One basic solution: Locks
– Not Java yet, though Java’s approach is similar and slightly more convenient

An ADT with operations:
– new: make a new lock, initially “not held”
– acquire: blocks if this lock is already currently “held”
  Once “not held”, makes lock “held” [all at once!]
– release: makes this lock “not held”
  If >= 1 threads are blocked on it, exactly 1 will acquire it

Why that works

An ADT with operations new, acquire, release

The lock implementation ensures that given simultaneous acquires and/or releases, a correct thing will happen
– Example: Two acquires: one will “win” and one will block

How can this be implemented?
– Need to “check if held and if not make held” “all-at-once”
– Uses special hardware and O/S support
  See a computer-architecture or operating-systems course
– Here, we take this as a primitive and use it
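As a rough illustration of that primitive (not from the slides), Java’s java.util.concurrent.atomic.AtomicBoolean exposes exactly such an all-at-once check-and-set, which a toy spinlock could build on:

import java.util.concurrent.atomic.AtomicBoolean;

class SpinLock {
  private final AtomicBoolean held = new AtomicBoolean(false);

  void acquire() {
    // compareAndSet(false, true) atomically performs "check if not held
    // and, if so, make held" as one indivisible operation
    while (!held.compareAndSet(false, true)) { /* spin-wait */ }
  }

  void release() {
    held.set(false); // back to "not held"; a spinning thread can now win
  }
}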

Almost-correct pseudocode

class BankAccount {
  private int balance = 0;
  private Lock lk = new Lock();
  …
  void withdraw(int amount) {
    lk.acquire(); // may block
    int b = getBalance();
    if(amount > b)
      throw new WithdrawTooLargeException();
    setBalance(b - amount);
    lk.release();
  }
  // deposit would also acquire/release lk
}

Some mistakes

A lock is a very primitive mechanism
– Still up to you to use it correctly to implement critical sections

Incorrect: Use different locks for withdraw and deposit
– Mutual exclusion works only when using the same lock
– balance field is the shared resource being protected

Poor performance: Use the same lock for every bank account
– No simultaneous operations on different accounts

Incorrect: Forget to release a lock (blocks other threads forever!)
– Previous slide is wrong because of the exception possibility!

if(amount > b) {
  lk.release(); // hard to remember!
  throw new WithdrawTooLargeException();
}

Other operations

If withdraw and deposit use the same lock, then simultaneous calls to these methods are properly synchronized

But what about getBalance and setBalance?
– Assume they are public, which may be reasonable

If they do not acquire the same lock, then a race between setBalance and withdraw could produce a wrong result

If they do acquire the same lock, then withdraw would block forever because it tries to acquire a lock it already has

Re-acquiring locks?

Can’t let the outside world call setBalance1 (it does not acquire the lock)
Can’t have withdraw call setBalance2 (it would block forever, re-acquiring a lock it already holds)

void setBalance1(int x) {
  balance = x;
}

void setBalance2(int x) {
  lk.acquire();
  balance = x;
  lk.release();
}

void withdraw(int amount) {
  lk.acquire();
  …
  setBalance1(b - amount);
  lk.release();
}

Alternately, we can modify the meaning of the Lock ADT to support re-entrant locks
– Java does this
– Then just use setBalance2

Re-entrant lock

A re-entrant lock (a.k.a. recursive lock) “remembers”:
– the thread (if any) that currently holds it
– a count

When the lock goes from not-held to held, the count is set to 0

If (code running in) the current holder calls acquire:
– it does not block
– it increments the count

On release:
– if the count is > 0, the count is decremented
– if the count is 0, the lock becomes not-held
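A hypothetical sketch of that bookkeeping (not how Java actually implements it), written with Java’s built-in monitors:

class ReentrantLockSketch {
  private Thread holder = null; // thread (if any) that currently holds it
  private int count = 0;        // re-acquisition count

  synchronized void acquire() throws InterruptedException {
    Thread me = Thread.currentThread();
    if (holder == me) { count++; return; } // current holder: no blocking
    while (holder != null) wait();         // block while held by another
    holder = me;                           // not-held -> held
    count = 0;                             // count starts at 0
  }

  synchronized void release() {
    if (holder != Thread.currentThread())
      throw new IllegalMonitorStateException();
    if (count > 0) {
      count--;           // the holder still holds the lock
    } else {
      holder = null;     // lock becomes not-held
      notifyAll();       // wake blocked acquirers; exactly one will win
    }
  }
}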

Re-entrant locks work

This simple code works fine provided lk is a re-entrant lock
– Okay to call setBalance directly
– Okay to call withdraw (won’t block forever)

void setBalance(int x) {
  lk.acquire();
  balance = x;
  lk.release();
}

void withdraw(int amount) {
  lk.acquire();
  …
  setBalance(b - amount);
  lk.release();
}

Now some Java

Java has built-in support for re-entrant locks
– Several differences from our pseudocode
– Focus on the synchronized statement

synchronized (expression) {
  statements
}

1. Evaluates expression to an object
   – Every object (but not primitive types) “is a lock” in Java
2. Acquires the lock, blocking if necessary
   – “If you get past the {, you have the lock”
3. Releases the lock “at the matching }”
   – Even if control leaves due to throw, return, etc.
   – So impossible to forget to release the lock

Java version #1 (correct but non-idiomatic)

class BankAccount {
  private int balance = 0;
  private Object lk = new Object();
  int getBalance() {
    synchronized (lk) { return balance; }
  }
  void setBalance(int x) {
    synchronized (lk) { balance = x; }
  }
  void withdraw(int amount) {
    synchronized (lk) {
      int b = getBalance();
      if(amount > b)
        throw …
      setBalance(b - amount);
    }
  }
  // deposit would also use synchronized(lk)
}

Improving the Java

As written, the lock is private
– Might seem like a good idea
– But also prevents code in other classes from writing operations that synchronize with the account operations

More idiomatic is to synchronize on this…
– Also more convenient: no need to have an extra object

Java version #2

class BankAccount {
  private int balance = 0;
  int getBalance() {
    synchronized (this) { return balance; }
  }
  void setBalance(int x) {
    synchronized (this) { balance = x; }
  }
  void withdraw(int amount) {
    synchronized (this) {
      int b = getBalance();
      if(amount > b)
        throw …
      setBalance(b - amount);
    }
  }
  // deposit would also use synchronized(this)
}

Syntactic sugar

Version #2 is slightly poor style because there is a shorter way to say the same thing:

Putting synchronized before a method declaration means the entire method body is surrounded by synchronized(this){…}

Therefore, version #3 (next slide) means exactly the same thing as version #2 but is more concise

Java version #3 (final version)

class BankAccount {
  private int balance = 0;
  synchronized int getBalance() { return balance; }
  synchronized void setBalance(int x) { balance = x; }
  synchronized void withdraw(int amount) {
    int b = getBalance();
    if(amount > b)
      throw …
    setBalance(b - amount);
  }
  // deposit would also use synchronized
}
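A short usage sketch (not from the slides): with the synchronized methods above, two threads withdrawing from the same account can no longer lose a withdraw.

class Demo {
  public static void main(String[] args) throws InterruptedException {
    BankAccount acct = new BankAccount();
    acct.setBalance(200);
    Thread t1 = new Thread(() -> acct.withdraw(100));
    Thread t2 = new Thread(() -> acct.withdraw(100));
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println(acct.getBalance()); // always 0, never 100
  }
}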

More Java notes

Class java.util.concurrent.locks.ReentrantLock works much more like our pseudocode
– Often use try { … } finally { … } to avoid forgetting to release the lock if there’s an exception

Also library and/or language support for readers/writer locks and condition variables (future lecture)

Java provides many other features and details. See, for example:
– Chapter 14 of Core Java, Volume 1 by Horstmann/Cornell
– Java Concurrency in Practice by Goetz et al.
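For comparison, a sketch of that explicit-lock style (the ReentrantLock API is real; the WithdrawTooLargeException class is our own, as in the slides):

import java.util.concurrent.locks.ReentrantLock;

class WithdrawTooLargeException extends RuntimeException {}

class LockedAccount {
  private int balance = 0;
  private final ReentrantLock lk = new ReentrantLock();

  void withdraw(int amount) {
    lk.lock();        // blocks until this thread holds the lock
    try {
      int b = balance;
      if (amount > b)
        throw new WithdrawTooLargeException();
      balance = b - amount;
    } finally {
      lk.unlock();    // runs even if the throw above fires
    }
  }
}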