Critical Sections and Race Conditions

Administrivia
Install and build Nachos if you haven’t already. “I need more space to build Nachos.” The ball is in ACPUB’s court; until then, build only in the threads directory (needs 1.2 MB).
Lab #1 is due... Follow the submission instructions in the project guide, and send your project group info before the deadline!
Reading: Nutt, chapter 8 (Synchronization); Birrell, “An Introduction to Programming with Threads”.

Warmup
1. What does it mean for threads to execute “concurrently”? How does this happen?
2. Why/when do context switches occur?
3. What happens on a context switch?
4. What causes programs to “crash”?
5. If a multithreaded program “crashes” and I show a stack trace in gdb, what will I see?
6. Can I use gdb to look at all the threads in my program?
7. What happens if I delete a thread while it’s running?
8. What happens if a thread makes recursive calls “forever”?

Race Conditions Defined
1. Every data structure defines invariant conditions: they define the space of possible legal states of the structure, i.e., what it means for the structure to be “well-formed”.
2. Operations depend on and preserve the invariants. The invariant must hold when the operation begins; the operation may temporarily violate the invariant, but it restores the invariant before it completes.
3. Arbitrarily interleaved operations violate invariants. Rudely interrupted operations leave a mess behind for others.
4. Therefore we must constrain the set of possible schedules.
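
To make points 1-3 concrete, here is a small hypothetical example (not from the lecture): a counted linked list whose invariant is that count equals the number of linked nodes. Insert temporarily violates the invariant between its two steps and restores it before returning; an ill-timed interleaving can leave the structure inconsistent.

struct Node { int value; Node *next; };

struct CountedList {
    Node *head = nullptr;
    int count = 0;          /* invariant: count == number of nodes reachable from head */

    void Insert(int v) {
        Node *n = new Node{v, head};
        head = n;           /* invariant temporarily violated: one more node than count claims */
        count++;            /* invariant restored before the operation completes */
    }
};

/* If two threads interleave inside Insert (e.g., both read head before either
 * writes it back), one node is lost and count disagrees with the list: the
 * "mess" that the next operation trips over. */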

Resource Trajectory Graphs
Resource trajectory graphs (RTGs) depict the thread scheduler’s “random walk” through the space of possible system states. An RTG for N threads is N-dimensional; thread i advances along axis i. Each point represents one state in the set of all possible system states: the cross-product of the possible states of all threads in the system. (But not all states in the cross-product are legally reachable.)
(Figure: an example trajectory through labeled states Sm, Sn, So.)
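
As a sense of scale (an illustration added here, not from the slides): two threads that execute a and b atomic steps admit C(a+b, a) distinct interleavings, one per monotone path through the two-dimensional RTG grid.

#include <cstdint>
#include <cstdio>

/* Number of distinct interleavings of two threads with a and b atomic steps:
 * the binomial coefficient C(a+b, b), computed exactly with integers. */
uint64_t interleavings(unsigned a, unsigned b) {
    uint64_t result = 1;
    for (unsigned i = 1; i <= b; i++)
        result = result * (a + i) / i;   /* stays exact: result is always C(a+i, i) */
    return result;
}

int main() {
    printf("%llu\n", (unsigned long long)interleavings(10, 10));  /* 184756 schedules for just 10 steps each */
    return 0;
}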

Avoiding Races #1
1. Identify critical sections: code sequences that rely on an invariant condition being true; temporarily violate the invariant; transform the data structure from one legal state to another; or make a sequence of actions that assume the data structure will not “change underneath them”.
2. Never sleep or yield in a critical section. Voluntarily relinquishing control may allow another thread to run and “trip over your mess” or modify the structure while the operation is in progress.

Critical Sections in the Color Stack

/* invariant: colors on the stack alternate (blue, purple, blue, purple, ...) */

InitColorStack() {
    push(blue);
    push(purple);
}

PushColor() {
    if (s[top] == purple) {
        ASSERT(s[top-1] == blue);
        push(blue);
    } else {
        ASSERT(s[top] == blue);
        ASSERT(s[top-1] == purple);
        push(purple);
    }
}

OK, so we never Sleep or Yield in a critical section. Is that sufficient to prevent races?

Avoiding Races #2
Is caution with yield and sleep sufficient to prevent races? No! Concurrency races may also result from: involuntary context switches (timeslicing), e.g., caused by the Nachos thread scheduler with the -rs flag; interrupts (inside the kernel) or signals (outside the kernel); and physical concurrency (on a multiprocessor).
How to ensure atomicity of critical sections in these cases? Synchronization primitives!

Mutual Exclusion and Locks

Administrivia
Lab #1 is due at midnight. Carefully follow the submission instructions in the project guide, and send your project group info before midnight. You may continue to work on projects after the deadline for less-than-full credit (tell us what you did).
A new experiment: Web-based signup for demos. Look for it on the course Web page. At least one group member must have a password (see Rajiv).
Reading: Nutt, chapter 8 (Synchronization); Birrell, “An Introduction to Programming with Threads”.

Avoiding Races
1. Don’t yield or sleep in a critical section. Necessary, but not sufficient.
Concurrency races may also result from: involuntary context switches (timeslicing), e.g., caused by the Nachos thread scheduler with the -rs flag; interrupts (inside the kernel) or signals (outside the kernel); and physical concurrency (on a multiprocessor).
How to ensure atomicity of critical sections in these cases? Synchronization primitives!

Mutual Exclusion
Race conditions can be avoided by ensuring mutual exclusion in critical sections. Critical sections are code sequences that contribute to races. Every race (possible incorrect interleaving) involves two or more threads executing related critical sections concurrently. To avoid races, we must serialize related critical sections: never allow more than one thread in a critical section at a time.
(Figure: three schedules of two related critical sections: 1. BAD; 2. interrupted critical section, BAD; 3. GOOD.)

Locks
Locks can be used to ensure mutual exclusion in conflicting critical sections. A lock is an object, a data item in memory, with two methods: Lock::Acquire and Lock::Release. Threads pair calls to Acquire and Release: Acquire before entering a critical section, Release after leaving it. Between Acquire and Release, the lock is held. Acquire does not return until any previous holder releases. Waiting locks can spin (a spinlock) or block (a mutex).
(Figure: timeline of paired Acquire (A) and Release (R) calls.)

Portrait of a Lock in Motion
(Figure: a timeline of Acquire (A) and Release (R) operations on a shared lock.)

Example: Per-Thread Counts and Total

/* shared by all threads */
int counters[N];
int total;

/*
 * Increment a counter by a specified value, and keep a running sum.
 * This is called repeatedly by each of N threads.
 * tid is a thread identifier unique across all threads.
 * value is just some arbitrary number.
 */
void TouchCount(int tid, int value)
{
    counters[tid] += value;
    total += value;
}
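
Not part of the original slides: a minimal standalone sketch of the same race using C++ std::thread instead of Nachos threads (N, the loop count, and the lambda driver are illustrative assumptions). Concurrent unsynchronized updates to the shared total can lose increments.

#include <cstdio>
#include <thread>
#include <vector>

const int N = 4;                 /* number of threads, as in the slide's counters[N] */
int counters[N];
int total;                       /* shared; updated without a lock, so updates can be lost */

void TouchCount(int tid, int value) {
    counters[tid] += value;      /* private to this thread: safe */
    total += value;              /* shared read-modify-write: racy */
}

int main() {
    std::vector<std::thread> threads;
    for (int tid = 0; tid < N; tid++)
        threads.emplace_back([tid] {
            for (int i = 0; i < 100000; i++)
                TouchCount(tid, 1);
        });
    for (auto& t : threads)
        t.join();
    /* Expected 400000 if there were no race; an unsynchronized run often prints less. */
    printf("total = %d\n", total);
    return 0;
}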

Using a Lock for the Counter/Sum Example

int counters[N];
int total;
Lock *lock;

/*
 * Increment a counter by a specified value, and keep a running sum.
 */
void TouchCount(int tid, int value)
{
    lock->Acquire();
    counters[tid] += value;    /* critical section code is atomic... */
    total += value;            /* ...as long as the lock is held */
    lock->Release();
}
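
For comparison only (not from the lecture): outside Nachos the same acquire/release discipline is commonly written with a standard mutex. A sketch using C++ std::mutex and std::lock_guard, which releases automatically when the scope ends:

#include <mutex>

const int N = 4;                 /* assumed thread count for illustration */
int counters[N];
int total;
std::mutex lock_;                /* plays the role of the Nachos Lock */

void TouchCount(int tid, int value) {
    std::lock_guard<std::mutex> guard(lock_);   /* Acquire; Release on scope exit */
    counters[tid] += value;      /* critical section is atomic...              */
    total += value;              /* ...with respect to other holders of lock_  */
}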

Reading Between the Lines of C

/* counters[tid] += value; total += value; */

load    counters, R1    ; load counters base
load    8(SP), R2       ; load tid index
shl     R2, #2, R2      ; index = index * sizeof(int)
add     R1, R2, R1      ; compute index to array
load    4(SP), R3       ; load value
load    (R1), R2        ; load counters[tid]
add     R2, R3, R2      ; counters[tid] += value
store   R2, (R1)        ; store back to counters[tid]
load    total, R2       ; load total
add     R2, R3, R2      ; total += value
store   R2, total       ; store total

Vulnerable between the load and store of counters[tid]... but counters[tid] is not shared. Vulnerable between the load and store of total, which is shared.
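
An aside not on the slides: the vulnerable load/add/store window on total can also be closed without a lock by a single atomic read-modify-write instruction. A sketch using C++ std::atomic (the value of N and the relaxed memory ordering are illustrative assumptions):

#include <atomic>

const int N = 4;                 /* assumed thread count for illustration */
int counters[N];                 /* per-thread slots: not shared, plain int is fine */
std::atomic<int> total;          /* shared running sum */

void TouchCount(int tid, int value) {
    counters[tid] += value;                              /* no window to protect */
    total.fetch_add(value, std::memory_order_relaxed);   /* indivisible load-add-store */
}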

The Need for an Atomic Primitive
To implement safe mutual exclusion, we need support for some sort of “magic toehold” for synchronization. The lock and mutex primitives themselves have critical sections to test and/or set the lock flags. These primitives must somehow be made atomic: uninterruptible, a sequence of instructions that executes “all or nothing”.
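
A minimal sketch (not the Nachos implementation) of how such an atomic toehold is used: a spinlock built on the hardware’s atomic test-and-set, here exposed through C++ std::atomic_flag. The all-or-nothing test_and_set closes the race inside the lock primitive itself.

#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear means "not held" */

void Acquire() {
    /* Atomically set the flag and observe its previous value; keep spinning
     * until we are the thread that changed it from clear to set. */
    while (lock_flag.test_and_set(std::memory_order_acquire)) {
        /* busy-wait; a blocking mutex would sleep here instead of spinning */
    }
}

void Release() {
    lock_flag.clear(std::memory_order_release);  /* let the next waiter in */
}

A spinlock like this burns CPU while it waits, which is why a blocking mutex puts the waiter to sleep instead; both rely on the same atomic toehold underneath.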