Race Conditions Critical Sections Dekker’s Algorithm.


Announcements
CS 414 Homework due this Wednesday, Feb 7th
CS 415 Project due the following Monday, February 12th
–initial design documents due last Friday, Feb 2nd
Indy won the Super Bowl!

Review: CPU Scheduling
Scheduling problem
–Given a set of processes that are ready to run
–Which one to select next
Scheduling criteria
–CPU utilization, throughput, turnaround, waiting, response
–Predictability: variance in any of these measures
Scheduling algorithms
–FCFS, SJF, SRTF, RR
–Multilevel (feedback-)queue scheduling

Goals for Today
Introduction to synchronization
–...or: the trickiest bit of this course
Background
Race conditions
The critical-section problem
Dekker’s solution

Background
Concurrent access to shared data may result in data inconsistency
Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
Suppose we want a solution to the producer-consumer problem that can fill all the buffers.
–An integer count keeps track of the number of full buffers.
–Initially, count is set to 0.
–It is incremented by the producer after it produces a new item into a buffer.
–It is decremented by the consumer after it consumes a buffer.

Producer-Consumer

Producer:
    while (true) {
        /* produce an item and put it in nextProduced */
        while (count == BUFFER_SIZE)
            ;   /* do nothing: buffer is full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }

Consumer:
    while (true) {
        while (count == 0)
            ;   /* do nothing: buffer is empty */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        /* consume the item in nextConsumed */
    }

Race Condition
count++ is not an atomic operation. It could be implemented as
    register1 = count
    register1 = register1 + 1
    count = register1
count-- is not atomic either. It could be implemented as
    register2 = count
    register2 = register2 - 1
    count = register2
Consider this execution interleaving, with count = 5 initially:
    S0: producer executes register1 = count          {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = count          {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes count = register1          {count = 6}
    S5: consumer executes count = register2          {count = 4}

What just happened?
Threads share global memory
When a process contains multiple threads, the threads have
–Private registers and stack memory (the context-switching mechanism needs to save and restore registers when switching from thread to thread)
–Shared access to the remainder of the process “state”
This can result in race conditions

Two threads, one counter
A popular web server uses multiple threads to speed things up.
Simple shared-state error:
–each thread increments a shared counter to track the number of hits
What happens when two threads execute this concurrently?
    ...
    hits = hits + 1;
    ...

Shared counters
Possible result: a lost update! With hits = 0 initially:
    T1: reads hits (0)
    T2: reads hits (0)
    T1: writes hits = 1
    T2: writes hits = 1    (two hits recorded as one)
One other possible result: everything works.
–Difficult to debug
This is called a “race condition”

Race conditions
Def: a timing-dependent error involving shared state
–Whether it happens depends on how the threads are scheduled
–In effect, once thread A starts doing something, it needs to “race” to finish it, because if thread B looks at the shared memory region before A is done, it may see something inconsistent
Hard to detect:
–All possible schedules have to be safe
    The number of possible schedule permutations is huge
    Some bad schedules? Some that will work only sometimes?
–Races are intermittent
    Timing-dependent: small changes can hide the bug
    Celebrate if the bug is deterministic and repeatable!

Scheduler assumptions
    Process a:              Process b:
    while (i < 10)          while (i > -10)
        i = i + 1;              i = i - 1;
    print “A won!”;         print “B won!”;
If i is shared, and initialized to 0
–Who wins?
–Is it guaranteed that someone wins?
–What if both threads run on identical-speed CPUs, executing in parallel?

Scheduler Assumptions
Normally we assume that
–A scheduler always gives every executable thread opportunities to run
    In effect, each thread makes finite progress
–But schedulers aren’t always fair
    Some threads may get more chances than others
–To reason about worst-case behavior, we sometimes think of the scheduler as an adversary trying to “mess up” the algorithm

Critical Section Goals
Threads do some stuff but eventually might try to access shared data
Each thread (T1, T2, ...) brackets its critical section:
    CSEnter();
    critical section
    CSExit();

Critical Section Goals
Perhaps the threads loop through CSEnter(); critical section; CSExit() repeatedly (perhaps not!)

Critical Section Goals
We would like
–Safety (a.k.a. mutual exclusion)
    No more than one thread can be in a critical section at any time
–Liveness (a.k.a. progress)
    A thread that is seeking to enter the critical section will eventually succeed
–Bounded waiting
    A bound must exist on the number of times that other threads are allowed to enter their critical sections after a thread has made a request to enter its critical section and before that request is granted
    Assume that each process executes at a nonzero speed
    No assumption concerning the relative speeds of the N processes
Ideally we would like fairness as well
–If two threads are both trying to enter a critical section, they have equal chances of success
–... in practice, fairness is rarely guaranteed

Solving the problem
A first idea: have a boolean flag, inside, initially false.

    CSEnter() {
        while (inside)
            continue;
        inside = true;
    }

    CSExit() {
        inside = false;
    }

Now ask: is this safe? Live? Bounded waiting?
The code is unsafe: thread 0 could finish the while test when inside is false, but then thread 1 might call CSEnter() before thread 0 can set inside to true!

Solving the problem: Take 2
A different idea (assumes just two threads): have a boolean flag per thread, inside[i], initially false.

    CSEnter(int i) {
        inside[i] = true;
        while (inside[i ^ 1])
            continue;
    }

    CSExit(int i) {
        inside[i] = false;
    }

Now ask: is this safe? Live? Bounded waiting?
The code isn’t live: with bad luck, both threads could loop forever, with 0 looking at 1, and 1 looking at 0.

Solving the problem: Take 3
Another broken solution, for two threads: have a turn variable, turn, initially 1.

    CSEnter(int i) {
        while (turn != i)
            continue;
    }

    CSExit(int i) {
        turn = i ^ 1;
    }

Now ask: is this safe? Live? Bounded waiting?
The code isn’t live: thread 1 can’t enter unless thread 0 did first, and vice versa. But perhaps one thread needs to enter many times and the other fewer times, or not at all.

A solution that works
Dekker’s Algorithm (1965)
–(book: Exercise 6.1)
With j = i ^ 1, the other thread’s index:

    CSEnter(int i) {
        inside[i] = true;
        while (inside[j]) {
            if (turn == j) {
                inside[i] = false;
                while (turn == j)
                    continue;
                inside[i] = true;
            }
        }
    }

    CSExit(int i) {
        turn = j;
        inside[i] = false;
    }

Napkin analysis of Dekker’s algorithm
Safety:
–No process will enter its CS without setting its inside flag
–Every process checks the other process’s inside flag after setting its own
–If both are set, the turn variable is used to allow only one process to proceed
Liveness:
–The turn variable is only consulted when both processes are using, or trying to use, the resource
Bounded waiting:
–The turn variable ensures alternate access to the resource when both are competing for access

Why does it work?
Safety: suppose thread 0 is in the CS. Then inside[0] is true.
–If thread 1 was simultaneously trying to enter, then turn must equal 0 and thread 1 waits
–If thread 1 tries to enter “now”, it sets turn to 0 and waits
Liveness: suppose thread 1 wants to enter and can’t (it is stuck in the while loop)
–Thread 0 will eventually exit the CS
–When inside[0] becomes false, thread 1 can enter
–If thread 0 tries to reenter immediately, it sets turn = 1 and hence will wait politely for thread 1 to go first!

Postscript
Dekker’s algorithm does not provide strict alternation
–Initially, a thread can enter the critical section without consulting turn
Dekker’s algorithm will not work on many modern CPUs
–CPUs execute their instructions in an out-of-order (OOO) fashion
–The algorithm won’t work on symmetric multiprocessor (SMP) machines with OOO CPUs without the use of memory barriers
Additionally, Dekker’s algorithm can fail regardless of platform because of optimizing compilers
–The compiler may hoist the read of the other thread’s inside flag out of the loop, since nothing in the loop body modifies it
–Likewise, the compiler may hoist the read of turn out of its loop
–Either optimization turns the busy-wait into an infinite loop!