Lecture 5 Page 1 CS 111 Summer 2013 Bounded Buffers A higher level abstraction than shared domains or simple messages But not quite as high level as RPC A buffer that allows writers to put messages in And readers to pull messages out FIFO Unidirectional – One process sends, one process receives With a buffer of limited size
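To make the shape of this abstraction concrete, here is a minimal sketch of such a buffer as a C data structure. The names (bounded_buffer, BUFFER_SIZE, slots, head, count) are illustrative choices, not from the lecture, and there is no synchronization yet; that is what the rest of these slides work up to.

    #define BUFFER_SIZE 5               /* fixed capacity, matching the later slides */

    typedef struct {
        const char *slots[BUFFER_SIZE]; /* messages, kept in FIFO order  */
        int head;                       /* index of the oldest message   */
        int count;                      /* how many messages are waiting */
    } bounded_buffer;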

Lecture 5 Page 2 CS 111 Summer 2013 SEND and RECEIVE With Bounded Buffers For SEND(), if buffer is not full, put the message into the end of the buffer and return – If full, block waiting for space in buffer – Then add message and return For RECEIVE(), if buffer has one or more messages, return the first one put in – If there are no messages in buffer, block and wait until one is put in
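Continuing the sketch above, SEND and RECEIVE for a single sender and a single receiver might look like the following. This is only a sketch: the empty while loops stand in for "block and wait", and in real code count would need to be _Atomic or volatile so each side actually observes the other's updates.

    /* Sketch: the spin loops stand in for "block and wait". */
    void SEND(bounded_buffer *b, const char *msg) {
        while (b->count == BUFFER_SIZE)
            ;                                   /* buffer full: wait for a RECEIVE */
        b->slots[(b->head + b->count) % BUFFER_SIZE] = msg;
        b->count = b->count + 1;                /* message now visible to the reader */
    }

    const char *RECEIVE(bounded_buffer *b) {
        while (b->count == 0)
            ;                                   /* buffer empty: wait for a SEND */
        const char *msg = b->slots[b->head];
        b->head = (b->head + 1) % BUFFER_SIZE;
        b->count = b->count - 1;                /* slot is free for the sender again */
        return msg;
    }

Note that both sides read and write count; that shared update is exactly what the next several slides pick apart.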

Lecture 5 Page 3 CS 111 Summer 2013 Practicalities of Bounded Buffers Handles the problem of not having infinite space Ensures that a fast sender doesn't overwhelm a slow receiver Provides well-defined, simple behavior for the receiver But subject to some synchronization issues – The producer/consumer problem – A good abstraction for exploring those issues

Lecture 5 Page 4 CS 111 Summer 2013 The Bounded Buffer (Diagram: a fixed-size buffer sitting between process 1 and process 2.) Process 1 is the writer and process 2 is the reader. Process 1 SENDs a message through the buffer; process 2 RECEIVEs a message from the buffer. More messages are sent and received. What could possibly go wrong?

Lecture 5 Page 5 CS 111 Summer 2013 One Potential Issue What if the buffer is full, but the sender wants to send another message? The sender will need to wait for the receiver to catch up. This is an issue of sequence coordination. There is another sequence coordination problem if the receiver tries to read from an empty buffer.

Lecture 5 Page 6 CS 111 Summer 2013 Handling Sequence Coordination Issues One party needs to wait – For the other to do something If the buffer is full, process 1’s SEND must wait for process 2 to do a RECEIVE If the buffer is empty, process 2’s RECEIVE must wait for process 1 to SEND Naively, done through busy loops – Check condition, loop back if it’s not true – Also called spin loops

Lecture 5 Page 7 CS 111 Summer 2013 Implementing the Loops What exactly are the processes looping on? They care about how many messages are in the bounded buffer That count is probably kept in a variable – Incremented on SEND – Decremented on RECEIVE – Never to go below zero or exceed buffer size The actual system code would test the variable
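In code, that test is simply a spin on the shared counter. A minimal sketch, using the slide's BUFFER_COUNT name (declared volatile here only so the compiler re-reads it on every pass through the loop):

    #define BUFFER_SIZE 5
    volatile int BUFFER_COUNT = 0;      /* incremented by SEND, decremented by RECEIVE */

    void wait_for_space(void) {         /* sender's busy loop */
        while (BUFFER_COUNT == BUFFER_SIZE)
            ;                           /* full: check again */
    }

    void wait_for_message(void) {       /* receiver's busy loop */
        while (BUFFER_COUNT == 0)
            ;                           /* empty: check again */
    }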

Lecture 5 Page 8 CS 111 Summer 2013 A Potential Danger BUFFER_COUNT is 4. Process 1 wants to SEND and process 2 wants to RECEIVE. Process 1 checks BUFFER_COUNT and reads 4; process 2 checks BUFFER_COUNT and also reads 4. Each then updates the counter based on that stale value, so one of the two updates is lost and the count ends up wrong. Concurrency’s a bitch

Lecture 5 Page 9 CS 111 Summer 2013 Why Didn't You Just Say BUFFER_COUNT = BUFFER_COUNT - 1? These are system operations, occurring at a low level, using variables not necessarily in the processes' own address space – Perhaps even locations in RAM The question isn't, can we do it right? The question is, what must we do if we are to do it right?
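Concretely, even the one-line update expands into several machine-level steps, and another process can be scheduled between any two of them. A sketch of the expansion (the variable r1 stands in for a CPU register and is illustrative, not from the lecture):

    extern int BUFFER_COUNT;         /* the shared counter from the slides */

    void decrement_count(void) {
        /* What "BUFFER_COUNT = BUFFER_COUNT - 1" really does: */
        int r1;                      /* stands in for a CPU register */
        r1 = BUFFER_COUNT;           /* 1. load the shared value     */
        r1 = r1 - 1;                 /* 2. modify the private copy   */
        BUFFER_COUNT = r1;           /* 3. store it back             */
        /* If another process updates BUFFER_COUNT between steps 1 and 3,
           its update is overwritten when step 3 runs. */
    }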

Lecture 5 Page 10 CS 111 Summer 2013 One Possible Solution Use separate variables to hold the number of messages put into the buffer And the number of messages taken out Only the sender updates the IN variable Only the receiver updates the OUT variable Calculate buffer fullness by subtracting OUT from IN Won’t exhibit the previous problem When working with concurrent processes, it’s safest to only allow one process to write each variable
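A sketch of that design in C, valid only under the stated restriction of exactly one sender and one receiver. The variable names follow the slide; the volatile qualifiers are this sketch's stand-in for what would really need to be _Atomic counters, so that each side sees the other's updates.

    #define BUFFER_SIZE 5

    static const char *buffer[BUFFER_SIZE];
    static volatile unsigned long in  = 0;   /* written only by the sender   */
    static volatile unsigned long out = 0;   /* written only by the receiver */

    void send(const char *msg) {             /* runs only in the sender */
        while (in - out == BUFFER_SIZE)
            ;                                /* full: wait for the receiver */
        buffer[in % BUFFER_SIZE] = msg;
        in = in + 1;                         /* only the sender writes in */
    }

    const char *receive(void) {              /* runs only in the receiver */
        while (in - out == 0)
            ;                                /* empty: wait for the sender */
        const char *msg = buffer[out % BUFFER_SIZE];
        out = out + 1;                       /* only the receiver writes out */
        return msg;
    }

Because each counter has exactly one writer, the lost-update race on a single shared count cannot happen here; the difference in - out always gives the number of messages currently waiting.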

Lecture 5 Page 11 CS 111 Summer 2013 Multiple Writers and Races What if there are multiple senders and receivers sharing the buffer? Other kinds of concurrency issues can arise – Unfortunately, in non-deterministic fashion – Depending on timings, they might or might not occur – Without synchronization between threads/processes, we have no control of the timing – Any action interleaving is possible

Lecture 5 Page 12 CS 111 Summer 2013 A Multiple Sender Problem Processes 1 and 3 are senders; process 2 is a receiver. The buffer starts empty, with IN = 0. Process 1 wants to SEND, and process 3 wants to SEND too. There's plenty of room in the buffer for both. But we're in trouble: process 3's message overwrote process 1's message.

Lecture 5 Page 13 CS 111 Summer 2013 The Source of the Problem Concurrency again Processes 1 and 3 executed concurrently At some point they determined that buffer slot 1 was empty – And they each filled it – Not realizing the other would do so Worse, it’s timing dependent – Depending on ordering of events

Lecture 5 Page 14 CS 111 Summer 2013 Process 1 Might Overwrite Process 3 Instead (Diagram: the same scenario with IN = 0, but with the stores ordered so that process 1's message overwrites process 3's.)

Lecture 5 Page 15 CS 111 Summer 2013 Or It Might Come Out Right (Diagram: an interleaving of the same two SENDs in which both messages land safely in the buffer.)

Lecture 5 Page 16 CS 111 Summer 2013 Race Conditions Errors or problems occurring because of this kind of concurrency For some ordering of events, everything is fine For others, there are serious problems In true concurrent situations, either result is possible And it’s often hard to predict which you’ll get Hard to find and fix – A job for the OS, not application programmers

Lecture 5 Page 17 CS 111 Summer 2013 How Can The OS Help? By providing abstractions not subject to race conditions One can program race-free concurrent code – It’s not easy So having an expert do it once is better than expecting all programmers to do it themselves An example of the OS hiding unpleasant complexities

Lecture 5 Page 18 CS 111 Summer 2013 Locks A way to deal with concurrency issues Many concurrency issues arise because multiple steps aren’t done atomically – It’s possible for another process to take actions in the middle Locks prevent that from happening They convert a multi-step process into effectively a single step one

Lecture 5 Page 19 CS 111 Summer 2013 What Is a Lock? A shared variable that coordinates use of a shared resource – Such as code or other shared variables When a process wants to use the shared resource, it must first ACQUIRE the lock – Can’t use the resource till ACQUIRE succeeds When it is done using the shared resource, it will RELEASE the lock ACQUIRE and RELEASE are the fundamental lock operations
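As one concrete realization, POSIX threads provide exactly this pair of operations for threads sharing memory. This is an illustrative mapping, not the lecture's own API: pthread_mutex_lock plays the role of ACQUIRE and pthread_mutex_unlock the role of RELEASE.

    #include <pthread.h>

    pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
    int shared_count = 0;                    /* the shared resource being protected */

    void update_shared_count(int delta) {
        pthread_mutex_lock(&count_lock);     /* ACQUIRE: blocks until we hold the lock  */
        shared_count += delta;               /* only one thread at a time runs this line */
        pthread_mutex_unlock(&count_lock);   /* RELEASE: lets a waiting thread proceed  */
    }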

Lecture 5 Page 20 CS 111 Summer 2013 Using Locks in Our Multiple Sender Problem To use the buffer properly, a process must: 1. Read the value of IN 2. If IN < BUFFER_SIZE, store the message 3. Add 1 to IN And it must do all of that WITHOUT INTERRUPTION! So associate a lock with those steps
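Here is a sketch of those three steps wrapped in a lock, again using a POSIX mutex as the lock. The names IN, BUFFER_SIZE, and buffer follow the slides; the function name and the choice to report a full buffer back to the caller are assumptions of this sketch.

    #include <pthread.h>
    #include <stdbool.h>

    #define BUFFER_SIZE 5
    static const char *buffer[BUFFER_SIZE];
    static int IN = 0;
    static pthread_mutex_t buffer_lock = PTHREAD_MUTEX_INITIALIZER;

    bool locked_send(const char *msg) {
        bool sent = false;
        pthread_mutex_lock(&buffer_lock);    /* ACQUIRE: no other sender can run steps 1-3 now */
        if (IN < BUFFER_SIZE) {              /* steps 1 and 2: read IN and test it */
            buffer[IN] = msg;                /* store the message                  */
            IN = IN + 1;                     /* step 3: add 1 to IN                */
            sent = true;
        }
        pthread_mutex_unlock(&buffer_lock);  /* RELEASE */
        return sent;                         /* false means the buffer was full */
    }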

Lecture 5 Page 21 CS 111 Summer 2013 The Lock in Action Process 1 executes ACQUIRE on the lock; let's assume it succeeds. Now process 1 executes the code associated with the lock: 1. Read the value of IN (IN = 0) 2. Since IN < BUFFER_SIZE (0 < 5 ✔), store the message 3. Add 1 to IN (IN is now 1) Process 1 now executes RELEASE on the lock

Lecture 5 Page 22 CS 111 Summer 2013 What If Process 3 Intervenes? Let's say process 1 has the lock already and has read IN (IN = 0). Now, before process 1 can execute any more code, process 3 tries to SEND. Before process 3 can go ahead, it needs the lock, so it calls ACQUIRE(). But that ACQUIRE fails, since process 1 already has the lock. So process 1 can safely complete the SEND, and IN becomes 1.

Lecture 5 Page 23 CS 111 Summer 2013 Locking and Atomicity Locking is one way to provide the property of atomicity for compound actions – Actions that take more than one step Atomicity has two aspects: – Before-or-after atomicity – All-or-nothing atomicity Locking is most useful for providing before-or-after atomicity

Lecture 5 Page 24 CS 111 Summer 2013 Before-Or-After Atomicity As applied to a set of actions A If they have before-or-after atomicity, For all other actions, each such action either: – Happened before the entire set of A – Or happened after the entire set of A In our bounded buffer example, either the entire buffer update occurred first Or the entire buffer update came later Not partly before, partly after

Lecture 5 Page 25 CS 111 Summer 2013 Using Locks to Avoid Races The software designer must find all places where a race condition might occur – If he misses one, he may get errors there He must then properly use locks for all processes that could cause the race – If he doesn't do it right, he might get races anyway Since neither task is trivial to get right, the OS should provide abstractions to handle proper locking