
W4118 Operating Systems
Instructor: Junfeng Yang

Logistics
Homework 2 is due at 3:09 pm this Thursday (one hour before class). Submit everything electronically at courseworks, including the written assignment.

Last lecture
- Synchronization: a layered approach to synchronization
- Critical section requirements: safe, live, bounded; desirable: efficient, fair, simple
- Locks
  - Uniprocessor implementation: disable and enable interrupts
  - Software-based locks: Peterson's algorithm
  - Locks with hardware support: atomic test_and_set
- (Note from earlier: all pointer parameters passed from user space are untrusted.)

Today
- Lock (wrap up)
- Semaphore
- Monitor
- A classical synchronization problem: readers-writers lock

Recall: Spin-wait or block
- A spin-lock may waste CPU cycles: the lock holder gets preempted while scheduled threads keep trying to grab the lock
- Shouldn't use a spin-lock on a single core
- On multi-core, a good plan is: spin a bit, then yield
- Gets worse as the number of threads increases

Problem with simple yield

    lock() {
        while (test_and_set(&flag))
            yield();
    }

Problems:
- Still a lot of context switches; threads poll for the lock
- Starvation is possible. Why? There is no control over who gets the lock next
- We need explicit control over who gets the lock
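For concreteness, here is one way the yield-based lock above could be written as compilable C, a minimal sketch using the GCC/Clang __sync atomic builtins and sched_yield(); the global flag and the function names simply mirror the slide's pseudocode.

    #include <sched.h>              /* sched_yield */

    static volatile int flag = 0;   /* 0: lock available, 1: lock held */

    void lock(void) {
        /* __sync_lock_test_and_set atomically writes 1 and returns the old value */
        while (__sync_lock_test_and_set(&flag, 1))
            sched_yield();          /* give up the CPU instead of busy-spinning */
    }

    void unlock(void) {
        __sync_lock_release(&flag); /* atomically reset flag to 0 */
    }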

Implementing locks: version 4

The idea:
- Add the thread to a wait queue when the lock is unavailable
- In unlock(), wake up one thread in the queue

    lock() {
        if (flag == 1)
            add myself to wait queue
        yield
        ...
    }

    unlock() {
        flag = 0
        if (any thread in wait queue)
            wake up one waiting thread
        ...
    }

Problem I: we may lose the wakeup
- Fix: protect the lock's internal state with a spin_lock (or a lock with simple yield). This doesn't completely avoid spin-waiting, but it keeps the wait time short, and thus reasonable.

Problem II: we may not wake up the right thread (a third thread may grab the lock first)
- Fix: unlock() directly transfers the lock to a waiting thread

Implementing locks: version 4, the code

    typedef struct __mutex_t {
        int flag;      // 0: mutex is available, 1: mutex is not available
        int guard;     // guard lock to avoid losing wakeups
        queue_t *q;    // queue of waiting threads
    } mutex_t;

    void lock(mutex_t *m) {
        while (test_and_set(&m->guard))
            ;                      // acquire guard lock by spinning
        if (m->flag == 0) {
            m->flag = 1;           // acquire mutex
            m->guard = 0;
        } else {
            enqueue(m->q, self);
            m->guard = 0;          // release guard before blocking
            yield();
        }
    }

    void unlock(mutex_t *m) {
        while (test_and_set(&m->guard))
            ;
        if (queue_empty(m->q))
            m->flag = 0;           // release mutex; no one wants it
        else
            wakeup(dequeue(m->q)); // directly transfer mutex to next thread
        m->guard = 0;
    }

This is a more realistic mutex implementation than the previous versions. Interesting bits:
- What is guard? A spin_lock that protects the mutex's internal data structure. A thread may get preempted while holding m->guard, which incurs the overhead discussed before; but since m->guard is held for a very short time, the spin-wait time is short and reasonable.
- The queue controls lock order, which increases efficiency and avoids starvation. In lock(), when the lock is unavailable the thread enqueues itself; in unlock(), a waiting thread is dequeued and woken up.
- flag is not set to 0 in the transfer case of unlock(). This is not an error, but a must: while the unlocking thread still holds guard, the next thread cannot modify flag anyway. We could set flag to 0 and let the woken thread grab the lock itself, but then a third thread could come in and grab the lock first; direct transfer hands the mutex straight to the next waiting thread.
- This is very close to real mutex implementations.

Today Lock (wrap up) Semaphore Monitor A classical synchronization problem: read and write lock

Semaphore: Motivation
- Problem with locks: they give mutual exclusion, but no ordering; we may want more
- E.g., the producer-consumer problem: $ cat 1.txt | sort | uniq | wc
  - Producer: creates a resource; Consumer: uses a resource; a bounded buffer sits between them
  - Scheduling order: the producer waits if the buffer is full, the consumer waits if the buffer is empty

Semaphore: Definition
A semaphore is a synchronization variable that:
- Contains an integer value that can't be accessed directly
- Must be initialized to some value: sem_init(sem_t *s, int pshared, unsigned int value)
- Has two operations to manipulate this integer:
  - sem_wait(), also called down() or P() (the name comes from Dutch)
  - sem_post(), also called up() or V() (the name comes from Dutch)
- sem_wait() does not block if s > 0; sem_post() never blocks the caller
- sem_wait() and sem_post() are atomic

    int sem_wait(sem_t *s) {
        wait until the value of semaphore s is greater than 0
        decrement the value of semaphore s by 1
    }

    int sem_post(sem_t *s) {
        increment the value of semaphore s by 1
        if one or more threads are waiting, wake one
    }

Semaphore: Uses
Two main uses: mutual exclusion and scheduling order.

Mutual exclusion (semaphore as mutex):

    // initialize to X
    sem_init(s, 0, X);

    sem_wait(s);
    // critical section
    sem_post(s);

What should the initial value X be? Binary semaphore: X = 1 (counting semaphore: X > 1).

Scheduling order (one thread waits for another):

    // thread 0
    ...               // 1st half of computation
    sem_post(s);

    // thread 1
    sem_wait(s);
    ...               // 2nd half of computation
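To make the scheduling-order use concrete, here is a small runnable sketch (the thread names, partial_result variable, and the value 21 are our own illustrative choices): thread 1 must not start its half of the computation until thread 0 finishes the first half, so the semaphore is initialized to 0.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t done_first_half;          /* initialized to 0: thread 1 must wait */
    static int partial_result;

    static void *thread0(void *arg) {
        partial_result = 21;               /* 1st half of computation */
        sem_post(&done_first_half);        /* announce: first half is done */
        return NULL;
    }

    static void *thread1(void *arg) {
        sem_wait(&done_first_half);        /* block until thread 0 posts */
        printf("result = %d\n", partial_result * 2);   /* 2nd half of computation */
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        sem_init(&done_first_half, 0, 0);  /* initial value 0 enforces the ordering */
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t0, NULL, thread0, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        sem_destroy(&done_first_half);
        return 0;
    }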

Producer-Consumer (Bounded-Buffer) Problem
- Bounded buffer of size N: access entries 0 ... N-1, then "wrap around" to 0 again
- The producer writes data into the buffer; it must not write more than N items beyond what the consumer has "eaten"
- The consumer reads data from the buffer; it should not try to consume if there is no data
[Figure: circular buffer with slots 0 ... N-1, the producer filling slots and the consumer emptying them]
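As a reference point for the slots-and-wraparound bookkeeping (ignoring synchronization for now), a bounded buffer is typically kept as an array plus two indices that advance modulo N; the names below (buf, fill, use) are our own, not part of the slides.

    #define N 16

    static int buf[N];        /* the bounded buffer */
    static int fill = 0;      /* next slot the producer fills */
    static int use  = 0;      /* next slot the consumer empties */

    static void put(int value) {        /* "fill a slot" */
        buf[fill] = value;
        fill = (fill + 1) % N;          /* wrap around to 0 after N-1 */
    }

    static int get(void) {              /* "empty a slot" */
        int value = buf[use];
        use = (use + 1) % N;
        return value;
    }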

Solving the Producer-Consumer problem
Two semaphores:

    sem_t full;    // # of filled slots
    sem_t empty;   // # of empty slots

    sem_init(&full, 0, 0);
    sem_init(&empty, 0, N);

    producer() {
        sem_wait(&empty);
        ...                // fill a slot
        sem_post(&full);
    }

    consumer() {
        sem_wait(&full);
        ...                // empty a slot
        sem_post(&empty);
    }

Problem: what about mutual exclusion on the buffer?

Solving the Producer-Consumer problem: Final
Three semaphores:

    sem_t full;    // # of filled slots
    sem_t empty;   // # of empty slots
    sem_t mutex;   // mutual exclusion

    sem_init(&full, 0, 0);
    sem_init(&empty, 0, N);
    sem_init(&mutex, 0, 1);

    producer() {
        sem_wait(&empty);
        sem_wait(&mutex);
        ...                // fill a slot
        sem_post(&mutex);
        sem_post(&full);
    }

    consumer() {
        sem_wait(&full);
        sem_wait(&mutex);
        ...                // empty a slot
        sem_post(&mutex);
        sem_post(&empty);
    }
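Putting the final scheme together, here is a minimal runnable sketch with one producer and one consumer thread; the buffer size, item count, and helper names are our own choices, and error checking is omitted for brevity.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    #define N     8          /* buffer size (our choice) */
    #define ITEMS 32         /* items to produce (our choice) */

    static int buf[N];
    static int fill = 0, use = 0;

    static sem_t full;       /* # of filled slots, starts at 0 */
    static sem_t empty;      /* # of empty slots, starts at N */
    static sem_t mutex;      /* protects buf, fill, use */

    static void *producer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&empty);               /* wait for an empty slot */
            sem_wait(&mutex);
            buf[fill] = i;                  /* fill a slot */
            fill = (fill + 1) % N;
            sem_post(&mutex);
            sem_post(&full);                /* one more filled slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full);                /* wait for a filled slot */
            sem_wait(&mutex);
            int item = buf[use];            /* empty a slot */
            use = (use + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);               /* one more empty slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&full, 0, 0);
        sem_init(&empty, 0, N);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }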

How to Implement Semaphores? Part of your next programming assignment
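One common user-level sketch, not necessarily the approach the assignment asks for, builds a semaphore from a pthread mutex and a condition variable (introduced later in this lecture); the my_sem_* names are our own.

    #include <pthread.h>

    typedef struct {
        int value;                 /* the semaphore's integer value */
        pthread_mutex_t m;         /* protects value */
        pthread_cond_t  c;         /* waiters block here */
    } my_sem_t;

    void my_sem_init(my_sem_t *s, int value) {
        s->value = value;
        pthread_mutex_init(&s->m, NULL);
        pthread_cond_init(&s->c, NULL);
    }

    void my_sem_wait(my_sem_t *s) {
        pthread_mutex_lock(&s->m);
        while (s->value <= 0)              /* wait until value is greater than 0 */
            pthread_cond_wait(&s->c, &s->m);
        s->value--;                        /* then decrement */
        pthread_mutex_unlock(&s->m);
    }

    void my_sem_post(my_sem_t *s) {
        pthread_mutex_lock(&s->m);
        s->value++;                        /* increment */
        pthread_cond_signal(&s->c);        /* wake one waiter, if any */
        pthread_mutex_unlock(&s->m);
    }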

Today
- Lock (wrap up)
- Semaphore
- Monitor
- A classical synchronization problem: readers-writers lock

Monitors: Background
- Concurrent programming meets object-oriented programming: when concurrent programming became a big deal, object-oriented programming did too, and people started to think about ways to make concurrent programming more structured
- Monitor: an object with a set of monitor procedures, in which only one thread may be active (i.e., running one of the monitor procedures) at a time

Schematic view of a Monitor
You can think of a monitor as one big lock for a set of operations/methods; in other words, a language-level implementation of mutexes.

How to Implement a Monitor?
The compiler automatically inserts lock and unlock operations upon entry to and exit from monitor procedures.

    class account {
        int balance;
        public synchronized void deposit()  { ++balance; }
        public synchronized void withdraw() { --balance; }
    }

The synchronized methods effectively become:

    lock(m); ++balance; unlock(m);    // deposit
    lock(m); --balance; unlock(m);    // withdraw
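C has no synchronized keyword, so the same effect is obtained by doing the insertion by hand; here is a minimal sketch of the account monitor using a pthread mutex (struct and function names are our own).

    #include <pthread.h>

    struct account {
        int balance;
        pthread_mutex_t m;             /* the monitor's single lock */
    };

    void deposit(struct account *a) {
        pthread_mutex_lock(&a->m);     /* "enter the monitor" */
        ++a->balance;
        pthread_mutex_unlock(&a->m);   /* "leave the monitor" */
    }

    void withdraw(struct account *a) {
        pthread_mutex_lock(&a->m);
        --a->balance;
        pthread_mutex_unlock(&a->m);
    }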

Condition Variables
- We need wait and wakeup, as with semaphores; a monitor uses condition variables, each conceptually associated with some condition on the monitor's state
- Operations on condition variables:
  - wait(): suspends the calling thread and releases the monitor lock; when it resumes, it reacquires the lock. Called when the condition is not true
  - signal(): resumes one thread (if any) waiting in wait(). Called when the condition becomes true
  - broadcast(): resumes all threads waiting in wait()
- Why release the monitor lock? Only one thread can be in the monitor at a time. Since the current thread wants to wait, it must release the lock to let other threads enter the monitor (see the sketch below)
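The standard idiom that falls out of these rules looks like the following pthread sketch (the predicate and the function names are placeholders of our own); note that pthread_cond_wait() releases the mutex while the thread sleeps and reacquires it before returning.

    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    static int condition_is_true = 0;       /* placeholder predicate */

    void waiter(void) {
        pthread_mutex_lock(&m);
        while (!condition_is_true)          /* recheck after every wakeup */
            pthread_cond_wait(&c, &m);      /* releases m while asleep, reacquires it on return */
        /* ... use the protected state ... */
        pthread_mutex_unlock(&m);
    }

    void signaler(void) {
        pthread_mutex_lock(&m);
        condition_is_true = 1;              /* make the condition true */
        pthread_cond_signal(&c);            /* wake one waiter, if any */
        pthread_mutex_unlock(&m);
    }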

Monitor with Condition Variables

Subtle differences between condition variables and semaphores
- Semaphores are "sticky": they have memory; sem_post() will increment the semaphore even if no one has called sem_wait()
- Condition variables are not: if no one is waiting when signal() is called, the signal is not saved
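A small sketch of the difference (the interleaving described in the comments is an assumption about one possible schedule): if the post happens before the wait, the semaphore version still proceeds, while a bare condition-variable wait would sleep forever, which is why condition variables are always paired with a predicate checked under a mutex.

    #include <semaphore.h>
    #include <pthread.h>

    /* Semaphore: a post is remembered. */
    sem_t s;                       /* assume sem_init(&s, 0, 0) was called */
    void sem_example(void) {
        sem_post(&s);              /* value becomes 1, even with no waiter */
        sem_wait(&s);              /* returns immediately; value back to 0 */
    }

    /* Condition variable: a signal with no waiter is lost. */
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    int ready = 0;                 /* the predicate that records the event */

    void cv_signaler(void) {
        pthread_mutex_lock(&m);
        ready = 1;                 /* without this flag, an early signal would simply vanish */
        pthread_cond_signal(&c);
        pthread_mutex_unlock(&m);
    }

    void cv_waiter(void) {
        pthread_mutex_lock(&m);
        while (!ready)             /* if the signal already happened, we never sleep */
            pthread_cond_wait(&c, &m);
        pthread_mutex_unlock(&m);
    }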

Producer-Consumer with Monitors

    monitor ProducerConsumer {
        int nfull = 0;
        cond notfull, notempty;

        producer() {
            if (nfull == N)
                wait(notfull);
            ...               // fill a slot
            ++nfull;
            signal(notempty);
        }

        consumer() {
            if (nfull == 0)
                wait(notempty);
            ...               // empty a slot
            --nfull;
            signal(notfull);
        }
    };

- nfull: number of filled buffer slots; with condition variables we need to do our own counting
- notfull and notempty: two condition variables; notfull means not all slots are full, notempty means not all slots are empty

Condition Variable Semantics
- Problem: when signal() wakes up a waiting thread, which thread runs inside the monitor next, the signaling thread or the woken thread?
- Hoare semantics: suspend the signaling thread and immediately transfer control to the woken thread. Difficult to implement in practice. (Tony Hoare: theoretician, inventor of quicksort.)
- Mesa semantics: signal() moves a single waiting thread from the blocked state to the runnable state; the signaling thread then continues until it exits the monitor. Easy to implement, but there is a race: e.g., before a woken consumer continues, another consumer can come in and grab the buffer item.
- Reference: B. W. Lampson and D. R. Redell, "Experience with Processes and Monitors in Mesa," Communications of the ACM, vol. 23, no. 2, pp. 105-117, February 1980.

Fixing the Race in Mesa Monitors

    monitor ProducerConsumer {
        int nfull = 0;
        cond notfull, notempty;

        producer() {
            while (nfull == N)
                wait(notfull);
            ...               // fill a slot
            ++nfull;
            signal(notempty);
        }

        consumer() {
            while (nfull == 0)
                wait(notempty);
            ...               // empty a slot
            --nfull;
            signal(notfull);
        }
    };

- The fix: when woken, a thread must recheck the condition it was waiting on (while instead of if)
- Most systems, e.g. pthreads, use Mesa semantics, so you should always remember to recheck

Monitor with pthreads
- C/C++ don't provide monitors, but we can implement monitors using a pthread mutex and condition variables
- For the producer-consumer problem, we need 1 pthread mutex and 2 pthread condition variables (pthread_cond_t)
- Manually lock and unlock the mutex in each monitor procedure
- pthread_cond_wait(cv, m): atomically waits on cv and releases m. Why atomically? You figure out: instead of pthread_cond_wait(cv, m), why not pthread_mutex_unlock(m); pthread_cond_wait(cv); ?

    class ProducerConsumer {
        int nfull = 0;
        pthread_mutex_t m;
        pthread_cond_t notfull, notempty;
    public:
        producer() {
            pthread_mutex_lock(&m);
            while (nfull == N)
                pthread_cond_wait(&notfull, &m);
            ...                  // fill a slot
            ++nfull;
            pthread_cond_signal(&notempty);
            pthread_mutex_unlock(&m);
        }
        ...
    };
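The slide elides the consumer; for a complete picture, here is a runnable C sketch of the same monitor-style producer-consumer using one pthread mutex and two condition variables (the buffer size, item count, and variable names are our own choices, not the course's reference solution).

    #include <pthread.h>
    #include <stdio.h>

    #define N     8
    #define ITEMS 32

    static int buf[N], fill = 0, use = 0, nfull = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t notfull  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t notempty = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            pthread_mutex_lock(&m);
            while (nfull == N)                   /* Mesa semantics: recheck after waking */
                pthread_cond_wait(&notfull, &m);
            buf[fill] = i;                       /* fill a slot */
            fill = (fill + 1) % N;
            ++nfull;
            pthread_cond_signal(&notempty);
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            pthread_mutex_lock(&m);
            while (nfull == 0)
                pthread_cond_wait(&notempty, &m);
            int item = buf[use];                 /* empty a slot */
            use = (use + 1) % N;
            --nfull;
            pthread_cond_signal(&notfull);
            pthread_mutex_unlock(&m);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }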

Today
- Lock (wrap up)
- Semaphore
- Monitor
- A classical synchronization problem: readers-writers lock

Readers-Writers Problem (Courtois et al., 1971)
- Models access to a database: a reader is a thread that needs to look at the database but won't change it; a writer is a thread that modifies the database
- Example: making an airline reservation
  - When you browse flight schedules, the web site is acting as a reader on your behalf
  - When you reserve a seat, the web site has to write into the database to make the reservation

Solving Readers-Writers with a Regular Lock

    sem_t lock;        // binary semaphore used as a mutex

    // Writer
    sem_wait(&lock);
    ...                // write shared data
    sem_post(&lock);

    // Reader
    sem_wait(&lock);
    ...                // read shared data
    sem_post(&lock);

Problem: unnecessary synchronization. Only one writer can be active at a time, but any number of readers could safely be active simultaneously!
Solution idea: differentiate the lock for reading from the lock for writing.

Readers-Writers Lock

    int nreader = 0;
    sem_t lock = 1, write_lock = 1;   // both initialized to 1 (e.g., via sem_init)

    // Writer
    sem_wait(&write_lock);
    ...                               // write shared data
    sem_post(&write_lock);

    // Reader
    sem_wait(&lock);
    ++nreader;
    if (nreader == 1)                 // first reader
        sem_wait(&write_lock);
    sem_post(&lock);
    ...                               // read shared data
    sem_wait(&lock);                  // nreader must be updated under lock
    --nreader;
    if (nreader == 0)                 // last reader
        sem_post(&write_lock);
    sem_post(&lock);

- nreader: number of active readers; lock: protects updates to this internal data structure; write_lock: mutual exclusion between (1) one writer and other writers, and (2) one writer and all readers
- The first reader grabs write_lock to drive out future writers; the last reader posts on write_lock to wake up waiting writers. Once a reader is active, other readers can go through, too.
- How do we determine who is first or last? Use nreader to track how many readers there are.
- If there is a writer: the first reader blocks on write_lock, other readers block on lock, and other writers block on write_lock
- Problem: this may starve writers. How to fix? Not that straightforward; you figure out. (A standard library alternative is sketched below.)
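In practice, POSIX threads already provide a readers-writer lock, pthread_rwlock_t; a minimal usage sketch (the shared_data variable is a placeholder of our own) looks like the following. Note that the reader-versus-writer preference of the default rwlock is implementation-defined, which is exactly the starvation policy question the slide leaves open.

    #include <pthread.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_data;            /* placeholder for "the database" */

    int reader(void) {
        pthread_rwlock_rdlock(&rw);    /* many readers may hold this at once */
        int v = shared_data;           /* read shared data */
        pthread_rwlock_unlock(&rw);
        return v;
    }

    void writer(int v) {
        pthread_rwlock_wrlock(&rw);    /* exclusive: excludes all readers and writers */
        shared_data = v;               /* write shared data */
        pthread_rwlock_unlock(&rw);
    }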