CS252: Systems Programming Ninghui Li Based on Slides by Prof. Gustavo Rodriguez-Rivera Topic 13: Condition Variable, Read/Write Lock, and Deadlock

Pseudo-Code Implementing Semaphore Using Mutex Lock

sem_wait(sem_t *sem) {
  lock(sem->mutex);
  sem->count--;
  if (sem->count < 0) {
    unlock(sem->mutex);
    wait();
  } else {
    unlock(sem->mutex);
  }
}

sem_post(sem_t *sem) {
  lock(sem->mutex);
  sem->count++;
  if (sem->count < 0) {
    wake up a thread;
  }
  unlock(sem->mutex);
}

Assume that wait() causes a thread to be blocked. What could go wrong? How to fix it? Think about a context switch here.

Condition Variable
What we need is the ability to wait on a condition while simultaneously giving up the mutex lock.
Condition Variable (CV): A thread can wait on a CV; it will be blocked until another thread calls signal on the CV.
A condition variable is always used in conjunction with a mutex lock. The thread calling wait should hold the lock, and the wait call releases the lock while the thread goes to wait.

Using Condition Variable
Declaration: #include <pthread.h>  pthread_cond_t cv;
Initialization: int pthread_cond_init(pthread_cond_t *cv, const pthread_condattr_t *attr); (attr may be NULL for default attributes)
Wait on the condition variable: int pthread_cond_wait(pthread_cond_t *cv, pthread_mutex_t *mutex);
The calling thread should hold mutex; the mutex is released atomically as the thread starts waiting on cv.
Upon successful return, the thread has re-acquired the mutex; however, waking up and re-acquiring the lock is not atomic.

Using Condition Variable
Waking up waiting threads:
int pthread_cond_signal(pthread_cond_t *cv); Unblocks one thread waiting on cv.
int pthread_cond_broadcast(pthread_cond_t *cv); Unblocks all threads waiting on cv.
These functions can be called with or without holding the mutex that the waiting threads used in their wait call; it is generally better to call them while holding the mutex.
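As a minimal sketch of the typical wait/signal pattern (the shared flag ready and the functions wait_until_ready and make_ready are hypothetical, used only for illustration):

#include <pthread.h>

// Hypothetical shared state, for illustration only.
pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
bool ready = false;

// Waiting side: hold the mutex and re-check the condition in a loop.
void wait_until_ready() {
  pthread_mutex_lock(&m);
  while (!ready) {
    pthread_cond_wait(&cv, &m);   // releases m while blocked, re-acquires it before returning
  }
  // ready is true here, and we still hold m
  pthread_mutex_unlock(&m);
}

// Signaling side: change the shared state under the mutex, then signal.
void make_ready() {
  pthread_mutex_lock(&m);
  ready = true;
  pthread_cond_signal(&cv);       // wake one waiter; signaling while holding m is the safer habit
  pthread_mutex_unlock(&m);
}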

What is a Condition Variable?
Each condition variable has a queue of blocked threads.
The cond_wait(cv, mutex) call adds the calling thread to cv's queue while releasing mutex; the call returns after the thread has been unblocked (by another thread calling cond_signal) and has re-obtained the mutex.
The cond_signal(cv) call removes one thread from the queue and unblocks it.

Implementing Semaphore using Mutex and Cond Var

struct semaphore {
  pthread_cond_t cond;
  pthread_mutex_t mutex;
  int count;
};
typedef struct semaphore semaphore_t;

int semaphore_wait(semaphore_t *sem) {
  int res = pthread_mutex_lock(&(sem->mutex));
  if (res != 0) return res;  // error
  sem->count--;
  while (sem->count < 0) {
    res = pthread_cond_wait(&(sem->cond), &(sem->mutex));
  }
  pthread_mutex_unlock(&(sem->mutex));
  return res;
}

Implementing Semaphore using Mutex and Cond Var

int semaphore_post(semaphore_t *sem) {
  int res = pthread_mutex_lock(&(sem->mutex));
  if (res != 0) return res;
  sem->count++;
  if (sem->count <= 0) {
    res = pthread_cond_signal(&(sem->cond));
  }
  pthread_mutex_unlock(&(sem->mutex));
  return res;
}
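The slides do not show how this semaphore struct is initialized; a minimal sketch of an initializer (the name semaphore_init and its error-handling style are assumptions, not part of the original slides) could be:

// Hypothetical initializer for the semaphore struct above.
int semaphore_init(semaphore_t *sem, int initial_count) {
  sem->count = initial_count;
  int res = pthread_mutex_init(&(sem->mutex), NULL);
  if (res != 0) return res;
  return pthread_cond_init(&(sem->cond), NULL);
}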

An Alternative and Buggy Implementation

int semaphore_wait(semaphore_t *sem) {
  pthread_mutex_lock(&(sem->mutex));
  if (sem->count <= 0) {
    pthread_cond_wait(&(sem->cond), &(sem->mutex));
  }
  sem->count--;
  pthread_mutex_unlock(&(sem->mutex));
  return 0;
}

int semaphore_post(semaphore_t *sem) {
  pthread_mutex_lock(&(sem->mutex));
  sem->count++;
  pthread_cond_signal(&(sem->cond));
  pthread_mutex_unlock(&(sem->mutex));
  return 0;
}

What bad thing could happen if the two lines are switched?

Where is the Bug?
(Using the buggy if-based implementation; assume sem->count == 1 initially.)
T1 calls semaphore_wait(): if (sem->count <= 0) is false, so T1 does not wait; sem->count-- makes the count 0; T1 continues.
T2 calls semaphore_wait(): if (sem->count <= 0) is true, so T2 calls pthread_cond_wait() and waits.
T1 calls semaphore_post(): sem->count++ makes the count 1; pthread_cond_signal() is called to wake T2.

Where is the Bug? (continued)
T2 wakes up, but has not yet re-acquired the mutex.
T3 calls semaphore_wait(): if (sem->count <= 0) is false, so T3 does not wait; sem->count-- makes the count 0; T3 continues.
T2 then obtains the mutex and, because the if is not re-checked, executes sem->count--, making the count -1; T2 continues.
Both T2 and T3 are able to proceed now, even though only one post was issued. This would not happen if while (sem->count <= 0) { pthread_cond_wait(…); } were used.

Using while versus if when using cond_wait

Correct version (uses while):
int semaphore_wait (…) {
  pthread_mutex_lock (…);
  while (sem->count <= 0) {
    pthread_cond_wait (&(sem->cond), &(sem->mutex));
  }
  sem->count--;
  pthread_mutex_unlock (…);
}

Wrong version (uses if):
int semaphore_wait (…) {
  pthread_mutex_lock (…);
  if (sem->count <= 0) {
    pthread_cond_wait (&(sem->cond), &(sem->mutex));
  }
  sem->count--;
  pthread_mutex_unlock (…);
}

The while version is correct and the if version is wrong, because waking up and re-obtaining the mutex are not atomic: by the time cond_wait returns control to the thread, another thread may have run, so the condition that justified the wakeup may no longer hold and must be re-checked. Using while is also a defense against spurious wakeups.

Usage of Semaphore: Bounded Buffer
Implement a queue that has two functions:
enqueue() - adds one item to the queue. It blocks if the queue is full.
dequeue() - removes one item from the queue. It blocks if the queue is empty.
Strategy:
Use an _emptySem semaphore that dequeue() will use to wait until there are items in the queue.
Use a _fullSem semaphore that enqueue() will use to wait until there is space in the queue.

Bounded Buffer

#include <pthread.h>
#include <semaphore.h>

enum { MaxSize = 10 };

class BoundedBuffer {
  int _queue[MaxSize];
  int _head;
  int _tail;
  mutex_t _mutex;
  sem_t _emptySem;
  sem_t _fullSem;
public:
  BoundedBuffer();
  void enqueue(int val);
  int dequeue();
};

BoundedBuffer::BoundedBuffer() {
  _head = 0;
  _tail = 0;
  pthread_mutex_init(&_mutex, NULL);
  sem_init(&_emptySem, 0, 0);
  sem_init(&_fullSem, 0, MaxSize);
}

Bounded Buffer

void BoundedBuffer::enqueue(int val) {
  sem_wait(&_fullSem);
  mutex_lock(&_mutex);
  _queue[_tail] = val;
  _tail = (_tail + 1) % MaxSize;
  mutex_unlock(&_mutex);
  sem_post(&_emptySem);
}

int BoundedBuffer::dequeue() {
  sem_wait(&_emptySem);
  mutex_lock(&_mutex);
  int val = _queue[_head];
  _head = (_head + 1) % MaxSize;
  mutex_unlock(&_mutex);
  sem_post(&_fullSem);
  return val;
}
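As a usage sketch, a producer and a consumer thread could share the BoundedBuffer like this (the thread functions, item count, and main driver are illustrative assumptions; this presumes the class above is backed by real pthread mutex and POSIX semaphore calls):

#include <cstdio>

// Hypothetical driver showing how the BoundedBuffer above might be used.
BoundedBuffer buffer;

void *producer(void *arg) {
  for (int i = 0; i < 100; i++) {
    buffer.enqueue(i);            // blocks when the buffer is full
  }
  return NULL;
}

void *consumer(void *arg) {
  for (int i = 0; i < 100; i++) {
    int val = buffer.dequeue();   // blocks when the buffer is empty
    printf("got %d\n", val);
  }
  return NULL;
}

int main() {
  pthread_t p, c;
  pthread_create(&p, NULL, producer, NULL);
  pthread_create(&c, NULL, consumer, NULL);
  pthread_join(p, NULL);
  pthread_join(c, NULL);
  return 0;
}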

Bounded Buffer (assume the queue is empty)
T1: v = dequeue(); sem_wait(&_emptySem); _emptySem.count == -1; T1 waits.
T2: v = dequeue(); sem_wait(&_emptySem); _emptySem.count == -2; T2 waits.
T3: enqueue(6); sem_wait(&_fullSem); puts item in queue; sem_post(&_emptySem); _emptySem.count == -1; wakes up T1.
T1 continues and gets the item from the queue.

Bounded Buffer (assume the queue is empty; threads T1 through T10 each enqueue one item)
T1: enqueue(1); sem_wait(&_fullSem); _fullSem.count == 9; puts item in queue.
T2: enqueue(2); sem_wait(&_fullSem); _fullSem.count == 8; puts item in queue.
……
T10: enqueue(10); sem_wait(&_fullSem); _fullSem.count == 0; puts item in queue.

Bounded Buffer (continued)
T11: enqueue(11); sem_wait(&_fullSem); _fullSem.count == -1; T11 waits.
T12: val = dequeue(); sem_wait(&_emptySem); _emptySem.count == 9; gets item from queue; sem_post(&_fullSem); _fullSem.count == 0; wakes up T11.

Bounded Buffer Notes
The counter for _emptySem represents the number of items in the queue.
The counter for _fullSem represents the number of free slots in the queue.
Mutex locks are necessary to ensure that queue access is atomic.

Clicker Question 1
A POSIX pthread mutex may be normal/fast, recursive, or error-check, based on its behavior when (1) a thread that holds the mutex calls lock again; (2) a thread that does not hold the mutex calls unlock. Which of the following describes a normal/fast mutex?
A. (1) calling thread can continue; (2) report error
B. (1) report error; (2) succeeds
C. (1) calling thread is blocked; (2) undefined by POSIX
D. (1) report error; (2) report error
E. None of the above

Clicker Question 2
A binary semaphore can sometimes be used in place of a mutex; what is its behavior in the following situations: (1) a thread that has called sem_wait calls sem_wait again; (2) a thread that has not called sem_wait calls sem_post?
A. (1) calling thread continues; (2) report error
B. (1) calling thread blocked; (2) succeeds
C. (1) calling thread continues; (2) succeeds
D. (1) calling thread blocked; (2) report error
E. None of the above

Clicker Question 3
Consider the following code:

int semaphore_post(semaphore_t *sem) {
  pthread_mutex_lock(&(sem->mutex));
  sem->count++;
  pthread_mutex_unlock(&(sem->mutex));
  pthread_cond_signal(&(sem->cond));
}

What may go wrong?
A. Too many threads may be able to continue
B. A thread already waiting may not be correctly woken up
C. A new thread may not be correctly woken up
D. All of the above
E. None of the above

Read/Write Locks
Read/write locks protect data structures that can be read by multiple threads simultaneously (multiple readers) but that can be modified by only one thread at a time (a single writer).
Example uses: databases, lookup tables, dictionaries, etc., where lookups are more frequent than modifications.

Read/Write Locks
Multiple readers may read the data structure simultaneously.
Only one writer may modify it, and it needs to exclude the readers.
Interface:
ReadLock() – Lock for reading. Wait if there are writers holding the lock.
ReadUnlock() – Unlock for reading.
WriteLock() – Lock for writing. Wait if there are readers or writers holding the lock.
WriteUnlock() – Unlock for writing.
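For reference, POSIX also provides a ready-made read/write lock type, pthread_rwlock_t, with the same reader/writer semantics. A minimal usage sketch (the shared counter and the two helper functions are illustrative assumptions, not part of the original slides):

#include <pthread.h>

// Hypothetical shared data protected by a POSIX read/write lock.
pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int shared_value = 0;

int reader() {
  pthread_rwlock_rdlock(&rwlock);   // many readers may hold this at once
  int v = shared_value;
  pthread_rwlock_unlock(&rwlock);
  return v;
}

void writer(int v) {
  pthread_rwlock_wrlock(&rwlock);   // excludes readers and other writers
  shared_value = v;
  pthread_rwlock_unlock(&rwlock);
}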

Read/Write Locks
Example timeline (rl = readLock; ru = readUnlock; wl = writeLock; wu = writeUnlock):
R1, R2, and R3 each acquire the read lock (RL).
W calls writeLock (WL) and waits, because readers are holding the lock.
The readers call readUnlock (RU); once the last reader has unlocked, W continues.
R4 then calls readLock (RL) and waits, because W holds the lock in write mode.
W calls writeUnlock (WU), and R4 continues.

Read/Write Locks Implementation

class RWLock {
  int _nreaders;
  sem_t _semAccess;   // controls access for readers/writers
  mutex_t _mutex;
public:
  RWLock();
  void readLock();
  void writeLock();
  void readUnlock();
  void writeUnlock();
};

RWLock::RWLock() {
  _nreaders = 0;
  sem_init(&_semAccess, 1);
  mutex_init(&_mutex);
}

Read/Write Locks Implementation

void RWLock::readLock() {
  mutex_lock(&_mutex);
  _nreaders++;
  if (_nreaders == 1) {
    // This is the first reader: get _semAccess
    sem_wait(&_semAccess);
  }
  mutex_unlock(&_mutex);
}

void RWLock::readUnlock() {
  mutex_lock(&_mutex);
  _nreaders--;
  if (_nreaders == 0) {
    // This is the last reader: allow one writer to proceed, if any
    sem_post(&_semAccess);
  }
  mutex_unlock(&_mutex);
}

Read/Write Locks Implementation

void RWLock::writeLock() {
  sem_wait(&_semAccess);
}

void RWLock::writeUnlock() {
  sem_post(&_semAccess);
}
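A brief usage sketch of this RWLock class (the shared table and the two helper functions are illustrative assumptions, not part of the original slides; this presumes the class above is backed by real pthread and semaphore calls):

// Hypothetical use of the RWLock class above.
RWLock tableLock;
int table[100];

int sumTable() {
  tableLock.readLock();            // many readers may be inside concurrently
  int sum = 0;
  for (int i = 0; i < 100; i++) sum += table[i];
  tableLock.readUnlock();
  return sum;
}

void fillTable(int value) {
  tableLock.writeLock();           // excludes all readers and other writers
  for (int i = 0; i < 100; i++) table[i] = value;
  tableLock.writeUnlock();
}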

Read/Write Locks Example
Threads: R1, R2, R3, W1, W2
R1: readLock; _nreaders++ (now 1); since _nreaders == 1, sem_wait succeeds and R1 continues.
R2: readLock; _nreaders++ (now 2).
R3: readLock; _nreaders++ (now 3).
W1: writeLock; sem_wait blocks.

Read/Write Locks Example (continued)
W2: writeLock; sem_wait blocks.
R1: readUnlock(); _nreaders-- (now 2).
R2: readUnlock(); _nreaders-- (now 1).
R3: readUnlock(); _nreaders-- (now 0); since _nreaders == 0, sem_post; W1 continues.
W1: writeUnlock; sem_post; W2 continues.

Read/Write Locks Example (W2 is holding the lock in write mode)
R1: readLock; mutex_lock; _nreaders++ (now 1); since _nreaders == 1, sem_wait; R1 blocks while still holding the mutex.
R2: readLock; mutex_lock; R2 blocks waiting for the mutex.
W2: writeUnlock; sem_post; R1 continues; when R1 releases the mutex (mutex_unlock), R2 continues.

Notes on Read/Write Locks
Fairness in locking: first-come, first-served.
Mutexes and semaphores are fair: the thread that has been waiting the longest is the first one to wake up.
This implementation of read/write locks suffers from "starvation" of writers. That is, a writer may never be able to write if the number of readers is always greater than 0.

Write Lock Starvation (Overlapping Readers)
(rl = readLock; ru = readUnlock; wl = writeLock; wu = writeUnlock)
R1, R2, and R3 each acquire the read lock (RL), and W calls writeLock (WL) and waits.
The readers then keep overlapping: whenever one reader calls readUnlock (RU), another reader calls readLock (RL), so the number of readers never drops to 0 and W waits forever.

Review Questions
What are condition variables? What is the behavior of wait/signal on a CV?
How to implement semaphores using CV and mutex?
How to implement a bounded buffer using semaphores?

Review Questions
What are read/write locks? What is the behavior of read/write lock/unlock?
How to implement R/W locks using semaphores?
Why can the implementation given in the slides cause writer starvation?
How to implement a read/write lock where the writer is preferred (i.e., when a writer is waiting, no new reader can gain the read lock and must wait until all writers are finished)?