CS252: Systems Programming Ninghui Li Based on Slides by Prof. Gustavo Rodriguez-Rivera Topic 12: Thread-safe Data Structures, Semaphores.


Pthread Overview
- Pthreads: POSIX threads.
- Mutual exclusion and synchronization tools that we cover:
  - Mutex locks
  - Read/write locks
  - Semaphores (not part of pthreads, but part of the POSIX standard)
  - Condition variables: suspend the execution of a thread until some pre-defined condition occurs

Pthread Mutex Revisited

  int pthread_mutex_init(pthread_mutex_t *mutex,
                         const pthread_mutexattr_t *mutexattr);

Inside a pthread_mutexattr_t structure, one can specify 3 types of mutex:
- PTHREAD_MUTEX_NORMAL
- PTHREAD_MUTEX_RECURSIVE
- PTHREAD_MUTEX_ERRORCHECK

There Are Multiple Types of Mutex Locks
The type of a mutex determines its behavior in the following 2 cases:
  A: a thread that holds the lock calls lock again
  B: a thread that does not hold the lock calls unlock
- Recursive lock: A succeeds (locking n times requires unlocking n times); B returns an error.

    pthread_mutexattr_init(&Attr);
    pthread_mutexattr_settype(&Attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&Mutex, &Attr);

- Error-checking lock: returns an error on both A and B.
- Normal (fast) lock: deadlock on A; undefined behavior on B. The implementation does not need to remember which thread currently holds the mutex.

Which Type to Use
- It is recommended to use Recursive in production systems when available.
- When one has high confidence in and full control of the code, using Normal (fast) is fine, i.e., when one is certain that cases A and B cannot occur.

Behavior When Lock Is Unavailable
- Block until the lock is available: the behavior of a mutex lock.
  - Yield the CPU and try to obtain the lock when scheduled again; return when the lock is acquired.
  - Simply retrying whenever scheduled seems undesirable: since the thread is off the CPU anyway, it might as well wait until the lock is available.
- Busy-wait while the lock is unavailable: the behavior of a spinlock. Don't use a spinlock on a single CPU.
- Return "unable to obtain lock" immediately: to get this behavior in pthreads, use trylock, e.g., pthread_mutex_trylock().

Threads and Fork
What happens to the threads in a process when one thread calls fork()?
- Modern UNIX systems use the pthread_create semantics: only the calling thread is copied to the child.
- In Solaris, threads created using thr_create() are duplicated by fork(), while threads created using pthread_create() are not (except the thread calling fork()).
- In Solaris, the system call fork1() copies only the calling thread to the child process.

Thread Safety and Race Conditions
- Data structures and functions that can handle multiple threads are called "thread-safe", "multi-threaded (MT)", or "concurrent".
- A thread-unsafe function or data structure is one that can produce logical errors when executed by multiple threads.
- A bug caused by multiple threads modifying a data structure simultaneously is called a "race condition".
- Race conditions are difficult to debug because they are often difficult to reproduce.

A Thread-Unsafe List Class

  #include <cstddef>   // NULL

  struct ListEntry {
    int _val;
    ListEntry *_next;
  };

  class List {
    ListEntry *_head;
  public:
    List();
    void insert(int val);
    int remove();
  };

  List::List() {
    _head = NULL;
  }

  void List::insert(int val) {
    ListEntry *e = new ListEntry;
    e->_val = val;
    e->_next = _head;       // a)
    _head = e;              // b)
  }

A Thread-Unsafe List Class

  int List::remove() {
    ListEntry *tmp;
    tmp = _head;            // c)
    if (tmp == NULL) {
      return -1;
    }
    _head = tmp->_next;     // d)
    int val = tmp->_val;
    delete tmp;
    return val;
  }

Race Conditions in List
We can have the following race condition. Assume T1 calls remove() and T2 calls insert().

  Initially:  _head -> 1 -> 2 -> NULL

Race Conditions in List

  1. T1 remove():   c) tmp = _head;          (tmp -> 1)        [ctx switch]
  2. T2 insert(5):  a) e2->_next = _head;    (e2 -> 1)
  3. T2:            b) _head = e2;           (_head -> 5 -> 1 -> 2 -> NULL)
  4. T1:            d) _head = tmp->_next;   (_head -> 2 -> NULL)

  Node 5 is lost!!!

Race Conditions in List
- Now find a race condition involving two inserts.
- Now find a race condition involving two removes.

  Initially:  _head -> 1 -> 2 -> NULL

A Thread-Safe List Class

  #include <pthread.h>

  struct ListEntry {
    int _val;
    ListEntry *_next;
  };

  class MTList {
    ListEntry *_head;
    pthread_mutex_t _mutex;
  public:
    MTList();
    void insert(int val);
    int remove();
  };

  MTList::MTList() {
    _head = NULL;
    pthread_mutex_init(&_mutex, NULL);
  }

  void MTList::insert(int val) {
    ListEntry *e = new ListEntry;
    e->_val = val;
    pthread_mutex_lock(&_mutex);
    e->_next = _head;       // a)
    _head = e;              // b)
    pthread_mutex_unlock(&_mutex);
  }

A Thread-Safe List

  int MTList::remove() {
    ListEntry *tmp;
    pthread_mutex_lock(&_mutex);
    tmp = _head;            // c)
    if (tmp == NULL) {
      pthread_mutex_unlock(&_mutex);
      return -1;
    }
    _head = tmp->_next;     // d)
    pthread_mutex_unlock(&_mutex);
    int val = tmp->_val;
    delete tmp;
    return val;
  }

Clicker Question 1
Which of the following memory sections is not shared across multiple threads in one process?
A. The stack section
B. The heap section
C. The data section
D. The text section
E. None of the above

Clicker Question 2
Which of the following is not an advantage of using multiple threads compared with using multiple processes?
A. Creating a new thread is faster.
B. Context switching from one thread to another is faster.
C. Using multiple threads is more robust.
D. Communicating between multiple threads is easier and faster.
E. None of the above.

Clicker Question 3
Consider the following code from the C programming question in the midterm; we want to make it safe when multiple threads may call enqueue:

  1   void enqueue (… *q, char *v) {
  2     struct node *nd = malloc(…);
  3     nd->value = strdup(v);
  4     nd->next = NULL;
  5     if (q->tail == NULL) {
  6       q->tail = nd;
  7       q->head = nd;
  8     } else {
  9       q->tail->next = nd;
  10      q->tail = nd;
  11    }
  12    mutex_unlock(&(q->mutex));
  13  }

Suppose we want to add mutex_lock(…) as late as possible in the code. Which of the following is the latest while still being correct?
A. Before line 2
B. Before line 3
C. Before line 4
D. Before line 5
E. Before lines 6 & 9

Clicker Question 4
Consider the following code from the C programming question in the midterm; we want to make it safe when multiple threads may call dequeue:

  1   char *dequeue (struct queue *q) {
  2     mutex_lock(&(q->mutex));
  3     if (q->head == NULL) {
  4       return NULL; }
  5     if (q->tail == q->head) {
  6       q->tail = NULL; }
  7     char *s = q->head->value;
  8     struct node *nd = q->head;
  9     q->head = q->head->next;
  10    free(nd);
  11    return s;
  12  }

Where should we add mutex_unlock(…)?
A. Before line 10
B. Before line 11
C. Before lines 4 & 10
D. Before lines 5 & 11
E. None of the above

An Additional Requirement for the List Class
- The MTList implementation described before returns -1 if the list is empty.
- Suppose that we would like an implementation where a thread calling remove() waits if the list is empty.
- This behavior is naturally implemented using semaphores.

Semaphores
- A semaphore is initialized with a counter that is greater than or equal to 0, conceptually indicating the number of available resources.
- A sem_wait call decreases the counter, and blocks the thread if the value becomes negative.
- A sem_post call increases the counter, and unblocks one waiting thread.

Semaphore Calls
Declaration:

  #include <semaphore.h>
  sem_t sem;

Initialization:

  sem_init(sem_t *sem, int pshared, unsigned int value);

Decrement semaphore:

  sem_wait(sem_t *sem);

Increment semaphore:

  sem_post(sem_t *sem);

Man pages:

  man sem_wait

Semaphore Calls Semantics
Pseudocode:

  sem_init(sem_t *sem, counter) {
    sem->count = counter;
  }

  sem_wait(sem_t *sem) {
    sem->count--;
    if (sem->count < 0) {
      wait();
    }
  }

Semaphore Calls Semantics

  sem_post(sem_t *sem) {
    sem->count++;
    if (there is a thread waiting) {
      wake up the thread that has been waiting the longest;
    }
    // one out and can let one in
  }

Note: the mutex lock calls that protect the counter are omitted in the pseudocode for simplicity.

Semaphores vs. Mutex
- A semaphore can be initialized to any non-negative integer, while a mutex is binary.
- Binary semaphore vs. mutex:
  - With a mutex, only the thread holding the lock is supposed to unlock it.
  - With a semaphore, any thread can call sem_post to wake up a blocked thread.
- Semaphores are a signaling mechanism, not specifically for mutual exclusion.
- One can use a binary semaphore to achieve the effect of a mutex, if one is careful to ensure that sem_post is called only by a thread that first called sem_wait.

Use of the Semaphore Counter
- Mutex Lock Case: initial counter == 1
  - Can achieve the effect of a mutex lock, if one calls sem_post only after sem_wait.
- N Resources Case: initial counter == n > 1
  - Controls access to a fixed number n of resources.
  - Example: access to 5 printers. 5 computers can use them; the 6th computer will need to wait.
- Wait for an Event Case: initial counter == 0
  - Synchronization: wait for an event. A thread calling sem_wait will block until another thread sends an event by calling sem_post.

Example: Semaphore count=1 (Mutex Lock Case)
Assume: sem_t sem; sem_init(&sem, 1);

  T1                              T2
  sem_wait(&sem)
    sem->count-- (==0)
    does not wait
  ...
  [ctx switch]
                                  sem_wait(&sem)
                                    sem->count-- (==-1)
                                    if (sem->count < 0) wait
  [ctx switch]
  ...
  sem_post(&sem)
    sem->count++ (==0)
    wakeup T2
                                  T2 continues

Example: Semaphore count=3 (N Resources Case)
Assume: sem_t sem; sem_init(&sem, 3); (3 printers)

  T1                    T2                    T3
  sem_wait(&sem)
    count-- (==2)
    does not wait
    print
  ...
                        sem_wait(&sem)
                          count-- (==1)
                          does not wait
                          print
                        ...
                                              sem_wait(&sem)
                                                count-- (==0)
                                                does not wait
                                                print

Example: Semaphore count=3 (N Resources Case, continued)

  T4                    T5                    T1
  sem_wait(&sem)
    count-- (==-1)
    wait
                        sem_wait(&sem)
                          count-- (==-2)
                          wait
                                              finished printing
                                              sem_post(&sem)
                                                count++ (==-1)
                                                wakeup T4
  print

Example: Semaphore count=0 (Wait for an Event Case)
Assume: sem_t sem; sem_init(&sem, 0);

  T1                              T2
  // wait for event
  sem_wait(&sem)
    sem->count-- (==-1)
    wait
                                  // send event to T1
                                  sem_post(&sem)
                                    sem->count++ (==0)
                                    wakeup T1
  ...
  T1 continues

A Synchronized List Class
- We want to implement a List class where remove() blocks if the list is empty.
- To implement this class, we use a semaphore "_emptySem" initialized with a counter of 0.
- remove() calls sem_wait(&_emptySem) and blocks until insert() calls sem_post(&_emptySem).
- The counter of the semaphore equals the number of items in the list.

A Synchronized List Class

SyncList.h:

  #include <pthread.h>
  #include <semaphore.h>

  struct ListEntry {
    int _val;
    ListEntry *_next;
  };

  class SyncList {
    ListEntry *_head;
    pthread_mutex_t _mutex;
    sem_t _emptySem;
  public:
    SyncList();
    void insert(int val);
    int remove();
  };

SyncList.cc:

  SyncList::SyncList() {
    _head = NULL;
    pthread_mutex_init(&_mutex, NULL);
    sem_init(&_emptySem, 0, 0);
  }

  void SyncList::insert(int val) {
    ListEntry *e = new ListEntry;
    e->_val = val;
    pthread_mutex_lock(&_mutex);
    e->_next = _head;       // a)
    _head = e;              // b)
    pthread_mutex_unlock(&_mutex);
    sem_post(&_emptySem);
  }

Can one switch the last two lines (pthread_mutex_unlock and sem_post)?

A Synchronized List

  int SyncList::remove() {
    ListEntry *tmp;
    // Wait until the list is not empty
    sem_wait(&_emptySem);
    pthread_mutex_lock(&_mutex);
    tmp = _head;            // c)
    _head = tmp->_next;     // d)
    pthread_mutex_unlock(&_mutex);
    int val = tmp->_val;
    delete tmp;
    return val;
  }

Can one switch the first two lines (sem_wait and pthread_mutex_lock)?

Example: Removing Before Inserting

  T1                              T2
  remove()
  sem_wait(&_emptySem)
    (count==-1) wait
                                  insert(5)
                                  pthread_mutex_lock()
                                  a) b)
                                  pthread_mutex_unlock()
                                  sem_post(&_emptySem)
                                    wakeup T1
  T1 continues
  pthread_mutex_lock()
  c) d)
  pthread_mutex_unlock()

Example: Inserting Before Removing

  T1                              T2
  insert(7)
  pthread_mutex_lock()
  a) b)
  pthread_mutex_unlock()
  sem_post(&_emptySem) (count==1)
                                  starts running
                                  remove()
                                  sem_wait(&_emptySem)
                                    continue (count==0)
                                  pthread_mutex_lock()
                                  c) d)
                                  pthread_mutex_unlock()

Notes on the Synchronized List Class
- We need the mutex lock to enforce atomicity of the critical sections a) b) and c) d).
- The semaphore alone is not enough to enforce atomicity.

Notes on the Synchronized List Class

  int SyncList::remove() {
    ListEntry *tmp;
    sem_wait(&_emptySem);
    pthread_mutex_lock(&_mutex);
    tmp = _head;            // c)
    _head = tmp->_next;     // d)
    pthread_mutex_unlock(&_mutex);
    int val = tmp->_val;
    delete tmp;
    return val;
  }

The semaphore counter stays equal to N, the number of items in the list.

Notes on the Synchronized List Class
The sem_wait() call has to be done outside the mutex_lock/unlock section. Otherwise we can get a deadlock. Example:

  pthread_mutex_lock(&_mutex);
  sem_wait(&_emptySem);
  tmp = _head;            // c)
  _head = tmp->_next;     // d)
  pthread_mutex_unlock(&_mutex);

Notes on the Synchronized List Class

  T1                              T2
  remove()
  mutex_lock()
  sem_wait(&_emptySem)
    wait for sem_post from T2
                                  insert(5)
                                  pthread_mutex_lock()
                                    wait for mutex_unlock in T1

  Deadlock!!!!

Review Questions
- How do recursive mutex locks differ from regular mutex locks?
- Why might one prefer a mutex lock that is not recursive?
- How does one implement recursive mutex locks (part of Lab 4)?
- When a process with multiple threads calls fork(), which threads exist in the child process under pthread semantics?

Review Questions
- What is a race condition? (Given simple code, you should be able to come up with a race condition scenario.)
- What happens when sem_wait() or sem_post() is called? (You should be able to produce pseudocode.)
- What are the different uses of a semaphore when it is initialized with different values?
- What are the key differences between a binary semaphore and a mutex lock?
- How does one implement a synchronized list? Why do we need both a mutex and a semaphore?