Concurrency, Race Conditions, Mutual Exclusion, Semaphores, Monitors, Deadlocks (Chapters 2 and 6 of Tanenbaum's Modern Operating Systems)


CS350/BU Concurrency versus Parallelism
- Concurrency: two or more control flows (threads) of execution share one or more CPUs.
  - The CPU scheduler is responsible for deciding when each thread gets to execute and on which CPU.
  - For example, even if there is only one CPU, two or more threads sharing that CPU is still considered concurrent execution.
- Parallelism: a subset of concurrency in which two or more threads execute at the same real time on two or more CPUs.
  - For example, three threads executing simultaneously on three different CPUs.
- Note: we use the term "thread" loosely here to refer to either threads or processes.

CS350/BU Critical Sections
- A critical section is a section of code that accesses or modifies a shared resource that another process can modify concurrently.
- Examples:
  - Code that reads from or writes to a shared memory region.
  - Code that modifies or traverses a linked list that another thread can access concurrently.

CS350/BU Race Condition
- Incorrect behaviour of a program caused by concurrent execution of critical sections by two or more threads.
- For example, thread 1 deletes an entry in a linked list while thread 2 is accessing the same entry. A simpler race, on a shared counter, is sketched below.
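A minimal sketch (not from the slides) of a race condition: two pthreads increment a shared counter with no locking, so updates are lost and the final value is usually less than expected. The names counter, worker and NITER are illustrative.

    /* Two threads race on an unprotected shared counter.  Because
     * "counter++" compiles to a load, an add and a store, interleaved
     * execution can lose updates.  Compile with: gcc race.c -pthread  */
    #include <pthread.h>
    #include <stdio.h>

    #define NITER 1000000
    static long counter = 0;                /* shared, unprotected */

    static void *worker(void *arg)
    {
        for (int i = 0; i < NITER; i++)
            counter++;                      /* critical section with no lock */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("expected %d, got %ld\n", 2 * NITER, counter);  /* usually smaller */
        return 0;
    }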

CS350/BU Deadlocks
- A deadlock occurs when two or more processes stop making progress indefinitely because they are all waiting for inter-dependent events to occur.
- For example:
  - Process A waits for process B to release a resource, and
  - Process B is at the same time waiting for process A to release another resource.
- In this case, neither A nor B can proceed, because each is waiting for the other.

Solving Race Conditions

CS350/BU Mutual Exclusion
The requirement that two or more related processes never execute their critical regions (code touching the same shared resource) at the same time, in order to avoid race conditions.

CS350/BU Mutual Exclusion
Four conditions for a correct mutual exclusion solution:
1. No two processes may be simultaneously inside their critical regions.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block another process.
4. No process should have to wait forever to enter its critical region.
   - Waiting forever indicates deadlock or starvation.

CS350/BU Mutual Exclusion with Busy Waiting
[Figure: busy-waiting code for Process 0 and Process 1; a sketch follows below.]
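One busy-waiting scheme, strict alternation with a shared turn variable, can be reconstructed roughly as follows (a sketch in the spirit of Tanenbaum's figure; critical_region() and noncritical_region() are placeholders for application code). Note that it violates condition 3 above: a process can be kept out of its critical region even when the other process is busy elsewhere.

    /* Strict alternation: "turn" records whose turn it is to enter the
     * critical region.  Each process busy-waits (spins) until it is its
     * turn, then hands the turn to the other process on the way out.   */
    volatile int turn = 0;                 /* shared by the two processes */

    void critical_region(void)    { /* placeholder: touch shared data */ }
    void noncritical_region(void) { /* placeholder: other work        */ }

    void process_0(void)
    {
        while (1) {
            while (turn != 0)              /* busy wait                 */
                ;
            critical_region();
            turn = 1;                      /* let process 1 go next     */
            noncritical_region();
        }
    }

    void process_1(void)
    {
        while (1) {
            while (turn != 1)              /* busy wait                 */
                ;
            critical_region();
            turn = 0;                      /* let process 0 go next     */
            noncritical_region();
        }
    }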

CS350/BU Semaphore
- A semaphore is a powerful synchronization primitive that can be used both for inter-process synchronization and for locking around critical regions.
- A semaphore is basically a special type of integer on which only two operations can be performed:
  - DOWN(sem)
  - UP(sem)

CS350/BU The DOWN(sem) Operation
- If (sem > 0):
  - The operation simply decrements the value of the semaphore sem by 1 and the calling process continues executing.
  - This is called a "successful" DOWN operation.
- If (sem == 0):
  - The operation puts the calling process to sleep, i.e. the calling process is placed in the "blocked" state.
  - The process continues to sleep until some other process performs an UP operation on the semaphore.
  - At that point the process wakes up and tries to perform DOWN again.
  - If it succeeds, it moves to the "ready" state and continues executing; otherwise it goes back to sleep.

CS350/BU The UP(sem) Operation
- This operation increments the value of the semaphore sem by 1.
- If the original value of the semaphore was 0, the UP operation wakes up any processes that were sleeping in DOWN(sem).
- All woken processes compete to perform DOWN(sem) again.
- Only one of them succeeds; the rest go back to sleep until the next UP(sem) operation.

CS350/BU Mutex
- A mutex is simply a binary semaphore: it can have a value of either 0 or 1.
- A mutex is normally used as a lock around critical sections.
- Locking a mutex: Down(mutex)
  - If mutex == 1, decrement the mutex value to 0.
  - Else, sleep until someone performs an UP.
- Unlocking a mutex: Up(mutex)
  - Increment the mutex value to 1.
  - Wake up all sleepers on Down(mutex); only one succeeds in acquiring the mutex, the rest go back to sleep.
- For example:
    Down(mutex)   // acquire the lock; sleep if mutex is 0
    ... critical section ...
    Up(mutex)     // release the lock; wake up sleepers
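The same lock / critical section / unlock pattern in real code, assuming POSIX threads (the slide's Down/Up notation is pseudocode; pthread_mutex_lock and pthread_mutex_unlock play the same roles, and shared_balance is just an illustrative shared variable):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_balance = 0;         /* illustrative shared data      */

    void deposit(int amount)
    {
        pthread_mutex_lock(&lock);         /* Down(mutex): may block        */
        shared_balance += amount;          /* critical section              */
        pthread_mutex_unlock(&lock);       /* Up(mutex): wake up a waiter   */
    }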

CS350/BU Example: Producer-Consumer Problem
- Producers and consumers run in concurrent processes.
- Producers produce data and consumers consume data.
- A producer informs consumers when data is available.
- A consumer informs producers when a buffer slot is empty.
- Two types of synchronization are needed:
  - Locking the buffer to prevent concurrent modification.
  - Informing the other side that data (or buffer space) is available.
[Figure: producers and consumers sharing a common buffer with full and empty slots.]

CS350/BU Using Semaphores for the P-C problem
[Figure: semaphore-based producer-consumer code; a sketch follows below.]
- Note: two types of semaphores are used here:
  - A binary semaphore (mutex) to lock the queue.
  - Regular semaphores (empty and full) to block on events.
- Up: increments the value of the semaphore, wakes up sleepers to compete on the semaphore.
- Down: decrements the semaphore, but blocks the caller if the semaphore value is 0.
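A sketch of the missing code, written in the down/up notation used on these slides (so pseudocode rather than directly compilable C); insert_item, remove_item, produce_item and consume_item stand in for the real buffer routines, and N is the number of buffer slots:

    #define N 100                          /* number of slots in the buffer */

    semaphore mutex = 1;                   /* binary semaphore: buffer lock */
    semaphore empty = N;                   /* counts empty slots            */
    semaphore full  = 0;                   /* counts full slots             */

    void producer(void)
    {
        int item;
        while (1) {
            item = produce_item();
            down(&empty);                  /* wait for a free slot          */
            down(&mutex);                  /* lock the buffer               */
            insert_item(item);
            up(&mutex);                    /* unlock the buffer             */
            up(&full);                     /* tell consumers: data ready    */
        }
    }

    void consumer(void)
    {
        int item;
        while (1) {
            down(&full);                   /* wait for data                 */
            down(&mutex);                  /* lock the buffer               */
            item = remove_item();
            up(&mutex);                    /* unlock the buffer             */
            up(&empty);                    /* tell producers: slot free     */
            consume_item(item);
        }
    }

Note how each side does down(&empty) or down(&full) before taking the mutex; reversing that order could leave a process holding the buffer lock while sleeping, which would deadlock the other side.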

CS350/BU Using Semaphores – POSIX interface
- sem_open() -- Connects to, and optionally creates, a named semaphore.
- sem_init() -- Initializes an unnamed semaphore (internal to the calling program, so not a named semaphore).
- sem_wait() -- Decrements the semaphore; blocks if its value is currently zero.
- sem_trywait() -- Like sem_wait(), but returns an error instead of blocking if the value is zero.
- sem_post() -- Increments the count of the semaphore and wakes up a waiter.
- sem_close() -- Ends the connection to an open named semaphore.
- sem_unlink() -- Removes the name of a named semaphore; the semaphore is destroyed once the last process closes it.
- sem_destroy() -- Destroys an unnamed semaphore that was initialized with sem_init().
- sem_getvalue() -- Copies the value of the semaphore into the specified integer.
- Semaphore overview: do "man sem_overview" on any Linux machine. A minimal usage sketch follows below.
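A minimal runnable sketch of the unnamed-semaphore calls (sem_init, sem_wait, sem_post, sem_destroy); the thread and variable names are illustrative, not from the slides. One thread blocks in sem_wait() until the other calls sem_post().

    /* Compile with: gcc sem_demo.c -pthread */
    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t ready;                    /* signals "data is available"       */
    static int data;                       /* illustrative shared value         */

    static void *producer(void *arg)
    {
        data = 42;                         /* produce something                 */
        sem_post(&ready);                  /* Up: wake the waiting consumer     */
        return NULL;
    }

    static void *consumer(void *arg)
    {
        sem_wait(&ready);                  /* Down: block until posted          */
        printf("consumed %d\n", data);
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&ready, 0, 0);            /* pshared=0: threads of one process */
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&ready);
        return 0;
    }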

CS350/BU Another way of using Semaphores - System V interface
- Creation:
    int semget(key_t key, int nsems, int semflg);
  Sets the semaphore values to zero.
- Initialization (NOT atomic with creation!):
    union semun arg;
    arg.val = 1;
    if (semctl(semid, 0, SETVAL, arg) == -1) {
        perror("semctl");
        exit(1);
    }
- Increment / decrement / test-and-set:
    int semop(int semid, struct sembuf *sops, unsigned int nsops);
- Deletion:
    semctl(semid, 0, IPC_RMID, 0);
- Examples: seminit.c, semdemo.c, semrm.c
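A sketch of down and up built on semop(); the wrapper names sem_down and sem_up are illustrative, and the semaphore set is assumed to contain a single semaphore at index 0. A negative sem_op decrements (and may block); a positive sem_op increments and wakes sleepers.

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    void sem_down(int semid)
    {
        struct sembuf op = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
        semop(semid, &op, 1);              /* blocks if the value is 0          */
    }

    void sem_up(int semid)
    {
        struct sembuf op = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 };
        semop(semid, &op, 1);              /* increments, wakes a sleeper       */
    }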

Atomic Locking – TSL Instruction

CS350/BU Test-and-Set Lock (TSL) Instruction
- Instruction format: TSL Register, Lock
- Lock: located in memory; has a value of 0 or 1.
- Register: one of the CPU registers.
- TSL performs the following two steps atomically:
  1. Copies the current value of Lock into Register.
  2. Sets the value of Lock to 1.
- TSL is a basic primitive from which more complex locking mechanisms can be built.

CS350/BU Busy-waiting solution implemented using the TSL instruction
[Figure: entering and leaving a critical region using the TSL instruction; a C approximation follows below.]
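Tanenbaum's figure is written in assembly around the TSL instruction itself; as a C approximation (an assumption of this summary), the GCC builtin __sync_lock_test_and_set can stand in for TSL:

    static volatile int lock = 0;          /* 0 = free, 1 = taken               */

    void enter_region(void)
    {
        /* Atomically fetch the old value of lock and set lock to 1;
         * keep retrying (busy waiting) as long as the old value was 1. */
        while (__sync_lock_test_and_set(&lock, 1) != 0)
            ;                              /* spin: someone else holds the lock */
    }

    void leave_region(void)
    {
        __sync_lock_release(&lock);        /* store 0 back into lock            */
    }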

CS350/BU Implementation of Mutex Using TSL
[Figure: mutex_lock/mutex_unlock built on TSL; a C approximation follows below.]
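A C approximation of the mutex version (again using a GCC builtin in place of TSL, which is an assumption of this sketch): instead of spinning, a thread that finds the mutex taken gives up the CPU with sched_yield() and retries later.

    #include <sched.h>

    void mutex_lock(volatile int *m)       /* *m: 0 = unlocked, 1 = locked      */
    {
        while (__sync_lock_test_and_set(m, 1) != 0)
            sched_yield();                 /* mutex busy: let another thread run */
    }

    void mutex_unlock(volatile int *m)
    {
        __sync_lock_release(m);            /* set the mutex back to 0           */
    }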

CS350/BU Implementing a generic semaphore
Use a mutex to guard changes to the semaphore value sem:

    down:
        mutex_lock(&mutex);
        if (sem == 0) {
            add_to_wake_q(&sem);
            mutex_unlock(&mutex);
            sleep_on_sem(&sem);
            goto down;                 /* try again      */
        }
        sem--;                         /* down operation */
        mutex_unlock(&mutex);

    up:
        mutex_lock(&mutex);
        sem++;                         /* up operation   */
        wake_all_sleepers();
        mutex_unlock(&mutex);

Note: sleep_on_sem(&sem) should check whether the wake-up signal has already been delivered.

Monitors and Condition Variables

CS350/BU Monitors and condition variables
- A monitor is a collection of procedures (functions).
- Only one procedure can be active inside the monitor at a time.
- wait(c): releases the lock on the monitor and puts the calling process to sleep; it automatically re-acquires the lock upon return from wait(c).
- signal(c): wakes up the processes sleeping on c; the woken processes then compete to obtain the lock on the monitor.
[Figure: a monitor with Function1() calling wait(c) and Function2() calling signal(c).]

CS350/BU P-C problem with monitors and condition variables
[Figure: monitor-based producer-consumer code; a pthread-based sketch follows below.]
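C has no built-in monitors, so a common approximation (an assumption of this sketch, not the slides' own code) uses a pthread mutex as the monitor lock plus two condition variables; insert_item and remove_item are placeholders for the real buffer code, and N is the buffer size.

    #include <pthread.h>

    #define N 100                                  /* buffer slots              */

    void insert_item(int item);                    /* placeholder: real buffer  */
    int  remove_item(void);                        /* placeholder: real buffer  */

    static pthread_mutex_t mon       = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static int count = 0;                          /* items in the buffer       */

    void produce(int item)
    {
        pthread_mutex_lock(&mon);                  /* enter the "monitor"       */
        while (count == N)                         /* buffer full: wait         */
            pthread_cond_wait(&not_full, &mon);    /* releases mon; re-acquires on wakeup */
        insert_item(item);
        count++;
        pthread_cond_signal(&not_empty);           /* wake a waiting consumer   */
        pthread_mutex_unlock(&mon);                /* leave the "monitor"       */
    }

    int consume(void)
    {
        int item;
        pthread_mutex_lock(&mon);
        while (count == 0)                         /* buffer empty: wait        */
            pthread_cond_wait(&not_empty, &mon);
        item = remove_item();
        count--;
        pthread_cond_signal(&not_full);            /* wake a waiting producer   */
        pthread_mutex_unlock(&mon);
        return item;
    }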

Solving Deadlocks

CS350/BU Deadlock when using multiple locks
- Say you have two processes, P1 and P2, and both need to acquire two locks, L1 and L2, to access a resource.
- Consider the following sequence:
  1. P1 acquires L1
  2. P2 acquires L2
  3. P1 tries to acquire L2 and blocks
  4. P2 tries to acquire L1 and blocks
  - We have a deadlock!
- Solution: sort the locks in a fixed order (say L1 followed by L2) and always acquire locks in the sorted order:
  1. P1 acquires L1
  2. P2 tries to acquire L1 and blocks
  3. P1 acquires L2, executes its critical section, releases L2, then releases L1
  4. P2 wakes up, acquires L1, acquires L2, executes its critical section, releases L2, then releases L1
  - No deadlock!
- Basic principle: all processes must acquire locks in the same order. The same principle applies to any number of locks and any number of processes. A sketch follows below.
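A minimal sketch of the lock-ordering rule with two pthread mutexes (the names L1, L2 and use_resource are illustrative): if every thread takes L1 before L2, the circular wait that causes deadlock cannot form.

    #include <pthread.h>

    static pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

    void use_resource(void)
    {
        pthread_mutex_lock(&L1);           /* always take L1 first              */
        pthread_mutex_lock(&L2);           /* then L2                           */
        /* ... critical section using the shared resource ... */
        pthread_mutex_unlock(&L2);         /* release in reverse order          */
        pthread_mutex_unlock(&L1);
    }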

CS350/BU Priority Inversion
- Say there are three processes under priority-based scheduling:
  - Ph: high priority
  - Pm: medium priority
  - Pl: low priority
- Consider this sequence:
  1. Pl acquires a lock L and starts executing its critical section.
  2. Ph tries to acquire lock L and blocks.
  3. Pm becomes "ready" and preempts Pl from the CPU.
- Now a high-priority process Ph is blocked waiting for a low-priority process Pl, and Pl cannot proceed because a medium-priority process Pm is executing.
- Solution: priority inheritance. Temporarily raise the priority of Pl to high priority so that it can exit its critical section quickly and allow Ph to execute.

CS350/BU Dining Philosophers (1)
- N philosophers alternately eat and think.
- Eating requires 2 forks per philosopher, but only N forks are available.
- Suppose all philosophers get hungry at the same time:
  - Everyone picks up their left fork simultaneously.
  - Then everyone tries to pick up their right fork, and there is a deadlock.
- How do we prevent the deadlock?
  - Besides giving them more forks for better hygiene!

CS350/BU Dining Philosophers (2)
[Figure: a non-solution to the dining philosophers problem; a sketch follows below.]
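A reconstruction of the non-solution in the slides' pseudocode style: each philosopher takes the left fork and then the right one. If all N philosophers run in lockstep, every one of them holds a left fork and waits forever for the right one, which is exactly the deadlock described above. think, eat, take_fork and put_fork are placeholders.

    #define N 5                            /* number of philosophers            */

    void philosopher(int i)                /* i: philosopher number, 0..N-1     */
    {
        while (1) {
            think();
            take_fork(i);                  /* take left fork                    */
            take_fork((i + 1) % N);        /* take right fork: may deadlock     */
            eat();
            put_fork(i);                   /* put left fork back                */
            put_fork((i + 1) % N);         /* put right fork back               */
        }
    }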

CS350/BU Dining Philosophers (3)
[Figure: solution to the dining philosophers problem, part 1; see the combined sketch after part 2.]

CS350/BU Dining Philosophers (4)
[Figure: solution to the dining philosophers problem, part 2; a combined sketch of parts 1 and 2 follows below.]
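A combined reconstruction of parts 1 and 2, again in the down/up semaphore pseudocode used on these slides (so not directly compilable): a philosopher records that it is HUNGRY and only switches to EATING when neither neighbour is eating, so no one ever ends up holding just one fork. think and eat remain placeholders.

    #define N        5                     /* number of philosophers            */
    #define LEFT     ((i + N - 1) % N)     /* i's left neighbour                */
    #define RIGHT    ((i + 1) % N)         /* i's right neighbour               */
    #define THINKING 0
    #define HUNGRY   1
    #define EATING   2

    int state[N];                          /* everyone starts out THINKING      */
    semaphore mutex = 1;                   /* guards the state array            */
    semaphore s[N];                        /* one per philosopher, all start 0  */

    void philosopher(int i)
    {
        while (1) {
            think();
            take_forks(i);                 /* get both forks or block           */
            eat();
            put_forks(i);                  /* put both forks back               */
        }
    }

    void take_forks(int i)
    {
        down(&mutex);                      /* enter critical region             */
        state[i] = HUNGRY;
        test(i);                           /* try to acquire both forks         */
        up(&mutex);                        /* leave critical region             */
        down(&s[i]);                       /* block if the forks were not acquired */
    }

    void put_forks(int i)
    {
        down(&mutex);
        state[i] = THINKING;
        test(LEFT);                        /* see if the left neighbour can eat now  */
        test(RIGHT);                       /* see if the right neighbour can eat now */
        up(&mutex);
    }

    void test(int i)
    {
        if (state[i] == HUNGRY &&
            state[LEFT] != EATING && state[RIGHT] != EATING) {
            state[i] = EATING;
            up(&s[i]);                     /* let philosopher i proceed         */
        }
    }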