Concurrency: Mutual Exclusion and Process Synchronization


Overview: Concurrency, Synchronization, Critical Section Problem, Race Conditions

Definition of terms: Concurrency
Concurrency is the execution of several instruction sequences at the same time. In an operating system, this happens when several processes or threads run in parallel. These processes or threads may communicate with each other through either shared memory or message passing.

Key terms related to Concurrency

Definition of terms: Process Synchronization
Process synchronization means sharing system resources among processes in such a way that concurrent access to shared data is coordinated, thereby minimizing the chance of inconsistent data. Process synchronization serves to handle the problems associated with multiple process executions, e.g. the race conditions discussed below.

Operating System Concerns
The OS must be able to keep track of the various processes.
The OS must allocate and de-allocate resources for each active process.
The OS must protect the data and physical resources of each process against unintended interference by other processes.

Multiple Processes
Central to the design of modern operating systems is the management of multiple processes and threads:
Multiprogramming: the management of multiple processes within a uniprocessor system.
Multiprocessing: the management of multiple processes within a multiprocessor.
Distributed processing: the management of multiple processes executing on multiple, distributed computer systems, e.g. clusters.
The big issue is concurrency: managing the interaction of all of these processes. Concurrency encompasses a host of design issues, including communication among processes, sharing of and competing for resources (such as memory, files, and I/O access), synchronization of the activities of multiple processes, and allocation of processor time to processes.

Concurrency
Concurrency arises in three contexts:
Multiple applications: multiprogramming was invented to allow processing time to be dynamically shared among a number of active applications.
Structured applications: as an extension of the principles of modular design and structured programming, some applications can be effectively programmed as a set of concurrent processes.
Operating system structure: the same structuring advantages apply to system programs, and operating systems are themselves often implemented as a set of processes or threads.
Concurrent access to shared data may result in data inconsistency, so mechanisms are needed to ensure the orderly execution of cooperating processes that share a logical address space and to keep the shared data consistent.

The critical section problem
A critical section is a code segment that accesses shared variables and has to be executed as an atomic action. The critical-section problem refers to the problem of ensuring that at most one process is executing its critical section at a given time: when one process is executing in its critical section, no other process is allowed to execute in its critical section, i.e. no two processes execute in their critical sections at the same time.

The critical section problem Contd… Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
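
A rough sketch of this structure in C-like pseudocode (entry_section(), exit_section() and remainder_section() are placeholder names, not a real API):

do {
    entry_section();        /* request permission to enter the critical section */

    /* critical section: access the shared variables */

    exit_section();         /* announce that the critical section is finished */

    remainder_section();    /* remaining code that does not touch shared data */
} while (1);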

Critical Sections

Requirements of the Critical Section Problem
A solution to the critical-section problem must satisfy the following three requirements:
a) Mutual exclusion. If a process is executing in its critical section, then no other processes can be executing in their critical sections.
b) Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

Requirements of the Critical Section Problem c) Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Solutions to the critical-section problem
a) Software solutions: semaphores, monitors
b) Hardware solutions: test-and-set, swap

The Critical Section Problem

i) Semaphores A semaphore is used to indicate the status of a resource and to lock a resource that is being used. A process needing the resource checks the semaphore to determine the status of the resource and then decides how to proceed. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait () and signal (). When one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
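
Conceptually, wait() and signal() behave as in the busy-waiting C sketch below; this is only an illustration of the semantics, since a real implementation makes each operation atomic and usually blocks the caller instead of spinning:

void wait(int *S) {
    while (*S <= 0)
        ;               /* busy-wait until the semaphore value is positive */
    (*S)--;             /* claim one unit of the semaphore */
}

void signal(int *S) {
    (*S)++;             /* release one unit of the semaphore */
}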

Types of semaphores
Operating systems often distinguish between two types of semaphore: counting and binary. The value of a counting semaphore can range over an unrestricted domain, while the value of a binary semaphore can range only between 0 and 1.

Binary semaphores Binary semaphores are also known as mutex locks, as they are locks that provide mutual exclusion. We can use binary semaphores to deal with the critical-section problem for multiple processes. The n processes share a semaphore, mutex, initialized to 1.
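
A minimal sketch using POSIX semaphores under pthreads (the shared counter and the four worker threads are illustrative; error handling is omitted):

/* compile with: cc mutex_demo.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                 /* binary semaphore shared by all threads */
int shared_counter = 0;      /* illustrative shared data */

void *worker(void *arg) {
    sem_wait(&mutex);        /* entry section: mutex--, blocks while it is 0 */
    shared_counter++;        /* critical section */
    sem_post(&mutex);        /* exit section: mutex++ */
    return NULL;             /* remainder section would follow here */
}

int main(void) {
    pthread_t t[4];
    sem_init(&mutex, 0, 1);  /* initialized to 1 */
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("shared_counter = %d\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}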

Counting Semaphores Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a signal() operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.
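
A sketch of the same idea with a counting semaphore guarding a pool of, say, three identical resource instances (POSIX semaphores again; the pool itself is left abstract):

#include <semaphore.h>

#define NUM_INSTANCES 3

sem_t resources;                 /* counting semaphore for the resource pool */

void init_pool(void) {
    sem_init(&resources, 0, NUM_INSTANCES);  /* count = number of instances */
}

void use_resource(void) {
    sem_wait(&resources);        /* count--; blocks when the count is 0 */
    /* ... acquire and use one instance of the resource ... */
    sem_post(&resources);        /* count++; may wake a blocked waiter */
}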

Competition among Processes for Resources
Competition among processes for resources raises three control problems. The first is mutual exclusion, e.g. exclusive access to a printer. Enforcing mutual exclusion leads to two additional problems: deadlock and starvation. Mutual exclusion can be achieved by locking a resource prior to its use.

Requirements for Mutual Exclusion
A process must not be delayed access to a critical section when there is no other process using it.
A process that halts in its non-critical section should not interfere with other processes.
When no process is in the critical section, a process requiring the critical section should be granted permission.
A process remains in its critical section only for a finite time.
No deadlock and no starvation.

Conditions to provide mutual exclusion
a) No two processes may be simultaneously inside their critical regions.
b) No assumptions are made about speeds or the number of CPUs.
c) No process running outside its critical region may block another process.
d) No process must wait forever to enter its critical region.

Example

if (x == 5) {        /* the check */
    y = x * 2;       /* the act   */
    /* If x is changed by another thread between the check (x == 5)
       and the act (y = x * 2), then y will not be equal to 10. */
}

Solution
To prevent race conditions, put a lock on the shared data to ensure that only one thread can access the data at a time. Note that the lock must be obtained before the check, not after it:

/* obtain the lock for x */
if (x == 5) {        /* the check */
    y = x * 2;       /* the act   */
    /* nothing can change x until the lock is released, therefore y == 10 */
}
/* release the lock for x */
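
In a pthreads program the lock would typically be a pthread_mutex_t; a minimal sketch of the same check-then-act fix (x, y and check_and_act() are illustrative, and every thread that writes x must take the same lock):

#include <pthread.h>

int x = 5, y = 0;
pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

void check_and_act(void) {
    pthread_mutex_lock(&x_lock);    /* obtain the lock for x */
    if (x == 5) {                   /* the check */
        y = x * 2;                  /* the act: x cannot change here, so y == 10 */
    }
    pthread_mutex_unlock(&x_lock);  /* release the lock */
}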

Race Conditions
A race condition occurs if two or more processes or threads access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place. Synchronization is needed to prevent race conditions from happening.

Race Condition
A race condition occurs when multiple processes or threads read and write data items in a way where the final result depends on the order of execution of the processes: the output depends on who finishes the race last. To guard against the race condition, we need to ensure that only one process at a time can be manipulating the shared data.
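
A classic illustration (a sketch, not from the slides): two threads each increment a shared counter one million times. Because counter++ is a load, an add and a store, increments from the two threads can interleave and be lost, so the printed total is often less than 2,000,000:

/* compile with: cc race_demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                        /* shared data, unprotected on purpose */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* frequently less than 2000000 */
    return 0;
}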

Race Condition Contd…
Synchronization is required to address race conditions. With the growth of multicore systems, there is an increased emphasis on developing multithreaded applications in which several threads, quite possibly sharing data, run in parallel on different processing cores. Clearly, we want any changes that result from such activities not to interfere with one another.

Tutorial
Explain the "Producer-Consumer Problem".
Give examples of race conditions in an OS.
Suggest solutions to race conditions.
Write notes on the following synchronization problems:
a) Dining Philosophers problem
b) Bounded Buffer problem
c) Readers-Writers problem