
Concurrency: Mutual Exclusion and Process Synchronization




1 Concurrency: Mutual Exclusion and Process Synchronization

2 Overview
Concurrency
Synchronization
Critical Section Problem
Race Conditions

3 Definition of terms Concurrency
Concurrency is the execution of several instruction sequences at the same time. In an operating system, this happens when there are several process threads running in parallel. These threads may communicate with each other through either shared memory or message passing.
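As a rough illustration (a sketch only, using POSIX threads; task, t1, and t2 are names invented for this example), two threads created by the same process execute their instruction sequences concurrently and share the process's memory:

    #include <pthread.h>
    #include <stdio.h>

    /* two instruction sequences that may run at the same time */
    void *task(void *name) {
        printf("%s is running\n", (char *)name);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task, "thread A");
        pthread_create(&t2, NULL, task, "thread B");  /* may run in parallel with t1 */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }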

4 Key terms related to Concurrency

5 Definition of terms Process Synchronization
Process synchronization means sharing system resources among processes in such a way that concurrent access to shared data is coordinated, minimizing the chance of inconsistent data. Process synchronization serves to handle the problems associated with multiple process executions.

6 Operating System Concerns
The OS must be able to keep track of the various processes.
The OS must allocate and de-allocate resources for each active process.
The OS must protect the data and physical resources of each process against unintended interference by other processes.

7 Multiple Processes
Central to the design of modern operating systems is the management of multiple processes and threads:
Multiprogramming: the management of multiple processes within a uniprocessor system.
Multiprocessing: the management of multiple processes within a multiprocessor.
Distributed processing: the management of multiple processes executing on multiple, distributed computer systems, e.g. clusters.
The big issue is concurrency: managing the interaction of all of these processes. Concurrency encompasses a host of design issues, including communication among processes, sharing of and competing for resources (such as memory, files, and I/O access), synchronization of the activities of multiple processes, and allocation of processor time to processes.

8 Concurrency
Concurrency arises in three contexts:
Multiple applications: multiprogramming was invented to allow processing time to be dynamically shared among a number of active applications.
Structured applications: as an extension of the principles of modular design and structured programming, some applications can be effectively programmed as a set of concurrent processes.
Operating system structure: the same structuring advantages apply to systems programs; operating systems are themselves often implemented as a set of processes or threads.
Concurrent access to shared data may result in data inconsistency, so mechanisms are needed to ensure the orderly execution of cooperating processes that share a logical address space and to keep data consistent.

9 The critical section problem
A critical section is a code segment that accesses shared variables and has to be executed as an atomic action. The critical-section problem is the problem of ensuring that at most one process is executing its critical section at a given time: when one process is executing in its critical section, no other process is allowed to execute in its critical section, i.e. no two processes execute in their critical sections at the same time.

10 The critical section problem Contd…
Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
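In outline, each process repeatedly executes the following structure (a generic sketch; entry_section and exit_section stand for whatever mechanism implements the request and the release):

    do {
        /* entry section: request permission to enter */
        /* critical section: code that accesses shared variables */
        /* exit section: announce that the critical section is free */
        /* remainder section: the rest of the code */
    } while (true);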

11 Critical Sections

12 Requirements of the Critical Section Problem
A solution to the critical-section problem must satisfy the following three requirements:
a) Mutual exclusion. If process P1 is executing in its critical section, then no other processes can be executing in their critical sections.
b) Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

13 Requirements of the Critical Section Problem
c) Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

14 Solutions to the critical-section problem
a) Software solutions:
    Semaphores
    Monitors
b) Hardware solutions:
    Test and set
    Swap
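As a sketch of the hardware approach, a lock can be built from an atomic test-and-set operation. The code below uses C11's atomic_flag as a stand-in for the hardware instruction; acquire and release are names chosen for this example, not part of any standard API:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;        /* clear = free, set = held */

    void acquire(void) {
        /* atomically set the flag and return its previous value;
           keep spinning while another process already holds the lock */
        while (atomic_flag_test_and_set(&lock))
            ;                                   /* busy-wait */
    }

    void release(void) {
        atomic_flag_clear(&lock);               /* let a waiting process enter */
    }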

15 The Critical Section Problem

16 i) Semaphores A semaphore is used to indicate the status of a resource and to lock a resource that is being used. A process needing the resource checks the semaphore to determine the status of the resource and then decides how to proceed. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait () and signal (). When one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
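The classical definitions of the two operations can be sketched in C-like pseudocode as below; a real implementation must execute each operation atomically (for example with hardware support), and this simple version busy-waits:

    wait(S) {
        while (S <= 0)
            ;              /* busy-wait until the semaphore becomes positive */
        S = S - 1;
    }

    signal(S) {
        S = S + 1;
    }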

17 Types of semaphores
Operating systems often distinguish between two types of semaphore: counting semaphores and binary semaphores. The value of a counting semaphore can range over an unrestricted domain, while the value of a binary semaphore can range only between 0 and 1.

18 Binary semaphores Binary semaphores are also known as mutex locks, as they are locks that provide mutual exclusion. We can use binary semaphores to deal with the critical-section problem for multiple processes. The n processes share a semaphore, mutex, initialized to 1.
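A sketch of this use of a binary semaphore with the POSIX semaphore API (the critical-section body and the function names setup and enter_and_leave are placeholders for this example):

    #include <semaphore.h>

    sem_t mutex;                        /* shared by all n processes/threads */

    void setup(void) {
        sem_init(&mutex, 0, 1);         /* binary semaphore initialized to 1 */
    }

    void enter_and_leave(void) {
        sem_wait(&mutex);               /* entry section: wait(mutex) */
        /* ... critical section: access the shared data ... */
        sem_post(&mutex);               /* exit section: signal(mutex) */
        /* ... remainder section ... */
    }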

19 Counting Semaphores Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a signal() operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.
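A sketch of a counting semaphore guarding a pool of identical resources, again with POSIX semaphores; NUM_INSTANCES and use_resource are assumptions made for this example:

    #include <semaphore.h>

    #define NUM_INSTANCES 5             /* assumed size of the resource pool */

    sem_t resources;

    void setup(void) {
        sem_init(&resources, 0, NUM_INSTANCES);   /* count = available instances */
    }

    void use_resource(void) {
        sem_wait(&resources);           /* take one instance; blocks when count is 0 */
        /* ... use the resource ... */
        sem_post(&resources);           /* return the instance; wakes a waiting process */
    }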

20 Competition among Processes for Resources
Competition for resources raises three control problems:
Mutual exclusion, e.g. exclusive use of a printer.
Deadlock and starvation, two additional problems that enforcing mutual exclusion can lead to.
Mutual exclusion can be achieved by locking a resource prior to its use.

21 Requirements for Mutual Exclusion
A process must not be delayed access to a critical section when there is no other process using it.
A process that halts in its non-critical section should not interfere with other processes.
When no process is in the critical section, a process requiring entry should be granted permission.
A process remains in its critical section only for a finite time.
No deadlock and no starvation.

22 Conditions to provide mutual exclusion
a) No two processes may be simultaneously in their critical regions.
b) No assumptions may be made about speeds or numbers of CPUs.
c) No process running outside its critical region may block another process.
d) No process must wait forever to enter its critical region.

23 Example
if (x == 5) {            // the check
    y = x * 2;           // the act
    /* if x is changed by another thread between the check (x == 5)
       and the act (y = x * 2), then y will not be equal to 10 */
}

24 Solution
To prevent race conditions from occurring, put a lock on the shared data so that only one thread can access the data at a time:

// obtain lock for x
if (x == 5) {            // the check
    y = x * 2;           // the act
    /* nothing can change x until the lock is released, therefore y == 10 */
}
// release lock for x
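One concrete way to realize this sketch is with a POSIX mutex; x and y are the variables from the example, while x_lock and check_and_act are names invented here. Note that the scheme only works if every thread that modifies x acquires the same lock:

    #include <pthread.h>

    int x = 5;
    int y = 0;
    pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

    void check_and_act(void) {
        pthread_mutex_lock(&x_lock);    /* obtain lock for x */
        if (x == 5)                     /* the check */
            y = x * 2;                  /* the act: x cannot change here, so y == 10 */
        pthread_mutex_unlock(&x_lock);  /* release the lock */
    }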

25 Race Conditions
A race condition occurs if two or more processes/threads access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place. Synchronization is needed to prevent race conditions from happening.

26 Race Condition
A race condition occurs when:
Multiple processes or threads read and write the same data items.
They do so in such a way that the final result depends on the order of execution of the processes; the output depends on who finishes the race last.
To guard against race conditions, we need to ensure that only one process at a time can be manipulating the shared data.
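The classic illustration is an unprotected shared counter (a sketch with POSIX threads; counter and increment are names chosen for this example):

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                    /* shared data item */

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++)
            counter++;                  /* not atomic: load, add, store */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* the result depends on how the two threads interleave:
           often less than 200000 because some updates are lost */
        printf("counter = %d\n", counter);
        return 0;
    }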

27 Race Condition Contd…
Synchronization is required to address race conditions. With the growth of multicore systems, there is an increased emphasis on developing multithreaded applications in which several threads, quite possibly sharing data, run in parallel on different processing cores. Clearly, we want any changes that result from such activities not to interfere with one another.

28 Tutorial
Explain the "Producer-Consumer Problem".
Give examples of race conditions in operating systems.
Suggest solutions to race conditions.
Write notes on the following synchronization problems:
a) Dining Philosophers problem
b) Bounded Buffer problem
c) Readers-Writers problem

