Concurrent Programming Problems OS Spring 2011

Concurrency pros and cons
Concurrency is good for users
– One of the reasons for multiprogramming: working on the same problem, simultaneous execution of programs, background execution
Concurrency is a "pain in the neck" for the system
– Concurrent access to shared data structures
– Danger of deadlock due to resource contention

Linked list example
insert_after(current, new):
    new->next = current->next
    current->next = new
[Diagram: the list before and after each of the two pointer updates]

Linked list example
remove_next(current):
    tmp = current->next
    current->next = current->next->next
    free(tmp)
[Diagram: the node after current is unlinked and then freed]

Linked list example: an interleaving gone wrong
Process A starts insert_after(current, new) and completes its first step:
    new->next = current->next
Process B then runs remove_next(current), unlinking and freeing the very node that new->next now points to.
Process A completes its second step:
    current->next = new
The list now contains new, whose next pointer refers to freed memory.
[Diagram: the list after each interleaved step]
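
To make the race concrete, here is a minimal C sketch of the two operations; the node struct and the exact signatures are assumptions, since the slides only show diagrams:

#include <stdlib.h>

/* Hypothetical node type; the slides only draw the list. */
struct node {
    int          value;
    struct node *next;
};

/* Insert new right after current: two separate, non-atomic steps. */
void insert_after(struct node *current, struct node *new) {
    new->next     = current->next;   /* step 1 */
    current->next = new;             /* step 2 */
}

/* Unlink and free the node following current: three non-atomic steps. */
void remove_next(struct node *current) {
    struct node *tmp = current->next;
    current->next = tmp->next;
    free(tmp);
}

If remove_next runs between the two steps of insert_after, the freed node is still reachable through new->next once step 2 completes, so the list ends up pointing into freed memory.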

Mutual Exclusion
The OS itself is an instance of concurrent programming
– Multiple activities may take place at the 'same time'
Concurrent execution of multi-step operations is problematic
– Example: updating a linked list
Concurrent access to a shared data structure must be mutually exclusive

Atomic Operations
A generic solution is to ensure atomic execution of operations
– All the steps are perceived as executing at a single point in time
Either
    insert_after(current, new); remove_next(current)
or
    remove_next(current); insert_after(current, new)
but never an interleaving of the two.
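
One standard way to obtain this atomicity in practice is to serialize the operations with a lock. This is a hedged sketch using POSIX threads, wrapping the hypothetical list functions from the earlier sketch; the wrapper names are inventions for illustration:

#include <pthread.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each whole operation now appears to happen at a single point in time. */
void insert_after_atomic(struct node *current, struct node *new) {
    pthread_mutex_lock(&list_lock);
    insert_after(current, new);
    pthread_mutex_unlock(&list_lock);
}

void remove_next_atomic(struct node *current) {
    pthread_mutex_lock(&list_lock);
    remove_next(current);
    pthread_mutex_unlock(&list_lock);
}

With both operations taking the same lock, only the two serial orders shown above are possible; the problematic interleaving is ruled out.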

The Critical-Section Problem
n processes P0, …, Pn-1
No assumptions on relative process speeds, no synchronized clocks, etc.
– Models the inherent non-determinism of process scheduling
No assumptions on process activity while executing within the critical and remainder sections
The problem: implement a general mechanism for entering and leaving a critical section.

Success Criteria
1. Mutual exclusion: only one process is in the critical section at a time.
2. Progress (no deadlock): some process will eventually get into the critical section.
3. Fairness (no starvation): no process waits indefinitely while other processes repeatedly enter the critical section.
4. Generality: the mechanism works for n processes.

Peterson’s Algorithm
There are two shared arrays, initialized to 0:
– q[n]: q[i] is the stage of process i in entering the CS; stage n means the process is in the CS.
– turn[n-1]: turn[j] says which process has to wait in stage j.
The algorithm for process i:

for (j = 1; j < n; j++) {
    q[i] = j;
    turn[j] = i;
    while ((∃ k ≠ i s.t. q[k] ≥ j) && (turn[j] == i)) {
        skip;
    }
}
critical section
q[i] = 0;
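
The slide's pseudocode can be rendered as compilable C11; this is a sketch under assumptions (a fixed N, lock/unlock entry and exit functions, and a helper loop standing in for the existential test), not the original course code:

#include <stdatomic.h>

#define N 4                      /* number of processes; an assumption */

static atomic_int q[N];          /* q[i] = stage of process i, 0 = idle */
static atomic_int turn[N];       /* turn[j] = who waits at stage j      */

/* Is some other process k at stage j or higher? */
static int someone_at_or_above(int i, int j) {
    for (int k = 0; k < N; k++)
        if (k != i && atomic_load(&q[k]) >= j)
            return 1;
    return 0;
}

void lock(int i) {
    for (int j = 1; j < N; j++) {
        atomic_store(&q[i], j);
        atomic_store(&turn[j], i);
        /* Busy-wait while another process is at our stage or above
         * and we were the last to enter this stage. */
        while (someone_at_or_above(i, j) && atomic_load(&turn[j]) == i)
            ;                    /* spin */
    }
    /* critical section follows */
}

void unlock(int i) {
    atomic_store(&q[i], 0);
}

The sequentially consistent atomics matter here: the correctness argument below relies on the writes to q and turn becoming visible to the other processes in program order.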

Proof of Peterson’s algorithm
The algorithm for process i, repeated for reference:

for (j = 1; j < n; j++) {
    q[i] = j;
    turn[j] = i;
    while ((∃ k ≠ i s.t. q[k] ≥ j) && (turn[j] == i)) {
        skip;
    }
}
critical section
q[i] = 0;

Definition: process a is ahead of process b if q[a] > q[b].
Lemma 1: a process that is ahead of all others advances to the next stage (increments q[i]).
Proof: such a process is not stuck in the while loop, because the loop's first condition does not hold.

Proof of Peterson’s algorithm (cont.)
Lemma 2: when a process advances to stage j+1 without being ahead of all other processes, there is at least one other process at stage j.
Proof: to exit the while loop, another process must have taken turn[j] after it.

Proof of Peterson’s algorithm (cont.)
Lemma 3: if there is more than one process at stage j, then there is at least one process at every stage below j.
Proof: use Lemma 2 and prove by induction on the stages.

Proof of Peterson’s algorithm (cont.)
Lemma 4: the maximal number of processes at stage j or beyond is n-j+1.
Proof: by Lemma 3, there are at least j-1 processes at the lower stages.

Proof of Peterson’s algorithm
Mutual exclusion:
– By Lemma 4, only one process can be at stage n.
Progress:
– There is always at least one process that can advance: if a process is ahead of all others, it can advance; if no process is ahead of all others, then there is more than one process at the top occupied stage, and one of them can advance.
Fairness:
– A process will be passed over no more than n times by each of the other processes (proof in the notes).
Generality:
– The algorithm works for any n.

Peterson’s Algorithm
Peterson’s algorithm creates a critical-section mechanism without any help from the OS.
All four success criteria hold for this algorithm.
It does, however, rely on busy waiting (there is no other option without OS support).

Classical Problems of Synchronization

Bounded-Buffer Problem
Readers-Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem
One cyclic buffer that can hold up to N items
A producer and a consumer use the buffer
– The buffer absorbs fluctuations in their rates.
The buffer is shared, so protection is required.
We use counting semaphores:
– The number in the semaphore represents the number of available resources of some type.

Bounded-Buffer Problem
Semaphore mutex, initialized to the value 1
– Protects the index into the buffer
Semaphore full, initialized to the value 0
– Counts how many slots in the buffer are full (can be read)
Semaphore empty, initialized to the value N
– Counts how many slots in the buffer are empty (can be written into)

Bounded-Buffer Problem – Cont.
Producer:
while (true) {
    produce an item
    P(empty);
    P(mutex);
    add the item to the buffer
    V(mutex);
    V(full);
}

Consumer:
while (true) {
    P(full);
    P(mutex);
    remove an item from the buffer
    V(mutex);
    V(empty);
    consume the item
}
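
The same producer/consumer pseudocode, rendered as a hedged C sketch with POSIX semaphores; the buffer size, the index variables, and the function names are assumptions made for a self-contained example:

#include <semaphore.h>

#define BUF_SIZE 16                  /* N; an assumption */

static int   buffer[BUF_SIZE];
static int   in, out;                /* producer / consumer indices */
static sem_t mutex, full, empty;     /* as on the slides */

void buf_init(void) {                /* call once before starting threads */
    sem_init(&mutex, 0, 1);
    sem_init(&full,  0, 0);
    sem_init(&empty, 0, BUF_SIZE);
}

void produce(int item) {
    sem_wait(&empty);                /* P(empty): claim a free slot   */
    sem_wait(&mutex);                /* P(mutex): protect the indices */
    buffer[in] = item;
    in = (in + 1) % BUF_SIZE;
    sem_post(&mutex);                /* V(mutex) */
    sem_post(&full);                 /* V(full): announce a new item  */
}

int consume(void) {
    sem_wait(&full);
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % BUF_SIZE;
    sem_post(&mutex);
    sem_post(&empty);
    return item;
}

Note the ordering: P(empty) and P(full) come before P(mutex). Swapping them can deadlock, with a producer holding mutex while sleeping on empty.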

Readers-Writers Problem
A data structure is shared among a number of concurrent processes:
– Readers only read the data; they do not update it.
– Writers can both read and write.
The problem:
– Allow multiple readers to read at the same time.
– Allow only one writer to access the shared data at a time.
– While a writer is writing to the data structure, no reader may read it.

Readers-Writers Problem: First Solution
Shared data:
– The data structure itself
– Integer readcount, initialized to 0
– Semaphore mutex, initialized to 1: protects readcount
– Semaphore write, initialized to 1: makes sure the writer doesn't use the data at the same time as any reader

Readers-Writers Problem: First Solution
The structure of a writer process:
while (true) {
    P(write);
    writing is performed
    V(write);
}

Readers-Writers Problem: First Solution
The structure of a reader process:
while (true) {
    P(mutex);
    readcount++;
    if (readcount == 1)
        P(write);
    V(mutex);
    reading is performed
    P(mutex);
    readcount--;
    if (readcount == 0)
        V(write);
    V(mutex);
}
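
Assembled into a compilable C sketch with POSIX semaphores; the semaphore is renamed wrt here only to avoid clashing with the POSIX write() function, and the init helper is an addition for completeness:

#include <semaphore.h>

static int   readcount;      /* number of readers currently reading */
static sem_t mutex;          /* protects readcount                  */
static sem_t wrt;            /* the slides' 'write' semaphore       */

void rw_init(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt,   0, 1);
}

void writer(void) {
    sem_wait(&wrt);
    /* ... writing is performed ... */
    sem_post(&wrt);
}

void reader(void) {
    sem_wait(&mutex);
    if (++readcount == 1)    /* first reader locks out writers   */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... reading is performed ... */

    sem_wait(&mutex);
    if (--readcount == 0)    /* last reader lets writers back in */
        sem_post(&wrt);
    sem_post(&mutex);
}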

Readers-Writers Problem: First Solution
This solution is not perfect: what if a writer is waiting to write while new readers keep arriving and reading all the time? Writers are subject to starvation!

Second Solution: Writer Priority
Extra semaphores and variables:
– Semaphore read, initialized to 1: inhibits readers when a writer wants to write.
– Integer writecount, initialized to 0: counts waiting writers.
– Semaphore write_mutex, initialized to 1: controls the updating of writecount.
– Semaphore mutex is now called read_mutex.
– Semaphore queue, used only by readers.

Second Solution: Writer Priority
The writer:
while (true) {
    P(write_mutex);
    writecount++;            // count waiting writers
    if (writecount == 1)
        P(read);             // first writer blocks new readers
    V(write_mutex);
    P(write);
    writing is performed
    V(write);
    P(write_mutex);
    writecount--;
    if (writecount == 0)
        V(read);             // last writer lets readers in again
    V(write_mutex);
}

Second Solution: Writer Priority (cont.)
The reader:
while (true) {
    P(queue);
    P(read);
    P(read_mutex);
    readcount++;
    if (readcount == 1)
        P(write);
    V(read_mutex);
    V(read);
    V(queue);
    reading is performed
    P(read_mutex);
    readcount--;
    if (readcount == 0)
        V(write);
    V(read_mutex);
}
The queue semaphore, initialized to 1, ensures that at most one reader at a time is in the entry section; otherwise a writer doing P(read) could be blocked by multiple readers at once.
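
For completeness, a hedged C rendering of the writer-priority scheme; rd and wrt stand in for the slides' read and write semaphores (only to avoid clashing with the POSIX read() and write() functions), and the init helper is an addition:

#include <semaphore.h>

static int   readcount, writecount;
static sem_t read_mutex, write_mutex;    /* protect the two counters       */
static sem_t rd, wrt, queue;             /* the slides' read, write, queue */

void rw_init(void) {
    sem_init(&read_mutex,  0, 1);
    sem_init(&write_mutex, 0, 1);
    sem_init(&rd,    0, 1);
    sem_init(&wrt,   0, 1);
    sem_init(&queue, 0, 1);
}

void writer(void) {
    sem_wait(&write_mutex);
    if (++writecount == 1)   /* first waiting writer shuts the readers' door */
        sem_wait(&rd);
    sem_post(&write_mutex);

    sem_wait(&wrt);
    /* ... writing is performed ... */
    sem_post(&wrt);

    sem_wait(&write_mutex);
    if (--writecount == 0)   /* last writer reopens it */
        sem_post(&rd);
    sem_post(&write_mutex);
}

void reader(void) {
    sem_wait(&queue);        /* at most one reader in the entry section */
    sem_wait(&rd);
    sem_wait(&read_mutex);
    if (++readcount == 1)
        sem_wait(&wrt);
    sem_post(&read_mutex);
    sem_post(&rd);
    sem_post(&queue);

    /* ... reading is performed ... */

    sem_wait(&read_mutex);
    if (--readcount == 0)
        sem_post(&wrt);
    sem_post(&read_mutex);
}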

Dining-Philosophers Problem
Shared data:
– Bowl of rice (the data set)
– Semaphore chopstick[5], each element initialized to 1

Dining-Philosophers Problem – Cont.
The structure of philosopher i:
while (true) {
    P(chopstick[i]);
    P(chopstick[(i + 1) % 5]);
    eat
    V(chopstick[i]);
    V(chopstick[(i + 1) % 5]);
    think
}
This can cause deadlock: if all five philosophers pick up their left chopstick at the same time, each waits forever for the right one.

Dining-Philosophers Problem
This abstract problem demonstrates some fundamental limitations of deadlock-free synchronization.
There is no symmetric solution
– Any symmetric algorithm leads to a symmetric outcome: either everyone eats (which is impossible, since neighbors share a chopstick) or no one does.

Possible Solutions
– Use a waiter
– Execute different code for odd and even philosophers (see the sketch below)
– Give them another chopstick
– Allow at most 4 philosophers at the table simultaneously
– Randomized algorithm (Lehmann-Rabin)
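
As one example, the odd/even idea from the list above breaks the symmetry and with it the circular wait; a minimal C sketch (the table size, names, and init helper are assumptions):

#include <semaphore.h>

#define NUM 5

static sem_t chopstick[NUM];          /* each initialized to 1 */

void table_init(void) {
    for (int i = 0; i < NUM; i++)
        sem_init(&chopstick[i], 0, 1);
}

void philosopher(int i) {
    int left = i, right = (i + 1) % NUM;
    for (;;) {
        if (i % 2 == 0) {             /* even: left chopstick first */
            sem_wait(&chopstick[left]);
            sem_wait(&chopstick[right]);
        } else {                      /* odd: right chopstick first */
            sem_wait(&chopstick[right]);
            sem_wait(&chopstick[left]);
        }
        /* ... eat ... */
        sem_post(&chopstick[left]);
        sem_post(&chopstick[right]);
        /* ... think ... */
    }
}

Because not everyone reaches for the same side first, a cycle in which each philosopher holds one chopstick and waits for the next can no longer form.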

Summary
Concurrency can cause serious problems if not handled correctly. The tools we have seen:
– Atomic operations
– Critical sections
– Semaphores and mutexes
– Careful design to avoid deadlocks and livelocks