CHAPTER 3 Higher-Level Synchronization and Communication


Shortcomings of semaphores and events:
They do not support elegant structuring of concurrent programs (they are implemented at a low level)
Verifying the behavior of programs that use them is difficult
Contents:
3.1 Shared memory methods
3.2 Distributed synchronization and communication
3.3 Other classic synchronization problems

Monitors
Based on the principle of abstract data types: a monitor encapsulates a shared resource together with the set of operations that manipulate it.
The monitor construct:
Access to the resource (the critical section) is possible only via one of the monitor procedures
The monitor procedures are mutually exclusive
Condition variables are used for process communication and synchronization

Condition variables
c.wait: causes the executing process to be suspended and placed on a queue associated with the condition variable c
c.signal: wakes up one of the processes waiting on c, placing it on a queue of processes wanting to reenter the monitor

Properties of condition variables
No value is associated with a condition variable
It refers to a specific event, state of a computation, or assertion
It may be used only inside a monitor

Example: monitor solution to the bounded-buffer problem

monitor bounded_buffer {
  char buffer[n];
  int nextin = 0, nextout = 0, full_cnt = 0;
  condition notempty, notfull;

  Deposit(char c) {
    if (full_cnt == n) notfull.wait;
    buffer[nextin] = c;
    nextin = (nextin + 1) % n;
    full_cnt = full_cnt + 1;
    notempty.signal;
  }

  Remove(char c) {
    if (full_cnt == 0) notempty.wait;
    c = buffer[nextout];
    nextout = (nextout + 1) % n;
    full_cnt = full_cnt - 1;
    notfull.signal;
  }
}
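The monitor pseudocode above can be approximated with Python's `threading.Condition`. Note that Python conditions have Mesa (signal-and-continue) semantics rather than the Hoare semantics the pseudocode assumes, so the waits use `while` loops; the class and method names are this sketch's own.

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock, two condition variables."""
    def __init__(self, n):
        self.buffer = [None] * n
        self.n = n
        self.nextin = self.nextout = self.full_cnt = 0
        self.lock = threading.Lock()
        self.notempty = threading.Condition(self.lock)
        self.notfull = threading.Condition(self.lock)

    def deposit(self, c):
        with self.lock:                       # monitor entry: mutual exclusion
            while self.full_cnt == self.n:    # re-check: Mesa-style wakeups
                self.notfull.wait()
            self.buffer[self.nextin] = c
            self.nextin = (self.nextin + 1) % self.n
            self.full_cnt += 1
            self.notempty.notify()

    def remove(self):
        with self.lock:
            while self.full_cnt == 0:
                self.notempty.wait()
            c = self.buffer[self.nextout]
            self.nextout = (self.nextout + 1) % self.n
            self.full_cnt -= 1
            self.notfull.notify()
            return c
```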

Priority waits
c.wait(p)
c: the condition variable on which the process is to be suspended
p: an integer expression defining a priority
When the condition c is signaled and there is more than one process waiting, the one that specified the lowest value of p is resumed.

monitor alarm_clock {
  int now = 0;
  condition wakeup;

  Wakeme(int n) {
    int alarmsetting;
    alarmsetting = now + n;
    while (now < alarmsetting) wakeup.wait(alarmsetting);
    wakeup.signal;  /* in case more than one process is to wake up at the same time */
  }

  Tick() {
    now = now + 1;
    wakeup.signal;
  }
}
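Python conditions have no priority wait, but the effect of `wakeup.wait(alarmsetting)` can be emulated with a min-heap of per-sleeper events, waking all sleepers whose alarm time has been reached on each tick. `AlarmClock` and its internals are invented for this sketch.

```python
import heapq
import threading

class AlarmClock:
    """Sketch of the alarm_clock monitor; the priority wait is emulated by
    a heap ordered on wake-up time, so the earliest sleeper wakes first."""
    def __init__(self):
        self.now = 0
        self.lock = threading.Lock()
        self.sleepers = []   # min-heap of (alarm_time, seq, event)
        self.seq = 0         # tie-breaker for equal alarm times

    def wakeme(self, n):
        with self.lock:
            alarmsetting = self.now + n
            if n <= 0:
                return
            ev = threading.Event()
            self.seq += 1
            heapq.heappush(self.sleepers, (alarmsetting, self.seq, ev))
        ev.wait()            # blocked outside the lock until tick() fires us

    def tick(self):
        with self.lock:
            self.now += 1
            # wake everyone whose alarm time has arrived (possibly several)
            while self.sleepers and self.sleepers[0][0] <= self.now:
                _, _, ev = heapq.heappop(self.sleepers)
                ev.set()
```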

Protected types
Implicit wait at the beginning of each procedure: the caller is blocked until the entry's barrier condition holds
Implicit signal at the end of each procedure: waiting callers are re-evaluated when the procedure completes
The procedures of a protected type are mutually exclusive
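As a rough illustration of the implicit wait/signal discipline (not of Ada's actual protected types), a Python decorator can attach an entry barrier to each method; `entry`, `ProtectedCounter`, and the guard lambdas are all invented for this sketch.

```python
import threading

def entry(guard):
    """Protected-type-style entry: the caller implicitly waits until
    guard(self) holds, and all waiters are re-evaluated on the way out."""
    def decorate(method):
        def wrapper(self, *args, **kwargs):
            with self._cond:                  # mutual exclusion between entries
                while not guard(self):        # implicit wait at the beginning
                    self._cond.wait()
                result = method(self, *args, **kwargs)
                self._cond.notify_all()       # implicit signal at the end
                return result
        return wrapper
    return decorate

class ProtectedCounter:
    def __init__(self):
        self._cond = threading.Condition()
        self.value = 0

    @entry(lambda self: True)                 # barrier always open
    def increment(self):
        self.value += 1

    @entry(lambda self: self.value > 0)       # barrier: something to take
    def decrement(self):
        self.value -= 1
```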

Distributed synchronization and communication
Settings:
Centralized systems where process isolation and encapsulation are desirable
Distributed systems where processes may reside on different nodes of a network
Topics:
Message-based communication
Procedure-based communication
Distributed mutual exclusion

Message-based communication
Some fundamental questions:
When a message is sent, must the sending process wait until the message has been accepted by the receiver, or can it continue processing immediately after sending?
What should happen when a receive is issued and there is no message waiting?
Must the sender name exactly one receiver to which it wishes to transmit a message, or can messages be sent simultaneously to a group of receiver processes?
Must the receiver specify exactly one sender from which it wishes to accept a message, or can it accept messages arriving from any member of a group of senders?
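The first two questions (blocking vs. non-blocking send, and what a receive does on an empty channel) can be demonstrated with Python's `queue.Queue`: a bounded queue makes `put` block when full, an unbounded one never blocks the sender, and `get_nowait` is a non-blocking receive. The channel names and `try_receive` helper are this sketch's own.

```python
import queue

# Bounded channel: the sender blocks once the buffer is full
# (capacity 1 approximates synchronous, rendezvous-like transfer).
sync_channel = queue.Queue(maxsize=1)

# Unbounded channel: asynchronous send, the sender never blocks.
async_channel = queue.Queue()

def try_receive(ch):
    """Non-blocking receive: return a message if one is waiting, else None."""
    try:
        return ch.get_nowait()
    except queue.Empty:
        return None
```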

Named-channel systems
Syntax: send(channel, var); receive(channel, var)
The problem with a blocking receive and explicit naming: it does not permit the process executing the receive to wait selectively for the arrival of one of several possible requests.
Nondeterministic selective input: when (C) S — the statement S is eligible for execution only when its guard C is true.

Procedure-based communication
Asymmetric RPC: only the client names the server
Symmetric RPC (rendezvous):
client: q.f(params)
server: accept f(params) S

Rendezvous
Case 1: the called process q reaches accept f() first and waits; when the calling process p issues q.f(), the body S executes (the called process is delayed).
Case 2: the calling process p issues q.f() first and waits; when q reaches accept f(), the body S executes (the calling process is delayed).

Ada's select statement
select {
  [when B1:] accept E1(…) S1;
  or
  …
  or
  [when Bn:] accept En(…) Sn;
  [else R]
}
Mutual exclusion: only one of the embedded accepts is executed
Nondeterminism: among the eligible accept statements, one is chosen according to a fair internal policy
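The guard-and-choose behavior (though not the tasking rendezvous itself) can be sketched as a small Python helper: evaluate all guards, then pick one open alternative at random as a stand-in for the "fair internal policy". The `select` function, the `(guard, action)` pairs, and the else branch are all illustrative assumptions.

```python
import random

def select(alternatives, else_branch=None):
    """Guarded selective choice in the style of Ada's select statement.

    alternatives: list of (guard, action) pairs; a guard is a nullary
    predicate, an action a nullary callable (the accept body).
    Exactly one open alternative runs; with none open, the else branch
    runs if present, otherwise it is an error.
    """
    open_actions = [action for guard, action in alternatives if guard()]
    if open_actions:
        return random.choice(open_actions)()   # nondeterministic, "fair" pick
    if else_branch is not None:
        return else_branch()
    raise RuntimeError("no open alternative and no else part")
```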

Distributed mutual exclusion
Centralized controller
--relies on the correct operation of the controller
--potential performance bottleneck
Fully distributed
--large amount of message passing
--requires accurate timestamps
--difficulty of managing node or process failures
Token ring

Token ring

process controller[i] {
  while (1) {
    accept Token;
    select {
      accept Request_CS() { busy = 1; }
      else null;
    }
    if (busy) accept Release_CS() { busy = 0; }
    controller[(i+1)%n].Token;
  }
}

process p[i] {
  controller[i].Request_CS();
  CSi;
  controller[i].Release_CS();
  programi;
}
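The token-passing logic can be simulated single-threaded: the loop below plays the role of all n controllers, visiting them in ring order and admitting a requesting process into its critical section while "holding" the token. `circulate_token`, `want_cs`, and `cs_trace` are names invented for this simplified sketch.

```python
N = 3
want_cs = [False] * N   # want_cs[i]: process i has issued Request_CS
cs_trace = []           # order in which critical sections actually execute

def circulate_token(rounds):
    """Single-threaded sketch of the token-ring controllers.

    On each step the token sits at one controller; if that controller's
    process has requested the CS, the CS runs (recorded in cs_trace) and
    Release_CS is accepted, then the token passes to the next controller.
    """
    holder = 0
    for _ in range(rounds):
        if want_cs[holder]:            # select { accept Request_CS() ... }
            cs_trace.append(holder)    # CS_i runs while its controller holds the token
            want_cs[holder] = False    # accept Release_CS()
        holder = (holder + 1) % N      # controller[(i+1)%n].Token
```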

Classic synchronization problems The Readers/Writers problem The Dining Philosophers Problem

The Readers/Writers problem
Writers are permitted to modify the state of the resource and must have exclusive access
Readers can only interrogate the resource state and, consequently, may share the resource concurrently with an unlimited number of other readers
Fairness policies must be included

Semaphore solution (readers have priority):

int readcount = 0;
Semaphore writelock = 1, countlock = 1;

cobegin
Reader: while (1) {
  P(countlock);
  if (readcount == 0) P(writelock);   /* first reader locks out writers */
  readcount++;
  V(countlock);
  reading;
  P(countlock);
  readcount--;
  if (readcount == 0) V(writelock);   /* last reader admits writers */
  V(countlock);
}
//
Writer: while (1) {
  P(writelock);
  writing;
  V(writelock);
}
coend

Compatibility: Writer/Writer exclusive; Writer/Reader exclusive; Reader/Reader concurrent.

Fairness policies
A new reader should not be permitted to start during a read sequence if there is a writer waiting
All readers waiting at the end of a write operation should have priority over the next writer

monitor readers_writers {
  int read_cnt = 0, writing = 0;
  condition OK_to_read, OK_to_write;

  Start_read() {
    if (writing || !empty(OK_to_write)) OK_to_read.wait;
    read_cnt = read_cnt + 1;
    OK_to_read.signal;
  }

  End_read() {
    read_cnt = read_cnt - 1;
    if (read_cnt == 0) OK_to_write.signal;
  }

  Start_write() {
    if ((read_cnt != 0) || writing) OK_to_write.wait;
    writing = 1;
  }

  End_write() {
    writing = 0;
    if (!empty(OK_to_read)) OK_to_read.signal;
    else OK_to_write.signal;
  }
}

The Dining Philosophers Problem
(Figure: five philosophers P1–P5 seated around a table, a bowl of spaghetti in the middle, and one fork between each pair of neighbors.)

Concerns about the problem
Deadlock: a situation must be prevented where each philosopher obtains one of the forks and is blocked forever waiting for the other to become available
Fairness: it should not be possible for one or more philosophers to conspire so that another philosopher is prevented indefinitely from acquiring its forks
Concurrency: when one philosopher, e.g., p1, is eating, only its two immediate neighbors (p5 and p2) should be prevented from eating; the others (p3 and p4) should not be blocked, and one of them must be able to eat concurrently with p1

A first attempt (can deadlock if every philosopher grabs its left fork first):

Semaphore f[5] = {1,1,1,1,1};
cobegin
P(i): while (1) {
  think(i);
  P(f[i]); P(f[(i+1)%5]);       // grab_forks(i)
  eat(i);
  V(f[i]); V(f[(i+1)%5]);       // return_forks(i)
}
// ……
coend


A deadlock-free variant: one philosopher (j) picks up its forks in the opposite order:

Semaphore f[5] = {1,1,1,1,1};
cobegin
P(i): while (1) {               /* all i != j */
  think(i);
  P(f[i]); P(f[(i+1)%5]);       // grab_forks(i)
  eat(i);
  V(f[i]); V(f[(i+1)%5]);       // return_forks(i)
}
// ……
P(j): while (1) {
  think(j);
  P(f[(j+1)%5]); P(f[j]);       // grab_forks(j)
  eat(j);
  V(f[(j+1)%5]); V(f[j]);       // return_forks(j)
}
coend


Another deadlock-free variant: odd-numbered philosophers grab the left fork first, even-numbered ones the right fork first:

Semaphore f[5] = {1,1,1,1,1};
cobegin
P(i): while (1) {
  think(i);
  if (i % 2 == 1) { P(f[i]); P(f[(i+1)%5]); }   // grab_forks(i)
  else { P(f[(i+1)%5]); P(f[i]); }
  eat(i);
  V(f[i]); V(f[(i+1)%5]);                       // return_forks(i)
}
// ……
coend
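The asymmetric scheme above can be exercised with real threads: each philosopher is a thread, each fork a binary semaphore, and the odd/even acquisition order breaks the circular wait, so all threads terminate. `philosopher`, `forks`, and `meals` are names invented for this sketch.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]   # one fork per gap
meals = [0] * N                                      # how often each one ate

def philosopher(i, rounds):
    """Asymmetric fork acquisition from the slide: odd philosophers take
    the left fork first, even ones the right fork first."""
    left, right = i, (i + 1) % N
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(rounds):
        forks[first].acquire()    # grab_forks(i), in the asymmetric order
        forks[second].acquire()
        meals[i] += 1             # eat(i)
        forks[second].release()   # return_forks(i)
        forks[first].release()
```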


FAQs
Rewrite the program below using cobegin/coend statements. Make sure that it exploits maximum parallelism but produces the same result as the sequential execution. Hint: draw the process flow graph first, where each line of the code corresponds to an edge; start with the last line.

W = X1 * X2;
V = X3 * X4;
Y = V * X5;
Z = V * X6;
Y = W * Y;
Z = W * Z;
A = Y + Z;

cobegin {
  W = X1 * X2;
//
  V = X3 * X4;
  cobegin { Y = V * X5; // Z = V * X6; } coend
} coend
Y = W * Y;
Z = W * Z;
A = Y + Z;

(Process flow graph: from the start node S, the edges W=X1*X2 and V=X3*X4 run in parallel; after V, the edges Y=V*X5 and Z=V*X6 run in parallel; Y=W*Y and Z=W*Z follow, and A=Y+Z leads to the end node E.)
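The cobegin/coend structure maps onto futures: independent branches become submitted tasks, and a `result()` call plays the role of the coend join. `parallel_eval` and the executor usage are this sketch's own; the computed value must match the sequential program.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_eval(X1, X2, X3, X4, X5, X6):
    """Mirrors the cobegin/coend solution: W and the V-branch in parallel;
    inside the V-branch, Y and Z in parallel; the rest sequential."""
    with ThreadPoolExecutor() as pool:
        fW = pool.submit(lambda: X1 * X2)    # W = X1 * X2 (outer branch 1)
        V = pool.submit(lambda: X3 * X4).result()   # V = X3 * X4 (branch 2)
        fY = pool.submit(lambda: V * X5)     # inner cobegin: Y = V * X5
        fZ = pool.submit(lambda: V * X6)     #                Z = V * X6
        W = fW.result()                      # outer coend: both branches done
        Y = W * fY.result()                  # Y = W * Y
        Z = W * fZ.result()                  # Z = W * Z
        return Y + Z                         # A = Y + Z
```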

Generalize the last software solution to the mutual exclusion problem (Section 2.3.1) to work with three processes.

A first attempt (process P1 only; P2 and P3 are symmetric):

int c1 = 0, c2 = 0, c3 = 0, will_wait;
cobegin
P1: while (1) {
  c1 = 1;
  will_wait = 1;
  while ((c2 || c3) && (will_wait == 1));
  CS1;
  c1 = 0;
  program1;
}
// …
coend

int c1 = 0, c2 = 0, c3 = 0, will_wait;
cobegin
P1: while (1) {
  c1 = 1;
  will_wait = 1;
  while (c2 && (will_wait == 1));
  while (c3 && (will_wait == 1));
  CS1; c1 = 0; program1;
}
//
P2: while (1) {
  c2 = 1;
  will_wait = 2;
  while (c1 && (will_wait == 2));
  while (c3 && (will_wait == 2));
  CS2; c2 = 0; program2;
}
//
P3: while (1) {
  c3 = 1;
  will_wait = 3;
  while ((c1 || c2) && (will_wait == 3));
  CS3; c3 = 0; program3;
}
coend