Operating Systems Principles: Process Management and Coordination. Lecture 3: Higher-Level Synchronization and Communication. Lecturer: 虞台文.


Content
 Motivation
 Shared Memory Methods
  – Monitors
  – Protected Types
 Distributed Synchronization/Communication
  – Message-Based Communication
  – Procedure-Based Communication
  – Distributed Mutual Exclusion
 Other Classical Problems
  – The Readers/Writers Problem
  – The Dining Philosophers Problem
  – The Elevator Algorithm
  – Event Ordering with Logical Clocks

Motivation

Semaphores and Events
 – Powerful but low-level abstractions
 – Programs built on them are difficult to design, debug, and maintain
 – Programming with them is highly error-prone, e.g., deadlock
 – Insecure when using shared memory
 – Unusable in distributed systems
Need higher-level primitives
 – Based on semaphores or messages

Solutions
 High-level shared-memory models
  – Monitors
  – Protected Types
 Distributed schemes for interprocess communication/synchronization
  – Message-Based Communication
  – Procedure-Based Communication
  – Distributed Mutual Exclusion

Shared Memory Methods

Monitors
 A higher-level construct than semaphores: a package of grouped procedures, variables, and data, i.e., object-oriented. Processes call procedures within a monitor but cannot access its internal data. Monitors can be built into programming languages, e.g., Mesa from Xerox, which was used to build a real operating system (Pilot). Synchronization is enforced by the compiler. Only one process is allowed inside a monitor at a time; wait and signal operations act on condition variables.

The Monitor Abstraction
 – Internal data: shared among processes, but processes cannot access it directly; they access it only through the monitor's procedures (Procedure 1, Procedure 2, Procedure 3, ...).
 – Condition variables: wait/signal primitives for process communication and synchronization.
 – Procedures are mutually exclusive: only one process or thread may be executing a procedure at any given time.

Example: Queue Handler (diagram: a shared queue with two operations, AddToQueue and RemoveFromQueue)

Example: Queue Handler (in C-like pseudocode)

monitor QueueHandler {
  struct Queue queue;

  void AddToQueue( int val ) {
    ... add val to end of queue ...
  } /* AddToQueue */

  int RemoveFromQueue() {
    ... remove value from queue, return it ...
  } /* RemoveFromQueue */
};

Since only one process may be executing a procedure at a given time, mutual exclusion is assured.

Process Synchronization
 What happens if a process calls RemoveFromQueue when the queue is empty?

Condition Variables
 Monitors need more facilities than just mutual exclusion; there must be some way to wait. For coordination, monitors provide:
  – c.wait: the calling process is blocked and placed on the waiting queue associated with condition variable c
  – c.signal (Hoare) / c.notify (Mesa): the calling process wakes up the first process on c's queue
 Question: what about the process that makes the c.signal (c.notify) call: does it sleep or keep running?

Variations on Semantics
 Hoare semantics (wait/signal): the awakened process gets the monitor lock immediately.
  – The process doing the signal is "thrown out" temporarily.
  – Hence one should probably signal as the last action in the monitor (Hansen).
 Mesa semantics (wait/notify): the signaler keeps the monitor lock.
  – The awakened process waits for the monitor lock with no special priority (a new process could get in before it).
  – This means the event it was waiting for could have come and gone: it must check again (use a loop) and be prepared to wait again if someone else took it.
  – signal and broadcast are therefore hints rather than guarantees.
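Python's threading.Condition follows Mesa semantics, so it can illustrate the notify-as-hint behavior described above. A minimal sketch (the Account class and its method names are illustrative, not from the lecture):

```python
import threading

class Account:
    """A tiny Mesa-style monitor: the lock gives mutual exclusion,
    the condition variable supports wait/notify."""
    def __init__(self):
        self.lock = threading.Lock()
        self.nonzero = threading.Condition(self.lock)  # Mesa-style condition
        self.balance = 0

    def deposit(self, amount):
        with self.lock:                # only one thread inside the monitor
            self.balance += amount
            self.nonzero.notify()      # a hint, not a guarantee (Mesa)

    def withdraw_all(self):
        with self.lock:
            # Mesa semantics: the notified thread reacquires the lock with no
            # special priority, so the condition must be re-checked in a loop.
            while self.balance == 0:
                self.nonzero.wait()
            amount, self.balance = self.balance, 0
            return amount
```

The `while` loop is exactly the "check again and be prepared to wait again" rule from the slide.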

More on Condition Variables
 A "condition variable" c is not a conventional variable:
  – c has no value
  – c is an arbitrary name chosen by the programmer to designate an event, state, or condition
  – each c has an associated waiting queue
  – a process may "block" itself on c; it then waits until another process issues a signal on c

Example: Queue Handler

monitor QueueHandler {
  struct Queue queue;
  condition itemAvail,      /* event: a data item is available in the queue */
            freenodeAvail;  /* event: more items can be added to the queue */

  void AddToQueue( int val ) {
    if ( queue is full )    /* ensure the queue is not full before adding */
      freenodeAvail.wait;
    ... add val to the end of the queue ...
    itemAvail.signal;       /* an item is available here */
  } /* AddToQueue */

  int RemoveFromQueue() {
    if ( queue is empty )   /* ensure the queue is nonempty before removing */
      itemAvail.wait;
    ... remove value from queue ...
    freenodeAvail.signal;   /* a free node is available here */
    return value;
  } /* RemoveFromQueue */
};


Hoare Monitors
 Scenario for the QueueHandler above (queue is full):
  1. p1 calls AddToQueue.
  2. p1 is blocked on freenodeAvail.
  3. p2 calls RemoveFromQueue.
  4. p2 signals the freenodeAvail event.
  5. p1 continues immediately (Hoare semantics); 5'. p2 is blocked temporarily.
  6. p1 terminates (leaves the monitor).
  7. p2 continues.

n1n1 n2n2 Example: Bounded Buffer Deposit Remove

n1n1 n2n2 Example: Bounded Buffer Deposit Remove nextin nextout count

Example: Bounded Buffer count nextout nextin Deposit Remove

Example: Bounded Buffer

monitor BoundedBuffer {
  char buffer[n];
  int nextin=0, nextout=0, count=0;
  condition notempty, notfull;

  deposit(char data) {
    if (count == n)
      notfull.wait;
    buffer[nextin] = data;
    nextin = (nextin + 1) % n;
    count = count + 1;
    notempty.signal;
  }

  remove(char data) {
    if (count == 0)
      notempty.wait;
    data = buffer[nextout];
    nextout = (nextout + 1) % n;
    count = count - 1;
    notfull.signal;
  }
};
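As a hedged illustration, the monitor above can be ported to Python, whose threading.Condition has Mesa rather than Hoare semantics, so the slide's `if` must become a `while`:

```python
import threading

class BoundedBuffer:
    """Python port of the slide's monitor, assuming Mesa semantics:
    the slide's `if` becomes `while` because notify is only a hint."""
    def __init__(self, n):
        self.n = n
        self.buffer = [None] * n
        self.nextin = self.nextout = self.count = 0
        self.lock = threading.Lock()
        self.notempty = threading.Condition(self.lock)
        self.notfull = threading.Condition(self.lock)

    def deposit(self, data):
        with self.lock:
            while self.count == self.n:       # re-check after each wakeup
                self.notfull.wait()
            self.buffer[self.nextin] = data
            self.nextin = (self.nextin + 1) % self.n
            self.count += 1
            self.notempty.notify()

    def remove(self):
        with self.lock:
            while self.count == 0:
                self.notempty.wait()
            data = self.buffer[self.nextout]
            self.nextout = (self.nextout + 1) % self.n
            self.count -= 1
            self.notfull.notify()
            return data
```

Both condition variables share one lock, mirroring the single monitor lock of the pseudocode.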

Priority Waits
 A Hoare-monitor signal resumes the longest-waiting process. This is not always what one wants, so Hoare introduced priority waits (also called conditional or scheduled waits):
  – c.wait(p): p is an integer (priority); blocked processes are kept sorted by p
  – c.signal: wakes up the process with the lowest p

Example: Alarm Clock
 The current time (now) of the alarm clock is increased periodically by tick. The wakeup queue holds processes to be woken up in order of their wakeup times (alarms).

Example: Alarm Clock, a scenario
 – At time 150, p1 calls AlarmClock.wakeme(100): the wakeup queue is p1(250).
 – At time 200, p2 calls AlarmClock.wakeme(150): the queue is p1(250), p2(350).
 – At time 210, p3 calls AlarmClock.wakeme(30): the queue is p3(240), p1(250), p2(350).
 – At time 230, p4 calls AlarmClock.wakeme(10): the queue is p3(240), p4(240), p1(250), p2(350).

Example: Alarm Clock

monitor AlarmClock {
  int now = 0;
  condition wakeup;

  wakeme(int n) {
    int alarm;
    alarm = now + n;
    while (now < alarm)
      wakeup.wait(alarm);   /* priority wait, ordered by alarm time */
    wakeup.signal;          /* wake the next process due at the same time */
  }

  tick() {  /* invoked automatically by hardware */
    now = now + 1;
    wakeup.signal;
  }
};
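Python has no priority waits, but the alarm clock can be approximated in Mesa style: tick broadcasts with notify_all and each sleeper re-checks its own alarm time. A sketch under that assumption (class and method names follow the slide; the broadcast approach is my substitution for wakeup.wait(alarm)):

```python
import threading

class AlarmClock:
    """Mesa-style approximation of the slide's monitor: instead of a
    priority wait, every sleeper filters itself after a broadcast."""
    def __init__(self):
        self.now = 0
        self.lock = threading.Lock()
        self.wakeup = threading.Condition(self.lock)

    def wakeme(self, n):
        with self.lock:
            alarm = self.now + n
            while self.now < alarm:     # re-check own alarm time
                self.wakeup.wait()
            return self.now             # time at which we actually woke

    def tick(self):                     # would be driven by a hardware timer
        with self.lock:
            self.now += 1
            self.wakeup.notify_all()    # broadcast; sleepers filter themselves
```

The broadcast wakes every sleeper on each tick, which is less efficient than a priority wait but preserves the same wake-up times.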


Mesa and Java Monitors
 notify is a variant of signal. After c.notify:
  – the calling process continues
  – the woken-up process continues when the caller exits
 Problems:
  – the caller may wake up multiple processes, e.g., Pi, Pj, Pk, ...
  – Pi could change the condition on which Pj was blocked

Mesa and Java Monitors, an example

P1: if (!B1) c1.wait;
    ...
P2: if (!B2) c2.wait;
    B1 = FALSE;
    return;
P3: B1 = B2 = TRUE;
    c1.notify;
    c2.notify;
    return;

P1 and P2 block on their conditions. P3 makes both conditions true and notifies both. Under Mesa semantics, P2 may reenter the monitor first and set B1 = FALSE before returning; when P1 finally runs, B1 is false again. What action should P1 take: continue or wait again?

Solution: replace if with while.

P1: while (!B1) c1.wait;
P2: while (!B2) c2.wait;
    B1 = FALSE;
    return;

The woken process then re-checks its condition and waits again if it no longer holds.

Example: Queue Handler (revisited)

Hoare monitors use `if' to test the condition before waiting, as in the QueueHandler monitor shown earlier.

Mesa monitors use `while':

monitor QueueHandler {
  struct Queue queue;
  condition itemAvail, freenodeAvail;

  void AddToQueue( int val ) {
    while ( queue is full )
      freenodeAvail.wait;
    ... add val to the end of the queue ...
    itemAvail.signal;
  } /* AddToQueue */

  int RemoveFromQueue() {
    while ( queue is empty )
      itemAvail.wait;
    ... remove value from queue ...
    freenodeAvail.signal;
    return value;
  } /* RemoveFromQueue */
};

Protected Types
 A special case of monitor where:
  – c.wait is the first operation of a procedure
  – c.signal is the last operation
 Typical in producer/consumer situations. wait/signal are combined into a when clause:
  – when c forms a "barrier" or guard
  – the procedure continues only when c is true
 Defined in the Ada 95 language.

Example: Bounded Buffer Protected body BoundedBuffer { char buffer[n]; int nextin=0, nextout=0, count=0; entry deposit(char c) when (count < n) /* guard */ { buffer[nextin] = c; nextin = (nextin + 1) % n; count = count + 1; } entry remove(char c) when (count > 0) /* guard */ { c = buffer[nextout]; nextout = (nextout + 1) % n; count = count - 1; } Protected body BoundedBuffer { char buffer[n]; int nextin=0, nextout=0, count=0; entry deposit(char c) when (count < n) /* guard */ { buffer[nextin] = c; nextin = (nextin + 1) % n; count = count + 1; } entry remove(char c) when (count > 0) /* guard */ { c = buffer[nextout]; nextout = (nextout + 1) % n; count = count - 1; } }

Distributed Synchronization and Communication

Distributed Synchronization
 Semaphore-based primitives require shared memory. For distributed memory:
  – send(p, m): send message m to process p
  – receive(q, m): receive a message from process q in variable m
 The semantics of send and receive vary substantially across systems.

Questions about Send/Receive
 – Does the sender wait for the message to be accepted?
 – Does the receiver wait if there is no message?
 – Does the sender name exactly one receiver?
 – Does the receiver name exactly one sender?

Types of Send/Receive

 send:
                    blocking/synchronous                 nonblocking/asynchronous
   explicit naming  send message m to receiver r;        send message m to receiver r
                    wait until accepted
   implicit naming  broadcast message m;                 broadcast message m
                    wait until accepted

 receive:
                    blocking/synchronous                 nonblocking/asynchronous
   explicit naming  wait for a message from sender s     if there is a message from sender s,
                                                         then receive it; else proceed
   implicit naming  wait for a message from any sender   if there is a message from any sender,
                                                         then receive it; else proceed

 Notes:
  – Blocking send with implicit naming (broadcast and wait until accepted) has little practical use.
  – Nonblocking receive with explicit naming also has little use, though some debugging software may like it.
  – Blocking receive with implicit naming solves a variety of process coordination problems; a printer server, for example, waits for requests from any client.
  – Nonblocking/asynchronous operations require built-in buffers to hold messages.
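As a concrete analogy (not from the lecture), Python's queue.Queue exposes both blocking and nonblocking receives through its block parameter, and its internal buffer is exactly the "built-in buffer" a nonblocking send needs:

```python
import queue

# A queue.Queue works as a buffered, mailbox-like channel.
mbox = queue.Queue()
mbox.put("hello")                    # nonblocking/asynchronous send

msg = mbox.get(block=True)           # blocking receive: waits for a message
assert msg == "hello"

try:
    mbox.get(block=False)            # nonblocking receive on an empty channel
except queue.Empty:
    msg = None                       # "else proceed"
assert msg is None
```

Here the receiver names the channel rather than a sender, i.e., receive with implicit naming.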

Channels, Ports, and Mailboxes
 Allow indirect communication:
  – senders/receivers name a channel instead of processes
  – senders/receivers are determined at runtime
 The sender does not need to know who receives the message; the receiver does not need to know who sent it.

Named Message Channels (e.g., named pipes in Win32)

 P1: send(ch1, msg1);
 P2: send(ch2, msg2);
 P3: receive(ch1, x); receive(ch2, y);

CSP/Occam
 CSP: Communicating Sequential Processes; Occam: a language based on CSP.
 A named channel, say ch1, connects processes, say p1 and p2:
  – p1 sends to p2 using send(ch1, 'a')
  – p2 receives from p1 using receive(ch1, x)
 Guarded commands: when (c) s
  – the set of statements s is executed only when c is true
  – allows processes to receive messages selectively, based on arbitrary conditions

Bounded Buffer with CSP
 Three communicating sequential processes: a buffer B, a producer P, and a consumer C.
 Problems:
  – when the buffer is full, B can only send to C
  – when the buffer is empty, B can only receive from P
  – when the buffer is partially filled, B must know whether C or P is ready to act
 Solution:
  – C sends a request to B first; B then sends the data
  – inputs from P and C are guarded with when

Bounded Buffer with CSP: the channels
 Three named channels connect the processes: deposit (producer to buffer), request (consumer to buffer), and remove (buffer to consumer).
  – Producer: send(deposit, data)
  – Consumer: send(request); receive(remove, data)
  – Buffer:   receive(deposit, data); receive(request); send(remove, data)

Bounded buffer with CSP process BoundedBuffer { char buf[n]; int nextin=0, nextout=0, count=0; while (1) { when ((count<n) && receive(deposit, buf[nextin])){ nextin = (nextin + 1) % n; count = count + 1; } or when ((fullCount>0) && receive(reqest)){ send(remove, buf[nextout]); nextout = (nextout + 1) % n; count = count - 1; } process BoundedBuffer { char buf[n]; int nextin=0, nextout=0, count=0; while (1) { when ((count<n) && receive(deposit, buf[nextin])){ nextin = (nextin + 1) % n; count = count + 1; } or when ((fullCount>0) && receive(reqest)){ send(remove, buf[nextout]); nextout = (nextout + 1) % n; count = count - 1; } } } Put data into buffer if buffer not full and producer’s data is available. Pass data to the consumer if it has requested one.

Bounded buffer with CSP process BoundedBuffer { char buf[n]; int nextin=0, nextout=0, count=0; while (1) { when ((count<n) && receive(deposit, buf[nextin])){ nextin = (nextin + 1) % n; count = count + 1; } or when ((fullCount>0) && receive(reqest)){ send(remove, buf[nextout]); nextout = (nextout + 1) % n; count = count - 1; } process BoundedBuffer { char buf[n]; int nextin=0, nextout=0, count=0; while (1) { when ((count<n) && receive(deposit, buf[nextin])){ nextin = (nextin + 1) % n; count = count + 1; } or when ((fullCount>0) && receive(reqest)){ send(remove, buf[nextout]); nextout = (nextout + 1) % n; count = count - 1; } } } Pass data to the consumer if it has requested one.

Bounded buffer with CSP
process BoundedBuffer {
  char buf[n];
  int nextin=0, nextout=0, count=0;
  while (1) {
    when ((count < n) && receive(deposit, buf[nextin])) {
      nextin = (nextin + 1) % n;
      count = count + 1;
    }
    or
    when ((count > 0) && receive(request)) {
      send(remove, buf[nextout]);
      nextout = (nextout + 1) % n;
      count = count - 1;
    }
  }
}

More on Named Channels
(diagram: Producer, Buffer, and Consumer connected by channels deposit, request, and remove)
Each named channel serves a particular purpose. Processes are connected directly through named channels.

Ports and Mailboxes
(diagram: a port vs. a mailbox)

Ports and Mailboxes
Processes are connected through an intermediary:
– allowing a receiver to receive from multiple senders (nondeterministically);
– sending can be nonblocking.
The intermediary is usually a queue, called a mailbox or a port depending on the number of receivers:
– a mailbox can have multiple senders and receivers
– a port can have only one receiver
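As a small illustration (in Python rather than the slides' pseudocode), a thread-safe queue can play the role of the intermediary: several senders deposit messages without blocking, and a single receiver drains them. All names below are invented for this sketch.

```python
# A minimal mailbox sketch: queue.Queue is the intermediary queue.
# With one receiver this models a port; adding receivers would make it
# a mailbox in the slide's terminology.
import queue
import threading

mailbox = queue.Queue()              # the intermediary

def sender(tag, n):
    for i in range(n):
        mailbox.put((tag, i))        # nonblocking send (unbounded queue)

# two senders feed one receiver nondeterministically interleaved
t1 = threading.Thread(target=sender, args=("A", 3))
t2 = threading.Thread(target=sender, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

received = [mailbox.get() for _ in range(6)]
print(sorted(received))
```

The arrival order of A's and B's messages is nondeterministic, but every message sent is eventually received exactly once.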



Procedure-Based Communication
Send/Receive are too low level (like P/V).
Typical interaction among processes:
– send a request and (then) receive the result;
– make this into a single higher-level primitive.
Use RPC (Remote Procedure Call) or Rendezvous:
– the caller invokes a procedure on a remote machine;
– the remote machine performs the operation and returns the result;
– similar to a regular procedure call, but parameters cannot contain pointers, because caller and server do not share any memory. (In fact, pointers can be supported by marshalling the parameters.)

RPC
The caller issues: res = f(params). This is translated into:
// caller (client process)
...
send(RP, f, params);
receive(RP, res);
...
// callee (server process)
process RP_server {
  while (1) {
    receive(C, f, params);
    res = f(params);
    send(C, res);
  }
}
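The send/receive translation above can be sketched in Python, with two in-process queues standing in for the network channels (the names to_server, to_client, and remote_call are invented for the sketch):

```python
# RPC simulated with queues: the client sends (f, params) and blocks on
# receive; the server loop applies f and sends the result back.
import queue
import threading

to_server = queue.Queue()
to_client = queue.Queue()

def rp_server():
    while True:
        f, params = to_server.get()      # receive(C, f, params)
        if f is None:                    # shutdown sentinel (illustrative)
            break
        to_client.put(f(*params))        # res = f(params); send(C, res)

def remote_call(f, *params):
    to_server.put((f, params))           # send(RP, f, params)
    return to_client.get()               # receive(RP, res)

server = threading.Thread(target=rp_server)
server.start()
res = remote_call(lambda x, y: x + y, 2, 3)
to_server.put((None, ()))                # stop the server loop
server.join()
print(res)   # 5
```

A real RPC system would additionally marshal the function name and parameters into a wire format, since the two sides share no memory.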

Rendezvous
With RPC:
– the called procedure p is part of a dedicated server;
– the setup is asymmetrical: a client-server relation.
With Rendezvous:
– p is part of an arbitrary process;
– p maintains state between calls;
– p may accept, delay, or reject a call;
– the setup is symmetrical: any process may be a client or a server.
“Rendezvous” is French for “meeting”; pronounced roughly “RON-day-voo.”

Rendezvous
Caller: q.f(param)
Server: accept f(param) S
Here q is the name of the remote process (server), f is the procedure name, param is the procedure parameter, S is the procedure body, and accept is a keyword. Similar syntax and semantics to RPC.

Semantics of a Rendezvous Caller or Server waits for the other. Then, they execute in parallel.

Semantics of a Rendezvous
(diagrams: process p issues q.f() and process q reaches accept f(); whichever arrives first waits for the other, and the rendezvous then executes the body S)

Rendezvous: Selective Accept
select {
  [when B1:] accept E1(…) S1;
or
  [when B2:] accept E2(…) S2;
or
  ...
  [when Bn:] accept En(…) Sn;
[else R]
}
Ada provides a select statement that permits multiple accepts (guarded or not) to be active simultaneously. Only one is selected, nondeterministically, upon rendezvous.

Rendezvous: Selective Accept
select {
  [when B1:] accept E1(…) S1;
or
  [when B2:] accept E2(…) S2;
or
  ...
  [when Bn:] accept En(…) Sn;
[else R]
}
[...] denotes an optional part.

Example: Bounded Buffer
process BoundedBuffer {
  char buffer[n];
  int nextin=0, nextout=0, count=0;
  while (1) {
    select {
      when (count < n):
        accept deposit(char c) {
          buffer[nextin] = c;
          nextin = (nextin + 1) % n;
          count = count + 1;
        }
    or
      when (count > 0):
        accept remove(char c) {
          c = buffer[nextout];
          nextout = (nextout + 1) % n;
          count = count - 1;
        }
    }
  }
}
To provide the following services forever: 1. deposit 2. remove

Example: Bounded Buffer
process BoundedBuffer {
  char buffer[n];
  int nextin=0, nextout=0, count=0;
  while (1) {
    select {
      when (count < n):
        accept deposit(char c) {
          buffer[nextin] = c;
          nextin = (nextin + 1) % n;
          count = count + 1;
        }
    or
      when (count > 0):
        accept remove(char c) {
          c = buffer[nextout];
          nextout = (nextout + 1) % n;
          count = count - 1;
        }
    }
  }
}
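The same guarded bounded buffer can be sketched in Python, with threading.Condition playing the role of the selective accept: each when guard becomes a wait loop. This is an illustration of the idea, not the Ada-style construct itself.

```python
# Monitor-style bounded buffer: deposit is guarded by count < n,
# remove by count > 0; blocked callers wait on one shared condition.
import threading

class BoundedBuffer:
    def __init__(self, n):
        self.buf = [None] * n
        self.n = n
        self.nextin = self.nextout = self.count = 0
        self.cond = threading.Condition()

    def deposit(self, c):                # guard: count < n
        with self.cond:
            while self.count == self.n:
                self.cond.wait()
            self.buf[self.nextin] = c
            self.nextin = (self.nextin + 1) % self.n
            self.count += 1
            self.cond.notify_all()

    def remove(self):                    # guard: count > 0
        with self.cond:
            while self.count == 0:
                self.cond.wait()
            c = self.buf[self.nextout]
            self.nextout = (self.nextout + 1) % self.n
            self.count -= 1
            self.cond.notify_all()
            return c

b = BoundedBuffer(2)
b.deposit("x"); b.deposit("y")
print(b.remove(), b.remove())   # x y
```

The while loops (rather than plain if tests) re-check the guard after every wakeup, which is the standard discipline for Mesa-style condition variables.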

Example: Bounded Buffer
The producer calls BoundedBuffer.deposit(data); the consumer calls BoundedBuffer.remove(data).

Distributed Mutual Exclusion
The CS problem in a distributed environment:
– no shared memory, no shared clock;
– delays in message transmission.
Central controller solution:
– a requesting process sends a request to the controller;
– the controller grants the CS to one process at a time;
– problems: single point of failure, performance bottleneck.
Fully distributed solution:
– processes negotiate access among themselves;
– very complex.

Distributed Mutual Exclusion with Token Ring
A practical and elegant compromise version of the fully distributed approach.

Distributed Mutual Exclusion with Token Ring
(diagram: the token circulates among Controller[1], Controller[2], Controller[3]; each process P[i] issues RequestCS and ReleaseCS to its controller, then runs CSi followed by Programi)

Distributed Mutual Exclusion with Token Ring
process controller[i] {
  while (1) {
    accept Token;
    select {
      accept Request_CS() { busy = 1; }
    else null;
    }
    if (busy) accept Release_CS() { busy = 0; }
    controller[(i+1) % n].Token;
  }
}
process p[i] {
  while (1) {
    controller[i].Request_CS();
    CSi;
    controller[i].Release_CS();
    programi;
  }
}
The controller does four possible jobs: 1. receive the token; 2. transmit the token; 3. lock the token (Request_CS); 4. unlock the token (Release_CS).

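The token-ring logic can be traced in a small single-threaded Python sketch: the token visits each controller in turn, a controller with a pending request lets its process run the critical section, then passes the token on. The variables pending, log, and token_holder are invented for this sketch; real controllers would of course run concurrently.

```python
# Single-threaded trace of the token-ring protocol for n = 3 controllers.
n = 3
pending = [True, False, True]        # which processes want the CS (example data)
log = []                             # order in which critical sections run

token_holder = 0
for _ in range(2 * n):               # circulate the token two full rounds
    i = token_holder
    if pending[i]:                   # accept Request_CS -> busy = 1
        log.append(f"CS{i}")         # process p[i] runs its critical section
        pending[i] = False           # accept Release_CS -> busy = 0
    token_holder = (i + 1) % n       # controller[(i+1) % n].Token
print(log)   # ['CS0', 'CS2']
```

Because only the current token holder may enter its critical section, mutual exclusion holds by construction, and the circulating token guarantees that every waiting process is eventually served.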

Operating Systems Principles Process Management and Coordination Lecture 3: Higher-Level Synchronization and Communication Other Classical Problems

Database Readers/Writers Problem

Database Readers/Writers Problem
Writers can work only when no reader is active. Only one writer can work at a time.

Database Readers/Writers Problem
Readers can work only when no writer is active. Any number of readers may read at the same time.

Readers/Writers Problem
An extension of the basic CS problem (Courtois, Heymans, and Parnas, 1971).
Two types of processes may enter the CS:
– only one writer (W) may be inside the CS (exclusive), or
– many readers (Rs) may be inside the CS.
Prevent starvation of either process type:
– if Rs are in the CS, a new R must not enter if a W is waiting;
– when a W leaves the CS, all waiting Rs should enter (even if they arrived after newer Ws).

Solution Using Monitor
monitor Readers_Writers {
  int readCount = 0, writing = 0;
  condition OK_R, OK_W;
  start_read() {            // called by a reader that wishes to read
    if (writing || !empty(OK_W)) OK_R.wait;
    readCount = readCount + 1;
    OK_R.signal;
  }
  end_read() {              // called by a reader that has finished reading
    readCount = readCount - 1;
    if (readCount == 0) OK_W.signal;
  }
  start_write() {           // called by a writer that wishes to write
    if ((readCount != 0) || writing) OK_W.wait;
    writing = 1;
  }
  end_write() {             // called by a writer that has finished writing
    writing = 0;
    if (!empty(OK_R)) OK_R.signal;
    else OK_W.signal;
  }
}

Solution Using Monitor
Additional primitive: empty(c) returns true if the queue associated with condition c is empty.
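A Mesa-style approximation of this monitor can be sketched in Python with two condition variables sharing one lock. It mirrors the slide's rule that waiting writers block new readers; since Python lacks Hoare-monitor signal semantics, awakened threads re-check their guard in a loop, and the waiting_writers counter stands in for the empty(OK_W) test. This is an illustrative sketch, not the pseudocode's exact semantics.

```python
# Readers/writers monitor approximated with threading.Condition.
import threading

class ReadersWriters:
    def __init__(self):
        self.lock = threading.Lock()
        self.ok_r = threading.Condition(self.lock)   # readers wait here
        self.ok_w = threading.Condition(self.lock)   # writers wait here
        self.read_count = 0
        self.writing = False
        self.waiting_writers = 0                     # plays the role of !empty(OK_W)

    def start_read(self):
        with self.lock:
            while self.writing or self.waiting_writers > 0:
                self.ok_r.wait()
            self.read_count += 1

    def end_read(self):
        with self.lock:
            self.read_count -= 1
            if self.read_count == 0:
                self.ok_w.notify()

    def start_write(self):
        with self.lock:
            self.waiting_writers += 1
            while self.read_count > 0 or self.writing:
                self.ok_w.wait()
            self.waiting_writers -= 1
            self.writing = True

    def end_write(self):
        with self.lock:
            self.writing = False
            self.ok_r.notify_all()   # wake all waiting readers first
            self.ok_w.notify()

rw = ReadersWriters()
rw.start_read(); rw.start_read()     # two readers may read together
print(rw.read_count)                 # 2
rw.end_read(); rw.end_read()
rw.start_write()                     # one exclusive writer
print(rw.writing)                    # True
rw.end_write()
```

The notify_all in end_write approximates the cascading OK_R.signal of the Hoare monitor: all waiting readers are awakened together once the writer leaves.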

Dining Philosophers

Five philosophers sit around a circular table. In the centre of the table is a large plate of spaghetti. Each philosopher spends his life alternately thinking and eating. A philosopher needs two forks to eat.
Requirements:
– prevent deadlock;
– guarantee fairness: no philosopher must starve;
– guarantee concurrency: non-neighbors may eat at the same time.

Dining Philosophers
(diagram: philosophers p1..p5 around the table, with forks f1..f5 between them)
p(i) {
  while (1) {
    think(i);
    grab_forks(i);
    eat(i);
    return_forks(i);
  }
}
grab_forks(i): P(f[i]); P(f[(i+1)%5]);
return_forks(i): V(f[i]); V(f[(i+1)%5]);
This can easily lead to deadlock.

Solutions to deadlock
1. Use a counter: at most n-1 philosophers may attempt to grab forks.
2. One philosopher requests forks in reverse order, e.g., grab_forks(1): P(f[2]); P(f[1]);
3. Divide philosophers into two groups: odd philosophers grab the left fork first, even philosophers grab the right fork first.
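Solution 3 can be demonstrated in a short Python sketch: the asymmetric acquisition order breaks the circular wait, so five philosopher threads all finish their meals instead of deadlocking. Thread locks stand in for the fork semaphores; the counts are invented example data.

```python
# Dining philosophers, solution 3: odd-indexed philosophers grab the left
# fork first, even-indexed grab the right fork first. Completion of all
# threads demonstrates deadlock freedom.
import threading

N, MEALS = 5, 20
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    # asymmetric acquisition order prevents the circular wait
    first, second = (left, right) if i % 2 else (right, left)
    for _ in range(MEALS):
        with first:
            with second:
                meals[i] += 1            # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [20, 20, 20, 20, 20]
```

With the naive symmetric order (everyone grabbing the left fork first), the same program can hang when all five philosophers hold one fork and wait for the other.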

Logical Clocks
Many applications need to time-stamp events:
– for debugging, recovery, distributed mutual exclusion, ordering of broadcast messages, transactions, etc.
Time-stamps allow us to determine the causality of events:
– C(e1) < C(e2) means e1 happened before e2.
A global clock is unavailable in distributed systems, and physical clocks in distributed systems are skewed.
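The standard way to get such time-stamps without a global clock is Lamport's logical clock rule: increment the local counter on every event, stamp outgoing messages, and on receive jump to max(local, received) + 1. A minimal sketch (class and method names are illustrative):

```python
# Lamport logical clock: guarantees C(send) < C(receive) for every message.
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1               # tick on every local event
        return self.time

    def send(self):
        self.time += 1               # tick, then stamp the outgoing message
        return self.time

    def receive(self, msg_time):
        # jump past both the local history and the sender's timestamp
        self.time = max(self.time, msg_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
p2.local_event(); p2.local_event()   # p2 is "ahead" locally (time = 2)
ts = p1.send()                       # p1 stamps its message with 1
rcv = p2.receive(ts)                 # p2 advances to max(2, 1) + 1 = 3
print(ts, rcv)   # 1 3
```

Even though p1's physical notion of time lags behind p2's, the receive event is time-stamped after the send, so the log can never show the impossible "receive before send" ordering of the clock-skew example.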

The Problem of Clock Skewness
(diagram: a File User (U) and a File Server (FS) with skewed local clocks CU and CFS; events e1..e4 include a send/receive pair. The true causality of the events differs from the log of time-stamped events, which shows an impossible sequence: the receive appears to precede the send.)

Logical Clocks
(diagram: processes p1 and p2 with events ei, ej, ek, er, es and a send/receive message between them)

Logical Clocks in Action
(diagram: example event timeline with events u, v, x, y)

The Elevator Algorithm
The simple algorithm by which a single elevator decides where to stop:
– continue traveling in the same direction while there are remaining requests in that same direction;
– if there are no further requests, change direction.
The same algorithm is used for scheduling hard-disk requests.

The Elevator Algorithm

Pressing the button at floor i, or button i inside the elevator, invokes request(i). The door closing invokes release().
Scheduler policy:
– direction = up: service all requests at or above the current position; then reverse direction;
– direction = down: service all requests at or below the current position; then reverse direction.

The Elevator Algorithm Using Priority Waits
monitor elevator {
  int dir = 1, up = 1, down = -1, pos = 1, busy = 0;
  condition upsweep, downsweep;
  request(int dest) {       /* called when a button is pushed */
    if (busy) {
      if ((pos < dest) || ((pos == dest) && (dir == up)))
        upsweep.wait(dest);
      else
        downsweep.wait(-dest);
    }
    busy = 1;
    pos = dest;
  }
  release() { /* called when the door closes */ }
}
Put the request into the proper priority queue and wait.


The Elevator Algorithm Using Priority Waits
monitor elevator {
  int dir = 1, up = 1, down = -1, pos = 1, busy = 0;
  condition upsweep, downsweep;
  release() {               /* called when the door closes */
    busy = 0;
    if (dir == up) {
      if (!empty(upsweep)) upsweep.signal;
      else { dir = down; downsweep.signal; }
    } else {                /* dir == down */
      if (!empty(downsweep)) downsweep.signal;
      else { dir = up; upsweep.signal; }
    }
  }
}