
1 Critical Regions § 7.6
Although semaphores provide a convenient and effective mechanism for process synchronization, their incorrect use can still result in timing errors that are difficult to detect. Examples:
- If a process interchanges the order, executing signal(mutex); critical section; wait(mutex); then several processes may be executing in their critical sections simultaneously, violating mutual exclusion.
- If a process executes wait(mutex); critical section; wait(mutex); (a second wait in place of the signal), a deadlock will occur.
- If a process omits the wait(mutex), or the signal(mutex), or both, then either mutual exclusion is violated or a deadlock will occur.

2 Critical Regions § 7.6
Multiple-Choice Question: ( ) What kind of problem can happen if more than one thread works on a semaphore in the following sequence?
signal(mutex); criticalSection(); wait(mutex);
(a) starvation (b) deadlock (c) blocking (d) not synchronizing (e) violation of mutual exclusion
Answer: e
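For contrast with the faulty orderings above, here is a minimal sketch of the intended discipline using POSIX semaphores; this is an illustration added to the transcript, not part of the original slides. sem_wait plays the role of wait, sem_post the role of signal, and the shared counter is only a placeholder for the critical-section work.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;          /* binary semaphore, initialized to 1 */
static int shared_counter;   /* illustrative shared data */

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&mutex);        /* wait(mutex): enter critical section */
    shared_counter++;        /* critical section */
    sem_post(&mutex);        /* signal(mutex): leave critical section */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);  /* initial value 1 gives mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}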

3 Critical Region
High-level synchronization construct. A shared variable v of type T is declared as:
v: shared T;
Variable v is accessed only inside the statement
region v when (B) S;
where B is a Boolean expression. While statement S is being executed, no other process can access variable v.

4 Critical Region Regions referring to the same shared variable exclude each other in time. When a process tries to execute the region statement, the Boolean expression B is evaluated. If B is true, statement S is executed. If it is false, the process is delayed until B becomes true and no other process is in the region associated with v.

5 Critical Region
Example: if the two statements
region v when (true) S1;
region v when (true) S2;
are executed concurrently in distinct sequential processes, the result will be equivalent to the sequential execution "S1 followed by S2" or "S2 followed by S1."

6 Critical Region
The critical-region construct guards against certain simple programmer errors associated with the semaphore solution to the critical-section problem. However, it does not necessarily eliminate all synchronization errors; rather, it reduces their number. It can also be used to solve certain general synchronization problems.

7 Example – bounded buffer
Shared data:
struct buffer {
    item pool[n];
    int count, in, out;
};
Producer process inserts nextp into the shared buffer:
region buffer when (count < n) {
    pool[in] = nextp;
    in = (in + 1) % n;
    count++;
}

8 Example – bounded buffer
Consumer process removes an item from the shared buffer and puts it in nextc:
region buffer when (count > 0) {
    nextc = pool[out];
    out = (out + 1) % n;
    count--;
}
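The region v when (B) S construct is not available in ordinary C. As a rough, hedged approximation of the same producer/consumer behavior, a pthreads mutex plus condition variable can stand in for the implicit mutual exclusion and the re-evaluation of B; BUFFER_SIZE and the item type below are assumptions made for illustration.

#include <pthread.h>

#define BUFFER_SIZE 8            /* assumed value of n */
typedef int item;                /* assumed item type */

static item pool[BUFFER_SIZE];
static int count, in, out;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t changed = PTHREAD_COND_INITIALIZER;

/* region buffer when (count < n) { ... } */
void produce(item nextp)
{
    pthread_mutex_lock(&lock);
    while (count == BUFFER_SIZE)            /* wait until B becomes true */
        pthread_cond_wait(&changed, &lock);
    pool[in] = nextp;
    in = (in + 1) % BUFFER_SIZE;
    count++;
    pthread_cond_broadcast(&changed);       /* let waiters re-evaluate B */
    pthread_mutex_unlock(&lock);
}

/* region buffer when (count > 0) { ... } */
item consume(void)
{
    item nextc;
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&changed, &lock);
    nextc = pool[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    pthread_cond_broadcast(&changed);
    pthread_mutex_unlock(&lock);
    return nextc;
}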

9 Implement the conditional critical region (Skip)

10 Monitors § 7.7 High-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes. A monitor presents a set of programmer-defined operations that are provided mutual exclusion within the monitor. A monitor type consists of declarations of variables whose values define the state of an instance of the type, as well as the bodies of procedures or functions that implement operations on the type.

11 Monitor Syntax
monitor monitor-name {
    shared variable declarations
    procedure body P1 (…) { . . . }
    procedure body P2 (…) { . . . }
    . . .
    procedure body Pn (…) { . . . }
    {
        initialization code
    }
}

12 Condition Variables
Encapsulation: access to the local variables is limited to the local procedures only. The monitor construct prohibits concurrent access to the procedures defined within the monitor: only one process may be active within the monitor at a time. Synchronization is built into the monitor type, so the programmer does not need to code it explicitly.
Special operations wait and signal can be invoked on variables of type condition:
condition x, y;
A process that invokes x.wait is suspended until another process invokes x.signal.
Exam question 89(2): Special operations "wait" and "signal" of a monitor can be invoked on variables of type (A) condition (B) integer (C) semaphore (D) synchronized. Answer: A.

13 Condition Variables
True-False Question: ( ) Although there may be several processes inside the monitor at the same time, there can be only one process in the active state at a time.
Answer: True (○)

14 Schematic View of a Monitor

15 Condition Variables
A condition variable can only be used with the operations wait and signal.
- The operation x.wait(); means that the process invoking it is suspended until another process invokes x.signal();
- The x.signal operation resumes exactly one suspended process. If no process is suspended, the signal operation has no effect. Contrast this with the signal operation associated with semaphores, which always affects the state of the semaphore.
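For comparison, a minimal pthreads sketch of the same wait/signal behavior; the names monitor_lock, x, and ready are assumptions for illustration. Note that pthread_cond_signal with no waiter is simply lost, matching the monitor-condition semantics described above rather than semaphore semantics.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;    /* condition x */
static bool ready = false;                              /* assumed predicate */

void waiter(void)
{
    pthread_mutex_lock(&monitor_lock);        /* enter the "monitor" */
    while (!ready)                            /* x.wait(): suspend until signaled */
        pthread_cond_wait(&x, &monitor_lock);
    /* ... proceed; shared state is still protected by monitor_lock ... */
    pthread_mutex_unlock(&monitor_lock);
}

void signaler(void)
{
    pthread_mutex_lock(&monitor_lock);
    ready = true;
    pthread_cond_signal(&x);                  /* x.signal(): resumes one waiter,
                                                 no effect if none is suspended */
    pthread_mutex_unlock(&monitor_lock);
}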

16 Monitor with condition variables

17 Two Possibilities
When the x.signal() operation is invoked by a process P and there is a suspended process Q associated with condition x, then either:
- P waits until Q leaves the monitor, or waits for another condition. (Advocated by Hoare: otherwise the "logical" condition for which Q was waiting may no longer hold by the time Q is resumed.)
- Q waits until P leaves the monitor, or waits for another condition. (More reasonable, since P was already executing in the monitor.)
Concurrent C: when process P executes the signal operation, process Q is immediately resumed.

18 Solution to Dining Philosophers
monitor dp {
    enum {thinking, hungry, eating} state[5];
    condition self[5];
    void pickup(int i)   // following slides
    void putdown(int i)  // following slides
    void test(int i)     // following slides
    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
Philosopher i can delay herself (on self[i]) when she is hungry but is unable to obtain the chopsticks she needs.

19 pickup() and putdown() Procedures
Each philosopher, before starting to eat, must invoke the operation pickup(); this may result in the suspension of the philosopher thread.
void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}
void putdown(int i) {
    state[i] = thinking;
    // test left and right neighbors
    test((i + 4) % 5);
    test((i + 1) % 5);
}

20 test() Procedure
void test(int i) {
    if ((state[(i + 4) % 5] != eating) &&
        (state[i] == hungry) &&
        (state[(i + 1) % 5] != eating)) {
        state[i] = eating;
        self[i].signal();
    }
}
Philosopher i can set the variable state[i] = eating only if her two neighbors are not eating. The signal releases self[i] so that the suspended thread can proceed.

21 Solution to Dining Philosophers
Philosopher i must invoke the operations pickup and putdown in the following sequence:
dp.pickup(i);
    ... eat ...
dp.putdown(i);
This solution ensures that no two neighbors are eating simultaneously and that no deadlocks will occur. However, it is possible for a philosopher to starve to death.
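Below is a hedged C sketch of the dp monitor, assuming a pthreads implementation: one mutex provides monitor-wide mutual exclusion and one condition variable per philosopher plays the role of self[i]. The while loop around the wait replaces the signal-and-wait hand-off assumed by the monitor pseudocode.

#include <pthread.h>

enum pstate { THINKING, HUNGRY, EATING };

static enum pstate state[5];                 /* zero-initialized to THINKING */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t self[5] = {
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER
};

/* philosopher i may eat only if neither neighbor is eating */
static void test(int i)
{
    if (state[(i + 4) % 5] != EATING &&
        state[i] == HUNGRY &&
        state[(i + 1) % 5] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void pickup(int i)
{
    pthread_mutex_lock(&m);
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)       /* may suspend the philosopher thread */
        pthread_cond_wait(&self[i], &m);
    pthread_mutex_unlock(&m);
}

void putdown(int i)
{
    pthread_mutex_lock(&m);
    state[i] = THINKING;
    test((i + 4) % 5);               /* left neighbor */
    test((i + 1) % 5);               /* right neighbor */
    pthread_mutex_unlock(&m);
}

A philosopher thread would then call pickup(i), eat, and putdown(i), exactly as on the slide.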

22 Implementing a monitor using semaphores
Variables:
semaphore mutex;   // (initially = 1)
semaphore next;    // (initially = 0)
int next_count = 0;
Each external procedure F will be replaced by:
wait(mutex);
    ... body of F ...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
Mutual exclusion within a monitor is ensured.

23 Implementing a monitor using semaphores
For each condition variable x, we have:
semaphore x_sem;   // (initially = 0)
int x_count = 0;
The operation x.wait can be implemented as:
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;

24 Implementing a monitor using semaphores
The operation x.signal can be implemented as:
if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
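Putting slides 22–24 together, here is a minimal C sketch with POSIX semaphores; a single condition x is shown, and the wrapper F and monitor_init are assumptions about how the pieces would be assembled rather than a prescribed interface.

#include <semaphore.h>

static sem_t mutex;          /* monitor entry, initially 1 */
static sem_t next;           /* signalers waiting to resume, initially 0 */
static int   next_count = 0;

static sem_t x_sem;          /* one per condition x, initially 0 */
static int   x_count = 0;

void monitor_init(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&next, 0, 0);
    sem_init(&x_sem, 0, 0);
}

/* wrapper generated for each external procedure F */
void F(void)
{
    sem_wait(&mutex);
    /* ... body of F ... */
    if (next_count > 0)
        sem_post(&next);     /* let a suspended signaler continue */
    else
        sem_post(&mutex);    /* otherwise reopen the monitor */
}

void x_wait(void)            /* x.wait */
{
    x_count++;
    if (next_count > 0)
        sem_post(&next);
    else
        sem_post(&mutex);
    sem_wait(&x_sem);
    x_count--;
}

void x_signal(void)          /* x.signal */
{
    if (x_count > 0) {
        next_count++;
        sem_post(&x_sem);
        sem_wait(&next);     /* signaler waits until the resumed waiter yields */
        next_count--;
    }
}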

25 Process-resumption order
If several processes are suspended on condition x, and an x.signal operation is executed by some process, how do we determine which of the suspended processes should be resumed next? Besides simple FCFS ordering, the conditional-wait construct can be used:
x.wait(c);
where c is an integer expression that is evaluated when the wait operation is executed. The value of c, called the priority number, is stored with the name of the suspended process. When x.signal is executed, the process with the smallest associated priority number is resumed next.

26 Example
A monitor controlling the allocation of a single resource among competing processes. Each process, when requesting an allocation of the resource, specifies the maximum time it plans to use it; the monitor allocates the resource to the process that has the shortest time-allocation request.
monitor ResourceAllocation {
    boolean busy;
    condition x;
    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = true;
    }
    void release() {
        busy = false;
        x.signal();
    }
    void init() {
        busy = false;
    }
}

27 Example
A process that needs to access the resource must follow the sequence:
R.acquire(t);
    ... access the resource ...
R.release();
Unfortunately, the monitor concept cannot guarantee that this sequence will be followed. A process might:
- access the resource without first gaining permission;
- never release the resource;
- release a resource it never requested;
- request the same resource again before releasing it.

28 Access-control problem
Two conditions must be checked to establish the correctness of the system:
- User processes must always make their calls on the monitor in a correct sequence.
- We must ensure that an uncooperative process does not simply ignore the mutual-exclusion gateway provided by the monitor and try to access the shared resource directly, without using the access protocols.
These checks are not reasonable for a large or dynamic system; the problem can be solved only by additional mechanisms (Chapter 18).

29 OS Synchronization § 7.8
Solaris 2 implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing. Synchronization in Solaris 2 provides:
- adaptive mutexes
- condition variables
- semaphores
- readers-writers locks

30 Adaptive Mutex
An adaptive mutex protects access to every critical data item. It starts as a standard semaphore implemented as a spinlock. If the data are locked (in use), the adaptive mutex does one of two things:
- If the lock is held by a thread that is currently running on another CPU, the requesting thread spins while waiting for the lock to become available, because the thread holding the lock is likely to finish soon.
- If the thread holding the lock is not currently in the run state, the requesting thread blocks and goes to sleep until the lock is released, to avoid spinning when the lock will not be freed reasonably quickly.

31 Adaptive Mutex
Multiple-Choice Question: ( ) Different operations may be adopted by the adaptive-mutex mechanism when a thread requests locked data. The decision is made by the status of (a) the located memories (b) the relative CPU speed (c) the thread holding the lock (d) the type of monitor entries.
Answer: c
Related exam question 89(2): Different operations may be adopted by the adaptive-mutex mechanism of Solaris when a thread requests locked data. What is an adaptive mutex, and how is the decision of selecting the suitable operation made?

32 Adaptive Mutex
Adaptive mutexes protect only data that are accessed by short code segments, where a lock will be held for less than a few hundred instructions; if a code segment is longer than that, spin-waiting is exceedingly inefficient. For longer code segments, condition variables and semaphores are used: if the desired lock is already held, the thread issues a wait and sleeps. The cost of putting a thread to sleep and waking it is less than the cost of wasting several hundred instructions waiting in a spinlock.

33 Readers-Writers Lock
Readers-writers locks are used to protect data that are accessed frequently but usually only in a read-only manner. In these circumstances, readers-writers locks are more efficient than semaphores. They are expensive to implement, so again they are used only on long sections of code.

34 OS Synchronization: Windows 2000
- Uses interrupt masks to protect access to global resources on uniprocessor systems.
- Uses spinlocks on multiprocessor systems.
- Also provides dispatcher objects, which may act as either mutexes or semaphores. Dispatcher objects may also provide events; an event acts much like a condition variable.

35 Atomic Transactions § 7.9
We need to make sure that a critical section forms a single logical unit of work that either is performed in its entirety or is not performed at all. A collection of instructions (or operations) that performs a single logical function is called a transaction. A major issue in processing transactions is preserving atomicity despite the possibility of failures within the computer system.

36 Commit & Abort
From our point of view, a transaction is simply a sequence of read and write operations, terminated by a commit or abort operation.
- commit: signifies that the transaction has terminated successfully.
- abort: signifies that the transaction had to cease its normal execution due to some logical error.

37 Roll Back An aborted transaction must have no effect on the state of the data that it has already modified, so that the atomicity property is ensured. Thus, the state of the data accessed by an aborted transaction must be restored to what it was just before the transaction started executing --- rolled back.

38 Device Properties (Skip)
To determine how the system should ensure atomicity, we first need to identify the properties of the devices used for storing the various data accessed by the transactions.
- Volatile storage: information residing in volatile storage does not usually survive system crashes.
- Nonvolatile storage: information residing in nonvolatile storage usually survives system crashes.
- Stable storage: information residing in stable storage is never lost.

39 Mechanisms for ensuring Transaction Atomicity
- Log-based recovery
- Checkpoints
- Concurrent atomic transactions

40 Log-Based Recovery § 7.9.2
Record information describing all the modifications made by the transaction to the various data it accessed (write-ahead logging). Each log record describes a single write operation of a transaction and has the following fields:
- Transaction name
- Data item name
- Old value
- New value

41 Log-Based Recovery
Before a transaction Ti starts its execution, the record <Ti starts> is written to the log. During its execution, any write operation by Ti is preceded by the writing of the appropriate new record to the log. When Ti commits, the record <Ti commits> is written to the log.

42 Log-Based Recovery
Performance penalty: two physical writes are required for every logical write requested, and more storage is needed, for both the data and the log. The recovery algorithm uses two procedures: undo(Ti) and redo(Ti).

43 Log-Based Recovery
If a transaction Ti aborts, we restore the state of the data that it has updated by executing undo(Ti). If a system failure occurs, we must consult the log to determine the proper operation:
- The log contains the <Ti starts> record but does not contain the <Ti commits> record: transaction Ti needs to be undone.
- The log contains both the <Ti starts> and the <Ti commits> records: transaction Ti needs to be redone.
Drawbacks ...
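A hedged sketch of the recovery decision just described; the log-record layout and the helper name are assumptions made for illustration, not any particular system's format.

#include <stdbool.h>
#include <string.h>

/* assumed write-ahead log record layout */
struct log_record {
    char txn[16];        /* transaction name, e.g. "T0" */
    char item[16];       /* data item name, empty for start/commit markers */
    int  old_value;
    int  new_value;
    enum { REC_START, REC_WRITE, REC_COMMIT } kind;
};

/* scan the whole log for transaction t and decide undo vs. redo */
void recover_transaction(const struct log_record *log, int nrec, const char *t)
{
    bool started = false, committed = false;
    for (int i = 0; i < nrec; i++) {
        if (strcmp(log[i].txn, t) != 0)
            continue;
        if (log[i].kind == REC_START)  started = true;
        if (log[i].kind == REC_COMMIT) committed = true;
    }
    if (started && committed) {
        /* redo(t): reapply new_value of every write record, in log order */
    } else if (started) {
        /* undo(t): restore old_value of every write record, in reverse order */
    }
}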

44 Log-Based Recovery
Drawbacks:
- The search process takes time.
- Most of the transactions that need to be redone have already updated the data, so redoing the modifications makes recovery take longer.
To reduce this overhead, we use checkpoints.

45 Checkpoints § 7.9.3
In addition to the write-ahead log, the system periodically performs checkpoints, which allow it to streamline its recovery procedure. A checkpoint requires:
- Output all log records currently residing in main memory onto stable storage.
- Output all modified data residing in main memory to stable storage.
- Output a log record <checkpoint> onto stable storage.

46 Checkpoints
After a failure occurs, the recovery routine examines the log to determine the most recent transaction Ti that started before the most recent checkpoint. The redo and undo operations need to be applied only to Ti and to all transactions Tj that started executing after Ti; call this set of transactions T:
- For all Tk in T such that the record <Tk commits> appears in the log, execute redo(Tk).
- For all Tk in T that have no <Tk commits> record in the log, execute undo(Tk).
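Continuing in the same illustrative spirit, a small self-contained sketch of the first step of checkpoint-based recovery: locating the most recent <checkpoint> record. The record layout is again an assumption.

/* assumed record kinds, now including a checkpoint marker */
enum rec_kind { REC_START, REC_WRITE, REC_COMMIT, REC_CHECKPOINT };

struct ckpt_log_record {
    enum rec_kind kind;
    char txn[16];
};

/* return the index of the most recent <checkpoint> record; the recovery
   routine then considers only the transaction that most recently started
   before that point, plus every transaction that started after it */
int last_checkpoint(const struct ckpt_log_record *log, int nrec)
{
    for (int i = nrec - 1; i >= 0; i--)
        if (log[i].kind == REC_CHECKPOINT)
            return i;
    return 0;   /* no checkpoint found: fall back to scanning the whole log */
}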

47 Concurrent Atomic Transactions
Serializability could be maintained by simply executing each transaction within a critical section, but this is too restrictive. Instead, we can allow transactions to overlap their execution while maintaining serializability, using concurrency-control algorithms.

48 Serial Schedule
Schedule 1: a schedule in which each transaction is executed atomically is called a serial schedule. For a set of n transactions, there exist n! different valid serial schedules, and each serial schedule is correct.
[Figure: Schedule 1, a serial schedule of transactions T0 and T1; the operations read(A), write(A), read(B), write(B) of one transaction complete before the other begins.]

49 Nonserial Schedule
If two transactions are allowed to overlap their execution, the result is a nonserial schedule. A nonserial schedule is not necessarily incorrect.

50 Nonserial Schedule
Two consecutive operations O1 and O2 of Ti and Tj conflict if they access the same data item and at least one of them is a write operation.
[Figure: Schedule 2, an interleaving of the operations read(A), write(A), read(B), write(B) of transactions T0 and T1.]

51 Nonserial Schedule
Two consecutive operations O1 and O2 of Ti and Tj conflict if they access the same data item and at least one of them is a write operation.
[Figure: Schedule 2, highlighting a pair of consecutive operations on the same data item that conflict.]

52 Nonserial Schedule
Two consecutive operations O1 and O2 of Ti and Tj conflict if they access the same data item and at least one of them is a write operation.
[Figure: Schedule 2, highlighting a pair of consecutive operations that do not conflict.]

53 Nonserial Schedule
If Oi and Oj are consecutive and do not conflict, then Oi and Oj can be swapped to produce a new but equivalent schedule.
[Figure: Schedule 2, highlighting a non-conflicting pair about to be swapped.]

54 Nonserial Schedule
If Oi and Oj are consecutive and do not conflict, then Oi and Oj can be swapped to produce a new but equivalent schedule.
[Figure: Schedule 2 after one such swap.]

55 Nonserial Schedule
If Oi and Oj are consecutive and do not conflict, then Oi and Oj can be swapped to produce a new but equivalent schedule.
[Figure: Schedule 2 after further swaps of non-conflicting operations.]

56 Nonserial Schedule
If Oi and Oj are consecutive and do not conflict, then Oi and Oj can be swapped to produce a new but equivalent schedule.
[Figure: Schedule 2 after further swaps of non-conflicting operations.]

57 Nonserial Schedule
If Oi and Oj are consecutive and do not conflict, then Oi and Oj can be swapped to produce a new but equivalent schedule. After a series of such swaps, Schedule 2 is transformed into a serial schedule; this schedule is therefore conflict serializable.
[Figure: the resulting serial ordering of T0 and T1.]
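The conflict test used in these swaps is mechanical. A minimal hedged sketch follows; the op structure is an assumption made for illustration.

#include <stdbool.h>
#include <string.h>

/* one operation of a schedule */
struct op {
    int  txn;          /* which transaction issued it, e.g. 0 or 1 */
    bool is_write;     /* write(Q) if true, read(Q) otherwise */
    char item[8];      /* data item name, e.g. "A" or "B" */
};

/* two consecutive operations of different transactions conflict iff they
   access the same item and at least one of them is a write */
bool conflicts(const struct op *o1, const struct op *o2)
{
    return o1->txn != o2->txn &&
           strcmp(o1->item, o2->item) == 0 &&
           (o1->is_write || o2->is_write);
}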

58 Locking Protocol
A locking protocol governs how locks are acquired and released by transactions. Modes in which a data item can be locked:
- Shared: if a transaction Ti has obtained a shared-mode lock (denoted by S) on data item Q, then Ti can read Q but cannot write Q.
- Exclusive: if a transaction Ti has obtained an exclusive-mode lock (denoted by X) on data item Q, then Ti can both read and write Q.
This is similar to the readers-writers algorithm.

59 Locking protocol
It is not always desirable for a transaction to unlock a data item immediately after its last access of that item, because serializability may not be ensured. One protocol that ensures serializability is the two-phase locking protocol: each transaction issues lock and unlock requests in two phases.
- Growing phase: a transaction may obtain locks but may not release any lock.
- Shrinking phase: a transaction may release locks but may not obtain any new locks.
Initially, a transaction is in the growing phase and acquires locks as needed. Once the transaction releases a lock, it enters the shrinking phase, and no more lock requests can be issued. Two-phase locking ensures conflict serializability; however, it does not ensure freedom from deadlock.
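A hedged sketch of the two-phase discipline itself; the lock-manager calls are left as assumed placeholders, and only the phase bookkeeping required by the protocol is shown.

#include <assert.h>
#include <stdbool.h>

struct transaction {
    bool shrinking;    /* false: growing phase (initial), true: shrinking phase */
};

/* growing phase: locks may be obtained, none has been released yet */
void acquire_lock(struct transaction *t, const char *item, bool exclusive)
{
    assert(!t->shrinking);   /* 2PL: no new locks after the first release */
    (void)item; (void)exclusive;
    /* lock_shared(item) or lock_exclusive(item): assumed lock-manager calls */
}

/* releasing any lock ends the growing phase */
void release_lock(struct transaction *t, const char *item)
{
    t->shrinking = true;
    (void)item;
    /* unlock(item): assumed lock-manager call */
}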

60 Timestamp-based Protocols
The serializability order among transactions can be selected in advance:
- A unique fixed timestamp TS(Ti) is assigned to each transaction Ti by the system before Ti starts executing.
- If a transaction Tj enters the system later than Ti, then TS(Ti) < TS(Tj).
- The value of the system clock or a logical counter can be used to implement the timestamps.

61 Timestamp-based Protocols
The timestamps of the transactions determine the serializability order: if TS(Ti) < TS(Tj), then the system must ensure that the produced schedule is equivalent to a serial schedule in which transaction Ti appears before transaction Tj. Each data item Q has two timestamp values, updated whenever a new read(Q) or write(Q) instruction is executed:
- W-timestamp(Q): the largest timestamp of any transaction that executed write(Q) successfully.
- R-timestamp(Q): the largest timestamp of any transaction that executed read(Q) successfully.

62 Timestamp-based Protocols
This protocol ensures that any conflicting read and write operations are executed in timestamp order.
Suppose Ti issues read(Q):
- If TS(Ti) < W-timestamp(Q), Ti needs to read a value of Q that was already overwritten: the read operation is rejected and Ti is rolled back.
- If TS(Ti) ≥ W-timestamp(Q), the read operation is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti).
Suppose Ti issues write(Q):
- If TS(Ti) < R-timestamp(Q), the value of Q that Ti is producing was needed previously, and Ti assumed that this value would never be produced: the write operation is rejected and Ti is rolled back.
- If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q: the write operation is rejected and Ti is rolled back.
- Otherwise, the write operation is executed.
A transaction Ti that is rolled back by the concurrency-control scheme as a result of issuing either a read or a write operation is assigned a new timestamp and is restarted.

63 Timestamp-based Protocols
[Figure: the read(Q) rule of the previous slide illustrated, comparing TS(Ti) with W-timestamp(Q).]

64 Timestamp-based Protocols
[Figure: the write(Q) rule illustrated, comparing TS(Ti) with R-timestamp(Q) and W-timestamp(Q).]
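A hedged C sketch of the read and write rules above, with timestamps as plain integers and rollback represented by a false return value; the struct and function names are assumptions made for illustration.

#include <stdbool.h>

struct data_item {
    int w_timestamp;    /* largest TS of any successful write(Q) */
    int r_timestamp;    /* largest TS of any successful read(Q) */
    int value;
};

/* returns false if the transaction must be rolled back and restarted
   with a new timestamp */
bool ts_read(struct data_item *q, int ts_ti, int *out)
{
    if (ts_ti < q->w_timestamp)
        return false;                 /* needed value was already overwritten */
    *out = q->value;
    if (ts_ti > q->r_timestamp)
        q->r_timestamp = ts_ti;       /* R-timestamp(Q) = max(R-timestamp(Q), TS(Ti)) */
    return true;
}

bool ts_write(struct data_item *q, int ts_ti, int value)
{
    if (ts_ti < q->r_timestamp)
        return false;                 /* a later reader already needed Q's value */
    if (ts_ti < q->w_timestamp)
        return false;                 /* would write an obsolete value of Q */
    q->value = value;
    q->w_timestamp = ts_ti;
    return true;
}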

65 Example
Assume a transaction is assigned a timestamp immediately before its first instruction; then TS(T2) < TS(T3). The schedule shown is possible under the timestamp protocol; it can also be produced by the two-phase locking protocol.
[Figure: a schedule of T2 and T3 containing read(B), write(B), read(A), write(A) operations.]

66 Timestamp-based Protocols
Some schedules are possible under the two-phase locking protocol but not under the timestamp protocol, and vice versa. The timestamp-ordering protocol ensures conflict serializability; this follows from the fact that conflicting operations are processed in timestamp order. It also ensures freedom from deadlock, because no transaction ever waits.

67 The End

