1 Mutual Exclusion: A Centralized Algorithm
a) Process 1 asks the coordinator for permission to enter a critical region. Permission is granted.
b) Process 2 then asks permission to enter the same critical region. The coordinator does not reply.
c) When process 1 exits the critical region, it tells the coordinator, which then replies to process 2.

2 A Centralized Algorithm
Advantages:
– Guarantees mutual exclusion
– Fair: requests are granted in the order in which they are received
– Requires only three messages per use of a critical region (request, grant, release)
Shortcomings:
– Single point of failure
– If processes normally block after making a request, they cannot distinguish a dead coordinator from "permission denied".
– In a large system, a single coordinator becomes a performance bottleneck.
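
The coordinator's decision logic is small enough to sketch. The Python fragment below is only an illustration of the request/grant/release protocol described above; the Coordinator class, the request/release methods and the grant callback are made-up names, not anything from the slides.

from collections import deque

class Coordinator:
    """Centralized mutual exclusion: a single coordinator decides who may enter."""

    def __init__(self):
        self.holder = None        # process currently inside the critical region
        self.waiting = deque()    # queued requests, FIFO (this is what makes it fair)

    def request(self, pid, grant):
        """A process asks for permission; `grant` is a callback that delivers the OK message."""
        if self.holder is None:
            self.holder = pid
            grant(pid)                          # region free: grant immediately
        else:
            self.waiting.append((pid, grant))   # region busy: queue the request, send no reply

    def release(self, pid):
        """The holder exits the critical region; the oldest waiter, if any, is granted next."""
        assert pid == self.holder
        if self.waiting:
            self.holder, next_grant = self.waiting.popleft()
            next_grant(self.holder)
        else:
            self.holder = None

Note that each use of the critical region still costs exactly three messages: the request, the grant delivered through the callback, and the release.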

3 A Distributed Algorithm
Assuming there is a total ordering of all events in the system:
1. When a process wants to enter a critical region, it builds a message containing the name of the critical region it wants to enter, its process number, and the current time. It then sends the message to all other processes.
2. When a process receives a request:
   a. If the receiver is not in the critical region and does not want to enter it, it sends back an OK message to the sender.
   b. If the receiver is already in the critical region, it does not reply; it queues the request.
   c. If the receiver wants to enter the critical region but has not yet done so, it compares the timestamp in the incoming message with the one in the message it has sent to everyone. The lowest one wins. If the incoming message has the lower timestamp, the receiver sends back an OK message; otherwise, it queues the request.
3. As soon as all permissions are in, the requesting process may enter the critical region. When it exits the critical region, it sends an OK message to all processes on its queue and deletes them all from the queue.

4 A Distributed Algorithm
a) Two processes want to enter the same critical region at the same moment.
b) Process 0 has the lowest timestamp, so it wins.
c) When process 0 is done, it also sends an OK, so process 2 can now enter the critical region.

5 A Distributed Algorithm
Advantages:
– No deadlock or starvation
– The number of messages per entry is 2(n − 1)
– No single point of failure
Disadvantages:
– n points of failure: if any process crashes, its failure to respond to requests will be interpreted as denial of permission.
  Patch: when a request comes in, the receiver always sends a reply, either granting or denying permission. Whenever a request or reply is lost, the sender times out and keeps trying until either a reply comes back or the sender concludes that the receiver is dead. After a request is denied, the sender blocks waiting for a subsequent OK message.
– Group communication support is needed, or each process must maintain the group membership list itself.
– If one process is unable to handle the load, it is unlikely that forcing everyone to do exactly the same thing in parallel is going to help much.
  Patch: modify the algorithm so that a process may enter the critical region as soon as it has collected permission from a simple majority of the other processes.
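
As a rough sketch of rules 1–3 above (a Ricart–Agrawala-style algorithm), the Python fragment below shows only the receiver's side. The Process class, the state names and the send_ok callback are illustrative, and ties are broken by process number so that (timestamp, pid) pairs form the required total order.

from enum import Enum

class State(Enum):
    RELEASED = 0   # neither in, nor trying to enter, the critical region
    WANTED = 1     # has broadcast its own request and is collecting OK messages
    HELD = 2       # currently inside the critical region

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = State.RELEASED
        self.my_request = None     # (timestamp, pid) of our own outstanding request
        self.deferred = []         # requests queued instead of being answered

    def on_request(self, their_ts, their_pid, send_ok):
        """Apply rules 1-3 from the slide when another process's request arrives."""
        if self.state == State.HELD:
            self.deferred.append(their_pid)                    # rule 2: in the region, queue it
        elif self.state == State.WANTED and self.my_request < (their_ts, their_pid):
            self.deferred.append(their_pid)                    # rule 3: our timestamp is lower, we win
        else:
            send_ok(their_pid)                                 # rule 1, or rule 3 when they win

    def on_exit(self, send_ok):
        """Leaving the critical region: answer every request that was deferred."""
        self.state = State.RELEASED
        for pid in self.deferred:
            send_ok(pid)
        self.deferred.clear()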

6 A Token Ring Algorithm
a) An unordered group of processes on a network.
b) A logical ring constructed in software.
When a process acquires the token from its neighbor, it checks whether it wants to enter a critical region. If so, it enters the region, does its work, leaves the region, and then passes the token to the next process. Otherwise, it just passes the token along.

7 A Token Ring Algorithm
If the token is lost, it must be regenerated. Detecting the loss is difficult, since the amount of time between successive appearances of the token on the network is unbounded.
The algorithm also fails if a process crashes.
– We could fix this problem by requiring a process receiving the token to acknowledge receipt. A dead process is then detected when its neighbor tries to give it the token and fails, and the token holder can send the token to the successor of the dead process in the ring.
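
A few lines suffice to sketch what a process does when the token arrives; the process object, its attributes and the pass_token helper are assumptions of this sketch, not part of the slides.

def on_token(process, pass_token):
    """Token-ring mutual exclusion: a process may enter the critical region only
    while it holds the token, and then forwards the token to its ring neighbor."""
    if process.wants_critical_region:
        process.enter_critical_region()
        process.do_work()
        process.leave_critical_region()
    pass_token(process.next_neighbor)   # forward the token whether or not it was used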

8 Comparison
A comparison of the three mutual exclusion algorithms:

Algorithm   | Messages per entry/exit | Delay before entry (in message times) | Problems
Centralized | 3                       | 2                                     | Coordinator crash
Distributed | 2(n − 1)                | 2(n − 1)                              | Crash of any process
Token ring  | 1 to ∞                  | 0 to n − 1                            | Lost token, process crash

9 The Transaction Model (1) Updating a master tape is fault tolerant.

10 The Transaction Model (2)
Examples of primitives for transactions:

Primitive         | Description
BEGIN_TRANSACTION | Mark the start of a transaction
END_TRANSACTION   | Terminate the transaction and try to commit
ABORT_TRANSACTION | Kill the transaction and restore the old values
READ              | Read data from a file, a table, or otherwise
WRITE             | Write data to a file, a table, or otherwise

11 The Transaction Model (3)
a) Transaction to reserve three flights commits.
b) Transaction aborts when the third flight is unavailable.

(a)
BEGIN_TRANSACTION
  reserve WP -> JFK;
  reserve JFK -> Nairobi;
  reserve Nairobi -> Malindi;
END_TRANSACTION

(b)
BEGIN_TRANSACTION
  reserve WP -> JFK;
  reserve JFK -> Nairobi;
  reserve Nairobi -> Malindi full => ABORT_TRANSACTION

12 The Transaction Model (4)
Four characteristics that transactions have:
– Atomic: to the outside world, the transaction happens indivisibly
– Consistent: the transaction does not violate system invariants
– Isolated: concurrent transactions do not interfere with each other
– Durable: once a transaction commits, the changes are permanent

13 Nested vs. Distributed Transactions
a) A nested transaction
b) A distributed transaction

14 Private Workspace
a) The file index and disk blocks for a three-block file.
b) The situation after a transaction has modified block 0 and appended block 3.
c) After committing.
The scheme works for distributed transactions too: a process is started on each machine containing a file that is to be accessed as part of the transaction, and each process is given its own private workspace. If the transaction aborts, every process simply discards its private workspace; when the transaction commits, the updates are propagated.
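
A toy version of the private-workspace idea, assuming an in-memory list of blocks: reads fall through to the committed file, writes go to shadow copies that are installed only on commit. All names here are illustrative, not from the slides.

class PrivateWorkspace:
    """Shadow blocks for one transaction over a block-structured file."""

    def __init__(self, blocks):
        self.original = blocks   # committed state, e.g. ["b0", "b1", "b2"]
        self.shadow = {}         # block index -> privately modified (or appended) copy

    def read(self, i):
        # Prefer our own modified copy; otherwise read the committed block.
        return self.shadow.get(i, self.original[i] if i < len(self.original) else None)

    def write(self, i, data):
        self.shadow[i] = data    # the original file is never touched before commit

    def abort(self):
        self.shadow.clear()      # discarding the workspace undoes everything

    def commit(self):
        # Install modified blocks and appended blocks (like block 3 in the slide).
        for i, data in sorted(self.shadow.items()):
            if i < len(self.original):
                self.original[i] = data
            else:
                self.original.append(data)
        self.shadow.clear()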

15 Writeahead Log
a) A transaction
b) – d) The log before each statement is executed

x = 0;
y = 0;
BEGIN_TRANSACTION;
  x = x + 1;
  y = y + 2;
  x = y * y;
END_TRANSACTION;
(a)

Log: [x = 0/1]                       (b)
Log: [x = 0/1] [y = 0/2]             (c)
Log: [x = 0/1] [y = 0/2] [x = 1/4]   (d)

Before any block is changed, a record is written to a log telling which transaction is making the change, which file and block are being changed, and what the old and new values are. Only after the log has been written successfully is the change made to the file. If the transaction commits, a commit record is written to the log; if the transaction aborts, a rollback must be performed using the old values in the log.
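
The old/new-value log can be mimicked with a few lines of Python. The record layout and the WriteAheadLog class are assumptions for this sketch, not the slide's exact format.

class WriteAheadLog:
    """Write the (old, new) record to the log before changing the data, so that an
    aborted transaction can be rolled back from the old values in the log."""

    def __init__(self, store):
        self.store = store    # e.g. {"x": 0, "y": 0}
        self.log = []         # records of the form (variable, old_value, new_value)

    def write(self, name, new_value):
        self.log.append((name, self.store[name], new_value))   # log first ...
        self.store[name] = new_value                            # ... then update the data

    def commit(self):
        self.log.append(("COMMIT", None, None))

    def rollback(self):
        for name, old, _ in reversed(self.log):                 # undo in reverse order
            if name != "COMMIT":
                self.store[name] = old
        self.log.clear()

# Replaying the slide's transaction leaves the log as
# [("x", 0, 1), ("y", 0, 2), ("x", 1, 4)], i.e. [x = 0/1] [y = 0/2] [x = 1/4].
wal = WriteAheadLog({"x": 0, "y": 0})
wal.write("x", wal.store["x"] + 1)
wal.write("y", wal.store["y"] + 2)
wal.write("x", wal.store["y"] ** 2)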

16 Concurrency Control (1) General organization of managers for handling transactions.

17 Concurrency Control (2) General organization of managers for handling distributed transactions.

18 Serializability
a) – c) Three transactions T1, T2, and T3
d) Possible schedules

(a) BEGIN_TRANSACTION x = 0; x = x + 1; END_TRANSACTION
(b) BEGIN_TRANSACTION x = 0; x = x + 2; END_TRANSACTION
(c) BEGIN_TRANSACTION x = 0; x = x + 3; END_TRANSACTION

(d)
Schedule 1: x = 0; x = x + 1; x = 0; x = x + 2; x = 0; x = x + 3;   Legal
Schedule 2: x = 0; x = 0; x = x + 1; x = x + 2; x = 0; x = x + 3;   Legal
Schedule 3: x = 0; x = 0; x = x + 1; x = 0; x = x + 2; x = x + 3;   Illegal
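
One way to see why schedules 1 and 2 are accepted while schedule 3 is not: brute-force compare the interleaved result against every serial execution. This toy check (the function names and the use of exec are purely illustrative) only compares final values of x, which happens to be enough for these three transactions.

from itertools import permutations

# The three transactions from the slide, each as a list of statements over x.
T1 = ["x = 0", "x = x + 1"]
T2 = ["x = 0", "x = x + 2"]
T3 = ["x = 0", "x = x + 3"]

def run(schedule):
    """Execute the interleaved statements and return the final value of x."""
    env = {"x": 0}
    for stmt in schedule:
        exec(stmt, {}, env)
    return env["x"]

def is_serializable(schedule, transactions=(T1, T2, T3)):
    """A schedule is acceptable if its result equals the result of some serial order."""
    serial_results = {run(sum(order, [])) for order in permutations(transactions)}
    return run(schedule) in serial_results

# Schedule 3 from the slide ends with x == 5, which no serial execution produces:
schedule3 = ["x = 0", "x = 0", "x = x + 1", "x = 0", "x = x + 2", "x = x + 3"]
print(is_serializable(schedule3))   # False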

19 Conflicting Operations
Two operations conflict if they operate on the same data item and at least one of them is a write operation.
Concurrency control algorithms can be classified:
– by their synchronization method: locking or timestamps
– by the expected frequency of conflicts: pessimistic or optimistic
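
The definition above translates directly into a predicate; the (kind, item) tuple representation of an operation is an assumption of this sketch.

def conflict(op1, op2):
    """Two operations conflict if they touch the same data item
    and at least one of them is a write."""
    (kind1, item1), (kind2, item2) = op1, op2
    return item1 == item2 and "write" in (kind1, kind2)

# Example: a read and a write of x conflict, two reads of x do not.
print(conflict(("read", "x"), ("write", "x")))   # True
print(conflict(("read", "x"), ("read", "x")))    # False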

20 Two-Phase Locking (1) Two-phase locking.

21 Two-Phase Locking (2) Strict two-phase locking eliminates cascaded aborts.

22 Two-Phase Locking (3)
Deadlock problem:
– Fixed by requiring transactions to acquire locks in some canonical order
– Or handled by deadlock detection
Distributed variants:
– Centralized 2PL: a single lock manager grants and releases locks for all data items.
– Primary 2PL: each data item is assigned a primary copy; the lock manager on that copy's machine is responsible for granting and releasing locks.
– Distributed 2PL: assumes data may be replicated; the scheduler on each machine not only takes care that locks are granted and released, but also that the operation is forwarded to the (local) data manager.
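
A sketch of the two-phase rule itself, separate from any particular lock manager: lock_manager.acquire/release are assumed helpers that block until the lock is available, and the class name is made up. In strict 2PL (previous slide) the shrinking phase collapses into the commit.

class TwoPhaseLockingTransaction:
    """Growing phase: locks may only be acquired.
    Shrinking phase: locks may only be released."""

    def __init__(self, lock_manager):
        self.lm = lock_manager
        self.held = set()
        self.shrinking = False          # flips to True at the first release

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: acquiring a lock after releasing one")
        self.lm.acquire(item)           # assumed call; blocks until the lock is granted
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True           # we are now in the shrinking phase
        self.held.discard(item)
        self.lm.release(item)

    def commit(self):
        # Strict 2PL: hold everything until commit, then release all at once,
        # which is what eliminates cascaded aborts.
        for item in list(self.held):
            self.unlock(item)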

23 Pessimistic Timestamp Ordering
Every operation that is part of a transaction T is timestamped with ts(T).
Every data item x has a read timestamp ts_RD(x), set to the timestamp of the transaction that most recently read x.
Every data item x has a write timestamp ts_WR(x), set to the timestamp of the transaction that most recently wrote x.
For an operation read(T, x), suppose the timestamp of T is ts. If ts < ts_WR(x), then T aborts. Otherwise, the read takes place and ts_RD(x) is set to max(ts, ts_RD(x)).
For an operation write(T, x), suppose the timestamp of T is ts. If ts < ts_RD(x) or ts < ts_WR(x), then T aborts. Otherwise, the write takes place and ts_WR(x) is set to ts.
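
The two rules can be written out directly; the Item dataclass, the field names and the exception are invented for this sketch.

from dataclasses import dataclass

class TransactionAborted(Exception):
    pass

@dataclass
class Item:
    value: int = 0
    read_ts: int = 0    # timestamp of the transaction that most recently read the item
    write_ts: int = 0   # timestamp of the transaction that most recently wrote the item

def ts_read(ts, item):
    """read(T, x) with ts = ts(T): abort if a younger transaction already wrote x."""
    if ts < item.write_ts:
        raise TransactionAborted()
    item.read_ts = max(item.read_ts, ts)
    return item.value

def ts_write(ts, item, value):
    """write(T, x): abort if a younger transaction already read or wrote x."""
    if ts < item.read_ts or ts < item.write_ts:
        raise TransactionAborted()
    item.value = value
    item.write_ts = ts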

24 Pessimistic Timestamp Ordering Concurrency control using timestamps.

25 Optimistic Timestamp Ordering
Idea: just go ahead and do whatever you want, and keep track of which data items have been read and written. At the point of committing, the transaction checks whether any of the items it has used have been changed by other transactions since it started. If so, it is aborted; otherwise, it is committed.
It fits best with an implementation based on private workspaces.
It is deadlock free and allows maximum parallelism.
Disadvantage: when validation fails, the transaction has to be run again.
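
A sketch of the validate-at-commit idea, using per-item version numbers. The versioning scheme and all names are assumptions of the sketch, and in a real system validation plus installation would itself have to run atomically.

class TransactionAborted(Exception):
    pass

class OptimisticTransaction:
    """Run against a private workspace, then validate at commit time."""

    def __init__(self, store):
        self.store = store          # shared dict: item -> (value, version)
        self.read_versions = {}     # item -> version seen when we first read it
        self.writes = {}            # private workspace: item -> new value

    def read(self, item):
        if item in self.writes:
            return self.writes[item]
        value, version = self.store[item]
        self.read_versions.setdefault(item, version)
        return value

    def write(self, item, value):
        self.writes[item] = value   # just go ahead; nothing is checked yet

    def commit(self):
        # Validation: abort if anything we read was changed after we read it.
        for item, version in self.read_versions.items():
            if self.store[item][1] != version:
                raise TransactionAborted()      # the whole transaction must be rerun
        # Validation passed: install the private workspace.
        for item, value in self.writes.items():
            _, version = self.store.get(item, (None, 0))
            self.store[item] = (value, version + 1)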