
Lecture 12-1 Computer Science 425 Distributed Systems CS 425 / CSE 424 / ECE 428 Fall 2012 Indranil Gupta (Indy) October 4, 2012 Lecture 12: Mutual Exclusion. Reading: Section 15.2. © 2012, I. Gupta, K. Nahrstedt, S. Mitra, N. Vaidya, M. T. Harandi, J. Hou

Lecture 12-2 Why Mutual Exclusion?
Bank's servers in the cloud: think of two simultaneous deposits of $10,000 into your bank account, each from a different ATM.
–Both ATMs read the initial amount of $1,000 concurrently from the bank's cloud server
–Both ATMs add $10,000 to this amount (locally at the ATM)
–Both write the final amount to the server
–What's wrong?

Lecture 12-3 Why Mutual Exclusion?
Bank's servers in the cloud: think of two simultaneous deposits of $10,000 into your bank account, each from a different ATM.
–Both ATMs read the initial amount of $1,000 concurrently from the bank's cloud server
–Both ATMs add $10,000 to this amount (locally at the ATM)
–Both write the final amount to the server
–What's wrong?
The ATMs need mutually exclusive access to your account entry at the server (or, equivalently, to the code that modifies the account entry).

Lecture 12-4 Mutual Exclusion
Critical section problem: a piece of code (at all clients) for which we need to ensure that at most one client executes it at any point in time.
Solutions:
–Semaphores, mutexes, etc. in single-node operating systems
–Message-passing-based protocols in distributed systems:
   enter() the critical section
   AccessResource() in the critical section
   exit() the critical section
Distributed mutual exclusion requirements:
–Safety: at most one process may execute in the CS at any time
–Liveness: every request to enter the CS is eventually granted
–Ordering (desirable): requests are granted in the order they were made
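To make the enter()/AccessResource()/exit() structure concrete, here is a minimal interface sketch in Python. The names (MutexProtocol, access_resource) are illustrative, not from the lecture; each algorithm later in this lecture can be read as one implementation of this interface.

# A minimal sketch (illustrative only): every distributed mutual-exclusion
# algorithm in this lecture can be viewed as an implementation of this interface.
from abc import ABC, abstractmethod

class MutexProtocol(ABC):
    @abstractmethod
    def enter(self):
        """Block until this process may enter the critical section."""

    @abstractmethod
    def exit(self):
        """Leave the critical section so other processes can proceed."""

def access_resource(mutex: MutexProtocol, critical_work):
    """Standard usage pattern: enter() -> AccessResource() -> exit()."""
    mutex.enter()
    try:
        critical_work()   # AccessResource(): e.g., update the account entry
    finally:
        mutex.exit()      # always release, even if the work raises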

Lecture 12-5 Refresher: Semaphores
Used to synchronize access of multiple threads to common data structures.
Semaphore S = 1; allows two operations: wait and signal.
1. wait(S) (or P(S)):
   while(1) {   // each execution of the loop body is atomic
     if (S > 0) { S--; break; }
   }
2. signal(S) (or V(S)):
   S++;   // atomic
Each while-loop iteration and S++ are atomic operations. How?

Lecture 12-6 Refresher: Semaphores
Used to synchronize access of multiple threads to common data structures.
Semaphore S = 1; allows two operations: wait and signal.
1. wait(S) (or P(S)):   ← enter()
   while(1) {   // each execution of the loop body is atomic
     if (S > 0) { S--; break; }
   }
2. signal(S) (or V(S)):   ← exit()
   S++;   // atomic
Each while-loop iteration and S++ are atomic operations. How?
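One way to answer the "how?" question is to protect each check-and-decrement with a lock, so the whole loop body becomes atomic. The sketch below (Python, busy-waiting, names illustrative) is one such realization; production semaphores avoid busy-waiting by blocking the caller instead.

# A sketch of the wait/signal semantics above, using a lock to make each
# "check and decrement" atomic (busy-waiting version, for illustration only).
import threading, time

class SpinSemaphore:
    def __init__(self, value=1):
        self._value = value
        self._guard = threading.Lock()   # makes each loop iteration atomic

    def wait(self):                      # P(S)
        while True:
            with self._guard:            # atomic: test and decrement together
                if self._value > 0:
                    self._value -= 1
                    return
            time.sleep(0)                # yield the CPU before retrying

    def signal(self):                    # V(S)
        with self._guard:                # atomic increment
            self._value += 1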

Lecture 12-7 How are semaphores used? One use: mutual exclusion – the bank ATM example.
semaphore S = 1;
ATM1:
  wait(S);    // enter
  // critical section
  obtain bank amount; add in deposit; update bank amount;
  signal(S);  // exit
ATM2 (with extern semaphore S;):
  wait(S);    // enter
  // critical section
  obtain bank amount; add in deposit; update bank amount;
  signal(S);  // exit
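Here is a runnable version of the ATM example on a single machine, using Python's threading.Semaphore in place of the pseudocode semaphore. The account values mirror the earlier slide; thread and variable names are illustrative.

# Two concurrent deposits of $10,000 into an account starting at $1,000,
# made safe by a binary semaphore around the read-modify-write.
import threading

balance = 1000                     # initial account balance
S = threading.Semaphore(1)         # binary semaphore guarding the balance

def atm_deposit(amount):
    global balance
    S.acquire()                    # wait(S): enter the critical section
    current = balance              # obtain bank amount
    current += amount              # add in deposit (locally)
    balance = current              # update bank amount
    S.release()                    # signal(S): exit the critical section

threads = [threading.Thread(target=atm_deposit, args=(10_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                     # always 21000; without the semaphore it could end up 11000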

Lecture 12-8 Distributed Mutual Exclusion: Performance Evaluation Criteria
–Bandwidth: the total number of messages sent in each entry and exit operation.
–Client delay: the delay incurred by a process at each entry and exit operation, measured when no other process is in the CS or waiting. (We will mostly focus on the entry operation.)
–Synchronization delay: the time between one process exiting the critical section and the next process entering it, when exactly one process is waiting.
These translate into throughput: the rate at which processes can access the critical section, i.e., X accesses per second.
(These definitions are more precise than the ones in the textbook.)

Lecture 12-9 Assumptions/System Model
For all the algorithms studied, we make the following assumptions:
–Each pair of processes is connected by reliable channels (such as TCP).
–Messages are eventually delivered to the recipient, in FIFO order.
–Processes do not fail.

Lecture 12-10 1. Centralized Control of Mutual Exclusion
A central coordinator (master or leader):
–Is elected (which algorithm?)
–Grants permission to enter the CS and keeps a queue of requests to enter the CS.
–Ensures only one process at a time can access the CS.
–Has a special token message, which it can give to any process to let it access the CS.
Operations:
–To enter the CS: send a request to the coordinator and wait for the token.
–On exiting the CS: send a message to the coordinator to release the token.
–Upon receipt of a request: if no other process has the token, the coordinator replies with the token; otherwise, it queues the request.
–Upon receipt of a release message: the coordinator removes the oldest entry in the queue (if any) and replies with the token.
Features:
–Safety and liveness are guaranteed.
–Ordering is also guaranteed (what kind?)
–Requires 2 messages for entry + 1 message for exit.
–Client delay: one round-trip time (request + grant).
–Synchronization delay: 2 message latencies (release + grant).
–The coordinator becomes a performance bottleneck and a single point of failure.
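A minimal sketch of the coordinator's side of this protocol, in Python. Names are illustrative, and queue.Queue objects stand in for the network channels a real deployment would use.

# Sketch of the centralized coordinator: grant the token if free, else queue.
import queue
from collections import deque

class Coordinator:
    def __init__(self):
        self.has_token = True          # coordinator initially holds the token
        self.waiting = deque()         # FIFO queue of deferred requests

    def on_request(self, reply_channel):
        if self.has_token:
            self.has_token = False
            reply_channel.put("token")           # grant immediately
        else:
            self.waiting.append(reply_channel)   # defer: someone holds the token

    def on_release(self):
        if self.waiting:
            self.waiting.popleft().put("token")  # pass token to the oldest waiter
        else:
            self.has_token = True                # nobody waiting; keep the token

# Usage sketch: a client enters by sending a request and blocking on its channel.
coord = Coordinator()
my_channel = queue.Queue()
coord.on_request(my_channel)   # enter(): send request ...
my_channel.get()               # ... and wait for the token
# ... critical section ...
coord.on_release()             # exit(): release the token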

Lecture 12-11 2. Token Ring Approach
Processes are organized in a logical ring: p_i has a communication channel to p_(i+1) mod N.
(Figure: ring of processes P0, P1, ..., PN-1, showing the previous, current, and next holder of the token.)
Operations:
–Only the process holding the token can enter the CS.
–To enter the critical section, wait passively for the token. While in the CS, hold on to the token and do not release it.
–To exit the CS, send the token on to your neighbor.
–If a process does not want to enter the CS when it receives the token, it simply forwards the token to the next neighbor.
Features:
–Safety and liveness are guaranteed.
–Ordering is not guaranteed.
–Bandwidth: 1 message per exit.
–Client delay: 0 to N message transmissions.
–Synchronization delay (between one process's exit from the CS and the next process's entry): between 1 and N-1 message transmissions.
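A tiny single-threaded simulation of the token-ring idea, assuming each process either wants the CS once or just forwards the token. All names and the simulation structure are illustrative.

# N processes in a ring; the token circulates and only its holder may enter the CS.
def simulate_token_ring(n_processes, wants_cs, rounds=2):
    """wants_cs: set of process ids that want the critical section once."""
    pending = set(wants_cs)
    token_at = 0                                   # P0 initially holds the token
    for _ in range(rounds * n_processes):          # token travels around the ring
        if token_at in pending:
            print(f"P{token_at} enters the CS")    # holder uses the CS ...
            print(f"P{token_at} exits the CS")
            pending.discard(token_at)
        # on exit (or lack of interest): forward the token to the next neighbor
        token_at = (token_at + 1) % n_processes
        if not pending:
            break

simulate_token_ring(n_processes=5, wants_cs={1, 3})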

Lecture 12-12 Timestamp Approach: Ricart & Agrawala
Processes requiring entry to the critical section multicast a request, and can enter it only when all other processes have replied positively.
Messages requesting entry are of the form <T, p_i>, where T is the sender's timestamp (from a Lamport clock) and p_i is the sender's identity (used to break ties in T).
To enter the CS:
–set state to WANTED
–multicast "request" to all processes (including the timestamp) – use R-multicast
–wait until all processes send back "reply"
–change state to HELD and enter the CS
On receipt of a request <T_i, p_i> at p_j:
–if (state = HELD) or (state = WANTED and (T_j, p_j) < (T_i, p_i))   // lexicographic ordering
   then queue the request
–else send "reply" to p_i
On exiting the CS:
–change state to RELEASED and send "reply" to all queued requests.

Lecture 12-13 Ricart & Agrawala's Algorithm
On initialization
   state := RELEASED;
To enter the section
   state := WANTED;
   Multicast request to all processes;      // request processing deferred here
   T := request's timestamp;
   Wait until (number of replies received = (N – 1));
   state := HELD;
On receipt of a request <T_i, p_i> at p_j (i ≠ j)
   if (state = HELD or (state = WANTED and (T, p_j) < (T_i, p_i)))
   then queue request from p_i without replying;
   else reply immediately to p_i;
   end if
To exit the critical section
   state := RELEASED;
   reply to any queued requests;
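The sketch below expresses the per-process logic of this pseudocode in Python. Class and method names are illustrative, send() stands in for the network, and the Lamport-clock handling is simplified; it is a sketch of the slide's algorithm, not a production implementation.

# Per-process state machine for Ricart & Agrawala.
class RAProcess:
    def __init__(self, pid, all_pids, send):
        self.pid, self.peers = pid, [p for p in all_pids if p != pid]
        self.send = send                  # send(dest_pid, message)
        self.state = "RELEASED"
        self.clock = 0                    # Lamport clock
        self.my_request = None            # (T, pid) of our pending request
        self.replies = 0
        self.deferred = []                # requestors we have queued

    def enter(self):
        self.state = "WANTED"
        self.clock += 1
        self.my_request = (self.clock, self.pid)
        self.replies = 0
        for p in self.peers:
            self.send(p, ("request", self.my_request))
        # caller then waits until on_reply() has set state to "HELD"

    def on_request(self, their_request):
        t, sender = their_request
        self.clock = max(self.clock, t) + 1
        if self.state == "HELD" or (self.state == "WANTED" and self.my_request < their_request):
            self.deferred.append(sender)  # defer: we are in the CS or have priority
        else:
            self.send(sender, ("reply", self.pid))

    def on_reply(self):
        self.replies += 1
        if self.replies == len(self.peers):
            self.state = "HELD"           # all N-1 replies received: enter the CS

    def exit(self):
        self.state = "RELEASED"
        for sender in self.deferred:      # reply to all queued requests
            self.send(sender, ("reply", self.pid))
        self.deferred = []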

Lecture 12-14 Ricart & Agrawala's Algorithm
(Figure: example run with processes p1, p2, and p3 exchanging Request and Reply messages; competing requests are ordered by their Lamport timestamps, e.g., 34.)

Lecture 12-15 Analysis: Ricart & Agrawala
Safety, liveness, and ordering (causal) are guaranteed. Why?
Bandwidth: 2(N-1) messages per entry operation
–N-1 unicasts for the multicast request + N-1 replies
–N messages if the underlying network supports multicast
N-1 unicast messages per exit operation
–1 multicast if the underlying network supports multicast
Client delay: one round-trip time
Synchronization delay: one message transmission time

Lecture 12-16 Timestamp Approach: Maekawa's Algorithm
Setup:
–Each process p_i is associated with a voting set v_i (of processes)
–Each process belongs to its own voting set
–The intersection of any two voting sets is non-empty
–Each voting set is of size K
–Each process belongs to M other voting sets
–Maekawa showed that K = M = √N works best
One way of doing this is to put the N processes in a √N by √N matrix, and for each p_i, let v_i = the row + the column containing p_i.

Lecture 12-17 Maekawa Voting Set with N=4
(Figure: the 4 processes arranged in a 2×2 grid)
   p1  p2
   p3  p4
p1's voting set v1 = the row and column containing p1 = {p1, p2, p3}
Similarly: v2 = {p1, p2, p4}, v3 = {p1, p3, p4}, v4 = {p2, p3, p4}; any two voting sets intersect.
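The row + column construction can be written down directly. The sketch below assumes N is a perfect square and numbers processes 0..N-1; the function name and id scheme are illustrative.

# Build Maekawa voting sets by the row + column rule on a sqrt(N) x sqrt(N) grid.
import math

def maekawa_voting_sets(n):
    k = math.isqrt(n)
    assert k * k == n, "this construction assumes N is a perfect square"
    sets = []
    for p in range(n):
        row, col = divmod(p, k)
        row_members = {row * k + c for c in range(k)}   # p's row in the grid
        col_members = {r * k + col for r in range(k)}   # p's column in the grid
        sets.append(row_members | col_members)          # v_p = row ∪ column
    return sets

v = maekawa_voting_sets(4)
print(v[0])   # {0, 1, 2} -> p1's voting set {p1, p2, p3} in 1-based naming
# Any two voting sets intersect, which is what gives Maekawa's algorithm safety.
assert all(v[i] & v[j] for i in range(4) for j in range(4))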

Lecture 12-18 Timestamp Approach: Maekawa's Algorithm
Protocol:
–Each process p_i is associated with a voting set v_i (of processes)
–To access the critical section, p_i requests permission from all other processes in its own voting set v_i
–A voting-set member gives permission to only one requestor at a time, and queues all other requests
–Guarantees safety
–May not guarantee liveness (may deadlock)

Lecture 12-19 Maekawa's Algorithm – Part 1
On initialization
   state := RELEASED;
   voted := FALSE;
For p_i to enter the critical section
   state := WANTED;
   Multicast request to all processes in V_i – {p_i};
   Wait until (number of replies received = (K – 1));
   state := HELD;
On receipt of a request from p_i at p_j (i ≠ j)
   if (state = HELD or voted = TRUE)
   then queue request from p_i without replying;
   else send reply to p_i;
        voted := TRUE;
   end if
(Continues on next slide)

Lecture 12-20 Maekawa's Algorithm – Part 2
For p_i to exit the critical section
   state := RELEASED;
   Multicast release to all processes in V_i – {p_i};
On receipt of a release from p_i at p_j (i ≠ j)
   if (queue of requests is non-empty)
   then remove head of queue – from p_k, say;
        send reply to p_k;
        voted := TRUE;
   else voted := FALSE;
   end if
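A Python sketch combining Parts 1 and 2 into one per-process object. Names are illustrative, send() stands in for the network, the voting set is a Python set that includes the process itself, and (like the basic algorithm on the slides) there is no deadlock avoidance.

# Per-process state machine for Maekawa's basic algorithm.
from collections import deque

class MaekawaProcess:
    def __init__(self, pid, voting_set, send):
        self.pid = pid
        self.voting_set = voting_set          # V_i, includes pid itself
        self.send = send                      # send(dest_pid, message)
        self.state = "RELEASED"
        self.voted = False                    # have I granted my single vote?
        self.replies = 0
        self.queue = deque()                  # deferred requestors

    def enter(self):                          # Part 1: request entry
        self.state = "WANTED"
        self.replies = 0
        for p in self.voting_set - {self.pid}:
            self.send(p, ("request", self.pid))
        # caller waits until on_reply() has collected K-1 votes

    def on_request(self, sender):
        if self.state == "HELD" or self.voted:
            self.queue.append(sender)         # already voted (or in CS): defer
        else:
            self.send(sender, ("reply", self.pid))
            self.voted = True                 # my one vote is now taken

    def on_reply(self):
        self.replies += 1
        if self.replies == len(self.voting_set) - 1:
            self.state = "HELD"               # all K-1 votes collected

    def exit(self):                           # Part 2: release
        self.state = "RELEASED"
        for p in self.voting_set - {self.pid}:
            self.send(p, ("release", self.pid))

    def on_release(self):
        if self.queue:
            nxt = self.queue.popleft()        # pass my vote to the oldest waiter
            self.send(nxt, ("reply", self.pid))
            self.voted = True
        else:
            self.voted = False                # reclaim my vote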

Lecture 12-21 Maekawa's Algorithm – Analysis
Bandwidth: 2√N messages per entry, √N messages per exit
–Better than Ricart and Agrawala's (2(N-1) and N-1 messages)
Client delay: one round-trip time
Synchronization delay: 2 message transmission times

Lecture 12-22 Summary
Mutual exclusion
–Semaphores review
–Coordinator-based token
–Token ring
–Ricart and Agrawala's timestamp algorithm
–Maekawa's algorithm
MP2 due this Sunday at midnight
–By now you should have a fully working system and be taking measurements
Demos next Monday, 2-6 pm
–Watch Piazza for the signup sheet