2 Distributed Systems, Fall 2009: Distributed transactions

3 Outline
Flat and nested distributed transactions
Atomic commit
– Two-phase commit protocol
Concurrency control
– Locking
– Optimistic concurrency control
Distributed deadlock
– Edge chasing
Summary

4 Flat and nested distributed transactions
Distributed transaction:
– Transactions dealing with objects managed by different processes
Allows for even better performance
– At the price of increased complexity
Transaction coordinators and object servers
– Participants in the transaction

5 Atomic commit
If the client is told that the transaction is committed, it must be committed at all object servers
– ...at the same time
– ...in spite of (crash) failures and asynchronous systems

6 Two-phase commit protocol
Phase 1: Coordinator collects votes
– "Abort": any participant can abort its part of the transaction
– "Prepared to commit": save the updates to permanent storage to survive crashes; the vote may not later be changed to "abort"
Phase 2: Participants carry out the joint decision

7 Two-phase commit protocol (in detail)
Phase 1 (voting):
– Coordinator sends "canCommit?" to each participant
– Participants answer "yes" or "no"
  "Yes": update saved to permanent storage
  "No": abort immediately
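
A minimal sketch of a participant's side of the voting phase, assuming a hypothetical Participant class and a plain append-only log file as a stand-in for permanent storage; only the message names ("canCommit?", "yes"/"no") come from the slide.

```python
# Sketch of a participant handling the coordinator's "canCommit?" request.
class Participant:
    def __init__(self, log_path):
        self.log_path = log_path      # stand-in for permanent storage
        self.updates = []             # tentative updates for this transaction
        self.state = "active"

    def can_commit(self, transaction_id):
        """Handle "canCommit?" and reply "yes" or "no"."""
        if not self.is_able_to_commit():
            self.state = "aborted"    # "no": abort immediately
            return "no"
        # "Yes": first force the updates to permanent storage so the promise
        # to commit survives a crash, then answer.
        self.write_to_permanent_storage(transaction_id, self.updates)
        self.state = "prepared"       # may no longer change the vote to "abort"
        return "yes"

    def is_able_to_commit(self):
        return True                   # local checks (locks, constraints, ...)

    def write_to_permanent_storage(self, transaction_id, updates):
        with open(self.log_path, "a") as log:
            log.write(f"prepared {transaction_id}: {updates}\n")
```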

8 Two-phase commit protocol (in detail)
Phase 2 (completion):
– Coordinator collects votes (including its own)
  No failures and all votes are "yes"? Send "doCommit" to each participant; otherwise, send "doAbort"
– Participants are in the "uncertain" state until they receive "doCommit" or "doAbort", and then act accordingly
  They confirm a commit via "haveCommitted"
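
A minimal sketch of the coordinator's side, assuming hypothetical participant stubs with can_commit/do_commit/do_abort methods that mirror the slide's messages; failure handling is simplified to illustrate the "no failures and all yes" rule.

```python
# Sketch of a two-phase commit coordinator: collect votes, then complete.
def run_two_phase_commit(transaction_id, participants):
    # Phase 1 (voting): collect votes, including the coordinator's own.
    votes = ["yes"]                       # coordinator's own vote
    for p in participants:
        try:
            votes.append(p.can_commit(transaction_id))
        except ConnectionError:
            votes.append("no")            # a failed participant counts against commit

    # Phase 2 (completion): all "yes" and no failures -> commit, else abort.
    if all(v == "yes" for v in votes):
        for p in participants:
            p.do_commit(transaction_id)   # participants later confirm with "haveCommitted"
        return "committed"
    else:
        for p in participants:
            p.do_abort(transaction_id)
        return "aborted"
```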

9 Two-phase commit protocol
If the coordinator fails
– Participants are "uncertain"
  If some have received the decision (or they can figure it out themselves), they can coordinate among themselves
– Participants can request status
– If a participant has not received "canCommit?" and waits too long, it may abort
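
A minimal sketch of how an uncertain participant might react to a coordinator failure, assuming a hypothetical get_decision call on both the coordinator and the other participants; the fallback order is illustrative.

```python
# Sketch of an "uncertain" participant asking around for the decision.
def resolve_uncertainty(transaction_id, coordinator, other_participants):
    try:
        return coordinator.get_decision(transaction_id)
    except ConnectionError:
        pass
    # Coordinator unreachable: maybe another participant already received
    # "doCommit"/"doAbort", so the group can coordinate among themselves.
    for peer in other_participants:
        try:
            decision = peer.get_decision(transaction_id)
            if decision in ("commit", "abort"):
                return decision
        except ConnectionError:
            continue
    # A prepared "yes" vote cannot be revoked, so keep waiting.
    return "uncertain"
```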

10 Two-phase commit protocol
If a participant fails
– No reply to "canCommit?" in time? The coordinator can abort
– Crash after answering "canCommit?"? Use permanent storage to get back up to speed
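
A minimal sketch of participant recovery from permanent storage after a crash, assuming the hypothetical log format from the earlier voting-phase sketch.

```python
# Sketch of crash recovery: replay the prepared records from the log.
def recover(log_path):
    prepared = {}
    try:
        with open(log_path) as log:
            for line in log:
                if line.startswith("prepared "):
                    transaction_id, updates = line[len("prepared "):].split(": ", 1)
                    prepared[transaction_id] = updates.strip()
    except FileNotFoundError:
        pass           # nothing was prepared before the crash
    # For each prepared transaction the participant is still "uncertain"
    # and must ask for the decision before committing or aborting.
    return prepared
```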

11 Two-phase commit protocol for nested transactions
Subtransactions perform a "provisional commit"
– Nothing is written to permanent storage
  An ancestor could still abort!
– If they crash, the replacement cannot commit
Status information is passed upward in the tree
– The list of provisionally committed subtransactions eventually reaches the top level

12 Two-phase commit protocol for nested transactions
The top-level transaction initiates the voting phase with the provisionally committed subtransactions
– If they have crashed since the provisional commit, they must abort
– Before voting "yes", they must prepare to commit the data
  At this point permanent storage is used
– Hierarchic or flat voting
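
A minimal sketch of a subtransaction's provisional commit versus its prepare step, using hypothetical class and function names; the point is that permanent storage is touched only when the top-level voting phase asks for a "yes" vote.

```python
# Sketch of a subtransaction in nested two-phase commit.
class Subtransaction:
    def __init__(self, trans_id):
        self.trans_id = trans_id
        self.updates = []
        self.state = "active"

    def provisional_commit(self):
        # Nothing written to permanent storage; an ancestor may still abort.
        self.state = "provisionally committed"

    def can_commit(self):
        """Called during the top-level voting phase."""
        if self.state != "provisionally committed":
            return "no"               # e.g. restarted after a crash: must abort
        # Prepare to commit: only now are the updates forced to disk.
        write_to_permanent_storage(self.trans_id, self.updates)
        self.state = "prepared"
        return "yes"

def write_to_permanent_storage(trans_id, updates):
    with open("prepared.log", "a") as log:
        log.write(f"prepared {trans_id}: {updates}\n")
```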

13 Hierarchic voting
The responsibility to vote is passed one level/generation at a time, down through the tree

14 Flat voting
Coordinators are contacted directly, using two parameters
– The transaction ID
– A list of subtransactions that have been reported as aborted
Coordinators may manage more than one subtransaction, and due to crashes this information may be required
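
A minimal sketch of the flat-voting idea, assuming a hypothetical coordinator object and a simple prepared-log; the transaction ID and abort list parameters are the ones named on the slide.

```python
# Sketch: a coordinator managing several subtransactions uses the abort
# list in the "canCommit?" call to vote only for subtransactions that may
# still commit.
class SubtransactionCoordinator:
    def __init__(self):
        # subtransaction id -> parent (top-level) transaction id
        self.provisionally_committed = {}

    def can_commit(self, top_level_id, abort_list):
        relevant = [sub for sub, top in self.provisionally_committed.items()
                    if top == top_level_id and sub not in abort_list]
        if not relevant:
            return "no"                    # nothing here can commit
        prepare(relevant)                  # force these updates to permanent storage
        return "yes"

def prepare(subtransactions):
    with open("prepared.log", "a") as log:
        for sub in subtransactions:
            log.write(f"prepared {sub}\n")
```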

15 Concurrency control revisited
Locks
– Release locks when the transaction can finish
  After phase 1 if the transaction should abort
  After phase 2 if the transaction should commit
– Distributed deadlock, oh my!
Optimistic concurrency control
– Validate access to local objects
– Commitment deadlock if validation is serial
– Different transaction orderings if validation is parallel
– Interesting problem! Read the book!
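
A minimal sketch of the lock-release rule, with a plain dictionary as a hypothetical lock table: locks are freed after phase 1 on an abort vote, and only after the phase-2 decision otherwise.

```python
# Sketch of when a participant releases its locks under two-phase commit.
locks = {}   # object id -> transaction id holding the lock

def on_vote(transaction_id, vote):
    if vote == "no":
        release_locks(transaction_id)      # abort decided already in phase 1

def on_completion(transaction_id, decision):
    # "doCommit" or "doAbort" received in phase 2
    release_locks(transaction_id)

def release_locks(transaction_id):
    for obj in [o for o, holder in locks.items() if holder == transaction_id]:
        del locks[obj]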

16 Distributed deadlock
Local and distributed deadlocks
– Phantom deadlocks
Simplest solution
– A manager collects local wait-for information and constructs a global wait-for graph
  Single point of failure, bad performance, does not scale, what about availability, etc.
Distributed solution
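
A minimal sketch of the centralized approach, assuming local wait-for information arrives as dictionaries mapping a transaction to the transactions it waits for; the cycle search is a simple depth-first walk.

```python
# Sketch of a central manager merging local wait-for graphs and looking
# for a cycle in the resulting global wait-for graph.
def find_deadlock(local_wait_for_graphs):
    # Merge local edges: transaction -> set of transactions it waits for.
    graph = {}
    for local in local_wait_for_graphs:
        for t, waits_for in local.items():
            graph.setdefault(t, set()).update(waits_for)

    # Depth-first search for a cycle in the global graph.
    def visit(t, path):
        for u in graph.get(t, ()):
            if u in path:
                return path[path.index(u):]      # the deadlock cycle
            cycle = visit(u, path + [u])
            if cycle:
                return cycle
        return None

    for t in graph:
        cycle = visit(t, [t])
        if cycle:
            return cycle
    return None

# Example: T waits for U on one server, U waits for T on another.
print(find_deadlock([{"T": {"U"}}, {"U": {"T"}}]))   # ['T', 'U'] -> a cycle
```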

17 Edge chasing
Initiation: a server notices that T waits for U, where U is in turn waiting for object A, so it sends a probe (T → U) to the server handling A (where U may be blocked)

18 Edge chasing
Detection: servers handle incoming probes by inspecting whether the relevant transaction (U) is also waiting for another transaction (V); if so, the server updates the probe (T → U → V) and sends it along
– Loops (e.g. T → U → V → T) indicate deadlock
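
A minimal sketch of probe handling, with probes represented as lists of transaction IDs; the forwarding mechanism and data structures are illustrative.

```python
# Sketch of how a server initiates or extends an edge-chasing probe.
def handle_probe(probe, local_wait_for, forward):
    """probe: e.g. ["T", "U"], meaning T -> U
    local_wait_for: which transaction each blocked local transaction waits for
    forward: function sending the probe to the server where the next
             transaction is blocked"""
    blocked = probe[-1]                    # U in the example
    waiting_on = local_wait_for.get(blocked)
    if waiting_on is None:
        return None                        # U is not blocked here: drop the probe
    extended = probe + [waiting_on]        # T -> U -> V
    if waiting_on in probe:                # loop, e.g. T -> U -> V -> T
        return extended                    # deadlock detected
    forward(extended)                      # send it along to V's server
    return None

# Example: at the server handling A, U turns out to wait for V.
handle_probe(["T", "U"], {"U": "V"}, forward=print)   # prints ['T', 'U', 'V']
```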

19 Edge chasing
Resolution: abort a transaction in the cycle
Servers communicate with the coordinators of each transaction to find out what they wait for

20 Edge chasing
Any problem with the algorithm?
– What if all coordinators initiate it, and then (when they detect the loop) start aborting left and right?
Totally ordered transaction priorities
– Abort the lowest priority!

21 Edge chasing
Optimization: only initiate a probe if a transaction with higher priority waits for one with lower priority
– Also, only forward probes to transactions of lower priority
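
A minimal sketch of the priority rules, assuming a hypothetical priority map where a larger number means higher priority; the slide does not say lower than what exactly probes are forwarded, so comparing against the probe's initiator is one possible reading.

```python
# Sketch of the priority-based optimization and the resolution rule.
def should_initiate_probe(waiter, holder, priority):
    # e.g. priority = {"T": 3, "U": 2, "V": 1}
    return priority[waiter] > priority[holder]

def should_forward_probe(probe, next_transaction, priority):
    # Only forward probes "downhill": here, only to transactions with lower
    # priority than the probe's initiator (one reading of the slide).
    initiator = probe[0]
    return priority[next_transaction] < priority[initiator]

def choose_victim(cycle, priority):
    # Resolution (slide 20): abort the lowest-priority transaction in the cycle.
    return min(cycle, key=lambda t: priority[t])
```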

22 Edge chasing
Any problem with the optimized algorithm?
– If a higher-priority transaction waits for a lower-priority one that is not blocked when the request arrives, and the lower one becomes blocked only later, probing will not be initiated

23 Edge chasing
Add probe queues!
– All probes that relate to a transaction are saved, and are sent (by the coordinator) to the server of the object together with the request for access
– Works, but increases complexity
– Probe queues must be maintained
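
A minimal sketch of probe queues, assuming the coordinator keeps the saved probes per transaction and attaches them to the next access request; all names are illustrative.

```python
# Sketch of probe queues: saved probes travel with later access requests,
# so edge chasing can continue even if the transaction becomes blocked
# only after the probes were first seen.
from collections import defaultdict

probe_queues = defaultdict(list)     # transaction id -> saved probes

def save_probe(probe):
    for transaction in probe:
        probe_queues[transaction].append(probe)

def request_access(transaction, obj, send_to_server):
    # The coordinator attaches the transaction's probe queue to the request.
    send_to_server(obj, transaction, probe_queues[transaction])
```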

24 Summary
Distributed transactions
Atomic commit protocol
– Two-phase commit protocol: vote, then carry out the order
– Flat transactions
– Nested transactions: voting schemes
Concurrency control
– Problems!
– Distributed deadlock: edge chasing

25 Next lecture
Daniel takes over!
Beyond client-server
– Peer-to-peer (P2P)
– BitTorrent
– ...and more!

