Chapter 7: Distributed Transactions
Transaction concepts
Centralized/distributed transaction architecture
Schedule concepts
Locking schemes
Deadlocks
Distributed commit protocol (2PC)
An Updating Transaction
Updating a master tape is fault tolerant: if a run fails for any reason, all the tapes can be rewound and the job restarted with no harm done.
Transaction concepts
Transactions are to a DBMS what processes are to an OS.
A transaction is a collection of actions that transforms the system from one consistent state to another.
Termination of a transaction: commit vs. abort.
Example: T: x = x + y, executed as Read(x); Read(y); Write(x); Commit, abbreviated R(x) R(y) W(x) C.
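A minimal sketch of the commit/abort idea, assuming a toy in-memory store (the Transaction class and its methods are illustrative, not a real DBMS API):

    # Toy sketch: buffered writes give all-or-nothing behavior for T: x = x + y.
    class Transaction:
        def __init__(self, store):
            self.store = store        # committed state
            self.workspace = {}       # uncommitted writes

        def read(self, item):         # R(x): see own uncommitted write first
            return self.workspace.get(item, self.store[item])

        def write(self, item, value): # W(x): buffer until commit
            self.workspace[item] = value

        def commit(self):             # C: make all writes visible at once
            self.store.update(self.workspace)

        def abort(self):              # discard all uncommitted writes
            self.workspace.clear()

    store = {"x": 1, "y": 2}
    t = Transaction(store)
    t.write("x", t.read("x") + t.read("y"))  # R(x) R(y) W(x)
    t.commit()                               # C: store["x"] is now 3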
Properties of Transactions: ACID
Atomicity: all or nothing.
Consistency: no violation of integrity constraints, i.e., from one consistent state to another consistent state.
Isolation: concurrent changes are invisible, i.e., serializable.
Durability: committed updates persist (saved in permanent storage).
Transaction Processing Issues
Transaction structure: flat vs. nested vs. distributed
Internal database consistency: semantic data control and integrity enforcement
Reliability protocols: atomicity and durability; local recovery protocols; global commit protocols
Concurrency control algorithms
Replica control protocols
Basic Transaction Primitives
Primitive            Description
BEGIN_TRANSACTION    Mark the start of a transaction
END_TRANSACTION      Terminate the transaction and try to commit
ABORT_TRANSACTION    Kill the transaction and restore the old values
READ                 Read data from a file or a table
WRITE                Write data to a file or a table
A Transaction Example
(a) Transaction to reserve three flights commits:

    BEGIN_TRANSACTION
        reserve BJ -> JFK;
        reserve JFK -> TTY;
        reserve TTY -> MON;
    END_TRANSACTION

(b) Transaction aborts when the third flight is unavailable:

    BEGIN_TRANSACTION
        reserve BJ -> JFK;
        reserve JFK -> TTY;
        reserve TTY -> MON full => ABORT_TRANSACTION
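A hedged sketch of how the primitives above might map onto ordinary code; reserve, the seat counts, and the exception-driven abort are placeholders for illustration, not a real reservation API:

    # Sketch: BEGIN/ABORT/END as save-restore around a sequence of reserves.
    class AbortTransaction(Exception):
        pass

    def reserve(leg, seats_left):
        if seats_left[leg] == 0:              # flight full
            raise AbortTransaction(leg)
        seats_left[leg] -= 1

    def book_trip(seats_left):
        snapshot = dict(seats_left)           # BEGIN_TRANSACTION: keep old values
        try:
            for leg in ("BJ->JFK", "JFK->TTY", "TTY->MON"):
                reserve(leg, seats_left)
        except AbortTransaction:              # ABORT_TRANSACTION: restore them
            seats_left.clear()
            seats_left.update(snapshot)
            return False
        return True                           # END_TRANSACTION: try to commit

    seats = {"BJ->JFK": 2, "JFK->TTY": 1, "TTY->MON": 0}
    print(book_trip(seats))                   # False: third leg full, state restored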
Transaction execution
Nested vs Distributed Transactions
Flat/nested Distributed Transactions
(a) Distributed flat; (b) distributed nested.
(Figure: a circle (Si) denotes a server, and a square (Tj) represents a sub-transaction.)
Distributed Transaction
A distributed transaction accesses resource managers distributed across a network.
When the resource managers are DBMSs, we refer to the system as a distributed database system.
Each DBMS might export stored procedures or an SQL interface; in either case, the operations at a site are grouped together as a subtransaction, and the site is referred to as a cohort of the distributed transaction.
A coordinator module plays the major role in supporting the ACID properties of a distributed transaction; the transaction manager acts as coordinator.
Distributed Transaction execution
Distributed ACID
Global atomicity: all subtransactions of a distributed transaction must commit, or all must abort. An atomic commit protocol, initiated by a coordinator (e.g., the transaction manager), ensures this. The coordinator must poll the cohorts to determine whether they are all willing to commit.
Global deadlocks: there must be no deadlocks involving multiple sites.
Global serialization: a distributed transaction must be globally serializable.
Schedule
Synchronizing concurrent transactions:
The database remains consistent.
Maximum degree of concurrency is allowed.
Transactions execute concurrently, but the net effect of the resulting history is equivalent to some serial history.
Conflict equivalence: the relative order of execution of the conflicting operations belonging to unaborted transactions is the same in both schedules.
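Conflict equivalence can be checked mechanically: draw an edge Ti -> Tj whenever an operation of Ti conflicts with a later operation of Tj; the schedule is conflict-serializable iff this precedence graph is acyclic. A small sketch (the schedule encoding is made up for illustration):

    # Sketch: conflict-serializability test via a precedence graph.
    from itertools import combinations

    def conflicts(a, b):
        # Conflict: same item, different transactions, at least one write.
        return a[2] == b[2] and a[0] != b[0] and "w" in (a[1], b[1])

    def serializable(schedule):               # schedule: list of (txn, op, item)
        edges = {(schedule[i][0], schedule[j][0])
                 for i, j in combinations(range(len(schedule)), 2)
                 if conflicts(schedule[i], schedule[j])}

        def reaches(src, dst, seen=()):       # DFS over precedence edges
            return any(u == dst or (u not in seen and reaches(u, dst, seen + (u,)))
                       for v, u in edges if v == src)

        return not any(reaches(t, t) for t, _, _ in schedule)

    # The lost-update interleaving on B (next slide) is not serializable:
    print(serializable([("T", "r", "B"), ("U", "r", "B"),
                        ("U", "w", "B"), ("T", "w", "B")]))   # False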
Lost Update Problem
Transactions T and U both work on the same bank branch K:

    BEGIN_TRANSACTION(T):          BEGIN_TRANSACTION(U):
        K: withdraw(A, 40);            K: withdraw(C, 30);
        K: deposit(B, 40);             K: deposit(B, 30);
    END_TRANSACTION(T);            END_TRANSACTION(U);

Interleaved operations and resulting balances:

    T: A.balance = A.read()          (A) 100
    T: A.write(A.balance - 40)       (A) 60
    U: C.balance = C.read()          (C) 300
    U: C.write(C.balance - 30)       (C) 270
    T: B.balance = B.read()          (B) 200
    U: B.balance = B.read()          (B) 200
    U: B.write(B.balance + 30)       (B) 230
    T: B.write(B.balance + 40)       (B) 240

U's deposit is lost: B ends at 240 instead of 270, because T overwrites B using the value it read before U's update.
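The anomaly is easy to reproduce deterministically, with the interleaving fixed to match the table above (plain Python, no threads):

    # Sketch: the lost-update interleaving on B, step by step.
    balances = {"A": 100, "B": 200, "C": 300}

    t_b = balances["B"]          # T: B.read()  -> 200
    u_b = balances["B"]          # U: B.read()  -> 200
    balances["B"] = u_b + 30     # U: B.write(230)
    balances["B"] = t_b + 40     # T: B.write(240) overwrites U's deposit
    print(balances["B"])         # 240, not 270: U's update is lost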
Inconsistent Retrieval Problem
Transactions T and U both work on the same bank branch K:

    BEGIN_TRANSACTION(T):          BEGIN_TRANSACTION(U):
        K: withdraw(A, 100);           K: Total_balance(A, B, C)
        K: deposit(B, 100);            ...
    END_TRANSACTION(T);            END_TRANSACTION(U);

    T: A.balance = A.read()                        (A) 200
    T: A.write(A.balance - 100)                    (A) 100
    U: total_balance = A.read()                    100
    U: total_balance = total_balance + B.read()    (B still 200)
    U: total_balance = total_balance + C.read()
    T: B.balance = B.read()                        (B) 200
    T: B.write(B.balance + 100)                    (B) 300
    ...

U sees A after T's withdrawal but B before T's deposit, so the retrieved total is 100 too low.
Serial Equivalence
Transactions T and U both work on the same bank branch K:

    BEGIN_TRANSACTION(T):          BEGIN_TRANSACTION(U):
        K: withdraw(A, 40);            K: withdraw(C, 30);
        K: deposit(B, 40);             K: deposit(B, 30);
    END_TRANSACTION(T);            END_TRANSACTION(U);

    T: A.balance = A.read()          (A) 100
    T: A.write(A.balance - 40)       (A) 60
    U: C.balance = C.read()          (C) 300
    U: C.write(C.balance - 30)       (C) 270
    T: B.balance = B.read()          (B) 200
    T: B.write(B.balance + 40)       (B) 240
    U: B.balance = B.read()          (B) 240
    U: B.write(B.balance + 30)       (B) 270

All of T's conflicting operations on B precede U's, so this interleaving is equivalent to the serial history T then U; B ends at the correct 270.
Serializability
(a)-(c) Three transactions T1, T2, and T3:

    (a) BEGIN_TRANSACTION x = 0; x = x + 1; END_TRANSACTION
    (b) BEGIN_TRANSACTION x = 0; x = x + 2; END_TRANSACTION
    (c) BEGIN_TRANSACTION x = 0; x = x + 3; END_TRANSACTION

(d) Possible schedules:

    Schedule 1: x = 0; x = x + 1; x = 0; x = x + 2; x = 0; x = x + 3    Legal (serial)
    Schedule 2: x = 0; x = 0; x = x + 1; x = x + 2; x = 0; x = x + 3    Legal (equivalent to the serial order T1, T2, T3)
    Schedule 3: x = 0; x = 0; x = x + 1; x = 0; x = x + 2; x = x + 3    Illegal (final x = 5 matches no serial order)
Concurrency Control Algorithms
Pessimistic:
    Two-phase locking based (2PL): centralized 2PL, distributed 2PL
    Timestamp ordering (TO): basic TO, multiversion TO, conservative TO
    Hybrid
Optimistic:
    Locking-based and TO-based
Two-Phase Locking (1)
A transaction locks an object before using it.
When an object is already locked by another transaction, the requesting transaction must wait.
Once a transaction has released a lock, it may not request another lock.
Growing phase: the scheduler acquires all the locks the transaction needs. Shrinking phase: the scheduler releases the locks.
Strict 2PL: hold all locks until the end of the transaction.
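A minimal sketch of a scheduler enforcing these rules, assuming a single-threaded core where a False return means "wait" (the class and method names are illustrative, not a real lock-manager API):

    # Sketch: strict 2PL lock manager with exclusive locks only.
    class TwoPhaseLocking:
        def __init__(self):
            self.locks = {}          # item -> transaction holding the lock
            self.shrinking = set()   # transactions that have released locks

        def lock(self, txn, item):
            if txn in self.shrinking:
                raise RuntimeError("2PL violated: lock after first release")
            holder = self.locks.get(item)
            if holder not in (None, txn):
                return False         # held by someone else: caller must wait
            self.locks[item] = txn
            return True

        def release_all(self, txn):  # strict 2PL: release only at the end
            self.shrinking.add(txn)
            self.locks = {i: t for i, t in self.locks.items() if t != txn}

    lm = TwoPhaseLocking()
    assert lm.lock("T", "B")
    assert not lm.lock("U", "B")     # U must wait, as in the example two slides on
    lm.release_all("T")
    assert lm.lock("U", "B")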
Two-Phase Locking (2)
Two-Phase Locking (3)
The lost-update transactions T and U under 2PL:

    BEGIN_TRANSACTION(T):          BEGIN_TRANSACTION(U):
        K: withdraw(A, 40);            K: withdraw(C, 30);
        K: deposit(B, 40);             K: deposit(B, 30);
    END_TRANSACTION(T);            END_TRANSACTION(U);

    T: A.balance = A.read()       lock(A)   (A) 100
    U: C.balance = C.read()       lock(C)   (C) 300
    T: A.write(A.balance - 40)              (A) 60
    U: C.write(C.balance - 30)              (C) 270
    T: B.balance = B.read()       lock(B)   (B) 200
    U: ... waits for B's lock
    T: B.write(B.balance + 40)              (B) 240
    T: END_TRANSACTION(T)         release(A), release(B)
    U: B.balance = B.read()       lock(B)   (B) 240
    U: B.write(B.balance + 30)              (B) 270
    U: END_TRANSACTION(U)

U cannot touch B until T commits and releases its locks, so no update is lost.
Two-Phase Locking (4)
Centralized 2PL: one 2PL scheduler in the distributed system; lock requests are issued to the central scheduler.
Primary 2PL: each data item is assigned a primary copy, and the lock manager at that copy is responsible for lock/release. Like centralized 2PL, but locking is distributed.
Distributed 2PL: a 2PL scheduler is placed at each site, and each scheduler handles the lock requests at that site. A transaction may read any of the replicated copies of x by obtaining a read lock on one of the copies; writing x requires obtaining write locks on all the copies (see the sketch below).
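A sketch of the read-one / write-all rule for replicated data; replicas and the site_lock callback (a stand-in for an RPC to a site's scheduler) are assumptions for illustration:

    # Sketch: locking replicated data under distributed 2PL.
    def read_lock(x, replicas, site_lock):
        # A read lock on any single copy suffices.
        return site_lock(replicas[x][0], x, "read")

    def write_lock(x, replicas, site_lock):
        # A write must obtain a write lock on every copy, at every site.
        return all(site_lock(site, x, "write") for site in replicas[x])

    replicas = {"x": ["S1", "S2", "S3"]}
    print(write_lock("x", replicas, lambda site, item, mode: True))  # stub: True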
Timestamp ordering (1)
Each transaction Ti is assigned a globally unique timestamp ts(Ti); using Lamport's algorithm, we can ensure that the timestamps are unique (important).
The transaction manager attaches the timestamp to all of the transaction's operations.
Each data item x is assigned a write timestamp wts(x) and a read timestamp rts(x) such that:
    rts(x) = largest timestamp of any read on x
    wts(x) = largest timestamp of any write on x
Timestamp ordering (2)
Conflicting operations are resolved by timestamp order. Let ts(Ti) be the timestamp of transaction Ti, and Ri(x), Wi(x) a read/write operation from Ti.

For Ri(x):
    if ts(Ti) < wts(x) then reject Ri(x) (abort Ti)
    else accept Ri(x); rts(x) ← max(rts(x), ts(Ti))

For Wi(x):
    if ts(Ti) < rts(x) or ts(Ti) < wts(x) then reject Wi(x) (abort Ti)
    else accept Wi(x); wts(x) ← ts(Ti)
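The rules translate directly into code; a minimal in-memory sketch (timestamps are plain integers, and a False return means "abort Ti"):

    # Sketch: basic timestamp ordering for a single scheduler.
    class BasicTO:
        def __init__(self):
            self.rts, self.wts = {}, {}    # per-item read/write timestamps

        def read(self, ts, x):             # Ri(x)
            if ts < self.wts.get(x, 0):
                return False               # x was overwritten by a younger txn
            self.rts[x] = max(self.rts.get(x, 0), ts)
            return True

        def write(self, ts, x):            # Wi(x)
            if ts < self.rts.get(x, 0) or ts < self.wts.get(x, 0):
                return False               # a younger txn already read/wrote x
            self.wts[x] = ts
            return True

    to = BasicTO()
    to.write(5, "x")         # accepted: wts(x) = 5
    print(to.read(3, "x"))   # False: T3 would read a value from its future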
Distributed commit protocols
How to execute commit for distributed transactions; the issue is how to ensure atomicity and durability.
One-phase commit (1PC): the coordinator tells all servers to commit. Problem: a server cannot abort its part of the transaction.
Two-phase commit (2PC): allows any server to abort its part of a transaction. Commonly used.
Three-phase commit (3PC): avoids blocking servers when the coordinator fails. Often described in the literature, rarely used in practice.
Two-phase commit (2PC)
Consider a distributed transaction involving the participation of a number of processes, each running on a different machine, and assume no failures occur.
    Phase 1: the coordinator gets the participants ready to write the results into the database.
    Phase 2: everybody writes the results into the database.
Coordinator: the process at the site where the transaction originates, which controls the execution.
Participant: a process at one of the other sites that participates in executing the transaction.
Global commit rule:
    The coordinator aborts iff at least one participant votes to abort.
    The coordinator commits iff all the participants vote to commit.
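Under the no-failure assumption, the global commit rule is a few lines; the Participant stub and its vote/finish methods are illustrative, not a real protocol library:

    # Sketch: 2PC coordinator logic, with no failures or timeouts modeled.
    def two_phase_commit(participants):
        # Phase 1: collect votes ("commit" or "abort") from every participant.
        votes = [p.vote() for p in participants]
        decision = "commit" if all(v == "commit" for v in votes) else "abort"
        # Phase 2: broadcast the global decision; every participant obeys it.
        for p in participants:
            p.finish(decision)
        return decision

    class Participant:
        def __init__(self, ok): self.ok = ok
        def vote(self): return "commit" if self.ok else "abort"
        def finish(self, decision): self.decision = decision

    print(two_phase_commit([Participant(True), Participant(True)]))    # commit
    print(two_phase_commit([Participant(True), Participant(False)]))   # abort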
2PC phases
2PC actions
Distributed 2PC
The coordinator initiates 2PC.
The participants run a distributed algorithm to reach agreement on global commit or abort.
Problems with 2PC
Blocking:
    READY implies that the participant is waiting for the coordinator.
    If the coordinator fails, the site is blocked until recovery.
    Blocking reduces availability.
Independent recovery is not possible:
    Independent recovery protocols do not exist for multiple site failures.
Deadlocks
Even if transactions follow 2PL, deadlocks can occur. Consider the following scenarios:

    w1(x) w2(y) r1(y) r2(x)
    r1(x) r2(x) w1(x) w2(x)

Deadlock management:
    Ignore: let the application programmers handle it.
    Prevention: no run-time support needed.
    Avoidance: requires run-time support.
    Detection and recovery: find it at your own leisure!
Deadlock Conditions
    Mutual exclusion
    Hold and wait
    No preemption
    Circular chain
(Figure: (a) a simple cycle T → U → V → W → T; (b) a complex graph with cycles V → W → T → V and V → W → V.)
Deadlock Avoidance
WAIT-DIE rule:
    if ts(Ti) < ts(Tj) then Ti waits else Ti dies
    Non-preemptive: Ti never preempts Tj.
    Prefers younger transactions.
WOUND-WAIT rule:
    if ts(Ti) < ts(Tj) then Tj is wounded else Ti waits
    Preemptive: Ti preempts Tj if Tj is younger.
    Prefers older transactions.
Problem: very expensive; deadlocks occur rarely and at random, so applying these rules to every conflict adds system overhead.
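Both rules reduce to a single timestamp comparison at conflict time. A sketch, assuming a smaller timestamp means an older transaction and that Ti is requesting a lock that Tj holds:

    # Sketch: the two avoidance rules as decision functions.
    def wait_die(ts_i, ts_j):
        return "Ti waits" if ts_i < ts_j else "Ti dies"

    def wound_wait(ts_i, ts_j):
        return "Tj is wounded" if ts_i < ts_j else "Ti waits"

    print(wait_die(1, 2))      # older requester is allowed to wait
    print(wound_wait(1, 2))    # older requester preempts (wounds) younger Tj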
Deadlock Detection
Deadlock is a stable property.
With a distributed transaction, a deadlock might not be detectable at any one site, but a deadlock still shows up as a cycle in the wait-for graph.
Topologies for deadlock detection algorithms:
    Centralized: periodically collect the waiting states at one site.
    Distributed: path pushing.
    Hierarchical: build a hierarchy of detectors.
Centralized – periodically collecting waiting states
(Figure: local wait-for edges collected by the central detector: site S1 reports U → V, site S2 reports V → W, site S3 reports W → U. The combined graph contains the cycle U → V → W → U, a global deadlock that no single site can see.)
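The central detector's job is then a cycle search over the merged edges; a sketch using the figure's three reports (plain DFS, illustrative only):

    # Sketch: merge local wait-for edges and search for a global cycle.
    def find_cycle(edges):
        graph = {}
        for u, v in edges:
            graph.setdefault(u, []).append(v)

        def dfs(node, path):
            if node in path:
                return path[path.index(node):]      # found a cycle
            for nxt in graph.get(node, []):
                cycle = dfs(nxt, path + [node])
                if cycle:
                    return cycle
            return None

        return next((c for n in graph if (c := dfs(n, []))), None)

    local_waits = [("U", "V"),   # reported by site S1
                   ("V", "W"),   # reported by site S2
                   ("W", "U")]   # reported by site S3
    print(find_cycle(local_waits))   # ['U', 'V', 'W']: a global deadlock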