ICS 421 Spring 2010 -- Distributed Transactions
Asst. Prof. Lipyeow Lim
Information & Computer Science Department, University of Hawaii at Manoa
3/16/2010

Isolation: what is different?
Consider a shared-nothing parallel/distributed DB:
– Node 1 receives a transaction that updates relation P.
– Node 3 receives a transaction that updates P.
– Node 4 receives a transaction that reads P.
[Figure: Nodes 1-4 connected by a network, each storing one partition P1-P4 of relation P.]
How do we ensure isolation?

Distributed Locking
How do we manage locks for objects across many sites?
Centralized: one site does all locking.
– Vulnerable to single-site failure.
Primary Copy: all locking for an object is done at that object's primary-copy site.
– Reading requires access to the locking site as well as the site where the object is stored.
Fully Distributed: locking for a copy is done at the site where that copy is stored.
– Writing an object requires locks at all sites that hold a copy.
What about deadlocks?
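
To make the trade-off concrete, here is a minimal sketch of where a lock request would have to be sent under each scheme, assuming a read-one/write-all rule for the fully distributed case. The site names, the replica maps, and the lock_sites helper are all invented for illustration; a real lock manager also deals with lock modes, queues, and failures.

```python
CENTRAL_SITE = "site0"
PRIMARY_COPY = {"P": "site1"}                      # assumed primary-copy map
COPIES = {"P": ["site1", "site2", "site4"]}        # assumed replica locations

def lock_sites(obj, mode, scheme):
    """Return the sites that must grant a lock on obj (mode 'S' or 'X')."""
    if scheme == "centralized":
        return [CENTRAL_SITE]                      # one site does all locking
    if scheme == "primary-copy":
        return [PRIMARY_COPY[obj]]                 # lock only at the primary site
    if scheme == "fully-distributed":
        if mode == "S":
            return [COPIES[obj][0]]                # read: lock one copy
        return list(COPIES[obj])                   # write: lock every copy
    raise ValueError(scheme)

print(lock_sites("P", "X", "fully-distributed"))   # ['site1', 'site2', 'site4']
```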

Distributed Deadlock Detection
Each site maintains a local waits-for graph. A global deadlock might exist even if no local graph contains a cycle.
Three solutions:
– Centralized: send all local graphs to one site.
– Hierarchical: organize sites into a hierarchy and send local graphs to the parent in the hierarchy.
– Timeout: abort an Xact if it waits too long.
[Figure: local waits-for graphs at Site A and Site B, each acyclic on its own, whose union in the global graph contains a cycle between T1 and T2.]
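
A minimal sketch of the centralized option: each site ships its local waits-for edges to one detector site, which unions them and looks for a cycle in the global graph. The function name and edge-list format are illustrative, not from the lecture.

```python
from collections import defaultdict

def detect_global_deadlock(local_graphs):
    """local_graphs: list of edge lists [(waiter, holder), ...], one per site.
    Returns the transactions on some cycle, or [] if there is no deadlock."""
    graph = defaultdict(set)
    for edges in local_graphs:            # union the local waits-for graphs
        for waiter, holder in edges:
            graph[waiter].add(holder)

    visited, on_stack = set(), []

    def dfs(txn):
        if txn in on_stack:               # reached a txn already on the path: cycle
            return on_stack[on_stack.index(txn):]
        if txn in visited:
            return []
        visited.add(txn)
        on_stack.append(txn)
        for nxt in graph[txn]:
            cycle = dfs(nxt)
            if cycle:
                return cycle
        on_stack.pop()
        return []

    for txn in list(graph):
        cycle = dfs(txn)
        if cycle:
            return cycle
    return []

# The slide's example: T1 waits for T2 at one site, T2 waits for T1 at another.
print(detect_global_deadlock([[("T1", "T2")], [("T2", "T1")]]))   # ['T1', 'T2']
```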

Atomicity & Durability: what is different?
Consider a shared-nothing parallel/distributed DB:
– Node 1 receives a transaction that updates relation P.
– Node 3 receives a transaction that updates P.
– Node 4 receives a transaction that reads P.
[Figure: the same four-node, four-partition setup as before.]
What if one node crashes? What if the network goes down?

Distributed Recovery
Two new issues:
– New kinds of failure, e.g., links and remote sites.
– If the "sub-transactions" of an Xact execute at different sites, all or none must commit. We need a commit protocol to achieve this!
A log is maintained at each site, as in a centralized DBMS, and commit-protocol actions are additionally logged.

2-Phase Commit (2PC)
When an Xact wants to commit:
1. The coordinator sends a prepare msg to each subordinate.
2. Each subordinate force-writes an abort or prepare log record and then sends a no or yes msg to the coordinator.
3. If the coordinator gets unanimous yes votes, it force-writes a commit log record and sends a commit msg to all subordinates. Otherwise, it force-writes an abort log record and sends an abort msg.
4. Subordinates force-write an abort/commit log record based on the msg they receive, then send an ack msg to the coordinator.
5. The coordinator writes an end log record after getting all acks.
[Figure: coordinator and subordinate nodes, each running a parallel-DB layer over a local DBMS, exchanging these messages over the network.]
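
A single-threaded, in-memory sketch of the coordinator side of these five steps. send, recv_vote, and recv_ack stand in for real network calls, and force_write for a forced (synchronous) log write; none of these are part of any real library.

```python
def force_write(log, record):
    log.append(record)                 # pretend this is flushed to stable storage

def coordinator_2pc(log, subordinates, send, recv_vote, recv_ack):
    # Phase 1: voting. Ask every subordinate to prepare and collect votes.
    for sub in subordinates:
        send(sub, "prepare")
    votes = [recv_vote(sub) for sub in subordinates]        # each 'yes' or 'no'

    # Phase 2: termination. Decide, force-write the decision, tell everyone.
    decision = "commit" if all(v == "yes" for v in votes) else "abort"
    force_write(log, decision)                              # commit/abort record
    for sub in subordinates:
        send(sub, decision)
    for sub in subordinates:
        recv_ack(sub)                                       # wait for all acks
    log.append("end")                                       # end record (not forced)
    return decision
```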

Example: 2PC (commit success)
[Message sequence diagram: the coordinator sends prepare to subordinate1 and subordinate2; each subordinate force-writes a prepare log record and replies yes; the coordinator force-writes a commit log record and sends commit to both; each subordinate force-writes a commit log record and replies ack; the coordinator then writes its end log record.]
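
The matching subordinate-side sketch, plus a tiny in-memory run of the success case shown on this slide. subordinate_handle is an invented helper, not part of any real system.

```python
def subordinate_handle(log, msg, can_commit=True):
    """Return the reply a subordinate sends for one coordinator message."""
    if msg == "prepare":
        if can_commit:
            log.append("prepare")      # force-written before voting yes
            return "yes"
        log.append("abort")            # force-written before voting no
        return "no"
    if msg in ("commit", "abort"):
        log.append(msg)                # force-write the coordinator's decision
        return "ack"
    raise ValueError(msg)

log1, log2 = [], []
votes = [subordinate_handle(lg, "prepare") for lg in (log1, log2)]   # ['yes', 'yes']
acks = [subordinate_handle(lg, "commit") for lg in (log1, log2)]     # ['ack', 'ack']
print(log1)   # ['prepare', 'commit'] -- matches the success run above
```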

Comments on 2PC
Two rounds of communication: first voting, then termination; both initiated by the coordinator.
Any site can decide to abort an Xact.
Every msg reflects a decision by the sender; to ensure that this decision survives failures, it is first recorded in the local log.
All commit-protocol log records for an Xact contain the Xact id and the coordinator id. The coordinator's abort/commit record also includes the ids of all subordinates.

Restart after a failure at a site
If we have a commit or abort log record for Xact T, but no end record, we must redo/undo T.
– If this site is the coordinator for T, keep sending commit/abort msgs to the subordinates until acks are received.
If we have a prepare log record for Xact T, but no commit/abort record, this site is a subordinate for T.
– Repeatedly contact the coordinator to find the status of T, then write a commit/abort log record; redo/undo T; and write an end log record.
If we don't have even a prepare log record for T, unilaterally abort and undo T.
– This site may be the coordinator! If so, subordinates may send msgs.
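
A sketch of these restart rules as a single decision function; the record names and the returned action strings are illustrative, not a real recovery API.

```python
def restart_action(records_for_T):
    """Decide what to do for Xact T given this site's commit-protocol records."""
    recs = set(records_for_T)
    if "end" in recs:
        return "nothing to do"          # T completed the protocol at this site
    if "commit" in recs or "abort" in recs:
        # Redo/undo T; if this site coordinated T, keep resending the decision
        # to the subordinates until all acks arrive, then write the end record.
        return "redo/undo T; if coordinator, resend decision until acked"
    if "prepare" in recs:
        # Subordinate case: the outcome is unknown, so ask the coordinator.
        return "ask coordinator for T's status, then commit/abort and write end"
    # Not even a prepare record: nothing was promised, so abort unilaterally.
    return "unilaterally abort and undo T"

print(restart_action(["prepare"]))      # ask coordinator for T's status, ...
```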

Coordinator Failures: Blocking
If the coordinator for Xact T fails, subordinates that have voted yes cannot decide whether to commit or abort T until the coordinator recovers.
– T is blocked.
– Even if all subordinates know each other (extra overhead in the prepare msg), they remain blocked unless one of them voted no.

Link and Remote Site Failures
If a remote site does not respond during the commit protocol for Xact T, either because the site failed or the link failed:
– If the current site is the coordinator for T, it should abort T.
– If the current site is a subordinate and has not yet voted yes, it should abort T.
– If the current site is a subordinate and has voted yes, it is blocked until the coordinator responds.
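
The same three cases written as a tiny decision table; role and voted_yes describe this site's state for T and are assumed inputs.

```python
def on_no_response(role, voted_yes=False):
    """What this site should do when the other party stops responding about T."""
    if role == "coordinator":
        return "abort T"                             # no commit record exists yet
    if not voted_yes:
        return "abort T"                             # subordinate, still free to abort
    return "block until the coordinator responds"    # voted yes: outcome unknown

print(on_no_response("subordinate", voted_yes=True))
```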

Observations on 2PC
Ack msgs are used to let the coordinator know when it can "forget" an Xact; until it receives all acks, it must keep T in the Xact Table.
If the coordinator fails after sending prepare msgs but before writing a commit/abort log record, it aborts the Xact when it comes back up.
If a subtransaction does no updates, its commit or abort status is irrelevant.

2PC with Presumed Abort
When the coordinator aborts T, it undoes T and removes it from the Xact Table immediately.
– It does not wait for acks; it "presumes abort" if an Xact is not in the Xact Table. The names of the subordinates are not recorded in the abort log record.
Subordinates do not send acks on abort.
If a subxact does no updates, it responds to the prepare msg with reader instead of yes/no. The coordinator subsequently ignores readers.
If all subxacts are readers, the 2nd phase is not needed.
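
A sketch of the coordinator under presumed abort, showing only how it differs from the basic coordinator sketch earlier: readers drop out after phase 1, the commit path still records subordinate ids and collects acks, and the abort path logs less and forgets the Xact at once. All helper names are again invented.

```python
def presumed_abort_2pc(log, xact_table, txn, subordinates, send, recv_vote, recv_ack):
    for sub in subordinates:
        send(sub, "prepare")
    votes = {sub: recv_vote(sub) for sub in subordinates}   # 'yes', 'no', or 'reader'

    writers = [s for s, v in votes.items() if v != "reader"]
    if not writers:
        return "commit"                        # all readers: no second phase at all

    if all(votes[s] == "yes" for s in writers):
        log.append(("commit", txn, writers))   # force-written, subordinate ids kept
        for s in writers:
            send(s, "commit")
        for s in writers:
            recv_ack(s)                        # acks still collected on commit
        log.append(("end", txn))
        return "commit"

    log.append(("abort", txn))                 # no subordinate ids, no acks awaited
    for s in writers:
        send(s, "abort")
    xact_table.pop(txn, None)                  # forget txn now: absence means abort
    return "abort"
```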