ARIES, Concluded and Distributed Databases: R* Zachary G. Ives University of Pennsylvania CIS 650 – Implementing Data Management Systems October 2, 2008.

Some content on 2-phase commit courtesy of Ramakrishnan and Gehrke.

Administrivia

Next reading assignment:
- Principles of Data Integration, Chapter 3.3
- No review required for this
- Review Mariposa & R* for Tuesday

ARIES: Basic Data Structures

[Diagram] In the DB: data pages, each with a pageLSN; the LOG, with a master record. In RAM: the Xact Table (lastLSN, status), the Dirty Page Table (recLSN), and flushedLSN. Each log record carries prevLSN, XID, type, length, pageID, offset, before-image, and after-image.

Normal Execution of a Transaction

- A series of reads & writes, followed by commit or abort
- We will assume that a write is atomic on disk
  - In practice, additional details are needed to deal with non-atomic writes
- Strict 2PL
- STEAL, NO-FORCE buffer management, with Write-Ahead Logging

Checkpointing

Periodically, the DBMS creates a checkpoint to minimize recovery time in the event of a system crash. Write to the log:
- begin_checkpoint record: marks when the checkpoint began
- end_checkpoint record: contains the current Xact table and dirty page table

A "fuzzy checkpoint":
- Other Xacts continue to run, so these tables are accurate only as of the time of the begin_checkpoint record
- No attempt is made to force dirty pages to disk; the effectiveness of a checkpoint is limited by the oldest unwritten change to a dirty page (so it's a good idea to periodically flush dirty pages to disk!)
- Store the LSN of the checkpoint record in a safe place (the master record)

Simple Transaction Abort (1/2)

For now, consider an explicit abort of a Xact (no crash involved). We want to "play back" the log in reverse order, UNDOing updates:
- Get the lastLSN of the Xact from the Xact table
- Follow the chain of log records backward via the prevLSN field
- When do we quit?
- Before starting UNDO, write an Abort log record (needed to recover from a crash during UNDO!)

Abort (2/2)

- To perform UNDO, we must have a lock on the data. No problem: no one else can be locking it.
- Before restoring the old value of a page, write a CLR: we continue logging while we UNDO!
  - A CLR has one extra field, undoNextLSN: it points to the next LSN to undo (i.e., the prevLSN of the record we're currently undoing)
  - CLRs are never undone (but they might be redone when repeating history: this guarantees atomicity!)
- At the end of UNDO, write an "end" log record
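A minimal sketch of the abort procedure, assuming log records are dicts with the fields named on the slides (`prevLSN`, `undoNextLSN`, a `before` image); `abort_xact` and the record shapes are illustrative, not from any real system.

```python
def abort_xact(log, pages, last_lsn):
    """Explicitly abort a Xact: walk its prevLSN chain backward,
    writing a CLR before each value is restored."""
    def append(rec):
        log.append(rec)
        return len(log) - 1

    append({"type": "abort"})      # first, log that the abort has started
    lsn = last_lsn                 # lastLSN comes from the Xact table
    while lsn is not None:         # quit at the start of the chain
        rec = log[lsn]
        if rec["type"] == "update":
            # The CLR points *past* the record being undone via undoNextLSN
            append({"type": "CLR", "undoNextLSN": rec["prevLSN"],
                    "pageID": rec["pageID"]})
            pages[rec["pageID"]] = rec["before"]  # restore the old value
        lsn = rec["prevLSN"]
    append({"type": "end"})        # undo complete
```

Because every restore is preceded by a CLR, a crash mid-abort leaves enough in the log for recovery to pick up where the abort left off.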

Transaction Commit

- Write a commit record to the log
- All log records up to the Xact's lastLSN are flushed
  - Guarantees that flushedLSN >= lastLSN
  - Note that log flushes are sequential, synchronous writes to disk
  - Many log records fit per log page
- Commit() returns
- Write an end record to the log
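The commit rule above can be sketched as follows. The `Log` class and its methods are stand-ins for the real log manager; the key invariant is that `commit()` may only return once flushedLSN has reached the commit record's LSN.

```python
class Log:
    def __init__(self):
        self.records = []
        self.flushedLSN = -1  # LSN of the last record durably on disk

    def append(self, rec):
        self.records.append(rec)
        return len(self.records) - 1

    def flush_up_to(self, lsn):
        # Models a sequential, synchronous write of all records <= lsn.
        self.flushedLSN = max(self.flushedLSN, lsn)

def commit(log, xid):
    last = log.append({"type": "commit", "xid": xid})
    log.flush_up_to(last)            # force the log: flushedLSN >= lastLSN
    assert log.flushedLSN >= last    # now the commit is durable
    # ... commit() can return to the caller here ...
    log.append({"type": "end", "xid": xid})  # end record need not be forced
```

The end record is appended lazily; only the commit record itself must be forced before returning.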

Crash Recovery: Big Picture

- Start from a checkpoint (found via the master record)
- Three phases:
  1. Analysis: figure out which Xacts committed since the checkpoint, and which failed
  2. REDO all actions (repeat history)
  3. UNDO the effects of failed Xacts

[Diagram] A timeline of the log: Analysis runs forward from the last checkpoint to the crash; REDO runs forward from the smallest recLSN in the dirty page table after Analysis; UNDO runs backward from the crash to the oldest log record of a Xact active at the crash.

Recovery: The Analysis Phase

- Reconstruct state at the checkpoint via the end_checkpoint record
- Scan the log forward from the checkpoint:
  - End record: remove the Xact from the Xact table (no longer active)
  - Other records: add the Xact to the Xact table, set lastLSN=LSN, change Xact status on commit
  - Update record: if page P is not in the Dirty Page Table, add P to the D.P.T. and set its recLSN=LSN
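The Analysis scan above can be sketched directly; the record shapes (dicts with `type`, `xid`, `pageID`) and the function name are assumptions for illustration.

```python
def analysis(log, start_lsn, xact_table, dirty_pages):
    """Scan forward from the checkpoint, rebuilding the Xact table
    and Dirty Page Table as of the crash."""
    for lsn in range(start_lsn, len(log)):
        rec = log[lsn]
        xid = rec.get("xid")
        if rec["type"] == "end":
            xact_table.pop(xid, None)  # Xact no longer active
            continue
        if xid is not None:
            entry = xact_table.setdefault(xid, {"status": "U"})
            entry["lastLSN"] = lsn
            if rec["type"] == "commit":
                entry["status"] = "C"  # committing; awaiting its end record
        if rec["type"] == "update" and rec["pageID"] not in dirty_pages:
            dirty_pages[rec["pageID"]] = lsn  # recLSN = first dirtying LSN
    return xact_table, dirty_pages
```

The surviving entries in `xact_table` with status "U" are the losers that UNDO must roll back; the smallest recLSN in `dirty_pages` is where REDO will start.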

Recovery: The REDO Phase

Repeat history to reconstruct state at the crash:
- Reapply all updates (even of aborted Xacts!), and redo CLRs
- This puts us in a state where we know UNDO can do the right thing

Scan forward from the log record containing the smallest recLSN in the D.P.T. For each CLR or update log record with a given LSN, REDO the action unless:
- the affected page is not in the Dirty Page Table, or
- the affected page is in the D.P.T. but has recLSN > LSN, or
- pageLSN (in the DB) >= LSN

To REDO an action:
- Reapply the logged action
- Set pageLSN to LSN. Don't log this!
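The three skip conditions translate into code almost line for line. A sketch under the same assumed record shapes as before; `apply` stands in for actually reapplying the logged action to the page.

```python
def redo(log, dirty_pages, page_lsn, apply):
    """Repeat history: scan forward from the smallest recLSN and
    reapply every update/CLR not provably already on disk."""
    start = min(dirty_pages.values())
    for lsn in range(start, len(log)):
        rec = log[lsn]
        if rec["type"] not in ("update", "CLR"):
            continue
        p = rec["pageID"]
        if p not in dirty_pages:          # page made it to disk before crash
            continue
        if dirty_pages[p] > lsn:          # page dirtied only after this record
            continue
        if page_lsn.get(p, -1) >= lsn:    # on-disk page already reflects it
            continue
        apply(rec)                        # reapply the logged action...
        page_lsn[p] = lsn                 # ...and set pageLSN (not logged!)
```

Note that only the last check (pageLSN vs. LSN) requires fetching the page; the first two are answered from the Dirty Page Table alone, which is what makes them worth testing first.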

Recovery: The UNDO Phase

ToUndo = { l | l is the lastLSN of a "loser" Xact }

Repeat until ToUndo is empty:
- Choose the largest LSN among ToUndo
- If this LSN is a CLR and undoNextLSN == NULL: write an End record for this Xact
- If this LSN is a CLR and undoNextLSN != NULL: add undoNextLSN to ToUndo
- Else this LSN is an update: undo the update, write a CLR, and add prevLSN to ToUndo
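The loop above, sketched in code. Field names (`prevLSN`, `undoNextLSN`) follow the slides; the rest is illustrative. One small liberty: when an undone update has no prevLSN, the sketch writes the End record immediately instead of queueing a null LSN, which collapses the same two cases the slide handles via the CLR rule.

```python
def undo(log, losers, pages):
    """UNDO phase: losers maps each loser XID to its lastLSN."""
    def append(rec):
        log.append(rec)
        return len(log) - 1

    to_undo = set(losers.values())
    while to_undo:
        lsn = max(to_undo)               # always the largest LSN first
        to_undo.discard(lsn)
        rec = log[lsn]
        if rec["type"] == "CLR":
            if rec["undoNextLSN"] is None:
                append({"type": "end", "xid": rec["xid"]})
            else:
                to_undo.add(rec["undoNextLSN"])
        else:                            # an update: undo it, log a CLR
            pages[rec["pageID"]] = rec["before"]
            append({"type": "CLR", "xid": rec["xid"],
                    "undoNextLSN": rec["prevLSN"]})
            if rec["prevLSN"] is None:   # chain exhausted: this loser is done
                append({"type": "end", "xid": rec["xid"]})
            else:
                to_undo.add(rec["prevLSN"])
```

Taking the largest LSN each iteration interleaves the backward walks of all losers into a single reverse pass over the log.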

Example of Recovery

[Diagram] The log: begin_checkpoint; end_checkpoint; update: T1 writes P5; update: T2 writes P3; T1 abort; CLR: Undo T1 LSN 10; T1 End; update: T3 writes P1; update: T2 writes P5; CRASH, RESTART. In RAM: the Xact table (lastLSN, status), the Dirty Page Table (recLSN), flushedLSN, and the ToUndo set (followed via prevLSNs).

Example: Crash During Restart

[Diagram] The log: begin_checkpoint, end_checkpoint; update: T1 writes P5; update: T2 writes P3; T1 abort; CLR: Undo T1 LSN 10, T1 End; update: T3 writes P1; update: T2 writes P5; CRASH, RESTART; CLR: Undo T2 LSN 60; CLR: Undo T3 LSN 50, T3 end; CRASH, RESTART; CLR: Undo T2 LSN 20, T2 end. In RAM: the Xact table (lastLSN, status), the Dirty Page Table (recLSN), flushedLSN, and the ToUndo set (followed via undoNextLSN).

Additional Crash Issues

- What happens if the system crashes during Analysis?
- How do you limit the amount of work in REDO?
  - Flush asynchronously in the background
  - Watch "hot spots"!
- How do you limit the amount of work in UNDO?
  - Avoid long-running Xacts

Summary of Logging/Recovery

- The Recovery Manager guarantees Atomicity & Durability
- WAL allows STEAL/NO-FORCE without sacrificing correctness
- LSNs identify log records; records are linked into backward chains per transaction (via prevLSN)
- pageLSN allows comparison of a data page against the log

Summary, Continued

- Checkpointing: a quick way to limit the amount of log to scan on recovery
- Recovery works in 3 phases:
  - Analysis: forward from the checkpoint
  - Redo: forward from the oldest recLSN
  - Undo: backward from the end to the first LSN of the oldest Xact alive at the crash
- Upon Undo, write CLRs
- Redo "repeats history": this simplifies the logic!

Distributed Databases

- Goal: provide the abstraction of a single database, but with the data distributed across different sites
- This poses a new set of challenges:
  - Source tables (or subsets of tables) may be located on different machines
  - There is a data transfer cost over the network
  - Different CPUs have different amounts of resources
  - Available resources change during optimization time and run time
- Today:
  - R*: the first "real" distributed DBMS prototype (Distributed INGRES never actually ran); the focus was a LAN with a small number of sites
  - Mariposa: an attempt to distribute across the wide area

Issues in Distributed Databases

- How do we place the data?
  - R*: this is done by humans
  - Mariposa: this is done via an economic model
- What new capabilities do we have?
  - R*: SHIP, 2-phase dependent join, bloomjoin, …
  - Mariposa: ship processing to another node
- Challenges in optimization:
  - R*: a more complex cost model, more execution options
  - Mariposa: bidding on computation and other resources

System-R* Optimization

Focus: distributed joins
1. Can ship a table and then join it ("ship whole")
2. Can probe the inner table and return matches ("fetch matches")

Their measurements favored #1. Why? Why do they require:
- cardinality of outer < ½ × (# messages required to ship inner)
- join cardinality < inner cardinality

How can the 2nd case be improved, in the spirit of block NLJ?

Parallelism in Joins

- They assert it's better to ship from the outer relation to the site of the inner relation because of the potential for parallelism. Where?
- What changes if the inner relation is horizontally partitioned across multiple sites (i.e., relations are "striped")?
- They can also exploit the possibility of parallelism in sorting for a merge join. How?

Other Joins

- Ship the inner relation and then index it
- Two-phase semijoin
  - Very similar to "fetch matches"
  - Take S and T, sort them, and remove duplicates
  - Ship each to the opposite site; use it to fetch the tuples that match
  - Merge-join the matching tuples
- Bloomjoin
  - Generate a Bloom filter from S
  - Send it to the site of T, find matches
  - Return them to S, join

Bloom Filters (Bloom 1970)

- Use k hash functions and an m-bit vector
- For each tuple, hash its key x with each function h_i and set bit h_i(x) in the bit vector
- Probe the Bloom filter for x' by testing whether all bits h_i(x') are set
- After n values have been inserted, the probability that any given bit is still 0 is (1 - 1/m)^(kn), so the false-positive probability of a probe is approximately (1 - (1 - 1/m)^(kn))^k

[Diagram] An m-bit vector with the four positions h_1(x), h_2(x), h_3(x), h_4(x) set.
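A minimal Bloom filter matching the description above: k hash functions over an m-bit vector. Deriving the k positions from salted SHA-256 digests is an illustrative choice, not anything prescribed by the R* or Bloom papers.

```python
import hashlib

class Bloom:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = 0          # the m-bit vector, stored as one big int

    def _positions(self, key):
        # k positions via k salted hashes of the key (an assumption here)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos          # set bit h_i(key)

    def might_contain(self, key):
        # True iff all h_i(key) bits are set; false positives are possible,
        # false negatives are not.
        return all(self.bits >> pos & 1 for pos in self._positions(key))
```

For a bloomjoin, the site of S builds the filter over its join keys and ships only the m-bit vector; the site of T probes it and ships back just the tuples that might match.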

Joins over a "High-Speed" Network

[Chart] Compares join strategies: Semijoin, R* + temp indices, R* (distributed), R* (local), and Bloomjoin.

R* Optimization Assessment

- Distributed optimization is hard!
- They ignore load-balance issues, and messaging overhead is probably ~10%, but still…
- Shipping costs are difficult to assess, since they depend on precise cardinality results
- Optimizing the plan locally and using that as a model for distributed processing doesn't provide any optimality guarantees either: it doesn't account for parallelism

Updates in R* Require Two-Phase Commit (2PC)

- The site at which a transaction originates is the coordinator; the other sites at which it executes are subordinates
- Two rounds of communication, initiated by the coordinator:
  - Voting: the coordinator sends prepare messages and waits for yes or no votes
  - Decision (or termination): the coordinator sends commit or rollback messages and waits for acks
- Any site can decide to abort a transaction!

Steps in 2PC

When a transaction wants to commit:
- The coordinator sends a prepare message to each subordinate
- Each subordinate force-writes an abort or prepare log record and then sends a no (abort) or yes (prepare) message to the coordinator
- The coordinator considers the votes:
  - If the yes votes are unanimous, it force-writes a commit log record and sends a commit message to all subordinates
  - Else, it force-writes an abort log record and sends an abort message
- Subordinates force-write abort/commit log records based on the message they get, then send an ack message to the coordinator
- The coordinator writes an end log record after getting all acks
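The coordinator side of the steps above, as a sketch. `force_write` and `send` are stand-ins for durable logging and messaging, and passing the votes in as a mapping replaces the real receive loop; none of these names come from R* itself.

```python
def coordinate(subordinates, votes, force_write, send):
    """One round of 2PC from the coordinator's side.
    votes: mapping subordinate -> 'yes' or 'no' (stand-in for receiving)."""
    for s in subordinates:
        send(s, "prepare")                 # round 1: solicit votes
    decision = ("commit"
                if all(votes[s] == "yes" for s in subordinates)
                else "abort")
    force_write(decision)                  # decision is durable before any msg
    for s in subordinates:
        send(s, decision)                  # round 2: subordinates force-write
    # ... wait for all acks (elided), then:
    force_write("end")                     # done; can forget this transaction
    return decision
```

The force-write of the decision before any decision message is the crux: if the coordinator crashes afterward, its log, not the subordinates' memories, determines the outcome.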

Illustration of 2PC

[Diagram] Coordinator and two subordinates. The coordinator force-writes a begin log entry and sends "prepare"; each subordinate force-writes a prepared log entry and sends "yes"; the coordinator force-writes a commit log entry and sends "commit"; each subordinate force-writes a commit log entry and sends "ack"; the coordinator then writes an end log entry.

Comments on 2PC

- Every message reflects a decision by the sender; to ensure that this decision survives failures, it is first recorded in the local log
- All log records for a transaction contain its ID and the coordinator's ID
- The coordinator's abort/commit record also includes the IDs of all subordinates
- Theorem: there exists no distributed commit protocol that can recover without communicating with other processes, in the presence of multiple failures!

What if a Site Fails in the Middle?

- If we have a commit or abort log record for transaction T, but no end record, we must redo/undo T
  - If this site is the coordinator for T, keep sending commit/abort messages to the subordinates until acks have been received
- If we have a prepare log record for transaction T, but no commit/abort record, this site is a subordinate for T
  - Repeatedly contact the coordinator to find the status of T; then write a commit/abort log record, redo/undo T, and write an end log record
- If we don't have even a prepare log record for T, unilaterally abort and undo T
  - This site may be the coordinator! If so, subordinates may send messages and may also need to be undone

Blocking for the Coordinator

- If the coordinator for transaction T fails, subordinates who have voted yes cannot decide whether to commit or abort T until the coordinator recovers: T is blocked
- Even if all subordinates know each other (extra overhead in the prepare message), they remain blocked unless one of them voted no

Link and Remote Site Failures

If a remote site does not respond during the commit protocol for transaction T, either because the site failed or the link failed:
- If the current site is the coordinator for T, it should abort T
- If the current site is a subordinate and has not yet voted yes, it should abort T
- If the current site is a subordinate and has voted yes, it is blocked until the coordinator responds!

Observations on 2PC

- Ack messages are used to let the coordinator know when it's done with a transaction; until it receives all acks, it must keep T in the transaction-pending table
- If the coordinator fails after sending prepare messages but before writing a commit/abort log record, it aborts the transaction when it comes back up

R* Wrap-Up

- One of the first systems to address both distributed query processing and distributed updates
- The focus was on local-area networks and a small number of sites
- The next system, Mariposa, focuses on environments more like the Internet…