1 CM20145 Recovery + Intro. to Concurrency Dr Alwyn Barry Dr Joanna Bryson

2 Last Time… Transaction Concepts: ACID; Possible States. Schedules. Serializability: Conflict, View, Others. Testing for Serializability: Precedence Graphs (Conflict, View). Now: Recovery & Some Concurrency

3 Overview Recovery Cascading Rollbacks Storage & Data Access Algorithms for: Shadow paging, Log-based recovery Deferred & immediate DB modifications. Checkpoints Intro. to Concurrency Introduction to Locking Pitfalls of Locking The Two-Phase Locking Protocol Weaker Levels of Consistency

4 Recovery Algorithms Recovery algorithms are techniques to ensure database consistency and transaction atomicity and durability despite failures. Recovery algorithms have two parts: 1. Actions taken during normal transaction processing to ensure enough information exists to recover from failures. 2. Actions taken after a failure to recover the database contents to a state that ensures atomicity, consistency and durability.

5 Failure – Classifications Transaction failure: Logical errors: the transaction cannot complete due to some internal error condition. System errors: the database system must terminate an active transaction due to an error condition (e.g., deadlock – covered in Lecture 15). System crash: a power failure or other hardware or software failure causes the system to crash. Fail-stop assumption: non-volatile storage contents are assumed not to be corrupted by a system crash. Database systems have numerous integrity checks to prevent corruption of disk data. Disk failure: a head crash or similar disk failure destroys all or part of disk storage. Destruction is assumed to be detectable: disk drives use checksums to detect failures.

6 Recoverability Recoverable schedule: if a transaction Tj reads a data item previously written by a transaction Ti, then the commit operation of Ti appears before the commit operation of Tj. The schedule above is not recoverable if T9 commits immediately after the read. If T8 should abort, T9 would have read (and possibly shown to the user) an inconsistent database state. Hence the database must ensure that schedules are recoverable. How do we address failures when we are running concurrent transactions? ©Silberschatz, Korth and Sudarshan. Modifications & additions by J Bryson
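The recoverability condition above can be checked mechanically. A minimal Python sketch, assuming a schedule is represented as a list of (transaction, action, item) tuples; the representation and function names are illustrative, not from the lecture:

```python
def is_recoverable(schedule):
    """A schedule is recoverable if, whenever Tj reads an item last
    written by Ti, Ti's commit appears before Tj's commit."""
    last_writer = {}     # item -> transaction that most recently wrote it
    reads_from = set()   # (reader, writer) dependencies
    commit_pos = {}      # transaction -> position of its commit
    for pos, (txn, action, item) in enumerate(schedule):
        if action == "write":
            last_writer[item] = txn
        elif action == "read" and last_writer.get(item) not in (None, txn):
            reads_from.add((txn, last_writer[item]))
        elif action == "commit":
            commit_pos[txn] = pos
    for reader, writer in reads_from:
        # The writer must commit, and must commit before the reader does.
        if writer not in commit_pos:
            return False
        if reader in commit_pos and commit_pos[writer] > commit_pos[reader]:
            return False
    return True

# The T8/T9 scenario: T9 reads A written by T8, then commits first.
bad = [("T8", "write", "A"), ("T9", "read", "A"),
       ("T9", "commit", None), ("T8", "commit", None)]
good = [("T8", "write", "A"), ("T9", "read", "A"),
        ("T8", "commit", None), ("T9", "commit", None)]
```

Running `is_recoverable` on these two schedules flags the first and accepts the second, matching the slide's argument about T8 and T9.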

7 Cascading Rollbacks Cascading rollback: a single transaction failure leads to a series of transaction rollbacks. Consider the following schedule, where none of the transactions has yet committed (so the schedule is recoverable). If T10 fails, T11 and T12 must also be rolled back. This can lead to the undoing of a significant amount of work.

8 Cascadeless Schedules Cascadeless schedules: schedules in which cascading rollbacks cannot occur. For each pair of transactions Ti and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the read operation of Tj. Every cascadeless schedule is also recoverable.

9 Overview Recovery Cascading Rollbacks Storage & Data Access Algorithms for: Shadow paging, Log-based recovery Deferred & immediate DB modifications. Checkpoints Intro. to Concurrency Introduction to Locking Pitfalls of Locking The Two-Phase Locking Protocol Weaker Levels of Consistency

10 Storage Hierarchy (Lecture 9)

11 Storage Structure Volatile storage: Does not survive system crashes. Examples: main memory, cache memory. Nonvolatile storage: Survives system crashes. Examples: disk, tape, flash memory, non-volatile (battery backed up) RAM. Stable storage: A mythical form of storage that survives all failures. Approximated by maintaining multiple copies on distinct nonvolatile media.

12 Stable-Storage Implementation Maintain multiple copies of each block on separate disks (& locations…) Failure during data transfer can still result in inconsistent copies. Block transfer can result in: Successful completion, Partial failure – destination block has incorrect information, or Total failure – destination block was never updated.

13 Data Access Physical blocks: blocks residing on the disk. Buffer blocks: blocks residing temporarily in main memory. Block movements between disk and main memory are initiated through two operations: input(B) transfers the physical block B to main memory; output(B) transfers the buffer block B to the disk, replacing the appropriate physical block there. Each transaction Ti has its private work area in which local copies of all data items accessed and updated by it are kept. Ti's local copy of a data item X is called xi. We assume, for simplicity, that each data item fits in, and is stored inside, a single block.

14 Sample Data Access Diagram [Figure: physical blocks A and B on disk; buffer blocks A and B in memory; private work areas of T1 (x1, y1) and T2 (x2); arrows showing input(A), output(B), read(X) and write(Y).]

15 Data Access (Cont.) A transaction transfers data items between the system buffer blocks and its private work area. Transactions perform read(X) while accessing X for the first time; all subsequent accesses are to the local copy. After the last access, the transaction executes write(X). output(BX) need not immediately follow write(X): the system can perform the output operation when it deems fit. Reminder: volatile memory is faster, but more vulnerable!
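The read/write vs. input/output distinction can be sketched in a few lines of Python. This is a toy model assuming one data item per block; the dictionaries and function names are illustrative, not a real DBMS API:

```python
disk = {"A": 100, "B": 50}   # physical blocks
buffer = {}                  # buffer blocks in main memory
work_area = {}               # transaction-private copies (the x_i)

def input_block(b):
    buffer[b] = disk[b]            # bring the physical block into the buffer

def output_block(b):
    disk[b] = buffer[b]            # flush the buffer block back to disk

def read(t, x):
    if x not in buffer:
        input_block(x)             # first access triggers input(B_X)
    work_area[(t, x)] = buffer[x]  # copy into T's private work area

def write(t, x):
    buffer[x] = work_area[(t, x)]  # update the buffer; output may come later

# T1 reads A into its work area, updates the local copy, and writes it
# back to the buffer; the disk copy is unchanged until output_block runs.
read("T1", "A")
work_area[("T1", "A")] -= 50
write("T1", "A")
assert buffer["A"] == 50 and disk["A"] == 100
output_block("A")
assert disk["A"] == 50
```

The final two assertions mirror the slide's point: write(X) changes only the buffer block, and the physical block changes only when the system chooses to perform output.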

16 Protecting Storage (FYI, not for exam) During data transfer, keep two copies of each block: 1. Write the information onto the first physical block. 2. When the first write successfully completes, write the same information onto the second physical block. 3. The output is completed only after the second write successfully completes. To recover from failure, first find inconsistent blocks. Expensive solution: compare the two copies of every disk block. Better solution: record in-progress disk writes on non-volatile storage (non-volatile RAM or a special area of disk); use this information during recovery to find blocks that may be inconsistent, and only compare copies of these. This is used in hardware RAID systems. Then: if either copy of an inconsistent block is detected to have an error (bad checksum), overwrite it with the other copy. If both have no error, but are different, overwrite the second block with the first block.
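The two-copy write and recovery rules can be sketched directly. A minimal illustration, assuming a pair of in-memory dictionaries stands in for the two disks and a caller-supplied checksum test; all names are illustrative:

```python
def stable_write(copies, block_id, data):
    """Two-copy write protocol: copy 1 first, then copy 2."""
    copies[0][block_id] = data   # 1. write the first physical block
    # (a crash here leaves copy 1 new and copy 2 old: inconsistent,
    #  but detectable and repairable on recovery)
    copies[1][block_id] = data   # 2. then write the second block

def recover_block(copies, block_id, checksum_ok):
    """Recovery rule for one block found to be inconsistent."""
    c1 = copies[0].get(block_id)
    c2 = copies[1].get(block_id)
    if not checksum_ok(c1):
        copies[0][block_id] = c2   # bad first copy: take the second
    elif not checksum_ok(c2) or c1 != c2:
        copies[1][block_id] = c1   # bad or stale second copy: take the first

# Simulate a crash between the two writes of an update:
copies = [{}, {}]
stable_write(copies, "B", "v1")    # both copies hold v1
copies[0]["B"] = "v2"              # crash occurred after rewriting copy 1 only
recover_block(copies, "B", checksum_ok=lambda d: d is not None)
assert copies[1]["B"] == "v2"      # recovery finishes the interrupted write
```

The design point matches the slide: because the second copy is never written before the first completes, at least one copy is always intact, so recovery can always repair the pair.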

17 Overview Recovery Cascading Rollbacks Storage & Data Access Algorithms for: Shadow paging, Log-based recovery Deferred & immediate DB modifications. Checkpoints Intro. to Concurrency Introduction to Locking Pitfalls of Locking The Two-Phase Locking Protocol Weaker Levels of Consistency

18 Recovery and Atomicity Modifying the database without ensuring that the transaction will commit may leave the database in an inconsistent state. To ensure atomicity despite failures, we first output information describing the modifications to stable storage, without modifying the database itself. Two approaches shown here: shadow paging (naïve), and log-based recovery. We'll assume that transactions run serially (the book goes further if you're curious).

19 Shadow Database Assume only one transaction is active at a time. db_pointer always points to the current consistent copy of the database. Updates are made on a copy of the database. The pointer is moved to the updated copy after the transaction reaches partial commit & the pages are written. On transaction failure, the old consistent copy pointed to by db_pointer is used, and the shadow copy is deleted. Assumes disks don't fail. Useful for text editors, but extremely inefficient for a large database -- executing a single transaction requires copying the entire database!

20 Log-Based Recovery A log is kept on stable storage. A log is a sequence of log records, recording the update activities on the database. When transaction Ti starts, it registers itself by writing a <Ti start> log record. Before Ti executes write(X), a log record <Ti, X, V1, V2> is written, where V1 is the value of X before the write, and V2 is the value to be written to X. When Ti finishes its last statement, the log record <Ti commit> is written. Assume here that log records are written directly to stable storage (that is, they are not buffered). Two approaches using logs: deferred database modification; immediate database modification.

21 Deferred Database Modification The deferred database modification scheme records all modifications to the log, but defers all writes to after partial commit. A transaction starts by writing a <Ti start> record to the log. A write(X) operation results in a log record <Ti, X, V> being written, where V is the new value for X. Note: the old value is not needed for this scheme. The real write is not performed on X at this time, but is deferred. When Ti partially commits, <Ti commit> is written to the log. Finally, the log records are used to actually execute the previously deferred writes. Assumes that transactions execute serially.
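The deferred scheme above can be sketched in a few lines. A toy Python model assuming serial transactions and the <Ti start> / <Ti, X, V> / <Ti commit> record shapes; the tuple encoding and initial values are illustrative:

```python
db = {"A": 1000, "B": 2000}
log = []

def start(t):
    log.append(("start", t))

def write(t, x, new_value):
    log.append(("update", t, x, new_value))   # only the new value is logged

def commit(t):
    log.append(("commit", t))
    # After partial commit, replay the deferred writes from the log.
    for rec in log:
        if rec[0] == "update" and rec[1] == t:
            _, _, x, v = rec
            db[x] = v

start("T0")
write("T0", "A", 950)        # the database is untouched until commit...
assert db["A"] == 1000
commit("T0")                 # ...then the logged writes are executed
assert db["A"] == 950
```

The two assertions capture the defining property: before <T0 commit> is written, the database still holds the old value, so a crash at that point requires no undo.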

22 Deferred DB Modification (2) During recovery, a transaction needs to be redone if and only if both <Ti start> and <Ti commit> are there in the log. Redoing a transaction Ti (redo(Ti)) sets the value of all data items updated by the transaction to the new values. Crashes can occur while: the transaction is executing the original updates, or while recovery action is being taken. Example: T0 and T1 (T0 executes before T1): T0: read(A); A := A - 50; write(A); read(B); B := B + 50; write(B). T1: read(C); C := C - 100; write(C).

23 Deferred DB Modification (3) Consider the log at three instances of time. If the log on stable storage at the time of the crash contains: (a) neither <T0 commit> nor <T1 commit>: no redo actions need to be taken. (b) <T0 commit> but not <T1 commit>: redo(T0) must be performed, since <T0 commit> is present. (c) both: redo(T0) must be performed, followed by redo(T1), since <T0 commit> and <T1 commit> are present.
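The recovery rule (redo Ti iff both <Ti start> and <Ti commit> are in the log) is easy to express directly. A minimal sketch using the same tuple encoding as before; the initial values of A, B and C are made up for illustration:

```python
def recover(db, log):
    """Deferred-modification recovery: redo a transaction iff both its
    start and commit records survived in the log."""
    started = {r[1] for r in log if r[0] == "start"}
    committed = {r[1] for r in log if r[0] == "commit"}
    for rec in log:
        if rec[0] == "update" and rec[1] in started and rec[1] in committed:
            _, t, x, v = rec
            db[x] = v      # redo(Ti): install the logged new value

# Case (b) from the slide: T0 committed, T1 had only started at the crash.
db = {"A": 1000, "B": 2000, "C": 700}
log = [("start", "T0"), ("update", "T0", "A", 950),
       ("update", "T0", "B", 2050), ("commit", "T0"),
       ("start", "T1"), ("update", "T1", "C", 600)]
recover(db, log)
assert db == {"A": 950, "B": 2050, "C": 700}   # T1's write is not redone
```

Note that redo is idempotent here: running `recover` again yields the same state, which is why a crash during recovery itself is harmless under this scheme.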

24 Immediate DB Modification The immediate database modification scheme allows database updates of an uncommitted transaction to be made as the writes are issued. Since undoing may be needed, update log records must contain both the old value and the new value. The update log record must be written before the database item is written. The log record must be output directly to stable storage. (Log record output can be postponed, so long as, prior to execution of an output(B) operation, all log records corresponding to items in B are flushed to stable storage.) Output of updated blocks can take place at any time before or after transaction commit. The order in which blocks are output can be different from the order in which they are written.
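Because uncommitted writes reach the database, recovery now needs an undo pass as well as a redo pass. A toy sketch assuming serial transactions and <Ti, X, Vold, Vnew> records; the encoding and values are illustrative:

```python
def write(db, log, t, x, new_value):
    log.append(("update", t, x, db[x], new_value))  # old value logged first
    db[x] = new_value                               # then the item is updated

def recover(db, log):
    committed = {r[1] for r in log if r[0] == "commit"}
    for rec in reversed(log):              # undo pass: backwards through log
        if rec[0] == "update" and rec[1] not in committed:
            _, t, x, old, new = rec
            db[x] = old                    # undo(Ti): restore old value
    for rec in log:                        # redo pass: forwards through log
        if rec[0] == "update" and rec[1] in committed:
            _, t, x, old, new = rec
            db[x] = new                    # redo(Ti): install new value

db = {"A": 1000, "B": 2000}
log = []
write(db, log, "T0", "A", 950)
log.append(("commit", "T0"))
write(db, log, "T1", "B", 2050)   # T1 never commits before the crash
recover(db, log)
assert db == {"A": 950, "B": 2000}   # T0 redone, T1's write undone
```

The backwards undo pass matters when a transaction updates the same item more than once: scanning in reverse guarantees the item ends at its oldest pre-transaction value.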

25 Overview Recovery Cascading Rollbacks Storage & Data Access Algorithms for: Shadow paging, Log-based recovery Deferred & Immediate DB modifications. Checkpoints Intro. to Concurrency Introduction to Locking Pitfalls of Locking The Two-Phase Locking Protocol Weaker Levels of Consistency

26 Checkpoints Problems with the log-based recovery procedure: 1. Searching the entire log is time-consuming. 2. We might unnecessarily redo transactions which have already output their updates to the database. We can streamline the recovery procedure by periodically performing checkpointing: 1. Output all log records currently residing in main memory onto stable storage. 2. Output all modified buffer blocks to the disk. 3. Write a <checkpoint> log record onto stable storage.

27 Checkpoints & Recovery We need consider only transactions that didn't commit before the checkpoint. A simple algorithm for serialized transactions: 1. Scan backwards from the end of the log to find the most recent <checkpoint> record. 2. Continue scanning backwards till a <Ti start> record is found. 3. We need only consider the part of the log following this start record. The earlier part of the log can be ignored during recovery, and can be erased whenever desired. 4. For all transactions with no <Ti commit>, execute undo(Ti). 5. Scanning forward in the log, for all transactions starting from Ti or later with a <Ti commit>, execute redo(Ti).
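Steps 1 to 3 of the algorithm above (finding the suffix of the log that recovery must examine) can be sketched as follows, again using the illustrative tuple encoding from the earlier sketches:

```python
def relevant_suffix(log):
    """Return the part of the log that checkpoint-based recovery must
    consider, assuming serial transactions."""
    # 1. Scan backwards for the most recent <checkpoint> record.
    cp = max(i for i, r in enumerate(log) if r[0] == "checkpoint")
    # 2. Continue backwards to the nearest <Ti start> record.
    start = cp
    while start > 0 and log[start][0] != "start":
        start -= 1
    # 3. Only this suffix matters; everything earlier can be erased.
    return log[start:]

log = [("start", "T1"), ("update", "T1", "A", 10), ("commit", "T1"),
       ("start", "T2"), ("checkpoint",), ("update", "T2", "B", 20)]
suffix = relevant_suffix(log)
assert suffix[0] == ("start", "T2")   # T1 can be ignored entirely
```

T1's records fall before the returned suffix, matching the slide's claim that transactions committed before the checkpoint (whose updates the checkpoint already forced to disk) play no part in recovery.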

28 Example of Checkpoints [Figure: timeline from checkpoint Tc to system failure Tf, showing transactions T1 to T4.] T1 can be ignored (its updates were already output to disk due to the checkpoint). T2 and T3 are redone. T4 is undone.

29 Overview Recovery Cascading Rollbacks Storage & Data Access Algorithms for: Shadow paging, Log-based recovery Deferred & Immediate DB modifications. Checkpoints Intro. to Concurrency Introduction to Locking Pitfalls of Locking The Two-Phase Locking Protocol Weaker Levels of Consistency

30 Concurrency Goal: to develop concurrency control protocols that will ensure serializability. These protocols impose a discipline that avoids non-serializable schedules. A common concurrency control protocol uses locks: while one transaction is accessing a data item, no other transaction can modify it. We require a transaction to lock the item before accessing it. Topic of Lecture 15! But we'll introduce locking now.

31 Lock-Based Protocols A lock is a mechanism to control concurrent access to a data item. Lock requests are made to the concurrency-control manager. A transaction can proceed only after its request is granted. Data items can be locked in two modes: 1. Exclusive (X) mode: the data item can be both read and written. An X-lock is requested using the lock-X instruction. 2. Shared (S) mode: the data item can only be read. An S-lock is requested using lock-S.

32 Lock-Based Protocols (2) Lock-compatibility matrix: a transaction may be granted a lock on an item if the requested lock is compatible with locks already held on the item by other transactions. Any number of transactions can hold shared locks on an item, but if any transaction holds an exclusive lock on the item, no other transaction may hold any lock on it. If a lock cannot be granted, the requesting transaction is made to wait till all incompatible locks held by other transactions have been released. The lock is then granted.
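The lock-compatibility matrix and grant rule can be written out directly. A minimal sketch; the lock-manager interface here is illustrative, not a real DBMS API:

```python
# S is compatible only with S; X is compatible with nothing.
COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
              ("X", "S"): False, ("X", "X"): False}

def can_grant(requested_mode, held_modes):
    """Grant iff the requested mode is compatible with every lock
    currently held on the item by other transactions."""
    return all(COMPATIBLE[(requested_mode, h)] for h in held_modes)

assert can_grant("S", ["S", "S"])   # any number of S-locks can coexist
assert not can_grant("X", ["S"])    # X must wait for all S-locks to go
assert not can_grant("S", ["X"])    # an X-lock excludes every other lock
assert can_grant("X", [])           # nothing held: grant immediately
```

The four assertions restate the matrix row by row; a real lock manager would additionally queue the waiting transaction and wake it when the incompatible locks are released.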

33 Lock-Based Protocols (3) Example of a transaction performing locking: T2: lock-S(A); read(A); unlock(A); lock-S(B); read(B); unlock(B); display(A+B). Locking as above is not sufficient to guarantee serializability: if A and B get updated between the reads of A and B and the display of their sum, that sum would be out of date. A locking protocol is a set of rules followed by all transactions while requesting and releasing locks. Locking protocols restrict the set of possible schedules.

34 Pitfalls of Lock-Based Protocols Consider the partial schedule: neither T3 nor T4 can make progress. Executing lock-S(B) causes T4 to wait for T3 to release its lock on B, while executing lock-X(A) causes T3 to wait for T4 to release its lock on A. Such a situation is called a deadlock. To handle a deadlock, one of T3 or T4 must be rolled back and its locks released.

35 Pitfalls of Locking (2) The potential for deadlock exists in most locking protocols. Deadlocks are a necessary evil. Starvation is also possible if the concurrency-control manager is badly designed. For example: A transaction may be waiting for an X-lock on an item, while a sequence of other transactions request and are granted an S-lock on the same item. The same transaction is repeatedly rolled back due to deadlocks. Concurrency-control managers can be designed to prevent starvation.

36 The Two-Phase Locking Protocol This is a protocol which ensures conflict-serializable schedules. Phase 1 (growing phase): the transaction may obtain locks, but may not release locks. Phase 2 (shrinking phase): the transaction may release locks, but may not obtain locks. The protocol assures serializability. It can be proved that the transactions can be serialized in the order of their lock points (i.e. the point where a transaction acquires its final lock).
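The two-phase rule can be enforced with a single flag per transaction: once any lock is released, no further lock may be acquired. A minimal sketch; the class and its interface are illustrative:

```python
class TwoPhaseTxn:
    """Tracks a transaction's locks and rejects any acquisition
    attempted after the shrinking phase has begun."""
    def __init__(self):
        self.held = set()
        self.shrinking = False     # flips to True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock after unlock")
        self.held.add(item)        # growing phase: acquiring is allowed

    def unlock(self, item):
        self.shrinking = True      # the lock point has passed
        self.held.remove(item)

t = TwoPhaseTxn()
t.lock("A"); t.lock("B")           # growing phase
t.unlock("A")                      # shrinking phase begins
try:
    t.lock("C")                    # illegal under two-phase locking
    violated = False
except RuntimeError:
    violated = True
assert violated
```

The lock point here is the call `t.lock("B")`: the serialization order of a set of 2PL transactions follows the order of these points, per the proof the slide mentions.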

37 The Two-Phase Locking Protocol (2) Two-phase locking does not ensure freedom from deadlocks. Cascading roll-back is possible under two-phase locking. It is avoided with strict two-phase locking: a transaction must hold all its exclusive locks till it commits/aborts. Rigorous two-phase locking is even stricter: all locks are held till commit/abort. This lets transactions be serialized in the order in which they commit.

38 Lock Conversions Two-phase locking with lock conversions. First phase: can acquire a lock-S on an item; can acquire a lock-X on an item; can convert a lock-S to a lock-X (upgrade). Second phase: can release a lock-S; can release a lock-X; can convert a lock-X to a lock-S (downgrade). This protocol assures serializability, but still relies on the programmer to insert the various locking instructions.

39 Overview Recovery Cascading Rollbacks Storage & Data Access Algorithms for: Shadow paging, Log-based recovery Deferred & Immediate DB modifications. Checkpoints Intro. to Concurrency Introduction to Locking Pitfalls of Locking The Two-Phase Locking Protocol Weaker Levels of Consistency

40 Weak Levels of Consistency Degree-two consistency differs from two-phase locking in that S-locks may be released at any time, and locks may be acquired at any time; X-locks must be held till the end of the transaction. Serializability is not guaranteed: the programmer must ensure that no erroneous database state will occur. Cursor stability: for reads, each tuple is locked, read, and the lock is immediately released; X-locks are held till the end of the transaction. A special case of degree-two consistency. FYI only – you aren't responsible for this slide's content.

41 Weak Consistency in SQL SQL allows non-serializable executions. Serializable: the default. Repeatable read: allows only committed records to be read, and repeating a read should return the same value (so read locks should be retained). However, the phantom phenomenon need not be prevented: T1 may see some records inserted by T2, but may not see others inserted by T2. Read committed: same as degree-two consistency, but most systems implement it as cursor stability. Read uncommitted: allows even uncommitted data to be read. FYI only – you aren't responsible for this slide's content.

42 Summary Recovery Cascading & Its Avoidance Storage & Data Access Algorithms for: Shadow paging, Log-based recovery Deferred & immediate DB modifications. Checkpoints Intro. to Concurrency Introduction to Locking Pitfalls of Locking The Two-Phase Locking Protocol Weaker Levels of Consistency (for interest only) Next: Concurrency Control

43 Reading & Exercises Reading Silberschatz Ch: Connolly & Begg: 20.3 You will need the rest of 20.2 for next week, so if you want to stay in order go ahead and read that. Exercises: Silberschatz Connolly & Begg , 20.27
