
Split Snapshots and Skippy Indexing: Long Live the Past! Ross Shaull and Liuba Shrira, Brandeis University.


1 Split Snapshots and Skippy Indexing: Long Live the Past! Ross Shaull and Liuba Shrira, Brandeis University

2 Our Idea of a Snapshot
– A window to the past in a storage system: access data as it was at the time the snapshot was requested
– System-wide
– Snapshots may be kept forever, i.e., "long-lived" snapshots
– Snapshots are consistent (whatever that means… see the consistency model below)
– High frequency, up to continuous data protection (CDP)

3 Why Take Snapshots?
– Fix operator errors
– Auditing: when did Bob's salary change, and who made the changes?
– Analysis: how much capital was tied up in blue shirts at the beginning of this fiscal year?
– We don't necessarily know now what will be interesting in the future

4 BITE
– Give the storage system a new capability: Back-in-Time Execution (BITE)
– Run read-only code against the current state and any snapshot
– After issuing a request for BITE, no special code is required for accessing data in the snapshot
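
The slide describes BITE only as a capability; as a rough illustration of the programming model, here is a toy Python mock of a hypothetical snapshot()/bite() interface. All names are invented, not the actual system's API, and the deep copy inside snapshot() is only a stand-in: the whole point of split snapshots is to avoid such wholesale copying.

    import copy

    class ToyStore:
        """In-memory stand-in for a snapshotting store, purely to show the BITE model."""
        def __init__(self):
            self.current = {}        # the "current state": key -> record
            self.snapshots = {}      # snapshot id -> frozen copy of the state

        def snapshot(self):
            snap_id = len(self.snapshots) + 1
            # Stand-in only: a real split snapshot copies the past out incrementally.
            self.snapshots[snap_id] = copy.deepcopy(self.current)
            return snap_id

        def bite(self, snap_id):
            # Back-in-Time Execution: ordinary read-only code runs against this view.
            return self.snapshots[snap_id]

    db = ToyStore()
    db.current["bob"] = {"salary": 100}
    s1 = db.snapshot()
    db.current["bob"] = {"salary": 120}
    print(db.bite(s1)["bob"]["salary"])   # 100: the value as of snapshot s1
    print(db.current["bob"]["salary"])    # 120: the current value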

5 Other Approaches: Databases
– ImmortalDB, Time-Split BTree (Lomet): reorganizes current state; complex
– Snapshot isolation (PostgreSQL, Oracle): extension to transactions; only for recent past
– Oracle FlashBack: page-level copy of recent past (not forever); interface seems similar to BITE

6 Other Approaches: FS
– WAFL (Hitz), ext3cow (Peterson): limited on-disk locality; application-level consistency a challenge
– VSS (Sankaran): blocks disk requests; suitable for backup-type frequency

7 A Different Approach
– Goals:
  – Avoid declustering current state
  – Don't change how current state is accessed
  – Application requests snapshot
  – Snapshots are "on-line" (not in a warehouse)
– Split Snapshots:
  – Copy the past out incrementally
  – Snapshots available through a virtualized buffer manager

8 Our Storage System Model
– A "database":
  – Has transactions
  – Has a recovery log
  – Organizes data in pages on disk

9 Our Consistency Model
– Crash consistency:
  – Imagine that a snapshot is declared, but then, before any modifications can be made, the system crashes
  – After restart, recovery kicks in and the current state is restored to *some* consistent point
  – All snapshots will have this same consistency guarantee after a crash

10 Our Storage System Model
[Figure: the application asks "I want record R"; the access methods find the table, find the root, search for R, and return R; a cache sits between the access methods and the disk (pages P1 … Pn); a page table maps P1 to address X, P2 to address Y, and so on; a timeline marks "Snapshot" in the past and "Now".]
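
A minimal sketch of the read path in this model, assuming the page table is simply a map from logical page ids to disk addresses and the cache is keyed by address; all structures and names here are illustrative:

    # Toy model: reads go access methods -> page table -> cache -> disk.
    disk = {0: {"R": "record R data"}, 1: {"S": "record S data"}}   # address -> page contents
    page_table = {"P1": 0, "P2": 1}                                  # page id -> disk address
    cache = {}                                                       # address -> cached page

    def read_record(page_id, key):
        addr = page_table[page_id]      # translate the logical page to a disk address
        page = cache.get(addr)
        if page is None:                # cache miss: fetch the page from disk
            page = disk[addr]
            cache[addr] = page
        return page[key]                # search within the page for the record

    print(read_record("P1", "R"))       # "record R data"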

11 Retaining the Past
[Figure: two approaches to retaining the past, shown side by side ("versus").]

12 Copy-on-Write (COW)
– Operations: Snapshot "S", then Modify P1
– The old page table became the Snapshot page table "S"
[Figure: the current page table and the snapshot page table "S"; after P1 is modified, the current table points to the new copy of P1 while the snapshot table still points to the old P1 and the shared P2.]
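
A minimal sketch of the COW step shown on this slide, with pages and page tables modeled as plain Python dicts (addresses and names are illustrative):

    # Snapshot "S" freezes the old page table; the first modification of a page
    # afterwards writes the new version elsewhere and updates only the current table.
    page_table = {"P1": "addr_A", "P2": "addr_B"}    # current state: page id -> disk address

    def declare_snapshot(current_table):
        # The old page table becomes the snapshot page table.
        return dict(current_table)

    def modify_page(current_table, page_id, new_addr):
        # The snapshot page table keeps pointing at the pre-state copy.
        current_table[page_id] = new_addr

    spt_S = declare_snapshot(page_table)      # Operation: Snapshot "S"
    modify_page(page_table, "P1", "addr_A2")  # Operation: Modify P1
    print(spt_S["P1"], page_table["P1"])      # addr_A addr_A2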

13 Split-COW
– Expensive to update P2 in both page tables
[Figure: the current page table alongside snapshot page tables SPT(S) and SPT(S+1); a page such as P2 that is shared by several snapshots would have to be updated in each of their page tables.]

14 What's next
1. How to manage the metadata?
2. How will snapshot pages be accessed?
3. Can we be non-disruptive?

15 Metadata Solution
– Metadata (page tables) created incrementally
– Keeping many SPTs is costly
– Instead, write "mappings" into a log
– Materialize the SPT on demand

16 Maplog
– Mappings created incrementally
– Added to an append-only log
– Start points to the first mapping created after a snapshot is declared
[Figure: the maplog as a sequence of mappings for pages P1, P2, P3, with Start pointers for Snap 1 through Snap 6.]

17 Materialize SPT with scan
– The scan for SPT(S) begins at Start(S)
– Notice that we read some mappings that we do not need
[Figure: the same maplog; the scan proceeds forward from Start(S), passing redundant mappings for already-seen pages.]
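
Putting the last two slides together, here is a small sketch of an in-memory maplog with per-snapshot Start pointers and an SPT materialized by a forward scan; the representation (a Python list of (page id, pre-state address) pairs) is illustrative, not the on-disk format:

    maplog = []        # append-only log of (page_id, pre_state_address) mappings
    start = {}         # snapshot id -> index of the first mapping created after it was declared

    def declare_snapshot(snap_id):
        start[snap_id] = len(maplog)               # Start(S) points just past the current log end

    def record_mapping(page_id, pre_state_addr):
        maplog.append((page_id, pre_state_addr))   # written when a page is first overwritten

    def materialize_spt(snap_id):
        spt = {}
        for page_id, addr in maplog[start[snap_id]:]:   # the scan begins at Start(S)
            if page_id not in spt:                      # later mappings for an already-seen page
                spt[page_id] = addr                     # are read but not needed
        return spt

    declare_snapshot(1); record_mapping("P1", 100); record_mapping("P2", 200)
    declare_snapshot(2); record_mapping("P1", 101)
    print(materialize_spt(1))   # {'P1': 100, 'P2': 200}
    print(materialize_spt(2))   # {'P1': 101}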

18 Cost of Scanning Maplog
– Let the overwrite cycle length L be the number of page updates required to overwrite the entire database
– A maplog scan cannot be longer than the overwrite cycle
– Let N be the number of pages in the database
– For a uniformly random workload, L ≈ N ln N (by the "coupon collector's waiting time" problem)
– Skew in the update workload lengthens the overwrite cycle
– Skew of 80/20 (80% of updates to 20% of pages) increases L by a factor of 4
– Skew hurts
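
As a quick check of the L ≈ N ln N claim: under uniformly random updates the coupon collector's expectation is exactly N times the N-th harmonic number, which is close to N ln N. The page count below is just an example value:

    import math

    def expected_overwrite_cycle(n_pages):
        # Coupon collector: expected number of uniform random page updates needed
        # to touch every one of n_pages at least once is N * H_N, roughly N ln N.
        harmonic = sum(1.0 / k for k in range(1, n_pages + 1))
        return n_pages * harmonic

    N = 10_000
    print(round(expected_overwrite_cycle(N)))   # ~97876 updates
    print(round(N * math.log(N)))               # ~92103, the N ln N approximation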

19 Skippy
– Copy the first-encountered mapping (FEM) within each maplog node up to the next level
[Figure: the maplog (Snap 1 through Snap 6) below Skippy Level 1; arrows show copies of the FEMs up to Level 1 and pointers from Level 1 entries back to the maplog nodes.]

20 Skippy
– This cuts the redundant mapping count in half
[Figure: the same maplog and Skippy Level 1, with roughly half as many redundant mappings at the upper level.]
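
A sketch of the Skippy Level 1 construction described on these two slides, assuming the maplog is split into fixed-size nodes and the first-encountered mapping (FEM) per page within each node is copied up, together with a pointer back to that node; the node size and names are illustrative:

    def build_skippy_level(maplog, node_size):
        """Copy each node's first-encountered mappings up to the next level."""
        level = []                                  # (page_id, addr, pointer_to_node) entries
        for node_start in range(0, len(maplog), node_size):
            node = maplog[node_start:node_start + node_size]
            seen = set()
            for page_id, addr in node:
                if page_id not in seen:             # first-encountered mapping in this node
                    seen.add(page_id)
                    level.append((page_id, addr, node_start))
                # mappings for pages already seen in this node are redundant and not copied
        return level

    maplog = [("P1", 100), ("P1", 101), ("P2", 200), ("P1", 102), ("P1", 103), ("P2", 201)]
    print(build_skippy_level(maplog, node_size=3))
    # [('P1', 100, 0), ('P2', 200, 0), ('P1', 102, 3), ('P2', 201, 3)]  -- 6 mappings become 4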

21 K-Level Skippy
– Can eliminate the effect of skew, or more
– Enables ad-hoc, on-line access to snapshots, whether they are old or young

Skew  | # Skippy Levels | Time to Materialize SPT (s)
50/50 | 0               | 13.8
80/20 | 0               | 19.0
80/20 | 1               | 15.8
80/20 | 2               | 14.7
80/20 | 3               | 13.9
99/1  | 0               | 33.3
99/1  | 1               | 6.69

22 Accessing Snapshots
– Transparent to layers above the cache
– An indirection layer redirects page requests from a BITE transaction into the snapstore
[Figure: the cache serving both "Read Current State" and "BITE" requests; BITE requests for overwritten pages are redirected to the snapshot copies of P1 and P2.]
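
A sketch of the indirection layer, assuming each page request carries the requesting transaction's snapshot id (None for ordinary transactions) and that the snapshot's SPT has already been materialized; all names and structures are illustrative:

    def fetch_page(page_id, snap_id, page_table, spt, current_store, snapstore):
        """Redirect BITE page requests into the snapstore; serve others from current state."""
        if snap_id is None:                              # ordinary transaction: current state
            return current_store[page_table[page_id]]
        if page_id in spt[snap_id]:                      # page overwritten since the snapshot:
            return snapstore[spt[snap_id][page_id]]      # read the pre-state copy from the snapstore
        return current_store[page_table[page_id]]        # unmodified page: current copy still valid

    page_table = {"P1": 0, "P2": 1}
    current_store = {0: "P1 (new)", 1: "P2 (current)"}
    snapstore = {100: "P1 (as of snapshot 1)"}
    spt = {1: {"P1": 100}}                               # only P1 has changed since snapshot 1

    print(fetch_page("P1", 1, page_table, spt, current_store, snapstore))   # pre-state copy
    print(fetch_page("P2", 1, page_table, spt, current_store, snapstore))   # current copy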

23 Non-Disruptiveness
– Can we create Skippy and COW pre-states without disrupting the current state?
– Key idea:
  – Leverage recovery to defer all snapshot-related writes
  – Write snapshot data in the background to a secondary disk

24 Implementation
– Berkeley DB (BDB) 4.6.21
– Page cache augmented:
  – COWs write-locked pages
  – Trickles COW'd pages out over time
– Leverage recovery:
  – Metadata created in memory at transaction commit time, but only written at checkpoint time
  – After a crash, snapshot pages and metadata can be recovered in one log pass
– Costs:
  – Snapshot log record
  – Extra memory
  – Longer checkpoints

25 Early Disruptiveness Results
– Single-threaded updating workload of 100,000 transactions
– 66 MB database
– We can retain a snapshot after every transaction for a 6–8% penalty to writers
– Tests with readers show little impact on sequential scans (not depicted)

26 Paper Trail
– Upcoming poster and short paper at ICDE 2008
– "Skippy: a New Snapshot Indexing Method for Time Travel in the Storage Manager" to appear in SIGMOD 2008
– Poster and workshop talks: NEDBDay 2008, SYSTOR 2008

27 Questions?

28 Backup slides…

29 Recovery Sketch 1
– Snapshots are crash consistent
– Must recover data and metadata for all snapshots since the last checkpoint
– Pages might have been trickled, so we must truncate the snapstore back to the last mapping before the previous checkpoint
– We require only that a snapshot log record be forced into the log with a group commit; no other data or metadata must be logged until the checkpoint

30 Recovery Sketch 2
– Walk backward through the WAL, applying UNDOs
– When a snapshot record is encountered, copy the "dirty" pages and create a mapping
– The trouble is that snapshots can be concurrent with transactions
– Cope with this by "COWing" a page when an UNDO for a different transaction is applied to that page

31 The Future
– Sometimes we want to scrub the past:
  – Running out of space?
  – Retention windows for SOX compliance
– Change the past state representation:
  – Deduplication
  – Compression

