FS Consistency, Block Allocation, and WAFL


Summary of Issues for File Systems

1. Buffering disk data for access from the processor: block I/O (DMA) must use aligned, physically resident buffers, and a block update is a read-modify-write.
2. Creating/representing/destroying independent files: disk block allocation, file block map structures, directories and symbolic naming.
3. Masking the high seek/rotational latency of disk access: smart block allocation on disk; block caching, read-ahead (prefetching), and write-behind.
4. Reliability and the handling of updates.

Rotational Media

[Figure: disk geometry: platter, track, sector, cylinder, arm, head.]

Access time = seek time + rotational delay + transfer time
Seek time: 5-15 milliseconds to move the disk arm and settle on a cylinder.
Rotational delay: 8 milliseconds for a full rotation at 7200 RPM, so the average delay is 4 ms.
Transfer time: 1 millisecond for an 8KB block at 8 MB/s.
Bandwidth utilization is less than 50% for any noncontiguous access at a block grain.
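
To sanity-check that claim with the numbers above, take a favorable case for a noncontiguous 8KB access (a 5 ms seek):

\[
\text{utilization} = \frac{t_{\text{transfer}}}{t_{\text{seek}} + t_{\text{rotation}} + t_{\text{transfer}}}
= \frac{1\ \text{ms}}{5\ \text{ms} + 4\ \text{ms} + 1\ \text{ms}} = 10\%.
\]

Even with an unrealistically fast 1 ms seek, utilization is only 1/(1 + 4 + 1), about 17%, still well under 50%.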

The Problem of Disk Layout

The level of indirection in the file block maps allows flexibility in file layout. "File system design is 99% block allocation." [McVoy]

Competing goals for block allocation: allocation cost, bandwidth for high-volume transfers, stamina, efficient directory operations.

Goal: reduce disk arm movement and seek overhead. Metric of merit: bandwidth utilization (or effective bandwidth).

FFS and LFS

We will study two different approaches to block allocation:
Cylinder groups in the Fast File System (FFS) [McKusick81], with clustering enhancements [McVoy91] and improved cluster allocation [Smith/Seltzer96]. FFS can also be extended with metadata logging [e.g., Episode].
The Log-Structured File System (LFS), proposed in [Douglis/Ousterhout90], implemented/studied in [Rosenblum91], given a BSD port (sort of, maybe) [Seltzer93], and extended with self-tuning methods [Neefe/Anderson97].

Other approach: extent-based file systems.

WAFL: High-Level View

The whole on-disk file system layout is a tree of blocks. Only the root of the tree sits at a fixed location on disk; everything else (allocation maps, metadata, user data) can be written anywhere.
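
Here is a minimal sketch of that structure in C. The names and fields are hypothetical (this is not NetApp's actual on-disk format); the point is that only the root has a fixed address, so everything below it can be relocated freely:

```c
#include <stdint.h>

/* Every non-root block is reached only by following pointers down
 * the tree, so each one can be written to any free location. */
struct block_ptr { uint64_t disk_addr; };   /* may point anywhere */

struct root_block {                /* lives at a FIXED disk address */
    uint64_t generation;           /* which consistency point this is */
    struct block_ptr inode_map;    /* root of the inode tree */
    struct block_ptr alloc_map;    /* free-space maps, themselves in the tree */
};
```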

WAFL: A Closer Look

Snapshots

"WAFL's primary distinguishing characteristic is Snapshots, which are read-only copies of the entire file system."

This was really the origin of the idea of a point-in-time copy for the file server market. What is this idea good for? Backups, mirroring, recovering from "pilot error", and consistency points.

Snapshots

The snapshot mechanism is used for user-accessible snapshots and for transient consistency points. How is this like a fork?

Shadowing

Shadowing is the basic technique for doing an atomic force; it is reminiscent of copy-on-write.

1. Starting point: modify some blocks (the purple/grey blocks in the figure).
2. Write the new blocks to disk and prepare a new block map.
3. Overwrite the block map (the atomic commit) and free the old blocks.

Frequent problems: nonsequential disk writes, and damage to clustered allocation on disk. How does WAFL deal with this?
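
Before turning to that question, here is a minimal sketch of the shadowing steps themselves, assuming a one-level tree and invented helper names (alloc_block, disk_write); per-block writes are assumed atomic:

```c
#include <stdint.h>

#define BLOCK_SIZE 4096

uint64_t alloc_block(void);                      /* pick a free block */
void disk_write(uint64_t addr, const void *buf); /* assumed atomic per block */

struct indirect { uint64_t child[BLOCK_SIZE / 8]; };

/* Replace child slot i: write the new data block and a new copy of
 * the indirect block (never touching the old ones), then flip the
 * root pointer as the single atomic commit. */
uint64_t shadow_update(const struct indirect *old_ind, int i,
                       const void *new_data, uint64_t root_addr)
{
    uint64_t data_addr = alloc_block();
    disk_write(data_addr, new_data);      /* step 2: write new blocks */

    struct indirect new_ind = *old_ind;   /* copy; old map stays intact */
    new_ind.child[i] = data_addr;
    uint64_t ind_addr = alloc_block();
    disk_write(ind_addr, &new_ind);       /* step 2: prepare new block map */

    disk_write(root_addr, &ind_addr);     /* step 3: atomic commit */
    return ind_addr;  /* old indirect and data blocks may now be freed */
}
```

Until the final single-block write happens, the old tree is still fully intact, which is what makes the force atomic.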

WAFL Consistency Points

"WAFL uses Snapshots internally so that it can restart quickly even after an unclean system shutdown." "A consistency point is a completely self-consistent image of the entire file system. When WAFL restarts, it simply reverts to the most recent consistency point."

Buffer dirty data in memory (delayed writes) and write new consistency points as an atomic batch (force). A consistency point transitions the FS from one self-consistent state to another. Combine with an NFS operation log in NVRAM. What if the NVRAM fails?
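
Setting that question aside, here is a hedged sketch of the cycle; every function name is invented for illustration:

```c
void absorb_ops_and_log_to_nvram(void);      /* delayed writes + NVRAM op log */
int  time_for_consistency_point(void);
void write_dirty_blocks_to_new_places(void); /* shadowing, as one batch */
void write_root_block(void);                 /* atomic commit of the new CP */
void truncate_nvram_log(void);               /* ops before this CP are durable */
void load_latest_root_block(void);
void replay_nvram_log(void);

void wafl_main_loop(void)
{
    for (;;) {
        absorb_ops_and_log_to_nvram();
        if (time_for_consistency_point()) {
            write_dirty_blocks_to_new_places();
            write_root_block();
            truncate_nvram_log();
        }
    }
}

void restart_after_crash(void)
{
    load_latest_root_block();  /* revert to the most recent CP */
    replay_nvram_log();        /* re-run NFS ops logged since that CP */
}
```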

The Problem of Metadata Updates

Metadata updates are a second source of FFS seek overhead. Metadata writes are poorly localized: e.g., extending a file requires writes to the inode, direct and indirect blocks, cylinder group bit maps and summaries, and the file block itself.

Metadata writes can be delayed, but this incurs a higher risk of file system corruption in a crash. If you lose your metadata, you are dead in the water. FFS schedules metadata block writes carefully to limit the kinds of inconsistencies that can occur. Some metadata updates must be synchronous on controllers that don't respect the order of writes.

FFS Failure Recovery

FFS uses a two-pronged approach to handling failures:

1. Carefully order metadata updates to ensure that no dangling references can exist on disk after a failure. Never recycle a resource (block or inode) before zeroing all pointers to it (truncate, unlink, rmdir). Never point to a structure before it has been initialized: e.g., sync the inode on creat before filling in the directory entry (sketched below), and sync a new block before writing the block map.
2. Run a file system scavenger (fsck) to fix other problems, e.g., free blocks and inodes that are not referenced. Fsck will never encounter a dangling reference or a double allocation.
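
A sketch of the creat ordering from step 1, with invented names (real FFS does this inside the kernel):

```c
struct inode { unsigned inumber; /* ... */ };
struct dir;

struct inode *alloc_inode_near(struct dir *parent);  /* same cylinder group */
void sync_write_inode(struct inode *ip);             /* synchronous write */
void add_dirent(struct dir *d, const char *name, unsigned inumber);
void sync_write_dir_block(struct dir *d);

void ffs_creat_ordered(struct dir *parent, const char *name)
{
    struct inode *ip = alloc_inode_near(parent);
    sync_write_inode(ip);          /* 1: initialized inode reaches disk first */

    add_dirent(parent, name, ip->inumber);
    sync_write_dir_block(parent);  /* 2: only now does anything point to it */
    /* A crash between 1 and 2 leaves an initialized but unreferenced
     * inode on disk; fsck simply frees it. No dangling reference. */
}
```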

Alternative: Logging and Journaling

Logging can be used to localize synchronous metadata writes and reduce the work that must be done on recovery. It is universally used in database systems, and used for metadata writes in journaling file systems (e.g., Episode).

Key idea: group each set of related updates into a single log record that can be written to disk atomically ("all-or-nothing"). Log records are written to the log file or log disk sequentially: no seeks, and temporal ordering is preserved. Each log record is trailed by a marker (e.g., a checksum) that says "this log record is complete". To recover, scan the log and reapply updates.
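
A sketch of the record format and recovery scan; the layout is illustrative, not any particular journal's:

```c
#include <stdint.h>

struct log_record {
    uint64_t seq;              /* position in the log: temporal ordering */
    uint32_t len;              /* payload length in bytes */
    uint32_t checksum;         /* the "this record is complete" marker */
    unsigned char payload[];   /* one set of related updates, all-or-nothing */
};

uint32_t checksum_of(const void *p, uint32_t n);
void reapply(const unsigned char *payload, uint32_t len);
struct log_record *next_record(void);    /* NULL at end of log */

void recover(void)
{
    struct log_record *r;
    while ((r = next_record()) != NULL) {
        if (checksum_of(r->payload, r->len) != r->checksum)
            break;             /* torn record: we crashed mid-write; stop */
        reapply(r->payload, r->len);   /* replays must be idempotent */
    }
}
```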

Metadata Logging

Here's one approach to building a fast filesystem:

1. Start with FFS with clustering.
2. Make all metadata writes asynchronous. But that approach cannot survive a failure, so:
3. Add a supplementary log for modified metadata.
4. When metadata changes, write the new versions immediately to the log, in addition to the asynchronous writes to "home".
5. If the system crashes, recover by scanning the log. This is much faster than scavenging (fsck) for large volumes.
6. If the system does not crash, then discard the log.

The Nub of WAFL

WAFL's consistency points allow it to buffer writes and push them out in a batch: deferred, clustered allocation; batched writes; localized writes.

Indirection through the metadata "tree" allows it to write data wherever convenient: the tree can point anywhere. This maximizes the benefit from batching writes in consistency points, and also allows multiple copies of a given piece of metadata, for snapshots.

SnapMirror

Is it research? What makes it interesting/elegant? What are the tech trends that motivate SnapMirror, and WAFL before it? Why is disaster recovery so important now? How does WAFL make mirroring easier? If a mirror fails, what is lost? Can both mirrors operate at the same time?

Mirroring

Structural issue: mirroring support can be built at the application level, the file system level, or the block storage level (e.g., a RAID unit). Who has the information? What has changed? What has been deallocated?

What Has Changed?

Given a snapshot X, WAFL can ask: is block B allocated in snapshot X? Given a snapshot X and a later snapshot Y, WAFL can ask: what blocks of Y should be sent to the mirror?

  allocated in X?   allocated in Y?   meaning
        0                 0           unused
        0                 1           added
        1                 0           deleted
        1                 1           unchanged

The "added" blocks are exactly the ones that must be shipped to the mirror; a sketch of the bitmap diff follows.
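
With one allocation bitmap per snapshot, the diff is a pure bitwise operation. A sketch with invented names; WAFL's no-overwrite policy is what guarantees that "allocated in Y but not in X" captures every new write:

```c
#include <stdint.h>
#include <stddef.h>

void send_block_to_mirror(size_t block_no);

void snapmirror_diff(const uint64_t *alloc_x,  /* bitmap of snapshot X */
                     const uint64_t *alloc_y,  /* bitmap of later snapshot Y */
                     size_t nwords)
{
    for (size_t w = 0; w < nwords; w++) {
        uint64_t added = alloc_y[w] & ~alloc_x[w];  /* in Y, not in X */
        while (added) {
            int bit = __builtin_ctzll(added);  /* lowest set bit (GCC/Clang) */
            send_block_to_mirror(w * 64 + bit);
            added &= added - 1;                /* clear that bit */
        }
    }
}
```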

Details

SnapMirror names disk blocks: why, and what are the implications? If a mirror fails, what is lost? How do we keep the mirror self-consistent? How does WAFL's no-overwrite policy help SnapMirror? What are the strengths and weaknesses of implementing this functionality above or below the file system? Does the conclusion depend on other details of WAFL? What can we conclude from the experiments?

FFS Cylinder Groups

FFS defines cylinder groups as the unit of disk locality, and it factors locality into allocation choices. Typical scale: thousands of cylinders, dozens of groups. Strategy: place "related" data blocks in the same cylinder group whenever possible, since seek latency is proportional to seek distance. Smear large files across groups, placing a run of contiguous blocks in each group. Reserve inode blocks in each cylinder group; this allows inodes to be allocated close to their directory entries and close to their data blocks (for small files).

FFS Allocation Policies

1. Allocate file inodes close to their containing directories. For mkdir, select a cylinder group with a more-than-average number of free inodes. For creat, place the inode in the same group as the parent. (A sketch of this policy appears below.)
2. Concentrate related file data blocks in cylinder groups. Most files are read and written sequentially. Place the initial blocks of a file in the same group as its inode. (How should we handle directory blocks?) Place adjacent logical blocks in the same cylinder group: logical block n+1 goes in the same group as block n, but switch to a different group for each indirect block.
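
A sketch of policy 1 with invented names; the real kernel routines are considerably more elaborate:

```c
struct cgroup { unsigned free_inodes; /* ... */ };

extern struct cgroup groups[];
extern unsigned ngroups, total_free_inodes;

/* mkdir: pick a group with a more-than-average number of free inodes,
 * spreading directories (and the files under them) across the disk. */
unsigned pick_group_for_mkdir(void)
{
    unsigned avg = total_free_inodes / ngroups;
    for (unsigned g = 0; g < ngroups; g++)
        if (groups[g].free_inodes > avg)
            return g;
    return 0;                          /* fallback: disk nearly full */
}

/* creat: keep the file's inode in its parent directory's group, so it
 * sits near its directory entry and (for small files) its data. */
unsigned pick_group_for_creat(unsigned parent_group)
{
    return parent_group;
}
```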

Disk Hardware (4)

[Figure: RAID levels 3 through 5; backup and parity drives are shaded.]

What to Know

We did not cover the LFS material in class, though it was in the Tanenbaum reading. I just want you to know what LFS is and how it compares to WAFL. LFS is log-structured: all writes go to the end of the log, while WAFL can write anywhere. Both use no-overwrite and indirect access to metadata. LFS requires a cleaner to find log segments with few allocated blocks and rewrite those blocks at the end of the log so it can free the segment.

Log-Structured File System (LFS)

In LFS, all block and metadata allocation is log-based. LFS views the disk as "one big log" (logically). All writes are clustered and sequential/contiguous, intermingling metadata and blocks from different files. Data is laid out on disk in the order it is written. No-overwrite allocation policy: if an old block or inode is modified, write it to a new location at the tail of the log.

LFS uses (mostly) the same metadata structures as FFS; only the allocation scheme is different. Cylinder group structures and free block maps are eliminated. Inodes are found by indirecting through a new map (the ifile).
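
A sketch of the ifile indirection, with an illustrative layout (not BSD-LFS's exact format):

```c
#include <stdint.h>

struct inode;

struct ifile_entry {
    uint64_t inode_disk_addr;   /* where this inode currently lives */
    uint32_t version;           /* bumped when the inumber is reused */
};

struct ifile_entry *ifile_lookup(uint32_t inumber); /* ifile is itself a file */
struct inode *read_inode_at(uint64_t disk_addr);

/* Inodes move every time they are rewritten at the log tail, so one
 * extra level of indirection replaces FFS's fixed inode locations. */
struct inode *lfs_read_inode(uint32_t inumber)
{
    return read_inode_at(ifile_lookup(inumber)->inode_disk_addr);
}
```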

Writing the Log in LFS

1. LFS "saves up" dirty blocks and dirty inodes until it has a full segment (e.g., 1 MB). Dirty inodes are grouped into block-sized clumps; dirty blocks are sorted by (file, logical block number). Each log segment includes summary info and a checksum.
2. LFS writes each log segment in a single burst, with at most one seek: find a free segment "slot" on the disk and write it, storing a back pointer to the previous segment. Logically the log is sequential, but physically it consists of a chain of segments, each large enough to amortize seek overhead.
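
A sketch of what a segment image might contain; the field layout is invented, but the ingredients (summary, checksum, back pointer, per-block identities) follow the description above:

```c
#include <stdint.h>

#define SEG_BYTES (1u << 20)   /* e.g., 1 MB per segment */

struct block_id {              /* identifies what each segment slot holds */
    uint32_t inumber;          /* which file */
    uint32_t logical_block;    /* which block of that file */
};

struct segment_summary {
    uint64_t prev_segment_addr;  /* back pointer chaining the log together */
    uint32_t nblocks;            /* data/inode blocks in this segment */
    uint32_t checksum;           /* validates that the burst completed */
    struct block_id ids[];       /* one entry per block that follows */
};

/* Write path: fill a segment buffer in memory, then issue one large
 * sequential write: at most one seek per ~1 MB of data. */
```

The summary and checksum also serve recovery: after a crash, LFS can roll forward through segments whose checksums verify.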

Writing the Log: the Rest of the Story

1. LFS cannot always delay writes long enough to accumulate a full segment; sometimes it must push a partial segment (fsync, the update daemon, an NFS server, etc.). Directory operations are synchronous in FFS, and some must be in LFS as well to preserve failure semantics and ordering.
2. LFS allocation and write policies affect the buffer cache, which is supposed to be filesystem-independent. Pin (lock) dirty blocks until the segment is written; dirty blocks cannot be recycled off the free chain as before. Endow indirect blocks with permanent logical block numbers suitable for hashing in the buffer cache.

Cleaning in LFS

What does LFS do when the disk fills up?

1. As the log is written, blocks and inodes written earlier in time are superseded ("killed") by versions written later: files are overwritten or modified, inodes are updated, and when files are removed their blocks and inodes are deallocated.
2. A cleaner daemon compacts the remaining live data to free up large hunks of space suitable for writing segments: it looks for segments with little remaining live data and writes that live data to the log tail. Cleaning can consume a significant share of bandwidth, and there are lots of cost/benefit heuristics involved (one well-known heuristic is sketched below).
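
The selection heuristic from the Sprite LFS paper rates each candidate segment by a benefit-to-cost ratio; a sketch, with an illustrative struct:

```c
struct segment_info {
    double utilization;   /* u: fraction of the segment still live (0..1) */
    double age;           /* time since the youngest block was written */
};

/* benefit/cost = (1 - u) * age / (1 + u): cleaning yields (1 - u) free
 * space, weighted by age (stable data stays clean longer), at a cost
 * of reading the segment (1) and rewriting its live data (u). */
double clean_score(const struct segment_info *s)
{
    double u = s->utilization;
    return (1.0 - u) * s->age / (1.0 + u);
}
/* The cleaner processes the highest-scoring segments first. */
```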

Allocating a Block in FFS

1. Try to allocate the rotationally optimal physical block after the previous logical block in the file. Skip rotdelay physical blocks between each logical block (rotdelay is 0 on track-caching disk controllers).
2. If that block is not available, find a block at a nearby rotational position in the same cylinder group: we'll need a short seek, but we won't wait for the rotation. If that is not available either, pick any other block in the cylinder group.
3. If the cylinder group is full, or we're crossing to a new indirect block, go find a new cylinder group and pick a block at the beginning of a run of free blocks.

Clustering in FFS

Clustering improves bandwidth utilization for large files read and written sequentially: allocate clumps/clusters/runs of blocks contiguously, and read/write an entire clump in one operation with at most one seek. Typical cluster sizes: 32KB to 128KB.

FFS can allocate contiguous runs of blocks "most of the time" on disks with sufficient free space. This (usually) occurs as a side effect of setting rotdelay = 0. Newer versions may relocate blocks to clusters of contiguous storage if the initial allocation did not place them well. The buffer cache must be modified to group buffers together and read/write them in contiguous clusters.

Effect of Clustering

Access time = seek time + rotational delay + transfer time
Average seek time: 2 ms for an intra-cylinder-group seek, let's say.
Rotational delay: 8 milliseconds for a full rotation at 7200 RPM, so the average delay is 4 ms.
Transfer time: 1 millisecond for an 8KB block at 8 MB/s.

8KB blocks deliver about 15% of disk bandwidth; 64KB blocks/clusters deliver about 50%; 128KB blocks/clusters deliver about 70%. (The sketch below reproduces these figures.) Actual performance will likely be better with good disk layout, since most seek/rotate delays to read the next block/cluster will be "better than average".
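
A quick check of those figures using the slide's own numbers (2 ms seek, 4 ms average rotational delay, 8 MB/s transfer rate):

```c
#include <stdio.h>

/* utilization = transfer / (seek + rotate + transfer) */
int main(void)
{
    const double seek_ms = 2.0, rotate_ms = 4.0, mb_per_s = 8.0;
    const double sizes_kb[] = { 8.0, 64.0, 128.0 };

    for (int i = 0; i < 3; i++) {
        double xfer_ms = sizes_kb[i] / 1024.0 / mb_per_s * 1000.0;
        double util = xfer_ms / (seek_ms + rotate_ms + xfer_ms);
        printf("%5.0f KB: %5.2f ms transfer, %3.0f%% utilization\n",
               sizes_kb[i], xfer_ms, util * 100.0);
    }
    return 0;   /* prints about 14%, 57%, and 72%: the slide's ~15/50/70 */
}
```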

Sequential File Write

[Figure: physical disk sector vs. time in milliseconds. Note sequential block allocation; phases of write, write stall, and read; a sync command (typed to shell) pushes indirect blocks to disk; a read fetches the next block of the free-space bitmap (??).]

Sequential Writes: A Closer Look

[Figure: physical disk sector vs. time in milliseconds. 16 MB written in one second (one indirect block's worth); a longer delay while the head moves to push indirect blocks; a 140 ms delay for a cylinder seek, etc. (???).]

Small-File Create Storm

[Figure: 50 MB of small-file creates; physical disk sector vs. time in milliseconds. Note synchronous writes for some metadata; inodes and file contents get localized allocation; periodic syncs push delayed-write metadata; phases of write and write stall.]

Small-File Create: A Closer Look

[Figure: physical disk sector vs. time in milliseconds.]