CS 153 Design of Operating Systems Spring 2015 Lecture 22: File system optimizations.


Slide 1: CS 153 Design of Operating Systems, Spring 2015, Lecture 22: File system optimizations

Slide 2: Physical Disk Structure
- Disk components: platters, surfaces, tracks, sectors, cylinders, arm, heads
(figure: disk diagram labeling the arm, heads, track, platter, surface, cylinder, and sector)

Slide 3: Cylinder Groups
- BSD FFS addressed these problems using the notion of a cylinder group
  - The disk is partitioned into groups of adjacent cylinders
  - Data blocks in the same file are allocated in the same cylinder group
  - Files in the same directory are allocated in the same cylinder group
  - Inodes for files are allocated in the same cylinder group as the files' data blocks
- Free space requirement
  - To allocate by cylinder group, the disk must have free space scattered across cylinders
  - 10% of the disk is reserved just for this purpose
    - Only usable by root; this is why "df" may report >100%

Slide 4: Other Problems
- Small blocks (1K) caused two problems:
  - Low bandwidth utilization
  - Small maximum file size (a function of block size)
- Fix: use a larger block (4K)
  - Very large files: only two levels of indirection are needed to address 2^32 bytes
  - Problem: internal fragmentation
  - Fix: introduce "fragments" (1K pieces of a block)
- Problem: media failures
  - Fix: replicate the master block (superblock)
- Problem: device oblivious
  - Fix: parameterize according to device characteristics
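The "two levels of indirection for 2^32" claim can be checked with a quick back-of-the-envelope computation; the sketch below assumes 4 KB blocks and 4-byte block pointers (common illustrative values, not stated on the slide):

```python
# Sketch: how far double-indirect blocks reach, assuming 4 KB blocks
# and 4-byte block pointers (illustrative assumptions).
BLOCK = 4096                            # bytes per block
PTR = 4                                 # bytes per block pointer
ptrs_per_block = BLOCK // PTR           # 1024 pointers fit in one block
double_indirect = ptrs_per_block ** 2 * BLOCK  # blocks reachable via two levels
print(double_indirect)                  # 4294967296, i.e. 2**32 bytes = 4 GiB
```

So with a 4K block, two levels of indirection alone already address 2^32 bytes, whereas 1K blocks (256 pointers per block) would reach only 256^2 * 1024 = 2^26 bytes.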

Slide 5: The Results

Slide 6: Log-structured File System
- The Log-structured File System (LFS) was designed in response to two trends in workload and technology:
  1. Disk bandwidth scaling significantly (40% a year), while seek latency is not
  2. Large main memories in machines
     - Large buffer caches absorb a large fraction of read requests
     - The cache can be used for writes as well, coalescing small writes into large writes
- LFS takes advantage of both of these to increase FS performance
  - Rosenblum and Ousterhout (Berkeley, 1991)

Slide 7: FFS Problems
- LFS also addresses some problems with FFS
  - Placement is improved, but there are still many small seeks
    - Possibly related files are physically separated
    - Inodes are separated from files (small seeks)
    - Directory entries are separate from inodes
  - Metadata requires synchronous writes
    - With small files, most writes are to metadata (synchronous)
    - Synchronous writes are very slow

Slide 8: LFS Approach
- Treat the disk as a single log for appending
  - Collect writes in the disk cache, then write out the entire collection in one large disk request
    - Leverages disk bandwidth
    - No seeks (assuming the head is at the end of the log)
  - All information written to disk is appended to the log
    - Data blocks, attributes, inodes, directories, etc.
- Looks simple, but only in the abstract
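The core idea on this slide, buffering many small writes and flushing them as one large sequential append, can be sketched in a few lines (a toy model, not the real LFS code; the class and threshold are invented for illustration):

```python
# Sketch: coalescing small writes in memory and flushing them
# as one large sequential append (the core LFS write path idea).
class AppendLog:
    def __init__(self, flush_threshold: int):
        self.buffer = []              # pending small writes, held in memory
        self.buffered_bytes = 0
        self.log = bytearray()        # stands in for the on-disk log
        self.flush_threshold = flush_threshold

    def write(self, payload: bytes):
        self.buffer.append(payload)
        self.buffered_bytes += len(payload)
        if self.buffered_bytes >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One large sequential write instead of many small seeks.
        self.log += b"".join(self.buffer)
        self.buffer.clear()
        self.buffered_bytes = 0

log = AppendLog(flush_threshold=8)
log.write(b"inode")     # small writes accumulate in memory
log.write(b"data")      # crossing the threshold triggers one big append
print(len(log.log))     # 9: both records hit the log in a single flush
```

Note that data, inodes, and directory entries would all flow through the same buffer; nothing is written in place.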

Slide 9: LFS Challenges
- LFS has two challenges it must address to be practical:
  1. Locating data written to the log
     - FFS places files at known locations; LFS writes data "at the end"
  2. Managing free space on the disk
     - The disk is finite, so the log is finite; cannot append forever
     - Need to recover deleted blocks in old parts of the log

Slide 10: LFS: Locating Data
- FFS uses inodes to locate data blocks
  - Inodes are pre-allocated in each cylinder group
  - Directories contain the locations of inodes
- LFS appends inodes to the end of the log just like data
  - This makes them hard to find
- Approach
  - Use another level of indirection: inode maps
  - Inode maps map file numbers to inode locations
  - The locations of the inode map blocks are kept in a checkpoint region
  - The checkpoint region has a fixed location on disk
  - Inode maps are cached in memory for performance
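The lookup chain above (checkpoint region, inode map, inode, data) can be sketched with toy in-memory structures; all the addresses and dictionary shapes here are invented for illustration, not real LFS on-disk formats:

```python
# Sketch: resolving a file number to its data via the checkpoint region
# and inode map (hypothetical structures and addresses).
CHECKPOINT = {"imap_blocks": [100, 101]}    # fixed-location checkpoint region
DISK = {
    100: {1: 500, 2: 612},                  # imap block: file# -> inode address
    101: {3: 777},
    500: {"data_blocks": [510, 511]},       # inode for file 1
    510: b"hello ",
    511: b"world",
}

def lookup(file_no: int) -> bytes:
    # 1. The checkpoint region tells us where the inode map blocks live.
    for addr in CHECKPOINT["imap_blocks"]:
        imap = DISK[addr]
        if file_no in imap:
            # 2. The inode map gives the inode's current location in the log.
            inode = DISK[imap[file_no]]
            # 3. The inode points at data blocks, wherever they landed.
            return b"".join(DISK[b] for b in inode["data_blocks"])
    raise FileNotFoundError(file_no)

print(lookup(1))    # b'hello world'
```

In a real LFS, only the checkpoint region is at a fixed address; everything else, including the inode map blocks themselves, is appended to the log, which is why the extra indirection is needed.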

Slide 11: LFS Layout

Slide 12: LFS: Free Space Management
- An append-only LFS quickly runs out of disk space
  - Need to recover deleted blocks
- Approach:
  - Fragment the log into segments
  - Thread segments on disk
    - Segments can be anywhere
  - Reclaim space by cleaning segments
    - Read a segment
    - Copy its live data to the end of the log
    - Now the segment is free and can be reused
- Cleaning is a big problem
  - Costly overhead
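The cleaning steps above can be sketched as a small function (illustrative only; real cleaners must also update the inode map for every block they move, which this toy model omits):

```python
# Sketch: reclaiming a segment by copying only its live blocks
# to the end of the log, freeing the whole segment for reuse.
def clean_segment(segment: list, live: set, log_tail: list) -> list:
    """segment: block ids in the segment; live: block ids still referenced;
    log_tail: list standing in for the end of the log."""
    survivors = [b for b in segment if b in live]  # read segment, keep live data
    log_tail.extend(survivors)                     # append live blocks to log end
    segment.clear()                                # entire segment is now free
    return segment

tail = []
freed = clean_segment([1, 2, 3, 4], live={2, 4}, log_tail=tail)
print(tail, freed)    # [2, 4] [] : two live blocks moved, segment emptied
```

The cost problem is visible even in this sketch: the fuller the segment (the larger `live` is), the more data must be read and rewritten just to reclaim a little space.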

Slide 13: Write Cost Comparison
- Write cost of about 2 if segments are 20% full
- Write cost of 10 if segments are 80% full

Slide 14: Write Cost: Simulation

Slide 15: RAID
- Redundant Array of Inexpensive Disks (RAID)
  - A storage system, not a file system
  - Patterson, Katz, and Gibson (Berkeley, 1988)
- Idea: use many disks in parallel to increase storage bandwidth and improve reliability
  - Files are striped across disks
  - Each stripe portion is read/written in parallel
  - Bandwidth increases with more disks
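The striping idea reduces to simple address arithmetic; this sketch shows one common block-level layout (RAID 0-style, with an assumed 4-disk array), not any particular product's scheme:

```python
# Sketch: mapping a logical block number onto a striped array.
NUM_DISKS = 4

def locate(block: int):
    disk = block % NUM_DISKS      # consecutive blocks land on different disks
    offset = block // NUM_DISKS   # position of the block within that disk
    return disk, offset

# Blocks 0..3 form one stripe and can be transferred in parallel.
print([locate(b) for b in range(4)])   # [(0, 0), (1, 0), (2, 0), (3, 0)]
```

Because each block of a stripe sits on a different disk, a large sequential read keeps all four spindles busy at once, which is where the bandwidth scaling comes from.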

Slide 16: RAID (figure)

Slide 17: RAID Challenges
- Small writes (writes of less than a full stripe)
  - Need to read the entire stripe, update it with the small write, then write the entire stripe back out to the disks
- Reliability
  - More disks increase the chance of a media failure (lower mean time between failures, MTBF)
- Turn the reliability problem into a feature
  - Use one disk to store parity data
    - The XOR of all data blocks in the stripe
  - Any data block can be recovered from all the others plus the parity block
  - Hence the "redundant" in the name
  - Introduces overhead, but, hey, disks are "inexpensive"
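XOR parity and recovery can be demonstrated in a few lines; the byte values below are toy stand-ins for disk blocks:

```python
# Sketch: XOR parity over a stripe, and rebuilding a lost block.
from functools import reduce

data_blocks = [0b1010, 0b0110, 0b1100]            # blocks on three data disks
parity = reduce(lambda a, b: a ^ b, data_blocks)  # stored on the parity disk

# The disk holding data_blocks[1] fails; rebuild its block by
# XOR-ing the surviving data blocks with the parity block.
survivors = [data_blocks[0], data_blocks[2]]
recovered = reduce(lambda a, b: a ^ b, survivors + [parity])
print(recovered == data_blocks[1])   # True
```

Recovery works because XOR is its own inverse: XOR-ing everything except the lost block against the parity cancels all the other terms, leaving exactly the missing data.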

Slide 18: RAID with parity (figure: data blocks combined into a parity block)

Slide 19: RAID Levels
- In marketing literature, you will see RAID systems advertised as supporting different "RAID levels"
- Some common levels:
  - RAID 0: striping
    - Good for random access (no reliability)
  - RAID 1: mirroring
    - Two disks, write data to both (expensive, 1x storage overhead)
  - RAID 5: floating parity
    - Parity blocks for different stripes are written to different disks
    - No single parity disk, hence no bottleneck at that disk
  - RAID "10": striping plus mirroring
    - Higher bandwidth, but still a large storage overhead
    - Seen on PC RAID disk cards

Slide 20: RAID 0
- RAID 0: striping
  - Good for random access (no reliability)
  - Better read/write speed

Slide 21: RAID 1
- RAID 1: mirroring
  - Two disks, write data to both (expensive, 1x storage overhead)

Slide 22: RAID 5
- RAID 5: floating parity
  - Parity blocks for different stripes are written to different disks
  - No single parity disk, hence no bottleneck at that disk
  - Fast reads, but slower writes (parity computation)
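The "floating" placement just rotates the parity block across the disks from stripe to stripe; the sketch below shows one common rotation (a left-symmetric-style layout, chosen here as an illustrative assumption, since the slide does not fix a layout):

```python
# Sketch: rotating parity placement in a RAID 5 array of 4 disks.
NUM_DISKS = 4

def parity_disk(stripe: int) -> int:
    # Parity rotates across the disks so no single disk is a bottleneck.
    return (NUM_DISKS - 1 - stripe) % NUM_DISKS

print([parity_disk(s) for s in range(4)])   # [3, 2, 1, 0]
```

Spreading parity this way means parity-update traffic, which every small write incurs, is shared by all disks instead of hammering one dedicated parity disk (the RAID 4 bottleneck).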

Slide 23: RAID 1+0
- RAID "10": striping plus mirroring
  - Higher bandwidth, but still a large storage overhead
  - Seen on PC RAID disk cards

Slide 24: Summary
- LFS
  - Improves write performance by treating the disk as a log
  - The need to clean the log complicates things
- RAID
  - Spreads data across disks and stores parity on a separate disk

