
1 Optimization of LFS with Slack Space Recycling and Lazy Indirect Block Update
Yongseok Oh and Donghee Lee (University of Seoul), Jongmoo Choi (Dankook University), Eunsam Kim and Sam H. Noh (Hongik University)
The 3rd Annual Haifa Experimental Systems Conference (SYSTOR 2010), May 24, 2010, Haifa, Israel

2 Outline
Introduction
Slack Space Recycling and Lazy Indirect Block Update
Implementation of SSR-LFS
Performance Evaluation
Conclusions

3 Log-structured File System
LFS collects small write requests and writes them sequentially to the storage device [Rosenblum et al., ACM TOCS '91]
Advantages
■ Superior write performance for random workloads
■ Fast recovery
Drawbacks
■ On-demand cleaning
■ Cascading meta-data update
[Figure: blocks A, B, C, D collected in the segment buffer and written sequentially to storage]
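The write path can be made concrete with a short sketch. The following C fragment is a minimal illustration of segment buffering, not the SSR-LFS code; all names (struct seg_buffer, lfs_write_block, the 1 MB segment size) are hypothetical:

```c
/* Minimal sketch of LFS-style segment buffering (hypothetical names):
 * small writes are appended to an in-memory segment buffer; when the
 * buffer fills, the whole segment is written sequentially at the log end. */
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define BLOCK_SIZE 4096
#define SEG_BLOCKS 256                   /* 1 MB segment, assumed size */

struct seg_buffer {
    char  data[SEG_BLOCKS * BLOCK_SIZE];
    int   used;                          /* blocks filled so far */
    off_t log_end;                       /* next free segment offset on disk */
    int   dev_fd;                        /* raw device or file descriptor */
};

/* Append one block; flush the segment sequentially when it is full. */
static void lfs_write_block(struct seg_buffer *sb, const void *block)
{
    memcpy(sb->data + (size_t)sb->used * BLOCK_SIZE, block, BLOCK_SIZE);
    if (++sb->used == SEG_BLOCKS) {
        pwrite(sb->dev_fd, sb->data, sizeof sb->data, sb->log_end);
        sb->log_end += sizeof sb->data;  /* the log grows sequentially */
        sb->used = 0;
    }
}
```

This is why LFS handles random workloads well: the device only ever sees full-segment sequential writes, regardless of the request pattern.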

4 Storage Devices
Hard Disk Drives
■ Mechanical components: disk head, spindle motor, and platters
■ Poor random read/write performance
Solid State Drives
■ Consist of NAND flash memory; no mechanical parts
■ High performance, low power consumption, and shock resistance
■ Sequential writes are faster than random writes

5 Outline
Introduction
Slack Space Recycling and Lazy Indirect Block Update
Implementation of SSR-LFS
Performance Evaluation
Conclusions

6 LFS cleaning
To make free segments, the LFS cleaner copies valid blocks to another free segment
On-demand cleaning
■ Decreases overall performance
Background cleaning
■ Does not affect performance
[Figure: valid blocks A, B, C, D copied from Segments 1 and 2 into free Segment 3]
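As a concrete illustration of the copying cost, here is a hedged C sketch of cleaning (the structures are hypothetical; a real cleaner must also repoint i-node and indirect entries and update the segment usage table):

```c
/* Sketch of cleaning (hypothetical structures): live blocks of a victim
 * segment are copied into a free segment, then the victim becomes free. */
#include <stdbool.h>
#include <string.h>

#define SEG_BLOCKS 256
#define BLOCK_SIZE 4096

struct segment {
    char blocks[SEG_BLOCKS][BLOCK_SIZE];
    bool valid[SEG_BLOCKS];   /* liveness, from the segment usage table */
};

static void clean_segment(struct segment *victim, struct segment *free_seg)
{
    int out = 0;
    for (int i = 0; i < SEG_BLOCKS; i++) {
        if (!victim->valid[i])
            continue;
        /* copy a live block; the file system must also repoint the
         * i-node/indirect entries at the block's new location */
        memcpy(free_seg->blocks[out], victim->blocks[i], BLOCK_SIZE);
        free_seg->valid[out++] = true;
        victim->valid[i] = false;
    }
    /* victim now holds no live blocks and can be reused */
}
```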

7 Hole-plugging
Matthews et al. employed hole-plugging in LFS [Matthews et al., ACM OSR '97]
The cleaner copies valid blocks into the holes of other segments
[Figure: valid blocks A and B copied from Segment 1 into the holes of other used segments]
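Hole-plugging differs only in where the live blocks land. A hedged sketch, reusing the hypothetical struct segment from the cleaning sketch above:

```c
/* Sketch of hole-plugging: live blocks are copied into invalid slots
 * ("holes") of another used segment instead of into a fresh segment. */
static void plug_holes(struct segment *victim, struct segment *target)
{
    int hole = 0;
    for (int i = 0; i < SEG_BLOCKS; i++) {
        if (!victim->valid[i])
            continue;
        while (hole < SEG_BLOCKS && target->valid[hole])
            hole++;                        /* find the next hole */
        if (hole == SEG_BLOCKS)
            return;                        /* target full; pick another */
        memcpy(target->blocks[hole], victim->blocks[i], BLOCK_SIZE);
        target->valid[hole] = true;
        victim->valid[i] = false;
    }
}
```

The trade-off: no free segment is consumed, but the copies become small random writes into the target segment.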

8 Slack Space Recycling (SSR) Scheme
We propose the SSR scheme, which directly recycles slack space to avoid on-demand cleaning
Slack space is the invalid area in a used segment
[Figure: new blocks E, F, G, H from the segment buffer written directly into the slack space of Segments 1 and 2]
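The key difference from both cleaning and hole-plugging is that SSR moves no old data at all: it writes the new data into the holes. A hedged C sketch, again reusing the hypothetical struct segment:

```c
/* Sketch of Slack Space Recycling: new blocks from the segment buffer are
 * written directly into the invalid slots (slack space) of a used segment,
 * so no valid-block copying and no on-demand cleaning is required. */
static int recycle_slack(struct segment *used_seg,
                         const char new_blocks[][BLOCK_SIZE], int n)
{
    int placed = 0;
    for (int i = 0; i < SEG_BLOCKS && placed < n; i++) {
        if (used_seg->valid[i])
            continue;                      /* skip live blocks */
        memcpy(used_seg->blocks[i], new_blocks[placed++], BLOCK_SIZE);
        used_seg->valid[i] = true;         /* slack slot now holds new data */
    }
    return placed;                         /* number of new blocks placed */
}
```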

9 Cascading meta-data update
Updating a data block incurs cascading meta-data updates
Because LFS writes out-of-place, modifying one data block consumes 4 new blocks: the data block, its indirect block, the double indirect block, and the i-node
[Figure: updating data block A forces new copies A' (data), B' (indirect), C' (double indirect), and D' (i-node) to be written to Segment 2]
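The cascade can be seen in code form. A hedged sketch of an out-of-place update path (the on-disk layout and the log_append helper are assumptions for illustration, not the paper's structures):

```c
/* Sketch of the 4-block cascade: because LFS never overwrites in place,
 * relocating a data block relocates every block that points to it. */
#include <sys/types.h>

struct indirect  { off_t data[1024]; };      /* pointers to data blocks     */
struct dindirect { off_t indirect[1024]; };  /* pointers to indirect blocks */
struct inode_d   { off_t double_indirect; };

static off_t log_end;                        /* toy stand-in for the log    */
static off_t log_append(const void *block)   /* append block, return addr   */
{
    (void)block;
    return log_end += 4096;
}

static void update_data_block(struct inode_d *ino, struct dindirect *di,
                              struct indirect *ind, int slot,
                              const void *new_data)
{
    ind->data[slot]      = log_append(new_data); /* 1: new data block       */
    di->indirect[0]      = log_append(ind);      /* 2: new indirect block   */
    ino->double_indirect = log_append(di);       /* 3: new double indirect  */
    (void)log_append(ino);                       /* 4: new i-node           */
}
```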

10 Lazy Indirect Block Update (LIBU) scheme
We propose the LIBU scheme to decrease cascading meta-data updates
■ LIBU uses an IBC (Indirect Block Cache) to absorb frequent updates of indirect blocks
■ An indirect map is necessary to terminate the cascading meta-data update
[Figure: updating data block A writes only A' to the log; the dirty indirect blocks are inserted into the IBC instead of being written immediately]
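A hedged sketch of what LIBU changes in the update path above (the IBC layout and indirect map shape are illustrative assumptions; it reuses struct indirect and log_append from the previous sketch):

```c
/* Sketch of LIBU: only the data block goes to the log; the dirty indirect
 * block is parked in the Indirect Block Cache (IBC). The indirect map
 * (logical indirect-block number -> on-disk address) lets the i-node stay
 * unchanged, terminating the cascade. */
#include <stdlib.h>

#define IBC_BUCKETS 1024

struct ibc_entry {
    unsigned long     iblk_no;   /* logical indirect-block number */
    struct indirect  *block;     /* cached dirty indirect block   */
    struct ibc_entry *next;
};

static struct ibc_entry *ibc[IBC_BUCKETS];   /* small hash table   */
static off_t indirect_map[1u << 20];         /* iblk_no -> address */

static void libu_update(unsigned long iblk_no, struct indirect *ind,
                        int slot, const void *new_data)
{
    ind->data[slot] = log_append(new_data);  /* only the data block logged */

    struct ibc_entry *e = malloc(sizeof *e); /* park dirty indirect in IBC */
    e->iblk_no = iblk_no;
    e->block   = ind;
    e->next    = ibc[iblk_no % IBC_BUCKETS];
    ibc[iblk_no % IBC_BUCKETS] = e;
}

/* When the IBC is flushed, each cached block is written once and the
 * indirect map is updated, batching many absorbed updates: */
static void ibc_flush_entry(struct ibc_entry *e)
{
    indirect_map[e->iblk_no] = log_append(e->block);
}
```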

11 Crash Recovery
For crash recovery, LFS periodically stores checkpoint information
If a power failure occurs:
■ search for the last checkpoint
■ scan all segments written after the last checkpoint
■ rebuild the i-node map, segment usage table, indirect map, and the indirect blocks in the IBC
[Figure: after a power failure, recovery finds the last checkpoint on disk, scans the segments written after it, and rebuilds the in-RAM structures to reach a consistent state]
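A hedged sketch of the roll-forward loop (the helper functions are hypothetical placeholders for the steps listed above):

```c
/* Sketch of crash recovery: start at the last checkpoint and replay every
 * segment written after it, rebuilding the in-memory structures. */
#include <sys/types.h>

struct checkpoint { off_t log_end; /* plus saved maps and tables */ };

/* Hypothetical helpers, standing in for the steps on this slide: */
struct checkpoint *find_last_checkpoint(int dev_fd);
int  read_next_segment(int dev_fd, off_t *pos, void *seg); /* 0 at log end */
void replay_segment(const void *seg); /* rebuild i-node map, segment usage
                                         table, indirect map, IBC blocks */

static void recover(int dev_fd)
{
    struct checkpoint *cp = find_last_checkpoint(dev_fd);
    off_t pos = cp->log_end;
    char  seg[256 * 4096];

    while (read_next_segment(dev_fd, &pos, seg))
        replay_segment(seg);
    /* the file system is now back in a consistent state */
}
```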

12 Outline
Introduction
Slack Space Recycling and Lazy Indirect Block Update
Implementation of SSR-LFS
Performance Evaluation
Conclusions

13 Implementation of SSR-LFS
We implemented SSR-LFS (Slack Space Recycling LFS)
■ Using the FUSE (Filesystem in Userspace) framework in Linux
SSR-LFS selectively chooses either SSR or cleaning (see the sketch below)
■ When the system is idle, it performs background cleaning
■ When the system is busy, it performs SSR or on-demand cleaning
 - If the average slack size is too small, it selects on-demand cleaning
 - Otherwise, it selects SSR
[Figure: Architecture of SSR-LFS — a write("/mnt/file") passes through the VFS and the FUSE kernel module to the userspace SSR-LFS core (via libfuse), which contains the Syncer, Cleaner, Recycler, IBC, buffer cache, and i-node cache]
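The selection policy on this slide fits in a few lines. A hedged C sketch (the threshold constant is an assumption; its actual value is not given here):

```c
/* Sketch of the SSR-vs-cleaning policy: clean in the background when idle;
 * when busy, recycle slack unless the average slack per segment is too
 * small to be worth writing into. */
#include <stdbool.h>

#define MIN_AVG_SLACK_BLOCKS 32   /* hypothetical cut-off */

enum action { BACKGROUND_CLEAN, ON_DEMAND_CLEAN, DO_SSR };

static enum action choose_action(bool system_idle, int avg_slack_blocks)
{
    if (system_idle)
        return BACKGROUND_CLEAN;  /* free segments made off the critical path */
    if (avg_slack_blocks < MIN_AVG_SLACK_BLOCKS)
        return ON_DEMAND_CLEAN;   /* slack too fragmented to recycle */
    return DO_SSR;                /* recycle slack, skip cleaning    */
}
```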

14 Outline
Introduction
Slack Space Recycling and Lazy Indirect Block Update
Implementation of SSR-LFS
Performance Evaluation
Conclusions

15 Experimental Environment
For comparison, we used several file systems
■ Ext3-FUSE
■ Org-LFS (cleaning)
■ Org-LFS (plugging)
■ SSR-LFS
Benchmarks
■ IO TEST, which generates a random write workload
■ Postmark, which simulates the workload of a mail server
Storage Devices
■ SSD: Intel SSDSA2SH032G1GN
■ HDD: Seagate ST3160815AS

16 Random Update Performance
SSR-LFS performs better than the others over a wide range of utilizations
On HDD
■ SSR-LFS and Org-LFS outperform Ext3-FUSE at utilizations below 85%
On SSD
■ Ext3-FUSE outperforms Org-LFS because the SSD is internally optimized for random writes
[Figures: elapsed time vs. utilization on HDD and SSD]

17 Postmark Benchmark Result (1)
Medium file size (16KB ~ 256KB)
■ 1,000 subdirectories, 100,000 files, 100,000 transactions
SSR-LFS outperforms the other file systems on both devices
Org-LFS shows better performance than Ext3-FUSE on HDD
Ext3-FUSE shows comparable performance to Org-LFS on SSD

18 Postmark Benchmark Result (2)
Small file size (4KB ~ 16KB)
■ 1,000 subdirectories, 500,000 files, 200,000 transactions
Ext3-FUSE performs better than the other file systems on SSD
■ due to Ext3's meta-data optimizations, such as hash-based directories

19 Outline
Introduction
Slack Space Recycling and Lazy Indirect Block Update
Implementation of SSR-LFS
Performance Evaluation
Conclusions

20 Conclusions
SSR-LFS outperforms the original-style LFS over a wide range of utilizations
Future work
■ Optimization of meta-data structures
■ Cost-based selection between cleaning and SSR
We plan to release the source code of SSR-LFS this year
■ http://embedded.uos.ac.kr

21 Q & A
Thank you

22 Backup slides

23 Storage devices

24 Measurement of FUSE overhead
To identify the performance penalty of a user-space implementation
Ext3-FUSE underperforms kernel Ext3 for almost all access patterns
■ due to FUSE overhead

25 Postmark Benchmark Result (1)
Medium file size (16KB ~ 256KB)
■ 1,000 subdirectories, 100,000 files, 100,000 transactions
SSR-LFS outperforms the other file systems on both devices
Org-LFS shows better performance than Ext3-FUSE on HDD
Ext3-FUSE shows comparable performance to Org-LFS on SSD
[Figure annotations: 1,302 cleanings; 4,142 SSRs]

26 Postmark Benchmark Result (2)
Small file size (4KB ~ 16KB)
■ 1,000 subdirectories, 500,000 files, 200,000 transactions
Ext3-FUSE performs better than the other file systems on SSD
■ due to Ext3's meta-data optimizations, such as hash-based directories
[Figure annotations: 3,018 cleanings; 1,692 cleanings; 1,451 SSRs]

27 Postmark Benchmark Result (3)
Large file size (not included in the paper)
■ File size 256KB ~ 1MB
■ 1,000 subdirectories
■ 10,000 files
■ 10,000 transactions

28 Statistics of IO-TEST

29 Random Update Performance
IO TEST benchmark (see the sketch below)
■ 1st stage
 - Creates 64KB files until 90% utilization (the high threshold)
 - Randomly deletes files until the target utilization is reached
■ 2nd stage
 - Randomly updates up to 4GB of file capacity
 - Measures the elapsed time of this stage
 - LFS has no free segments at this point, so the time includes the cleaning or SSR cost
[Figure: disk utilization driven to the 90% high threshold by file creation, lowered to the desired threshold (e.g., 20%) by deletion, then subjected to random file updates]
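A hedged sketch of the benchmark's control flow (the helper functions are hypothetical placeholders for the stages above):

```c
/* Sketch of IO TEST: fill to the 90% high threshold, delete down to the
 * target utilization, then time 4 GB of random updates; with no free
 * segments left, the elapsed time includes cleaning or SSR costs. */
#include <stdio.h>
#include <time.h>

void create_64kb_files_until(double utilization);   /* stage 1 */
void delete_random_files_until(double utilization); /* stage 1 */
void random_update_bytes(long long bytes);          /* stage 2 */

int main(void)
{
    create_64kb_files_until(0.90);        /* high threshold   */
    delete_random_files_until(0.20);      /* e.g., 20% target */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    random_update_bytes(4LL << 30);       /* up to 4 GB of updates */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("elapsed: %.1f s\n",
           (double)(t1.tv_sec - t0.tv_sec) +
           (double)(t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}
```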

30 Experimental Environment

31 Comparison of cleaning and SSR scheme
[Figure: two timelines contrasting the cleaning case and the SSR case. In the cleaning case, a write request arriving with no free segments is delayed by on-demand cleaning (T_on-demand cleaning) before the segment write (T_seg write); background cleaning (T_background) replenishes free segments during idle time (T_idle). In the SSR case, the write proceeds immediately by recycling slack space (T_SSR), avoiding the on-demand cleaning delay.]
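One plausible reading of the timelines, written out as formulas (this is an interpretation of the T labels in the figure, not an equation given on the slide):

```latex
% Foreground delay seen by a write request that finds no free segment:
\[
  T^{\text{clean}}_{\text{write}}
    = T_{\text{on-demand cleaning}} + T_{\text{seg write}},
  \qquad
  T^{\text{SSR}}_{\text{write}} = T_{\text{SSR}} .
\]
% SSR is therefore preferable whenever
\[
  T_{\text{SSR}} < T_{\text{on-demand cleaning}} + T_{\text{seg write}},
\]
% while background cleaning costs nothing in the foreground because it
% runs during idle time (T_idle) in both schemes.
```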

32 FIO Benchmark Result (1)
To measure the performance impact of Lazy Indirect Block Update
1 worker: file size 1GB, request size 4KB, synchronous I/O mode
SSR-LFS outperforms Org-LFS and Ext3-FUSE for sequential and random writes on both devices
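The workload just described can be approximated with a small FIO job file (a sketch: the mount point and section names are hypothetical, and the paper's exact job options are not shown on the slide):

```ini
# Approximation of the 1-worker workload: 1 GB file, 4 KB requests, sync I/O.
[global]
# hypothetical SSR-LFS mount point
directory=/mnt/ssrlfs
size=1g
bs=4k
ioengine=sync
# issue every write with O_SYNC
sync=1
numjobs=1

[seq-write]
rw=write

[rand-write]
rw=randwrite
# run only after seq-write completes
stonewall
```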

33 FIO Benchmark Result (2)
4 workers: file size 1GB, request size 4KB, synchronous I/O mode
The LIBU scheme has a greater impact on performance when four workers are running

