BEHIND THE SCENES LOOK AT INTEGRITY IN A PERMANENT STORAGE SYSTEM

Gene Oleynik, Jon Bakken, Wayne Baisley, David Berg, Eileen Berman, Chih-Hao Huang, Terry Jones, Dmitry Litvintsev, Alexander Moibenko, George Szmuksta, Don Petravick, Michael Zalokar
Fermilab

ABSTRACT
Fermilab provides a primary and tertiary permanent storage facility for its High Energy Physics program and other worldwide scientific endeavors. The lifetime of the files in this facility, which are maintained in automated robotic tape libraries, is typically many years. The Fermilab permanent store currently holds 3.6 PB of data and is growing rapidly. The Fermilab "enstore" software provides a file-system-based interface to the permanent store. While access to files through this interface is simple and straightforward, a great deal goes on behind the scenes to provide reliable and fast file access, and to ensure file integrity and high availability. This paper discusses the measures enstore takes, and the administrative steps that are taken, to assure that users' files are kept safe, secure, and readily accessible over their long lifetimes. Techniques such as automated write protection, randomized file and tape integrity audits, tape lifetime strategies, and metadata protection are discussed in detail.
Brief Description of Enstore
- Manages a multi-petabyte tertiary tape media store and archive for Fermilab
  – Currently 3.6 PB and rapidly increasing
  – 3 bytes read for every byte written
- Several media types and vendors
- File-based, with long (> 5 year) file lifetimes
- Provides an nfs-style namespace to files in the store
- Keeps track of files, volumes, and the namespace in several metadata databases (PostgreSQL)
- Provides numerous policies and mechanisms for ensuring user file and metadata integrity, as described herein

[Figure: 9940B (200 GB media) ingest plot]

File/Media Life Cycle
Enstore soft tape states:
- Normal: R/W (read/write) or FULL
- Protected: READONLY, NOACCESS, NOTALLOWED

Normal life cycle:
- Tape is written until filled
- Cloned to new media after ~1/2 the manufacturer's mount limit, or after any read difficulties
- Recycled for reuse of the media only after the owner deletes all files and confirms
- Deleted files (and their metadata) are not removed from tape until it is recycled

Protected states:
- READONLY – Write error. Can only mount for read.
- NOACCESS – Users are denied access to files on the tape. Cleared by an admin after investigation.
- NOTALLOWED – Manually set by an admin for long-term NOACCESS (e.g. tape sent to the vendor for recovery)

File read errors (e.g. CRC mismatch):
– Retried on different drives until success or "N strikes" in M hours (configurable; nominally 3 strikes in 24 hours)
– The tape and drive are put on a suspect list, with an N-hour lifetime for each failure
– If a tape accumulates M entries it is made NOACCESS
Tape transport (mount/dismount) problems also mark a tape NOACCESS, to protect against a damaged tape or cartridge.
Selective CRC error: the file is read back immediately after being written and the CRC mismatches. The tape may be cloned by an admin.
N successive errors on a drive in M hours (nominally 5 in 1 hour), or a selective CRC error, makes the drive inaccessible, and it is investigated.
Volume migration (not shown): files are periodically migrated to denser media when advantageous.
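The "N strikes in M hours" read-error policy above can be sketched as a time-windowed failure counter. This is a minimal illustration, not Enstore's actual implementation; the class and method names are hypothetical, and the thresholds simply mirror the nominal values quoted in the text.

```python
import time
from collections import defaultdict, deque

class SuspectList:
    """Illustrative sketch of the 'N strikes in M hours' policy:
    each read failure is timestamped, entries older than the window
    expire, and a tape whose live entries reach the limit would be
    marked NOACCESS."""

    def __init__(self, max_strikes=3, window_hours=24):
        self.max_strikes = max_strikes
        self.window = window_hours * 3600          # window in seconds
        self.failures = defaultdict(deque)         # tape label -> failure times

    def _expire(self, label, now):
        q = self.failures[label]
        while q and now - q[0] > self.window:      # drop strikes outside window
            q.popleft()

    def record_failure(self, label, now=None):
        """Record one failed read; return True if the tape goes NOACCESS."""
        now = time.time() if now is None else now
        self._expire(label, now)
        self.failures[label].append(now)
        return len(self.failures[label]) >= self.max_strikes

sl = SuspectList(max_strikes=3, window_hours=24)
assert sl.record_failure("VOL001", now=0) is False      # strike 1
assert sl.record_failure("VOL001", now=3600) is False   # strike 2
assert sl.record_failure("VOL001", now=7200) is True    # strike 3 -> NOACCESS
assert sl.record_failure("VOL002", now=0) is False      # tapes are independent
```

Because failures expire after the window, a tape with occasional, widely spaced read errors is retried indefinitely; only a burst of failures inside the window triggers the NOACCESS protection.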
(On volume migration, see C. H. Huang's paper at this conference.)

CRC Mechanism
- A cyclic redundancy check (CRC) is calculated at the sending and receiving ends of each transfer, and the transfer is retried if there is a mismatch
- The file CRC is calculated and stored in metadata when the file is written to tape
- The CRC is recalculated on every read and compared to the CRC in the metadata

Metadata Protection
- Each file on tape has a file "wrapper" which contains the full path file name
  – Data and metadata are self-contained
  – In principle, the metadata could be reconstructed from this information, but the timescale would be prohibitive
- Metadata integrity relies on a tiered backup of the volume, file, pnfs (namespace), and accounting (transaction) databases. For each enstore instance:
  – Journals are kept on each database server machine
  – The most recent 2 days of backups plus journals are kept on independent servers
  – Backups and journals are written to enstore tape daily
- Currently all metadata database backups are stored in a single tape library. A new library will be located 1 mile from the current library, and duplicate backup copies will be stored there
- Since there are several interdependent metadata databases, backups are periodically restored, scanned for consistency, and corrected as necessary

Physical Protection
- Full or READONLY tapes get their physical write-protect tab locked
  – Protects against accidental erasure of files by humans or computers
  – Helps protect against malicious erasure
  – Role separation: tab flipping is performed not by enstore admins, but by a tape aide

Automated Write Protection
- Enstore keeps a soft state of each tape's write-protection status in metadata
- A tape is a candidate for locking if it is full or READONLY
- Enstore monitors the number of these candidates and generates "tab flip" requests when a threshold is passed
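The CRC mechanism described above (compute on write, store in metadata, recompare on every read) can be sketched as follows. The checksum function, the in-memory metadata dict, and the function names are illustrative stand-ins; Enstore's actual checksum algorithm and file database differ.

```python
import zlib

# Stand-in for Enstore's file metadata database (illustrative only).
file_metadata = {}   # file path -> stored CRC

def crc_of(data, chunk=1 << 20):
    """Compute a running CRC-32 over the data, chunked as for large files."""
    crc = 0
    for i in range(0, len(data), chunk):
        crc = zlib.crc32(data[i:i + chunk], crc)
    return crc & 0xFFFFFFFF

def write_to_tape(path, data):
    """On write: store the file's CRC in metadata alongside the file."""
    file_metadata[path] = crc_of(data)
    return data          # stands in for the bytes actually written to tape

def read_from_tape(path, data):
    """On read: recompute the CRC and compare with the stored value.
    A mismatch would trigger the retry-on-another-drive procedure."""
    if crc_of(data) != file_metadata[path]:
        raise IOError("CRC mismatch on %s" % path)
    return data

payload = b"detector event data" * 1000
write_to_tape("/pnfs/experiment/run42.dat", payload)   # hypothetical path
read_from_tape("/pnfs/experiment/run42.dat", payload)  # CRCs agree
try:
    read_from_tape("/pnfs/experiment/run42.dat", payload[:-1])  # corrupted read
except IOError:
    pass                 # caller would retry on a different drive
```

The same recomputation is what the "selective CRC" check and the random read-back audit exercise: any divergence between the bytes on tape and the CRC recorded at write time surfaces as a mismatch.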
- Job information is generated for the request, and a help-desk ticket is generated and sent to a tape aide
- The tape aide gets the ticket and runs a script which uses the job information to eject the tapes for write-protect tab locking, re-enter them, and set their soft state to locked

Audits
- An audit runs on each enstore instance which randomly reads back files
  – Forces a CRC check
  – If there is a mismatch, the usual retry procedure is followed
- An audit runs on each enstore instance which randomly selects tapes, compares their physical write-protect state to their soft write-protect state, and alarms if they differ

Conclusion
Enstore has provided a successful and comprehensive set of policies and mechanisms to ensure the integrity of users' data and associated metadata:
– A media lifetime policy implemented by tape cloning at ~1/2 the manufacturer's recommended mount limit
– File retention policies to recover from accidental deletion by users or software
– Checks and balances between users, admins, and the tape aide
– An automated program of physically write-protecting full or READONLY tapes
– Tiered metadata database backups

[State diagram: normal data tape life cycle – clone at capacity; NOACCESS on tape transport problem, "3 strikes", or selective CRC error; recycle]
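The tab-flip threshold and the write-protection audit above can be sketched together as follows. The tape records, field names, and functions are hypothetical stand-ins for Enstore's volume database and monitoring; only the logic (candidates accumulate until a threshold opens a ticket, and a random audit alarms when physical and soft state disagree) comes from the text.

```python
import random

# Illustrative soft-state records (hypothetical fields, not Enstore's schema).
tapes = {
    "VOL001": {"state": "FULL",     "soft_locked": False, "tab_locked": False},
    "VOL002": {"state": "READONLY", "soft_locked": False, "tab_locked": False},
    "VOL003": {"state": "R/W",      "soft_locked": False, "tab_locked": False},
    "VOL004": {"state": "FULL",     "soft_locked": True,  "tab_locked": False},  # drifted
}

def write_protect_candidates(tapes):
    """A tape is a locking candidate if full or READONLY and not yet soft-locked."""
    return sorted(label for label, t in tapes.items()
                  if t["state"] in ("FULL", "READONLY") and not t["soft_locked"])

def maybe_open_ticket(tapes, threshold=2):
    """Once enough candidates accrue, emit a 'tab flip' ticket for the tape aide."""
    cands = write_protect_candidates(tapes)
    if len(cands) >= threshold:
        return {"action": "lock write-protect tab", "tapes": cands}
    return None

def audit_write_protection(tapes, sample_size=2, rng=random):
    """Random audit: alarm on tapes whose physical tab disagrees with soft state."""
    picked = rng.sample(sorted(tapes), sample_size)
    return [label for label in picked
            if tapes[label]["tab_locked"] != tapes[label]["soft_locked"]]

ticket = maybe_open_ticket(tapes, threshold=2)   # VOL001, VOL002 are candidates
alarms = audit_write_protection(tapes, sample_size=4)  # VOL004's states disagree
```

VOL004 models the drift the audit exists to catch: its soft state says the tab was locked, but the physical tab was never flipped, so sampling it raises an alarm for an admin to investigate.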

