Performance/Reliability of Disk Systems

So far we have looked at ways to improve the performance of disk systems. Next we look at ways to improve their reliability. What is reliability? Availability of data when there is a disk "failure" of some sort. Reliability is achieved at the cost of some redundancy (of data and/or disks).

Disk failures – A classification

1. Intermittent failure: An attempt to read or write a sector is unsuccessful, but with repeated tries we are able to read or write successfully.
2. Media decay: A bit or bits are permanently corrupted, and it becomes impossible to read a sector correctly no matter how many times we try.
3. Write failure: We attempt to write a sector, but we can neither write successfully nor retrieve the previously written sector. A possible cause: a power outage during the writing of the sector.
4. Disk crash: The entire disk becomes unreadable, suddenly and permanently.

Intermittent Failures

Disk sectors are stored with some redundant bits whose purpose is to tell whether what we read from the sector is correct or not.
- The reading function returns a pair (w, s), where w is the data in the sector that is read, and s is a status bit that tells whether or not the read was successful.
In an intermittent failure, we may get status "bad" several times, but if the read is repeated enough times (100 tries is a typical limit), eventually a status "good" is returned.
Writing: A straightforward check is to read the sector back and compare it with the sector we intended to write. However, instead of performing the complete comparison at the disk controller, it is simpler to attempt to read the sector and see if its status is "good."
- If so, we assume the write was correct; if the status is "bad," the write was apparently unsuccessful and must be repeated.

Checksums for failure detection

A useful model of a disk read: the reading function returns (w, s), where
- w is the data in the sector that is read, and
- s is the status bit.
How does s get its "good" or "bad" value? Each sector has additional bits, called the checksum, written by the disk controller. A simple form of checksum is a parity bit: the number of 1's among the data bits and the parity bit is always even. The read function returns s = "good" if w together with its parity bit has an even number of 1's; otherwise s = "bad".
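The parity-bit scheme above can be sketched in a few lines of Python (the helper names are illustrative, not a real controller API):

```python
def parity_bit(data: bytes) -> int:
    """Parity bit chosen so that data bits + parity have an even number of 1s."""
    ones = sum(bin(b).count("1") for b in data)
    return ones % 2  # 1 exactly when the data alone has an odd number of 1s

def read_status(data: bytes, stored_parity: int) -> str:
    """Return 'good' if the total number of 1s (data + parity) is even."""
    total = sum(bin(b).count("1") for b in data) + stored_parity
    return "good" if total % 2 == 0 else "bad"

sector = b"\x5a\x01"            # 01011010 00000001 -> five 1s (odd)
p = parity_bit(sector)          # parity bit = 1, making the total even
assert read_status(sector, p) == "good"

# Flip a single bit: the error is detected.
corrupted = bytes([sector[0] ^ 0x08, sector[1]])
assert read_status(corrupted, p) == "bad"
```

Any single-bit flip changes the total count of 1's from even to odd, so it is always detected; the next slide considers multi-bit errors.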

(Interleaved) Parity bits

It is possible that more than one bit in a sector is corrupted, so the error(s) may not be detected.
Suppose bits are corrupted randomly: the probability of an undetected error (i.e. the corrupted sector still has an even number of 1's) is 1/2. (Why? Half of all bit patterns have even parity.)
Let's have 8 parity bits instead: the probability of an undetected error drops to 1/2^8 = 1/256.
With n parity bits, the probability of an undetected error is 1/2^n.
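A common way to get 8 parity bits is to interleave them: one parity bit per bit position across the sector's bytes. The slide only fixes the count at 8; the byte-position layout below is an assumption for illustration:

```python
def interleaved_parity(data: bytes) -> int:
    """8 interleaved parity bits: one per bit position, i.e. XOR of all bytes."""
    p = 0
    for b in data:
        p ^= b  # bitwise XOR accumulates even-parity per bit position
    return p

sector = bytes([0b11110000, 0b10101010, 0b00111000])
p = interleaved_parity(sector)
assert p == 0b01100010

# A double error in the SAME bit position of two bytes goes undetected;
# for random corruption each of the 8 positions checks with probability
# 1/2, hence the 1/2**8 undetected-error probability.
bad = bytes([sector[0] ^ 0x01, sector[1] ^ 0x01, sector[2]])
assert interleaved_parity(bad) == p
```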

Recovery from disk crashes

Mean time to failure (MTTF) = the time by which 50% of the disks have crashed; typically about 10 years.
Simplified (assuming failures happen linearly):
- in the 1st year, 5% fail,
- in the 2nd year, 5% fail,
- ...
- in the 20th year, 5% fail.
However, the mean time to a disk crash does not have to be the same as the mean time to data loss; there are solutions.

Redundant Array of Independent Disks, RAID

RAID 1: Mirror each disk (one redundant disk per data disk). If a disk fails, restore it using the mirror.
Assume: 5% failure per year, i.e. MTTF = 10 years per disk, and 3 hours to replace and restore a failed disk.
If a failure to one disk occurs, then the other had better not fail in the next three hours. Probability that it does = 5% × 3/(24 × 365) = 1/58,400.
If one disk fails every ten years, then one of the two will fail every 5 years. Only one in 58,400 of those failures results in data loss, so the mean time to failure for data is 5 × 58,400 = 292,000 years.
Drawback: We need one redundant disk for each data disk.
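The slide's arithmetic, worked out (the 5%/year failure rate and 3-hour rebuild window are the slide's assumptions):

```python
# Chance the surviving mirror also fails during the 3-hour rebuild window.
hours_per_year = 24 * 365                      # 8760
p_second_failure = 0.05 * 3 / hours_per_year   # 5%/year, 3-hour window

print(round(1 / p_second_failure))             # 58400 -> "1 in 58,400"

# One of the two disks fails on average every 5 years; only 1 in 58,400
# of those failures loses data.
mttf_data_years = 5 * round(1 / p_second_failure)
print(mttf_data_years)                         # 292000 years
```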

RAID 4

Problem with RAID 1 (also called mirroring): n data disks and n redundant disks.
RAID 4: one redundant disk only, for any number n of data disks.
x ⊕ y = modulo-2 sum of x and y (XOR), e.g. 1100 ⊕ 1010 = 0110.
Each block in the redundant disk has the parity bits for the corresponding blocks in the other disks (block-interleaved parity). Number the blocks on each disk 1, 2, 3, …, k. For example (illustrative bit strings):
- i-th block of Disk 1: 11110000
- i-th block of Disk 2: 10101010
- i-th block of Disk 3: 00111000
- i-th block of redundant disk: 01100010

Properties of XOR (⊕):
- Commutativity: x ⊕ y = y ⊕ x
- Associativity: x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z
- Identity: x ⊕ 0 = 0 ⊕ x = x (0 is the all-zeros vector)
- Self-inverse: x ⊕ x = 0
  - As a useful consequence, if x ⊕ y = z, then we can "add" x to both sides and get y = x ⊕ z.
  - More generally, if 0 = x1 ⊕ … ⊕ xn, then "adding" xi to both sides gives xi = x1 ⊕ … ⊕ xi-1 ⊕ xi+1 ⊕ … ⊕ xn.

Failure recovery in RAID 4

We must be able to restore whatever disk crashes. Just compute the modulo-2 sum of the corresponding blocks of the other disks, using the equation xi = x1 ⊕ … ⊕ xi-1 ⊕ xi+1 ⊕ … ⊕ xn.
Example (illustrative bit strings):
- i-th block of Disk 1: 11110000
- i-th block of Disk 2: 10101010
- i-th block of Disk 3: 00111000
- i-th block of redundant disk: 01100010
Disk 2 crashes. Recompute its block as the modulo-2 sum of the rest: 11110000 ⊕ 00111000 ⊕ 01100010 = 10101010.
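The recovery rule in code, using illustrative 8-bit block values:

```python
def xor_blocks(*blocks: int) -> int:
    """Modulo-2 sum (bitwise XOR) of any number of blocks."""
    out = 0
    for b in blocks:
        out ^= b
    return out

# Surviving disks: Disk 1, Disk 3, and the redundant disk.
d1, d3, red = 0b11110000, 0b00111000, 0b01100010

# The crashed Disk 2 is the XOR of everything that survived.
d2 = xor_blocks(d1, d3, red)
assert d2 == 0b10101010
```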

RAID 4 (Cont'd)

Reading: as usual.
- An interesting possibility: if we want to read from disk i but it is busy and all the other disks are free, we can instead read the corresponding blocks from all the other disks and modulo-2 sum them.
Writing:
- Write the data block.
- Update the redundant block.

How do we get the value for the redundant block?

Naively: read the corresponding blocks of the other n − 1 data disks → n + 1 disk I/O's in total (n − 1 block reads, 1 data block write, 1 redundant block write).
Better: How?

How do we get the value for the redundant block?

Better writing: to write block j of data disk i with new value v:
- Read the old value of that block, say o.
- Read the j-th block of the redundant disk, say r.
- Compute w = v ⊕ o ⊕ r.
- Write v in block j of disk i.
- Write w in block j of the redundant disk.
Total: 4 disk I/O's (true for any number of data disks).
Why does this work?
- Intuition: v ⊕ o is the "change" to the parity; the redundant disk must change to compensate.
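The 4-I/O rule w = v ⊕ o ⊕ r, checked on illustrative block values:

```python
# Illustrative blocks: Disk 2 holds o and is rewritten with v;
# r is the old parity block.
o, v, r = 0b10101010, 0b11001100, 0b01100010

# New parity per the rule above: only two reads needed, not n-1.
w = v ^ o ^ r
assert w == 0b00000100

# Sanity check: recomputing parity from ALL data blocks gives the same w
# (d1 and d3 are the unchanged blocks of the other data disks).
d1, d3 = 0b11110000, 0b00111000
assert w == d1 ^ v ^ d3
```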

Example (illustrative bit strings)

- i-th block of Disk 1: 11110000
- i-th block of Disk 2: 10101010
- i-th block of Disk 3: 00111000
- i-th block of redundant disk: 01100010
Suppose we change the block of Disk 2 from 10101010 into 11001100. The new redundant block is w = 11001100 ⊕ 10101010 ⊕ 01100010 = 00000100.

RAID 5

RAID 4 problem: the redundant disk is involved in every write (though RAID 4 is more cost-effective than mirroring) → a bottleneck!
The solution is RAID 5: vary the redundant disk for different blocks.
- Example: n disks; block j is redundant on disk i if i = remainder of j/n.
Example: n = 4, so there are 4 disks.
- The disk numbered 0 is the "redundant" disk for its cylinders numbered 0, 4, 8, 12, etc. (because they leave remainder 0 when divided by 4).
- The disk numbered 1 is the "redundant" disk for its cylinders numbered 1, 5, 9, etc.

RAID 5 (Cont'd)

The reading/writing load for each disk is the same.
Question: in one block write, what is the probability that a given disk is involved? (Here n is the number of data blocks per stripe, so there are n + 1 disks in all.)
- Each disk has probability 1/(n+1) of holding the written block.
- If not, i.e. with probability n/(n+1), it has a 1/n chance of holding the redundant block for that block.
- So each disk is involved in 1/(n+1) · 1 + (n/(n+1)) · (1/n) = 2/(n+1) of the writes; with four disks, that is 1/2.
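A small sketch of the load balance. Only the parity placement (block j's parity on disk j mod n, with n the total number of disks) comes from the slide; the data-block placement rule below is a hypothetical choice for illustration:

```python
from collections import Counter

n = 4                                  # 4 disks, numbered 0..3
writes = Counter()
for j in range(10_000):                # many single-block writes
    parity_disk = j % n                # slide's placement rule
    data_disk = (j + 1) % n            # hypothetical data placement
    writes[parity_disk] += 1           # parity disk is always updated...
    writes[data_disk] += 1             # ...and so is the data disk

# Every disk carries the same share of the write traffic: 2 of the
# 2 * 10_000 disk writes land on each disk per n writes, i.e. 2/n each
# (which matches the slide's 2/(n+1) when n there counts only data disks).
assert all(c == 5_000 for c in writes.values())
```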

RAID 6 - for multiple disk crashes

Let's focus on recovering from two disk crashes. Setup: 7 disks, numbered 1 through 7. The first 4 are data disks, and disks 5 through 7 are redundant. The relationship between data and redundant disks is summarized by a 3 × 7 matrix of 0's and 1's:
- The columns for the redundant disks each have a single 1.
- The columns for the data disks each have at least two 1's.
The disks with 1 in a given row of the matrix are treated as if they were the entire set of disks in a RAID level 4 scheme.

RAID 6 - example

         disk:  1  2  3  4 | 5  6  7
        row 1:  1  1  1  0 | 1  0  0
        row 2:  1  1  0  1 | 0  1  0
        row 3:  1  0  1  1 | 0  0  1
               (data disks) | (redundant disks)

- Disk 5 is the modulo-2 sum of disks 1, 2, 3.
- Disk 6 is the modulo-2 sum of disks 1, 2, 4.
- Disk 7 is the modulo-2 sum of disks 1, 3, 4.

RAID 6 Failure Recovery

Why is it possible to recover from two disk crashes? Let the failed disks be a and b. Since all columns of the redundancy matrix are different, we can find some row r in which the columns for a and b differ. Suppose a has 0 in row r, while b has 1 there. Then we can compute the correct b by taking the modulo-2 sum of the corresponding bits from all the disks other than b that have a 1 in row r.
- Note that a is not among these, so none of them have failed.
Having recomputed b, all other disks are available, so we can recompute a from any row in which its column has a 1.
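A sketch of the two-crash recovery, using the slide's parity relations (disk 5 = 1⊕2⊕3, disk 6 = 1⊕2⊕4, disk 7 = 1⊕3⊕4); the 4-bit block values are illustrative:

```python
rows = [
    [1, 1, 1, 0, 1, 0, 0],   # disk 5 is parity of disks 1, 2, 3
    [1, 1, 0, 1, 0, 1, 0],   # disk 6 is parity of disks 1, 2, 4
    [1, 0, 1, 1, 0, 0, 1],   # disk 7 is parity of disks 1, 3, 4
]

def rebuild(disks, row, target):
    """Recompute disk `target` (1-based) as the XOR of the row's other disks."""
    disks[target - 1] = 0
    for i, bit in enumerate(row):
        if bit and i != target - 1:
            disks[target - 1] ^= disks[i]

def recover(disks, a, b):
    """Recover two failed disks a and b (1-based indices) in place."""
    # Find a row where exactly one of the failed disks participates,
    # and rebuild that disk first -- no other participant has failed.
    for row in rows:
        if row[a - 1] != row[b - 1]:
            first = a if row[a - 1] == 1 else b
            rebuild(disks, row, first)
            break
    # All other disks are now good, so any row covering the second
    # failed disk can rebuild it.
    second = b if first == a else a
    for row in rows:
        if row[second - 1]:
            rebuild(disks, row, second)
            break
    return disks

# Illustrative data for disks 1-4; parity disks 5-7 per the matrix.
d = [0b1111, 0b1010, 0b0011, 0b0110, 0, 0, 0]
d[4] = d[0] ^ d[1] ^ d[2]
d[5] = d[0] ^ d[1] ^ d[3]
d[6] = d[0] ^ d[2] ^ d[3]
saved = list(d)

d[1] = d[4] = 0                  # disks 2 and 5 crash
assert recover(d, 2, 5) == saved
```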

RAID 6 – How many redundant disks?

The number of disks can be one less than any power of 2, say 2^k − 1. Of these disks, k are redundant, and the remaining 2^k − 1 − k are data disks, so the redundancy grows roughly as the logarithm of the number of data disks. For any k, we can construct the redundancy matrix by writing all possible columns of k 0's and 1's, except the all-0's column.
- The columns with a single 1 correspond to the redundant disks, and the columns with more than one 1 are the data disks.

RAID 6 - exercise

Find a RAID level 6 scheme using 15 disks, 4 of which are redundant.
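A sketch of the construction for the exercise, following the recipe on the previous slide: with k = 4 redundant disks, the columns are all nonzero 4-bit patterns, giving 2^4 − 1 = 15 disks:

```python
k = 4

# All nonzero k-bit columns, each as a tuple of bits.
columns = [tuple((c >> i) & 1 for i in range(k)) for c in range(1, 2**k)]

# Single-1 columns are the redundant disks; the rest are data disks.
redundant = [col for col in columns if sum(col) == 1]
data = [col for col in columns if sum(col) > 1]

assert len(columns) == 15
assert len(redundant) == 4
assert len(data) == 11
```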