1 "1"1 Introduction to Managing Data " Describe problems associated with managing large numbers of disks " List requirements for easily managing large amounts of data " List techniques that facilitate managing large amounts of data " Define and differentiate high availability and fault tolerance " Describe commonly implemented RAID (redundant array of independent disks) levels

2 "2"2 Managing Large Numbers of Disks Servers today are configured with more disks than ever. This poses significant problems for system administrators: " Greater probability of disk failure: As the number of disks increases, so does the probability that one of them will fail. (MTBF (mean time between failures) is reduced.) " Partitioning of file systems: File systems are limited to the size of a single disk; to use a large number of disks, the file systems must partitioned so that at least one file system resides on each disk. " Longer reboot times: A busy system with many active disks can take an an unacceptably long time to run fsck after a system crash.

3 "3"3 Managing Large Sets of Data " Lower the number of file systems that must be managed: If file system can larger than one physical disk, it will not be necessary to artificially decompose a logical file system into multiple smaller file systems that each fit on a single disk. " Prevent failed disks from making data unavailable: The probability of a disk failure increases with the number of disks on a system. " More efficient balance the I/O load across the disks: Balancing the I/O performance. " Remove the need of check file systems at boot time: Performing file system checks (fsck) at boot time is time consuming.

4 "4"4 Managing Large Sets of Data " Allow file systems to grow while they are in use: Allowing file system to grow while they are in use reduces the system down time and eases the system administration burden (no need for backup/restore cycle. " Ease administration by providing a GUI to mask the underlying complexity " Allow dual-host failover configurations with redundant disks: In a dual-host failover configuration, one host can "take over" disk management for another failed host.

5 "5"5 Techniques for Managing Data

6 "6"6 " Integrated graphical user interface (GUI): More intuitive and easier to use " Concatenation: Two or more physical devices are combined into a single logical device " Expanding file Systems: Increasing the size of a UNIX file system while it is mounted and without disrupting access to the data " Hot Spares: A component set up to be automatically substituted for a failed component of a mirrored or RAID device. " Disk Striping: Data is interlaced among multiple physical devices; improves I/O performance by balancing the load. " RAID 5: Data and parity is interlaced among multiple physical devices. " Disk Mirroring: Multiple copies of the data are maintained on different physical devices.

7 "7"7 High Availability and Fault Tolerance " High availability: A system is said to have "high availability" (HA) if it can provide access to data most of the time while managing the integrity of that data " Fault tolerant: A system is said to be "fault tolerant" (FT) if it provides data integrity and continuos data availability. " High availability versus fault tolerant: Highly available and fault tolerant systems are separated by issues of function, cost, and design.

8 "8"8 Common RAID Implementations Some of the RAID levels are: " RAID 0:striping/concatenation " RAID 1:mirroring " RAID 0+1:striping plus mirroring " RAID 3:Striping with dedicated parity " RAID 5:Striping with distributed parity

9 "9"9 Concatenation -- RAID level 0 " Combines multiple physical disks into a single virtual disk " Address space is contiguous " No data redundancy

10 " 1010 Concatenation Summary " Write performance is the same; read performance may be improved if the reads are random. " One hundred percent of the disk capacity is available for user data. " There is no redundancy. " Concatenation is less reliable, as the loss of one disk ultimately results in the loss of data on all disks.

11 " 1111 Striping -- RAID level 0 " Data stream placed across multiple disks in equal-sized chunks " Improves I/O per second (IOPS) performance " Degrades reliability

12 " 1212 Striping Summary " Performance is improved; chunk sizes can be optimized for sequential or random access. " One hundred percent of the disk capacity is available for user data. " There is no redundancy. " Striping is less reliable, as the loss of one disk results in the loss of data on all striped disks.

13 " 1313 Mirroring -- RAID Level 1 " Fully redundant copy of the data on one or more disks (double the cost per megabyte of disk space) " All writes duplicated implies slower write performance " Both drives can be used for reads to improve performance

14 " 1414 Mirroring Summary " Performance may be improved on read performance, but will suffer on write performance. " The cost of a mirrored implementation is much higher than a standard disk system -- mirroring requires double the storage costs. " In the event of failure, applications can continue to use the remaining half of the mirror at close to full performance. " Recovering from a disk failure consists of simply duplicating the contents of the failed disk's mirror to a new drive.

15 " 1515 Striping and Mirroring--Raid level 0+1 " By combining mirroring and striping both high reliability and performance are provided (but at a high cost)

16 " 1616 Striped, Then Mirrored (RAID 0+1) " RAID 0+1 systems have both the improved performance of striping and improved reliability of mirroring. " RAID 0+1 systems suffer the high cost of mirrored systems, requiring twice the disk space of fully independent spindles. " RAID 0+1 systems can tolerate the failure of any single disk and continue to deliver data with virtually no performance degradation.

17 " 1717 Striping With Dedicated Parity--RAID Level 3 " Data is striped across a group of disks " One disk per group is dedicated to parity " Parity disk protects against any one disk of the group failing " All disks are read and written to simultaneously

18 " 1818 Striping With Dedicated Parity (Cont.,) Other Features " Data is striped across all spindles " Dedicated parity disk contains XOR (exclusive OR) of data disks " Bandwidth is equal to n-1 disk transfer rate " All actuators move in concert as the spindles are synchronized. " Single-chunk random I/O slows the RAID group to the performance of a single disk. XOR is commutative and associated across the equation: Cp = C1 (XOR) C2 (XOR) C3 and C2 = C1 (XOR) C3 (XOR) Cp, etc. Where do chunks 7, 8, and 9 go on the diagram? Where does the parity go? " RAID 3 provides good sequential transfer rates, but at the expense of random I/O performance. " RAID 3 requires only one additional drive beyond those used for data.

19 " 1919 Striping With Dedicated Parity (Cont.,) " If the parity drive fails, operations continue with no loss of performance (but there is no redundancy). If a data disk fails, the data is still available, but it must be calculated from the remaining data disks and the parity disk. " Recovery involves reading data from surviving disks, computing the exclusive OR, and writing the result to the replacement drive.

20 " 2020 Striping With Distributed Parity -- RAID level 5 " Both parity and data are striped across a group of drives " Each drive can be read independently " Parity protects against single disk failure

21 " 2121 RAID Level 5 Summary RAID 5 implements data protection through a distributed parity scheme. Additional features include: " Independent access is available to individual drives " Data and parity are both striped across spindles " Reads per second can reach disk rate times number of disks " Single-chunk writes require four disk operations: read old data, read old parity, calculate new parity, write new data, write new parity.

22 " 2222 RAID Level 5 Summary " Overall random I/O performance is dependent on percentage of writes. " RAID 5 requires only one additional drive beyond those used for data " Data can be accessed with a failed drive, with some performance penalties: A To read data from a surviving drive--No change. A To read data from a failed drive--Corresponding chunks from surviving drives in the stripe are read and linked together with Xor to derive the data. A To write to a surviving drive--If the failed drive holds parity data, the parity data, the write proceeds normally without calculating parity. If the failed drive holds data, then a read-modify-write sequence is required A To write to a failed drive--All the data from the surviving data drives are linked with the new data using XOR, and the result is written to the parity drive.

23 " 2323 RAID Level 5 Summary (contd.,) " Recovery requires that the data from the remaining chunks in the stripe be read, linked together with XOR, and the result written to the replacement drive.

