1 HARD DISKS AND OTHER STORAGE DEVICES Jehan-François Pâris Spring 2015

2 Magnetic disks (I) The sole part of the computer architecture with moving parts: data are stored on circular tracks of a disk, spinning at between 5,400 and 15,000 rotations per minute, and accessed through a read/write head.

3 Magnetic disks (II) [Figure: disk platter with read/write head, arm, and servo]

4 Magnetic disks (III) Data are stored on circular tracks. Tracks are partitioned into a variable number of fixed-size sectors; outside tracks have more sectors than inside tracks. If the disk drive has more than one platter, all tracks corresponding to the same position of the R/W head form a cylinder.

5 Seagate ST4000DM000 (I)
Interface: SATA 6 Gb/s (750 MB/s)
Capacity: 4 TB
Cache: 64 MB, multisegmented
Average seek time: read < 8.5 ms, write < 9.5 ms
Average data rate (read/write): 146 MB/s
Maximum sustained data rate: 180 MB/s

6 Seagate ST4000DM000 (II)
Number of platters: 4
Number of heads: 8
Bytes per sector: 4,096
Irrecoverable read errors per bit read: 1 in 10^14
Power consumption: operating 7.5 W, idle 5 W, standby and sleep 0.75 W

7 Sectors and blocks Sectors are the smallest physical storage units on a disk: fixed-size, traditionally 512 bytes, and separated by intersector gaps. Blocks are the smallest transfer unit between the disk and main memory.

8 Magnetic disks (IV) The disk spins at a speed varying between 5,400 rpm (laptops) and 15,000 rpm (Seagate Cheetah X15, …). Accessing data requires positioning the head on the right track (seek time), waiting for the data to reach the head (on the average half a rotation), and transferring the data.

9 Accessing disk contents Each block on a disk has a unique address, normally a single number: logical block addressing (LBA), the standard since 1996. Older disks used a different scheme, cylinder-head-sector (CHS), which exposed the disk's internal organization. Old CHS triples can still be mapped onto LBA addresses.
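
Where a mapping from old CHS triples to LBA addresses is needed, the usual conversion is a simple linear formula. A minimal sketch, assuming a made-up geometry (the heads-per-cylinder and sectors-per-track values are illustrative, not those of any particular drive):

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Map an old cylinder-head-sector triple onto a linear LBA block number.

    CHS numbers sectors from 1, hence the (sector - 1).
    """
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# With a made-up geometry of 16 heads and 63 sectors per track,
# cylinder 2, head 3, sector 1 maps to LBA (2 * 16 + 3) * 63 + 0 = 2205.
print(chs_to_lba(2, 3, 1, 16, 63))   # 2205
```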

10 Disk access times Dominated by seek time and rotational delay. We try to reduce seek times by placing all data that are likely to be accessed together on nearby tracks or on the same cylinder; we cannot do as much for rotational delay.

11 Seek times (I) Seek times depend on the distance between the two tracks. The delay is minimal for seeks between adjacent tracks (track to track, 1-3 ms) and for switching between tracks within the same cylinder; it is worst for end-to-end seeks.

12 Seek times (II) [Figure: seek time grows from x for track-to-track seeks to 3 to 5x for end-to-end seeks]

13 Rotational latency On the average, half a rotation; the same for reads and writes. One and a half rotations for write/verify.

14 Average rotational delay
RPM      Delay (ms)
5,400    5.6
7,200    4.2
10,000   3.0
15,000   2.0
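
The delays in the table follow directly from the rotation speed: on the average the head waits half a rotation. A quick sketch that reproduces the table:

```python
def average_rotational_delay_ms(rpm):
    """Average rotational delay = time for half a rotation, in milliseconds."""
    ms_per_rotation = 60_000 / rpm
    return ms_per_rotation / 2

for rpm in (5_400, 7_200, 10_000, 15_000):
    print(f"{rpm:>6} rpm: {average_rotational_delay_ms(rpm):.1f} ms")
# 5400 rpm: 5.6 ms, 7200 rpm: 4.2 ms, 10000 rpm: 3.0 ms, 15000 rpm: 2.0 ms
```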

15 Transfer rate (I) Burst rate: observed while transferring a single block; highest for blocks on outside tracks, which hold more sectors per track. Sustained transfer rate: observed while reading sequential blocks; lower.

16 Transfer rate (II) [Figure: actual transfer rate]

17 Double buffering (I) Speeds up the handling of sequential files. [Figure: file blocks B0, B1, B2, …; one buffer holds B1, being processed by the DBMS, while B2 is in transfer into the other buffer]

18 Double buffering (II) When both tasks are completed, the buffers swap roles. [Figure: the DBMS now processes B2 while B3 is in transfer]
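
A minimal sketch of the double-buffering idea with two buffers and a background reader thread: while one buffer is processed, the next block is read into the other. The block size and the process() stand-in are placeholders, not part of the original slides.

```python
import threading

BLOCK_SIZE = 4096            # placeholder block size

def process(block):
    pass                     # stand-in for whatever the DBMS does with a block

def read_with_double_buffering(path):
    """Overlap the transfer of block i+1 with the processing of block i."""
    with open(path, "rb") as f:
        buffers = [f.read(BLOCK_SIZE), b""]      # prefetch the first block
        current = 0
        while buffers[current]:
            other = 1 - current
            # Start transferring the next block into the other buffer...
            def fill(slot=other):
                buffers[slot] = f.read(BLOCK_SIZE)
            reader = threading.Thread(target=fill)
            reader.start()
            process(buffers[current])            # ...while this one is processed
            reader.join()                        # wait until both tasks complete
            current = other                      # then swap the buffers' roles
```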

19 The five minute rule (Jim Gray): keep in memory any data item that will be used during the next five minutes.

20 The internal disk controller A printed circuit board attached to the disk drive, as powerful as the CPU of a personal computer of the early 80's. Its functions include speed buffering, disk scheduling, …

21 Reliability Issues

22 Disk failure rates Failure rates follow a bathtub curve: high infantile mortality, a low failure rate during useful life, and higher failure rates as disks wear out.

23 Disk failure rates (II) [Figure: bathtub curve of failure rate over time, showing infantile mortality, useful life, and wear out]

24 Disk failure rates (III) The infant mortality effect can last for months with disk drives. Cheap SATA disk drives seem to age less gracefully than SCSI drives.

25 The Backblaze study Reported the failure rates of more than 25,000 disks in service at Backblaze. Their disks tended to fail at a rate of 5.1 percent per year during their first eighteen months, 1.4 percent per year during the next eighteen months, and 11.8 percent per year after that.


27 MTTF Disk manufacturers advertise very high Mean Times To Failure (MTTF) for their products: 500,000 to 1,000,000 hours, that is, 57 to 114 years. This does not mean that a disk will last that long! It means that disks will fail at an average rate of one failure per 500,000 to 1,000,000 hours during their useful life.

28 More MTTF Issues (I) Manufacturers' claims are not supported by solid experimental evidence. They are obtained by submitting disks to a stress test at high temperature and extrapolating the results to ideal conditions, a procedure that raises many issues.

29 More MTTF Issues (II) Failure rates observed in the field are much higher and can go up to 8 to 9 percent per year; the corresponding MTTFs are 11 to 12.5 years. If we have 100 disks and an MTTF of 12.5 years, we can expect an average of 8 disk failures per year.
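
A back-of-the-envelope check of the last claim, as a sketch:

```python
HOURS_PER_YEAR = 8_760

def expected_failures_per_year(n_disks, mttf_hours):
    """During useful life, each disk fails at an average rate of 1/MTTF."""
    return n_disks * HOURS_PER_YEAR / mttf_hours

# 100 disks with an MTTF of 12.5 years:
print(expected_failures_per_year(100, 12.5 * HOURS_PER_YEAR))   # 8.0 failures per year
```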

30 Flash Drives

31 What about flash? Widely used in flash drives, most MP3 players and some small portable computers. It has several important limitations: limited write bandwidth, because a whole block of data must be erased before any part of it is overwritten, and limited endurance, 10,000 to 100,000 write cycles.

32 Flash drives Widely used in flash drives, most MP3 players and some small portable computers. Similar technology to EEPROM. Three technologies: NOR flash, NAND flash, and vertical NAND.

33 NOR Technology Each cell has one end connected straight to ground and the other end connected straight to a bit line. Longest erase and write times, but allows random access to any memory location. A good choice for storing BIOS code, replacing older ROM chips.

34 NAND Technology Shorter erase and write times and less chip area per cell, with up to ten times the endurance of NOR flash. Disk-like interface: data must be read on a page-wise basis. Block erasure: erasing older data must be performed one block at a time, typically 32, 64 or 128 pages.
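
A toy sketch of the page/block asymmetry: pages are read and written individually, but a written page cannot be overwritten until its whole block is erased. The 64 pages per block is just one of the typical sizes mentioned above.

```python
PAGES_PER_BLOCK = 64                     # typical values are 32, 64 or 128

class NandBlock:
    """A single NAND block: per-page reads and writes, whole-block erases."""

    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK    # None means "erased"

    def read(self, page):
        return self.pages[page]                  # reads are page-wise

    def write(self, page, data):
        if self.pages[page] is not None:
            raise ValueError("cannot overwrite a page: erase the whole block first")
        self.pages[page] = data

    def erase(self):
        self.pages = [None] * PAGES_PER_BLOCK    # erasure affects the whole block
```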

35 Vertical NAND Technology Fastest

36 The flash drive controller Performs error correction, since higher flash densities result in more errors, and wear leveling (load leveling), which distributes writes among blocks to prevent failures resulting from uneven numbers of erase cycles. Flash drives work best with sequential workloads.
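
A minimal sketch of the wear-leveling idea: track erase counts per block and always reuse the least-worn block. Real flash translation layers are far more elaborate; the policy and block count here are illustrative only.

```python
class WearLeveler:
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks

    def next_block_to_erase(self):
        """Pick the block with the fewest erase cycles so wear stays even."""
        block = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
print([wl.next_block_to_erase() for _ in range(8)])   # [0, 1, 2, 3, 0, 1, 2, 3]
```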

37 Performance data Vary widely between models. One random pair of specs: read speed 22 MB/s, write speed 15 MB/s.

38 RAID level 0 No replication. Advantages: simple to implement, no overhead. Disadvantage: if the array has n disks, its failure rate is n times the failure rate of a single disk.
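
RAID 0 stripes data over the n drives (the same striping reappears in RAID 10 below). A minimal sketch of the round-robin block mapping, assuming one block per stripe unit:

```python
def raid0_locate(logical_block, n_disks):
    """Round-robin striping: consecutive logical blocks land on consecutive disks."""
    disk = logical_block % n_disks
    block_on_disk = logical_block // n_disks
    return disk, block_on_disk

# With 4 disks, logical blocks 0..7 map to
# (0, 0) (1, 0) (2, 0) (3, 0) (0, 1) (1, 1) (2, 1) (3, 1)
print([raid0_locate(b, 4) for b in range(8)])
```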

39 RAID levels 0 and 1 [Figure: block layouts for RAID level 0 and RAID level 1 with mirrored pairs]

40 RAID level 1 Mirroring: two copies of each disk block. Advantages: simple to implement, fault-tolerant. Disadvantage: requires twice the disk capacity of normal file systems.

41 RAID level 4 (I) Requires N+1 disk drives; N drives contain data (individual blocks, not chunks). Blocks with the same disk address form a stripe. [Figure: a stripe of data blocks and its parity block]

42 RAID level 4 (II) The parity drive contains the exclusive or of the N data blocks in each stripe: p[k] = b[k] ⊕ b[k+1] ⊕ … ⊕ b[k+N-1]. The parity block now reflects the contents of several blocks! We can now do parallel reads and writes.
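
A sketch of the parity computation: byte-wise exclusive or over the data blocks of a stripe. Because XOR is its own inverse, any single lost block can be rebuilt from the surviving blocks and the parity.

```python
def xor_blocks(*blocks):
    """Byte-wise exclusive or of equal-sized blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

stripe = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"]     # three toy data blocks
parity = xor_blocks(*stripe)                         # what the parity drive stores
# Rebuilding a lost block from the other data blocks and the parity:
assert xor_blocks(stripe[0], stripe[2], parity) == stripe[1]
```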

43 RAID levels 4 and 5 [Figure: block layouts for RAID level 4, where the dedicated parity drive is a bottleneck, and RAID level 5]

44 RAID level 5 The single parity drive of RAID level 4 is involved in every write, which limits parallelism. RAID 5 distributes the parity blocks among the N+1 drives, which is much better.

45 The small write problem Specific to RAID 5. Happens when we want to update a single block that belongs to a stripe: how can we compute the new value of the parity block? [Figure: a stripe with data blocks b[k], b[k+1], b[k+2] and parity block p[k]]

46 First solution Read the values of the N-1 other blocks in the stripe and recompute p[k] = b[k] ⊕ b[k+1] ⊕ … ⊕ b[k+N-1]. This solution requires N-1 reads and 2 writes (the new block and the new parity block).

47 Second solution Assume we want to update block b[m]. Read the old values of b[m] and of the parity block p[k], then compute new p[k] = new b[m] ⊕ old b[m] ⊕ old p[k]. This solution requires 2 reads (the old values of the block and of the parity block) and 2 writes (the new block and the new parity block).
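
A sketch contrasting the two solutions, reusing the xor_blocks helper from the RAID level 4 slides; the second needs only the old data block and the old parity block.

```python
# First solution: re-read the N-1 other data blocks and recompute the parity.
def new_parity_recompute(new_block, other_blocks):
    return xor_blocks(new_block, *other_blocks)

# Second solution: new parity = new block XOR old block XOR old parity.
def new_parity_update(new_block, old_block, old_parity):
    return xor_blocks(new_block, old_block, old_parity)

# Both yield the same parity for the stripe from the previous example:
# new_parity_recompute(new_b0, [stripe[1], stripe[2]])
#     == new_parity_update(new_b0, stripe[0], parity)
```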

48 Other RAID organizations (I) RAID 6:  Two check disks  Tolerates two disk failures  More complex updates

49 Other RAID organizations (II) RAID 10, also known as RAID 1 + 0: data are striped (as in RAID 0 or RAID 5) over pairs of mirrored disks (RAID 1). [Figure: striping across mirrored pairs]

50 Other RAID organizations (III) Two-dimensional RAIDs: designed for archival storage, where data are written once and read maybe (WORM), and the update rate is less important than high reliability and low storage costs.

51 Complete 2D RAID arrays Have n parity disks and n(n - 1)/2 data disks. [Figure: example with n = 4, parity disks P1 through P4 and data disks D12, D13, D14, D23, D24, D34]

52 Main advantage Work in progress

