
Slide 1: Storing Data: Disks and Files (Chapter 9)
"Yea, from the table of my memory I'll wipe away all trivial fond records." -- Shakespeare, Hamlet

Slide 2: Teaching Plan (covers Ch. 9)
0. Basic Introduction to Disk Drives (already covered in the first week of the semester)
1. Redundant Arrays of Independent Disks (RAID)
2. Buffer Management
3. More on Disk Drives

Slide 3: Disks and Files
- DBMS stores information on ("hard") disks.
- This has major implications for DBMS design!
  - READ: transfer data from disk to main memory (RAM).
  - WRITE: transfer data from RAM to disk.
  - Both are high-cost operations relative to in-memory operations, so they must be planned carefully!

Slide 4: Why Not Store Everything in Main Memory?
- Costs too much. $1000 will buy you either 128 MB of RAM or 7.5 GB of disk today.
- Main memory is volatile. We want data to be saved between runs. (Obviously!)
- Typical storage hierarchy:
  - Main memory (RAM) for currently used data.
  - Disk for the main database (secondary storage).
  - Tapes for archiving older versions of the data (tertiary storage).

Slide 5: Disks
- Secondary storage device of choice.
- Main advantage over tapes: random access vs. sequential access.
- Data is stored and retrieved in units called disk blocks or pages.
- Unlike RAM, the time to retrieve a disk page varies depending upon its location on disk.
  - Therefore, the relative placement of pages on disk has a major impact on DBMS performance!

Slide 6: Components of a Disk
(Diagram: platters on a spindle, with tracks and sectors; disk heads on a movable arm assembly.)
- The platters spin (say, 90 rps).
- The arm assembly is moved in or out to position a head on a desired track. Tracks under the heads make a cylinder (imaginary!).
- Only one head reads/writes at any one time.
- Block size is a multiple of sector size (which is fixed).

Slide 7: Accessing a Disk Page
- Time to access (read/write) a disk block:
  - seek time (moving the arm to position the disk head on the track)
  - rotational delay (waiting for the block to rotate under the head)
  - transfer time (actually moving data to/from the disk surface)
- Seek time and rotational delay dominate.
  - Seek time varies from about 1 to 20 msec.
  - Rotational delay varies from 0 to 10 msec.
  - Transfer rate is about 1 msec per 4 KB page.
- Key to lower I/O cost: reduce seek/rotation delays! Hardware vs. software solutions?
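A back-of-the-envelope sketch of why seek and rotation dominate, assuming the illustrative figures above (average seek ~10 msec, average rotational delay ~5 msec, ~1 msec transfer per 4 KB page); the numbers are only the slide's rough estimates, not measurements:

    # Rough I/O cost model using the illustrative numbers from the slide.
    AVG_SEEK_MS = 10.0          # midpoint of the 1-20 msec range
    AVG_ROTATION_MS = 5.0       # midpoint of the 0-10 msec range
    TRANSFER_MS_PER_PAGE = 1.0  # ~1 msec per 4 KB page

    def random_read_ms(num_pages):
        # Each randomly placed page pays its own seek and rotational delay.
        return num_pages * (AVG_SEEK_MS + AVG_ROTATION_MS + TRANSFER_MS_PER_PAGE)

    def sequential_read_ms(num_pages):
        # One seek + one rotational delay, then the pages stream off the track(s).
        return AVG_SEEK_MS + AVG_ROTATION_MS + num_pages * TRANSFER_MS_PER_PAGE

    print(random_read_ms(1000))      # 16000.0 msec
    print(sequential_read_ms(1000))  # 1015.0 msec

Reading the same 1000 pages sequentially is roughly 16x cheaper under this model, which is the motivation for placing related pages next to each other on disk.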

Slide 8: RAID
- Disk array: an arrangement of several disks that gives the abstraction of a single, large disk.
- Goals: increase performance and reliability.
- Two main techniques:
  - Data striping: data is partitioned; the size of a partition is called the striping unit. Partitions are distributed over several disks.
  - Redundancy: more disks -> more failures. Redundant information allows reconstruction of data if a disk fails.
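A minimal sketch of how striping might map a logical block number to a (disk, block-on-disk) pair, assuming round-robin placement of one striping unit per disk; the disk count and the round-robin order are illustrative choices, not prescribed by the slide:

    NUM_DISKS = 4

    def locate(logical_block):
        # Round-robin striping: consecutive striping units go to consecutive disks.
        disk = logical_block % NUM_DISKS
        block_on_disk = logical_block // NUM_DISKS
        return disk, block_on_disk

    # Logical blocks 0..7 land on disks 0,1,2,3,0,1,2,3, so a large sequential
    # request can be served by all four disks in parallel.
    print([locate(b) for b in range(8)])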

Slide 9: Disks and Redundancy
- MTTF (mean time to failure): for a single disk it could be, e.g., 50,000 hours (about 5.7 years).
- The MTTF of an array of 100 disks is only 50,000/100 = 500 hours (about 21 days), assuming failures are independent and the failure properties of disks do not change over time (in general, disks are more likely to fail early or late in their lifetime).
- Redundancy can be used to increase the reliability of a disk array: redundant data is used to reconstruct the data on a failed disk. Key problems:
  - Where do we store the redundant information? Possible solutions: use dedicated check disks, or distribute the redundant information uniformly over all disks.
  - How do we compute the redundant information?
    1. Choose a redundancy scheme (parity, Hamming codes, Reed-Solomon codes).
    2. Partition the disk array into reliability groups, each consisting of a set of data disks and check disks.
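A quick check of the arithmetic above, under the slide's assumption that disk failures are independent and the array is considered failed as soon as any one disk fails:

    SINGLE_DISK_MTTF_HOURS = 50_000
    NUM_DISKS = 100

    array_mttf_hours = SINGLE_DISK_MTTF_HOURS / NUM_DISKS
    print(array_mttf_hours)                       # 500.0 hours for the 100-disk array
    print(array_mttf_hours / 24)                  # ~20.8 days, i.e. about 21 days
    print(SINGLE_DISK_MTTF_HOURS / (24 * 365))    # single disk: ~5.7 years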

Slide 10: Example: Parity Scheme
- E.g., we could have D data disks and one check disk.
  - Let n be the number of data disks for which a particular bit is set to 1: if n is odd, set the corresponding bit of the check disk (the parity bit) to 1; otherwise set it to 0.
  - Assume one data disk fails: let m be the number of remaining data disks for which that particular bit is set to 1. If m is odd and parity = 1, or m is even and parity = 0, then the bit on the failed disk must have been 0; otherwise it must have been 1.
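A minimal sketch of this rule for one bit position, assuming four data disks (the data values are made up for illustration); the reconstruction step mirrors the odd/even case analysis above, and is equivalent to taking the XOR of the surviving bits and the parity bit:

    data_bits = [1, 0, 1, 1]             # one bit position across 4 data disks
    parity = sum(data_bits) % 2          # 1 iff an odd number of data bits are 1

    # Suppose disk 2 fails; reconstruct its bit from the survivors and the parity bit.
    failed = 2
    survivors = [b for i, b in enumerate(data_bits) if i != failed]
    m = sum(survivors)
    if (m % 2 == 1 and parity == 1) or (m % 2 == 0 and parity == 0):
        recovered = 0
    else:
        recovered = 1
    assert recovered == data_bits[failed]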

Slide 11: RAID Levels
- Level 0: No redundancy.
- Level 1: Mirrored (two identical copies).
  - Each disk has a mirror image (check disk).
  - Parallel reads; the writes to the two copies are not performed simultaneously.
  - No striping.
  - Maximum transfer rate = transfer rate of one disk.
- Level 0+1: Striping and mirroring.
  - Parallel reads; the writes to the two copies are not performed simultaneously.
  - Maximum transfer rate = aggregate bandwidth.
- Level 2: Error-correcting codes (uses multiple check disks).

Slide 12: RAID Levels (Contd.)
- Level 3: Bit-Interleaved Parity.
  - Striping unit: one bit. One check disk.
  - Each read and write request involves all disks; the disk array can process one request at a time.
- Level 4: Block-Interleaved Parity.
  - Striping unit: one disk block. One check disk.
  - Small requests need only access one or a few disks.
  - Large requests can utilize the full bandwidth.
  - Writes involve the modified block and the check disk.
  - Problem: the check disk becomes a bottleneck.
- Level 5: Block-Interleaved Distributed Parity.
  - Similar to RAID Level 4, but parity blocks are distributed over all disks (a small layout sketch follows below).
- Level 6: P+Q Redundancy (can recover from two simultaneous disk failures).
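A minimal sketch of a Level 5 layout, assuming five disks with four data blocks and one parity block per stripe, and the parity block rotating across disks from stripe to stripe; the disk count and rotation direction are illustrative choices, not prescribed by the slide:

    NUM_DISKS = 5  # 4 data blocks + 1 parity block per stripe

    def stripe_layout(stripe):
        # Rotate the parity block: stripe 0 puts parity on the last disk,
        # stripe 1 on the second-to-last disk, and so on.
        parity_disk = (NUM_DISKS - 1 - stripe) % NUM_DISKS
        layout, data_idx = {}, 0
        for disk in range(NUM_DISKS):
            if disk == parity_disk:
                layout[disk] = "P"
            else:
                layout[disk] = "D%d" % (stripe * (NUM_DISKS - 1) + data_idx)
                data_idx += 1
        return layout

    for s in range(5):
        print(s, stripe_layout(s))

Because the parity blocks are spread over all disks, parity updates for small writes are also spread out, avoiding the single check-disk bottleneck of Level 4.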

Slide 13: Disk Space Management
- The lowest layer of DBMS software manages space on disk.
- Higher levels call upon this layer to:
  - allocate/de-allocate a page
  - read/write a page
- A request for a sequence of pages must be satisfied by allocating the pages sequentially on disk! Higher levels don't need to know how this is done, or how free space is managed.
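A minimal sketch of the interface such a layer might expose to the higher levels; the class and method names are illustrative assumptions, not the book's API, and the bodies are left as stubs:

    class DiskSpaceManager:
        """Lowest DBMS layer: hides free-space management from higher levels."""

        def allocate_pages(self, num_pages: int) -> int:
            """Return the page id of the first page of a sequentially allocated run."""
            ...

        def deallocate_pages(self, start_page: int, num_pages: int) -> None:
            ...

        def read_page(self, page_id: int) -> bytes:
            ...

        def write_page(self, page_id: int, data: bytes) -> None:
            ...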

Slide 14: Buffer Management in a DBMS
- Data must be in RAM for the DBMS to operate on it!
- A table of <frame#, pageid> pairs is maintained.
(Diagram: page requests from higher levels go to the buffer pool in main memory; the pool holds disk pages in frames, some of which are free; the choice of frame is dictated by the replacement policy; pages are read from and written back to the database on disk.)

Slide 15: When a Page is Requested...
- If the requested page is not in the pool:
  - Choose a frame for replacement.
  - If the frame is dirty (i.e., the page has been modified), write it to disk.
  - Read the requested page into the chosen frame.
- Pin the page (increment its pin count) and return its address.
- If requests can be predicted (e.g., sequential scans), pages can be pre-fetched several pages at a time!
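A minimal sketch of these steps, including the <frame#, pageid> table from the previous slide and the pin count / dirty bit from the next one. The class and method names are illustrative; the `disk` object (with `read_page`/`write_page`) and the trivial victim choice stand in for the real disk space manager and replacement policy:

    class BufferPool:
        def __init__(self, num_frames, disk):
            self.disk = disk                    # object with read_page/write_page
            self.frames = [{"page_id": None, "pin_count": 0, "dirty": False,
                            "data": None} for _ in range(num_frames)]
            self.page_table = {}                # page_id -> frame number

        def request_page(self, page_id):
            if page_id not in self.page_table:           # page not in pool
                frame_no = self.choose_victim()          # replacement policy
                frame = self.frames[frame_no]
                if frame["dirty"]:                       # write back modified page
                    self.disk.write_page(frame["page_id"], frame["data"])
                if frame["page_id"] is not None:
                    del self.page_table[frame["page_id"]]
                frame["data"] = self.disk.read_page(page_id)
                frame["page_id"], frame["dirty"] = page_id, False
                self.page_table[page_id] = frame_no
            frame = self.frames[self.page_table[page_id]]
            frame["pin_count"] += 1                      # pin the page
            return frame["data"]                         # return its address

        def unpin_page(self, page_id, modified):
            # Requestor releases the page and reports whether it changed it.
            frame = self.frames[self.page_table[page_id]]
            frame["pin_count"] -= 1
            frame["dirty"] = frame["dirty"] or modified

        def choose_victim(self):
            # Placeholder policy: any unpinned frame (LRU/Clock would go here).
            for i, f in enumerate(self.frames):
                if f["pin_count"] == 0:
                    return i
            raise RuntimeError("all frames are pinned")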

Slide 16: More on Buffer Management
- When done, the requestor of a page must unpin it and indicate whether the page has been modified:
  - a dirty bit is used for this.
- A page in the pool may be requested many times:
  - a pin count (the number of current users of the page) is used. A page is a candidate for replacement iff its pin count = 0.
- CC & recovery may entail additional I/O when a frame is chosen for replacement.

Slide 17: Buffer Replacement Policy
- The frame to replace is chosen by a replacement policy:
  - Least-recently-used (LRU), Clock, MRU, etc.
- The policy can have a big impact on the number of I/Os; it depends on the access pattern.
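A minimal sketch of the Clock policy mentioned above, assuming each frame carries a pin count and a referenced bit (only unpinned frames are eligible, and the referenced bit gives recently used pages a second chance); the dictionary-based frames are illustrative, not the book's data structures:

    def clock_choose_victim(frames, hand=0):
        """frames: list of dicts with 'pin_count' and 'referenced' keys.
        Returns (victim_index, new_hand), or (None, hand) if all frames are pinned."""
        n = len(frames)
        for _ in range(2 * n):                 # at most two sweeps of the clock
            f = frames[hand]
            if f["pin_count"] == 0:
                if f["referenced"]:
                    f["referenced"] = False    # second chance: clear and move on
                else:
                    return hand, (hand + 1) % n
            hand = (hand + 1) % n
        return None, hand                      # every frame is currently pinned

    frames = [{"pin_count": 0, "referenced": True},
              {"pin_count": 2, "referenced": True},
              {"pin_count": 0, "referenced": False}]
    print(clock_choose_victim(frames))         # (2, 0): frame 2 is the victim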

Slide 18: DBMS vs. OS File System
- The OS already does disk space & buffer management: why not let the OS manage these tasks?
- Differences in OS support: portability issues.
- Some limitations, e.g., files can't span disks.
- Buffer management in a DBMS requires the ability to:
  - pin a page in the buffer pool and force a page to disk (important for implementing CC & recovery),
  - adjust the replacement policy and pre-fetch pages based on access patterns in typical DB operations.

Slide 19: Summary
- Disks provide cheap, non-volatile storage.
  - Random access, but the cost depends on the location of the page on disk; it is important to arrange data sequentially to minimize seek and rotation delays.
- The buffer manager brings pages into RAM.
  - A page stays in RAM until released by its requestor.
  - It is written to disk when its frame is chosen for replacement (which is some time after the requestor releases the page).
  - The choice of frame to replace is based on the replacement policy.
  - The buffer manager tries to pre-fetch several pages at a time.

Slide 20: Summary (Contd.)
- DBMS vs. OS file support:
  - The DBMS needs features not found in many OSs, e.g., forcing a page to disk, controlling the order of page writes to disk, files spanning disks, and the ability to control pre-fetching and the page replacement policy based on predictable access patterns.

