
1 6. Files of (horizontal) Records
The concept of pages or blocks suffices when doing I/O, but the higher layers of a DBMS operate on records and files of records.
FILE: a collection of pages, each containing a collection of records. A file must support:
- insert/delete/modify a record
- read a particular record (specified using a record id)
- scan all records (possibly with some conditions on the records to be retrieved)

2 File Types
The three basic file organizations supported by the File Manager of most DBMSs are:
- HEAP FILES (files of un-ordered records)
- SORTED or CLUSTERED FILES (records sorted or clustered on some field(s))
- HASHED FILES (files in which records are positioned based on a hash function applied to some field(s))

3 Unordered (Heap) Files
The simplest file structure; contains records in no particular order. As the file grows and shrinks, disk pages are allocated and de-allocated. To support record-level operations, the DBMS must:
- keep track of the pages in a file
- keep track of free space on pages
- keep track of the records on a page
There are many alternatives for keeping track of these.

4 Heap File Implemented as a Linked List
The header page id and heap file name must be stored someplace. Each page contains 2 `pointers' plus data. The header page anchors two doubly linked lists of data pages: one list of pages with free space and one list of full pages. (Figure: a Header Page pointing into a chain of Data Pages with free space and a chain of full Data Pages.)
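
A minimal sketch of this organization in Python. This is a toy model, not any DBMS's actual code: Page, HeapFile, the page capacity, and the use of in-memory lists in place of on-disk doubly linked pages are all illustrative assumptions.

```python
# Toy heap file as two lists of pages (free-space list and full list).
# A real DBMS would keep doubly linked pages on disk, anchored by a header
# page; here Python lists stand in for the linked lists.

PAGE_CAPACITY = 4  # assumed records per page, for illustration

class Page:
    def __init__(self, page_id):
        self.page_id = page_id
        self.records = []

    def is_full(self):
        return len(self.records) >= PAGE_CAPACITY

class HeapFile:
    def __init__(self, name):
        self.name = name      # the heap file name, stored someplace (catalog)
        self.free_pages = []  # pages with free space
        self.full_pages = []  # full pages
        self.next_page_id = 0

    def insert(self, record):
        if not self.free_pages:               # no page has room:
            self.free_pages.append(Page(self.next_page_id))
            self.next_page_id += 1            # allocate a new page
        page = self.free_pages[0]
        page.records.append(record)
        if page.is_full():                    # move page to the full list
            self.full_pages.append(self.free_pages.pop(0))
        return (page.page_id, len(page.records) - 1)  # a simple RID

f = HeapFile("emp")
for r in ["BAID", "CLAY", "GOOD", "THAISZ", "BROWN"]:
    print(r, "->", f.insert(r))
```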

5 Heap File Using a Page Directory
The directory entry for a page can include the number of free bytes on the page. The directory is itself a collection of pages; a linked list implementation is just one alternative. (Figure: a DIRECTORY, a linked list of header blocks containing page IDs, pointing to Data Page 1, Data Page 2, ..., Data Page N.)
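
A toy sketch of the directory alternative, assuming each directory entry is simply a (page id, free bytes) pair; the entry layout and names are illustrative, not from any particular system.

```python
# Toy page directory: one entry per data page, tracking free bytes.
directory = [
    {"page_id": 1, "free_bytes": 120},
    {"page_id": 2, "free_bytes": 0},
    {"page_id": 3, "free_bytes": 512},
]

def find_page_for(record_size):
    # Scan the small directory instead of reading the data pages themselves.
    for entry in directory:
        if entry["free_bytes"] >= record_size:
            return entry["page_id"]
    return None  # caller must allocate a new page and directory entry

print(find_page_for(100))  # -> 1
```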

6 Heap File Facts
Record insert, Method-1: the system inserts new records at the end of the file (this needs an indicator), and after a deletion it moves the last record into the freed slot and updates the indicator.
- This doesn't allow support of the RID or RRN concept (records move).
Alternatively, a deleted record's slot can remain empty (until the file is reorganized).
- This allows support of the RID/RRN concept.
(Figure: a page with records in slots 0, 1 and 2, empty slots 3, 4 and 5, and a next-open-slot indicator holding 3.)
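
A toy sketch of Method-1, assuming a fixed number of slots per page; the class and field names are illustrative.

```python
# Toy page for insert Method-1: records packed at the front, a next-open-slot
# indicator kept on the page. On delete, the last record moves into the freed
# slot, so slot numbers (RIDs) are not stable.

class Method1Page:
    def __init__(self, num_slots=6):
        self.slots = [None] * num_slots
        self.next_open = 0  # the next-open-slot indicator

    def insert(self, record):
        assert self.next_open < len(self.slots), "page full"
        self.slots[self.next_open] = record
        self.next_open += 1

    def delete(self, slot_no):
        # Move the last record into the freed slot, update the indicator.
        self.next_open -= 1
        self.slots[slot_no] = self.slots[self.next_open]
        self.slots[self.next_open] = None

p = Method1Page()
for r in ["r0", "r1", "r2"]:
    p.insert(r)
p.delete(0)                  # r2 moves into slot 0: its RID changes
print(p.slots, p.next_open)  # ['r2', 'r1', None, None, None, None] 2
```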

7 Heap File Facts
Record insert, Method-2: insert into any open slot. A data structure indicating open slots must be maintained, either as a list of free slots or as a bit filter (or bit map) that identifies open slots. (Figure: a page of record slots with an availability bit filter in which 0 means available.)
If we want all records with a given value in a particular field, we need an "index". Of course, index files must provide a fast way to find the entries for the particular value of interest (the heap file organization would make little sense for index files); index files are usually sorted files. Indexes are examples of ACCESS PATHS.
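
A toy sketch of Method-2 with a bit filter, following the slide's convention that 0 means available; names and sizes are illustrative.

```python
# Toy page for insert Method-2: a bit filter marks open slots (0 = available).

class Method2Page:
    def __init__(self, num_slots=6):
        self.slots = [None] * num_slots
        self.bits = [0] * num_slots  # 0 = available, 1 = occupied

    def insert(self, record):
        slot_no = self.bits.index(0)  # first open slot; raises if page full
        self.slots[slot_no] = record
        self.bits[slot_no] = 1
        return slot_no  # slot numbers are stable, so RIDs are supported

    def delete(self, slot_no):
        # The slot simply becomes available again; no other record moves.
        self.slots[slot_no] = None
        self.bits[slot_no] = 0

p = Method2Page()
a = p.insert("r0")
b = p.insert("r1")
p.delete(a)
print(p.insert("r2"))  # reuses slot 0; r1 keeps its RID
```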

8 Sorted File (Clustered File) Facts
The file is sorted on one attribute (e.g., using the unpacked record-pointer page format). Advantages over a heap file include:
- reading records in that particular order is efficient
- finding the next record in order is efficient
For efficient "value-based" ordering (clustering), a level of indirection is useful: in the unpacked record-pointer page format, a slot directory on each page holds record pointers (RIDs) in sorted order while the records themselves stay where they are. (Figure: page 3 with a slot directory whose slots 0-5 point to RID(3,3), RID(3,0), RID(3,4), RID(3,2), RID(3,1), RID(3,8), RID(3,6).)
What happens when a page fills up? Use an overflow page for the next record. When a page fills up and, e.g., a record must be inserted and clustered between (3,1) and (3,5), one solution is to simply place it on an overflow page in arrival order. The overflow page is then scanned like an unordered file page, when necessary. Periodically the primary and overflow pages can be reorganized as an unpacked record-pointer extent to improve sequential access speed (see the next slide for an example).
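
A toy sketch of the indirection: records stay in their arrival slots while the slot directory lists slot numbers in key order, so a binary search runs over the directory, not the records. The names and the bisect-based search are illustrative assumptions.

```python
from bisect import bisect_left

# Toy unpacked record-pointer page: records keep their arrival slots, and a
# slot directory holds the slot numbers in sorted key order.

records = {0: 30, 1: 50, 2: 25, 3: 10, 4: 40}  # slot -> key, arrival order
slot_dir = sorted(records, key=records.get)    # directory in key order
print(slot_dir)                                # [3, 2, 0, 4, 1]

def search(key):
    # Binary search over the directory's key sequence.
    keys = [records[s] for s in slot_dir]
    i = bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return slot_dir[i]  # the slot number part of the RID
    return None

print(search(40))  # -> slot 4
```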

9 Sorted File (Clustered File) Facts
Reorganizing a sorted file with several overflow levels. (Figure, BEFORE: page 3's slot directory points to RID(3,3), RID(3,0), RID(3,4), RID(3,2), RID(3,1), with overflow pages holding RID(3,8), RID(3,6), RID(3,9), RID(3,5), RID(3,11), RID(3,10), RID(3,15), RID(3,7) in arrival order. AFTER: the slot directory and overflow pages are rewritten so the record pointers run in sorted order: RID(3,5), RID(3,6), RID(3,9), RID(3,8), RID(3,11), RID(3,10), RID(3,7), RID(3,15).) In this case the reorganization requires only 2 record swaps and 1 slot-directory rewrite.

10 Hash Files
A hash function is applied to the key of a record to determine which "file bucket" it goes to ("file buckets" are usually the pages of that file). Assume there are M pages, numbered 0 through M-1. Then the hash function can be any function that converts the key to a number between 0 and M-1 (for numeric keys, h(key) = key MOD M is typical).
Collisions or overflows can occur when a new record hashes to a bucket that is already full. The simplest overflow method is to use separate overflow pages: overflow pages are allocated if needed, either as a separate linked list for each bucket (page #s are needed for the pointers) or as a single shared linked list. (Figure: h(key) mod M maps each key to one of the primary bucket pages 0, 1, ..., M-1, each with its chain of overflow pages.)
Long overflow chains can develop and degrade performance. Extendible and Linear Hashing are dynamic techniques that fix this problem.
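
A toy sketch of static hashing with separate overflow chains; M, the page capacity, and the names are illustrative assumptions, with the capacity kept tiny to force overflows.

```python
# Toy static hash file: M primary bucket pages, each with a chain of
# overflow pages appended as needed.

M = 5         # number of primary bucket pages
CAPACITY = 2  # records per page, kept small for illustration

# Each bucket is a list of pages; pages after the first form its overflow chain.
buckets = [[[]] for _ in range(M)]

def insert(key):
    chain = buckets[key % M]        # h(key) = key mod M
    if len(chain[-1]) >= CAPACITY:  # last page in the chain is full:
        chain.append([])            # allocate a new overflow page
    chain[-1].append(key)

def search(key):
    # Must scan the primary page and every overflow page in the chain.
    return any(key in page for page in buckets[key % M])

for k in [2, 7, 12, 17, 22]:  # all hash to bucket 2, building a chain
    insert(k)
print(buckets[2], search(17))  # [[2, 7], [12, 17], [22]] True
```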

11 Other Static Hashing Overflow-Handling Methods
Overflow can be handled by open addressing as well (more commonly used for internal hash tables, where a bucket is an allocation of main memory, not a page). In open addressing, upon collision, search forward in the bucket sequence for the next open record slot. (Figure: h(rec_key) = 1, collision! Slot 2? No. Slot 3? Yes.) Then, to search, apply h; if the record is not found there, search sequentially ahead until it is found (circling around to the search start point).
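
A toy sketch of open addressing with linear probing, including the circle-around on search; the table size and names are illustrative.

```python
# Toy open addressing (linear probing) over record slots in main memory.

SIZE = 8
table = [None] * SIZE

def insert(key):
    i = key % SIZE
    while table[i] is not None:  # collision: search forward
        i = (i + 1) % SIZE       # circle around past the end
    table[i] = key

def search(key):
    i = start = key % SIZE
    while table[i] is not None:
        if table[i] == key:
            return i
        i = (i + 1) % SIZE
        if i == start:           # circled back to the start point
            break
    return None

for k in [1, 9, 17]:  # all hash to slot 1; 9 probes to 2, 17 to 3
    insert(k)
print(search(17))     # -> 3
```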

12 Other Overflow-Handling Methods
Overflow can be handled by re-hashing as well: upon collision, apply the next hash function from a sequence of hash functions h0, then h1, then h2, ... Then, to search, apply h0; if the record is not found, apply the next hash function, and so on, until it is found or the list is exhausted. These methods can also be combined.
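
A toy sketch of re-hashing; the particular sequence of hash functions chosen here is an illustrative assumption.

```python
# Toy re-hashing: upon collision, try the next hash function in a fixed
# sequence h0, h1, h2.

SIZE = 8
table = [None] * SIZE
hashes = [lambda k: k % SIZE,            # h0
          lambda k: (k * 3 + 1) % SIZE,  # h1
          lambda k: (k * 7 + 3) % SIZE]  # h2

def insert(key):
    for h in hashes:          # apply h0, then h1, then h2
        i = h(key)
        if table[i] is None:
            table[i] = key
            return i
    raise RuntimeError("hash sequence exhausted")

def search(key):
    for h in hashes:
        i = h(key)
        if table[i] == key:
            return i
    return None  # list exhausted without finding the key

insert(1)
insert(9)         # 9 collides with 1 under h0, lands via h1
print(search(9))  # -> 4
```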

13 Extendible Hashing
Idea: use a directory of pointers to buckets; split just the bucket that overflowed, and double the directory only when needed. The directory is much smaller than the file, so doubling it is cheap. Only one page of data entries is split. No overflow pages! The trick lies in how the hash function is adjusted!

14 Example
Blocking factor (bfr) = 4 (# of entries per bucket).
Local depth of a bucket: # of bits used to determine whether an entry belongs to that bucket.
Global depth of the directory: max # of bits needed to tell which bucket an entry belongs to (= max of the local depths).
Insert: if the bucket is full, split it (allocate 1 new page, re-distribute the entries over those 2 pages).
(Figure: GLOBAL DEPTH = 2; directory entries 00, 01, 10, 11 point to Bucket A = {4*, 12*, 32*, 16*}, Bucket B = {1*, 21*, 5*, 13*}, Bucket C = {10*}, Bucket D = {15*, 7*, 19*}, each with LOCAL DEPTH 2.)
To find the bucket for a new key value r, apply the hash function h to r, then take just the last global-depth bits of h(r), not all of it (the last 2 bits in this example; for simplicity we let h(r) = r here). E.g., h(5) = 5 = 101 binary, so it is in the bucket pointed to in the directory by 01.
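
A toy sketch of the lookup rule, using this slide's bucket contents and its convention h(r) = r; the names are illustrative.

```python
# Toy extendible-hash lookup: the directory is indexed by the last
# `global_depth` bits of h(r).

global_depth = 2
bucket_A = [4, 12, 32, 16]  # directory entry 00
bucket_B = [1, 21, 5, 13]   # 01
bucket_C = [10]             # 10
bucket_D = [15, 7, 19]      # 11
directory = [bucket_A, bucket_B, bucket_C, bucket_D]

def lookup(r):
    h = r                                # for simplicity, h(r) = r
    idx = h & ((1 << global_depth) - 1)  # keep only the last global_depth bits
    return directory[idx]

print(lookup(5))  # 5 = 101 binary, last 2 bits 01 -> bucket B
```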

15 Example: how did we get there?
First insert is 4: h(4) = 4 = 100 binary, placed in the bucket pointed to in the directory by 0. (Figure: directory entries 0 and 1 point to Bucket A = {4*} and an empty Bucket B.)

16 Example
Insert 12, 32, 16 and 1:
h(12) = 12 = 1100 binary, in the bucket pointed to in the directory by 0.
h(32) = 32 = 100000 binary, in the bucket pointed to in the directory by 0.
h(16) = 16 = 10000 binary, in the bucket pointed to in the directory by 0.
h(1) = 1 = 1 binary, in the bucket pointed to in the directory by 1.
(Figure: Bucket A = {4*, 12*, 32*, 16*}; Bucket B = {1*}.)

17 Example
Insert 5, 21 and 13:
h(5) = 5 = 101 binary, in the bucket pointed to in the directory by 1.
h(21) = 21 = 10101 binary, in the bucket pointed to in the directory by 1.
h(13) = 13 = 1101 binary, in the bucket pointed to in the directory by 1.
(Figure: Bucket A = {4*, 12*, 32*, 16*}; Bucket B = {1*, 5*, 21*, 13*}.)

18 Example
9th insert: 10. h(10) = 10 = 1010 binary, in the bucket pointed to in the directory by 0. Collision! Split Bucket A into A and C. Double the directory (by copying what is there and adding a bit on the left) and reset one pointer. Redistribute the values among A and C if necessary; not necessary this time, since all the 2's bits are already correct: 4 = 100, 12 = 1100, 32 = 100000 and 16 = 10000 have 2's bit 0 and stay in A, while 10 = 1010 has 2's bit 1 and goes to C.
(Figure: global depth now 2; Bucket A = {4*, 12*, 32*, 16*}, Bucket B = {1*, 5*, 21*, 13*}, Bucket C = {10*}.)

19 Example
Inserts: 15, 7 and 19. h(15) = 15 = 1111 binary, h(7) = 7 = 111 binary, h(19) = 19 = 10011 binary: all belong to Bucket B, which is full. Split Bucket B into B and D. No need to double the directory, because the local depth of B is less than the global depth. Reset one pointer, reset the local depths of B and D to 2, and redistribute the values among B and D if necessary (not necessary this time: 1, 5, 21 and 13 all end in 01 and stay in B, while 15, 7 and 19 end in 11 and go to D).
(Figure: Bucket A = {4*, 12*, 32*, 16*}, Bucket B = {1*, 5*, 21*, 13*}, Bucket C = {10*}, Bucket D = {15*, 7*, 19*}.)

20 Insert 20
h(20) = 20 = 10100 binary, and the bucket pointed to by 00 is full! Split A: double the directory and reset 1 pointer. Bucket E is the `split image' of Bucket A. Redistribute the contents of A: 32* and 16* (last 3 bits 000) stay in A, while 4*, 12* and the new 20* (last 3 bits 100) go to E. The local depths of A and E become 3.
(Figure: global depth 3; Bucket A = {32*, 16*}, Bucket B = {1*, 5*, 21*, 13*}, Bucket C = {10*}, Bucket D = {15*, 7*, 19*}, Bucket E = {4*, 12*, 20*}.)

21 Points to Note
20 = 10100 binary. The last 2 bits (00) tell us r belongs in either A or its split image, but not which one; the last 3 bits are needed to tell which one.
- Local depth of a bucket: # of bits used to determine whether an entry belongs to this bucket.
- Global depth of the directory: max # of bits needed to tell which bucket an entry belongs to (= max of local depths).
When does a bucket split cause directory doubling? When, before the insert, the local depth of the bucket equals the global depth: the insert causes the local depth to become greater than the global depth, so the directory is doubled by copying it over and `fixing' the pointer to the split-image page. The use of least significant bits enables this efficient doubling via copying of the directory!
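
The whole insert/split/double cycle of slides 14-20 can be sketched compactly. This is a toy model under the slides' conventions (h(r) = r, bfr = 4, least-significant-bit directory indexing), not production code; the class and variable names are illustrative.

```python
# Toy extendible-hash insert with bucket split and directory doubling.

BFR = 4  # blocking factor: entries per bucket

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.entries = []

directory = [Bucket(0)]  # start: global depth 0, one empty bucket
global_depth = 0

def insert(r):
    global global_depth, directory
    idx = r & ((1 << global_depth) - 1)  # last global_depth bits of h(r) = r
    b = directory[idx]
    if len(b.entries) < BFR:
        b.entries.append(r)
        return
    if b.local_depth == global_depth:    # must double the directory
        directory = directory + directory[:]  # double by copying
        global_depth += 1
    # Split b: the split image takes entries whose next bit is 1.
    b.local_depth += 1
    image = Bucket(b.local_depth)
    mask = 1 << (b.local_depth - 1)
    b.entries, image.entries = ([e for e in b.entries if not e & mask],
                                [e for e in b.entries if e & mask])
    # Fix the directory pointers that should now point at the split image.
    for i in range(len(directory)):
        if directory[i] is b and i & mask:
            directory[i] = image
    insert(r)  # retry (may split again if all entries landed on one side)

for r in [4, 12, 32, 16, 1, 5, 21, 13, 10, 15, 7, 19, 20]:
    insert(r)
print(global_depth)  # -> 3 after inserting 20, as in the slides
```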

22 Comments on Extendible Hashing
If the directory fits in memory, an equality search is answered with one disk access; otherwise two.
- The directory grows in spurts and, if the distribution of hash values is skewed, the directory can grow large.
- Multiple entries with the same hash value cause problems!
Delete: if the removal of a data entry makes a bucket empty, the bucket can be merged with its `split image'. As soon as each directory element points to the same bucket as its (merged) split image, the directory can be halved.

23 Linear Hash File
Starts with M buckets (numbered 0, 1, ..., M-1) and an initial hash function h0 = mod M (or, more generally, h0(key) = h(key) mod M for any hash function h that maps into the integers). Chaining to shared overflow pages is used to handle overflows.
At the first overflow, split bucket 0 into bucket 0 and bucket M and rehash bucket 0's records using h1 = mod 2M. Henceforth, if h0 yields the value 0, rehash using h1 = mod 2M.
At the next overflow, split bucket 1 into bucket 1 and bucket M+1 and rehash bucket 1's records using h1 = mod 2M. Henceforth, if h0 yields the value 1, use h1. ...
When all of the original M buckets have been split (M overflows), rehash all overflow records using h1. Relabel h1 as h0 (discarding the old h0 forever) and start a new "round" by repeating the process above for all future overflows (i.e., now there are buckets 0, ..., 2M-1 and h0 = mod 2M).
To search for a record, let n = the index of the last bucket split so far in the given round (so buckets 0 through n have been split). If h0(key) is not greater than n, use h1; else use h0.
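
A toy sketch of the round mechanics and the search rule above, with M = 5 and an assumed page capacity of 2; for brevity it omits the end-of-round rehash of overflow records, and its names are illustrative.

```python
# Toy linear hash file: h0 = key mod M, h1 = key mod 2M, and n = index of
# the last bucket split this round (-1 if none yet).

M, CAPACITY = 5, 2
buckets = {i: [] for i in range(M)}
overflow = []  # shared overflow area
n = -1         # no bucket split yet this round

def home(key):
    # The search rule: buckets 0..n are already split, so use h1 for them.
    b = key % M                            # h0
    return key % (2 * M) if b <= n else b  # h1 for split buckets

def insert(key):
    global n
    b = home(key)
    if len(buckets[b]) < CAPACITY:
        buckets[b].append(key)
        return
    overflow.append(key)  # overflowing record goes to a shared overflow page
    # An overflow triggers the next round-robin split, regardless of which
    # bucket overflowed: split bucket n+1 into n+1 and M+n+1 using h1.
    n += 1
    buckets[M + n] = []
    old, buckets[n] = buckets[n], []
    for k in old:
        buckets[k % (2 * M)].append(k)  # rehash the split bucket with h1

for k in [2, 7, 12, 27, 15]:
    insert(k)
print(buckets, overflow, n)  # 15 rehashes to bucket 5, as in the example
```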

24 Linear Hash Example, M = 5
(Figure: bucket pages 0-4 holding two records each, e.g., 02|BAID|NY|NY, 45|CLAY|OUTBK|NJ, 33|GOOD|GATER|FL, 14|THAISZ|KNOB|NJ, 11|BROWN|NY|NY, 22|ZHU|SF|CA, 24|CROWE|SJ|CA, 21|BARBIE|NY|NY, plus a shared overflow area.)
Insert 27: h0(27) = mod5(27) = 2. Collision! Split bucket 0 into buckets 0 and 5, rehash bucket 0 using mod 10; n = 0. 27|JONES|MHD|MN goes to an overflow page.
Insert 8: h0(8) = mod5(8) = 3; 8|SINGH|FGO|ND goes into bucket 3.
Insert 15: h0(15) = mod5(15) = 0, not greater than n, so rehash: h1(15) = mod10(15) = 5; 15|LOWE|ZAP|ND goes into bucket 5.
Insert 32: h0(32) = mod5(32) = 2. Collision! Split bucket 1 into buckets 1 and 6, rehash bucket 1 using mod 10; n = 1. 32|FARNS|BEEP|NY overflows.
Insert 39: h0(39) = mod5(39) = 4. Collision! Split bucket 2 into buckets 2 and 7, rehash bucket 2 using mod 10; n = 2. 39|TULIP|DERLK|IN overflows.
Insert 31: h0(31) = mod5(31) = 1, not greater than n, so h1(31) = mod10(31) = 1. Collision! Split bucket 3 into buckets 3 and 8, rehash bucket 3 using mod 10; n = 3. 31|ROSE|MIAME|OH overflows.
Insert 36: h0(36) = mod5(36) = 1, not greater than n, so h1(36) = mod10(36) = 6. Collision! Split bucket 4 into buckets 4 and 9, rehash bucket 4 using mod 10; n = 4, and all original buckets are now split: rehash the overflow records using mod 10, relabel h1 as h0, and begin the second round.

25 Linear Hash Example, 2nd Round: M = 10, h0 = mod 10
(Figure: bucket pages 0-9 at the start of round 2, holding the records above; the round-1 overflow records have been rehashed with the new h0, e.g., h0(27) = 7 places 27|JONES|MHD|MN in bucket 7 and h0(39) = 9 places 39|TULIP|DERLK|IN in bucket 9, while h0(32) = 2 and h0(31) = 1 collide again and are rehashed using mod 20 as round-2 splits occur; h0(36) = 6 places 36|SCHOTZ|CORN|IA in bucket 6.)
Insert 10: h0(10) = mod10(10) = 0, so 10|RADHA|FGO|ND goes into bucket 0. Etc.

26 Summary
Hash-based indexes: best for equality searches; cannot support range searches. Static hashing can lead to performance degradation due to collision-handling problems. Extendible hashing avoids performance problems by splitting a full bucket when a new data entry is to be added to it (duplicates may require overflow pages).
- A directory keeps track of the buckets and doubles periodically.
- The directory can get large with skewed data, and there is additional I/O if it does not fit in main memory.

27 Summary
Linear hashing avoids a directory by splitting buckets round-robin and using overflow pages.
- Overflow pages are not likely to be long.
- Duplicates are handled easily.
- Space utilization could be lower than with extendible hashing, since splits are not concentrated on `dense' data areas.
Skew occurs when the hash values of the data entries are not uniform! (Figure: three count-vs-values plots over v1, v2, v3, v4, v5, ..., vn illustrating distribution skew, count skew, and combined distribution & count skew.)

