1 Hashing CENG 351

2 Hashing: Intro
The primary goal is to locate the desired record in a single disk access.
Sequential search: O(N); B+ trees: O(log_k N); Hashing: O(1).
In hashing, the key of a record is transformed into an address and the record is stored at that address.
Hash-based indexes are best for equality selections; they cannot support range searches.
Both static and dynamic hashing techniques exist.

3 Hash-based Index
Data entries are kept in buckets (an abstract term).
Each bucket is a collection of one primary page and zero or more overflow pages.
Given a search key value k, we can find the bucket where the data entry k* is stored as follows:
use a hash function, denoted by h; the value of h(k) is the address of the desired bucket.
h(k) should distribute the search key values uniformly over the collection of buckets.

4 Design Factors
Bucket size: the number of records that can be held at the same address.
Load factor: the ratio of the number of records put in the file to the total capacity of the buckets.
Hash function: should evenly distribute the keys among the addresses.
Overflow resolution technique.

5 Hash Functions
Key mod N: N is the size of the table; better if N is prime.
Folding: e.g. split the key 123456789 into 123|456|789, add the pieces, and take the result mod N.
Truncation: e.g. map to a table of 1000 addresses by picking 3 digits of the key.
Squaring: square the key and then truncate.
Radix conversion: e.g. treat the key as a base-11 number, truncating if necessary.
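The following minimal sketches illustrate these functions; the parameter choices (3-digit pieces, a 1000-address table) are illustrative assumptions, not prescriptions from the slide:

```python
# Illustrative sketches of the hash functions named above (assumed details).

def mod_hash(key, n):
    """Key mod N; N is the table size, preferably prime."""
    return key % n

def folding_hash(key, n):
    """Folding: split the key's digits into pieces (e.g. 123|456|789), add them, take mod N."""
    digits = str(key)
    pieces = [int(digits[i:i + 3]) for i in range(0, len(digits), 3)]
    return sum(pieces) % n

def truncation_hash(key):
    """Truncation: keep 3 digits of the key to map into a table of 1000 addresses."""
    return key % 1000              # here: the last three digits

def squaring_hash(key):
    """Squaring: square the key, then truncate the result."""
    return (key * key) % 1000

# Example: folding_hash(123456789, 1000) adds 123 + 456 + 789 = 1368 and returns 368.
```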

6 Static Hashing
Primary area: the number of primary pages is fixed, allocated sequentially, and never de-allocated (say M buckets).
A simple hash function: h(k) = f(k) mod M.
Overflow area: disjoint from the primary area; it holds buckets containing records whose keys map to a full bucket.
Adding the address of an overflow bucket to a primary-area bucket is called chaining.
A collision does not cause a problem as long as there is still room in the mapped bucket.
Overflow occurs during insertion when a record hashes to a bucket that is already full.

7 Example
Assume f(k) = k and let M = 5, so h(k) = k mod 5.
Bucket factor = 3 records.
Insert: 12, 35, 44, 60, 6, 46, 57, 33, 62, 17.

8 Example (continued)
With h(k) = k mod 5 and a bucket factor of 3 records, inserting 12, 35, 44, 60, 6, 46, 57, 33, 62, 17 gives the following primary area:
Bucket 0: 35, 60
Bucket 1: 6, 46
Bucket 2: 12, 57, 62 -> overflow bucket: 17
Bucket 3: 33
Bucket 4: 44
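A minimal sketch of this scheme, assuming a simple in-memory model of buckets with chained overflow, reproducing the example above:

```python
# A sketch of static hashing with chained overflow buckets (in-memory model,
# illustrative only). M = 5 primary buckets, bucket factor = 3, as in the example.

M = 5
BUCKET_FACTOR = 3

def h(k):
    return k % M

# Each primary bucket holds up to BUCKET_FACTOR records plus an overflow chain.
primary = [{"records": [], "overflow": []} for _ in range(M)]

def insert(key):
    bucket = primary[h(key)]
    if len(bucket["records"]) < BUCKET_FACTOR:
        bucket["records"].append(key)
    else:
        bucket["overflow"].append(key)   # chained overflow bucket

for key in [12, 35, 44, 60, 6, 46, 57, 33, 62, 17]:
    insert(key)

# primary[2] now holds [12, 57, 62] with 17 in its overflow chain,
# matching the figure above.
```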

9 Load Factor (Packing Density)
To limit the amount of overflow, we allocate more space to the primary area than we need (i.e. the primary area will be, say, 70% full).
Load factor: Lf = (# of records in the file) / (# of record slots in the primary area) = n / (M * Bkfr)
where n is the number of records, Bkfr is the blocking factor (records per bucket), and M is the number of primary buckets.
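A worked calculation using the static hashing example above (10 records inserted into M = 5 buckets of Bkfr = 3):

```latex
\mathrm{Lf} = \frac{n}{M \cdot \mathrm{Bkfr}} = \frac{10}{5 \cdot 3} \approx 0.67 = 67\%
```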

10 Effects of Lf and Bkfr
Performance can be enhanced by the choice of bucket size and load factor.
In general, a smaller load factor means less overflow and a faster fetch, but more wasted space.
A larger Bkfr generally means less overflow, but a slower fetch.

11 Insertion and Deletion
Insertion: new records are inserted at the end of the chain.
Deletion: two approaches are possible:
(1) Mark the record as deleted.
(2) Consolidate sparse buckets when deleting records.
In the second approach, when a record is deleted, fill its place with the last record in the chain of the current bucket, and deallocate the last bucket when it becomes empty.

12 Problems of Static Hashing
The main problem with static hashing is that the number of buckets is fixed:
Long overflow chains can develop and degrade performance.
On the other hand, if a file shrinks greatly, a lot of bucket space is wasted.
Other hashing techniques allow the hash index to grow and shrink dynamically. These include:
linear hashing
extendible hashing

13 Linear Hashing
Linear hashing maintains a constant load factor, and thus avoids reorganization, by incrementally adding new buckets to the primary area.
In linear hashing, the last bits of the hash value are used for placing the records.

14 Example
Keys in binary, e.g. 34 = 100010, 28 = 11100, 08 = 1000, 13 = 1101, 21 = 10101, 37 = 100101, 12 = 1100; the last 3 bits select the bucket:
000: 8, 16, 32
001: 17, 25
010: 34, 50
011: 11, 27
100: 28
101: 5
110: 14
111: 55, 15
Lf = 14/24 = 58%.
Insert 13, 21, 37 (all map to bucket 101, so 37 goes to an overflow bucket) and 12 (bucket 100):
Lf rises to 15/24 = 63%, 16/24 = 67%, 17/24 = 70%.

15 Insertion of records
To expand the table, split an existing bucket addressed by the last k digits into two buckets addressed by the last k+1 digits, e.g. bucket 000 splits into buckets 0000 and 1000.

16 Expanding the table
After the expansion, bucket 000 has been split into 0000 and 1000 (the boundary value is now 1):
0000: 16, 32
001: 17, 25
010: 34, 50
011: 11, 27
100: 28, 12
101: 5, 13, 21 -> overflow bucket: 37
110: 14
111: 55, 15
1000: 8

17 Hash value 1000 uses the last 4 digits; hash value 1101 uses the last 3 digits
With k = 3 and boundary value 3, buckets 000, 001, and 010 have been split:
0000: 16, 32
0001: 17
0010: 34, 50
011: 11, 27
100: 28, 12
101: 5, 13, 21 -> overflow bucket: 37
110: 14
111: 55, 15
1000: 8
1001: 25
1010: 26

18 Fetching a record
Calculate the hash function.
Look at the last k digits: if their value is less than the boundary value, the record is in the bucket labeled with the last k+1 digits; otherwise it is in the bucket labeled with the last k digits.
Follow overflow chains as in static hashing.
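A minimal sketch of this address computation; the names k and boundary follow the slides, while the bit arithmetic is an assumed, illustrative formulation:

```python
# A sketch of the linear-hashing bucket-address computation described above.

def bucket_address(hash_value, k, boundary):
    """k: bits used by un-split buckets; boundary: buckets below it have already split."""
    last_k = hash_value & ((1 << k) - 1)              # last k bits
    if last_k < boundary:
        return hash_value & ((1 << (k + 1)) - 1)      # already split: use last k+1 bits
    return last_k                                     # not yet split: use last k bits

# With k = 3 and boundary value 1 (only bucket 000 has split, as in the expanded table):
#   key 8  (1000)  -> last 3 bits 000 < 1  -> bucket 1000
#   key 16 (10000) -> last 3 bits 000 < 1  -> bucket 0000
#   key 14 (1110)  -> last 3 bits 110 >= 1 -> bucket 110
```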

19 Insertion
Search for the correct bucket into which to place the new record.
If the bucket is full, allocate a new overflow bucket.
Split condition*: if Lf rises above the threshold,
add one more bucket to the primary area,
distribute the records of the bucket chain at the boundary value between the original bucket and the new primary-area bucket, and
add 1 to the boundary value.
* Any overflow may also be taken as the split condition (Elmasri), or having Lf*Bkfr records more than needed for the given Lf (Salzberg).

20 Deletion
Read in the chain of records.
Replace the deleted record with the last record in the chain.
If the last overflow bucket becomes empty, deallocate it.
Grouping condition*: when the last bucket is empty or Lf falls below a given threshold, contract the primary area by one bucket.
Compressing the table is the exact opposite of expanding it:
keep track of the total number of records in the file and of the buckets in the primary area;
when there are Lf * Bkfr fewer records than needed, consolidate the last bucket with the bucket that shares the same last k digits.
* Having Lf * Bkfr fewer records than needed for the target Lf may be taken as the grouping condition (Salzberg).

21 Extendible Hashing: Introduction
Extendible hashing does not use chains of buckets, contrary to linear hashing.
Hashing is based on an index (directory) table whose entries are pointers to the data buckets.
The number of entries in the index table is 2^i, where i is the number of bits used for indexing.
Records with the same first i bits are placed in the same bucket.
If the maximum capacity of a bucket is reached, a new bucket is added, provided the corresponding index entry is free.
If the maximum capacity of a bucket is reached and a new bucket cannot be added according to the first i bits, the table size is doubled.

22 Extendible Hashing: Overflow
When an overflow occurs in a data bucket whose level j equals the level i of the index table, the index table is doubled.
Successive doublings of the index table may happen in the case of a poor hash function.
The main disadvantage of extendible hashing is that the index table may grow too big and too sparse.
In the best case there are as many buckets as index-table entries.
If the index table fits into memory, access to a record requires only one disk access.

23 Extendible Hashing: Structure
(The figure shows a bucket address table whose entries point to data buckets; each data bucket j is labeled with i_j, the length of the common hash prefix of its records.)
The hash function returns b bits; only the prefix of i bits is used to index the bucket address table.
There are 2^i entries in the bucket address table.
If i_j is the length of the common hash prefix for data bucket j, then 2^(i - i_j) entries of the bucket address table point to j.

24 Splitting a bucket: Case 1 (i_j = i)
Only one entry in the bucket address table points to data bucket j.
Increment i, doubling the bucket address table; split data bucket j into j and z; set i_j = i_z = i; rehash all items previously in j.
(The figure shows the table growing from 2-bit entries 00-11 to 3-bit entries 000-111.)

25 Splitting a bucket: Case 2 (i_j < i)
More than one entry in the bucket address table points to data bucket j.
Split data bucket j into j and z; set i_j = i_z = i_j + 1; adjust the entries that previously pointed to j so that they point to j or z; rehash all items previously in j.
(The figure shows the table keeping its 8 entries 000-111; only the affected pointers change.)

26 Example
Suppose the hash function is h(x) = x mod 8 and each bucket can hold at most two records. Show the extendible hash structure after inserting 1, 4, 5, 7, 8, 2, 20.
Hash values (3 bits): 1 -> 001, 4 -> 100, 5 -> 101, 7 -> 111, 8 -> 000, 2 -> 010, 20 -> 100.
(The figure shows intermediate states: first {1, 4} in a single bucket; after splits, a 2-bit directory 00-11 pointing to the buckets {1, 8}, {2}, {4, 5}, {7}.)

27 Example (continued)
After inserting 1, 4, 5, 7, 8, 2, 20, the directory has i = 3 (entries 000-111):
000, 001 -> {1, 8}
010, 011 -> {2}
100 -> {4, 20}
101 -> {5}
110, 111 -> {7}
(The previous state, with a 2-bit directory 00-11, held {1, 8}, {2}, {4, 5}, {7}.)
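A minimal, illustrative sketch of extendible hashing that reproduces this example; the class and helper names are assumptions, the directory is indexed by the first i bits of the 3-bit hash value, and the sketch assumes the global depth never exceeds the number of hash bits:

```python
# Sketch of extendible hashing: h(x) = x mod 8 gives a 3-bit hash,
# each data bucket holds at most 2 records, directory indexed by the prefix bits.

HASH_BITS = 3          # b: number of bits returned by the hash function
BUCKET_CAPACITY = 2    # maximum records per data bucket

def h(x):
    return x % 8

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth   # i_j: common-prefix length of this bucket's keys
        self.keys = []

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 0            # i: prefix bits used to index the directory
        self.directory = [Bucket(0)]     # 2^i entries

    def _dir_index(self, key):
        # Index with the first global_depth bits of the 3-bit hash value.
        return h(key) >> (HASH_BITS - self.global_depth)

    def insert(self, key):
        bucket = self.directory[self._dir_index(key)]
        if len(bucket.keys) < BUCKET_CAPACITY:
            bucket.keys.append(key)
            return
        if bucket.local_depth == self.global_depth:
            # Case 1 (i_j = i): double the directory; each old entry is copied
            # into two adjacent entries so existing pointers stay valid.
            self.directory = [b for b in self.directory for _ in (0, 1)]
            self.global_depth += 1
        # Case 2 (i_j < i), or after doubling: split the full bucket.
        bucket.local_depth += 1
        sibling = Bucket(bucket.local_depth)
        old_keys, bucket.keys = bucket.keys, []
        for idx, b in enumerate(self.directory):
            # The newly distinguishing prefix bit decides which entries move to the sibling.
            if b is bucket and (idx >> (self.global_depth - bucket.local_depth)) & 1:
                self.directory[idx] = sibling
        for k in old_keys:               # rehash the items previously in the bucket
            self.directory[self._dir_index(k)].keys.append(k)
        self.insert(key)                 # retry; may trigger a further split

eh = ExtendibleHash()
for k in [1, 4, 5, 7, 8, 2, 20]:
    eh.insert(k)
# The directory now has 2^3 entries and the buckets hold {1, 8}, {2}, {4, 20},
# {5}, {7}, matching the final state shown on the slide.
```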

28 Comments on Extendible Hashing
If the directory fits in memory, an equality search is answered with one disk access.
A typical example: a 100 MB file with 100-byte entries and a 4 KB page size contains 1,000,000 records (as data entries) but only about 25,000 directory elements, so chances are high that the directory will fit in memory.
If the distribution of hash values is skewed (e.g. a large number of search key values all hash to the same bucket), the directory can grow large; but this kind of skew can be avoided with a well-tuned hash function.

29 Comments on Extendible Hashing
Delete: if removing a data entry makes a bucket empty, it can be merged with its "buddy" bucket.
If each directory element points to the same bucket as its split image, the directory can be halved.

30 Simple Hashing: Search
No-overflow case (direct hit): TF = s + r + btt
Overflow, successful search, for an average chain length of x: TFs = s + r + btt + (x/2)*(s + r + btt)
Overflow, unsuccessful search, for an average chain length of x: TFu = s + r + btt + x*(s + r + btt)
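A small helper expressing these formulas (a sketch; s, r, and btt are the seek, rotational-latency, and block-transfer times, x is the average chain length, and the sample values in the comment are arbitrary placeholders):

```python
# Fetch-time formulas for simple (static) hashing, as given above.

def fetch_times(s, r, btt, x):
    block = s + r + btt                # time to read one block
    tf_direct = block                  # no overflow: direct hit
    tfs = block + (x / 2) * block      # successful search: half the chain on average
    tfu = block + x * block            # unsuccessful search: the whole chain
    return tf_direct, tfs, tfu

# Example: fetch_times(s=8, r=4, btt=1, x=0.5) returns (13.0, 16.25, 19.5).
```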

31 Deletion from a hashed file
Deletion without consolidation (i.e. mark the deleted record as deleted) requires occasional reorganization.
TD = TFs + 2r
where x/2 of the chain is involved in TFs, the successful search.

32 Delete with consolidation
Deletion with consolidation requires the entire chain to be read in and, after being updated, written out; this does not require any reorganization.
The approximate formula: TD = TFu + 2r
where the full chain (x blocks) is involved in TFu.

33 Insertion
Assuming that insertion involves the modification of the last bucket, just like deletion:
TI = TFu + 2r

34 Sequential Access
In hashing, each record is randomly placed and randomly accessed, so sequential processing of all n records is very slow:
TX = n * TF

35 Linear Hashing: Access Times
For linear hashing, access-time estimates are obtained experimentally.
E.g., with blocking factor m = 50 and load factor f = 75%, the average access times for the successful and unsuccessful cases are approximately:
TFs = 1.05*(s + r + btt)
TFu = 1.27*(s + r + btt)

36 Summary
Hash-based indexes are best for equality searches; they cannot support range searches.
Static hashing can lead to long overflow chains.
Extendible hashing avoids overflow pages by splitting a full bucket when a new data entry is to be added to it.
A directory keeps track of the buckets and doubles periodically; it can get large with skewed data, costing additional I/O if it does not fit in main memory.

