Presentation on theme: "CENG 351 — Hashing for Files" — Presentation transcript:

1 CENG 351 Hashing for Files

2 CENG 351 Introduction
Idea: reference items in a table directly by performing arithmetic operations that transform keys into table addresses.
Steps:
1. Compute a hash function that transforms the search key into a table address.
2. Apply a collision-resolution scheme for keys that hash to the same table address.
Hashing is a good example of a time-space tradeoff, and a classical computer science problem.

3 CENG 351 Motivation
The primary goal is to locate the desired record in a single disk access.
- Sequential search: O(N)
- B+ trees: O(log_k N)
- Hashing: O(1)
In hashing, the key of a record is transformed into an address, and the record is stored at that address.
Hash-based indexes are best for equality selections; they cannot support range searches.
Both static and dynamic hashing techniques exist.

4 CENG 351 Hashing-Based Index
Data entries are kept in buckets (an abstract term).
Each bucket is a collection of primary data pages and zero or more overflow pages.
Given a search key value k, we find the bucket where the data entry k* is stored as follows:
- Use a hash function, denoted h().
- The value h(k) is the address of the bucket where k* is located.
- h(k) should distribute the search key values uniformly over the collection of buckets.

5 CENG 351 Example
h(k) = k * 100, rounded to the nearest integer or truncated.

6 CENG 351 Design Factors
- Bucket size: the number of records that can be held at the same address.
- Loading factor: the ratio of the number of records put in the file to the total capacity of the buckets, in number of records.
- Good hash function: should evenly distribute the keys among the addresses.
- The overflow resolution technique must be effective.

7 CENG 351 Example Simple Hash Functions
- Key mod N: N is the size of the table; better if N is prime.
- Folding: e.g. split 123456789 into 123|456|789, add the pieces, and take the mod.
- Truncation: e.g. map 123456789 to a table of 1000 addresses by picking 3 digits of the key.
- Squaring: square the key, then truncate.
- Radix conversion: e.g. treat 1234 as a base-11 number, truncating if necessary.
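The first four schemes can be sketched in Python. This is an illustrative sketch, not code from the slides; the chunk size and digit counts are assumptions chosen to match the examples above.

```python
def hash_mod(key, n):
    # Division method; n should ideally be prime.
    return key % n

def hash_fold(key, n, chunk=3):
    # Folding: split the decimal digits into fixed-size chunks,
    # add the chunks, then take the result mod n.
    digits = str(key)
    total = sum(int(digits[i:i + chunk]) for i in range(0, len(digits), chunk))
    return total % n

def hash_truncate(key, digits=3):
    # Truncation: keep only the last `digits` decimal digits.
    return key % (10 ** digits)

def hash_square(key, digits=3):
    # Squaring: square the key, then truncate the result.
    return (key * key) % (10 ** digits)
```

For example, `hash_fold(123456789, 1000)` adds 123 + 456 + 789 = 1368 and reduces it mod 1000.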

8 CENG 351 Static Hashing
- Primary area: the number of primary pages is fixed; they are allocated sequentially and never de-allocated (say, M buckets).
- A simple hash function: h(k) = f(k) mod M.
- Overflow area: disjoint from the primary area. It keeps buckets holding records whose keys map to a full bucket, chaining the address of an overflow bucket to a primary-area bucket.
- A collision does not cause a problem as long as there is still room in the mapped bucket. Overflow occurs during insertion, when a record is hashed to a bucket that is already full.

9 CENG 351 Example
Assume f(k) = k and M = 5, so h(k) = k mod 5. Bucket factor bf = 3 records.
Place the records 35, 60, 6, 12, 57, 46, 33, 62, 44, 17 in 5 buckets, each with bf = 3:
Bucket 0: 35, 60
Bucket 1: 6, 46
Bucket 2: 12, 57, 62 -> overflow bucket: 17
Bucket 3: 33
Bucket 4: 44
(Primary area on the left, overflow area on the right.)
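This example can be simulated in a few lines. A sketch, with one simplification: a single shared overflow list stands in for the chained overflow buckets.

```python
def insert_static(records, M=5, bf=3):
    # Static hashing with h(k) = k mod M and bucket capacity bf.
    # A record hashing to a full bucket goes to the overflow area.
    buckets = [[] for _ in range(M)]
    overflow = []
    for k in records:
        target = buckets[k % M]
        (target if len(target) < bf else overflow).append(k)
    return buckets, overflow

buckets, overflow = insert_static([35, 60, 6, 12, 57, 46, 33, 62, 44, 17])
```

Only 17 overflows: it hashes to bucket 2, which is already full with 12, 57, 62.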

10 CENG 351 Load Factor (Packing Density)
To limit the amount of overflow, we allocate more space to the primary area than needed (i.e. the primary area will be, say, 70% full).
Load factor:
f = (# of records in the file) / (# of record slots in the primary area) = n / (M * m)
where n is the number of records, m is the blocking factor, and M is the number of buckets (blocks).
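As a one-line check, using the figures that appear in the linear-hashing example later in the deck (15 records in 8 buckets of 3 slots):

```python
def load_factor(n, M, m):
    # n records stored in M primary buckets of m record slots each.
    return n / (M * m)

f = load_factor(15, 8, 3)  # 15/24 = 0.625, the ~63% quoted on slide 15
```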

11 CENG 351 Effects of Load Factor f and Bucket Size m
Performance can be enhanced by the choice of bucket size and load factor.
- In general, a smaller load factor means less overflow and a faster fetch time, but more wasted space.
- A larger m means less overflow in general, but a slower fetch.

12 CENG 351 Insertion and Deletion
Insertion: new records are inserted at the end of the chain.
Deletion: two approaches are possible:
1. Mark the record as deleted.
2. Consolidate sparse buckets when deleting records.
In the second approach:
- When a record is deleted, fill its place with the last record in the chain of the current bucket.
- Deallocate the last bucket when it becomes empty.

13 CENG 351 Problem of Static Hashing
The main problem with static hashing is that the number of buckets is fixed:
- Long overflow chains can develop, which will eventually degrade performance.
- On the other hand, if a file shrinks greatly, a lot of bucket space will be wasted.
Other hashing techniques allow the hash index to grow and shrink dynamically. These include:
- linear hashing
- extendible hashing

14 CENG 351 Linear Hashing
- It maintains a constant load factor.
- It avoids reorganization.
- It does so by incrementally adding new buckets to the primary area.
- With linear hashing, the last bits of the hash number are used for placing the records.

15 CENG 351 Example
Buckets addressed by the last 3 bits of the key:
000: 8, 16, 32
001: 17, 25
010: 34, 50
011: 11, 27
100: 28, 12
101: 5
110: 14
111: 55, 15
Last 3 bits, e.g.: 34 = 100010, 28 = 011100, 13 = 001101, 21 = 010101.
Insert: 13, 21, 37.
Load factor f = 15/24 = 63%.

16 CENG 351 Insertion of Records
To expand the table: split an existing bucket denoted by the last k digits into two buckets distinguished by the last k+1 digits.
E.g. bucket 000 splits into buckets 0000 and 1000.

17 CENG 351 Expanding the Table
After inserting 13, 21, 37 (all hash to 101; the third overflows), bucket 000 splits:
0000: 16, 32
001: 17, 25   <- boundary value
010: 34, 50
011: 11, 27
100: 28, 12
101: 5, 13, 21 -> overflow bucket: 37
110: 14
111: 55, 15
1000: 8
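The redistribution at a split can be sketched as follows (an illustrative helper, not from the slides): when the bucket addressed by the last k bits splits, each record moves according to its (k+1)-th bit from the right.

```python
def split_bucket(records, k):
    # Splitting the bucket addressed by the last k bits: records whose
    # (k+1)-th low-order bit is 0 stay in the old bucket, records whose
    # (k+1)-th bit is 1 move to the new bucket.
    stay = [x for x in records if not (x >> k) & 1]
    move = [x for x in records if (x >> k) & 1]
    return stay, move
```

Matching slide 17's split of bucket 000: `split_bucket([8, 16, 32], 3)` keeps 16 and 32 (…0000) and moves 8 (…1000).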

18 CENG 351 Expanding the Table (continued)
0000: 16, 32
0001: 17
0010: 34, 50
011: 11, 27   <- boundary value
100: 28, 12
101: 5, 13, 21 -> overflow bucket: 37
110: 14
111: 55, 15
1000: 8
1001: 25
1010: 26
k = 3. A hash value ending in 1000 uses the last 4 digits (000 is below the boundary); a hash value ending in 1101 uses the last 3 digits.

19 CENG 351 Fetching a Record
- Apply the hash function.
- Look at the last k digits.
- If their value is less than the boundary value, the record is in the bucket labeled with the last k+1 digits.
- Otherwise, it is in the bucket labeled with the last k digits.
- Follow overflow chains as with static hashing.
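The address computation above can be sketched in a few lines (an illustrative sketch; bucket labels are taken as integers formed from the relevant low-order bits):

```python
def bucket_address(hash_value, k, boundary):
    # Linear-hashing lookup: buckets below the boundary have already
    # split, so their records are addressed by the last k+1 bits;
    # all other buckets still use the last k bits.
    addr = hash_value & ((1 << k) - 1)
    if addr < boundary:
        addr = hash_value & ((1 << (k + 1)) - 1)
    return addr
```

With k = 3 and boundary 001, key 8 (1000) maps to bucket 1000 while key 34 (100010) still maps to 010, as in the expansion example on slide 17.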

20 CENG 351 Insertion
- Search for the correct bucket into which to place the new record.
- If the bucket is full, allocate a new overflow bucket.
- If there are now f*m records more than needed for the given f:
  - Add one more bucket to the primary area.
  - Distribute the records from the bucket chain at the boundary value between the original bucket and the new primary-area bucket.
  - Add 1 to the boundary value.

21 CENG 351 Deletion
- Read in the chain of records.
- Replace the deleted record with the last record in the chain.
- If the last overflow bucket becomes empty, deallocate it.
- When the number of records drops below the number needed, contract the primary area by one bucket.
Contracting the table is the exact opposite of expanding it:
- Keep a count of the total number of records in the file and of the buckets in the primary area.
- Compute f * m; if there are fewer records than needed, consolidate the last bucket with the bucket that shares the same last k digits.

22 CENG 351 Extendible Hashing

23 CENG 351 Extendible Hashing: Introduction
- Extendible hashing does not have chains of buckets, contrary to linear hashing.
- Hashing is based on an index table, which holds pointers to the data buckets.
- The number of entries in the index table is 2^i, where i is the number of bits used for indexing.
- Records whose hash values share the same first i bits are placed in the same bucket.
- If the maximum content of a bucket is reached, a new bucket is added, provided the corresponding index entry is free.
- If the maximum content of a bucket is reached and a new bucket cannot be added according to the first i bits, the table size is doubled.

24 CENG 351 Extendible Hashing: Overflow
- When i = j and an overflow occurs, the index table is doubled; here j is the level of the index and i is the level of the data buckets.
- Successive doublings of the index table may happen in the case of a poor hash function.
- The main disadvantage of extendible hashing is that the index table may grow too big and too sparse.
- The best case: there are as many buckets as index-table entries.
- If the index table fits in memory, access to a record requires only one disk access.

25 CENG 351 Extendible Hashing
- The hash function returns b bits.
- Only the prefix of i bits is used to hash the item.
- There are 2^i entries in the index table.
- Let i_j be the length of the common hash prefix for data bucket j; then 2^(i - i_j) entries of the index table point to bucket j.
[Figure: bucket address (index) table of depth i pointing to data buckets 1-3, each labeled with its local depth i_j.]

26 CENG 351 Splitting a Bucket: Case 1 (i_j = i)
- Only one entry in the bucket address table points to data bucket j.
- Increment i (doubling the table); split data bucket j; rehash all items previously in j.
[Figure: a depth-2 table (00..11) doubling to a depth-3 table (000..111) as a full bucket of local depth 2 splits into two buckets of local depth 3.]

27 CENG 351 Splitting: Case 2 (i_j < i)
- More than one entry in the bucket address table points to data bucket j.
- Split data bucket j into j and z; set i_j = i_z = i_j + 1. Adjust the pointers that previously pointed to j alone so that half point to j and half to z; rehash all items previously in j.
[Figure: a depth-3 table in which a bucket of local depth 2 splits into two buckets of local depth 3, without doubling the table.]

28 CENG 351 Extendible Hashing: Example
Suppose the hash function is h(x) = x mod 8 and each bucket can hold at most two records. Show the extendible hash structure after inserting 1, 4, 5, 7, 8, 2, 20.
x:     1    4    5    7    8    2    20
h(x):  001  100  101  111  000  010  100
[Figure: intermediate structures using zero, one, and then two prefix bits of h(x).]

29 CENG 351 Example (continued)
Final structure after inserting 1, 4, 5, 7, 8, 2, 20 (i = 3, directory entries 000..111):
- Bucket {1, 8} (local depth 2), pointed to by 000 and 001
- Bucket {2} (local depth 2), pointed to by 010 and 011
- Bucket {4, 20} (local depth 3), pointed to by 100
- Bucket {5} (local depth 3), pointed to by 101
- Bucket {7} (local depth 2), pointed to by 110 and 111
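The whole insertion sequence can be simulated with a minimal extendible-hashing sketch. Two caveats: the class names are invented here, and the directory is indexed by the low-order i bits of h(x) rather than the prefix used in the slides' figures; the splitting mechanics (Cases 1 and 2 on slides 26-27) are otherwise the same.

```python
class Bucket:
    def __init__(self, depth):
        self.depth = depth        # local depth i_j
        self.items = []

class ExtendibleHash:
    def __init__(self, cap=2, h=lambda x: x % 8):
        self.cap, self.h = cap, h
        self.i = 0                # global depth
        self.dir = [Bucket(0)]    # directory of 2**i pointers

    def _index(self, key):
        return self.h(key) & ((1 << self.i) - 1)

    def insert(self, key):
        b = self.dir[self._index(key)]
        if len(b.items) < self.cap:
            b.items.append(key)
            return
        if b.depth == self.i:     # Case 1: double the directory first
            self.dir += self.dir
            self.i += 1
        b.depth += 1              # split bucket b (Case 2 mechanics)
        new = Bucket(b.depth)
        mask = 1 << (b.depth - 1)
        for j in range(len(self.dir)):
            # redirect half of the pointers that referenced b
            if self.dir[j] is b and j & mask:
                self.dir[j] = new
        pending, b.items = b.items + [key], []
        for k in pending:         # rehash everything that was in b
            self.insert(k)

eh = ExtendibleHash()
for x in [1, 4, 5, 7, 8, 2, 20]:
    eh.insert(x)
```

After the sequence, the global depth is 3; 4 and 20 share a depth-3 bucket, 8 sits alone in a depth-3 bucket, and {1, 5}, {2}, {7} keep local depth 2 (mirroring slide 29, with suffix bits in place of prefix bits).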

30 CENG 351 Comments on Extendible Hashing
- If the directory fits in memory, an equality search is answered with one disk access.
- A typical example: a 100 MB file with 100-byte records and a page (bucket) size of 4K contains 1,000,000 records (as data entries) but only 25,000 [= 1,000,000 / (4,000/100)] directory elements, so chances are high that the directory will fit in memory.
- If the distribution of hash values is skewed (e.g., a large number of search key values all hash to the same bucket), the directory can grow very large!
- This kind of skew must be avoided with a well-tuned hash function.
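The sizing arithmetic from the slide can be written out as a small helper (a sketch; it assumes, as the slide does, roughly one directory entry per full leaf page):

```python
def directory_entries(file_bytes, record_bytes, page_bytes):
    # Total data entries divided by entries per page gives the number
    # of leaf pages, which approximates the directory size.
    n_records = file_bytes // record_bytes
    records_per_page = page_bytes // record_bytes
    return n_records // records_per_page
```

`directory_entries(100_000_000, 100, 4_000)` reproduces the 25,000 figure from the slide.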

31 CENG 351 Comments on Extendible Hashing
- Delete: if removing a data entry makes a bucket empty, the bucket can be merged with its "buddy" bucket.
- If each directory element points to the same bucket as its split image, the directory can be halved.

32 CENG 351 Summary, So Far
- Hash-based indexes: best for equality searches; cannot support range searches.
- Static hashing can lead to long overflow chains.
- Extendible hashing avoids overflow pages by splitting a full bucket when a new data entry is to be added to it.
- A directory keeps track of the buckets and doubles periodically. It can get large with skewed data, costing additional I/O if it does not fit in main memory.

33 CENG 351 Time Considerations for Hashing: Reading Assignment

34 CENG 351 Simple Hashing: Search
No-overflow case (direct hit):
T_F = s + r + dtt
Successful search with overflow, for an average chain length of x:
T_Fs = (s + r + dtt) + (x/2)(s + r + dtt)
Unsuccessful search with overflow, for an average chain length of x:
T_Fu = (s + r + dtt) + x(s + r + dtt)
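With s (seek), r (rotational latency), and dtt (data transfer time) as inputs, the three formulas compute directly; the numeric values in the test below are illustrative assumptions, not from the slides.

```python
def fetch_times(s, r, dtt, x):
    # t is the cost of one random block access. A successful search
    # reads half the overflow chain on average; an unsuccessful
    # search must read all of it.
    t = s + r + dtt
    t_f = t                   # direct hit, no overflow
    t_fs = t + (x / 2) * t    # successful, average chain length x
    t_fu = t + x * t          # unsuccessful
    return t_f, t_fs, t_fu
```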

35 CENG 351 Deletion
Deletion from a hashed file without consolidation (i.e., mark the deleted record as deleted) requires occasional reorganization:
T_D = T_Fs + 2r
where x/2 of the chain is involved in T_Fs, the successful search.

36 CENG 351 Delete with Consolidation
Delete with consolidation requires the entire chain to be read in and written out after being updated. This does not require any reorganization. The approximate formula:
T_D = T_Fu + 2r
where the full chain (x blocks) is involved in T_Fu.

37 CENG 351 Insertion
Assuming that insertion involves the modification of the last bucket, just like deletion:
T_I = T_Fu + 2r

38 CENG 351 Sequential Access
In hashing, each record is randomly placed and randomly accessed, making sequential processing of the records very slow:
T_X = n * T_F


40 CENG 351 Linear Hashing: Review 1
- A bucket overflow causes a split to take place.
- Splitting starts from the first bucket address. It causes the use of k+1 bits for the split buckets and k bits for the others.
- E.g., if the bucket whose last 3-bit address is 010 splits, records ending in 0010 go to one of the buckets (with a 4-bit address) and records ending in 1010 go to the other.

41 CENG 351 Linear Hashing: Review 2
- The next bucket address to split, known as the boundary value, needs to be recorded in the file header.
- If the value of the last k bits is less than the boundary value, use k+1 bits to find the address of the related bucket.
- Split buckets become part of the primary area.
- For access-time considerations, Knuth's tables in Fig. 6.2 of the Salzberg textbook can be used. E.g., for blocking factor m = 50 and load factor f = 75%, the average access times for the successful and unsuccessful cases are:
T_Fs = 1.05 (s + r + dtt)
T_Fu = 1.27 (s + r + dtt)

42 CENG 351 Linear Hashing: Insertion 1
- Find the correct bucket to place the new record.
- If the bucket is full, allocate a new bucket, place the new record in it, and link the new bucket to the related chain.
- If the load factor f is now imbalanced, add one more bucket to the primary area and split the records in the boundary bucket with the new bucket.
- Increment the boundary bucket address.

43 CENG 351 Linear Hashing: Insertion 2
An insertion formula needs to consider all the timing aspects caused by the above algorithm:
T_I = (T_Fu + 2r) + (1/m)(s + r + dtt) + (1/(f*m)) [(s + r + dtt) + 2r + (s + r + dtt)]
- The first term, (T_Fu + 2r), is for finding the bucket to insert into and writing it back to disk.
- The second term is for linking a new bucket.
- The third term is for expanding the primary area, with probability 1/(f*m): adding a new bucket, then reading and writing back the boundary bucket.

44 CENG 351 Linear Hashing: Deletion
- Every delete may cause the linked buckets to be read and rewritten, or even deleted.
- Deletion may also cause contraction of the primary area if f falls below what it should be. The last buckets consolidate with the buckets sharing the same last k bits.
A rough estimate of the deletion time:
T_D = T_Fu + 2r
where the other contributing terms and factors are ignored, as they are small compared to these two terms.

45 CENG 351 Linear Hashing: Sequential Reading in Record Order
The hash function does not preserve order, so each record requires an independent search:
T_X = n * 1.05 * (s + r + dtt), for the f = 75% and m = 50 case.
Note that there is no provision about which blocks or buckets to allocate to an indexed file. There may be a small disk-space allocation directory in the file header to indicate which disk extents are allocated to the file.

46 CENG 351 Linear Hashing: Reorganizing
It is possible to reorganize a hashed file, which costs:
T_reorg = n * (T_F + 2r) = n * (s + r + dtt + 2r)
The amount of space used by a hashed file is n/(f*m) buckets.
How many bits k are required to form the hash table for M buckets?
2^k <= M < 2^(k+1), i.e. k <= log2(M) < k+1
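The bit count k is just the floor of log2(M), which Python exposes directly (a one-line sketch):

```python
def bits_needed(M):
    # Smallest k with 2**k <= M < 2**(k+1), i.e. k = floor(log2(M)).
    return M.bit_length() - 1
```

E.g. 8 buckets need k = 3 bits, and so do 9 through 15; 16 buckets need k = 4.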

47 CENG 351 Redundant Slides

48 CENG 351 Predicting the Distribution of Records
- In the random case, one can approximate the collision probability p(x): the probability that a given address will have x records assigned to it.
- With n records and N addresses, x can take the values 0, 1, 2, 3, etc.; the bigger x is, the smaller the probability.
- Rather than treating p(x) as a probability, it can be taken as the percentage of addresses having x logical records assigned to them by hashing.
- Thus p(0) is the proportion of addresses with 0 records assigned. Given p(0), the expected number of addresses with no assignments is N * p(0).
- Let n/N also represent f, the load factor (or packing density).
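The slide does not spell out the approximation; the standard choice (used in Salzberg's treatment of this material) is the Poisson distribution with mean f = n/N, sketched here:

```python
import math

def p(x, f):
    # Poisson approximation: fraction of addresses receiving exactly
    # x records when the expected load per address is f = n/N.
    return (f ** x) * math.exp(-f) / math.factorial(x)
```

With f = 1.0, p(0) = e^-1 ≈ 0.368, i.e. about 37% of addresses are expected to stay empty even at 100% packing density.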

49 CENG 351 Collision Resolution
- Collisions can be resolved by chaining. A simple (separate) chaining method allows overflow from a full bucket to a bucket in a separate overflow area.
- Usually N = M * m, where M is the number of buckets and m is the blocking factor.
- Given a bucket size and a load factor, one can compute the average number of accesses (a) required to fetch a record. For example, for m = 10 and f = 70%, a = 1.201; for m = 50 and f = 70%, a = 1.018. (See the textbook for computed values of a for given f and m.)

50 CENG 351 Bounded Extendible Hashing
- A buffer of fixed size (tuned to the size of the available memory) is designated to serve as the index. The index table grows according to the extendible-hashing principle; however, the index computed from a record key is assumed not to fall outside the size of this table.
- The file grows by doubling the data-block size in an overflow situation. Thus, the size of a bucket is a power of 2.

51 CENG 351 Bounded Extendible Hashing: Hash-Key Digits
As an example, the digits of a hashed key may have the following meanings:
- First y digits: the corresponding block address in the index area.
- Next z digits: the correct record index entry in the located index block.
- Next w digits: the offset from the beginning of the bucket to the correct block containing the record. This is normally log2(m) bits, where m is the number of blocks in the bucket.
To avoid sparsely populated buckets, chaining may be allowed to a certain extent.

