INDEXING AND HASHING
Basic Concepts Index: an ordered list of words or phrases used for easy access to desired information or data. Indexing mechanisms are used to speed up access to desired data. Example: the author/title catalog in a library. Search key: an attribute or set of attributes used to look up records in a file.
Basic Concepts (continued) An index file consists of records (entries) of the form (search-key, pointer). Index files are typically much smaller than the original file. There are two basic kinds of indices: ordered indices and hash indices.
Basic Concepts (continued) Ordered indices: based on a sorted ordering of values; search keys are stored in sorted order. Hash indices: based on a uniform distribution of values across a range of buckets; search keys are distributed across buckets using a hash function, which determines the bucket to which each value is assigned.
Techniques for Ordered Indexing and Hashing ( INDEX EVALUATION METRICS ) Each of the techniques to be discussed is best suited to a particular database application, so no one technique can be said to be the best. Each technique must be evaluated on the basis of the following factors: Access Types: the access types supported efficiently. Access types can include finding records with a specified attribute value and finding records whose attribute value falls within a specified range.
INDEX EVALUATION METRICS (Continued) Access Time: Time it takes to find a particular data item/set of items using a particular technique. Insertion Time: Time it takes to insert a new data item. Deletion Time: Time it takes to delete a data item. Space Overhead: The additional space occupied by an index structure.
Ordered Indices Ordered Index: index entries are sorted on the search-key value. Example: an author catalog in a library. Primary Index: in a sequentially ordered file, the index whose search key specifies the sequential order of the file is the primary index, also known as the clustering index. The search key of a primary index is usually, but not necessarily, the primary key.
Ordered Indices Secondary index: an index whose search key specifies an order different from the sequential order of the file. Also called a nonclustering index. Index-sequential file: an ordered sequential file with a primary index.
Dense and Sparse Indices Dense Index Files: In a dense index, an index entry appears for every search-key value in the file. In a dense clustering index, the index record contains the search-key value and a pointer to the first data record with that search-key value. The rest of the records with the same search-key value are stored sequentially after the first record, since the index is a clustering one and the records are therefore sorted on that same search key.
Dense Index Files (continued) In a dense nonclustering index, the index must store a list of pointers to all records with the same search-key value.
Sparse Index Files Sparse Index: In a sparse index, an index entry appears for only some of the search-key values. Sparse indices can be used only if the relation is stored in sorted order of the search key, that is, if the index is a clustering index. As in dense indices, each index entry contains a search-key value and a pointer to the first data record with that search-key value. To locate a record, we find the index entry with the largest search-key value that is less than or equal to the search-key value for which we are searching. We start at the record pointed to by that index entry and follow the pointers in the file until we find the desired record.
Sparse Index: contains index records for only some search-key values. Applicable when records are sequentially ordered on the search key. To locate a record with search-key value K we: Find the index record with the largest search-key value ≤ K. Search the file sequentially starting at the record to which the index record points.
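The lookup procedure above can be sketched in Python. This is a minimal illustration, not an actual database implementation: it assumes the index is a sorted list of (search-key, block-number) entries and the file is a dict of consecutively numbered blocks, each a sorted list of (search-key, record) pairs.

```python
import bisect

def sparse_lookup(index, data_file, key):
    """Locate a record via a sparse index.

    index: sorted list of (search_key, block_number) entries.
    data_file: dict mapping block_number -> sorted list of (search_key, record).
    """
    # Find the index entry with the largest search-key value <= key.
    keys = [k for k, _ in index]
    pos = bisect.bisect_right(keys, key) - 1
    if pos < 0:
        return None  # key precedes every indexed value
    _, block_no = index[pos]
    # Scan the file sequentially starting at that block.
    block = block_no
    while block in data_file:
        for k, record in data_file[block]:
            if k == key:
                return record
            if k > key:
                return None  # passed the key's position; it is absent
        block += 1  # assumes blocks are numbered consecutively
    return None
```

With index entries only for keys 10 and 30, a lookup of key 20 lands on the entry for 10 and scans forward until it finds 20, exactly as the slide describes.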
Comparing sparse index files to dense index files: sparse index files require less space and less maintenance overhead for insertions and deletions than dense indices, but sparse indices are generally slower than dense indices for locating records.
Multilevel Index If the primary index does not fit in memory, access becomes expensive. Solution: treat the primary index kept on disk as a sequential file and construct a sparse index on it. Outer index: a sparse index of the primary index. Inner index: the primary index file. If even the outer index is too large to fit in main memory, yet another level of index can be created, and so on. Indices at all levels must be updated on insertion into or deletion from the file.
Index Update: Record Deletion If deleted record was the only record in the file with its particular searchkey value, the searchkey is also deleted from the index. Single level index deletion: Dense indices: Deletion of searchkey: similar to file record deletion. Sparse indices: If deleted key value exists in the index, the value is replaced by the next searchkey value in the file (in searchkey order). If the next searchkey value already has an index entry, the entry is deleted instead of being replaced.
Index Update: Record Insertion Single-level index insertion: perform a lookup using the search-key value of the inserted record. Dense indices: if the search-key value does not appear in the index, insert it. Sparse indices: if the index stores an entry for each block of the file, no change needs to be made to the index unless a new block is created. If a new block is created, the first search-key value appearing in the new block is inserted into the index. Multilevel insertion/deletion algorithms are simple extensions of the single-level algorithms.
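The single-level dense-index update rules from the last two slides can be sketched as follows. This is an illustrative toy, assuming the dense index is a sorted Python list of (search-key, pointer) entries with one entry per distinct key, pointing at the first record with that key; the caller tells the delete routine whether the deleted record was the last one with its key.

```python
import bisect

def dense_insert(index, key, ptr):
    """Insert into a dense index: add an entry only if the search-key
    value does not already appear (one entry per distinct key value)."""
    pos = bisect.bisect_left(index, (key,))
    if pos < len(index) and index[pos][0] == key:
        return  # key already indexed; entry points at the first record
    index.insert(pos, (key, ptr))

def dense_delete(index, key, was_last_record):
    """Delete from a dense index: remove the entry only when the deleted
    record was the last record in the file with that search-key value."""
    if not was_last_record:
        return
    pos = bisect.bisect_left(index, (key,))
    if pos < len(index) and index[pos][0] == key:
        del index[pos]
```

Comparing against the one-element tuple `(key,)` keeps the list bisectable on the key alone, without ever comparing pointer values.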
Primary and Secondary Indices Indices offer substantial benefits when searching for records, BUT updating indices imposes overhead on database modification: when a file is modified, every index on the file must be updated. A sequential scan using the primary index is efficient, but a sequential scan using a secondary index is expensive, since each record access may fetch a new block from disk. A block fetch requires about 5 to 10 milliseconds, versus about 100 nanoseconds for a memory access.
An index record points to a bucket that contains pointers to all the actual records with that particular search-key value. Secondary indices have to be dense. (Figure: secondary index on the balance field of account.)
Indices on Multiple Keys In general a search key can have more than one attribute. A search key containing more than one attribute is referred to as a composite search key. The structure of the index is the same as that of any other index, the only difference being that the search key is not a single attribute, but rather is a list of attributes. The search key can be represented as a tuple of values, of the form (a1,..., an), where the indexed attributes are A1,..., An.
B+ Tree Index Files B+-tree indices are an alternative to indexed-sequential files. Disadvantage of indexed-sequential files: performance degrades as the file grows, since many overflow blocks get created, and periodic reorganization of the entire file is required. Advantage of B+-tree index files: the tree automatically reorganizes itself with small, local changes in the face of insertions and deletions; reorganization of the entire file is not required to maintain performance. Disadvantage of B+-trees: extra insertion and deletion overhead, plus space overhead. The advantages of B+-trees outweigh the disadvantages, and B+-trees are used extensively.
B+ Tree Node Structure A B+-tree index is a multilevel index, but its structure differs from that of the multilevel index-sequential file. A typical node of a B+-tree has the layout [P1 | K1 | P2 | K2 | ... | Pn-1 | Kn-1 | Pn], where the Ki are the search-key values and the Pi are pointers to children (for non-leaf nodes) or pointers to records or buckets of records (for leaf nodes). The search keys in a node are ordered: K1 < K2 < K3 < ... < Kn-1.
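The traversal step at a non-leaf node can be sketched in a few lines. This is a hedged illustration of one common convention (equal keys descend to the right); the node is represented simply as a sorted list of keys plus a list of child pointers one longer.

```python
import bisect

def next_pointer(keys, pointers, value):
    """Given a non-leaf B+-tree node with keys K1 < ... < Kn-1 and child
    pointers P1 ... Pn, return the child pointer to follow for `value`.
    Convention used here: follow P(i+1) when value >= Ki, so equal keys
    descend into the right subtree."""
    i = bisect.bisect_right(keys, value)
    return pointers[i]
```

Repeating this step from the root down to a leaf is exactly how a B+-tree lookup proceeds; at the leaf, the pointers refer to records or record buckets instead of children.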
B-Tree Index Files B-tree index files are similar to B+-tree index files, but the primary distinction is that B-tree indices eliminate redundant storage of search-key values. A B-tree allows search-key values to appear only once (if they are unique), unlike a B+-tree, where a search-key value may appear in a non-leaf node in addition to appearing in a leaf node.
Multiple-Key Access For certain types of queries it is advantageous to use multiple indices if they are available. Example: select account_number from account where branch_name = "Perryridge" and balance = 1000. There are three possible strategies for processing the query using indices on single attributes: 1. Use the index on branch_name to find accounts with branch name "Perryridge"; test balance = 1000. 2. Use the index on balance to find accounts with balances of $1000; test branch_name = "Perryridge". 3. Use the branch_name index to find pointers to all records pertaining to the Perryridge branch; similarly use the index on balance; then take the intersection of both sets of pointers obtained.
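Strategy 3 can be sketched concisely. This is a toy illustration, assuming each single-attribute index is modeled as a dict mapping an attribute value to a list of record pointers; real systems intersect lists of record identifiers in much the same way.

```python
def query_with_two_indices(branch_index, balance_index, branch, balance):
    """Strategy 3: fetch the pointer set from each single-attribute index
    and intersect them, so only records satisfying BOTH predicates are
    ever fetched from disk."""
    ptrs_branch = set(branch_index.get(branch, []))
    ptrs_balance = set(balance_index.get(balance, []))
    return ptrs_branch & ptrs_balance
```

The benefit over strategies 1 and 2 is that neither predicate's full result set has to be fetched and tested record by record; only the (usually much smaller) intersection is retrieved.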
Static Hashing A bucket is a unit of storage containing one or more records (a bucket is typically a disk block). In a hash file organization we obtain the bucket of a record directly from its search-key value using a hash function. A hash function h is a function from the set of all search-key values K to the set of all bucket addresses B. The hash function is used to locate records for access, insertion, and deletion. Records with different search-key values may be mapped to the same bucket; thus the entire bucket has to be searched sequentially to locate a record.
Example of Hash File Organization There are 10 buckets. The ith letter of the alphabet is assumed to be represented by the integer i. The hash function returns the sum of the integer representations of the characters modulo 10. E.g. h(Perryridge) = 5, h(Round Hill) = 3, h(Brighton) = 3.
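The textbook-style hash function above is easy to reproduce. A minimal sketch, assuming letters are valued a=1 ... z=26 regardless of case and that non-letter characters (such as the space in "Round Hill") are ignored:

```python
def h(branch_name, n_buckets=10):
    """Toy hash: sum the alphabet positions (a=1 ... z=26) of the letters,
    ignoring case and non-letter characters, modulo the bucket count."""
    total = sum(ord(c) - ord('a') + 1
                for c in branch_name.lower() if c.isalpha())
    return total % n_buckets
```

For example, Perryridge sums to 125, so h(Perryridge) = 125 mod 10 = 5; Round Hill sums to 113 and Brighton to 93, so both land in bucket 3, illustrating that distinct keys can share a bucket.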
Hash Functions The worst possible hash function maps all search-key values to the same bucket; this makes access time proportional to the number of search-key values in the file. An ideal hash function is uniform, i.e., each bucket is assigned the same number of search-key values from the set of all possible values. An ideal hash function is also random, so each bucket will have the same number of records assigned to it irrespective of the actual distribution of search-key values in the file. Typical hash functions perform computation on the internal binary representation of the search key. For example, for a string search key, the binary representations of all the characters in the string could be added and the sum modulo the number of buckets returned.
Handling of Bucket Overflows Bucket overflow can occur because of: Insufficient buckets Skew in distribution of records. This can occur due to two reasons: multiple records have same searchkey value chosen hash function produces non-uniform distribution of key values Although the probability of bucket overflow can be reduced, it cannot be eliminated; it is handled by using overflow buckets.
Handling of Bucket Overflows (Continued) Overflow chaining: the overflow buckets of a given bucket are chained together in a linked list. This scheme is called closed hashing. An alternative, called open hashing, which does not use overflow buckets, is not suitable for database applications.
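Overflow chaining can be sketched compactly. This is an illustrative model, not a disk-based implementation: each primary bucket is a list of fixed-capacity "pages", where pages after the first play the role of the chained overflow buckets.

```python
class ChainedHashFile:
    """Hash file with overflow chaining (closed hashing): each primary
    bucket holds up to `capacity` records; extra records go into
    overflow buckets linked off the primary bucket."""

    def __init__(self, n_buckets=10, capacity=2):
        self.n_buckets = n_buckets
        self.capacity = capacity
        # Each bucket is a list of pages; pages after the first form
        # the overflow chain.
        self.buckets = [[[]] for _ in range(n_buckets)]

    def _hash(self, key):
        return hash(key) % self.n_buckets

    def insert(self, key, record):
        chain = self.buckets[self._hash(key)]
        if len(chain[-1]) >= self.capacity:
            chain.append([])        # allocate a new overflow bucket
        chain[-1].append((key, record))

    def lookup(self, key):
        # The whole chain must be scanned, since different search-key
        # values may be mapped to the same bucket.
        return [rec for page in self.buckets[self._hash(key)]
                for k, rec in page if k == key]
```

Forcing everything into one bucket (n_buckets=1) shows the chain growing: the third insert no longer fits in the primary page and lands in a freshly allocated overflow page.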
Example of Overflow Chaining
Hash Indices Hashing can be used not only for file organization, but also for index-structure creation. A hash index organizes the search keys, with their associated record pointers, into a hash file structure. Strictly speaking, hash indices are always secondary indices: if the file itself is organized using hashing, a separate primary hash index on it using the same search key is unnecessary. However, we use the term hash index to refer to both secondary index structures and hash-organized files.
Example of Hash Index
Deficiencies of Static Hashing In static hashing, the function h maps search-key values to a fixed set B of bucket addresses, but databases grow or shrink over time. If the initial number of buckets is too small and the file grows, performance will degrade due to too many overflow buckets. If space is allocated for anticipated growth, a significant amount of space will be wasted initially (and buckets will be underfull). If the database shrinks, space is again wasted. One solution is periodic reorganization of the file with a new hash function, but this is expensive and disrupts normal operations. A better solution: allow the number of buckets to be modified dynamically.
Dynamic Hashing Most databases grow larger over time. If we are to use static hashing for such a database, we have three classes of options: Choose a hash function based on the current file size. This option will result in performance degradation as the database grows. Choose a hash function based on the anticipated size of the file at some point in the future. Although performance degradation is avoided, a significant amount of space may be wasted initially. Periodically reorganize the hash structure in response to file growth. Such a reorganization involves choosing a new hash function, recomputing the hash function on every record in the file, and generating new bucket assignments. This reorganization is a massive, time-consuming operation; moreover, it is necessary to forbid access to the file during reorganization.
Dynamic Hashing Dynamic hashing is good for a database that grows and shrinks in size; it allows the hash function to be modified dynamically. Extendable hashing is one form of dynamic hashing. The hash function generates values over a large range: typically b-bit integers, with b = 32. At any time, only a prefix of the hash value is used to index into a table of bucket addresses. Let the length of the prefix be i bits, 0 ≤ i ≤ 32. Bucket address table size = 2^i. Initially i = 0. The value of i grows and shrinks as the size of the database grows and shrinks. Multiple entries in the bucket address table may point to the same bucket; thus the actual number of buckets is ≤ 2^i. The number of buckets also changes dynamically due to coalescing and splitting of buckets.
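A simplified extendable-hashing sketch follows. It is illustrative only: it uses the low-order i bits of the hash value instead of a prefix (an equivalent convention), tiny in-memory buckets instead of disk blocks, and omits bucket coalescing; class and variable names are made up for the example.

```python
class Bucket:
    def __init__(self, local_depth, capacity=2):
        self.local_depth = local_depth
        self.capacity = capacity
        self.items = {}

class ExtendableHash:
    """Simplified extendable hashing: the bucket address table has
    2**global_depth entries, and several entries may share one bucket,
    so the actual number of buckets is <= 2**global_depth."""

    def __init__(self, capacity=2):
        self.global_depth = 0
        self.dir = [Bucket(0, capacity)]   # bucket address table

    def _index(self, key):
        # Use the low-order global_depth bits of the hash value.
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.dir[self._index(key)].items.get(key)

    def put(self, key, value):
        b = self.dir[self._index(key)]
        if key in b.items or len(b.items) < b.capacity:
            b.items[key] = value
            return
        # Bucket full: split it, doubling the address table if needed.
        if b.local_depth == self.global_depth:
            self.dir = self.dir + self.dir   # double the table
            self.global_depth += 1
        b.local_depth += 1
        new = Bucket(b.local_depth, b.capacity)
        high_bit = 1 << (b.local_depth - 1)
        # Table entries whose newly significant bit is 1 now point
        # at the new bucket.
        for i in range(len(self.dir)):
            if self.dir[i] is b and (i & high_bit):
                self.dir[i] = new
        # Redistribute the old bucket's records between the two buckets.
        old_items = list(b.items.items())
        b.items.clear()
        for k, v in old_items:
            self.dir[self._index(k)].items[k] = v
        self.put(key, value)   # retry; may trigger a further split
```

The key property the slide describes is visible here: splitting one overfull bucket touches only that bucket (and, occasionally, doubles the address table); the rest of the file is untouched.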
General Extendable hash Structure
Comparison of Ordered Indexing and Hashing Issues to consider: the cost of periodic reorganization; the relative frequency of insertions and deletions; whether it is desirable to optimize average access time at the expense of worst-case access time; and the expected type of queries. Hashing is generally better at retrieving records having a specified value of the key; if range queries are common, ordered indices are to be preferred. IN PRACTICE: PostgreSQL supports hash indices but discourages their use due to poor performance; Oracle supports static hash organization, but not hash indices; SQL Server supports only B+-trees.
Bitmap Indices Bitmap indices are a specialized type of index designed for easy and efficient querying on multiple keys, although each bitmap index is built on a single key. For bitmap indices to be used, records in a relation must be numbered sequentially, starting from 0 (zero). Bitmap indices are applicable on attributes that take on a relatively small number of distinct values. Examples: gender, country, state, income level (income broken up into a small number of ranges such as 0-9999, ..., 50000-infinity).
Bitmap Index Structure In its simplest form, a bitmap index on the attribute A of relation r consists of one bitmap for each value that A can take. Each bitmap has as many bits as the number of records in the relation. The ith bit of the bitmap for value vj is set to 1 if the record numbered i has the value vj for attribute A. All other bits of the bitmap are set to 0.
Bitmap Indices (Continued)
Bitmap indices are useful for queries on multiple attributes, though not particularly useful for single-attribute queries. Queries are answered using bitmap operations: intersection (and), union (or), and complementation (not). Intersection and union take two bitmaps of the same size and apply the operation to corresponding bits to get the result bitmap; complementation flips every bit of a single bitmap. Example (illustrative 6-bit bitmaps): 110010 AND 001011 = 000010; 110010 OR 001011 = 111011; NOT 110010 = 001101. Bitmap indices are generally very small compared with the relation size. For example, if a record is 100 bytes, the space for a single bitmap is 1/800 of the space used by the relation; if the number of distinct attribute values is 8, the bitmaps together occupy only 1% of the relation size.
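Building the bitmaps and answering a two-attribute query can be sketched as follows. The attribute values are made up for illustration, and bitmaps are modeled as plain lists of 0/1 (a real system would pack them into machine words).

```python
def build_bitmaps(values):
    """Build one bitmap per distinct value of an attribute: bit i of the
    bitmap for value v is 1 iff record number i has value v."""
    n = len(values)
    bitmaps = {}
    for i, v in enumerate(values):
        bitmaps.setdefault(v, [0] * n)[i] = 1
    return bitmaps

def bm_and(a, b):
    """Bitmap intersection: AND corresponding bits of two bitmaps."""
    return [x & y for x, y in zip(a, b)]
```

A query such as "gender = m and income_level = L1" is then answered by ANDing the two relevant bitmaps; the positions of the 1 bits in the result are exactly the record numbers satisfying both predicates.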
Index Definition in SQL The SQL standard does not provide any way for the database user or administrator to control what indices are created and maintained in the database system. Indices are not required for correctness, since they are redundant data structures. However, indices are important for efficient processing of transactions, including both update transactions and queries. Indices are also important for efficient enforcement of integrity constraints. In principle, a database system can decide automatically what indices to create. However, because of the space cost of indices, as well as the effect of indices on update processing, it is not easy to automatically make the right choices about what indices to maintain. Therefore, most SQL implementations provide the programmer control over creation and removal of indices via data-definition language commands.
Index Definition in SQL (Continued) We create an index with the create index command: create index &lt;index-name&gt; on &lt;relation-name&gt; (&lt;attribute-list&gt;); For example, to define an index named dept_index on the instructor relation with dept_name as the search key: create index dept_index on instructor (dept_name); Use create unique index to specify and enforce the condition that the search key is a candidate key. To drop an index, the name specified for the index is required: drop index &lt;index-name&gt;; Most database systems allow specification of the type of index and of clustering.
END OF SLIDES. At your leisure, you can read up on partitioned hashing.