Database Systems (資料庫系統)


Database Systems (資料庫系統) November 23, 2011 Lecture #9

Announcement The TA will talk about the midterm exam and the grading guide.

Hash-Based Indexing Chapter 11

Introduction Hash-based indexes are best for equality selections; they cannot support range searches. Equality selections are useful for natural join operations.

Review: Tree Index ISAM vs. B+ trees: what is the main difference between them? Static vs. dynamic structure. What is the tradeoff between them? Graceful table growth/shrink (dynamic B+ tree) vs. concurrency performance (static ISAM).

Similar tradeoff for hashing Is it possible to exploit a dynamic structure for better performance? One static hashing technique and two dynamic hashing techniques: Extendible Hashing and Linear Hashing.

Basic Static Hashing The # of primary pages is fixed, allocated sequentially, and never de-allocated; overflow pages are added if needed. h(k) mod N = bucket to which the data entry with key k belongs (N = # of buckets). If N = 2^i, the bucket number is simply the i least significant bits of h(k). [Figure: h(key) mod N maps each key to one of the primary bucket pages 0..N-1; full buckets chain overflow pages, e.g., bucket 1 holds 3*, 5* with 97* on an overflow page, and other buckets hold 4*, 6* / 0* / 11*.]

Static Hashing (Contd.) Buckets contain data entries. The hash function takes the search key field of record r and distributes values over the range 0..N-1. h(key) = (a * key + b) usually works well; a and b are constants, and lots is known about how to tune h. Cost for insertion/delete/search: 2/2/1 disk page I/Os (assuming no overflow chains).
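The static scheme above can be sketched in a few lines of Python. This is a toy model, not the textbook's code: the bucket count N, page capacity B, and the trivial h(key) = key mod N are all illustrative choices.

```python
# Minimal sketch of static hashing with overflow chains.
# N primary buckets are fixed for the life of the file; each bucket is a
# list of pages, and pages after the first form its overflow chain.
N = 4        # number of primary buckets (never changes)
B = 2        # data entries per page (toy value)

buckets = [[[]] for _ in range(N)]

def h(key):
    # h(key) = (a*key + b) mod N with a=1, b=0 for readability.
    return key % N

def insert(key):
    pages = buckets[h(key)]
    if len(pages[-1]) == B:      # last page full -> chain a new overflow page
        pages.append([])
    pages[-1].append(key)

def search(key):
    # Equality search: hash once, then scan the overflow chain linearly.
    return any(key in page for page in buckets[h(key)])

for k in [3, 5, 97, 4, 6, 0, 11]:   # the slide's example entries
    insert(k)
print(search(97), search(8))   # True False
```

The linear scan inside `search` is exactly where long overflow chains hurt: with a skewed key distribution one bucket's chain can grow without bound while the primary page count stays fixed.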

What is the performance problem for static hashing? Long overflow chains can develop and degrade performance. Why poor performance? Searches must scan through overflow chains linearly. How to fix this? Extendible and Linear Hashing are dynamic techniques that address it.

Extendible Hashing Simple solution – remove overflow chains. When a bucket (primary page) becomes full, re-organize the file by adding (doubling) buckets. What is the cost concern in doubling buckets? High cost: rehashing all entries – reading and writing all pages is expensive! [Figure: doubling grows the primary bucket pages from 0..N-1 to 0..2N-1, forcing every entry to be rehashed.]

How to minimize the rehashing cost? Use another level of indirection: a directory of pointers to buckets. To insert 0 (00) into a full bucket, double the directory (no rehashing of other buckets) and split just the overflowed bucket. What is the cost here? [Figure: a 2-bit directory (00, 01, 10, 11) points to Bucket A (4*, 12*, 32*, 8*), Bucket B (1*, 5*, 21*, 13*), Bucket C (10*), and Bucket D (15*, 7*, 19*); doubling the directory and splitting Bucket A produces its split image, Bucket A1.]

Extendible Hashing The directory is much smaller than the file, so doubling it is much cheaper. Only one page of data entries is split. How to adjust the hash function? Before doubling the directory: h(r) → 0..N-1 buckets. After doubling: h(r) → 0..2N-1 (the range is doubled).

Example Directory is an array of size 4. What info does this hash data structure maintain? It needs to know the current # of bits used by the hash function: the Global Depth. It may also need to know the # of bits used to hash into a specific bucket: that bucket's Local Depth. Can global depth < local depth? [Figure: a 2-bit directory (00, 01, 10, 11) with global depth 2 points to data pages Bucket A (4*, 12*, 32*, 16*), Bucket B (1*, 5*, 21*, 13*), Bucket C (10*), Bucket D (15*, 7*, 19*), each with local depth 2.]

Insert 20 (10100): Double Directory [Figure: global depth grows from 2 to 3; the 3-bit directory (000..111) now points to Bucket A (32*, 16*), Bucket B (1*, 5*, 21*, 13*), Bucket C (10*), Bucket D (15*, 7*, 19*), and Bucket A2 (4*, 12*, 20*), the `split image' of Bucket A; A and A2 have local depth 3.] Know the difference between doubling the directory & splitting a bucket. Double directory: increment the global depth, rehash bucket A, increment its local depth. Why track local depth? Split the bucket -> check whether local depth = global depth -> if yes, double the directory, then rehash the entries and distribute them into the two buckets -> if no, just rehash the entries and distribute them into the two buckets -> increment the local depth.

Insert 9 (1001): only Split Bucket [Figure: Bucket B (1*, 5*, 21*, 13*) splits into Bucket B (1*, 9*) and Bucket B2 (5*, 21*, 13*), the split image of Bucket B; both get local depth 3 while the directory stays at global depth 3. Buckets A (32*, 16*), C (10*), D (15*, 7*, 19*), and A2 (4*, 12*, 20*) are unchanged.] Only split bucket: rehash bucket B and increment its local depth.

Points to Note What is the global depth of the directory? The max # of bits needed to tell which bucket an entry belongs to. What is the local depth of a bucket? The # of bits used to determine whether an entry belongs to this bucket. When does a bucket split cause directory doubling? Before the insert, the bucket is full & its local depth = global depth. How to efficiently double the directory? The directory is doubled by copying it over and `fixing' the pointer to the split image page. Note that this works only when the directory uses the least significant bits.
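The whole insert procedure above – full bucket, split vs. doubling decision, split image, pointer fixing – can be captured in a short Python sketch. This is an illustrative model under assumed names (`Bucket`, bucket capacity B=4, least-significant-bit addressing), not the textbook's implementation; a production version would also re-split when all entries land on one side.

```python
# Sketch of extendible-hashing insertion with LSB directory addressing.
B = 4  # data entries per bucket (one page), as in the slide example

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.entries = []

global_depth = 2
directory = [Bucket(2) for _ in range(4)]  # 2-bit directory, slots 00..11

def search(key):
    # Equality search: low global_depth bits pick the directory slot.
    return key in directory[key & ((1 << global_depth) - 1)].entries

def insert(key):
    global global_depth, directory
    bucket = directory[key & ((1 << global_depth) - 1)]
    if len(bucket.entries) < B:
        bucket.entries.append(key)
        return
    # Full bucket: if local depth = global depth, double the directory
    # first -- doubling is just copying under LSB addressing.
    if bucket.local_depth == global_depth:
        directory = directory + directory[:]
        global_depth += 1
    # Split: rehash the bucket's entries on one more bit.
    bucket.local_depth += 1
    image = Bucket(bucket.local_depth)          # the `split image'
    bit = 1 << (bucket.local_depth - 1)
    old = bucket.entries + [key]
    bucket.entries = []
    for k in old:
        (image if k & bit else bucket).entries.append(k)
    # Fix the directory slots whose extra bit is set to point at the image.
    for i, b in enumerate(directory):
        if b is bucket and i & bit:
            directory[i] = image
    # (A full implementation would re-split if one side still overflows.)

for k in [4, 12, 32, 16, 20]:   # replays the slide's bucket-A scenario
    insert(k)
print(global_depth, search(20))  # 3 True
```

Running the slide's scenario, inserting 20 finds bucket 00 full with local depth = global depth = 2, so the directory doubles to 3 bits and the bucket splits into (32, 16) and its image (4, 12, 20), matching the figures above.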

Directory Doubling Why use the least significant bits in the directory? It allows doubling via copying! [Figure: Least Significant vs. Most Significant. With least-significant bits, the 2-bit directory (00, 01, 10, 11) doubles to a 3-bit directory (000..111) by appending a copy of itself, so existing slots keep their positions and only the pointers to split buckets need fixing; with most-significant bits the slots would have to be interleaved.]
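Why copying works with least-significant bits can be seen in a three-line example (bucket names are made up): slot i and slot i + len(dir) alias the same bucket, so every key still resolves correctly after the doubling.

```python
# Doubling an LSB-addressed directory is appending a copy of itself.
dir2 = ['A', 'B', 'C', 'D']     # 2-bit directory: slots 00, 01, 10, 11
dir3 = dir2 + dir2[:]           # 3-bit directory after doubling

# Every key finds the same bucket before and after the doubling,
# because dir3[i] == dir2[i & 0b11] for all slots i.
for key in range(64):
    assert dir3[key & 0b111] == dir2[key & 0b11]
print("doubling preserved every lookup")
```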

Directory Copying via Insert 20 (10100) [Figure: the 2-bit directory (00..11) pointing to Bucket A (4*, 12*, 32*, 8*), Bucket B (1*, 5*, 21*, 13*), Bucket C (10*), and Bucket D (15*, 7*, 19*) is doubled to 3 bits by copying; slots 000 and 100 both point to Bucket A until A is split into A and its `split image' Bucket A2.]

Comments on Extendible Hashing If the directory fits in memory, an equality search is answered with one disk access; else two. What is the problem with extendible hashing? If the distribution of hash values is skewed (concentrated on a few buckets), the directory can grow large. Can you come up with one insertion leading to multiple splits?

Skewed data distribution (multiple splits) Assume each bucket holds two data entries. Insert 2 (binary 10) – how many directory doublings? Insert 16 (binary 10000) – how many directory doublings? How to solve this data skew problem? [Figure: a 1-bit directory with global depth 1 points to one bucket holding 0* and 8*, local depth 1.]

Insert 2 (10) [Figure: one directory doubling; global depth goes from 1 to 2, 0* and 8* stay together (they agree on their low two bits, 00), and 2* lands in slot 10.]

Insert 16 (10000) [Figure: repeated doublings; 0*, 8*, and 16* agree on their low three bits (000), so the directory must double from 1 bit to 2 bits and then to 3 bits, and the bucket keeps overflowing until a longer suffix separates them – a single insertion can cause multiple doublings.]

Comments on Extendible Hashing Delete: If removal of a data entry makes a bucket empty, the bucket can be merged with its `split image'. If each directory element points to the same bucket as its split image, we can halve the directory.

Delete 10* [Figure: deleting 10* empties its bucket, which is merged with its split image; the directory slots for the merged pair now point to the same bucket, while Buckets A (32*, 16*, 4*, 12*) and B (1*, 5*, 21*, 13*) with split image B2 (15*, 7*, 19*) are unchanged.]

Delete 15*, 7*, 19* [Figure: deleting 15*, 7*, and 19* empties another bucket, which is also merged with its split image; now every directory element points to the same bucket as its split image, so the directory is halved to 1 bit: slot 0 → Bucket A (32*, 16*, 4*, 12*), slot 1 → Bucket B (1*, 5*, 21*, 13*).]

Linear Hashing (LH) Another dynamic hashing scheme, an alternative to Extendible Hashing. What are the problems in static/extendible hashing? Static hashing: long overflow chains. Extendible hashing: data skew causing a large directory. Is it possible to come up with a more balanced solution? Shorter overflow chains than static hashing, and no need for extendible hashing's directory. How do you get rid of the directory (indirection)?

Linear Hashing (LH) Basic Idea: Pages are split when overflows occur – but not necessarily the page that overflows. Splitting occurs in turn, in a round-robin fashion: one by one from the first bucket to the last bucket. Use a family of hash functions h0, h1, h2, ...; each function’s range is twice that of its predecessor. When all the pages at one level (the current hash function) have been split, the next level is applied. Splitting occurs gradually, and primary pages are allocated in order & consecutively.

Linear Hashing Verbal Algorithm Initial Stage. The initial level distributes entries into N0 buckets. Call the hash function that performs this h0.

Linear Hashing Verbal Algorithm Splitting buckets. If a bucket overflows, its primary page is chained to an overflow page (same as in static hashing). Also, when a bucket overflows, some bucket is split. The first bucket to be split is the first bucket in the file (not necessarily the bucket that overflows); the next bucket to be split is the second bucket in the file, and so on, until the N0-th has been split. When buckets are split, their entries (including those in overflow pages) are distributed using h1. To access split buckets, the next-level hash function (h1) is applied; h1 maps entries to 2N0 (= N1) buckets.
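The verbal algorithm above can be sketched in Python before walking through the example. This is a toy model under the example's assumptions (N0 = 4 buckets, three entries per primary page); an overflow chain is modeled as extra entries in the same list, and names like `next_split` are illustrative.

```python
# Sketch of linear hashing: round-robin splits, no directory.
N0 = 4       # buckets at level 0
CAP = 3      # entries per primary page; extras model the overflow chain

level = 0
next_split = 0                       # the `next' pointer from the slides
buckets = [[] for _ in range(N0)]

def h(lvl, key):
    # Family of hash functions: h_i(key) = key mod (2^i * N0),
    # so each function's range is twice its predecessor's.
    return key % (N0 << lvl)

def bucket_of(key):
    # Apply h_level first; a result before `next_split' means that
    # bucket has already been split, so use h_{level+1} instead.
    i = h(level, key)
    if i < next_split:
        i = h(level + 1, key)
    return i

def search(key):
    return key in buckets[bucket_of(key)]

def insert(key):
    global level, next_split
    i = bucket_of(key)
    buckets[i].append(key)
    if len(buckets[i]) > CAP:        # overflow triggers a split...
        buckets.append([])           # ...of bucket next_split, not of i
        old, buckets[next_split] = buckets[next_split], []
        next_split += 1
        for k in old:                # redistribute with the next-level hash
            buckets[h(level + 1, k)].append(k)
        if next_split == (N0 << level):   # whole level has been split
            level += 1
            next_split = 0

for k in [64, 36, 1, 17, 5, 6, 31, 15, 9]:   # the slide's example
    insert(k)
print(next_split, search(36))   # 1 True
```

Replaying the slides: inserting 9 overflows the second bucket, but it is the first bucket that splits, sending 64 to bucket 000 and 36 to the new bucket 100, after which `next_split` points at bucket 01 – exactly the state in the next figure.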

Linear Hashing Example Initially, the level equals 0 and N0 equals 4 (three entries fit on a page). h0 range = 4 buckets. Note that next indicates which bucket is to split next (round robin). Now consider what happens when 9 (1001) is inserted (it will not fit in the second bucket). [Figure: next points at bucket 00; buckets 00..11 hold (64, 36), (1, 17, 5), (6), (31, 15).]

Insert 9 (1001) [Figure: bucket 00 is split using h1 into bucket (000)h1 holding 64 and a new bucket (100)h1 holding 36; next now points at bucket (01)h0, whose primary page holds 1, 17, 5 with 9 on an overflow page; buckets (10)h0: 6 and (11)h0: 31, 15 are unchanged.]

Linear Hashing Example 2 The page indicated by next is split (the first one), and next is incremented. An overflow page is chained to the primary page to hold the inserted value. Note that the split page is not necessarily the overflowing page – splitting is round robin. If h0 maps a value to a bucket from zero to next − 1 (just the first page in this case), h1 must be used to find the entry. Note how the new page falls naturally into the sequence as the fifth page. [Figure: (000)h1: 64; (01)h0: 1, 17, 5, 9; (10)h0: 6; (11)h0: 31, 15; (100)h1: 36.]

Insert 8 (1000), 7 (111), 18 (10010), 14 (1100) [Figure: (000)h1: 64, 8; (01)h0 (next): 1, 17, 5 with 9 on overflow; (10)h0: 6, 18, 14; (11)h0: 31, 15, 7; (100)h1: 36. None of these inserts creates a new overflow page, so no bucket is split.]

Before Insert 11 (1011) [Figure: same state: (000)h1: 64, 8; (01)h0 (next): 1, 17, 5 with 9 on overflow; (10)h0: 6, 18, 14; (11)h0: 31, 15, 7; (100)h1: 36. Since h0(11) = 11, which is full, inserting 11 will create an overflow and trigger a split of bucket 01 (the bucket next points to).]

After Insert 11 (1011) Which hash function (h0 or h1) is used for searching 9 (1001)? How about searching for 18 (10010)? [Figure: bucket 01 is split using h1 into (001)h1: 1, 17, 9 and a new bucket (101)h1: 5; next advances to bucket 10. State: (000)h1: 64, 8; (001)h1: 1, 17, 9; (10)h0: 6, 18, 14; (11)h0: 31, 15, 7 with 11 on overflow; (100)h1: 36; (101)h1: 5.]

Linear Hashing Insert 32, 16 (triggering the next split); then insert 10, 13, 23 (triggering the last split of this round). [Figure: next1, next2, next3 mark successive positions of the next pointer; the final buckets hold 000: 64, 8, 32, 16; 001: 1, 17, 9; 010: 10, 18; 011: 11; 100: 36; 101: 5, 13; 110: 6, 14; 111: 31, 15, 7, 23.] After the last split of the round, the base level is 1 (N1 = 8), so h1 is used everywhere; subsequent splits will use h2 for entries between the first bucket and next − 1.

Notes for Linear Hashing Why does linear hashing not need a directory while extendible hashing does? Consider finding the i-th bucket: new buckets are created in order (sequentially, i.e., round robin), so bucket i sits at a computable position in the file. Level progression: once all Ni buckets of the current level (i) have been split, the hash function hi is replaced by hi+1. The splitting process then starts again at the first bucket, and hi+2 is applied to find entries in split buckets.

Tree vs. Hash-Structured Indexing Compare them on the cost of Equality search: find all employees of age 30. Range search: find all employees of ages 30~39.

Tree vs. Hash-Structured Indexing A tree index supports both range searches and equality searches efficiently. Why efficient range searches? Data entries (on the leaf nodes of the tree) are sorted: perform an equality search for the first qualifying data entry + scan to find the rest. Data records also need to be sorted by the search key in case the range search accesses record fields other than the search key. A hash index supports equality search efficiently, but not range search. Why inefficient range searches? Data entries are hashed (using a hash function), not sorted.
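The range-search procedure described above – equality-search the first qualifying entry, then scan – can be illustrated over a sorted list standing in for B+-tree leaf entries. The ages below are made-up example data.

```python
# Why sorted (tree-leaf) data entries support range search:
# find the first qualifying entry, then scan rightward until out of range.
import bisect

ages = sorted([25, 30, 31, 33, 38, 42, 55])   # data entries on the leaves

def range_search(lo, hi):
    out = []
    # "Equality search" for the first entry >= lo (binary search),
    # then scan to collect the rest of the range.
    for i in range(bisect.bisect_left(ages, lo), len(ages)):
        if ages[i] > hi:
            break
        out.append(ages[i])
    return out

print(range_search(30, 39))   # [30, 31, 33, 38]

# A hash index scatters these entries across buckets by h(age),
# destroying the order, so the same query would have to probe every
# bucket (or every possible age in the range).
```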