
1 B+ Tree Index Tuning

2 Overview

3 B+-Tree Scalability
Typical order: 100. Typical fill-factor: 67%.
– Average fanout = 133
Typical capacities (root at Level 1, holding 133 entries):
– Level 5: 133^4 = 312,900,721 records
– Level 4: 133^3 = 2,352,637 records
Can often hold the top levels in the buffer pool:
– Level 1 = 1 page = 8 KBytes
– Level 2 = 133 pages = 1 MByte
– Level 3 = 17,689 pages = 133 MBytes
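The capacities above follow directly from the fanout; a quick sketch of the arithmetic (fanout and page size taken from the slide, function names are illustrative):

```python
# Capacity of a B+-tree level as a function of average fanout.
# Fanout 133 and 8 KB pages are the slide's assumptions.
FANOUT = 133
PAGE_BYTES = 8 * 1024

def records_reachable(level: int) -> int:
    """Records reachable from a root at level 1 down through `level` levels."""
    return FANOUT ** (level - 1)

def pages_at_level(level: int) -> int:
    """Number of pages on a given level (root = level 1)."""
    return FANOUT ** (level - 1)

print(records_reachable(4))   # 2,352,637
print(records_reachable(5))   # 312,900,721
print(pages_at_level(3))      # 17,689 pages on level 3
```

Multiplying the level-3 page count by the page size shows why only the top two or three levels comfortably fit in a buffer pool.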

4 A Note on 'Order'
In practice, the order (d) concept is replaced by a physical space criterion ('at least half-full').
– Index pages can typically hold many more entries than leaf pages.
– Variable-sized records and search keys mean different nodes will contain different numbers of entries.
– Even with fixed-length fields, multiple records with the same search key value (duplicates) can lead to variable-sized data entries.

5 Inserting a Data Entry into a B+ Tree
Find the correct leaf L. Put the data entry onto L.
– If L has enough space, done!
– Else, must split L (into L and a new node L2): redistribute entries evenly, copy up the middle key, and insert an index entry pointing to L2 into the parent of L.
This can happen recursively.
– To split an index node, redistribute entries evenly, but push up the middle key. (Contrast with leaf splits.)
Splits "grow" the tree; a root split increases its height.
– Tree growth: gets wider, or one level taller at the top.
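The copy-up vs. push-up distinction can be sketched in a few lines, assuming entries and keys are kept as sorted Python lists (a toy model, not a full implementation; names are illustrative):

```python
# Sketch of the two B+-tree split rules from the slide.

def split_leaf(entries):
    """Split an overfull leaf. The middle key is *copied up*: it becomes
    the separator for the parent AND remains in the new right leaf."""
    mid = len(entries) // 2
    left, right = entries[:mid], entries[mid:]
    return left, right, right[0]          # separator = first key of right leaf

def split_index(keys):
    """Split an overfull index node. The middle key is *pushed up*: it
    moves to the parent and appears in neither child."""
    mid = len(keys) // 2
    return keys[:mid], keys[mid + 1:], keys[mid]

# Matches the running example: inserting 8 overfills leaf [2, 3, 5, 7, 8];
# 5 is copied up and still appears in the right leaf.
print(split_leaf([2, 3, 5, 7, 8]))       # ([2, 3], [5, 7, 8], 5)
# Index node [5, 13, 17, 24, 30] splits; 17 is pushed up and
# disappears from both halves.
print(split_index([5, 13, 17, 24, 30]))  # ([5, 13], [24, 30], 17)
```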

6 Inserting 7 & 8 into Example B+ Tree
[Tree diagrams omitted in this transcript.]
(Note that 5 is copied up and continues to appear in the leaf.)
Observe how minimum occupancy is guaranteed in both leaf and index page splits.

7 Insertion (Cont.)
Note the difference between copy-up and push-up; be sure you understand the reasons for this.
(Note that 17 is pushed up and appears only once in the index. Contrast this with a leaf split.)
[Tree diagram omitted in this transcript.]

8 Example B+ Tree After Inserting 8
Notice that the root was split, leading to an increase in height.
In this example, we could avoid splitting by re-distributing entries; however, this is usually not done in practice. Why?
– Checking whether redistribution is possible may increase I/Os.
[Tree diagram omitted in this transcript.]

9 Deleting a Data Entry from a B+ Tree
Start at the root, find the leaf L where the entry belongs. Remove the entry.
– If L is at least half-full, done!
– If L has only d-1 entries: try to re-distribute, borrowing from a sibling (an adjacent node with the same parent as L); if re-distribution fails, merge L and the sibling.
If a merge occurred, must delete the entry (pointing to L or the sibling) from the parent of L.
Merges could propagate to the root, decreasing height.
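The redistribute-or-merge decision above can be sketched as follows, assuming each leaf must hold at least d entries and nodes are plain sorted lists (`handle_underflow` and the constant `D` are illustrative names, not from the slides):

```python
# Sketch of the leaf-level underflow decision (d = 2 here).
D = 2

def handle_underflow(node, right_sibling):
    """Called when `node` has fallen to d-1 entries."""
    if len(right_sibling) > D:
        # Re-distribute: borrow the sibling's smallest entry.
        node.append(right_sibling.pop(0))
        return "redistributed"
    # Sibling is at minimum occupancy: merge. The parent must then
    # drop its entry for one of the two nodes (may propagate upward).
    node.extend(right_sibling)
    right_sibling.clear()
    return "merged"
```

After redistribution the parent's separator key must be updated; after a merge the parent loses an entry, which can itself cause underflow all the way up to the root.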

10 Example Tree After (Inserting 8, Then) Deleting 19
Deleting 19 is easy.
[Tree diagram omitted in this transcript.]

11 Example Tree After Deleting 20 ...
Deleting 20 is done with re-distribution. Notice how the middle key is copied up.
[Tree diagram omitted in this transcript.]

12 ... And Then Deleting 24
Must merge. Observe the 'toss' of the index entry (on the right) and the 'pull down' of the index entry (below).
[Tree diagrams omitted in this transcript.]

13 Example of Non-leaf Re-distribution (Delete 24)
In contrast to the previous example, we can re-distribute an entry from the left child of the root to the right child.
[Tree diagrams omitted in this transcript.]

14 After Re-distribution
Intuitively, entries are re-distributed by 'pushing through' the splitting entry in the parent node.
It suffices to re-distribute the index entry with key 20; we've re-distributed 17 as well for illustration.
[Tree diagram omitted in this transcript.]

15 B+ Tree Deletions in Practice
– Often, merging/coalescing is not implemented.
– Too hard and not worth it!

16 Duplicates
Use overflow pages.
Several leaf pages may contain entries with the same key.
– To retrieve all data entries with a given key, we search for the left-most data entry with that key, then follow the leaf sequence pointers to find the other pages.
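The left-most-then-follow-pointers scan can be sketched like this, assuming leaves are chained by next-pointers (the `Leaf` class and `scan_duplicates` are hypothetical names for illustration):

```python
# Sketch of scanning duplicates across chained B+-tree leaves.
class Leaf:
    def __init__(self, entries, nxt=None):
        self.entries = entries      # sorted (key, rid) pairs
        self.next = nxt             # right-sibling leaf, or None

def scan_duplicates(leftmost, key):
    """Start at the left-most leaf that holds `key`; follow sibling
    links until a larger key (or the end of the leaf chain) is seen."""
    rids = []
    leaf = leftmost
    while leaf is not None:
        for k, rid in leaf.entries:
            if k == key:
                rids.append(rid)
            elif k > key:
                return rids         # keys are sorted: we are done
        leaf = leaf.next
    return rids
```

Because leaves are sorted and chained, the scan stops as soon as it sees a key larger than the search key.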

17 B+-Tree Performance
Tree levels
– Tree fanout: size of key, page utilization
Tree maintenance
– Online: on inserts, on deletes
– Offline
Tree locking
Tree root in main memory

18 B+-Tree Performance
Key length influences fanout.
– Choose a small key when creating an index.
– Key compression:
Prefix compression (Oracle 8, MySQL): only store the part of the key that is needed to distinguish it from its neighbors: Smi, Smo, Smy for Smith, Smoot, Smythe.
Front compression (Oracle 5): adjacent keys have their front portion factored out: Smi, (2)o, (2)y. There are problems with this approach:
– Processor overhead for maintenance.
– Locking Smoot requires locking Smith too.
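Prefix compression as described here can be sketched by computing, for each key, the shortest prefix that still distinguishes it from its neighbors (function names are illustrative, not from any DBMS API):

```python
# Sketch of prefix compression: keep the shortest prefix of each key
# that distinguishes it from its adjacent neighbors.
def _common_len(a: str, b: str) -> int:
    """Length of the longest common prefix of a and b."""
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def compress_keys(keys):
    out = []
    for i, k in enumerate(keys):
        need = 1                                  # keep at least one char
        for j in (i - 1, i + 1):                  # both neighbors
            if 0 <= j < len(keys):
                need = max(need, min(_common_len(keys[j], k) + 1, len(k)))
        out.append(k[:need])
    return out

print(compress_keys(["Smith", "Smoot", "Smythe"]))  # ['Smi', 'Smo', 'Smy']
```

This reproduces the slide's Smi/Smo/Smy example: each stored prefix is one character past the longest prefix shared with a neighbor.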

19 Prefix Key Compression
Key values in index entries only 'direct traffic'; we can often compress them.
E.g., if we have adjacent index entries with search key values Dannon Yogurt, David Smith, and Devarakonda Murthy, we can abbreviate David Smith to Dav. (The other keys can be compressed too...)
– Is this correct? Not quite! What if there is a data entry Davey Jones? (Then we can only compress David Smith to Davi.)
– In general, while compressing, we must leave each index entry greater than every key value (in any subtree) to its left. Insert/delete must be suitably modified.

20 Summary

