1 Distributed Databases
CS347 Lecture 14, May 30, 2001

2 Topics for the Day
Query processing in distributed databases
– Localization
– Distributed query operators
– Cost-based optimization

3 Query Processing Steps
Decomposition
– Given an SQL query, generate one or more algebraic query trees
Localization
– Rewrite the query trees, replacing relations by their fragments
Optimization
– Given a cost model and one or more localized query trees
– Produce a minimum-cost query execution plan

4 Decomposition
Same as in a centralized DBMS
Normalization (usually into relational algebra)

Select A, C
From R Natural Join S
Where (R.B = 1 and S.D = 2) or (R.C > 3 and S.D = 2)

The Where condition in conjunctive normal form: (R.B = 1 ∨ R.C > 3) ∧ (S.D = 2)
Resulting algebraic query tree: π_{A,C}( σ_{(R.B = 1 ∨ R.C > 3) ∧ (S.D = 2)}( R ⋈ S ) )

5 Decomposition (continued)
Redundancy elimination
– (S.A = 1) ∧ (S.A > 5) ≡ False
– (S.A < 10) ∧ (S.A < 5) ≡ S.A < 5
Algebraic rewriting
– Example: pushing selection conditions down the query tree
[Figure: σ_cond applied above S ⋈ T is rewritten so that σ_cond1 is applied to S and σ_cond2 to T before the join, with a residual σ_cond3 above it]

6 Localization Steps
1. Start with the query tree
2. Replace relations by fragments
3. Push ∪ up and σ, π down (CS245 rules)
4. Simplify, eliminating unnecessary operations
Notation: [R: cond] denotes a fragment of relation R whose tuples all satisfy cond

7 Example 1
σ_{E=3}(R), where R is fragmented as R1 = [R: E<10] and R2 = [R: E≥10]
– Replace R by its fragments: σ_{E=3}( [R: E<10] ∪ [R: E≥10] )
– Push σ down: σ_{E=3}[R: E<10] ∪ σ_{E=3}[R: E≥10]
– Simplify: σ_{E=3}[R: E<10]

8 Example 2
R ⋈_A S, where
– R is fragmented as R1 = [R: A<5], R2 = [R: 5≤A<10], R3 = [R: A≥10]
– S is fragmented as S1 = [S: A<5], S2 = [S: A≥5]

9 Example 2 (continued)
Replacing R and S by their fragments and pushing the join below the union gives all six pairings:
(R1 ⋈_A S1) ∪ (R1 ⋈_A S2) ∪ (R2 ⋈_A S1) ∪ (R2 ⋈_A S2) ∪ (R3 ⋈_A S1) ∪ (R3 ⋈_A S2)
Pairings whose fragment conditions are contradictory (e.g. [R: A<5] ⋈_A [S: A≥5]) are eliminated, leaving:
(R1 ⋈_A S1) ∪ (R2 ⋈_A S2) ∪ (R3 ⋈_A S2)

10 Rules for Horizontal Fragmentation
σ_{C1}[R: C2] ≡ [R: C1 ∧ C2]
[R: False] ≡ Ø
[R: C1] ⋈_A [S: C2] ≡ [R ⋈_A S: C1 ∧ C2 ∧ R.A = S.A]
In Example 1:
– σ_{E=3}[R2: E≥10] ≡ [R2: E=3 ∧ E≥10] ≡ [R2: False] ≡ Ø
In Example 2:
– [R: A<5] ⋈_A [S: A≥5] ≡ [R ⋈_A S: R.A<5 ∧ S.A≥5 ∧ R.A=S.A] ≡ [R ⋈_A S: False] ≡ Ø
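These rules are mechanical enough to sketch in code. Below is a minimal Python illustration, not part of the original slides: the `Fragment` class and helper names are hypothetical, and qualifying conditions are simplified to half-open ranges on a single attribute. It reproduces the prunings from Examples 1 and 2.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    name: str
    lo: float   # qualifying condition on the attribute: lo <= value < hi
    hi: float

def selection_survives(frag, value):
    # sigma_{A=value}[R: lo <= A < hi] reduces to [R: False] (prunable)
    # unless the constant lies inside the fragment's qualifying range.
    return frag.lo <= value < frag.hi

def join_pair_survives(r, s):
    # [R: C1] join_A [S: C2] reduces to [R join S: False] unless some
    # join-attribute value can satisfy both C1 and C2 (ranges overlap).
    return max(r.lo, s.lo) < min(r.hi, s.hi)

# Example 1: R fragmented at E = 10; only the E < 10 fragment can contain E = 3.
R_E = [Fragment("R1", float("-inf"), 10), Fragment("R2", 10, float("inf"))]
print([f.name for f in R_E if selection_survives(f, 3)])        # ['R1']

# Example 2: R split at 5 and 10, S split at 5; only 3 of the 6 pairs survive.
R = [Fragment("R1", float("-inf"), 5), Fragment("R2", 5, 10), Fragment("R3", 10, float("inf"))]
S = [Fragment("S1", float("-inf"), 5), Fragment("S2", 5, float("inf"))]
print([(r.name, s.name) for r in R for s in S if join_pair_survives(r, s)])
# [('R1', 'S1'), ('R2', 'S2'), ('R3', 'S2')]
```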

11 Example 3 – Derived Fragmentation
R ⋈_K S, where S's fragmentation is derived from that of R:
– R1 = [R: A<10], R2 = [R: A≥10]
– S1 = [S: K=R.K ∧ R.A<10], S2 = [S: K=R.K ∧ R.A≥10]

12 Example 3 (continued)
Expanding the join over the fragments gives four pairings; the cross pairings (R1 with S2, R2 with S1) have contradictory conditions and are eliminated, leaving:
(R1 ⋈_K S1) ∪ (R2 ⋈_K S2)

13 Example 4 – Vertical Fragmentation
π_A(R), where R is vertically fragmented into R1(K, A, B) and R2(K, C, D)
– Replace R by the join of its fragments: π_A( R1 ⋈_K R2 )
– Push π down: π_A( π_{K,A}(R1) ⋈_K R2 )
– Simplify, since R1 already holds every attribute needed: π_A(R1)

14 Rule for Vertical Fragmentation
Given a vertical fragmentation of R(A): R_i = π_{A_i}(R), with A_i ⊆ A
For any B ⊆ A:
π_B(R) = π_B( ⋈_K { R_i | B ∩ A_i ≠ Ø } )
(the kept fragments are joined on the shared key K)
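As a small illustration of the rule (not from the slides; the helper name is hypothetical and fragments are represented simply as attribute sets), the sketch below determines which vertical fragments a projection must touch, reproducing Example 4.

```python
def fragments_needed(projection_attrs, fragment_schemas):
    """Which vertical fragments a projection must read.

    fragment_schemas maps fragment name -> set of attributes it stores.
    Per the rule above: keep R_i only if the projection list B intersects
    A_i; the kept fragments are then joined on the key and projected on B.
    """
    B = set(projection_attrs)
    return [name for name, attrs in fragment_schemas.items() if B & attrs]

# Example 4: R1(K, A, B) and R2(K, C, D); projecting on A touches only R1.
print(fragments_needed({"A"}, {"R1": {"K", "A", "B"},
                               "R2": {"K", "C", "D"}}))   # ['R1']
```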

15 Parallel/Distributed Query Operations
Sort
– Basic sort
– Range-partitioning sort
– Parallel external sort-merge
Join
– Partitioned join
– Asymmetric fragment-and-replicate join
– General fragment-and-replicate join
– Semi-join programs
Aggregation and duplicate removal

16 Parallel/Distributed Sort
Input: relation R that is
– on a single site/disk, or
– fragmented/partitioned by the sort attribute, or
– fragmented/partitioned by some other attribute
Output: sorted relation R
– on a single site/disk, or
– as individually sorted fragments/partitions

17 Basic Sort
Given R(A, …) range-partitioned on attribute A, sort R on A
– Each fragment is sorted independently
– Results are shipped elsewhere if necessary
[Figure: fragments {7, 3, 11} (A < 10) and {17, 14, 27, 22} (10 ≤ A < 20) are sorted locally into {3, 7, 11} and {14, 17, 22, 27}]

18 Range-Partitioning Sort
Given R(A, …) located at one or more sites and not fragmented on A, sort R on A
Algorithm: range-partition R on A, then do a basic sort
[Figure: fragments Ra and Rb are redistributed using partitioning vector (a0, a1) into R1, R2, R3, which are then sorted locally to form the result]
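A minimal single-process sketch of the idea, with the network shipping and the per-site parallelism elided (function and variable names are illustrative, not from any particular system):

```python
import bisect

def range_partition_sort(fragments, vector, num_sites):
    """Range-partitioning sort: redistribute by key range, then sort locally.

    fragments: lists of sort keys, one list per source site.
    vector:    partitioning vector [a0, a1, ...]; site 0 receives keys <= a0,
               site i receives keys in (a_{i-1}, a_i], the last site the rest.
    """
    partitions = [[] for _ in range(num_sites)]
    for fragment in fragments:                 # "ship" each key to its site
        for key in fragment:
            partitions[bisect.bisect_left(vector, key)].append(key)
    for p in partitions:                       # local sort at each site
        p.sort()
    return partitions                          # concatenation is globally sorted

print(range_partition_sort([[7, 3, 17], [14, 27, 22, 11]], [10, 20], 3))
# [[3, 7], [11, 14, 17], [22, 27]]
```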

19 Selecting a Partitioning Vector
Possible centralized approach using a "coordinator":
– Each site sends statistics about its fragment to the coordinator
– The coordinator decides how many sites to use for the local sorts
– The coordinator computes and distributes the partitioning vector
For example:
– Statistics could be (min sort key, max sort key, number of tuples)
– The coordinator tries to choose a vector that partitions the relation evenly

20 Example
Coordinator receives:
– From site 1: min 5, max 9, 10 tuples
– From site 2: min 7, max 16, 10 tuples
Assume sort keys are distributed uniformly within [min, max] in each fragment
Partition R into two fragments: what is the split value k0?
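Under the slide's uniformity assumption, k0 follows from equating the expected number of keys below k0 to half of the 20 tuples. A small illustrative computation (the per-site densities below are derived from the reported statistics, not measured):

```python
# Site 1: 10 tuples uniform on [5, 9]  -> 10 / (9 - 5)  = 2.5  tuples per key unit
# Site 2: 10 tuples uniform on [7, 16] -> 10 / (16 - 7) ~ 1.11 tuples per key unit
def tuples_below(k):
    below_site1 = 10 * max(0.0, min(k, 9) - 5) / (9 - 5)
    below_site2 = 10 * max(0.0, min(k, 16) - 7) / (16 - 7)
    return below_site1 + below_site2

# Binary search for the k0 that puts half of the 20 tuples in each fragment.
lo, hi = 5.0, 16.0
for _ in range(50):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if tuples_below(mid) < 10 else (lo, mid)
print(round(lo, 2))   # ~8.38: fragment 1 gets keys below k0, fragment 2 the rest
```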

21 Variations
Different kinds of statistics
– Local partitioning vector
– Histogram
Multiple rounds between coordinator and sites
– Sites send statistics
– Coordinator computes and distributes an initial vector V
– Sites tell the coordinator how many tuples fall in each range of V
– Coordinator computes the final partitioning vector Vf
[Figure: site 1's local vector with boundaries 5, 6, 8, 10 and tuple counts 3, 4, 3 per range]

22 Parallel External Sort-Merge
– Sort locally at each source site
– Compute the partitioning vector
– Merge the sorted streams at the final site(s)
[Figure: Ra and Rb are sorted locally into Ras and Rbs, then split by vector (a0, a1); destination sites R1, R2, R3 each merge the sorted streams they receive, so the result arrives in order]
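The merge phase can be sketched as follows. This is an in-memory stand-in for the real external, networked version (names are illustrative), using Python's heapq.merge for the multi-way merge at each destination site:

```python
import heapq

def parallel_sort_merge(fragments, vector):
    """Each source site sorts its fragment locally; every destination site
    then merges the sorted streams that fall into its key range."""
    sorted_fragments = [sorted(f) for f in fragments]           # local sorts
    bounds = [float("-inf")] + vector + [float("inf")]
    result_sites = []
    for lo, hi in zip(bounds, bounds[1:]):                      # one merge per site
        streams = [[k for k in f if lo <= k < hi] for f in sorted_fragments]
        result_sites.append(list(heapq.merge(*streams)))
    return result_sites

print(parallel_sort_merge([[7, 3, 17], [14, 27, 22, 11]], [10, 20]))
# [[3, 7], [11, 14, 17], [22, 27]]
```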

23 Parallel/Distributed Join
Input: relations R and S, which may or may not be partitioned
Output: R ⋈ S, with the result at one or more sites

24 Partitioned Join
Partition R and S on the join attribute A using the same partition function f(A), then join the matching partitions locally
[Figure: Ra, Rb and Sa, Sb are repartitioned by f(A) into pairs (R1, S1), (R2, S2), (R3, S3); each pair is joined locally and the local results form the answer]
Note: works only for equi-joins

25 Partitioned Join (continued)
Same partition function f for both relations
f can be range or hash partitioning
Any type of local join (nested-loop, hash, merge, etc.) can be used
Several possible scheduling options, for example:
– partition R; partition S; join
– partition R; build local hash tables for R; partition S and join
A good partition function is important: it distributes the join load evenly among sites
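A minimal sketch of the hash-partitioned variant with a local hash join at each "site" (everything runs in one process here; the partition function, tuple layout, and names are illustrative):

```python
from collections import defaultdict

def partitioned_hash_join(R, S, num_sites, key=lambda t: t[0]):
    """Hash-partition both relations on the join attribute with the same
    function, then run a local hash join inside each partition."""
    part = lambda t: hash(key(t)) % num_sites        # shared partition function
    R_parts, S_parts = defaultdict(list), defaultdict(list)
    for t in R:
        R_parts[part(t)].append(t)
    for t in S:
        S_parts[part(t)].append(t)

    result = []
    for site in range(num_sites):                    # each site joins its partitions
        build = defaultdict(list)
        for r in R_parts[site]:                      # local hash join: build on R
            build[key(r)].append(r)
        for s in S_parts[site]:                      # probe with S
            result.extend(r + s for r in build[key(s)])
    return result

R = [(1, "a"), (2, "b"), (3, "c")]
S = [(2, "x"), (3, "y"), (3, "z")]
print(partitioned_hash_join(R, S, num_sites=3))
# three matching pairs; their order depends on the hash partitioning
```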

26 Asymmetric Fragment-and-Replicate Join
Partition R on the join attribute A (any partition function f can be used, even round-robin) and replicate S in full to every site; each site joins its R fragment with its copy of S, and the union of the local results is the answer
Can be used for any kind of join, not just equi-joins
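A tiny in-process sketch of the asymmetric variant (names and the band-join predicate are illustrative, not from the slides): R's existing partitioning is kept, S is "replicated" to each site, and because each site sees all of S, any join predicate works, not just equality.

```python
def asymmetric_fr_join(R_fragments, S, predicate):
    """Asymmetric fragment-and-replicate join: R stays partitioned as-is,
    S is replicated to every site, each site runs a local theta-join."""
    result = []
    for R_i in R_fragments:                  # one iteration per site
        local_S = list(S)                    # the replicated copy of S
        result.extend((r, s) for r in R_i for s in local_S if predicate(r, s))
    return result

# Works for non-equi-joins too, e.g. a band join on the first field.
R_frags = [[(1,), (5,)], [(9,)]]
S = [(2,), (8,)]
print(asymmetric_fr_join(R_frags, S, lambda r, s: abs(r[0] - s[0]) <= 1))
# [((1,), (2,)), ((9,), (8,))]
```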

27 General Fragment-and-Replicate Join
– Partition R into n fragments and S into m fragments
– Replicate each R fragment m times and each S fragment n times, so that every (Ri, Sj) pair is co-located at some site

28 General Fragment-and-Replicate Join (continued)
All n × m pairings Ri ⋈ Sj are computed, and their union is the result
Asymmetric fragment-and-replicate join is the special case m = 1 (S kept whole and replicated); it is useful when S is small

29 Semi-Join Programs
Used to reduce communication traffic during join processing
R ⋈ S = (R ⋉ S) ⋈ S = R ⋈ (S ⋉ R) = (R ⋉ S) ⋈ (S ⋉ R)

30 Example
S(A, B) = {(2, b), (10, a), (25, c), (30, d)} at one site
R(A, C) = {(3, x), (10, y), (15, z), (25, w), (32, x)} at another site
π_A(S) = [2, 10, 25, 30]
R ⋉ S = {(10, y), (25, w)}
Compute S ⋈ (R ⋉ S)
Using the semi-join, communication cost = 4·A + 2·(A + C) + result
Joining R and S directly, communication cost = 4·(A + B) + result

31 Comparing Communication Costs
Say R is the smaller of the two relations R and S
(R ⋉ S) ⋈ S is cheaper than R ⋈ S if
size(π_A(S)) + size(R ⋉ S) < size(R)
Similar comparisons hold for the other types of semi-joins
Common implementation trick:
– Encode π_A(S) (or π_A(R)) as a bit vector, one bit per value in the domain of attribute A
– e.g. 0 0 1 1 0 1 0 0 0 0 1 0 1 0 0
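Below is a sketch of the (R ⋉ S) ⋈ S program with the bit-vector encoding of π_A(S), using the values from the earlier example (the function name and domain size are illustrative assumptions; a real system would also weigh the vector size against size(π_A(S)) as above):

```python
def semijoin_program(R, S, domain_size, join_key=lambda t: t[0]):
    """Evaluate R join S as (R semijoin S) join S, shipping a bit vector
    with one bit per possible join-attribute value instead of pi_A(S)."""
    # Site holding S: set one bit per join-attribute value present in S.
    bits = bytearray((domain_size + 7) // 8)
    for s in S:
        a = join_key(s)
        bits[a // 8] |= 1 << (a % 8)

    # Site holding R: keep only tuples whose bit is set (this is R semijoin S),
    # then ship this reduced R back to S's site.
    R_reduced = [r for r in R
                 if bits[join_key(r) // 8] & (1 << (join_key(r) % 8))]

    # Final join at S's site.
    s_index = {}
    for s in S:
        s_index.setdefault(join_key(s), []).append(s)
    return [r + s for r in R_reduced for s in s_index.get(join_key(r), [])]

R = [(3, "x"), (10, "y"), (15, "z"), (25, "w"), (32, "x")]
S = [(2, "b"), (10, "a"), (25, "c"), (30, "d")]
print(semijoin_program(R, S, domain_size=64))
# [(10, 'y', 10, 'a'), (25, 'w', 25, 'c')]
```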

32 n-Way Joins
To compute R ⋈ S ⋈ T:
– Semi-join program 1: R' ⋈ S' ⋈ T, where R' = R ⋉ S and S' = S ⋉ T
– Semi-join program 2: R'' ⋈ S' ⋈ T, where R'' = R ⋉ S' and S' = S ⋉ T
– Several other options
In general, the number of options is exponential in the number of relations

33 Other Operations
Duplicate elimination
– Sort first (in parallel), then eliminate duplicates in the result
– Or partition the tuples (by range or hash) and eliminate duplicates locally
Aggregates
– Partition by the grouping attributes, then compute the aggregates locally at each site

34 Example: sum(sal) group by dept
[Figure: tuples from Ra and Rb are partitioned by dept; each destination site computes sum(sal) for the departments it owns]

35 Example (continued)
Aggregate during partitioning to reduce communication cost: each source site computes partial sums per department before shipping, and the destination sites combine the partial results
Does this work for all kinds of aggregates?
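A sketch of the pre-aggregated SUM (names, tuple layout, and the ownership function are illustrative):

```python
from collections import defaultdict

def distributed_sum(site_tuples, num_sites):
    """sum(sal) group by dept with pre-aggregation: each source site
    aggregates locally, then ships one partial sum per group to the
    site that owns that group (hash-partitioned on dept)."""
    owner = lambda dept: hash(dept) % num_sites
    final = [defaultdict(float) for _ in range(num_sites)]
    for tuples in site_tuples:                     # at each source site
        partial = defaultdict(float)
        for dept, sal in tuples:                   # aggregate before shipping
            partial[dept] += sal
        for dept, subtotal in partial.items():     # one tuple per group shipped
            final[owner(dept)][dept] += subtotal   # combine at the owning site
    return [dict(f) for f in final]

site_a = [("toys", 10), ("toys", 20), ("books", 5)]
site_b = [("toys", 7), ("books", 8)]
print(distributed_sum([site_a, site_b], num_sites=2))
```

As for the slide's closing question: this trick works for aggregates whose partial results combine, such as SUM, COUNT, MIN, and MAX (and AVG if (sum, count) pairs are shipped), but not for holistic aggregates such as MEDIAN, which cannot be reconstructed from per-site partial results.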

36 Query Optimization
– Generate query execution plans (QEPs)
– Estimate the cost of each QEP ($, time, …)
– Choose the minimum-cost QEP
What is different for a distributed DB?
– New strategies for some operations (semi-join, range-partitioning sort, …)
– Many ways to assign and schedule processors
– Factors besides the number of I/Os in the cost model

37 Cost Estimation
In centralized systems: estimate the sizes of intermediate relations
For distributed systems, also account for:
– Transmission cost/time, which may dominate
– Parallelism
– Data distribution and result re-assembly cost/time
[Figure: two plans producing the same answer with different I/O counts at the participating sites (100 and 50 I/Os for Plan A, 70 and 20 I/Os for Plan B) and different degrees of parallelism]

38 Optimization in Distributed DBs
Two levels of optimization:
Global optimization
– Given a localized query and a cost function
– Output an optimized (minimum-cost) QEP containing relational and communication operations on fragments
Local optimization
– At each site involved in query execution
– The portion of the QEP at a given site is optimized using techniques from centralized DB systems

39 Search Strategies
1. Exhaustive (with pruning)
2. Hill climbing (greedy)
3. Query separation

40 Exhaustive with Pruning
– A fixed set of techniques for each relational operator
– Search space = "all" possible QEPs built from this set of techniques
– Prune the search space using heuristics
– Choose the minimum-cost QEP from the rest of the search space

41 Example
Join R ⋈ S ⋈ T, with |R| > |S| > |T|, stored across two sites A and B
[Figure: enumeration tree over join orders and techniques. Branches such as R × T are pruned because a cross-product is not necessary, and branches such as S ⋈ R are pruned because the larger relation should not come first. For each remaining join, the techniques considered include shipping one relation to the other (e.g. ship S to R, ship T to S) or using a semi-join.]

42 Hill Climbing
– Begin with an initial feasible QEP
– At each step, generate a set S of new QEPs by applying "transformations" to the current QEP
– Evaluate the cost of each QEP in S
– Stop if no improvement is possible
– Otherwise, replace the current QEP by the minimum-cost QEP from S and iterate
[Figure: the initial plan x and the plans reached after transformation steps 1 and 2]
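The loop itself is simple. A generic sketch follows; the `neighbors` and `cost` callables stand in for the planner's transformation generator and cost model, which the slides do not spell out:

```python
def hill_climb(initial_plan, neighbors, cost):
    """Greedy plan search: repeatedly move to the cheapest neighboring plan
    until no neighbor improves on the current one."""
    current = initial_plan
    while True:
        candidates = neighbors(current)          # plans reachable by one transformation
        if not candidates:
            return current
        best = min(candidates, key=cost)
        if cost(best) >= cost(current):
            return current                       # no improvement possible: stop
        current = best
```

Because it only ever moves to a cheaper plan, hill climbing can stop at a local minimum rather than the globally cheapest QEP.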

43 Example
Compute R ⋈_A S ⋈_B T ⋈_C V

Rel.  Site  # of tuples
R     1     10
S     2     20
T     3     30
V     4     40

Goal: minimize communication cost
Initial plan: send all relations to one site
– To site 1: cost = 20 + 30 + 40 = 90
– To site 2: cost = 10 + 30 + 40 = 80
– To site 3: cost = 10 + 20 + 40 = 70
– To site 4: cost = 10 + 20 + 30 = 60
Transformation considered: send a relation to its neighbor

44 Local Search
Initial feasible plan P0: R (1 → 4); S (2 → 4); T (3 → 4); compute the join at site 4
Assume the following intermediate result sizes:
– R ⋈ S = 20
– S ⋈ T = 5
– T ⋈ V = 1

45 Transformations involving R and S
– P0 ships R (10) and S (20) directly to site 4: cost = 30
– Ship R to site 2, compute R ⋈ S there, ship the result (20) to site 4: cost = 10 + 20 = 30 (no change)
– Ship S to site 1, compute R ⋈ S there, ship the result (20) to site 4: cost = 20 + 20 = 40 (worse)

46 Transformations involving S and T
– P0 ships S (20) and T (30) directly to site 4: cost = 50
– Ship T to site 2, compute S ⋈ T there, ship the result (5) to site 4: cost = 30 + 5 = 35
– Ship S to site 3, compute S ⋈ T there, ship the result (5) to site 4: cost = 20 + 5 = 25 (improvement)

47 Next Iteration
P1: S (2 → 3); R (1 → 4); X (3 → 4), where X = S ⋈ T; compute the answer at site 4
Now apply the same transformation to R and X:
– ship both R and X to site 4, or
– ship R to site 3 and then R ⋈ X to site 4, or
– ship X to site 1 and then R ⋈ X to site 4

48 Resources
Özsu and Valduriez, "Principles of Distributed Database Systems", Chapters 7, 8, and 9

