1 CS246 Ranked Queries

2 Traditional Database Query
(Dept = “CS”) & (GPA > 3.5)
- Boolean semantics
- Clear boundary between “answers” and “non-answers”
- Goal: find all “matching” tuples, optionally ordered by a certain field
[Diagram: answer set A inside the set T of all tuples, separated by a clear boundary]

3 Ranked Queries
Find “cheap” houses “close” to UCLA: Cheap(x) & NearUCLA(x)
- Non-Boolean semantics
- No clear boundary between “answers” and “non-answers”
- Answers are inherently ranked
- Goal: find the top-ranked tuples
[Diagram: answer set A inside the set T of all tuples, with no clear boundary]

4 Issues?
- How to rank? A distance of 3 miles: how close is that? Similarity: how much does something look like “Tom Cruise”?
- How to combine rankings? Price = 0.8, Distance = 0.2: what is the overall grade?
- Weighting? What if price is twice as “important” as distance?
- Query processing? Traditional query processing is based on Boolean semantics

5 Fagin’s Paper
- Previously, all four of these issues were a “black art”: there was no disciplined way to address them
- Fagin’s paper studies the last three issues in a more “disciplined” way: combination of ranks, weighting, and query processing
- Approach: find general “properties” and derive a formula that satisfies those properties

6 Topics
- Combining multiple grades
- Weighting
- Efficient query processing

7 Rank Combination
Cheap(x) & NearUCLA(x), with Cheap(x) = 0.3 and NearUCLA(x) = 0.8
- What is the overall ranking?
- How would you approach the problem?

8 General Query
(Cheap(x) & (NearUCLA(x) | NearBeach(x))) & RedRoof(x)
How do we compute the overall grade?
[Diagram: query tree combining the leaves Cheap = 0.3, NearUCLA = 0.2, NearBeach = 0.8, and RedRoof = 0.6 with | and & nodes]

9 Main Idea
- Atomic scoring function μ_A(x), given by the application: μ_Cheap(x) = 0.3, μ_NearUCLA(x) = 0.2, …
- Query: recursive application of AND and OR, e.g. (Cheap & (NearUCLA | NearBeach)) & RedRoof
- Combine two grades for “AND” and “OR” with a binary function t: [0, 1]² → [0, 1]
- Example: min(a, b) for “AND”? Then μ_{Cheap & NearUCLA}(x) = min(0.3, 0.2) = 0.2
- What properties should the AND/OR scoring functions have?
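To make the recursive combination concrete, here is a minimal Python sketch that scores an object against a query tree using min for “AND” and max for “OR”. The nested-tuple query representation and the grade values are illustrative assumptions, not notation from Fagin’s paper.

```python
def score(query, grades):
    """Recursively score an object against a query tree.

    query: an atomic predicate name (str), or a tuple (op, left, right)
           with op in {"&", "|"}.
    grades: dict mapping atomic predicate name -> grade in [0, 1].
    """
    if isinstance(query, str):                   # leaf: atomic scoring function
        return grades[query]
    op, left, right = query
    l, r = score(left, grades), score(right, grades)
    return min(l, r) if op == "&" else max(l, r)

# (Cheap & (NearUCLA | NearBeach)) & RedRoof with the grades from the slide
q = ("&", ("&", "Cheap", ("|", "NearUCLA", "NearBeach")), "RedRoof")
g = {"Cheap": 0.3, "NearUCLA": 0.2, "NearBeach": 0.8, "RedRoof": 0.6}
print(score(q, g))     # min(min(0.3, max(0.2, 0.8)), 0.6) = 0.3
```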

10 Properties of Scoring Function
- Logical equivalence: logically equivalent queries get the same overall score, e.g. μ_{A&(B|C)}(x) = μ_{(A&B)|(A&C)}(x)
- Monotonicity: if μ_A(x_1) < μ_A(x_2) and μ_B(x_1) < μ_B(x_2), then μ_{A&B}(x_1) < μ_{A&B}(x_2); in general, t(x_1, x_2) < t(x'_1, x'_2) if x_i < x'_i for all i

11 Uniqueness Theorem
- min() and max() are the only scoring functions with these two properties: min() for “AND” and max() for “OR”
- A quite surprising and interesting result; more discussion later
- But is logical equivalence really true?

12 Question on Logical Equivalence
- Query: homepage of “John Grisham”, scored as PageRank & John & Grisham
- (PR & John) & Grisham and PR & (John & Grisham) are logically equivalent, but are they the same?
- Does logical equivalence hold for non-Boolean queries?
[Diagram: two query trees over PR, John, and Grisham with different groupings of &]

13 Summary of Scoring Function
- Question: how to combine rankings
- Scoring function combines grades; the results come from fuzzy logic
- Properties: logical equivalence and monotonicity
- Uniqueness theorem: min() for “AND” and max() for “OR”
- Logical equivalence may not be valid for graded Boolean expressions

14 Topics
- Combining multiple grades
- Weighting
- Efficient query processing

15 Weighting of Grades
- Cheap(x) & NearUCLA(x): what if proximity is “more important” than price?
- Assign a weight to each atomic query: Cheap(x) = 0.2 with weight 1, NearUCLA(x) = 0.8 with weight 10 (proximity is 10 times more important than price)
- What should the overall grade be?

16 Formalization
- m atomic queries
- Θ = (θ_1, …, θ_m): weight of each atomic query
- X = (x_1, …, x_m): grades from each atomic query
- f(x_1, …, x_m): unweighted scoring function
- f_Θ(x_1, …, x_m): new weighted scoring function
- What should f_Θ(x_1, …, x_m) be, given Θ? What properties should f_Θ have?

17 Properties
- P1: when all weights are equal, the weighting disappears: f_{(1/m, …, 1/m)}(x_1, …, x_m) = f(x_1, …, x_m)
- P2: if an argument has zero weight, we can safely drop it: f_{(θ_1, …, θ_{m-1}, 0)}(x_1, …, x_m) = f_{(θ_1, …, θ_{m-1})}(x_1, …, x_{m-1})
- P3: f_Θ(X) should be locally linear: f_{αΘ + (1-α)Θ'}(x_1, …, x_m) = α·f_Θ(x_1, …, x_m) + (1-α)·f_{Θ'}(x_1, …, x_m)

18 Local Linearity Example
- Θ_1 = (1/2, 1/2), f_{Θ_1}(X) = 0.2
- Θ_2 = (1/4, 3/4), f_{Θ_2}(X) = 0.4
- If Θ_3 = (3/8, 5/8) = 1/2·Θ_1 + 1/2·Θ_2, then f_{Θ_3}(X) = 1/2·f_{Θ_1}(X) + 1/2·f_{Θ_2}(X) = 0.3
- Q: with m atomic queries, how many independent weight assignments are there? A: m; there are only m degrees of freedom
- A very strong assumption: not too unreasonable, but given without rationale

19 Theorem
f_Θ(x_1, …, x_m) = 1·(θ_1 - θ_2)·f(x_1) + 2·(θ_2 - θ_3)·f(x_1, x_2) + 3·(θ_3 - θ_4)·f(x_1, x_2, x_3) + … + m·θ_m·f(x_1, …, x_m)
is the only function that satisfies these properties (with the weights normalized and sorted so that θ_1 ≥ θ_2 ≥ … ≥ θ_m).

20 Examples
- Θ = (1/3, 1/3, 1/3): 1·(1/3 - 1/3)·f(x_1) + 2·(1/3 - 1/3)·f(x_1, x_2) + 3·(1/3)·f(x_1, x_2, x_3) = f(x_1, x_2, x_3)
- Θ = (1/2, 1/4, 1/4): 1·(1/2 - 1/4)·f(x_1) + 2·(1/4 - 1/4)·f(x_1, x_2) + 3·(1/4)·f(x_1, x_2, x_3) = 1/4·f(x_1) + 3/4·f(x_1, x_2, x_3)
- Θ = (1/2, 1/3, 1/6): 1·(1/2 - 1/3)·f(x_1) + 2·(1/3 - 1/6)·f(x_1, x_2) + 3·(1/6)·f(x_1, x_2, x_3) = 1/6·f(x_1) + 2/6·f(x_1, x_2) + 3/6·f(x_1, x_2, x_3)
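As a sanity check on the theorem, here is a small Python sketch of the weighted scoring function. It assumes the weights are already sorted in decreasing order and sum to 1, and it plugs in min as a stand-in for the underlying scoring function f; the grade values are made up.

```python
from fractions import Fraction

def weighted_score(f, weights, grades):
    """Weighted scoring: sum over i of i * (theta_i - theta_{i+1}) * f(x_1..x_i).

    Assumes weights are sorted in decreasing order and sum to 1.
    f takes a list of grades; weights and grades are parallel lists.
    """
    m = len(weights)
    total = 0
    for i in range(1, m + 1):
        next_w = weights[i] if i < m else 0      # theta_{m+1} is taken as 0
        total += i * (weights[i - 1] - next_w) * f(grades[:i])
    return total

# Theta = (1/2, 1/4, 1/4) reduces to 1/4 f(x1) + 3/4 f(x1, x2, x3), as on the slide
w = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
x = [Fraction(9, 10), Fraction(1, 2), Fraction(3, 10)]   # made-up grades
print(weighted_score(min, w, x))   # 1/4 * 9/10 + 3/4 * min(9/10, 1/2, 3/10) = 9/20
```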

21 Summary of Weighting
- Question: grades can have different “importance”
- Θ = (θ_1, …, θ_m): weight assignment
- Uniqueness theorem: local linearity plus two other reasonable assumptions give
  1·(θ_1 - θ_2)·f(x_1) + 2·(θ_2 - θ_3)·f(x_1, x_2) + … + m·θ_m·f(x_1, …, x_m)
- The linearity assumption is questionable

22 Application?
- Web page ranking: PageRank & (Keyword1 & Keyword2 & …)
- Should we use min(), i.e. min(keyword1, keyword2, keyword3, …)? Would it be better than the cosine measure?
- If PageRank is 10 times more important, should we use Fagin’s formula, 9/11·PR + 2/11·min(PR, min(keywords))? Would it be better than other ranking functions?
- Is Fagin’s formula practical?

23 Topics
- Combining multiple grades
- Weighting
- Efficient query processing

24 Question
- How can we process ranked queries efficiently? E.g., the top-k answers for “Cheap(x) & NearUCLA(x)” (assume we already have good scoring functions)
- How do we process a traditional Boolean query, e.g. GPA > 3.5 & Dept = “CS”?
- What’s the difference? What is difficult compared to a Boolean query?

25 Naïve Solution
Cheap(x) & NearUCLA(x)
1. Read the prices of all houses
2. Compute the distances of all houses
3. Compute the combined grades of all houses
4. Return the k highest-grade objects
Clearly very expensive when the database is large.

26 Main Idea
- We don’t have to check all objects/tuples: most tuples have low grades and will never be returned
- Basic algorithm: check the top objects from each atomic query and find the best objects among them
- Question: how many objects should we see from each “atomic query”?

27 Architecture
[Diagram: three sorted streams, (a: 0.9, b: 0.8, c: 0.7, …), (d: 0.9, a: 0.85, b: 0.78, …), and (b: 0.9, d: 0.9, a: 0.75, …), feed a combining function f(x_1, x_2, x_3), which may be any monotonic function; the top answers shown are b: 0.78 and a: 0.75. Streams support both sorted access and random access.]
- How many objects do we have to check, and how do we minimize that number?

28 Three Papers
- Fuzzy queries
- Optimal aggregation
- Minimal probing

29 Fagin’s Model
[Diagram: the same three sorted streams feeding f(x_1, x_2, x_3); objects are read from each stream by sorted access.]

30 Fagin’s Model
- Sorted access on all streams
- Cost model: the number of objects accessed by sorted and random accesses, c_s·s + c_r·r; the cost of “sorting” itself is ignored
- Reasonable when the objects have already been sorted (a sorted index)
- Inappropriate when the objects have not been sorted: we would have to compute grades for all objects, and sorting can be costly

31 Main Question
- How many objects should we access? When can we stop?
- A: when we know that we have seen at least k objects whose scores are higher than those of any unseen object

32 Fagin’s First Algorithm
- Read objects from each stream in parallel
- Stop when k objects have been seen in common across all streams
- The top answers must then be in the union of the objects we have seen; why?
[Diagram: sorted streams read in parallel until k objects are common to all of them.]

33 Stopping Condition
- Reason: the grades of the k objects in the intersection are higher than the grade of any unseen object
- Proof: let x be an object in the intersection and y an unseen object. Then y_1 ≤ x_1, and similarly y_i ≤ x_i for all i, so f(y_1, …, y_m) ≤ f(x_1, …, x_m) by monotonicity

34 Fagin’s First Algorithm
1. Get objects from each stream in parallel until we have seen k objects in common across all streams
2. For every object seen so far: if its complete grade is not known, obtain the unknown grades by random access
3. Return the k objects with the highest grades

35 Example (k = 2)
Scoring function: min(x_1, x_2)
Stream 1: a: 0.9, b: 0.8, c: 0.7, d: 0.6, …
Stream 2: d: 0.9, a: 0.85, b: 0.5, c: 0.2, …

Object | x_1 | x_2  | min
a      | 0.9 | 0.85 | 0.85
d      | 0.6 | 0.9  | 0.6
b      | 0.8 | 0.5  | 0.5
c      | 0.7 | 0.2  | 0.2

Sorted access stops once a and b have been seen in both streams; random access fills in the missing grades. Answer: a: 0.85, d: 0.6
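Below is a minimal Python sketch of Fagin’s first algorithm under simplifying assumptions: each stream is a pre-materialized list of (object, grade) pairs in descending grade order, dictionaries simulate random access, and access costs and ties are ignored.

```python
def fagin_fa(streams, f, k):
    """Fagin's first algorithm (FA) over sorted streams.

    streams: list of m lists of (obj, grade), sorted by descending grade.
    f: monotonic scoring function taking a list of m grades.
    Returns the top-k (obj, grade) pairs. Assumes the streams are long
    enough for k common objects to appear.
    """
    m = len(streams)
    lookup = [dict(s) for s in streams]          # simulate random access
    seen = [set() for _ in range(m)]
    depth = 0
    # Phase 1: sorted access in parallel until k objects are common to all streams
    while True:
        for i in range(m):
            seen[i].add(streams[i][depth][0])
        depth += 1
        if len(set.intersection(*seen)) >= k:
            break
    # Phase 2: random access to complete the grades of every seen object
    candidates = set.union(*seen)
    graded = [(o, f([lookup[i][o] for i in range(m)])) for o in candidates]
    return sorted(graded, key=lambda p: -p[1])[:k]

s1 = [("a", 0.9), ("b", 0.8), ("c", 0.7), ("d", 0.6)]
s2 = [("d", 0.9), ("a", 0.85), ("b", 0.5), ("c", 0.2)]
print(fagin_fa([s1, s2], min, 2))    # [('a', 0.85), ('d', 0.6)]
```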

36 Performance
- We only look at a subset of the objects; ignoring the high cost of random access, this is clearly better than the naïve solution
- Total number of accesses: O(N^((m-1)/m)·k^(1/m)), assuming independent, random object order in each atomic query’s stream
- E.g., O(N^(1/2)·k^(1/2)) when m = 2

37 Summary of Fagin’s Algorithm
- Sorted access on all streams
- Stopping condition: k common objects from all streams

38 Problem of Fagin’s Algorithm
- Performance depends heavily on the object order in the streams
- Example: k = 1, min(x_1, x_2)
  Stream 1: b: 1, a: 1, c: 1, d: 0, e: 0
  Stream 2: e: 1, d: 1, b: 1, c: 0, a: 0
- We end up reading essentially all objects: sorted access down to the 3rd object of each stream (the first common object, b), then random access for all the rest
- Can we avoid this pathological scenario?

39 New Idea
- Read all grades of an object as soon as we see it in a sorted access; there is no need to wait until the streams yield k common objects
- This is less dependent on the object order
- When can we stop? Once we have seen k common objects from sorted accesses?

40 When Can We Stop?
- When we are sure that we have seen at least k objects whose grades are higher than those of any unseen object
- How do we know the grades of unseen objects? Can we predict the maximum grade of the unseen objects?

41 Maximum Grade of Unseen Objects
Assuming min(x_1, x_2), what is the maximum possible grade of an unseen object?
Stream 1: a: 1, b: 0.9, c: 0.8, d: 0.7, e: 0.6
Stream 2: e: 1, d: 0.8, b: 0.7, c: 0.7, a: 0.2
After reading the first three objects of each stream, any unseen object has x_1 ≤ 0.8 and x_2 ≤ 0.7, so its grade is at most min(0.8, 0.7) = 0.7.
How does this generalize?

42 Generalization
- Let x̄_i be the minimum grade seen from stream i by sorted access
- Then f(x̄_1, …, x̄_m) is the maximum possible grade of an unseen object: x_i ≤ x̄_i for every unseen object, and f(x_1, …, x_m) is monotonic

43 Basic Idea of TA
- We can stop when the grades of the top-k seen objects are higher than the maximum grade of the unseen objects
- Maximum grade of unseen objects: the threshold f(x̄_1, …, x̄_m)

44 Threshold Algorithm (TA)
1. Read one object from each stream by sorted access
2. For each object O just read: get all of O’s grades by random access; if f(O) is among the top k, keep it in a buffer
3. If the lowest grade of the top-k objects is larger than the threshold f(x̄_1, …, x̄_m), stop

45 Example (k = 2)
Scoring function: min(x_1, x_2)
Stream 1: a: 0.9, b: 0.8, c: 0.7, d: 0.6, …
Stream 2: d: 0.9, a: 0.85, b: 0.5, c: 0.2, …
Initial threshold: f(1, 1) = 1
- Round 1: read a (x_1 = 0.9) and d (x_2 = 0.9); random access gives a: min(0.9, 0.85) = 0.85 and d: min(0.6, 0.9) = 0.6; threshold f(0.9, 0.9) = 0.9, so continue
- Round 2: read b (x_1 = 0.8) and a (x_2 = 0.85); b: min(0.8, 0.5) = 0.5; threshold f(0.8, 0.85) = 0.8, so continue
- Round 3: read c (x_1 = 0.7) and b (x_2 = 0.5); c: min(0.7, 0.2) = 0.2; threshold f(0.7, 0.5) = 0.5, and the buffered top-2 grades (a: 0.85, d: 0.6) exceed it, so stop
Answer: a: 0.85, d: 0.6
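A matching Python sketch of TA, under the same simulated-stream assumptions as the FA sketch above; note that it keeps only a bounded buffer of the k best complete grades seen so far.

```python
def threshold_algorithm(streams, f, k):
    """Threshold Algorithm (TA) sketch over sorted streams.

    streams: list of m lists of (obj, grade), sorted by descending grade.
    f: monotonic scoring function over a list of m grades.
    """
    m = len(streams)
    lookup = [dict(s) for s in streams]          # simulate random access
    top = {}                                     # obj -> complete grade (buffer)
    for depth in range(len(streams[0])):
        for i in range(m):
            obj = streams[i][depth][0]
            if obj not in top:
                top[obj] = f([lookup[j][obj] for j in range(m)])
        # keep only the k best complete grades seen so far
        top = dict(sorted(top.items(), key=lambda p: -p[1])[:k])
        # threshold: f over the lowest grade seen from each stream
        threshold = f([streams[i][depth][1] for i in range(m)])
        if len(top) == k and min(top.values()) >= threshold:
            break
    return sorted(top.items(), key=lambda p: -p[1])

s1 = [("a", 0.9), ("b", 0.8), ("c", 0.7), ("d", 0.6)]
s2 = [("d", 0.9), ("a", 0.85), ("b", 0.5), ("c", 0.2)]
print(threshold_algorithm([s1, s2], min, 2))   # [('a', 0.85), ('d', 0.6)]
```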

46 Comparison of FA and TA?
- TA sees fewer objects than FA: TA always stops at least as early (once we have seen k objects in common, their grades are higher than the threshold)
- TA may perform more random accesses than FA: TA does (m - 1) random accesses for each object it sees, while FA performs random accesses only at the end, for the missing grades
- TA requires only bounded buffer space (k), at the expense of more random seeks

47 Comparison of FA and TA
- TA is better in general, but it may perform more random seeks
- What if random seeks are very expensive, or impossible?
- Is an algorithm with no random seeks possible?

48 Algorithm NRA
- NRA (“no random access”) is an algorithm with no random seeks
- Isn’t random access essential? How can we know the grade of an object when some of its grades are missing?

49 Basic Idea
- We can still compute a lower bound on an object’s grade even when some of its grades are missing: e.g., max(0.6, x) ≥ 0.6
- We can likewise compute an upper bound: e.g., max(0.6, x) ≤ 0.8 if x ≤ 0.8
- If the lower bound of O_1 is higher than the upper bounds of all other objects, we can safely return O_1

50 Generalization
- Let (x̄_1, …, x̄_m) be the minimum grades seen from the streams by sorted access
- Lower bound of an object: use 0 for the missing grades; when x_3 and x_4 are missing, f(x_1, x_2, 0, 0) bounds the grade from below, by monotonicity
- Upper bound of an object: use x̄_i for the missing grades; when x_3 and x_4 are missing, x_3 ≤ x̄_3 and x_4 ≤ x̄_4, so f(x_1, x_2, x_3, x_4) ≤ f(x_1, x_2, x̄_3, x̄_4)

51 NRA Algorithm
1. Read one object from each stream by sorted access; let (x̄_1, …, x̄_m) be the lowest grades seen from the streams
2. For each object O seen so far, update its bounds: the upper bound uses x̄_i for missing grades, the lower bound uses 0
3. If the lower bounds of the top-k objects are larger than the upper bounds of every other object, stop

52 Example (k = 2)
Scoring function: AVG(x_1, x_2)
Stream 1: a: 0.9, b: 0.5, c: 0.3, …
Stream 2: d: 0.7, a: 0.6, e: 0.2, …
- Round 1 (read a: 0.9 and d: 0.7): a has lower bound AVG(0.9, 0) = 0.45 and upper bound AVG(0.9, 0.7) = 0.8; d has lower bound AVG(0, 0.7) = 0.35 and upper bound AVG(0.9, 0.7) = 0.8
- Round 2 (read b: 0.5 and a: 0.6): a is complete, AVG(0.9, 0.6) = 0.75; b: lower AVG(0.5, 0) = 0.25, upper AVG(0.5, 0.6) = 0.55; d: upper drops to AVG(0.5, 0.7) = 0.6
- Round 3 (read c: 0.3 and e: 0.2): c: lower AVG(0.3, 0) = 0.15, upper AVG(0.3, 0.2) = 0.25; e: lower AVG(0, 0.2) = 0.1, upper AVG(0.3, 0.2) = 0.25; b: upper drops to AVG(0.5, 0.2) = 0.35; d: upper drops to AVG(0.3, 0.7) = 0.5
Answer: a and d
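A Python sketch of NRA under the same simulated-stream assumptions. One subtlety the slide’s example glosses over: the stopping test must also cover wholly unseen objects, whose grades are bounded above by the threshold f(x̄_1, …, x̄_m). This sketch treats ties as stoppable, which is one possible convention.

```python
def nra(streams, f, k):
    """No-Random-Access (NRA) algorithm sketch.

    streams: list of m lists of (obj, grade), sorted by descending grade.
    f: monotonic scoring function over a list of m grades in [0, 1].
    Returns the top-k objects with their (lower, upper) grade bounds.
    """
    m = len(streams)
    partial = {}                                   # obj -> {stream index: grade}
    for depth in range(len(streams[0])):
        for i in range(m):
            obj, grade = streams[i][depth]
            partial.setdefault(obj, {})[i] = grade
        worst = [streams[i][depth][1] for i in range(m)]       # x-bar values
        bounds = {}
        for obj, g in partial.items():
            lower = f([g.get(i, 0.0) for i in range(m)])       # 0 for missing
            upper = f([g.get(i, worst[i]) for i in range(m)])  # x-bar for missing
            bounds[obj] = (lower, upper)
        ranked = sorted(bounds.items(), key=lambda p: -p[1][0])
        top, rest = ranked[:k], ranked[k:]
        # any unseen object scores at most f(x-bar); ties count as stoppable
        best_other = max([u for _, (_, u) in rest] + [f(worst)])
        if len(top) == k and min(l for _, (l, _) in top) >= best_other:
            return top
    return ranked[:k]       # streams exhausted: return the best k by lower bound

avg = lambda xs: sum(xs) / len(xs)
s1 = [("a", 0.9), ("b", 0.5), ("c", 0.3)]
s2 = [("d", 0.7), ("a", 0.6), ("e", 0.2)]
print(nra([s1, s2], avg, 2))   # [('a', (0.75, 0.75)), ('d', (0.35, 0.5))]
```

As the slide notes next, d is returned even though only its lower bound is known.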

53 Properties of NRA
- No random access
- We may return an object even if we don’t know its exact grade; we may only know its lower bound
- We need to constantly update the upper bounds of objects as the threshold value decreases

54 Chang’s View
- Computing grades can be expensive, and sorting is expensive
- So: minimize sorted access

55 Chang’s Model
- Sorted access on one stream, random access on the remaining streams
- At least one sorted access is necessary to “discover” objects
- Cost model: the number of random accesses
- Reasonable when the objects are not sorted for some of the streams

56 Chang’s Model
[Diagram: one stream (a: 0.9, b: 0.8, c: 0.7, …) is read by sorted access, while the streams (d: 0.9, a: 0.85, b: 0.78, …) and (b: 0.9, d: 0.9, a: 0.75, …) are probed by random access; grades are combined by f(x_1, x_2, x_3).]

57 Chang’s Solution
- Main idea: probe only the necessary attributes
- A probe is necessary iff we cannot find the right answer without it
- Which probes are necessary?

58 Necessary Probes
- Assume the attribute probe order is fixed, min() is the scoring function, and the threshold (the grade of the kth-highest object) is 0.7
- a: (0.9, 0.3, 0.2): is the second probe necessary?
- b: (0.5, 0.7, 0.3): is the second probe necessary?
- Does probe necessity depend on the algorithm?

59 Observation
- Probe necessity is independent of the algorithm; it depends purely on the dataset (assuming the probe order is fixed)
- How do we find the necessary probes? Once the upper bound of an object O’s grade goes below the threshold, no more probes on O are necessary
- How do we find the threshold value? We know an upper bound on it: the kth-highest upper bound of the object grades (g_k)
[Diagram: objects ordered by the upper bounds of their grades, g_1 ≥ … ≥ g_k; the kth one bounds the threshold.]

60 Algorithm MPro
- As long as we probe objects whose grade upper bounds are above the threshold upper bound, we are safe
- Q: a priority queue of objects, ordered by the upper bounds of their grades
- Pick the top object O from Q and probe O’s next attribute
- Stop once we have the complete grades of the top-k objects in Q
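A simplified Python sketch of MPro, under assumptions that go beyond the slide: all objects are taken as already discovered through the sorted stream, grades lie in [0, 1] so unprobed attributes are padded with 1.0 to obtain an upper bound, and the probe data is made up.

```python
import heapq

def mpro(objects, predicates, f, k):
    """Minimal-probing (MPro) sketch with a fixed attribute probe order.

    objects: dict obj -> grade from the sorted-access attribute (x_1);
             for simplicity, every object is assumed already discovered.
    predicates: list of dicts (one per remaining attribute), obj -> grade,
                evaluated only when probed.
    f: monotonic scoring function over m = 1 + len(predicates) grades.
    """
    m = 1 + len(predicates)

    def upper(grades):
        # pad unprobed grades with the maximum grade 1.0 (monotonicity)
        return f(grades + [1.0] * (m - len(grades)))

    # max-heap (via negation) ordered by grade upper bound
    heap = [(-upper([g]), 1, o, [g]) for o, g in objects.items()]
    heapq.heapify(heap)
    results = []
    while heap and len(results) < k:
        neg_ub, probed, obj, grades = heapq.heappop(heap)
        if probed == m:                    # complete grade: safe to output
            results.append((obj, -neg_ub))
            continue
        grades = grades + [predicates[probed - 1][obj]]   # necessary probe
        heapq.heappush(heap, (-upper(grades), probed + 1, obj, grades))
    return results

scores = {"a": 0.9, "b": 0.5, "c": 0.3}
preds = [{"a": 0.3, "b": 0.7, "c": 0.8}, {"a": 0.2, "b": 0.3, "c": 0.9}]
print(mpro(scores, preds, min, 1))   # [('b', 0.3)]; b and c tie, broken arbitrarily
```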

61 Property of MPro
- MPro is optimal in the exact sense (not just in the big-O sense): every probe it makes is necessary
- Assumptions: we must compute the complete grades of all returned objects, and the object probing order is fixed
- Under these assumptions, no other algorithm can beat MPro
- Does it work for max()? Performance depends on the scoring function; it is good only when the upper bound is “tight”

62 Other Issues for MPro
- How to select the attribute probing order: MPro is optimal given a particular probing order, but the probing order affects performance significantly; a good order can be estimated from sampling
- How to parallelize MPro: probe the top k objects simultaneously

63 Summary
- Efficient processing of ranked queries, using sorted access and random access
- FA: k common objects
- TA: threshold value
- NRA: upper and lower bounds
- MPro: the necessary-probe principle

64 Hints on Paper Writing
- The goal of a paper is to be read and used by other people, so it should be easy to understand; this is a tricky balance
- How to make a paper easy to read: explicitly specify your assumptions (readers do not know what you think!), use examples, and run experiments

