
1 Efficient network aware search in collaborative tagging
Sihem Amer-Yahia, Michael Benedikt, Laks V.S. Lakshmanan, Julia Stoyanovich
Presented by: Ashish Chawla, CSE 6339, Spring 2009

2 Overview
Opportunity: explore keyword search in a context where query results are determined by the opinions of the network of taggers related to a seeker.
Incorporate social behavior into the processing of search queries: in network-aware search, results are determined by the opinion of the seeker's network.
Existing top-k strategies are too space-intensive here, because scores depend on the seeker's network.
Investigate clustering seekers based on the behavior of their networks.
del.icio.us datasets were used for the experiments.

3 Introduction
What is network-aware search? Examples: Flickr, YouTube, del.icio.us, photo tagging on Facebook.
Users contribute content, annotate items (photos, videos, URLs, ...) with tags, and form social networks (friends/family, interest-based); they need help discovering relevant content.
What is the relevance of an item?

4 What is Network-Aware Search?

5 Claims
Define network-aware search.
Adapt top-k algorithms to network-aware search, using score upper-bounds and the EXACT strategy.
Refine score upper-bounds based on the user's network and tagging behavior.

6 Data Model
Tagged(user u, item i, tag t): e.g. (Roger, i1, music), (Roger, i3, music), (Roger, i5, sports), (Hugo, i1, music), (Hugo, i22, music), (Minnie, i2, sports), (Linda, i2, football), (Linda, i28, news), ...
Link(user u, user v): a directed edge, e.g. Link(u1, v1).
Taggers = pi_u(Tagged); Seekers = pi_u(Link).
Network(u) = { v | Link(u, v) }: for a seeker u in Seekers, Network(u) is the set of u's neighbors.

7 What are Scores?
A query is a set of tags Q = {t1, t2, ..., tn}, e.g. fashion, www, sports, artificial intelligence.
Score per tag, for a seeker u, a tag t, and an item i: score(i, u, t) = f(|Network(u) ∩ {v | Tagged(v, i, t)}|).
Overall score of the query: score(i, u, Q) = g(score(i, u, t1), score(i, u, t2), ..., score(i, u, tn)).
f and g are monotone; here f = COUNT and g = SUM.
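
A minimal sketch of this scoring model in Python, assuming f = COUNT and g = SUM as above; the representations (a set of (user, item, tag) triples and a dict of neighbor sets) are illustrative, not the paper's implementation:

```python
def score_per_tag(item, seeker, tag, tagged, network):
    """score(i, u, t) = |Network(u) ∩ {v | Tagged(v, i, t)}| with f = COUNT."""
    taggers = {v for (v, i, t) in tagged if i == item and t == tag}
    return len(network.get(seeker, set()) & taggers)

def overall_score(item, seeker, query, tagged, network):
    """score(i, u, Q) = sum over the query's tags (g = SUM)."""
    return sum(score_per_tag(item, seeker, t, tagged, network) for t in query)

# Tiny example in the spirit of the data-model slide:
tagged = {("Roger", "i1", "music"), ("Hugo", "i1", "music"),
          ("Minnie", "i2", "sports")}
network = {"Jane": {"Roger", "Hugo"}}
print(overall_score("i1", "Jane", ["music"], tagged, network))  # -> 2
```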

8 Problem Statement
Given a user query Q = t1 ... tn and a number k, efficiently determine the top-k items, i.e., the k items with the highest overall score.

9 Standard Top-k Processing
Q = {t1, t2, ..., tn}, with one inverted list per tag, IL1, IL2, ..., ILn, sorted on scores.
score(i) = g(score(i, IL1), score(i, IL2), ..., score(i, ILn)).
Intuition: high-scoring items are close to the top of most lists.
Fagin-style processing, NRA (no random accesses): access all lists sequentially in parallel, maintain a heap sorted on partial scores, and stop when the partial score of the k-th item exceeds the best-case score of unseen/incomplete items.

10 NRA: round 1
Inverted lists, sorted by score:
List 1: (item25, 0.6), (item78, 0.5), (item83, 0.4), (item17, 0.3), (item21, 0.2), (item91, 0.1), (item44, 0.1)
List 2: (item17, 0.6), (item38, 0.6), (item14, 0.6), (item5, 0.6), (item83, 0.5), (item21, 0.3)
List 3: (item83, 0.9), (item17, 0.7), (item61, 0.3), (item81, 0.2), (item65, 0.1), (item10, 0.1)
Read one entry from each list. Candidates [worst score, best score]: item83 [0.9, 2.1], item17 [0.6, 2.1], item25 [0.6, 2.1].
Min top-2 score: 0.6. Threshold (max score of unseen items): 0.6 + 0.6 + 0.9 = 2.1.
Pruning: drop a candidate when its best score < the min top-2 score. Stopping condition: threshold < min top-2 score?

11 NRA: round 2 (lists as above)
Candidates: item17 [1.3, 1.8], item83 [0.9, 2.0], item25 [0.6, 1.9], item38 [0.6, 1.8], item78 [0.5, 1.8].
Min top-2 score: 0.9. Threshold (max score of unseen items): 0.5 + 0.6 + 0.7 = 1.8. Neither pruning nor stopping applies yet.

12 NRA: round 3 (lists as above)
Candidates: item83 [1.3, 1.9], item17 [1.3, 1.7], item25 [0.6, 1.5], item78 [0.5, 1.4].
Min top-2 score: 1.3. Threshold (max score of unseen items): 0.4 + 0.6 + 0.3 = 1.3.
No new item can get into the top-2, but extra candidates are left in the queue.

13 NRA: round 4 (lists as above)
Candidates: item17 1.6 (complete), item83 [1.3, 1.9], item25 [0.6, 1.4].
Min top-2 score: 1.3. Threshold (max score of unseen items): 0.3 + 0.6 + 0.2 = 1.1.
No new item can get into the top-2, but extra candidates are left in the queue.

14 NRA: round 5 (lists as above)
item83 completes with score 1.8; top-2: item83 1.8, item17 1.6.
Min top-2 score: 1.6. Threshold (max score of unseen items): 0.2 + 0.5 + 0.1 = 0.8.
All remaining candidates are pruned (best score < min top-2), so the algorithm stops.

15 NRA
NRA performs only sorted accesses (SA), no random accesses.
A random access (RA) looks up the actual (final) score of an item; it is often very useful.
Problems with NRA: high bookkeeping overhead, and for high values of k the gain in access cost is not significant.
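
The walkthrough above condenses into a short sketch. This is a simplified NRA under the running assumptions (g = SUM, one sorted list per tag); unlike the walkthrough it keeps all candidates instead of pruning them, which affects only bookkeeping, not the result:

```python
def nra(lists, k):
    """NRA sketch: `lists` holds one (item, score) list per tag, each sorted
    by score descending; g = SUM, sorted access only, no random access."""
    worst = {}  # partial (worst-case) scores of items seen so far
    seen = {}   # which lists each item has already appeared in
    last = [lst[0][1] if lst else 0.0 for lst in lists]  # list frontiers
    ranked = []
    for depth in range(max(len(lst) for lst in lists)):
        for j, lst in enumerate(lists):
            if depth < len(lst):
                item, s = lst[depth]
                worst[item] = worst.get(item, 0.0) + s
                seen.setdefault(item, set()).add(j)
                last[j] = s
        ranked = sorted(worst, key=worst.get, reverse=True)
        if len(ranked) < k:
            continue
        kth = worst[ranked[k - 1]]  # min top-k worst-case score
        def best(i):  # best case: add the frontier of every unread list
            return worst[i] + sum(last[j] for j in range(len(lists))
                                  if j not in seen[i])
        # stop when neither an unseen item (sum of frontiers) nor an
        # incomplete candidate can still overtake the current top-k
        if kth >= sum(last) and all(best(i) <= kth for i in ranked[k:]):
            break
    return [(i, worst[i]) for i in ranked[:k]]

# The lists from the round-by-round example above:
lists = [
    [("i25", .6), ("i78", .5), ("i83", .4), ("i17", .3), ("i21", .2),
     ("i91", .1), ("i44", .1)],
    [("i17", .6), ("i38", .6), ("i14", .6), ("i5", .6), ("i83", .5),
     ("i21", .3)],
    [("i83", .9), ("i17", .7), ("i61", .3), ("i81", .2), ("i65", .1),
     ("i10", .1)],
]
print(nra(lists, 2))  # top-2: i83 (1.8) and i17 (1.6), up to float rounding
```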

16 TA
Lists sorted by score (List 1, List 2, List 3 as in the NRA example); sorted-access positions (a1, a2, a3), one per list.

17 TA: round 1
Read one item from every list; complete each newly seen item's score by random access.
Candidates (exact scores): item83 1.8, item17 1.6, item25 0.6.
Min top-2 score: 1.6. Maximum score for unseen items: 0.6 + 0.6 + 0.9 = 2.1.
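
A matching TA sketch; random access is simulated here with a per-list dict, which stands in for whatever index a real system would use:

```python
import heapq

def ta(lists, k):
    """TA sketch: sorted access down the lists, plus random access to
    complete each newly seen item's exact score immediately."""
    ra = [dict(lst) for lst in lists]   # random-access lookup per list
    heap, done = [], set()              # min-heap of (exact score, item)
    for depth in range(max(len(lst) for lst in lists)):
        frontier = []
        for j, lst in enumerate(lists):
            if depth >= len(lst):
                frontier.append(0.0)
                continue
            item, s = lst[depth]
            frontier.append(s)
            if item not in done:
                exact = sum(d.get(item, 0.0) for d in ra)  # random accesses
                heapq.heappush(heap, (exact, item))
                done.add(item)
                if len(heap) > k:
                    heapq.heappop(heap)  # keep only the current top-k
        # threshold: best possible score of any item not yet seen
        if len(heap) == k and heap[0][0] >= sum(frontier):
            break
    return sorted(heap, reverse=True)

# On the NRA example lists, ta(lists, 2) stops after round 3 with
# [(1.8, 'i83'), (1.6, 'i17')] (up to float rounding).
```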

18 Computing Exact Scores: Naïve (EXACT)
Typical approach: maintain a single inverted list per (seeker, tag), with items ordered by score.
[Example tables: per-seeker lists for seekers Jane and Ann, for tag = photos and tag = music.]
+ can use standard top-k algorithms
-- high space overhead

19 Computing Score Upper-Bounds
A space-saving strategy: maintain entries of the form (item, itemTaggers), where itemTaggers are all the taggers who tagged the item with the tag; every item is stored at most once.
What score do we store with each entry? The maximum score the item can have across all possible seekers. This is the Global Upper-Bound strategy.
Limitation: the time needed to dynamically compute exact scores at query time.

20 Score Upper-Bounds
Global Upper-Bound (GUB): one list per tag, covering all seekers, with entries (item, taggers, upper-bound) sorted by upper-bound.
[Example: tag = music, items i1-i9 with taggers (Miguel, Kath, Sam, Peter, Jane, Mary, ...) and upper-bounds 73, 65, 62, 53, 40, 36, 18, 16.]
+ low space overhead
-- item upper-bounds, and even list order, may differ from EXACT for most users
-- exact scores must be computed dynamically at query time
How do we do top-k processing with score upper-bounds?
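
A sketch of upper-bound list construction, assuming f = COUNT; storing the tagger set with each entry is what lets exact scores be recomputed at query time, and passing all seekers yields the Global Upper-Bound list. All names are illustrative:

```python
def upper_bound_list(tag, tagged, network, seekers):
    """One entry per item: (item, taggers, max score over `seekers`),
    sorted by the score upper-bound, descending."""
    by_item = {}
    for (v, i, t) in tagged:
        if t == tag:
            by_item.setdefault(i, set()).add(v)   # taggers of item i
    entries = []
    for item, taggers in by_item.items():
        ub = max((len(network.get(u, set()) & taggers) for u in seekers),
                 default=0)
        entries.append((item, taggers, ub))
    entries.sort(key=lambda e: -e[2])
    return entries
```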

21 Top-k with Score Upper-Bounds
gNRA ("generalized no random access"): access all lists sequentially in parallel; maintain a heap with partial exact scores; stop when the partial exact score of the k-th item exceeds the highest possible score of unseen/incomplete items, computed using the current list upper-bounds.
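
A gNRA sketch under the same assumptions as the NRA sketch above, with each list entry now carrying (item, taggers, upper-bound): partial scores are exact (computed from the stored taggers and the seeker's network), while the stopping threshold uses the lists' upper-bound frontiers:

```python
def gnra(lists, network_u, k):
    """gNRA sketch: `lists` holds per-tag entries (item, taggers, ub),
    sorted by ub descending; `network_u` is the seeker's neighbor set."""
    exact, seen = {}, {}                 # exact partial scores, lists read
    last_ub = [lst[0][2] if lst else 0.0 for lst in lists]
    ranked = []
    for depth in range(max(len(lst) for lst in lists)):
        for j, lst in enumerate(lists):
            if depth < len(lst):
                item, taggers, ub = lst[depth]
                # exact per-list contribution, f = COUNT
                exact[item] = exact.get(item, 0) + len(network_u & taggers)
                seen.setdefault(item, set()).add(j)
                last_ub[j] = ub
        ranked = sorted(exact, key=exact.get, reverse=True)
        if len(ranked) < k:
            continue
        kth = exact[ranked[k - 1]]
        def best(i):  # exact part plus the upper-bounds of unread lists
            return exact[i] + sum(last_ub[j] for j in range(len(lists))
                                  if j not in seen[i])
        if kth >= sum(last_ub) and all(best(i) <= kth for i in ranked[k:]):
            break
    return [(i, exact[i]) for i in ranked[:k]]
```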

22 gNRA: NRA Generalization

23 gTA: TA Generalization

24 Performance of Global Upper-Bound (GUB) and EXACT
Space overhead: total number of entries in all inverted lists. Query processing time: number of cursor moves.
space (IL entries): GUB 74K (the space baseline), EXACT 63M
time (cursor moves): GUB 479-18K, EXACT 13-189 (the time baseline)

25 Clustering and Query Processing
We want to reduce the gap between the score upper-bound and the exact score: the larger the gap, the more processing may be required.
Core idea: cluster users into groups and compute an upper-bound per group. Intuition: group users whose behavior is similar.

26 Clustering Seekers
Cluster seekers based on similarity in their scores (the score of an item depends on the network).
Form an inverted list IL(t, C) for every tag t and cluster C, the score of an item being the maximum score over all seekers in the cluster.
Query processing for Q = t1 ... tn and seeker u: first find the cluster C(u), then perform aggregation over the corresponding lists.
Global Upper-Bound (GUB) is the special case where all seekers fall into a single cluster.

27 Clustering Seekers
Assign each seeker to a cluster and compute an inverted list per cluster:
ub(i, t, C) = max over u in C of |Network(u) ∩ {v | Tagged(v, i, t)}|
Example lists (item: upper-bound):
Global Upper-Bound: puma 73, gucci 65, adidas 62, diesel 53, versace 40, nike 36, chanel 18, prada 16
C1 (seekers Bob & Alice): gucci 65, versace 40, chanel 18, prada 16, puma 10
C2 (seekers Sam & Miguel): puma 73, adidas 62, diesel 53, nike 36, gucci 5
+ tighter bounds; item order usually closer to the EXACT order than in Global Upper-Bound
-- space overhead still high (a trade-off)
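
Continuing the hypothetical upper_bound_list sketch from slide 20: Cluster-Seekers simply builds one such list per (tag, cluster), with GUB as the one-cluster special case. The data below is made up for illustration:

```python
# Illustrative data only; upper_bound_list is defined in the earlier sketch.
tagged = {("Kath", "puma", "shoes"), ("Sam", "gucci", "shoes")}
network = {"Bob": {"Sam"}, "Alice": {"Sam"}, "Miguel": {"Kath"}}
clusters = {"C1": {"Bob", "Alice"}, "C2": {"Miguel"}}

# One inverted list per (tag, cluster) ...
per_cluster = {(t, c): upper_bound_list(t, tagged, network, members)
               for c, members in clusters.items() for t in ("shoes",)}
# ... and GUB as the special case over all seekers.
gub = upper_bound_list("shoes", tagged, network, seekers=set(network))
```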

28 How Do We Cluster Seekers?
Finding a clustering that minimizes the worst-case or average computation time of the top-k algorithms is NP-hard (proofs by reduction from the independent task scheduling problem and the minimum sum of squares problem), so the authors present heuristics.
They use a form of Normalized Discounted Cumulative Gain (NDCG), a measure of the quality of a clustered list for a given seeker and keyword: the metric compares the ideal (exact score) order of the inverted lists with the actual (score upper-bound) order.

29 NDCG: Example

i   docID   log2(i)   rank   rank/log2(i)   ideal rank   ideal/log2(i)
1   D1      0.00      3      n/a            3            n/a
2   D2      1.00      2      2.00           3            3.00
3   D3      1.58      3      1.89           2            1.26
4   D4      2.00      0      0.00           2            1.00
5   D5      2.32      1      0.43           1            0.43
6   D6      2.58      2      0.77           0            0.00

(The i = 1 gain is added undiscounted.)
Discounted CG (DCG) = 3 + 2.00 + 1.89 + 0.00 + 0.43 + 0.77 = 8.10
Ideal DCG = 3 + 3.00 + 1.26 + 1.00 + 0.43 + 0.00 = 8.69
Normalized DCG (NDCG) = 8.10 / 8.69 = 0.93
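
The computation above as a small sketch, using the DCG form rel_1 + sum over i >= 2 of rel_i / log2(i):

```python
import math

def dcg(rels):
    """DCG of a relevance list: first gain undiscounted, then rel/log2(i)."""
    return rels[0] + sum(r / math.log2(i)
                         for i, r in enumerate(rels[1:], start=2))

def ndcg(rels):
    """Normalize by the DCG of the ideal (descending) ordering."""
    return dcg(rels) / dcg(sorted(rels, reverse=True))

rels = [3, 2, 3, 0, 1, 2]      # relevance in upper-bound (actual) order
print(round(ndcg(rels), 2))    # -> 0.93, as in the table above
```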

30 Clustering Taggers
For each tag t, partition the taggers into separate clusters and form an inverted list per cluster; an item i in the list for cluster C gets the score max over u in Seekers of |Network(u) ∩ C ∩ {v | Tagged(v, i, t)}|.
How do we cluster taggers? Build a graph whose nodes are the taggers, with an edge between v1 and v2 iff |Items(v1, t) ∩ Items(v2, t)| >= threshold.
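
A sketch of building that tagger-similarity graph; the clustering of the graph itself (e.g. by graph partitioning) is not shown, and Tagged uses the same illustrative triple representation as the earlier sketches:

```python
from itertools import combinations

def tagger_graph(tag, tagged, threshold):
    """Nodes: taggers who used `tag`; edge (v1, v2) iff the two taggers
    share at least `threshold` items under that tag."""
    items = {}
    for (v, i, t) in tagged:
        if t == tag:
            items.setdefault(v, set()).add(i)   # Items(v, tag)
    edges = {(v1, v2) for v1, v2 in combinations(sorted(items), 2)
             if len(items[v1] & items[v2]) >= threshold}
    return set(items), edges
```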

31 Clustering Seekers: Metrics
Space: Global Upper-Bound has the lowest overhead; ASC and NCT achieve an order-of-magnitude improvement in space overhead over EXACT.
Time: both gNRA and gTA outperform Global Upper-Bound. ASC outperforms NCT on both sequential and total accesses in all cases for gTA, and in all cases except one for gNRA: the inverted lists are shorter, and the score upper-bound order is similar to the exact score order for many users.
Average % improvement over Global Upper-Bound: Normalized Cut (NCT) 38-72%; Ratio Association (ASC) 67-87%.

32 Clustering Seekers
Cluster-Seekers improves query execution time over GUB by at least an order of magnitude, for all queries and all users.

33 Clustering Taggers
Space: overhead is significantly lower than that of EXACT and of Cluster-Seekers.
Time: in the best case, all taggers relevant to a seeker reside in a single cluster; in the worst case, all taggers reside in separate clusters.
Idea: cluster taggers based on overlap in tagging; assign each tagger to a cluster and compute cluster upper-bounds:
ub(i, t, C) = max over u in Seekers of |Network(u) ∩ C ∩ {v | Tagged(v, i, t)}|

34 Clustering Taggers

35 Conclusion and Next Steps
Cluster-Taggers works best for seekers whose network falls into at most 3 × #tags clusters; for other seekers, query execution time degrades because of the number of inverted lists that must be processed.
For such seekers, Cluster-Taggers outperforms Cluster-Seekers in all cases, and outperforms Global Upper-Bound by 94-97% in all cases.
The work extends traditional top-k algorithms and achieves a balance between time and space consumption.

36 Questions?
WebCT / achawla@ieee.org. Thank you!

