
Slide 1: Recommendation Systems and Web Search
Eytan Adar, HP Labs
Baruch Awerbuch, Johns Hopkins University
Boaz Patt-Shamir, Tel Aviv University
David Peleg, Weizmann Institute
Mark Tuttle, HP Labs

Slide 2: Recommendation systems
eBay
– buyers and sellers rate each transaction
– users check the ratings before each transaction
Amazon
– the system monitors user purchases
– the system recommends products based on this history
SpamNet (a collaborative filter)
– SpamNet filters mail, users identify mistakes
– users gain reputations
– reputations affect the weight of recommendations

Slide 3: The problem
There are many known attacks:
– many villains boost each other's reputations
– one villain acts honest for a while
How can honest users collaborate effectively in the presence of dishonest, malicious users?
What is a good model for recommendation systems? What algorithms can we prove something about?
We give fast, robust, practical algorithms.

Slide 4: A recommendation model
n players, both honest and dishonest (at least βn honest)
– dishonest players can be arbitrarily malicious, collude
m objects, both good and bad (at least αm good)
– players probe objects to learn whether they are good or bad
– there is a cost to probing a bad object
A public billboard
– players post the results of their probes (the honest ones do...)
– the billboard is free: no cost to read or write
We think this is a direct abstraction of eBay.

Slide 5: A simple game
At each step of the game, one player takes a turn:
– consult the billboard
– probe an object
– post the result on the billboard
Honest players follow the protocol; dishonest players can collude in arbitrary ways.
Goal: help honest users find good objects at minimal cost.

Slide 6: Bad ways to play the game
Always try the object with the highest number of positive recommendations:
– dishonest players recommend bad objects
– honest players try all the bad objects first
Always try the object with the fewest negative recommendations:
– dishonest players slander good objects
– honest players try all the bad objects first
– (a popular strategy on eBay, but quite vulnerable)
Simple combinations also fail.

Slide 7: What about other approaches?
Collaborative filtering [Goldberg, Nichols, Oki, Terry 1998], [Azar, Fiat, Karlin, McSherry, Saia 2001], [Drineas, Kerenidis, Raghavan 2002], [Kleinberg, Sandler 2003]:
– all players are honest, and the solutions are centralized
Web-search algorithms [Brin, Page 1998], [Kleinberg 1999]:
– compute a "transitive popular vote" of the participants
– easily spammed by a clique of colluding crooks
Trust [Kamvar, Schlosser, Garcia-Molina 2003]:
– use trusted authorities to assign trust to players
– but do we really care about trust? we care about cost!
Simple randomized algorithms can minimize cost.

Slide 8: Our results
A simple, efficient algorithm for many contexts:
– objects come and go over time
– players access different subsets of objects
– players have different tastes in objects
In the most interesting model, our algorithm can be substantially better than others.
Players probe few objects:
– only a constant number when most players are honest
Corporate web search is the perfect application:
– implemented on HP's internal search engine

Slide 9: Good ways to play the game
Exploration rule:
– "choose a random object, and probe it"
– okay if most objects are good
Exploitation rule:
– "choose a random player, and probe an object it liked"
– okay if most players are honest
But are there many good objects / honest players?!?

Slide 10: The balanced rule
The balanced rule: flip a coin.
– if heads, follow the exploration rule
– if tails, follow the exploitation rule
How well does it work? We compute the expected cost of the probe sequence σ:
– cost(σ) = number of bad objects probed by honest players
(A sketch of the rule appears below.)
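A minimal sketch of the balanced rule in Python. The billboard shape (a dict mapping each player to the list of objects it posted as good) is an illustrative assumption, not something the talk specifies:

```python
import random

def balanced_rule(billboard, objects):
    """One step of the balanced rule: flip a fair coin, then explore
    (probe a random object) or exploit (probe an object that a randomly
    chosen player has recommended)."""
    if random.random() < 0.5 or not billboard:
        return random.choice(objects)           # exploration rule
    player = random.choice(list(billboard))     # exploitation rule
    liked = billboard[player]                   # objects that player posted as good
    return random.choice(liked) if liked else random.choice(objects)
```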

Slide 11: Split σ each time an honest player finds a good object
σ = ... a b c d e d e d e e ... is partitioned into epochs D0, D1, D2, ...
Expected cost of D0 ≤ 2/α:
– each probe hits a good object with probability at least α/2
Expected cost of Di ≤ 2n/i:
– each probe hits a good object with probability at least i/2n
Total expected cost is O(1/α + n log βn).
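Spelled out, under the Slide 4 parameters (at least αm good objects, at least βn honest players), the epoch costs sum as follows. This is a reconstruction of the calculation the slide sketches, with the formulas lost in transcription filled back in:

```latex
\mathbb{E}[\mathrm{cost}(D_0)] \le \tfrac{2}{\alpha}, \qquad
\mathbb{E}[\mathrm{cost}(D_i)] \le \tfrac{2n}{i}, \qquad
\mathbb{E}[\mathrm{cost}(\sigma)]
  \le \tfrac{2}{\alpha} + \sum_{i=1}^{\beta n} \tfrac{2n}{i}
  = O\!\left(\tfrac{1}{\alpha} + n \log \beta n\right).
```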

Slide 12: Understanding that number
The O(1/α + n log βn) bound breaks into:
– finding a good object (the 1/α term)
– spreading the good news (the n log βn term)
– probes by a player to learn the good news
This is just a log factor away from optimal (lower bounds later).

Slide 13: Different models
Dynamic object model:
– objects come and go over time
Partial access model:
– players access overlapping subsets of the objects
Taste model:
– players have different notions of good and bad
In each model, the analysis has a similar feel...

Slide 14: Dynamic object model
Objects come and go with each step:
– first the objects change (changes are announced to all)
– then some player takes a step
Algorithm for player p (sketched below):
– if p has found a good object o, then p probes o
– else p follows the balanced rule
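A sketch of one step, reusing the balanced_rule helper above; state is a hypothetical dict recording the last good object each player found:

```python
def dynamic_step(p, state, billboard, objects):
    """One step for player p in the dynamic object model: keep probing a
    known-good object while it still exists, else fall back to the
    balanced rule."""
    o = state.get(p)
    if o is not None and o in objects:   # the object may have departed
        return o
    return balanced_rule(billboard, objects)
```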

Slide 15: Competitive analysis
The optimal probe sequence πopt has a simple form:
– πopt = a a a x x x c c c c d d d e e e ... (probe one good object while a good object exists; probe arbitrary objects when no good object exists)
Optimality cost depends on the environment:
– player and object schedules, honest players, good objects
– the adversary is powerful, adaptive, and Byzantine!
switches(πopt) = number of distinct objects in πopt
Our probe sequence π is partitioned by the switches:
– π = o o o o | o o o o o o | o o o o o o ...

Slide 16: Competitive analysis
Compare the cost of π and πopt switch by switch:
cost(π) ≤ cost(πopt) + switches(πopt) × (cost to satisfy everyone)
       ≤ cost(πopt) + switches(πopt) × O(1/α + n log βn)
       ≤ cost(πopt) + switches(πopt) × O(m + n log n)
Lower bound (Yao's Lemma):
cost(π) ≥ cost(πopt) + switches(πopt) × Ω(m)

Slide 17: Partial access model
Each player can access only some objects.
[Diagram: players p, q, r each linked to a subset of the objects w, x, a, y, b, z, with good and bad objects marked]
Common interest is essential for help from neighbors:
– q gets help from p, but not from r
– r gets no useful help at all
It is hard to measure help from neighbors accurately:
– we bound the work of a set of players P with a common interest in a set of objects O

Slide 18: Partial access algorithm
Similar algorithm (sketched below):
– choose randomly from neighbors, not from all players
Similar analysis:
– exploration + exploitation = (work until one player in P finds O) + (work until all players in P learn of O) ≈ m + n log n
Good analysis in this model is very difficult:
– we know common interest is essential for others to help
– sometimes others don't help (players might as well sample), even if each player has common interest with many players
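A sketch of the restricted rule; the container shapes (accessible and neighbors as dicts keyed by player) are assumptions for illustration:

```python
import random

def partial_access_rule(p, accessible, neighbors, billboard):
    """Balanced rule restricted to p's world: explore among objects p can
    access, exploit recommendations of neighbors (players whose object
    sets overlap p's)."""
    mine = accessible[p]
    if random.random() < 0.5 or not neighbors[p]:
        return random.choice(mine)                 # exploration
    q = random.choice(neighbors[p])                # exploitation via a neighbor
    liked = [o for o in billboard.get(q, []) if o in set(mine)]
    return random.choice(liked) if liked else random.choice(mine)
```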

Slide 19: Simulation
[Plot: steps to satisfy the players vs. exploration probability; 10,000 players, 50% honest; 10,000 objects, 1% good; 1,000 objects per player]
The balanced rule tolerates coin bias quite well.
Can we optimize the rule by changing the bias?
– many good objects → emphasize exploration
– many honest players → emphasize exploitation
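A rough Monte-Carlo sketch of this experiment. Dishonest players are modeled as silent here (the real adversary is arbitrary), and all names and the step cap are our own:

```python
import random

def simulate(n=10_000, m=10_000, honest=0.5, good=0.01,
             per_player=1_000, explore_p=0.5, seed=0):
    """Steps until every honest player is satisfied, for one coin bias."""
    rng = random.Random(seed)
    good_objs = set(rng.sample(range(m), int(m * good)))
    honest_ids = set(rng.sample(range(n), int(n * honest)))
    access = {p: rng.sample(range(m), per_player) for p in range(n)}
    access_set = {p: set(a) for p, a in access.items()}
    liked = {}                 # billboard: satisfied honest player -> object
    satisfied, steps = set(), 0
    while len(satisfied) < len(honest_ids) and steps < 10**7:  # cap keeps the sketch terminating
        steps += 1
        p = rng.randrange(n)
        if p not in honest_ids or p in satisfied:
            continue
        if rng.random() < explore_p or not liked:
            o = rng.choice(access[p])              # explore own objects
        else:
            o = liked[rng.choice(list(liked))]     # exploit a recommendation
        if o in good_objs and o in access_set[p]:
            liked[p] = o
            satisfied.add(p)
    return steps
```

Sweeping explore_p from 0 to 1 reproduces the shape of the plot: performance degrades only near the extremes, where the rule is all exploration or all exploitation.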

Slide 20: Differing tastes model
Player tastes differ: good(player) ⊆ objects
Player tastes overlap: good(p1) ∩ good(p2) ≠ ∅
Special interest group S = (P, O): "anything in O could satisfy anyone in P"
– O ⊆ good(p) for all p ∈ P
Players don't know which SIG they are in.
Players follow the balanced rule.

Slide 21: Complexity
Theorem: Given a SIG (P, O), the expected total work to satisfy everyone in P is [formula not preserved in transcript].
Pretty good: within a log factor when |P| = O(n).

Slide 22: A very popular model
Most algorithms are off-line: prepare in advance.
– Assume an underlying object-set structure (the web): PageRank [Google], HITS [Kleinberg]
– Assume an underlying stochastic model: analyze past choices [KRRT'98], [KS'03], [AFKMS'01]
– Heuristically identify similar users/objects: GroupLens project
Drineas, Kerenidis, Raghavan: the first on-line algorithm
– the algorithm recommends, then observes the user response

23 Slide 23 DKR’02 Assumes: –User preferences given by types (up to noise) –O(1) dominant types cover most users Must know the number of dominant types –Type orthogonality: No two types like the same object –Type gap: The dominant types are by far the most popular The algorithm: –Uses SVD to compute the types, learn user types –Runs in time O(mn)

Slide 24: The Balanced Rule
No types: SIGs are a much looser characterization.
No type orthogonality: SIGs can overlap; even "subtyping" is allowed.
No type gap: SIG popularity is irrelevant.
A simple distributed algorithm:
– tolerates malicious players
– shares work evenly if players run at similar rates
– fast: runs in time O(m + n log n)

Slide 25: A look at individual work
Consider a synchronous, round-based model:
– each player probes an object each round until satisfied
– one model of players running at similar speeds
– a good model for studying individual work
Theorem: lower bounds [formula not preserved in transcript]
Theorem: the balanced rule halts in [formula not preserved in transcript]

26 Slide 26 p n + 1 objects, n honest probes good object gets p n votes n objects, n honest probes good object gets a vote How is constant even possible? honest players, 1 good object all objects all objects with 1 vote all objects with votes probe, probe probe, probe, probe Contains the good object Expect the good object at most bad objects Expect the good object at most 2 bad objects so probe them all

Slide 27: Candidate sets work well
Theorem: "distilling candidate sets" is faster:
– a constant O(1/ε) rounds if there are n^(1−ε) honest players
– even with many dishonest players, the expected number of rounds is only [formula not preserved in transcript]
Remember: the balanced rule was [formula not preserved in transcript]

Slide 28: Conclusions
Contributions:
– a new model for the study of reputation
– simple algorithms for collaboration in spite of asynchronous behavior, changing objects, differing access, differing tastes, and arbitrarily malicious behavior from players
Our work generalizes to multiple values:
– players want a "good enough" object: reduces to our binary values, even for multiple thresholds
– players want "the best" or "in the top 10%": early stopping is no longer possible

Slide 29: Future work
Better lower bounds, better analysis...
We never used:
– the identities of the malicious players
– the negative votes on the billboard
– the number of good objects or honest players
Can we get rid of the log factors if we do?
What if the supply of each object is finite? What if objects have prices, and the prices affect whether a player wants the object?

Slide 30: Many interesting problems remain...
But now, an application to intranet search...

Slide 31: xSearch: Improving intranet search
Internet search engines are wonderful; intranet search is disappointing.
Some simple ideas to improve intranet search:
– a heuristic for deducing user recommendations
– algorithms for reordering search results
Implemented on the @hp search engine:
– getting 10% of the queries
– collecting data on performance
– results are preliminary, but look promising...

Slide 32: Who is "lampman"? @hp says: [screenshot of @hp results]

Slide 33: Who is "lampman"? We say: [screenshot of our reordered results]

Slide 34: What is going on?
Searching corporate webs should be so easy. Google is phenomenal on the external web, so why not just run Google on the internal web?
Because it doesn't work, and not just at HP...

Slide 35: IBM vs Google et al (2003)
Identified the top 200 and median 150 queries, and found the "right" answer to each query by hand. Ran Google over the IBM internal web. How does Google do on these queries?
– popular queries: only 57% succeed
– median queries: only 46% succeed
– and that is a weak notion of success: "right answer in the top 20 results"
Compare that to your normal Google experience: "right answer in the top 5 results 85% of the time".

Slide 36: Link structure is crucial
Internet search algorithms depend on link structure:
– good pages point to great pages, etc.
– important pages are at the "center of the web"
Link structure has a strongly connected component:
[Diagram: bow-tie of incoming pages, the strongly connected component, outgoing pages, and other reachable pages]
– the SCC is 30% of the Internet, but only 10% of IBM
The meaty part of the intranet is just a third that of the Internet!
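One way to reproduce this measurement on a crawl, assuming the crawl graph is available as an edge list and using networkx (the talk does not specify the tooling):

```python
import networkx as nx

def scc_fraction(edges):
    """Fraction of crawled pages inside the largest strongly connected
    component -- the 'meaty part' compared on this slide."""
    g = nx.DiGraph()
    g.add_edges_from(edges)   # (source_url, target_url) pairs from the crawl
    largest = max(nx.strongly_connected_components(g), key=len)
    return len(largest) / g.number_of_nodes()
```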

Slide 37: Corporate webs are different
More structure:
– small groups build large parts of the web (silos)
– periodically reviewed and branded
No outgoing links from many pages:
– documents are intended to be informative, not "interesting"
– pages are generated automatically from databases
No reward for creating/updating/advertising pages:
– do you advertise your project page?
– do you advertise your track club page?

Slide 38: Corporate queries are different
Queries are focused and have few "right" answers.
What is "vacation" looking for?
– Inside HP: what the holidays are and how to report vacation time; one or two pages on the HR web.
– Outside HP: interesting vacation spots or vacation deals; any number of answer pages would be satisfactory.

Slide 39: What does @hp do now?
Based on the Verity Ultraseek search engine:
– classical information retrieval on steroids
– many tunable parameters and knobs
Manually constructs "best bets" for popular queries.
Works as well as any other intranet search engine.

Slide 40: Our approach: user collaboration
[Diagram: a query flows from the @hp user interface through our reordering heuristics (query → query') to the @hp search engine; hits' flow back through the heuristics and return to the user as hits]
How to collect user feedback? How to use user feedback?

Slide 41: Collecting feedback: last clicks
The "last click" heuristic:
– assume the last link the user clicks is the right link
– assume users stop hunting when they find the right page
Easy, unobtrusive implementation:
– tag each query by a user with a session id
– log each link clicked on with the session id
– periodically scan the log to compute the "last clicks" (see the sketch below)
Effective heuristic:
– agreement: 44% of last clicks go to the same page
– quality: 72% of last-clicked pages are good pages
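A sketch of the periodic log scan; the log record shape (timestamp, session id, query, clicked URL) is an assumption for illustration:

```python
from collections import defaultdict

def last_clicks(log):
    """Keep, per (session, query), the last link clicked, then tally the
    resulting 'votes' per query."""
    last = {}
    for ts, session, query, url in sorted(log):    # chronological order
        last[(session, query)] = url               # later clicks overwrite earlier ones
    votes = defaultdict(lambda: defaultdict(int))  # query -> url -> vote count
    for (session, query), url in last.items():
        votes[query][url] += 1
    return votes
```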

Slide 42: Last clicks example: "mis"
455 http://persweb.corp.hp.com/comp/employee/program/tr/sop/mis_guide.htm
78  http://hrexchange.corp.hp.com/HR_News/newslink040204-sopenroll.htm
13  http://hrexchange.corp.hp.com/HR_News/newslink100303-SOPenroll.htm
10  http://dimpweb.dublin.hp.com/factsyseng/mis/
5   http://hrexchange.corp.hp.com/HR_News/newslink121203-stock_holidayclosure.htm
2   http://canews-stage.canada.hp.com/archives/april04/SOP/index.asp
2   http://canews.canada.hp.com/archives/april04/SOP/index.asp
1   http://hpcc886.corp.hp.com/comp/employee/program/tr/sop/stockcert.htm
1   http://hpcc886.corp.hp.com/comp/employee/program/tr/sop/purhpshares.htm
"mis" means "Mellon Investor Services" or "Manufacturing Information Systems".
Surprisingly, the @hp top result was never the last click.

Slide 43: Using feedback: ranking algorithms
Statistical ordering:
– interpret last clicks as recommendations
– rank by popularity (most recommended first, etc.)
– robust: highly stable in the presence of spam
Move-to-front ordering:
– each time a page is recommended (last-clicked), move it to the top of the list
– fast: once the good page is found, it moves to the top (tsunami)
Probabilistic ordering:
– choose pages one after another; probability of selection = frequency of endorsement
– the best of both worlds, with many theoretical results
– plus: all new pages have some chance of being seen!
(Sketches of all three appear below.)
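Sketches of the three reorderings under the same assumptions: votes is the per-query tally from the last-click scan above, and results is the engine's ordered hit list:

```python
import random

def statistical_order(results, votes):
    """Most-recommended first; Python's stable sort keeps the engine's
    order among ties, so unvoted pages stay where the engine put them."""
    return sorted(results, key=lambda url: -votes.get(url, 0))

def move_to_front(results, clicked):
    """Move the last-clicked page to the top of the list."""
    if clicked not in results:
        return list(results)
    return [clicked] + [u for u in results if u != clicked]

def probabilistic_order(results, votes, rng=random):
    """Draw pages one after another with probability proportional to
    endorsement count plus one, so every new page has some chance of
    being seen."""
    pool, ordered = list(results), []
    while pool:
        weights = [votes.get(u, 0) + 1 for u in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]
        ordered.append(pick)
        pool.remove(pick)
    return ordered
```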

Slide 44: An experimental implementation
Working with @hp search (Anne Murray Allen): Ricky Ralston, Chris Jackson, Richard Boucher.
Now part of the @hp search engine:
– getting 10% of the queries sent to @hp search
– a few months of experience shows promising results...

Slide 45: Example: "paycheck" at @hp
[Screenshot: the top results include a movie!!, a notice from 1999, and three copies of slides on redeeming eawards]

Slide 46: Example: "paycheck" with move2front
[Screenshot: how to access (was #6), how to set up (was #7), May 2004 upgrade notice (was #41), 401(k) catchup (almost no votes)]

Slide 47: Example: "active gold" at @hp
[Screenshot: a hydra migration page, a sports award, a page that can't be read, a dead link]

Slide 48: Example: "active gold" with statistical
[Screenshot: the home page (was #8) (10 of 12 votes)]

Slide 49: Example: "payroll" at @hp
[Screenshot of @hp results]

Slide 50: Example: "payroll" with statistical
[Screenshot: US main payroll (was #6), payroll forms (was #17), the best bet (was #16), payroll phone numbers (was #33)]

Slide 51: Example: "payroll" with move2front
[Screenshot: lots of churn among the top hits]

Slide 52: Moving in the right direction
Many compelling instances of progress, but how effective are we in general?
– track the position of the last-clicked link
– we win when we move it up the page
– we lose when we move it down
(A sketch of the metric appears below.)
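The win/lose metric as a sketch: positive means the last-clicked link moved up the page, negative means it moved down:

```python
def positions_moved(original, reordered, last_clicked):
    """How far the last-clicked link moved: >0 is a win, <0 a loss."""
    if last_clicked not in original or last_clicked not in reordered:
        return 0   # cannot score this query
    return original.index(last_clicked) - reordered.index(last_clicked)
```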

Slide 53: Number of times we win [chart]

Slide 54: Number of positions we move [chart]

Slide 55: Some improvements hidden
Some effects of our improvements are masked:
– manually constructed best-bet lists
– extensive fine-tuning of the Ultraseek engine
Our hope, or claim:
– we should be able to replace best bets
– we are cheaper and easier than tuning Ultraseek
Even now we improve results by 5 places 20% of the time.
We hope for a real user study...

Slide 56: Conclusion
Intranet search is notoriously hard; user collaboration and feedback can help.
Intranet search is a small part of a larger vision: how to use user feedback and collaboration for
– searching unstructured data (e.g., books scanned at a9.com)
– building general-purpose recommendation systems
