
1 Click Evidence Signals and Tasks
Vishwa Vinay, Microsoft Research, Cambridge

2 Introduction
Signals
– Explicit vs. Implicit
Evidence
– Of what?
– From where?
– Used how?
Tasks
– Ranking, Evaluation & many more things in search

3 Clicks as Input
Task = Relevance Ranking
– Feature in relevance ranking function
Signal
– select URL, count(*) as DocFeature
  from Historical_Clicks group by URL
– select Query, URL, count(*) as QueryDocFeature
  from Historical_Clicks group by Query, URL
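
The two queries above correspond, roughly, to the following Python sketch; the click-log format is an assumption for illustration, not from the talk:

    from collections import Counter

    # Historical click log: one (query, url) tuple per click event.
    click_log = [
        ("search solutions 2010", "http://irsg.bcs.org/SearchSolutions/2010/sse2010.php"),
        ("search solutions 2010", "http://irsg.bcs.org/SearchSolutions/2009/sse2009.php"),
        ("search solutions 2009", "http://irsg.bcs.org/SearchSolutions/2009/sse2009.php"),
    ]

    # Static feature: total clicks per URL, regardless of query.
    doc_feature = Counter(url for _, url in click_log)

    # Dynamic feature: clicks per (query, URL) pair.
    query_doc_feature = Counter(click_log)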

4 Clicks as Input
Feature in relevance ranking function
– Static feature (popularity)
– Dynamic feature (for this query-doc pair)
– "Query Expansion using Associated Queries", Billerbeck et al, CIKM 2003
– "Improving Web Search Ranking by Incorporating User Behaviour", Agichtein et al, SIGIR 2006
'Document Expansion'
– Signal bleeds to similar queries
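
One minimal way the signal can "bleed" to similar queries, sketched in Python; the normalisation below (case-folding plus token sorting) is an illustrative assumption, not the method of the cited papers:

    def normalise(query):
        # Assumed similarity notion: "2010 Search Solutions" and
        # "search solutions 2010" pool their clicks together.
        return " ".join(sorted(query.lower().split()))

    def smoothed_count(query, url, query_doc_feature):
        # Sum clicks over all raw queries that normalise to the same key,
        # so a rare variant inherits evidence from its neighbours.
        key = normalise(query)
        return sum(count for (q, u), count in query_doc_feature.items()
                   if u == url and normalise(q) == key)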

5 Clicks as Output
Task = Relevance Ranking
– Result Page = Ranked list of documents
– Ranked list = Documents sorted based on Score
– Score = Probability that this result will be clicked
Signal
– Did my prediction agree with the user's action?
– "Web-Scale Bayesian Click-Through Rate Prediction for Sponsored Search Advertising in Microsoft's Bing Search Engine", Graepel et al, ICML 2010
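
Graepel et al use an online Bayesian probit model; as a much simpler stand-in with the same shape, an online logistic-regression sketch that predicts a click probability and then learns from whether the click happened (sparse binary features are an assumption):

    import math

    weights = {}  # feature name -> weight

    def predict_ctr(features):
        # Score = probability that this result will be clicked.
        s = sum(weights.get(f, 0.0) for f in features)
        return 1.0 / (1.0 + math.exp(-s))

    def update(features, clicked, lr=0.05):
        # Did my prediction agree with the user's action?
        # One gradient step on the log-loss closes the gap.
        error = (1.0 if clicked else 0.0) - predict_ctr(features)
        for f in features:
            weights[f] = weights.get(f, 0.0) + lr * error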

6 Clicks as Output
Calibration: merging results from different sources (comparable scores)
– "Adaptation of Offline Vertical Selection Predictions in the Presence of User Feedback", Diaz et al, SIGIR 2009
Onsite adaptation of the ranking function
– "A Decision Theoretic Framework for Ranking using Implicit Feedback", Zoeter et al, SIGIR 2008
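
A toy illustration of the calibration idea only (the binning scheme and the assumption that raw scores fall in [0, 1] are mine, not Diaz et al's method): map each source's raw score to an empirical click rate, so scores from different sources become comparable probabilities.

    from collections import defaultdict

    def calibrate(scored_clicks, bins=10):
        # scored_clicks: list of (raw_score in [0, 1], clicked flag)
        # from one source. Returns bin -> empirical click rate.
        clicks, shown = defaultdict(int), defaultdict(int)
        for score, clicked in scored_clicks:
            b = min(int(score * bins), bins - 1)
            shown[b] += 1
            clicks[b] += 1 if clicked else 0
        return {b: clicks[b] / shown[b] for b in shown}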

7 Clicks for Training
Rank 1: Doc1 http://irsg.bcs.org/SearchSolutions/2010/sse2010.php
Rank 2: Doc2 http://irsg.bcs.org/SearchSolutions/2009/sse2009.php
Rank 3: Doc3 http://isquared.wordpress.com/2010/10/10/search-solutions-2010-titles-and-abstracts/

8 Clicks for Training
Preferences from Query -> {URL, Click} events (see the sketch below)
– Rank bias & lock-in
Randomisation & exploration
– "Accurately Interpreting Clickthrough Data as Implicit Feedback", Joachims et al, SIGIR 2005
Preference observations into relevance labels
– "Generating Labels from Clicks", Agrawal et al, WSDM 2009
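
A minimal sketch of one well-known heuristic in this family, Joachims et al's "click > skip above" rule; the input format is an assumption:

    def preferences(ranked_results):
        # ranked_results: [(url, clicked), ...] in rank order.
        # A clicked result is preferred over every unclicked
        # result ranked above it.
        prefs = []
        for i, (url_i, clicked_i) in enumerate(ranked_results):
            if not clicked_i:
                continue
            for url_j, clicked_j in ranked_results[:i]:
                if not clicked_j:
                    prefs.append((url_i, url_j))
        return prefs

    # Example: Doc2 clicked below an unclicked Doc1 yields
    # preferences([("Doc1", False), ("Doc2", True)]) == [("Doc2", "Doc1")]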

9 Clicks for Evaluation
Task = Evaluating a ranking function
Signal
– Engagement and usage metrics
Controlled experiments for A/B testing (see the sketch below)
Query = "Search Solutions 2010"

Rank | Old Ranker | New (and Improved?)
1 | http://irsg.bcs.org/SearchSolutions/2009/sse2009.php | http://irsg.bcs.org/SearchSolutions/2010/sse2010.php
2 | http://www.online-information.co.uk/online2010/trails/search-solutions.html | http://isquared.wordpress.com/2010/10/10/search-solutions-2010-titles-and-abstracts/
3 | http://irsg.bcs.org/SearchSolutions/2010/sse2010.php | http://irsg.bcs.org/SearchSolutions/2009/sse2009.php
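
One way to read an A/B test off engagement data, assuming CTR as the metric (the talk leaves the metric open): a two-proportion z-test between the control and treatment buckets.

    import math

    def ab_ctr_ztest(clicks_a, shown_a, clicks_b, shown_b):
        # Positive z favours B; |z| > 1.96 is roughly significant at 5%.
        p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
        pooled = (clicks_a + clicks_b) / (shown_a + shown_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
        return (p_b - p_a) / se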

10 Clicks for Evaluation
Disentangling relevance from other effects
– "An Experimental Comparison of Click Position-Bias Models", Craswell et al, WSDM 2008
Label-free evaluation of retrieval systems ('Interleaving', sketched below)
– "How Does Clickthrough Data Reflect Retrieval Quality?", Radlinski et al, CIKM 2008
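
A sketch of team-draft interleaving, one standard method in the family Radlinski et al evaluate (the textbook algorithm, not their exact code): merge the two rankings in coin-flip rounds, remember which ranker contributed each document, and credit clicks to that ranker's team.

    import random

    def team_draft_interleave(ranking_a, ranking_b):
        interleaved, team = [], {}
        total = len(set(ranking_a) | set(ranking_b))
        while len(interleaved) < total:
            first = random.choice(["a", "b"])
            for side in (first, "b" if first == "a" else "a"):
                ranking = ranking_a if side == "a" else ranking_b
                # Each side takes its highest-ranked unplaced document.
                doc = next((d for d in ranking if d not in team), None)
                if doc is not None:
                    team[doc] = side
                    interleaved.append(doc)
        # At serve time, clicks on docs are credited to team[doc];
        # the side with more credited clicks wins the comparison.
        return interleaved, team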

11 Personalisation with Clicks
Task = Separate out individual preferences from aggregates
Signal: {User, Query, URL, Click} tuples
Query = "Search Solutions 2010"

Rank | URL | Tony | Vinay
1 | http://irsg.bcs.org/SearchSolutions/2010/sse2010.php |  |
2 | http://isquared.wordpress.com/2010/10/10/search-solutions-2010-titles-and-abstracts/ |  |
3 | http://irsg.bcs.org/SearchSolutions/2009/sse2009.php |  |

12 Personalisation with Clicks
Click event as a rating (see the sketch below)
– "Matchbox: Large Scale Bayesian Recommendations", Stern et al, WWW 2009
Sparsity
– Collapse using user groups ('groupisation')
  "Discovering and Using Groups to Improve Personalized Search", Teevan et al, WSDM 2009
– Collapse using doc structure
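
Matchbox is a fully Bayesian model; as a far simpler stand-in that shows the "click event as a rating" framing, a latent-factor recommender trained by stochastic gradient descent (dimension, learning rate and names are illustrative):

    import random

    DIM = 8
    user_vec, item_vec = {}, {}

    def _new_vec():
        return [random.gauss(0.0, 0.1) for _ in range(DIM)]

    def predict(user, url):
        u = user_vec.setdefault(user, _new_vec())
        v = item_vec.setdefault(url, _new_vec())
        return sum(a * b for a, b in zip(u, v))

    def observe(user, url, clicked, lr=0.05):
        # Treat the click (1) or skip (0) as the rating; one SGD step.
        error = (1.0 if clicked else 0.0) - predict(user, url)
        u, v = user_vec[user], item_vec[url]
        for k in range(DIM):
            u[k], v[k] = u[k] + lr * error * v[k], v[k] + lr * error * u[k]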

13 Miscellaneous
Using co-clicking for query suggestions (see the sketch below)
– "Random Walks on the Click Graph", Craswell et al, SIGIR 2007
User behaviour models for
– Ranked lists: "Click Chain Model in Web Search", Guo et al, WWW 2009
– Whole page: "Inferring Search Behaviors Using Partially Observable Markov Model", Wang et al, WSDM 2010
User activity away from the result page
– "BrowseRank: Letting Web Users Vote for Page Importance", Liu et al, SIGIR 2008
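
A toy version of the co-click idea behind Craswell et al's random walks (two walk steps on the bipartite query-URL click graph; a real implementation uses transition probabilities and more steps):

    from collections import defaultdict

    def coclick_suggestions(click_log, query):
        # click_log: iterable of (query, url) click events.
        q2u, u2q = defaultdict(set), defaultdict(set)
        for q, u in click_log:
            q2u[q].add(u)
            u2q[u].add(q)
        # query -> its clicked URLs -> other queries that clicked them.
        scores = defaultdict(int)
        for u in q2u[query]:
            for q in u2q[u]:
                if q != query:
                    scores[q] += 1  # co-click count as the walk weight
        return sorted(scores, key=scores.get, reverse=True)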

14 Additional Thoughts
Impressions & Examinations
– Raw click counts versus normalised ratios (see the sketch below)
Query = "Search Solutions 2010"

Page / Rank | URL | Impression | Examination
1 / 1 | http://irsg.bcs.org/SearchSolutions/2010/sse2010.php | 1 | 1
1 / 2 | http://www.online-information.co.uk/online2010/trails/search-solutions.html | 1 | 1
1 / 3 | http://isquared.wordpress.com/2010/10/10/search-solutions-2010-titles-and-abstracts/ | 1 | 1
1 / 4 | http://irsg.bcs.org/SearchSolutions/2009/sse2009.php | 1 | 0?
... | ... | ... | ...
1 / 10 | http://somesite.org/irrelevant.htm | 1 | 0
2 / 1 | http://someothersite.org/alsoirrelevant.htm | 0 | 0
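
A small sketch of why the ratios matter (function and field names assumed): the same raw click count reads very differently once divided by how often the result was shown (impressions) versus how often it was plausibly looked at (examinations).

    def click_ratios(clicks, impressions, examinations):
        return {
            "raw_clicks": clicks,
            "ctr_per_impression": clicks / impressions if impressions else None,
            "ctr_per_examination": clicks / examinations if examinations else None,
        }

    # A rank-10 result shown 100 times but examined only 5 times:
    # 2 clicks is a 2% impression CTR but a 40% examination CTR.
    print(click_ratios(2, 100, 5))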

15 Clicks and Enterprise Search
Relying on the click signal
– Machine learning and non-click features
– Performance out-of-the-box
– Shipping a shrink-wrapped product
The self-aware adapting system
– Good OOB
– Gets better with use
– Knows when things go wrong

16 Thank you
vvinay@microsoft.com

