Click Evidence Signals and Tasks
Vishwa Vinay, Microsoft Research, Cambridge

Introduction
Signals – Explicit vs Implicit
Evidence – Of what? From where? Used how?
Tasks – Ranking, Evaluation & many more things in search

Clicks as Input
Task = Relevance Ranking – clicks as a feature in the relevance ranking function
Signal:
select URL, count(*) as DocFeature from Historical_Clicks group by URL
select Query, URL, count(*) as QueryDocFeature from Historical_Clicks group by Query, URL
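
The two SQL aggregates can be sketched in Python as well; `click_log` here is a hypothetical list of (query, url) click events standing in for the Historical_Clicks table.

```python
from collections import Counter

def click_features(click_log):
    """Aggregate raw click events into two ranking features.

    click_log: iterable of (query, url) pairs, one per click event.
    Returns (doc_feature, query_doc_feature):
      doc_feature[url]            -- clicks per URL (the DocFeature aggregate)
      query_doc_feature[(q, u)]   -- clicks per query-URL pair (QueryDocFeature)
    """
    click_log = list(click_log)
    doc_feature = Counter(url for _, url in click_log)
    query_doc_feature = Counter(click_log)
    return doc_feature, query_doc_feature

# Illustrative log: three clicks for one query, one for another
log = [("ss2010", "a.com"), ("ss2010", "a.com"), ("ss2010", "b.com"),
       ("sse", "a.com")]
doc, qdoc = click_features(log)
# doc["a.com"] == 3 (query-independent), qdoc[("ss2010", "a.com")] == 2
```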

Clicks as Input
Feature in relevance ranking function:
– Static feature (popularity)
– Dynamic feature (for this query-doc pair)
– "Query Expansion using Associated Queries", Billerbeck et al, CIKM 2003
– "Improving Web Search Ranking by Incorporating User Behaviour", Agichtein et al, SIGIR 2006
– 'Document Expansion' – the signal bleeds to similar queries

Clicks as Output
Task = Relevance Ranking
– Result Page = Ranked list of documents
– Ranked list = Documents sorted by Score
– Score = Probability that this result will be clicked
Signal – Did my prediction agree with the user's action?
– "Web-Scale Bayesian Click-Through Rate Prediction for Sponsored Search Advertising in Microsoft's Bing Search Engine", Graepel et al, ICML 2010
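
A toy stand-in for such a click predictor: an online logistic regression rather than the Bayesian probit model of Graepel et al., with two hypothetical binary features (say, "shown at rank 1" and "shown at rank 2").

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class CtrModel:
    """Minimal online logistic-regression click predictor (illustrative only)."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        # Score = predicted probability that this result will be clicked
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)))

    def update(self, x, clicked):
        # Signal: did the prediction agree with the user's action?
        err = (1.0 if clicked else 0.0) - self.predict(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]

model = CtrModel(n_features=2)
for _ in range(200):
    model.update([1.0, 0.0], clicked=True)    # rank-1 results get clicked
    model.update([0.0, 1.0], clicked=False)   # rank-2 results do not
```

After this training loop the model scores the first feature pattern well above the second, which is exactly the ranking signal the slide describes.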

Clicks as Output
Calibration: merging results from different sources (comparable scores)
– "Adaptation of Offline Vertical Selection Predictions in the Presence of User Feedback", Diaz et al, SIGIR 2009
Onsite adaptation of the ranking function
– "A Decision Theoretic Framework for Ranking using Implicit Feedback", Zoeter et al, SIGIR 2008

Clicks for Training
Rank 1 – Doc1
Rank 2 – Doc2
Rank 3 – Doc3

Clicks for Training
Preferences from Query → {URL, Click} events
– Rank bias & lock-in; randomisation & exploration
– "Accurately Interpreting Clickthrough Data as Implicit Feedback", Joachims et al, SIGIR 2005
Preference observations into relevance labels
– "Generating Labels from Clicks", Agrawal et al, WSDM 2009
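
The rank-bias caveat is what motivates extracting pairwise preferences rather than absolute labels. A minimal sketch of the "skip above" rule from Joachims et al. (document ids invented):

```python
def skip_above_preferences(ranked, clicked):
    """Joachims-style 'skip above' heuristic: a clicked document is
    preferred over every unclicked document ranked above it.

    ranked:  list of doc ids in result-page order
    clicked: set of doc ids the user clicked
    Returns a list of (preferred, over) pairs. These are relative
    judgments only, not absolute relevance labels.
    """
    prefs = []
    for i, doc in enumerate(ranked):
        if doc in clicked:
            prefs.extend((doc, skipped) for skipped in ranked[:i]
                         if skipped not in clicked)
    return prefs

# User skips d1, clicks d2 and d4: infer d2 > d1, d4 > d1, d4 > d3
prefs = skip_above_preferences(["d1", "d2", "d3", "d4"], {"d2", "d4"})
```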

Clicks for Evaluation
Task = Evaluating a ranking function
Signal – Engagement and usage metrics
Controlled experiments for A/B testing, e.g. Query = "Search Solutions 2010"
[Table comparing the old ranker's and the new (and improved?) ranker's top results for this query; the result URLs are truncated in the transcript]

Clicks for Evaluation
Disentangling relevance from other effects
– "An Experimental Comparison of Click Position-Bias Models", Craswell et al, WSDM 2008
Label-free evaluation of retrieval systems ('interleaving')
– "How Does Clickthrough Data Reflect Retrieval Quality?", Radlinski et al, CIKM 2008
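
One common interleaving scheme is team-draft interleaving; a simplified sketch (see the cited papers for the full method and its analysis):

```python
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random.Random(0)):
    """Team-draft interleaving: in each round a coin flip decides which
    ranker picks first, then each picks its best not-yet-used document.
    A click on a document credits the ranker that picked it."""
    interleaved, team = [], {}
    total = len(set(ranking_a) | set(ranking_b))
    while len(interleaved) < total:
        for name, ranking in rng.sample([("A", ranking_a), ("B", ranking_b)], 2):
            pick = next((d for d in ranking if d not in team), None)
            if pick is not None:
                interleaved.append(pick)
                team[pick] = name
    return interleaved, team

def interleave_credit(clicked, team):
    """Count clicks per team; the ranker with more credit 'wins' the query."""
    wins = {"A": 0, "B": 0}
    for doc in clicked:
        wins[team[doc]] += 1
    return wins

interleaved, team = team_draft_interleave(["a", "b", "c"], ["b", "d", "a"])
```

Aggregated over many queries, the credit counts compare the two rankers without any editorial relevance labels, which is the 'label-free' point of the slide.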

Personalisation with Clicks
Task = Separate out individual preferences from aggregates
Signal: {User, Query, URL, Click} tuples
[Table of per-user clicks (columns Rank, URL, Tony, Vinay) for Query = "Search Solutions 2010"; the result URLs are truncated in the transcript]

Personalisation with Clicks
Click event as a rating
– "Matchbox: Large Scale Bayesian Recommendations", Stern et al, WWW 2009
Sparsity – collapse using user groups ('groupisation')
– "Discovering and Using Groups to Improve Personalized Search", Teevan et al, WSDM 2009
– or collapse using document structure
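
Groupisation can be pictured as a backoff for sparse per-user click data; a minimal sketch in which every name, group, and click rate is invented for illustration:

```python
def personal_score(user, url, clicks, groups):
    """Backoff for sparse personal click data ('groupisation'):
    use the user's own observed click rate on the URL if one exists,
    otherwise average over users in the same group; None if neither.

    clicks: dict (user, url) -> observed click rate in [0, 1]
    groups: dict user -> group id
    """
    if (user, url) in clicks:
        return clicks[(user, url)]
    peers = [rate for (u, d), rate in clicks.items()
             if d == url and groups.get(u) == groups.get(user)]
    return sum(peers) / len(peers) if peers else None

clicks = {("tony", "sse2010.php"): 1.0, ("vinay", "sse2010.php"): 0.0}
groups = {"tony": "attendees", "vinay": "attendees", "alice": "attendees"}
# alice has no history of her own, so she inherits her group's average
```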

Miscellaneous
Using co-clicking for query suggestions
– "Random Walks on the Click Graph", Craswell et al, SIGIR 2007
User behaviour models for:
– Ranked lists: "Click Chain Model in Web Search", Guo et al, WWW 2009
– Whole page: "Inferring Search Behaviors Using Partially Observable Markov Model", Wang et al, WSDM 2010
User activity away from the result page
– "BrowseRank: Letting Web Users Vote for Page Importance", Liu et al, SIGIR 2008
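
The co-click idea can be sketched as a short random walk on the bipartite query-URL click graph, in the spirit of Craswell & Szummer, with uniform transition probabilities and illustrative data:

```python
from collections import defaultdict

def related_queries(start, click_graph, steps=2):
    """Walk query -> clicked URL -> query on the bipartite click graph.
    Queries that share clicked URLs with `start` receive probability
    mass and can be offered as suggestions.

    click_graph: dict query -> set of URLs clicked for that query.
    Returns (mass, query) pairs, highest mass first, excluding `start`.
    """
    url_to_queries = defaultdict(set)
    for q, urls in click_graph.items():
        for u in urls:
            url_to_queries[u].add(q)

    mass = {start: 1.0}
    for _ in range(steps // 2):              # one query->URL->query hop
        nxt = defaultdict(float)
        for q, p in mass.items():
            urls = click_graph[q]
            for u in urls:
                qs = url_to_queries[u]
                for q2 in qs:
                    nxt[q2] += p / (len(urls) * len(qs))
        mass = dict(nxt)
    return sorted(((p, q) for q, p in mass.items() if q != start), reverse=True)

graph = {"search solutions 2010": {"a", "b"},
         "search solutions": {"a"},
         "cats": {"c"}}
suggestions = related_queries("search solutions 2010", graph)
```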

Additional Thoughts
Impressions & examinations – raw click counts versus normalised ratios
Query = "Search Solutions 2010"

Page / Rank | URL | Impression | Examination
1 / 1 | …/sse2010.php | 1 | 1
1 / 2 | …information.co.uk/online2010/trails/search-solutions.html | 1 | 1
1 / 3 | …/search-solutions-2010-titles-and-abstracts/ | 1 | 1
1 / … | …/sse2009.php | 1 | 0?
… | … | … | …
… / 1 | …nt.htm | 0 | 0
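
The distinction in the table can be made concrete: an impression means the URL was shown, an examination means the user actually looked at it. The counts below are invented.

```python
def click_rates(rows):
    """Contrast raw click-through per impression with click-through per
    examination. Each row is a hypothetical
    (url, impressions, examinations, clicks) tuple."""
    out = {}
    for url, imps, exams, clicks in rows:
        out[url] = {
            "ctr_per_impression": clicks / imps if imps else 0.0,
            # None when the URL was never examined: the ratio is undefined
            "ctr_per_examination": clicks / exams if exams else None,
        }
    return out

rows = [("sse2010.php", 100, 80, 40),   # high rank: usually examined
        ("sse2009.php", 100, 10, 5)]    # low rank: rarely examined
rates = click_rates(rows)
# Per impression the rates are 0.40 vs 0.05; per examination both are 0.50,
# so the two documents look equally attractive once position bias is removed.
```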

Clicks and Enterprise Search
Relying on the click signal
– Machine learning and non-click features
– Performance out of the box
– Shipping a shrink-wrapped product
The self-aware adapting system
– Good OOB
– Gets better with use
– Knows when things go wrong

Thank you