Implicit User Feedback Hongning Wang

Explicit relevance feedback
(Figure: the explicit feedback loop. The user issues a query; the retrieval engine searches the document collection and returns results d1 ... dk; the user judges them, producing feedback judgments d1 +, d2 −, d3 +, ..., dk −; the judgments yield an updated query.)

Relevance feedback in real systems
– Google used to provide such functions ("Relevant" / "Nonrelevant" options on results)
– Vulnerable to spammers

How about using clicks?
– Clicked documents as relevant, non-clicked as non-relevant
– Cheap, largely available

Is click reliable?
Why do we click on a returned document?
– Title/snippet looks attractive (we haven't read the full text of the document)
– It was ranked higher (belief bias towards ranking)
– We know it is the answer!

Is click reliable?
Why do we not click on a returned document?
– Title/snippet has already provided the answer (instant answers, knowledge graph)
– Extra effort of scrolling down the result page (the expected loss is larger than skipping the document)
– We did not see it...
Can we trust clicks as relevance feedback?

Accurately Interpreting Clickthrough Data as Implicit Feedback [Joachims SIGIR'05]
Eye tracking, clicks, and manual relevance judgments are combined to answer:
– Do users scan the results from top to bottom?
– How many abstracts do they read before clicking?
– How does their behavior change if search results are artificially manipulated?

Which links do users view and click?
– Positional bias: the first 5 results are visible without scrolling
– Fixations: a spatially stable gaze lasting for approximately 200–300 ms, indicating visual attention

Do users scan links from top to bottom?
– Users view the top two results within the second or third fixation
– Lower-ranked results require scrolling before they can be viewed

Which links do users evaluate before clicking?
– The lower the clicked result in the ranking, the more abstracts are viewed before the click

Does relevance influence user decisions?
Controlled relevance quality:
– Reverse the ranking returned by the search engine
Users' reactions:
– Scan significantly more abstracts than before
– Less likely to click on the first result
– Average clicked rank position moves down from 2.66 to 4.03
– Average clicks per query drop from 0.8 to …

Are clicks absolute relevance judgments?
– Position bias: positions one and two are equally likely to be viewed, yet position one receives far more clicks

Are clicks relative relevance judgments?
Clicks as pairwise preference statements – given a ranked list and user clicks:
– Click > Skip Above
– Last Click > Skip Above
– Click > Earlier Click
– Last Click > Skip Previous
– Click > Skip Next
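A minimal sketch of the first heuristic, "Click > Skip Above" (function and document names are illustrative, not from the lecture): each clicked document is preferred over every skipped document ranked above it.

def click_skip_above(ranking, clicked_positions):
    # ranking: list of document ids ordered by rank (index 0 = top result)
    # clicked_positions: set of 0-based positions the user clicked
    # returns (preferred_doc, less_preferred_doc) pairs
    pairs = []
    for i in sorted(clicked_positions):
        for j in range(i):                      # positions above the click
            if j not in clicked_positions:      # skipped: presumed examined but not chosen
                pairs.append((ranking[i], ranking[j]))
    return pairs

# Example: five results shown, user clicked positions 2 and 4 (0-based)
print(click_skip_above(["d1", "d2", "d3", "d4", "d5"], {2, 4}))
# [('d3', 'd1'), ('d3', 'd2'), ('d5', 'd1'), ('d5', 'd2'), ('d5', 'd4')]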

Clicks as pairwise preference statements
(Table: accuracy of each preference heuristic measured against manual relevance judgments.)

How accurately do clicks correspond to explicit judgments of a document?
(Table: accuracy measured against manual relevance judgments.)

What do we get from this user study?
– Clicks are influenced by the relevance of results, but biased by the trust in rank positions
– Clicks as relative preference statements are more accurate; several heuristics generate the preference pairs

How to utilize such preference pairs?
– Pairwise learning-to-rank algorithms (covered later; a minimal sketch follows below)
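As an illustration only (a generic pairwise ranker, not the specific algorithm covered later), a linear scoring function can be trained with a hinge loss so that preferred documents score higher than the documents they beat; the feature vectors here are hypothetical.

import numpy as np

def train_pairwise_ranker(pairs, lr=0.1, epochs=100):
    # pairs: list of (x_pref, x_less) feature-vector tuples where x_pref
    # should outscore x_less; stochastic gradient descent on a margin-1 hinge loss
    dim = len(pairs[0][0])
    w = np.zeros(dim)
    for _ in range(epochs):
        for x_pref, x_less in pairs:
            diff = np.asarray(x_pref) - np.asarray(x_less)
            if w @ diff < 1.0:          # margin violated (or too small): update
                w += lr * diff
    return w

# Toy example: 2-d features where preferred documents have a larger first feature
pairs = [([3.0, 0.5], [1.0, 0.4]), ([2.5, 0.1], [0.5, 0.3])]
w = train_pairwise_ranker(pairs)
print(w @ np.array([3.0, 0.5]) > w @ np.array([1.0, 0.4]))   # True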

An eye tracking study of the effect of target rank on web search [Guan CHI'07]
Breakdown of users' click accuracy – navigational search
(Figure: click accuracy by rank of the target result; the first result is marked.)

An eye tracking study of the effect of target rank on web search [Guan CHI'07]
Breakdown of users' click accuracy – informational search
(Figure: click accuracy by rank of the target result; the first result is marked.)

Users failed to recognize the target because they did not read it! (Navigational search)

Users did not click because they did not read the results! (Informational search)

Predicting clicks: estimating the click-through rate for new ads [Richardson WWW'07]
– Cost per click: the basic business model in search engines
– An estimated click-through rate is therefore needed to rank and price ads

Combat position bias by explicitly modeling it
Calibrated CTR for ads ranking:
p(click | ad, pos) = p(click | ad, seen) × p(seen | pos)
– p(seen | pos): discounting factor for the position
– p(click | ad, seen): logistic regression over features of the ad
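A minimal sketch of this factorization, assuming illustrative per-position examination discounts and a pre-trained logistic-regression weight vector (all values are made up):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed examination probabilities per position (illustrative values only)
POS_DISCOUNT = {1: 1.00, 2: 0.60, 3: 0.45, 4: 0.35, 5: 0.30}

def predicted_click_prob(ad_features, w, position):
    # p(click | ad, pos) = p(click | ad, seen) * p(seen | pos)
    ctr_if_seen = sigmoid(w @ ad_features)     # logistic regression on ad features
    return ctr_if_seen * POS_DISCOUNT[position]

# The same ad gets a lower predicted click probability at a lower position
w = np.array([0.8, -0.2, 0.1])
ad = np.array([1.0, 0.5, 2.0])
print(predicted_click_prob(ad, w, 1), predicted_click_prob(ad, w, 4))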

Parameter estimation

Calibrated CTR is more accurate for new ads
– Baseline: simple counting of CTR
– Unfortunately, the evaluation criterion is still based on biased clicks in the test set

Click models
Decompose relevance-driven clicks from position-driven clicks:
– Examine: the user reads the displayed result
– Click: the user clicks on the displayed result
– Atomic unit: (query, doc)
(Figure: click probability and examine probability by position for (q,d1) ... (q,d4); the gap between them reflects relevance quality.)
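The decomposition shared by most click models can be written as (notation mine, following the standard examination hypothesis):

P(C_i = 1 \mid q, d_i) = P(E_i = 1) \cdot P(R = 1 \mid q, d_i)

where E_i is the examination event at position i (driven by position and browsing behavior) and R is the relevance of the (query, doc) pair. Individual click models differ mainly in how they model P(E_i = 1).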

Cascade Model [Craswell et al. WSDM'08]
– The user scans results strictly top-down and stops at the first click
– Kind of "Click > Skip Above"? (a sketch follows below)
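Under that assumption, the probability of a click at rank i is the document's relevance times the probability that everything above it was skipped; a minimal sketch:

def cascade_click_probs(relevance):
    # Cascade model: P(click at rank i) = r_i * prod_{j<i} (1 - r_j)
    # relevance: per-document click/relevance probabilities r_i, ordered by rank
    probs = []
    examined = 1.0                     # probability the user reaches this rank
    for r in relevance:
        probs.append(examined * r)
        examined *= (1.0 - r)          # the user continues only if this result was skipped
    return probs

# Example: a highly relevant document at rank 1 absorbs most of the clicks
print(cascade_click_probs([0.8, 0.5, 0.5]))    # [0.8, 0.1, 0.05]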

User Browsing Model [Dupret et al. SIGIR'08]
Examination depends on the distance to the last click – from an absolute discount to a relative discount
– Attractiveness: determined by the query and the URL
– Examination: determined by position and distance to the last click
– EM for parameter estimation
Kind of "Click > Skip Next" + "Click > Skip Above"?
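In the usual UBM notation (my rendering, not copied from the slides), the click probability factorizes into an attractiveness term and an examination term:

P(C_i = 1 \mid C_{1:i-1}) = \alpha_{u_i, q} \cdot \gamma_{r_i,\, r_i - r'}

where \alpha_{u_i, q} is the attractiveness of URL u_i for query q, r_i is the position of the result, and r' is the position of the most recent click (a special value if there is none). Both parameter sets are estimated from click logs with EM.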

More accurate prediction of clicks
– Perplexity: the randomness of the prediction (lower is better)
(Figure: click perplexity of the cascade model vs. the user browsing model.)
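Click perplexity is computed from predicted click probabilities and the observed clicks; a small sketch (averaging over a flat list of impressions here, whereas it is often reported per rank):

import math

def click_perplexity(predicted, observed):
    # Perplexity of binary click predictions: 2 ** (-average log2-likelihood).
    # predicted: predicted click probabilities p_i in (0, 1)
    # observed:  actual clicks c_i in {0, 1}
    # A perfect predictor approaches 1; predicting 0.5 everywhere gives 2.
    log_lik = 0.0
    for p, c in zip(predicted, observed):
        log_lik += math.log2(p) if c == 1 else math.log2(1.0 - p)
    return 2 ** (-log_lik / len(predicted))

print(click_perplexity([0.9, 0.1, 0.8], [1, 0, 1]))   # close to 1: good predictions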

Dynamic Bayesian Network (DBN) Model [Chapelle et al. WWW'09]
A cascade-style model in which relevance quality is split into:
– Perceived relevance (snippet attractiveness), which drives clicks
– The user's satisfaction after a click, which ends the examination chain
– Intrinsic relevance, estimated from the two
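A compact statement of the DBN assumptions (standard notation; \gamma, the perseverance parameter, is not named on the slide):

P(C_i = 1 \mid E_i = 1) = a_{u_i}, \qquad P(S_i = 1 \mid C_i = 1) = s_{u_i}
P(E_{i+1} = 1 \mid E_i = 1, S_i = 0) = \gamma, \qquad P(E_{i+1} = 1 \mid S_i = 1) = 0

Here a_u is the perceived relevance (attractiveness) of result u, s_u is the probability the user is satisfied after clicking it, and the intrinsic relevance is estimated as r_u = a_u \cdot s_u.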

Accuracy in predicting CTR

Revisit user click behaviors
(Figure: a result page annotated with the questions a user implicitly asks while scanning: "Match my query?", "Redundant doc?", "Shall I move on?")

Content-Aware Click Modeling [Wang et al. WWW'12]
Encode dependencies within user browsing behavior via descriptive features:
– Relevance quality of a document: e.g., ranking features
– Chance to further examine the result documents: e.g., position, number of clicks, distance to the last click
– Chance to click on an examined and relevant document: e.g., content similarity with clicked/skipped results

Quality of relevance modeling
(Figure: ranking quality when the estimated relevance is used for ranking.)

Understanding user behaviors
– Analyzing the factors affecting user clicks

What you should know
– Clicks as implicit relevance feedback
– Positional bias
– Heuristics for generating pairwise preferences
– Assumptions and modeling approaches of click models
