
1 Improving Web Search Ranking by Incorporating User Behavior Information
Eugene Agichtein, Eric Brill, Susan Dumais
Microsoft Research

2 Web Search Ranking
Rank pages relevant to a query:
– Content match, e.g., page terms, anchor text, term weights
– Prior document quality, e.g., web topology, spam features
– Hundreds of parameters
Tune ranking functions on explicit document relevance ratings

3 Query: SIGIR 2006
Users can help indicate the most relevant results

4 Web Search Ranking: Revisited
Incorporate user behavior information:
– Millions of users submit queries daily
– Rich user interaction features (earlier talk)
– Complementary to content and web topology
Some challenges:
– User behavior “in the wild” is not reliable
– How to integrate interactions into ranking
– What is the impact over all queries

5 Outline
Modelling user behavior for ranking
Incorporating user behavior into ranking
Empirical evaluation
Conclusions

6 Related Work
Personalization:
– Rerank results based on a user’s clickthrough and browsing history
Collaborative filtering:
– Amazon, DirectHit: rank by clickthrough
General ranking:
– Joachims et al. [KDD 2002], Radlinski et al. [KDD 2005]: tuning ranking functions with clickthrough

7 Rich User Behavior Feature Space
Observed and distributional features:
– Aggregate observed values over all user interactions for each query and result pair
– Distributional features: deviations from the “expected” behavior for the query
Represent user interactions as vectors in user behavior space:
– Presentation: what a user sees before a click
– Clickthrough: frequency and timing of clicks
– Browsing: what users do after a click

8 Some User Interaction Features
Presentation:
– ResultPosition: position of the URL in the current ranking
– QueryTitleOverlap: fraction of query terms in the result title
Clickthrough:
– DeliberationTime: seconds between the query and the first click
– ClickFrequency: fraction of all clicks landing on the page
– ClickDeviation: deviation from the expected click frequency
Browsing:
– DwellTime: result page dwell time
– DwellTimeDeviation: deviation from the expected dwell time for the query
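The deviation features above are distributional: observed behavior minus the behavior "expected" for the query or rank position. A minimal sketch of ClickDeviation, assuming a precomputed table of expected click-through rates per result position (the slides do not specify the background model, so names and values here are illustrative only):

```python
def click_deviation(result_clicks, total_query_clicks, position,
                    expected_ctr_by_position):
    """Distributional feature: observed click frequency for a
    (query, result) pair minus the click frequency expected for
    its rank position."""
    observed = (result_clicks / total_query_clicks
                if total_query_clicks else 0.0)
    expected = expected_ctr_by_position.get(position, 0.0)
    return observed - expected

# Example: a result at position 3 drawing 40% of the query's clicks,
# where position 3 "usually" gets 10%, deviates by +0.3.
print(click_deviation(40, 100, 3, {1: 0.4, 2: 0.2, 3: 0.1}))
```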

9 Training a User Behavior Model
Map user behavior features to relevance judgements
RankNet: Burges et al. [ICML 2005]
– Scalable neural net implementation
– Input: user behavior features + relevance labels
– Output: weights for behavior feature values
– Used as testbed for all experiments

10 Training RankNet
For query results 1 and 2, present a pair of feature vectors and labels, with label(1) > label(2)

11 RankNet [Burges et al. 2005]
(Animation frame: feature vector 1 with label 1 passes through the net, producing NN output 1)

12 RankNet [Burges et al. 2005]
(Animation frame: feature vector 2 with label 2 passes through the same net, producing NN output 2)

13 RankNet [Burges et al. 2005]
(Animation frame: the error is a function of both outputs; we desire output 1 > output 2)
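The training signal on these slides is pairwise: the net scores each result independently, and the error depends on the difference of the two outputs. Below is a minimal sketch of the RankNet-style pairwise cross-entropy cost from Burges et al. [ICML 2005], with a linear scorer standing in for the neural net and all variable names invented for illustration:

```python
import numpy as np

def pairwise_cost(w, x1, x2):
    """RankNet cost for a pair where result 1 is labeled more relevant:
    cross-entropy on the modeled probability P(1 > 2) = sigmoid(o1 - o2)."""
    o1, o2 = x1 @ w, x2 @ w                # the two "NN outputs"
    return np.log1p(np.exp(-(o1 - o2)))    # -log sigmoid(o1 - o2)

def pairwise_grad(w, x1, x2):
    """Gradient of the cost w.r.t. the weights; a step along its
    negative pushes output 1 above output 2."""
    o1, o2 = x1 @ w, x2 @ w
    lam = -1.0 / (1.0 + np.exp(o1 - o2))   # dC/d(o1 - o2)
    return lam * (x1 - x2)

# One SGD step on a labeled pair with label(1) > label(2):
rng = np.random.default_rng(0)
w = 0.01 * rng.normal(size=8)
x1, x2 = rng.normal(size=8), rng.normal(size=8)
w -= 0.1 * pairwise_grad(w, x1, x2)
```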

14 Predicting with RankNet
(Animation frame: a single feature vector passes through the net, producing one NN output)
Present an individual vector and get a score
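At prediction time the pairing disappears: each result's feature vector is scored on its own, and results are sorted by score. A short continuation of the sketch above, with invented data:

```python
import numpy as np

def score(w, x):
    """Ranking score for a single result's feature vector."""
    return float(x @ w)

rng = np.random.default_rng(1)
w = rng.normal(size=8)                                  # trained weights
results = [rng.normal(size=8) for _ in range(10)]       # candidate vectors
ranking = sorted(results, key=lambda x: score(w, x), reverse=True)
```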

15 Outline
Modelling user behavior
Incorporating user behavior into ranking
Empirical evaluation
Conclusions

16 User Behavior Models for Ranking
Use interactions from previous instances of a query:
– General-purpose (not personalized)
– Only available for queries with past user interactions
Models:
– Rerank, clickthrough only: reorder results by number of clicks
– Rerank, predicted preferences (all user behavior features): reorder results by predicted preferences
– Integrate directly into ranker: incorporate user interactions as features for the ranker

17 Rerank, Clickthrough Only
Promote all clicked results to the top of the result list:
– Reorder clicked results by click frequency
Retain the relative ranking of unclicked results
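A minimal sketch of this reranking rule, assuming a map from URL to observed click count for the query (the slide gives no implementation, so names are hypothetical). Python's stable sort preserves the relative order of unclicked results for free:

```python
def rerank_clickthrough(results, clicks):
    """Promote clicked results, ordered by click frequency;
    unclicked results keep their original relative order below."""
    clicked = [r for r in results if clicks.get(r, 0) > 0]
    clicked.sort(key=lambda r: clicks[r], reverse=True)
    unclicked = [r for r in results if clicks.get(r, 0) == 0]
    return clicked + unclicked

# Example: url2 was clicked most, url4 once; url1 and url3 keep order.
print(rerank_clickthrough(
    ["url1", "url2", "url3", "url4"],
    {"url2": 5, "url4": 1}))
# -> ['url2', 'url4', 'url1', 'url3']
```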

18 Rerank, Preference Predictions
Reorder results by a function of the preference prediction score
Experimented with different variants:
– Using inverses of ranks
– Intuition: raw scores are not comparable, so merge ranks instead
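One plausible reading of the inverse-rank variant (the slide gives no formula, so the combination below is an assumption, not the authors' exact method): convert each ordering to ranks, since raw scores from different models live on different scales, and sort by a weighted sum of inverse ranks:

```python
def merge_by_inverse_ranks(original_order, predicted_order, w_behavior=1.0):
    """Combine two orderings of the same result set:
    score(r) = 1/rank_original(r) + w_behavior/rank_predicted(r)."""
    rank_orig = {r: i + 1 for i, r in enumerate(original_order)}
    rank_pred = {r: i + 1 for i, r in enumerate(predicted_order)}
    return sorted(original_order,
                  key=lambda r: 1.0 / rank_orig[r]
                              + w_behavior / rank_pred[r],
                  reverse=True)
```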

19 Integrate User Behavior Features Directly into Ranker
For a given query:
– Merge the original feature set with user behavior features when available
– User behavior features are computed from previous interactions with the same query
Train RankNet on the enhanced feature set
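A sketch of the feature-set merge, assuming content features are always present and behavior features are filled with a missing-value marker when a query has no interaction history (the marker and feature names are assumptions for illustration):

```python
BEHAVIOR_FEATURES = ["ClickFrequency", "ClickDeviation",
                     "DwellTime", "DwellTimeDeviation"]
MISSING = -1.0  # assumed marker for "no prior interactions"

def enhanced_features(content_feats, behavior_feats=None):
    """Merge the original ranker features with user behavior
    features for the same (query, result) pair when available."""
    merged = dict(content_feats)
    behavior_feats = behavior_feats or {}
    for name in BEHAVIOR_FEATURES:
        merged[name] = behavior_feats.get(name, MISSING)
    return merged
```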

20 Outline
Modelling user behavior
Incorporating user behavior into ranking
Empirical evaluation
Conclusions

21 Evaluation Metrics
Precision at K: fraction of relevant results in the top K
NDCG at K: normalized discounted cumulative gain
– Top-ranked results are most important
MAP: mean average precision
– Average precision for each query: the mean of the precision-at-K values computed after each relevant document is retrieved
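For concreteness, a sketch of the three metrics over a single query's ranked list, using binary relevance gains for brevity (NDCG is typically computed with graded judgements; the formulas below follow the standard definitions rather than anything specific to this talk):

```python
import math

def precision_at_k(rels, k):
    """Fraction of the top k results that are relevant (rels: 0/1 list)."""
    return sum(rels[:k]) / k

def ndcg_at_k(rels, k):
    """DCG with a log2 position discount, normalized by the ideal DCG."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = sorted(rels, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg else 0.0

def average_precision(rels):
    """Mean of precision@K taken at each relevant result's rank."""
    hits, total = 0, 0.0
    for i, r in enumerate(rels, start=1):
        if r:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

# MAP is the mean of average_precision over all test queries.
```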

22 Datasets
8 weeks of user behavior data from anonymized opt-in client instrumentation
Millions of unique queries and interaction traces
Random sample of 3,000 queries:
– Gathered independently of user behavior
– 1,500 train, 500 validation, 1,000 test
Explicit relevance assessments for the top 10 results for each query in the sample

23 Methods Compared
Content only: BM25F
Full search engine: RN
– Hundreds of parameters for content match and document quality
– Tuned with RankNet
Incorporating user behavior:
– Clickthrough only: Rerank-CT
– Full user behavior model predictions: Rerank-All
– Integrate all user behavior features directly: +All

24 Content, User Behavior: Precision at K, Queries with Interactions
(Chart: BM25 < Rerank-CT < Rerank-All < +All)

25 Content, User Behavior: NDCG
(Chart: BM25 < Rerank-CT < Rerank-All < +All)

26 Full Search Engine, User Behavior: NDCG, MAP

Method     MAP     Gain
RN         0.270
RN+All     0.321   0.052 (19.13%)
BM25       0.236
BM25+All   0.292   0.056 (23.71%)

27 Impact: All Queries, Precision at K
Fewer than 50% of test queries have prior interactions
Gains of +0.06 to +0.12 in precision over all test queries

28 Impact: All Queries, NDCG
Gains of +0.03 to +0.05 in NDCG over all test queries

29 Which Queries Benefit Most
Most gains are for queries with poor ranking

30 Conclusions
Incorporating user behavior into web search ranking dramatically improves relevance
Providing rich user interaction features to the ranker is the most effective strategy
Large improvements shown for up to 50% of test queries

31 Thank you
Text Mining, Search, and Navigation group: http://research.microsoft.com/tmsn/
Adaptive Systems and Interaction group: http://research.microsoft.com/adapt/
Microsoft Research

32 Content, User Behavior: All Queries, Precision at K
(Chart: BM25 < Rerank-CT < Rerank-All < +All)

33 Content, User Behavior: All Queries, NDCG
(Chart: BM25 << Rerank-CT << Rerank-All < +All)

34 Results Summary
Incorporating user behavior into web search ranking dramatically improves relevance
Incorporating user behavior features directly into the ranker is the most effective strategy
The impact on relevance is substantial
Poorly performing queries benefit most

35 Promising Extensions
Backoff (improve query coverage)
Model user intent / information need
Personalization of various degrees
Query segmentation

