
1 Towards Context-Aware Search by Learning A Very Large Variable Length Hidden Markov Model from Search Logs Huanhuan Cao 1, Daxin Jiang 2, Jian Pei 3, Enhong Chen 1 and Hang Li 2 1 University of Science and Technology of China, 2 Microsoft Research Asia, 3 Simon Fraser University

2 Context of User Queries
A user usually raises multiple queries and conducts multiple rounds of interaction for an information need.
[Diagram: the earlier queries and their clicked URLs form the context of the current query; each query together with its clicks is one round of interaction.]

3 An Example
Suppose Ada plans to buy a new car and needs some car reviews, but she does not know how to formulate an effective query. Consequently, she raises a series of queries about different cars. Not surprisingly, for each query, the review web sites are ranked low and easy to miss.

4 Why Is Context Useful?
Suppose we have the following search log (each session lists its queries, followed by the clicked URL):
S1: Ford => Toyota => GMC => Allstate; click: www.autohome.com
S2: Ford cars => Toyota cars => GMC cars => Allstate; click: www.autohome.com
S3: Ford cars => Toyota cars => Allstate; click: www.allstate.com
S4: GMC => GMC dealers; click: www.gmc.com

5 Patterns in the Search Log
Pattern 1:
– 50% of users clicked the car review web site www.autohome.com after issuing a series of queries about cars.
Ada will have a better experience if the search engine knows Pattern 1.

6 Pattern 2:
– 75% of users searched for car insurance after a series of queries about different cars.
The search engine can provide more appropriate query suggestions and URL recommendations if it knows Pattern 2.
Idea: learn from search logs to provide context-aware ranking, query suggestion, and URL recommendation.

7 Related Work
Mining the wisdom of the crowds from search logs:
– Improving ranking: using click-through data as implicit feedback
– Query suggestion: mining click-through data; mining session data; a mixture of both (CACB)
– URL recommendation: mining search trails
Only CACB considers context, but:
1. CACB constrains each query to a single search intent;
2. CACB does not use click information as context;
3. CACB can only be used for query suggestion.

8 Modeling Context with a vlHMM (variable-length Hidden Markov Model)

9 Overview of Technical Details
– Definition of the vlHMM
– Parameter estimation
– Challenges and strategies
– Applications

10 Formal Definition
Given:
– a set of hidden states {s_1, …, s_Ns};
– a set of queries {q_1, …, q_Nq};
– a set of URLs {u_1, …, u_Nu};
– the maximal length T_max of state sequences.
A vlHMM is a probabilistic model defined by:
– the transition probability distribution Δ = {P(s_i | S_j)}, where S_j is a state sequence;
– the initial state distribution Ψ = {P(s_i)};
– the emission probability distribution for each state sequence, Λ = {P(q, U | S_j)}.
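
To make the parameter sets concrete, here is a minimal sketch (class and attribute names are hypothetical, not from the paper) of how a vlHMM's parameters could be stored. State sequences S_j are represented as tuples of state ids, and the joint emission P(q, U | S_j) is split into per-query and per-URL tables purely as a simplifying assumption.

```python
from collections import defaultdict

# Hypothetical container for vlHMM parameters. Sparse dicts are used because
# only a tiny fraction of (state sequence, query/URL) combinations ever
# appear in a search log.
class VLHMMParams:
    def __init__(self):
        self.initial = {}                    # Psi: state -> P(s_i)
        self.transition = defaultdict(dict)  # Delta: state-sequence tuple S_j -> {next state: P}
        self.emit_query = defaultdict(dict)  # S_j -> {query: P}   (factorized emission, an assumption)
        self.emit_url = defaultdict(dict)    # S_j -> {url: P}     (factorized emission, an assumption)

    def p_next_state(self, state_seq, state):
        """P(s | S_j); unseen combinations get probability 0."""
        return self.transition[tuple(state_seq)].get(state, 0.0)
```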

11 Parameter Estimation
Let X = {O_1, …, O_N} be the set of training sessions, where:
– O_n is a sequence of pairs (q_{n,1}, U_{n,1}) … (q_{n,T_n}, U_{n,T_n});
– q_{n,t} and U_{n,t} are the t-th query and the set of clicked URLs, respectively;
– u_{n,t,k} denotes the k-th URL in U_{n,t}.
The task is to find the maximum-likelihood parameters Θ* = argmax_Θ P(X | Θ).
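
As a concrete, made-up illustration, one training session O_n can be represented as a time-ordered list of (query, clicked-URL set) pairs:

```python
# Hypothetical training session O_n: a time-ordered list of
# (q_{n,t}, U_{n,t}) pairs, where U_{n,t} is the set of clicked URLs.
session = [
    ("ford cars",   {"www.autohome.com"}),
    ("toyota cars", {"www.autohome.com"}),
    ("allstate",    {"www.allstate.com"}),
]
```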

12 EM
The original problem has a complex form which may not have a closed-form solution, so we use an iterative method: EM (Expectation-Maximization). The objective in each iteration is the standard EM auxiliary function, the expected complete-data log-likelihood: Q(Θ; Θ_old) = Σ_n Σ_S P(S | O_n; Θ_old) · log P(O_n, S | Θ), where S ranges over hidden state sequences.

13
– E-step: for each training session O_n, compute the posterior probability of each hidden state sequence given O_n under the current parameters.
– M-step: re-estimate Ψ, Δ, and Λ by normalizing the resulting expected counts.
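
The update equations themselves are not preserved in this transcript. As an illustration only, here is a heavily simplified, brute-force sketch of one EM iteration under the assumptions noted in the docstring (factorized emissions, enumerable state sequences); it is not the authors' distributed algorithm.

```python
from collections import defaultdict
from itertools import product

def em_iteration(sessions, states, params):
    """One (heavily simplified) EM iteration for a vlHMM-style model.

    sessions: list of sessions, each a list of (query, clicked_url_set) pairs.
    states:   list of candidate state ids (search intents).
    params:   dict with 'init', 'trans', 'emit_q', 'emit_u' probability tables.
    Assumes emissions factorize as P(q | s_t) * prod_u P(u | s_t), a
    simplification of the joint P(q, U | S_j), and enumerates all state
    sequences per session, which is only feasible for tiny state sets.
    """
    init_c, trans_c = defaultdict(float), defaultdict(float)
    emit_q_c, emit_u_c = defaultdict(float), defaultdict(float)

    for sess in sessions:
        if not sess:
            continue
        T = len(sess)
        # E-step: posterior weight of every full state sequence for this session.
        weights = {}
        for seq in product(states, repeat=T):
            p = params['init'].get(seq[0], 1e-12)
            for t, (q, urls) in enumerate(sess):
                if t > 0:
                    p *= params['trans'].get((seq[:t], seq[t]), 1e-12)
                p *= params['emit_q'].get((seq[t], q), 1e-12)
                for u in urls:
                    p *= params['emit_u'].get((seq[t], u), 1e-12)
            weights[seq] = p
        z = sum(weights.values()) or 1.0
        # Accumulate expected counts weighted by the normalized posterior.
        for seq, w in weights.items():
            w /= z
            init_c[seq[0]] += w
            for t, (q, urls) in enumerate(sess):
                if t > 0:
                    trans_c[(seq[:t], seq[t])] += w
                emit_q_c[(seq[t], q)] += w
                for u in urls:
                    emit_u_c[(seq[t], u)] += w

    # M-step: normalize expected counts within each conditioning context.
    def normalize(counts, group):
        totals = defaultdict(float)
        for k, c in counts.items():
            totals[group(k)] += c
        return {k: c / totals[group(k)] for k, c in counts.items()}

    total_init = sum(init_c.values()) or 1.0
    return {
        'init':   {s: c / total_init for s, c in init_c.items()},
        'trans':  normalize(trans_c, lambda k: k[0]),
        'emit_q': normalize(emit_q_c, lambda k: k[0]),
        'emit_u': normalize(emit_u_c, lambda k: k[0]),
    }
```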

14 Challenges for Training a Large vlHMM
Challenge 1:
– The EM algorithm needs a user-specified number of hidden states.
– However, in our problem the hidden states correspond to users' search intents, whose number is unknown.
Strategy:
– We apply the mining techniques developed in our previous work as a preprocessing step before parameter learning.

15 Challenge 2:
– Search logs may contain hundreds of millions of training sessions.
– It is impractical to learn a vlHMM from such a huge training data set on a single machine.
Strategy:
– We deploy the learning task on a distributed system under the map-reduce programming model.
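
As a rough illustration of the map-reduce structure (simulated here on one machine, with a hypothetical expected_counts helper standing in for the per-session E-step), one learning round could look like this:

```python
from collections import defaultdict

def run_distributed_round(sessions, params, expected_counts, n_workers=4):
    """Single-machine simulation of one map-reduce EM round (a sketch).

    expected_counts(session, params) -> {param_key: count} is an assumed
    helper, e.g. the per-session E-step from the earlier sketch.
    """
    # Map: each "worker" independently processes its shard of sessions
    # using the current, broadcast parameters.
    partials = []
    for w in range(n_workers):
        counts = defaultdict(float)
        for sess in sessions[w::n_workers]:
            for key, c in expected_counts(sess, params).items():
                counts[key] += c
        partials.append(counts)

    # Shuffle + Reduce: sum partial expected counts per parameter key.
    totals = defaultdict(float)
    for counts in partials:
        for key, c in counts.items():
            totals[key] += c

    # The driver would then normalize these totals into new probabilities
    # (the M-step) and broadcast them for the next iteration.
    return totals
```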

16 Challenge 3:
– Each machine needs to hold the values of all parameters.
– Since the log data usually contains millions of unique queries and URLs, the parameter space is extremely large.
Strategy:
– We develop a special initialization strategy based on the clusters mined from the click-through bipartite.
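
The sketch below only illustrates the sparsity idea such an initialization can exploit (the function and its inputs are hypothetical): if each mined cluster seeds one state and emission probabilities start non-zero only within that cluster, no machine ever has to hold a dense table over all queries and URLs.

```python
from collections import defaultdict

def sparse_initialization(clusters):
    """Sketch of a cluster-based, sparsity-preserving initialization.

    clusters: list of (set_of_queries, set_of_urls) pairs mined from the
    click-through bipartite; each cluster becomes one initial state.
    """
    emit_q, emit_u = defaultdict(dict), defaultdict(dict)
    for state, (queries, urls) in enumerate(clusters):
        for q in queries:
            emit_q[state][q] = 1.0 / len(queries)  # uniform within the cluster
        for u in urls:
            emit_u[state][u] = 1.0 / len(urls)
    return emit_q, emit_u
```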

17 Applications
Given an observation O consisting of q_1 … q_t and U_1 … U_t:
Document re-ranking:
– Rank by P(u | O) = P(u | s_t) P(s_t | O).
Query suggestion and URL recommendation:
– Suggest the top-k queries by P(q | O) = P(q | s_{t+1}) P(s_{t+1} | O).
– Recommend the top-k URLs by P(u | O) = P(u | s_{t+1}) P(s_{t+1} | O).
The advantages of our model: unification (one model serves all three tasks) and predictive power.
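
A minimal sketch of the scoring step (names are illustrative, not the authors' implementation): it marginalizes the score over the state posterior, assuming P(s | O) has already been computed, and works for either the current state (re-ranking) or the predicted next state (recommendation).

```python
def score_urls(state_posterior, emit_url, candidate_urls):
    """Score candidate URLs by P(u | O) = sum_s P(u | s) * P(s | O).

    state_posterior: {state: P(s | O)} over the relevant state.
    emit_url:        {state: {url: P(u | s)}}.
    """
    scores = {}
    for u in candidate_urls:
        scores[u] = sum(p * emit_url.get(s, {}).get(u, 0.0)
                        for s, p in state_posterior.items())
    # Highest-scoring URLs first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```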

18 Experiments
A large-scale search log from Live Search:
– Web searches in English from the US market.
Training data:
– 1,812,563,301 search queries
– 2,554,683,191 clicks
– 840,356,624 sessions
– 151,869,102 unique queries
– 114,882,486 unique URLs
Test data:
– 100,000 sessions extracted from another search log

19 Coverage
For each test session, the vlHMM handles each query q_i; when i > 1, the preceding queries and clicks are used as the context. The total coverage is 58.3%. Denote the set of test cases without context as Test0 and the rest as Test1. Among the covered cases in Test1, 25.5% of the contexts are recognized.

20 Re-ranking
Baseline 1:
– Boost the URLs with high click counts for the query.
Evaluation:
– Sample 500 re-ranked URL pairs from Test0 and from the cases whose contexts are recognized in Test1, respectively.
– Each re-ranked URL pair is judged as Improved, Degraded, or Unsure by three experts.

21 The effectiveness of re-ranking by the vlHMM and Baseline 1 on (a) Test0 and (b) Test1.

22 Examples of Re-ranking
[Screenshots] After a search for games, the URL about games is promoted; after a visit to the homepage of Ask Jeeves, the URL introducing the history of Ask Jeeves is promoted.

23 URL Recommendation
Baseline 2:
– Recommend the URLs with high click counts following the current query.
Evaluation:
– "Leave-one-out" method: given a test session, we use q_{T-1} as the test query (with the earlier queries and clicks as context) and consider U_T as the ground truth.
– Given the set of recommended URLs R, the precision is |R ∩ U_T| / |R| and the recall is |R ∩ U_T| / |U_T|.
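
These two measures are straightforward to compute; a small helper (illustrative only) makes the definitions concrete:

```python
def precision_recall(recommended, ground_truth):
    """Precision |R ∩ U_T| / |R| and recall |R ∩ U_T| / |U_T|
    for the leave-one-out evaluation; inputs are collections of URLs."""
    hits = len(set(recommended) & set(ground_truth))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Made-up example: two recommendations, one of which was actually clicked.
# precision_recall(["www.gmc.com", "www.autohome.com"], ["www.gmc.com"])
# -> (0.5, 1.0)
```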

24 The precision and recall of the URLs recommended by the vlHMM and Baseline 2.

25 An Example of URL Recommendation
[Screenshot] For a search for an online store about electronics, the recommended URLs include an online store about general equipment and an online store about electronics.

26 Query Suggestion
Baseline:
– CACB, a context-aware, concept-based approach to query suggestion.
Evaluation:
– The suggestion results of the two approaches are comparable, since both consider context.
– However, the vlHMM increases the ratio of recognized contexts by 55%.

27 Summary We propose a general approach to context-aware search by learning a vlHMM from log data. We tackle the challenges of learning a large vlHMM with millions of states from hundreds of millions of search sessions. The experimental results on a large real data set clearly show that our context-aware approach is both effective and efficient.

28 Our recent work on context-aware search:
– Huanhuan Cao, Derek Hao Hu, Dou Shen, Daxin Jiang, Jian-tao Sun, Enhong Chen and Qiang Yang. Context-aware query classification. To appear in SIGIR'09.
– Huanhuan Cao, Daxin Jiang, Jian Pei, Enhong Chen and Hang Li. Towards context-aware search by learning a large variable length Hidden Markov Model from search logs. To appear in WWW'09.
– Huanhuan Cao, Daxin Jiang, Jian Pei, Qi He, Zhen Liao, Enhong Chen and Hang Li. Context-aware query suggestion by mining click-through and session data. KDD'08, pages 875-883, 2008. (Best Application Paper Award of KDD'08.)

29 Thanks

